The Algorithm
The FSRS (Free Spaced Repetition Scheduler) algorithm is based on a variant of the DSR (Difficulty, Stability, Retrievability) model, which is used to predict memory states.
Here is a visualizer for previewing the intervals produced by specific parameters and review histories: Anki FSRS Visualizer (open-spaced-repetition.github.io)
If you find this article difficult to understand, you may prefer this one: https://expertium.github.io/Algorithm.html.
- $R$: Retrievability (probability of recall)
- $S$: Stability (the interval when $R = 90\%$)
- $S^\prime_r$: new stability after recall
- $S^\prime_f$: new stability after forgetting
- $D$: Difficulty ($D \in [1, 10]$)
- $G$: Grade (rating in Anki):
  - $1$: Again
  - $2$: Hard
  - $3$: Good
  - $4$: Easy
The default parameters of FSRS-6:

[0.212, 1.2931, 2.3065, 8.2956, 6.4133, 0.8334, 3.0194, 0.001, 1.8722, 0.1666, 0.796, 1.4835, 0.0614, 0.2629, 1.6483, 0.6014, 1.8729, 0.5425, 0.0912, 0.0658, 0.1542]

The $w_i$ denotes w[i]. This version uses 21 parameters. The memory state is represented by Stability (S) and Difficulty (D).
The formula of stability after a same-day review is changed in this update:

$$S^\prime(S,G) = S \cdot e^{w_{17} \cdot (G - 3 + w_{18})} \cdot S^{-w_{19}}$$

So $S$ increases faster when it is small and more slowly when it is large. $S$ will converge when $S = e^{\frac{w_{17} \cdot (G - 3 + w_{18})}{w_{19}}}$.

In practice, we should ensure that a successful same-day review (Good or Easy) never decreases $S$, i.e., the multiplier is clamped to be at least 1.
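As a rough illustration, here is a minimal Python sketch of this same-day (short-term) stability update using the default FSRS-6 parameters listed above. The function name is mine, and the clamp that keeps the multiplier at or above 1 for Good and Easy is an implementation safeguard assumed here, not part of the formula itself.

```python
import math

# Default FSRS-6 parameters (w[0] ... w[20]) from the list above.
W = [0.212, 1.2931, 2.3065, 8.2956, 6.4133, 0.8334, 3.0194, 0.001, 1.8722,
     0.1666, 0.796, 1.4835, 0.0614, 0.2629, 1.6483, 0.6014, 1.8729, 0.5425,
     0.0912, 0.0658, 0.1542]

def same_day_stability(s: float, g: int, w=W) -> float:
    """S'(S, G) = S * exp(w17 * (G - 3 + w18)) * S^(-w19)."""
    sinc = math.exp(w[17] * (g - 3 + w[18])) * s ** (-w[19])
    if g >= 3:          # assumption: a successful same-day review never lowers S
        sinc = max(sinc, 1.0)
    return s * sinc

# The multiplier shrinks as S grows: repeated Good reviews converge towards
# S = exp(w17 * (G - 3 + w18) / w19).
for s in (0.5, 1.0, 2.0, 5.0):
    print(s, "->", round(same_day_stability(s, g=3), 3))
```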
The forgetting curve's decay is trainable in this version:

$$R(t,S) = \left(1 + \text{FACTOR} \cdot \frac{t}{S}\right)^{\text{DECAY}}$$

where $\text{DECAY} = -w_{20}$ and $\text{FACTOR} = 0.9^{\frac{1}{\text{DECAY}}} - 1$, so that $R(S, S) = 0.9$.
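A minimal sketch of the trainable-decay forgetting curve and the interval obtained by inverting it, assuming the default $w_{20}$ from the list above; the function names are illustrative.

```python
# Forgetting curve with trainable decay: DECAY = -w20, FACTOR = 0.9**(1/DECAY) - 1.
W20 = 0.1542                      # default value of w[20] from the parameter list above
DECAY = -W20
FACTOR = 0.9 ** (1 / DECAY) - 1

def retrievability(t: float, s: float) -> float:
    """R(t, S) = (1 + FACTOR * t / S) ** DECAY; equals 0.9 when t == S."""
    return (1 + FACTOR * t / s) ** DECAY

def next_interval(request_retention: float, s: float) -> float:
    """Solve R(t, S) = r for t: I(r, S) = S / FACTOR * (r ** (1/DECAY) - 1)."""
    return s / FACTOR * (request_retention ** (1 / DECAY) - 1)

print(retrievability(10, 10))      # ~0.9 by construction
print(next_interval(0.9, 10))      # ~10 days
```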
The default parameters of FSRS-5:

[0.40255, 1.18385, 3.173, 15.69105, 7.1949, 0.5345, 1.4604, 0.0046, 1.54575, 0.1192, 1.01925, 1.9395, 0.11, 0.29605, 2.2698, 0.2315, 2.9898, 0.51655, 0.6621]

The $w_i$ denotes w[i]. This version uses 19 parameters. The memory state is represented by Stability (S) and Difficulty (D).
The stability after a same-day review:

$$S^\prime(S,G) = S \cdot e^{w_{17} \cdot (G - 3 + w_{18})}$$
The initial difficulty after the first rating:

$$D_0(G) = w_4 - e^{w_5 \cdot (G - 1)} + 1$$

where $D_0(1) = w_4$ when the first rating is Again.
Linear Damping for the new difficulty after review:

$$\Delta D(G) = -w_6 \cdot (G - 3)$$

$$D^\prime = D + \Delta D \cdot \frac{10 - D}{9}$$

In FSRS-5, the mean reversion target is changed from $D_0(3)$ to $D_0(4)$:

$$D^{\prime\prime} = w_7 \cdot D_0(4) + (1 - w_7) \cdot D^\prime$$
The other formulas are the same as FSRS-4.5.
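The sketch below illustrates the FSRS-5 difficulty update (initial difficulty, linear damping, and mean reversion towards $D_0(4)$) with the default parameters listed above. The function names and the final clamp to $[1, 10]$ are assumptions for illustration.

```python
import math

# Default FSRS-5 parameters (w[0] ... w[18]) from the list above.
W = [0.40255, 1.18385, 3.173, 15.69105, 7.1949, 0.5345, 1.4604, 0.0046,
     1.54575, 0.1192, 1.01925, 1.9395, 0.11, 0.29605, 2.2698, 0.2315,
     2.9898, 0.51655, 0.6621]

def init_difficulty(g: int, w=W) -> float:
    """D0(G) = w4 - e^(w5 * (G - 1)) + 1, so D0(1) = w4."""
    return w[4] - math.exp(w[5] * (g - 1)) + 1

def next_difficulty(d: float, g: int, w=W) -> float:
    """Linear damping plus mean reversion towards D0(4)."""
    delta_d = -w[6] * (g - 3)
    d_prime = d + delta_d * (10 - d) / 9          # linear damping: smaller step near D = 10
    d_reverted = w[7] * init_difficulty(4, w) + (1 - w[7]) * d_prime
    return min(max(d_reverted, 1.0), 10.0)        # clamp to [1, 10]

print(round(init_difficulty(1), 3))               # D0(Again) = w4
print(round(next_difficulty(7.0, 1), 3))          # difficulty rises after Again
```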
The default parameters of FSRS-4.5:

[0.4872, 1.4003, 3.7145, 13.8206, 5.1618, 1.2298, 0.8975, 0.031, 1.6474, 0.1367, 1.0461, 2.1072, 0.0793, 0.3246, 1.587, 0.2272, 2.8755]

The $w_i$ denotes w[i]. This version uses 17 parameters. The memory state is represented by Stability (S) and Difficulty (D).
The formula of the forgetting curve is changed in this update.

The retrievability after $t$ days since the last review:

$$R(t,S) = \left(1 + \text{FACTOR} \cdot \frac{t}{S}\right)^{\text{DECAY}}$$

where $\text{DECAY} = -0.5$ and $\text{FACTOR} = \frac{19}{81} = 0.9^{\frac{1}{\text{DECAY}}} - 1$.

The next interval can be calculated by solving the above equation for $t$ after substituting the request retention $r$ for $R$:

$$I(r,S) = \frac{S}{\text{FACTOR}} \cdot \left(r^{\frac{1}{\text{DECAY}}} - 1\right)$$

where $I(r,S)$ is the next interval and $r$ is the request retention.
In FSRS v4, $R(t,S) = \left(1 + \frac{t}{9 \cdot S}\right)^{-1}$. In FSRS-4.5, $R(t,S) = \left(1 + \frac{19}{81} \cdot \frac{t}{S}\right)^{-0.5}$.

The new forgetting curve drops more sharply before $t = S$ and more slowly after $t = S$ than the old one, as the comparison below illustrates.
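A small comparison of the two curves; both pass through $R = 0.9$ at $t = S$, and the printed values show the faster early drop and slower late drop of the FSRS-4.5 curve. Function names are illustrative.

```python
def r_v4(t: float, s: float) -> float:
    """FSRS v4 forgetting curve: R = (1 + t / (9 S)) ** -1."""
    return (1 + t / (9 * s)) ** -1

def r_v45(t: float, s: float) -> float:
    """FSRS-4.5 forgetting curve: R = (1 + 19/81 * t / S) ** -0.5."""
    return (1 + 19 / 81 * t / s) ** -0.5

# Both curves equal 0.9 at t = S; the 4.5 curve is lower before t = S and
# higher after it, i.e. it drops faster early and slower late.
S = 10
for t in (1, 5, 10, 20, 40):
    print(t, round(r_v4(t, S), 4), round(r_v45(t, S), 4))
```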
The default parameters of FSRS v4:

[0.4, 0.6, 2.4, 5.8, 4.93, 0.94, 0.86, 0.01, 1.49, 0.14, 0.94, 2.18, 0.05, 0.34, 1.26, 0.29, 2.61]

The $w_i$ denotes w[i]. This version uses 17 parameters. The memory state is represented by Stability (S) and Difficulty (D).
The initial stability after the first rating:

$$S_0(G) = w_{G-1}$$

For example, when the first rating is Again, the initial stability is $S_0(1) = w_0 = 0.4$ days. When the first rating is Easy, the initial stability is $S_0(4) = w_3 = 5.8$ days.
The initial difficulty after the first rating:

$$D_0(G) = w_4 - (G - 3) \cdot w_5$$

where $D_0(3) = w_4$ when the first rating is Good.
The new difficulty after review:

$$D^\prime(D,G) = w_7 \cdot D_0(3) + (1 - w_7) \cdot (D - w_6 \cdot (G - 3))$$

It first calculates a new difficulty with $D - w_6 \cdot (G - 3)$ and then applies mean reversion towards $D_0(3)$ to avoid "ease hell". The result is clamped to $[1, 10]$.
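Here is a minimal sketch of the FSRS v4 initial state and difficulty update with the default parameters listed above; the function names and the clamp to $[1, 10]$ are illustrative assumptions.

```python
# Default FSRS v4 parameters (w[0] ... w[16]) from the list above.
W = [0.4, 0.6, 2.4, 5.8, 4.93, 0.94, 0.86, 0.01, 1.49, 0.14, 0.94, 2.18,
     0.05, 0.34, 1.26, 0.29, 2.61]

def init_stability(g: int, w=W) -> float:
    """S0(G) = w[G - 1]: 0.4 / 0.6 / 2.4 / 5.8 days for Again/Hard/Good/Easy."""
    return w[g - 1]

def init_difficulty(g: int, w=W) -> float:
    """D0(G) = w4 - (G - 3) * w5, so D0(3) = w4."""
    return min(max(w[4] - (g - 3) * w[5], 1.0), 10.0)

def next_difficulty(d: float, g: int, w=W) -> float:
    """D'(D, G) = w7 * D0(3) + (1 - w7) * (D - w6 * (G - 3)), clamped to [1, 10]."""
    d_new = w[7] * init_difficulty(3, w) + (1 - w[7]) * (d - w[6] * (g - 3))
    return min(max(d_new, 1.0), 10.0)

print(init_stability(4))                    # 5.8
print(round(next_difficulty(5.0, 1), 3))    # Again pushes difficulty up
```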
The retrievability after $t$ days since the last review:

$$R(t,S) = \left(1 + \frac{t}{9 \cdot S}\right)^{-1}$$

where $R(t,S) = 0.9$ when $t = S$.

The next interval can be calculated by solving the above equation for $t$ after substituting the request retention $r$ for $R$:

$$I(r,S) = 9 \cdot S \cdot \left(\frac{1}{r} - 1\right)$$

where $I(r,S)$ is the next interval and $r$ is the request retention.
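A one-function sketch of the FSRS v4 interval calculation; the function name is mine.

```python
def next_interval_v4(request_retention: float, s: float) -> float:
    """Invert R(t, S) = (1 + t / (9 S)) ** -1 for t: I(r, S) = 9 * S * (1/r - 1)."""
    return 9 * s * (1 / request_retention - 1)

print(next_interval_v4(0.9, 10))     # 10.0: with r = 0.9 the interval equals S
print(next_interval_v4(0.85, 10))    # ~15.9: lower requested retention, longer interval
```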
The new stability after a successful review (the user pressed "Hard", "Good" or "Easy"):

$$S^\prime_r(D,S,R,G) = S \cdot \left(e^{w_8} \cdot (11 - D) \cdot S^{-w_9} \cdot \left(e^{w_{10} \cdot (1 - R)} - 1\right) \cdot w_{15}(\textrm{if } G = 2) \cdot w_{16}(\textrm{if } G = 4) + 1\right)$$

Let $SInc$ (stability increase) denote $\frac{S^\prime_r}{S}$. It has the following properties (a code sketch follows the list):

- The larger the value of $D$, the smaller the $SInc$ value. This means that the increase in memory stability for difficult material is smaller than for easy material.
- The larger the value of $S$, the smaller the $SInc$ value. This means that the higher the stability of the memory, the harder it becomes to make the memory even more stable.
- The smaller the value of $R$, the larger the $SInc$ value. This means that the spacing effect accumulates over time.
- The value of $SInc$ is always greater than or equal to 1 if the review was successful.
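The sketch below evaluates $S^\prime_r$ with the default FSRS v4 parameters, so the properties of $SInc$ listed above can be checked numerically; function and variable names are illustrative.

```python
import math

# Default FSRS v4 parameters (w[0] ... w[16]) from the list above.
W = [0.4, 0.6, 2.4, 5.8, 4.93, 0.94, 0.86, 0.01, 1.49, 0.14, 0.94, 2.18,
     0.05, 0.34, 1.26, 0.29, 2.61]

def next_recall_stability(d: float, s: float, r: float, g: int, w=W) -> float:
    """S'_r = S * SInc, with the Hard penalty w15 and the Easy bonus w16."""
    hard_penalty = w[15] if g == 2 else 1.0
    easy_bonus = w[16] if g == 4 else 1.0
    sinc = (math.exp(w[8]) * (11 - d) * s ** (-w[9])
            * (math.exp(w[10] * (1 - r)) - 1) * hard_penalty * easy_bonus + 1)
    return s * sinc

# SInc shrinks with higher D and higher S, and grows as R drops.
print(round(next_recall_stability(d=3, s=10, r=0.9, g=3), 2))
print(round(next_recall_stability(d=8, s=10, r=0.9, g=3), 2))   # harder card, smaller increase
print(round(next_recall_stability(d=3, s=10, r=0.7, g=3), 2))   # lower R, larger increase
```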
In FSRS, a delay in reviewing (i.e., overdue reviews) affects the next interval as follows:
As the delay increases, retrievability (R) decreases. If the review was successful, the subsequent stability (S) would be higher, according to point 3 above. However, instead of increasing linearly with the delay like the SM-2/Anki algorithm, the subsequent stability converges to an upper limit, which depends on your FSRS parameters.
You can modify them in this playground: https://www.geogebra.org/calculator/ahqmqjvx.
The stability after forgetting (i.e., post-lapse stability):

$$S^\prime_f(D,S,R) = w_{11} \cdot D^{-w_{12}} \cdot \left((S + 1)^{w_{13}} - 1\right) \cdot e^{w_{14} \cdot (1 - R)}$$

For example, the higher the difficulty $D$, the lower the post-lapse stability.
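A matching sketch for the post-lapse stability $S^\prime_f$ with the same default parameters; names are illustrative.

```python
import math

# Default FSRS v4 parameters (w[0] ... w[16]) from the list above.
W = [0.4, 0.6, 2.4, 5.8, 4.93, 0.94, 0.86, 0.01, 1.49, 0.14, 0.94, 2.18,
     0.05, 0.34, 1.26, 0.29, 2.61]

def next_forget_stability(d: float, s: float, r: float, w=W) -> float:
    """S'_f = w11 * D^(-w12) * ((S + 1)^w13 - 1) * e^(w14 * (1 - R))."""
    return (w[11] * d ** (-w[12]) * ((s + 1) ** w[13] - 1)
            * math.exp(w[14] * (1 - r)))

# Post-lapse stability is much smaller than the pre-lapse stability and
# decreases as difficulty increases.
print(round(next_forget_stability(d=3, s=20, r=0.9), 2))
print(round(next_forget_stability(d=8, s=20, r=0.9), 2))
```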
The default parameters of FSRS v3:

[0.9605, 1.7234, 4.8527, -1.1917, -1.2956, 0.0573, 1.7352, -0.1673, 1.065, 1.8907, -0.3832, 0.5867, 1.0721]

The $w_i$ denotes w[i]. This version uses 13 parameters. The memory state is represented by Stability (S) and Difficulty (D).
The initial stability after the first rating:

$$S_0(G) = w_0 + (G - 1) \cdot w_1$$

where $S_0(1) = w_0$ when the first rating is Again. When the first rating is Easy, the initial stability is $S_0(4) = w_0 + 3 \cdot w_1$.
The initial difficulty after the first rating:

$$D_0(G) = w_2 + (G - 3) \cdot w_3$$

where $D_0(3) = w_2$ when the first rating is Good.
The new difficulty after review:

$$D^\prime(D,G) = w_5 \cdot D_0(3) + (1 - w_5) \cdot (D + w_4 \cdot (G - 3))$$

It first calculates a new difficulty with $D + w_4 \cdot (G - 3)$ and then applies mean reversion towards $D_0(3)$ to avoid "ease hell".
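A brief sketch of the FSRS v3 initial state and difficulty update as reconstructed above; the clamps and function names are assumptions for illustration.

```python
# Default FSRS v3 parameters (w[0] ... w[12]) from the list above.
W = [0.9605, 1.7234, 4.8527, -1.1917, -1.2956, 0.0573, 1.7352, -0.1673,
     1.065, 1.8907, -0.3832, 0.5867, 1.0721]

def init_stability(g: int, w=W) -> float:
    """S0(G) = w0 + (G - 1) * w1."""
    return max(w[0] + (g - 1) * w[1], 0.1)

def init_difficulty(g: int, w=W) -> float:
    """D0(G) = w2 + (G - 3) * w3."""
    return min(max(w[2] + (g - 3) * w[3], 1.0), 10.0)

def next_difficulty(d: float, g: int, w=W) -> float:
    """Adjust by w4 * (G - 3), then mean-revert towards D0(3)."""
    d_new = w[5] * init_difficulty(3, w) + (1 - w[5]) * (d + w[4] * (g - 3))
    return min(max(d_new, 1.0), 10.0)

print(round(init_stability(4), 3))        # easiest first rating, largest S0
print(round(next_difficulty(5.0, 1), 3))  # Again raises difficulty
```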
The retrievability of a card $t$ days after the last review:

$$R(t,S) = 0.9^{\frac{t}{S}}$$

where $R(t,S) = 0.9$ when $t = S$.

The next interval can be calculated by solving the above equation for $t$ after substituting the request retention $r$ for $R$:

$$I(r,S) = S \cdot \frac{\ln(r)}{\ln(0.9)}$$

where $I(r,S)$ is the next interval and $r$ is the request retention.
Note: the intervals after Hard and Easy ratings are calculated differently. The interval after an Easy rating is additionally multiplied by easyBonus. The interval after a Hard rating is lastInterval multiplied by hardInterval.
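A sketch of the exponential forgetting curve and the interval calculation, including the Hard/Easy adjustments described above. The easyBonus and hardInterval defaults used here (1.3 and 1.2) are illustrative Anki-style values, not taken from the parameter list.

```python
import math

def retrievability_v3(t: float, s: float) -> float:
    """R(t, S) = 0.9 ** (t / S)."""
    return 0.9 ** (t / s)

def next_interval_v3(request_retention: float, s: float,
                     g: int = 3, last_interval: float = 0.0,
                     easy_bonus: float = 1.3, hard_interval: float = 1.2) -> float:
    """Base interval from inverting the curve, with the Hard/Easy adjustments."""
    if g == 2:                                   # Hard: lastInterval * hardInterval
        return last_interval * hard_interval
    interval = s * math.log(request_retention) / math.log(0.9)
    if g == 4:                                   # Easy: multiply by easyBonus
        interval *= easy_bonus
    return interval

print(round(next_interval_v3(0.9, 10), 2))                         # 10.0
print(round(next_interval_v3(0.9, 10, g=4), 2))                    # 13.0 with the easy bonus
print(round(next_interval_v3(0.9, 10, g=2, last_interval=8), 2))   # 9.6
```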
The new stability after recall (a code sketch covering both stability formulas follows the post-lapse formula below):

$$S^\prime_r(D,S,R) = S \cdot \left(1 + e^{w_6} \cdot (11 - D) \cdot S^{w_7} \cdot \left(e^{w_8 \cdot (1 - R)} - 1\right)\right)$$

Let $SInc$ (stability increase) denote $\frac{S^\prime_r}{S}$. It has the following properties:

- The larger the value of $D$, the smaller the $SInc$ value. This means that the increase in memory stability for difficult material is smaller than for easy material.
- The larger the value of $S$, the smaller the $SInc$ value. This means that the higher the stability of the memory, the harder it becomes to make the memory even more stable.
- The smaller the value of $R$, the larger the $SInc$ value. This means that the spacing effect accumulates over time.
- The value of $SInc$ is always greater than or equal to 1 if the review was successful.
A 3D visualization of this function could help with understanding it.
In FSRS, a delay in reviewing (i.e., overdue reviews) affects the next interval as follows:
As the delay increases, retrievability (R) decreases. If the review was successful, the subsequent stability (S) would be higher, according to point 3 above. However, instead of increasing linearly with the delay like the SM-2/Anki algorithm, the subsequent stability converges to an upper limit, which depends on your FSRS parameters.
The stability after forgetting (i.e., post-lapse stability):

$$S^\prime_f(D,S,R) = w_9 \cdot D^{w_{10}} \cdot S^{w_{11}} \cdot e^{w_{12} \cdot (1 - R)}$$

For example, the lower the retrievability $R$ at the moment of the lapse, the higher the post-lapse stability.

You can play with the function in post-lapse stability - GeoGebra.
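A combined sketch of the two FSRS v3 stability updates as written above, using the default parameters; function names are mine.

```python
import math

# Default FSRS v3 parameters (w[0] ... w[12]) from the list above.
W = [0.9605, 1.7234, 4.8527, -1.1917, -1.2956, 0.0573, 1.7352, -0.1673,
     1.065, 1.8907, -0.3832, 0.5867, 1.0721]

def next_recall_stability_v3(d: float, s: float, r: float, w=W) -> float:
    """S'_r = S * (1 + e^w6 * (11 - D) * S^w7 * (e^(w8 * (1 - R)) - 1))."""
    return s * (1 + math.exp(w[6]) * (11 - d) * s ** w[7]
                * (math.exp(w[8] * (1 - r)) - 1))

def next_forget_stability_v3(d: float, s: float, r: float, w=W) -> float:
    """S'_f = w9 * D^w10 * S^w11 * e^(w12 * (1 - R))."""
    return w[9] * d ** w[10] * s ** w[11] * math.exp(w[12] * (1 - r))

print(round(next_recall_stability_v3(d=5, s=10, r=0.9), 2))   # stability grows after recall
print(round(next_forget_stability_v3(d=5, s=10, r=0.9), 2))   # stability collapses after a lapse
```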
The default parameters of this version:

[1, 1, 1, -1, -1, 0.2, 3, -0.8, -0.2, 1.3, 2.6, -0.2, 0.6, 1.5]

The $w_i$ denotes w[i]. This version uses 14 parameters. The memory state is represented by Stability (S) and Difficulty (D).
The initial stability after the first rating and the initial difficulty after the first rating are both functions of the first grade. The difficulty is clamped within the range $[1, 10]$.

The new difficulty after review first computes a temporary new difficulty, then applies mean reversion towards the initial difficulty.

Separate formulas give the new stability after a successful review (the user pressed "Hard", "Good" or "Easy") and the new stability after forgetting (the user pressed "Again").
The default parameters of this version:

[2, 5, 3, -0.7, -0.2, 1, -0.3]

The $w_i$ denotes w[i]. This version uses 7 parameters. The memory state is represented by Stability (S), Difficulty (D), and Lapses (L), the total number of times the card has been forgotten.
The retrievability of a card $t$ days after the last review uses the exponential forgetting curve, $R(t,S) = 0.9^{\frac{t}{S}}$.
The initial state after the first rating:

- Initial Stability: $$S_0(G) = w_0 \cdot 0.25 \cdot 2^{G-1}$$
- Initial Difficulty: $$D_0(G) = w_1 - (G - 3)$$
- Initial Lapses: $$L_0(G) = \max(0, 2 - G)$$ (This means $L_0$ is 1 if the first rating is Again, and 0 otherwise.)
The new difficulty after review, the new stability after a successful review (the user pressed "Hard", "Good" or "Easy"), and the new stability after forgetting (the user pressed "Again") each have their own formulas. In this version, the post-lapse stability only depends on the total number of lapses.
Updating lapses after review:
The lapse count is incremented by 1 each time the user presses Again.
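Since the initial-state formulas and the lapse rule are given explicitly above, they can be written down directly; function names are illustrative.

```python
# The seven default parameters of this version, from the list above.
W = [2, 5, 3, -0.7, -0.2, 1, -0.3]

def initial_state(g: int, w=W):
    """Initial (S, D, L) after the first rating G in {1: Again, ..., 4: Easy}."""
    s0 = w[0] * 0.25 * 2 ** (g - 1)     # S0(G) = w0 * 0.25 * 2^(G-1)
    d0 = w[1] - (g - 3)                 # D0(G) = w1 - (G - 3)
    l0 = max(0, 2 - g)                  # L0(G) = 1 only if the first rating is Again
    return s0, d0, l0

def update_lapses(lapses: int, g: int) -> int:
    """The lapse count is incremented by 1 each time the user presses Again."""
    return lapses + 1 if g == 1 else lapses

print(initial_state(1))     # (0.5, 7, 1)
print(initial_state(4))     # (4.0, 4, 0)
print(update_lapses(2, 1))  # 3
```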
If you find this introduction helpful, I'd be grateful if you could give it a star:
My representative paper at ACM KDD and IEEE TKDE: A Stochastic Shortest Path Algorithm for Optimizing Spaced Repetition Scheduling [中文版] & Optimizing Spaced Repetition Schedule by Capturing the Dynamics of Memory [中文版]
My fantastic research experience with spaced repetition algorithms: How did I publish a paper in ACM KDD as an undergraduate?
The largest open-source datasets on spaced repetition with time-series features: open-spaced-repetition/FSRS-Anki-20k & open-spaced-repetition/anki-revlogs-10k
FSRS is an independent open-source project driven by its community. We are grateful for the support from organizations like 墨墨背单词 (MaiMemo Inc.), who champion open source by enabling core contributors like Jarrett Ye to invest time and expertise into FSRS. This collaboration helps ensure FSRS remains a leading-edge, freely available spaced repetition algorithm for everyone.