
MemoryLab’s new Model-Based Mastery algorithm helps students learn faster

Adaptive learning systems can redefine learning experiences by shaping education to individual needs, optimizing student performance. Within this landscape, MemoryLab has recently introduced the Model-Based Mastery (MBM) algorithm, which strategically excludes sufficiently studied facts from repetition schedules, increasing learning efficiency. In this blog post I delve into our comparative analysis of the MBM algorithm versus our traditional Optimal Spacing (OS) algorithm, showing that the MBM algorithm helps students master lessons faster.

Adaptive learning systems are changing the way we learn by personalizing education to fit each individual’s needs. These systems use computational models to predict learner performance and adjust the learning schedule accordingly, making it easier to memorize declarative facts such as vocabulary or toponyms (Lindsey et al., 2014; Papousek et al., 2014). Research has consistently shown that adaptive systems enhance learning efficiency across various subjects, both in controlled laboratory settings and in classrooms (van Rijn et al., 2009; Papousek et al., 2014).

MemoryLab’s computer-based adaptive learning system is designed to enhance long-term retention of declarative material. It employs strategies like testing and spaced practice to facilitate learning (van Rijn et al., 2009). These techniques build on an Optimal Spacing (OS) algorithm, which schedules fact repetitions based on the accuracy scores and response times of individual learners. Research has shown that learners using MemoryLab improve their test scores by about 10 to 20% compared to those using less adaptive learning methods like flashcards (van Rijn et al., 2009; Wilschut et al., 2021; Sense & van Rijn, 2022).

Besides its traditional scheduling algorithm, MemoryLab has also been working on a model-based assessment of longer-term retention that we call Model-Based Mastery (MBM). MBM represents our understanding of how learners retain information over time: it assesses whether a fact is sufficiently studied (mastered) and, if so, excludes that fact from the repetition schedule. This creates more room to focus on facts that are not yet mastered, leading to more efficient learning. By excluding mastered facts, learners can progress more quickly through lessons, saving time or placing greater emphasis on less familiar content.

To determine whether a fact is mastered, the algorithm predicts the longer-term activation of each fact during practice. Specifically, the cognitive model predicts a fact’s activation over a minimum retention interval (24 hours) and evaluates whether that activation exceeds a forgetting threshold (see Figure 1). Whenever a fact’s predicted activation exceeds this threshold, the fact is categorized as mastered and removed from the repetition schedule.

Figure 1: Projected activation for two facts after a practice session, one of which has been mastered (red).
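The post does not spell out the underlying model, but the scheduling work it builds on (van Rijn et al., 2009) uses ACT-R-style memory activation, in which each past encounter of a fact leaves a trace that decays as a power law. The sketch below shows what a mastery check could look like under that assumption; the fixed decay of 0.2, the threshold of -0.8, the timing values, and the function names are all illustrative choices, not MemoryLab’s actual parameters or code.

```python
import math

def predicted_activation(encounter_times, decay, t_future):
    """ACT-R-style activation at time t_future: the log of summed
    power-law-decayed traces, one per past encounter (times in seconds)."""
    return math.log(sum((t_future - t) ** (-decay) for t in encounter_times))

def is_mastered(encounter_times, decay, now, retention=24 * 3600, threshold=-0.8):
    """A fact counts as mastered when its predicted activation a full
    retention interval (24 h) past the current moment still exceeds the
    forgetting threshold; mastered facts leave the repetition schedule."""
    return predicted_activation(encounter_times, decay, now + retention) > threshold

# Five encounters during a five-minute session clear the 24-hour
# threshold under these illustrative parameters; two encounters do not.
well_practiced = [0, 60, 120, 180, 240]
barely_practiced = [0, 60]
print(is_mastered(well_practiced, 0.2, now=300))    # True
print(is_mastered(barely_practiced, 0.2, now=300))  # False
```

In the actual system the decay would adapt to each learner and fact based on accuracy and response times; a fixed decay is used here only to keep the sketch self-contained.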

Comparison between OS and MBM

We aimed to compare the efficiency of the OS algorithm with the newly developed MBM algorithm while studying in MemoryLab. To achieve this, we analyzed the learning data of students who used both algorithms while practicing facts within MemoryLab. These students were studying French and English lessons. Our primary objective was to determine whether the MBM algorithm offers greater efficiency than the OS algorithm. A scheduling algorithm can be considered more efficient if learners achieve the same learning gains in less time, or greater learning gains in the same amount of time. Specifically, we investigated:

  1. The number of facts practiced by students within a specific timeframe.
  2. The time taken by students to master all facts within a lesson (referred to as lesson mastery).

Given that the MBM algorithm gradually excludes mastered facts, unlike the OS algorithm, we expected an advantage for the MBM algorithm. This advantage would be apparent through:

  1. A higher number of facts covered within the same timeframe.
  2. A shorter duration required for students to attain mastery of lessons.

How many new facts are practiced within a specific timeframe?

To assess whether learners using the MBM algorithm encountered more facts within specific timeframes, we examined the number of newly presented facts over time with both the MBM and OS algorithms. Both algorithms introduce new facts into the learning session whenever none of the current facts require immediate rehearsal. We measured the number of new facts that were introduced per minute of study, provided that the learner had not yet encountered every fact in a lesson. Figure 2 depicts the average rate at which new facts are introduced over a study period of 12 minutes. Initially, both algorithms exhibit a similar rate of introducing new facts within the first minute. As time progresses, however, the MBM algorithm shows a clear advantage in the pace at which it presents new facts: by the fifth minute of the session, it is introducing about twice as many facts per minute as the OS algorithm. This higher rate can be attributed to the MBM algorithm’s exclusion of sufficiently mastered facts, which creates additional space for introducing new, unstudied facts.

To rule out the possibility that the higher fact introduction rate of the MBM algorithm came at the cost of worse learning performance, we investigated whether response accuracy and response times differed between the two algorithms. We found no such differences, suggesting that the MBM algorithm’s exclusion of mastered facts did not impair how well students learned the items.

Figure 2: Number of new facts introduced per minute of study
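The "new facts per minute" measure can be made concrete with a small sketch. Assuming a trial log of `(timestamp_seconds, fact_id)` pairs — a hypothetical format, not MemoryLab’s actual data schema — counting first appearances per minute could look like this:

```python
from collections import defaultdict

def new_facts_per_minute(trials):
    """Count how many facts appear for the first time in each minute of
    study. trials: (timestamp_seconds, fact_id) pairs sorted by time."""
    seen = set()
    counts = defaultdict(int)
    for timestamp, fact_id in trials:
        if fact_id not in seen:          # first encounter of this fact
            seen.add(fact_id)
            counts[int(timestamp // 60)] += 1
    return dict(counts)

# Facts "a" and "b" debut in minute 0, "c" in minute 1, "d" in minute 2;
# repeat encounters of "a" and "b" are not counted again.
log = [(5, "a"), (20, "b"), (40, "a"), (70, "c"), (95, "b"), (130, "d")]
print(new_facts_per_minute(log))  # {0: 2, 1: 1, 2: 1}
```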

How fast do students reach mastery?

Given that learning with the MBM algorithm results in encountering more new facts per minute, we anticipated that this algorithm would also lead to faster attainment of lesson mastery. Students do not always practice until every fact in a lesson is mastered. Assuming that learners do reach lesson mastery, how many trials does it take them? Figure 3 illustrates how many retrieval practice trials were required on average to reach lesson mastery for lessons of different sizes.

The results indicated that on average it takes fewer practice trials to reach lesson mastery when studying with the MBM algorithm. For a lesson with an average number of facts (about 11), it takes 45 trials on average to master the full lesson with the MBM algorithm. With the OS algorithm, it takes about 10.8 extra trials to master the same lesson, about 24% more.

Figure 3: Average number of retrieval practice trials required to reach lesson mastery, by lesson size.
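The arithmetic behind the 24% figure is worth spelling out, because the percentage depends on which algorithm serves as the baseline: 10.8 extra trials on MBM’s base of 45 means OS needs 24% more trials, which is the same as MBM needing roughly 19% fewer trials than OS.

```python
mbm_trials = 45.0      # average trials to master an ~11-fact lesson with MBM
extra_with_os = 10.8   # additional trials the same lesson takes with OS

os_trials = mbm_trials + extra_with_os     # 55.8 trials with OS
os_overhead = extra_with_os / mbm_trials   # 0.24 -> OS needs 24% more trials
mbm_savings = extra_with_os / os_trials    # ~0.194 -> MBM needs ~19% fewer

print(f"{os_trials:.1f} trials with OS, "
      f"{os_overhead:.0%} more than MBM, "
      f"{mbm_savings:.0%} saved by MBM")
```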

The advantage of the MBM algorithm grows as the number of facts in a lesson increases. Figure 4 visualizes this advantage as the percentage reduction in trials needed to reach lesson mastery with the MBM algorithm. For instance, a lesson with 15 facts requires almost 30% fewer trials to reach mastery when a student practices with the MBM algorithm rather than the OS algorithm.

Figure 4: Percentage fewer trials required to reach lesson mastery with the MBM algorithm, by lesson size.


Conclusion

We investigated the effectiveness of MemoryLab’s recently developed Model-Based Mastery algorithm by comparing it to our traditional Optimal Spacing algorithm.

Previous studies have shown that the OS algorithm, which schedules fact repetitions based on accuracy scores and response times, yields significant learning gains compared to traditional flashcard-based methods. Our analysis suggests that the newly developed MBM algorithm is more efficient still. By dynamically excluding mastered facts from the repetition schedule, the MBM algorithm optimizes learning by placing greater emphasis on unmastered material. The analyses demonstrate that the MBM algorithm introduces new facts at a faster rate, with learners mastering lessons in fewer trials than with the OS algorithm. As the number of facts within a lesson increases, the advantage of the MBM algorithm becomes more pronounced. This suggests that implementing this algorithm considerably improves the effectiveness of study sessions, particularly when lessons contain more items.


References

Lindsey, R. V., Shroyer, J. D., Pashler, H., & Mozer, M. C. (2014). Improving students’ long-term knowledge retention through personalized review. Psychological Science, 25(3), 639–647.

Papoušek, J., Pelánek, R., & Stanislav, V. (2014). Adaptive practice of facts in domains with varied prior knowledge. Proceedings of Educational Data Mining, 6–13.

Sense, F., & van Rijn, H. (2022, January 27). Optimizing Fact-Learning with a Response-Latency-Based Adaptive System.

Van Rijn, H., van Maanen, L., & van Woudenberg, M. (2009). Passing the Test: Improving Learning Gains by Balancing Spacing and Testing Effects. In A. Howes, D. Peebles, & R. P. Cooper (Eds.), Proceedings of the 9th International Conference on Cognitive Modeling (pp. 110–115). Manchester, UK.
