Why Do Students Quit Online Learning? 

Online learning environments (OLEs) are becoming increasingly popular due to their flexibility, accessibility, and ability to adapt to individual learners. These platforms allow students to study at their own pace and revisit material as needed, making them particularly useful for self-directed learning. Many systems now incorporate adaptive algorithms to optimize learning schedules, presenting items at the ideal moment for long-term retention. 

Despite these advantages, student engagement remains a significant challenge. In particular, many learners quit their sessions early, often before reaching the end of the study window, and sometimes while further progress is still possible. 

This raises a key question: What causes students to stop studying? 

Our research identifies two consistent and compelling predictors: sequences of consecutive errors and item difficulty.

Understanding early quitting in OLEs 

Previous research on early quitting in OLEs has largely focused on stable individual traits, such as motivation, self-regulation, and self-efficacy, as predictors of persistence. These factors are undoubtedly important, but they are also difficult to influence in the moment. As a result, recent work has shifted toward identifying real-time predictors of quitting that can be addressed during a study session. 

One such predictor is error streaks. A study by ten Broeke et al. (2021) found that students who made three consecutive errors were over 13 times more likely to quit a session compared to those who answered the same item correctly. The key insight from this work is that it’s not single mistakes that drive disengagement, but rather the accumulation of errors over time. 

This finding aligns with Metcalfe and Kornell’s (2005) Judgment of Rate of Learning (jROL) theory, which suggests that learners make ongoing assessments about how much they are learning. When students perceive that their learning has stalled, particularly after repeated errors, they may decide that further effort is unlikely to be worthwhile. 

Our aim was to examine whether these patterns (error-induced quitting and the effect of item difficulty) would occur in a different type of learning environment: one specifically designed to minimize errors.

Why study MemoryLab?

MemoryLab is an adaptive learning platform that applies principles from memory science, such as spacing and forgetting curves, to optimize when items are reviewed. Unlike other adaptive systems that intentionally challenge learners with difficult material, MemoryLab aims to present items just before they are forgotten. In this way, the system supports high accuracy by design. 

Because the platform minimizes errors through its scheduling algorithm, it offers a useful context for testing the robustness of quitting predictors. If consecutive errors and item difficulty still predict quitting, even when they occur rarely and are not intended by the system, this would suggest these effects are robust across different types of digital learning environments. 

Dataset and analysis 

We analyzed anonymized data from 61,094 study sessions collected through VocaTrainer, a platform developed in collaboration with MemoryLab and Klett und Balmer in Switzerland. Each session begins with the student selecting a lesson and a duration of either 4, 6, or 8 minutes. Once the session begins, the adaptive algorithm selects items for review based on the student’s prior performance and predicted forgetting. 

To maximize interpretability, we focused on sessions that were initially set to 8 minutes in length. We then categorized sessions into two groups: 

● Early Termination: Sessions that ended before the 8-minute mark

● Full Completion: Sessions that lasted the full duration or were extended

We then examined quitting behavior in relation to the number of consecutive errors made prior to a given trial and the difficulty level of each item. 
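As a rough illustration of this step, the sketch below derives the session categories and the running error streak from per-trial logs. The column names (session_id, planned_minutes, actual_minutes, correct) are hypothetical stand-ins; the actual VocaTrainer export may be structured differently.

```python
import pandas as pd

# Hypothetical per-trial log: one row per answered item.
# Assumed columns: session_id, planned_minutes, actual_minutes, correct (0/1).
trials = pd.read_csv("vocatrainer_trials.csv")

# Keep only sessions that were initially set to 8 minutes.
trials = trials[trials["planned_minutes"] == 8].copy()

# Label each session: early termination vs. full completion (or extension).
session_length = trials.groupby("session_id")["actual_minutes"].first()
outcome = (session_length < 8).map({True: "early_termination", False: "full_completion"})
trials["session_outcome"] = trials["session_id"].map(outcome)

def errors_before(correct: pd.Series) -> pd.Series:
    """Number of consecutive errors made immediately before each trial."""
    streak, counts = 0, []
    for c in correct:
        counts.append(streak)                  # errors accumulated prior to this trial
        streak = 0 if c == 1 else streak + 1   # reset the streak on a correct answer
    return pd.Series(counts, index=correct.index)

trials["errors_before"] = trials.groupby("session_id")["correct"].transform(errors_before)
```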

Results: Error streaks and item difficulty both predict quitting

Consecutive errors increase the likelihood of quitting

The results confirmed our hypothesis. The probability of quitting increased substantially with each additional consecutive error.

Figure: Consecutive errors and quitting probability: The likelihood that a student quits a session increases sharply with each additional consecutive mistake. Compared to trials with no recent errors, students were over three times more likely to quit after three or more errors in a row. 

Even one error more than doubled the odds of quitting. After two errors, the likelihood increased further. Students who made three or more consecutive mistakes were nearly four times more likely to quit compared to those who answered correctly. 

Item difficulty also predicts quitting

In addition to error streaks, we found that item difficulty was a significant independent predictor of quitting. More difficult items—those with lower historical accuracy across all learners—were more likely to result in session termination, regardless of the student’s immediate error history. Importantly, there was no significant interaction effect between item difficulty and error streaks, suggesting that these factors operate independently in predicting quitting.

Figure: Predicted probability of quitting by item difficulty: As item difficulty increases, so does the likelihood of quitting, regardless of recent error history. Each line represents a different streak level, showing that both item difficulty and error streaks are independent predictors of early session termination. 
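To give a sense of the kind of model behind these estimates, here is a minimal sketch of a trial-level logistic regression, built on the hypothetical columns from the earlier snippet plus an assumed item_id column; the exact specification used in our analysis may differ.

```python
import numpy as np
import statsmodels.formula.api as smf

# Item difficulty here: 1 minus the item's historical accuracy across all learners.
# (item_id is an assumed column identifying the fact being practiced.)
item_accuracy = trials.groupby("item_id")["correct"].mean()
trials["difficulty"] = 1 - trials["item_id"].map(item_accuracy)

# quit_after_trial (assumed 0/1 flag): the session was terminated early right after this trial.
# The '*' adds both main effects and the error-streak x difficulty interaction.
model = smf.logit("quit_after_trial ~ errors_before * difficulty", data=trials).fit()

print(model.summary())       # inspect the interaction term to check whether the effects are independent
print(np.exp(model.params))  # odds ratios per unit change in each predictor
```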

To better understand this effect, we inspected a subset of items that were associated with the highest quitting rates. Many of these items did not necessarily involve conceptually difficult material, but rather required exact spelling, use of accents, or punctuation. These formatting demands—while useful in some contexts—often resulted in incorrect classifications for responses that were close or partially correct. 

This finding indicates that item difficulty is not always about semantic challenge. Sometimes, the difficulty lies in how narrowly correctness is defined. In these cases, a student’s failure may reflect a minor formatting mistake rather than a genuine retrieval failure.

This has important design implications. Future adaptive systems should carefully evaluate whether their fact lists and correctness rules are unnecessarily strict. When systems penalize learners for small deviations, they risk misclassifying successful recall as failure—ultimately increasing frustration and the likelihood of quitting. 
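To make this concrete, below is a minimal sketch of a more forgiving correctness check: it strips accents, case, and punctuation before comparing, and tolerates near-misses above a similarity threshold. The helper names and threshold are invented for illustration and are not how VocaTrainer currently grades answers.

```python
import string
import unicodedata
from difflib import SequenceMatcher

def normalize(answer: str) -> str:
    """Lower-case, strip accents and punctuation, and collapse whitespace."""
    text = unicodedata.normalize("NFKD", answer)
    text = "".join(ch for ch in text if not unicodedata.combining(ch))
    text = text.lower().translate(str.maketrans("", "", string.punctuation))
    return " ".join(text.split())

def is_acceptable(response: str, target: str, threshold: float = 0.9) -> bool:
    """Accept exact matches after normalization, or near-misses above a similarity threshold."""
    a, b = normalize(response), normalize(target)
    if a == b:
        return True
    return SequenceMatcher(None, a, b).ratio() >= threshold

# Example: a missing accent no longer counts as a retrieval failure.
assert is_acceptable("eleve", "élève")
```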

What do students do before quitting? 

To better understand what quitting looks like, we also examined the final trials that occurred before session termination. Interestingly, most students were not submitting blank or nonsensical answers before they quit. Instead, they appeared to be making real attempts, just ones that were incorrect. 

This suggests that quitting is not simply the result of disengagement or apathy. Rather, students are continuing to try, but the lack of perceived progress likely prompts them to stop. The choice to quit may reflect a moment of frustration or demotivation, particularly after several unsuccessful trials. 

Implications for system design 

Given that both error streaks and item difficulty appear to predict quitting, learning platforms can respond by implementing simple, real-time interventions. One effective approach—supported by ten Broeke et al. (2021)—is to present an easier item immediately after an error.

Even a single correct response can help restore momentum, reduce frustration, and increase the learner’s perception of progress. These small design changes do not reduce the overall challenge of the session, but they can interrupt the negative feedback loop triggered by repeated errors. 

MemoryLab, with its adaptive infrastructure, is well-positioned to integrate this kind of strategy. The system already tracks user behavior and schedules reviews dynamically. Integrating error-streak detection into the algorithm could allow it to respond with supportive, achievable content when students are at risk of quitting. 

Similarly, item difficulty could be managed by ensuring that the most difficult items are not shown back-to-back, especially early in the session or after a streak of incorrect responses. Even spacing more challenging content can help sustain motivation. 
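As a sketch of how such a policy could sit on top of the existing scheduler, the function below backs off to an easier item after a configurable error streak and avoids presenting two very difficult items back-to-back. The thresholds and the Item structure are assumptions for illustration, not part of MemoryLab's actual algorithm.

```python
from dataclasses import dataclass

@dataclass
class Item:
    item_id: str
    difficulty: float  # e.g. 1 - historical accuracy, in [0, 1]

def choose_next(candidates: list[Item], recent_errors: int, last_difficulty: float,
                streak_threshold: int = 2, hard_cutoff: float = 0.7) -> Item:
    """Pick the next item, backing off to easier material after an error streak
    and avoiding two very difficult items in a row."""
    ranked = sorted(candidates, key=lambda it: it.difficulty)
    if recent_errors >= streak_threshold:
        # Learner is at risk of quitting: serve an achievable item to restore momentum.
        return ranked[0]
    if last_difficulty >= hard_cutoff:
        # Don't follow one very hard item with another.
        easier = [it for it in ranked if it.difficulty < hard_cutoff]
        if easier:
            return easier[0]
    # Otherwise defer to the normal (spacing-based) priority order.
    return candidates[0]
```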

Why this matters 

Perhaps the most important insight from our study is that quitting occurs even in systems designed to avoid errors, and that both error streaks and item difficulty play an independent role in this process. 

This suggests that the decision to stop is not solely a function of content difficulty or personal traits. Instead, learners are making real-time evaluations about how productive their efforts feel.

When they make multiple mistakes or encounter items they find disproportionately difficult, they are more likely to conclude that continuing won’t be worthwhile. 

Recognizing these patterns allows us to make targeted adjustments to system design. With small changes, we can help students stay engaged, even when they encounter challenges. 

Conclusion

Quitting in online learning environments often follows a predictable pattern: several mistakes in a row, and/or the presentation of highly difficult material. This pattern, observed even in a platform like MemoryLab that minimizes errors, highlights the powerful role of perceived failure and challenge in shaping engagement. 

By identifying these predictors and responding to them in real time, learning platforms can reduce premature quitting and support more consistent, sustained learning. 

As digital education continues to grow, it is essential that our systems do more than just present content. They must also recognize when students are struggling—and respond in ways that support persistence. 

References

● ten Broeke, S., van Rijn, H., & Sense, F. (2021). Predicting quitting behaviour in online learning environments. Journal of Learning Analytics, 8(3), 33–48.

● Metcalfe, J., & Kornell, N. (2005). A region of proximal learning model of study time allocation. Journal of Memory and Language, 52(4), 463–477. 

● Pan, H. (2023). Adaptive learning and its effectiveness in online learning environments. International Review of Education, 69(1), 45–69. 

● Akpen, P. A., Opoku, A., & Boateng, E. (2024). Self-regulated learning and performance in online courses: A meta-analytic review. Educational Psychology Review, 36(1), 77–103.

