Adaptive Learning in Educational Practice: More than an Engineering Challenge

Adaptive learning systems are a prime example of applied cognitive science. They implement theories of cognition in computational models, these models enable a quantified interpretation of learners’ behaviour, and this interpretation is used to shape the learning experience. Using these systems in real-world educational settings confronts us with new questions: do technological solutions developed in carefully controlled environments translate to messy real-world settings, and what other (non-technological) factors come into play there?

If something works in the lab, does it also work outside the lab?

We typically study cognition in tightly controlled lab environments, in which we try to rule out as many confounding factors as possible. While this allows for very precise measurements of cognitive performance, the lab is very different from the places where learners actually do their studying: in a busy classroom, at the kitchen table, on the bus, and so on. An adaptive learning system that only works in a lab environment wouldn’t be very useful, which is why it’s important to test these systems in realistic settings.

A key requirement for the functioning of an adaptive fact learning system is that it can reliably identify individual differences in learners’ ability to memorise information, as well as differences in the memorability of individual items. Being able to tell slow from fast learners, and difficult from easy facts, is what allows the system to adapt effectively (Sense et al., 2016). We can think of this as a technological or engineering challenge: can we build such a system? Below, I list a few examples of how we approach this question with the MemoryLab learning software.
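
To make this concrete, here is a minimal sketch of how such an adaptive estimate could work, assuming an ACT-R-style memory model: every encounter with a fact leaves a trace that decays over time, the traces together predict the probability of recall, and the per-fact rate of forgetting is nudged whenever a prediction misses. The function names, parameter values, and update rule below are illustrative placeholders, not MemoryLab’s actual implementation.

```python
import math

def activation(encounter_times, now, rate_of_forgetting):
    """Summed, decaying traces of all past encounters with a fact."""
    return math.log(sum((now - t) ** -rate_of_forgetting for t in encounter_times))

def recall_probability(act, threshold=-0.8, noise=0.25):
    """Logistic mapping from activation to predicted probability of recall."""
    return 1.0 / (1.0 + math.exp((threshold - act) / noise))

def update_rate_of_forgetting(rof, predicted_p, recalled, step=0.02):
    """Nudge the estimate towards the observed outcome: an unexpected failure
    raises the rate of forgetting, an unexpected success lowers it."""
    error = predicted_p - (1.0 if recalled else 0.0)
    return rof + step * error

# Example: a learner has seen a word three times and then fails to recall it.
encounters = [0.0, 30.0, 90.0]   # seconds since the start of the session
rof = 0.3                        # starting rate of forgetting (placeholder value)
p = recall_probability(activation(encounters, now=300.0, rate_of_forgetting=rof))
rof = update_rate_of_forgetting(rof, p, recalled=False)   # estimate goes up
```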

In a recent study, we analysed the performance of almost 140 thousand high school students learning vocabulary in a second language with MemoryLab (van der Velde et al., 2023). Crucially, this learning took place in the context of these students’ regular education, which means that we had no control over the circumstances in which the learning sessions happened. Despite this relative lack of control, we found that the rate of forgetting, the adaptive parameter that the system estimates for each fact that a learner studies, was relatively consistent across measurements. That consistency made it possible to improve predictions of future performance by using rate of forgetting estimates from previously observed learning sessions. This finding shows that it is still possible to reliably estimate the parameter that captures individual differences, even when operating with less control over the context in which the estimation is made.
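
As a rough illustration of how estimates from previously observed sessions could be reused, here is a minimal sketch that assumes simple per-fact and per-learner histories of earlier rate-of-forgetting estimates. The function, its default value, and the preference for fact-level over learner-level history (echoing the ‘what’ over ‘who’ finding in the paper’s title) are illustrative assumptions rather than MemoryLab’s actual code.

```python
from statistics import mean

DEFAULT_ROF = 0.3  # fallback starting value when no history is available

def initial_rate_of_forgetting(fact_history, learner_history):
    """Choose a starting estimate for a new fact-learner pair.

    fact_history: earlier estimates for this fact, from other learners
    learner_history: earlier estimates for this learner, on other facts
    Fact-level history is preferred here, echoing the finding that knowing
    'what' is being learned is more informative than knowing 'who' learns it.
    """
    if fact_history:
        return mean(fact_history)
    if learner_history:
        return mean(learner_history)
    return DEFAULT_ROF

# Example: a new learner starts on a word that other learners found hard.
print(initial_rate_of_forgetting(fact_history=[0.38, 0.42], learner_history=[]))  # roughly 0.4
```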

In an earlier study, we let university students use the MemoryLab algorithm to practise glossary terms related to the course in which they were enrolled (Sense et al., 2021). These terms were tested on the exam at the end of the course. Students’ choices about whether, what, where, and when to study with the system were entirely self-directed. Nonetheless, their performance in learning sessions was predictive of their later success on the exam: items estimated to be easier to remember were more likely to be recalled correctly on the exam, and learners with a lower average rate of forgetting obtained higher exam grades. It was therefore still possible to derive meaningful insights from students’ retrieval practice performance.

This also seems to be the case in non-educational settings. For instance, Holly Hake of the University of Washington performed a study in which older adults completed brief weekly memory assessments with the MemoryLab system over a span of more than six months (Hake et al., 2023). Each assessment yielded its own independent estimate of a participant’s ability to memorise information. These estimates were highly correlated with one another (the average correlation coefficient was r = 0.70), showing that the system could reliably attune to participants’ individual ability, even though the assessments were short sessions performed at home without oversight. (Granted, older adults may be more likely than most to do such sessions in a quiet environment that allows them to focus, even if there is no experimenter looking over their shoulder.)
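
As a rough sketch of how such between-session consistency can be quantified, the snippet below computes the average pairwise Pearson correlation between per-session estimates; the data are made up, and the exact analysis in Hake et al. (2023) may well differ.

```python
import numpy as np

def average_session_reliability(estimates):
    """estimates: a participants-by-sessions array of per-session ability
    estimates (e.g., rate of forgetting). Returns the mean Pearson correlation
    over all pairs of sessions."""
    corr = np.corrcoef(np.asarray(estimates, dtype=float), rowvar=False)
    pairwise = corr[np.triu_indices_from(corr, k=1)]  # upper triangle, no diagonal
    return pairwise.mean()

# Example with made-up data: 4 participants, 3 weekly assessment sessions.
print(average_session_reliability([[0.30, 0.32, 0.31],
                                   [0.45, 0.43, 0.47],
                                   [0.28, 0.27, 0.30],
                                   [0.50, 0.52, 0.49]]))
```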

Together, these examples suggest that the method by which our algorithm estimates memory performance from individual retrieval responses is relatively robust to the disturbances of a less-controlled environment.

What is needed to make adaptive learning systems actually work in schools?

While it may be tempting to focus on the engineering challenges involved in building an adaptive learning system—as I have done above—there is much more to making such a system work in an applied setting like a school classroom. Education is a complex and inherently social activity, and any technological innovation we design has to be seen within that context.

Understanding how teachers, students, parents, and other stakeholders see and experience technology in the classroom helps in designing technology that is actually useful. For example, teachers use analytics dashboards in many different ways: they may look for information about the progress of an individual student, about errors on a specific activity, or about characteristics of the class as a whole, and they may or may not act on that information (Molenaar & Knoop-van Campen, 2019). We can design better dashboards if we know what data teachers are looking for and understand how they interpret and act on that information. The same goes for other forms of educational technology: involving teachers in the selection and design of tools improves the odds that teachers can use them productively. This also means that it is incumbent on developers of educational technology to communicate clearly and transparently about how their technology works and what it can and cannot do, so that teachers are empowered to make well-informed decisions about how they use it.

Similarly, it is important to know how learners perceive and interact with technology. For instance, we saw that many students in the Sense et al. (2021) study only used the MemoryLab learning system one or two days before the final exam, which is not exactly what we’d hoped to see. Some of this may have been last-minute cramming, but such usage patterns can also indicate that students were simply using the tool in a different way than we intended: to review knowledge they had already learned, rather than to acquire it from scratch. In digital learning environments, there are also plenty of examples of learners finding clever ways of “gaming the system” or exploiting loopholes that help them get through assignments without actually learning very much (Baker et al., 2008). In many cases, this is because learners feel that using the software is frustrating or unhelpful. Listening to students, giving them more agency when using software tools, and involving learners in the design process can all help in creating educational technology that learners can, and want to, use.

Earlier this month, we started a new multi-year project on using an automated assessment of students’ mastery of knowledge to partly replace traditional knowledge tests in schools. In this project, we are looking to implement many of the ideas outlined here. Working together with researchers, teachers, and students, we’ll design a system that not only works in a technological sense, but also explicitly takes the educational context into account. This means finding out whether an automated assessment does indeed provide an accurate measure of students’ knowledge, but also understanding what teachers and learners expect from such an assessment and what it would take for it to replace a regular test. Stay tuned for updates on this project!

Summary

The application of scientific insights in a real-world context, such as in adaptive learning systems and other forms of educational technology, is not just an engineering challenge, but also requires understanding the (social) context in which that application takes place. Only by considering both can we make educational technology that truly works.

References

Baker, R., Walonoski, J., Heffernan, N., Roll, I., Corbett, A., & Koedinger, K. (2008). Why Students Engage in “Gaming the System” Behavior in Interactive Learning Environments. Journal of Interactive Learning Research, 19(2), 185–224. https://psycnet.apa.org/record/2008-07015-001

Hake, H. S., Leonard, B., Ulibarri, S., Grabowski, T., van Rijn, H., & Stocco, A. (2023). Breaking New Ground in Computational Psychiatry: Model-Based Characterization of Forgetting in Healthy Aging and Mild Cognitive Impairment [Preprint]. https://doi.org/10.1101/2023.05.13.23289941

Molenaar, I., & Knoop-van Campen, C. A. N. (2019). How Teachers Make Dashboard Information Actionable. IEEE Transactions on Learning Technologies, 12(3), 347–355. https://doi.org/10.1109/TLT.2018.2851585

Sense, F., Behrens, F., Meijer, R. R., & van Rijn, H. (2016). An Individual’s Rate of Forgetting Is Stable Over Time but Differs Across Materials. Topics in Cognitive Science, 8(1), 305–321. https://doi.org/10.1111/tops.12183

Sense, F., van der Velde, M., & van Rijn, H. (2021). Predicting University Students’ Exam Performance Using a Model-Based Adaptive Fact-Learning System. Journal of Learning Analytics, 8(3), 155–169. https://doi.org/10.18608/jla.2021.6590

van der Velde, M., Sense, F., Borst, J. P., & van Rijn, H. (2023). Large-scale evaluation of cold start mitigation in adaptive fact learning: Knowing ‘what’ matters more than knowing ‘who’ [Preprint]. PsyArXiv. https://doi.org/10.31234/osf.io/z3vtn
