Ido Roll
Technion – Israel Institute of Technology
Miri Barhak-Rabinowitz
Technion – Israel Institute of Technology
This chapter describes an assessment of self-regulated learning (SRL) that is based on an extended learning task with embedded feedback and resources. These resources serve the dual purpose of providing learners with meaningful choices that support their learning and of documenting their behaviours. Three affordances of such resources are identified: experimentation, where learners can express and test ideas; explicit feedback, where learners can monitor their progress towards their learning goals; and information-seeking, where learners can receive information about the task and its environment. The chapter analyses the PISA 2025 Learning in the Digital World assessment to demonstrate how these resources support the assessment of SRL. The chapter then addresses the interplay between data and theory in constructing learning and assessment tasks. It discusses some of the main design challenges, focusing particularly on the effect of prior knowledge and on supporting generalisable inferences.
The goals of assessment are expanding from evaluating the application of existing static knowledge (learning outcomes) to evaluating the dynamics of acquiring and developing new knowledge (learning processes) – see Chapters 2 and 4 of this report as well as Bransford and Schwartz (1999[1]), National Research Council (2001[2]) and Cutumisu, Chin and Schwartz (2019[3]). To support such inferences, assessment should provide learners with authentic and complex problem-solving tasks where learners are required to manage their own learning and exercise agency. Indeed, common to most 21st Century skill frameworks is the view of students as agentic learners who regulate their learning process (Dede, 2010[4]; Kirschner and Stoyanov, 2018[5]; OECD, 2018[6]). This chapter takes a deeper dive into how assessment can support inferences about agentic learners and discusses the affordances, design guidelines and challenges of using digital resources to assess students’ self-regulated learning (SRL).
The term SRL refers to students’ goal-directed actions that guide the dynamics of accessing, constructing, and applying knowledge (Schunk and Zimmerman, 2013[7]). SRL includes the use of cognitive and metacognitive strategies as well as regulating one’s motivational and affective states (Panadero, 2017[8]). Cognitive strategies support making progress towards one’s learning goals. Metacognitive strategies support the planning, monitoring and adjustment of these strategies by setting goals, monitoring progress and adjusting the learning process (Flavell, 1979[9]; Schunk and Zimmerman, 2013[7]). The motivational and affective components of SRL refer to the processes through which learners manage their emotional states while learning such as their willingness to persist with learning activities and environments even in the face of difficulty (Fredricks, Blumenfeld and Paris, 2004[10]; Järvenoja et al., 2018[11]; OECD, forthcoming[12]).
As implied by this view of SRL, regulative processes are essential in authentic learning contexts that have the following characteristics: 1) there is a challenge to overcome, such as a learning goal to be achieved or a problem to be solved; and 2) the challenge is non-linear, requiring learners to make meaningful choices that affect how they progress towards their goal or solution. Providing learning resources serves two important goals in this context. First, resources support non-linear learning trajectories and invite learners to make meaningful choices. Second, they collect digital traces of these choices. Learning resources in complex tasks thus offer valuable opportunities to assess SRL (Roll and Winne, 2015[13]; Shute and Rahimi, 2021[14]).
We begin this chapter by describing key affordances of digital learning resources that facilitate and capture SRL behaviours. We then demonstrate the use of such resources in assessment using the example of the PISA 2025 Learning in the Digital World assessment (OECD, forthcoming[12]). Last, we describe design considerations and main challenges for inferring SRL skills from digital trace data.
Providing learners with digital tools and representations is a known strength of learning with technology. Digital tools that help learners organise their thinking and support knowledge construction are often referred to as “cognitive tools” (Drew, 2019[15]; Jonassen, 1992[16]; Nesbit, Niu and Liu, 2018[17]). We view digital learning resources as a class of cognitive tools that offer interactivity to support meaning-making. We emphasise interactivity not only in that learners can use these tools flexibly but also in that the tools provide learners with additional information that is not available without them. Being interactive, resources provide learners with opportunities to access new knowledge. Because these resources are digital, each action that learners take can be logged, offering a window into their reasoning and learning processes.
Learning resources offer learners multiple affordances to enact goal-oriented behaviours. We group these affordances into three families: experimentation, explicit feedback and information-seeking. Experimentation allows learners to interrogate and represent their ideas and execute them in a manner that produces responses from the environment. For example, coding environments let students code, compile, execute and observe outcomes (conversely, coding tasks where learners enter code but cannot execute it are not considered learning resources here). Another example is interactive scientific simulations where learners can manipulate elements and observe the outcome of their exploration (Wieman, Adams and Perkins, 2008[18]). The main benefit of such resources comes from their responses to learner actions, often termed “situational feedback” (Nathan, 1998[19]; Roll et al., 2014[20]). Situational feedback provides learners with a representation of the real-world consequences that follow from their actions. For example, an interactive simulation for electricity will adjust the displayed light intensity based on the voltage that learners set (Roll et al., 2018[21]; de Jong et al., 2018[22]). Situational feedback is implicit and originates within the task situation itself, consistent with the internal logic of the task. That is, learners are not flagged or graded by an external all-knowing model but are instead given opportunities to elicit, observe and interpret relevant information from the environment’s response (Nathan, 1998[19]). Observing how learners respond to situational feedback can be used to evaluate their monitoring behaviours and the corresponding adjustments that they make in their cognitive strategies.
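To make the distinction concrete, the toy sketch below mimics the kind of situational feedback an electricity simulation might provide; the physics is deliberately simplified and the numbers are purely illustrative. The environment responds with a state of the world (a brightness value) rather than with a judgement of correctness, leaving interpretation to the learner.

```python
# Toy sketch of situational feedback: the environment returns an outcome,
# not a verdict. The formula and values are illustrative only.

def bulb_brightness(voltage: float, resistance: float = 10.0) -> float:
    """Return the displayed brightness as power in watts (P = V^2 / R)."""
    return voltage ** 2 / resistance

# The learner raises the voltage and observes a brighter bulb; making sense
# of this response (rather than being told "correct"/"incorrect") is up to them.
print(bulb_brightness(3.0))   # 0.9
print(bulb_brightness(6.0))   # 3.6
```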
Explicit feedback affordances enable learners to evaluate their actions. This can include a range of inputs, from error flagging to explanations about the nature of the error or suggestions for future work (Deeva et al., 2021[23]). Feedback can be triggered on demand (e.g. using a “test” button) or automatically (e.g. following a set number of failed attempts). Unlike situational feedback, which is built into the narrative of the challenge, explicit feedback is external. It assumes an “all-knowing” agent or environment that can compare the student input to the desired state. The use of on-demand explicit feedback offers a direct measure of learners’ metacognitive strategies, such as how they monitor their progress and which sub-goals they pursue (Winstone et al., 2016[24]). As with situational feedback, students who adjust their cognitive strategies effectively following explicit feedback demonstrate productive metacognition (Kinnebrew, Segedy and Biswas, 2017[25]).
Information-seeking affordances support learners by providing additional communication about the task at hand. Informational resources include hints (Aleven et al., 2016[26]), instructional videos (Seo et al., 2021[27]), worked examples (Ganaiem and Roll, 2022[28]; Glogger-Frey et al., 2015[29]), searchable databases, etc. Information sources can be fixed, as in most tutorials, or adaptive, as in hints about the specific problem step (see VanLehn et al. (2007[30]) for example). When using information sources, learners make choices regarding when to use them (e.g. when to ask for hints), how to use them (e.g. navigating videos) and how to apply the information to the challenge at hand. Effective and strategic learners seek just-in-time information to fill their own knowledge gaps (Seo et al., 2021[27]; Wood, 2001[31]). Thus, interactions with information resources can provide meaningful insights into learners’ help-seeking and monitoring processes (Roll et al., 2014[20]).
Table 9.1 summarises these three families of learning affordances and their associated aspects of SRL.
| Affordance | Description | Opportunities for SRL assessment | Examples of resources |
|---|---|---|---|
| Experimentation | Enabling students to express and evaluate different ideas. | Evaluating the enactment of sub-goals and use of affordances to adjust strategy use. | Coding environments; executable concept maps; interactive simulations |
| Explicit feedback | Allowing learners to evaluate their progress towards their goal. | Evaluating learners’ use of feedback to reduce uncertainty and monitor progress. | “Test” button to check solution correctness; automatic feedback in the form of error flagging or explanations |
| Information-seeking | Allowing learners to access and curate new information on demand. | Evaluating learners’ effectiveness in identifying knowledge gaps and choice to seek information to support problem solving. | Tutorials; searchable information sources; on-demand hints; worked examples/contrasting cases |
As described in the Introduction and Chapter 5 of this report, Evidence Centred Design (ECD) provides a principled framework for designing digital assessments of complex constructs. It can therefore support the design of task features and affordances that elicit relevant evidence about SRL-related target competencies. At the heart of the ECD framework for SRL are rules that associate observed test behaviours (evidence) with the target SRL competencies (inferences). ECD breaks this process into two types of rules: 1) evidence rules, which quantify observations about learners’ outputs (or SRL behaviours, in our case); and 2) a statistical model, which specifies the relationship between these observations and estimates of learner competencies (Mislevy, 2013[32]). Here we use the term “rules” to describe the combination of these processes into a single evidence model that links observable behaviours with inferred SRL competencies. We focus on the PISA 2025 Learning in the Digital World (LDW) assessment to demonstrate the use of such rules to design tasks for assessment of SRL.
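The sketch below illustrates this two-part logic with entirely hypothetical event names, observables and weights: an evidence rule turns raw log events into a scored observable, and a deliberately simplified statistical model aggregates observables into a competency estimate. It illustrates the ECD logic described above and is not the operational LDW scoring model.

```python
# Illustrative sketch only: event names, the observable and the weights are
# hypothetical and do not reflect the operational LDW evidence model.
from dataclasses import dataclass

@dataclass
class LogEvent:
    timestamp: float   # seconds since task start
    action: str        # e.g. "error", "request_hint", "submit_model"

def hint_after_error_rate(log: list[LogEvent]) -> float:
    """Evidence rule: share of hint requests that occur after at least one
    error (a crude observable for monitoring-driven help-seeking)."""
    hint_indices = [i for i, e in enumerate(log) if e.action == "request_hint"]
    if not hint_indices:
        return 0.0
    after_error = sum(
        1 for i in hint_indices
        if any(e.action == "error" for e in log[:i])
    )
    return after_error / len(hint_indices)

def competency_estimate(observables: dict[str, float],
                        weights: dict[str, float]) -> float:
    """'Statistical model': a weighted combination of scored observables,
    standing in for a proper latent-variable model."""
    return sum(weights[name] * value for name, value in observables.items())
```

In an operational assessment, the aggregation step would typically be a latent-variable model (e.g. an item response or Bayesian network model) rather than a fixed weighted sum.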
Chapter 6 of this report elaborated on the assessment design processes of domain analysis and domain modelling in the context of the LDW assessment, as first steps in a principled ECD process. As a reminder, PISA 2025 defines the LDW construct as “the capacity to engage in an iterative process of knowledge building and problem solving using computational tools. This capacity is demonstrated by effective self-regulated learning while applying computational and scientific inquiry practices” (OECD, forthcoming[12]). Here, we focus on the SRL-related components of the task and evidence models for the assessment to exemplify how feedback and resources can provide inferences about test takers’ SRL competencies.
In one prototype unit for this assessment named “I Like That!”, described in the draft framework document (OECD, forthcoming[12]), learners are asked to create a recommender system that evaluates movie properties and predicts their popularity with a certain user called Alex. While the unit is concerned with movie preferences, it is, in fact, a scientific inquiry task. Learners investigate the relationship between predictive variables (such as price, length, release date and reviews) and the outcome variable (Alex’s movie preferences).
The “I Like That!” prototype unit includes several resources that support the evaluation of learners’ SRL. One key resource is an interactive data inquiry tool. Using this tool, aptly named “YouCompare”, learners can compare features of different movies to identify their underlying relationship with Alex’s viewing preferences (see Figure 9.1). Each movie is represented using a card (see [1] in Figure 9.1). Learners can then choose different cards and compare their attributes and their rankings on the testbed (see [2]). Students can also ask to see additional cards (see [3]). This process is analogous to choosing specific experimental set-ups to compare. Thus, YouCompare affords experimentation. The movies were designed so that learners can study each property (such as price or length) in isolation as well as interactions between properties. Productive learners are expected to use various cognitive strategies such as the control of variables strategy (CVS) to evaluate these effects. For example, learners can compare cards with different movie lengths and the same characteristics for all the other variables to identify the relationship between movie length and Alex’s preferences. Furthermore, learners can also create test cases to compare with the provided cards, specifying the values of the variables for that card by adjusting the sliders (see [4]).
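As a concrete illustration of how such comparisons could be scored, the sketch below (with made-up property names that do not necessarily match the actual unit) flags a logged card comparison as consistent with the control of variables strategy when exactly one property differs between the two cards.

```python
# Hypothetical sketch of scoring a logged card comparison for CVS:
# a comparison is "controlled" if exactly one property differs.

def differing_properties(card_a: dict, card_b: dict) -> list[str]:
    return [k for k in card_a if card_a[k] != card_b[k]]

def is_controlled_comparison(card_a: dict, card_b: dict) -> bool:
    return len(differing_properties(card_a, card_b)) == 1

# Example: only "length" differs, so the comparison counts as CVS evidence.
card_1 = {"price": 4.99, "length": 90,  "release_year": 2020, "review": 4}
card_2 = {"price": 4.99, "length": 150, "release_year": 2020, "review": 4}
assert is_controlled_comparison(card_1, card_2)
```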
A second interactive resource of the task is “YouModel”. This resource allows learners to create models for their recommender system. Learners first identify the relevant properties (see [5]) and link them to their model (see [6]). Learners then specify the relationship between the variables by choosing graphs that correspond to the desired relationship on a concept map (see [7]). When pressing the “Check Model” button, the system provides feedback on the model by highlighting elements that are incorrect (see [8]). Thus, the YouModel tool affords explicit feedback.
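The following sketch illustrates, with a hypothetical model representation (a mapping from movie property to a selected relationship shape), the kind of check a “Check Model” button could run: it returns the elements to highlight as incorrect without revealing the target model itself.

```python
# Hypothetical representation of the target model and of a student model:
# each entry maps a movie property to the chosen relationship shape.
TARGET_MODEL = {"price": "decreasing", "length": "flat", "review": "increasing"}

def check_model(student_model: dict[str, str]) -> list[str]:
    """Return the model elements to highlight as incorrect."""
    wrong = []
    for prop, target_shape in TARGET_MODEL.items():
        if student_model.get(prop) != target_shape:
            wrong.append(prop)
    # Properties the student added that are not part of the target model.
    wrong += [p for p in student_model if p not in TARGET_MODEL]
    return wrong

print(check_model({"price": "decreasing", "length": "increasing"}))
# ['length', 'review']
```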
The LDW assessment also includes information-seeking resources. One way in which information is made available is via worked examples. Learners are given the option to study challenges together with their solutions. Each challenge-solution set focuses on a different aspect of the “I Like That!” task. These worked examples allow learners to study solution strategies that are applicable to the main task. From an assessment perspective, these worked examples can reveal how learners identify their own knowledge gaps and act strategically to overcome them.
To make inferences regarding learners’ SRL, the resources were designed in tandem with evidence rules that support the interpretation of various behavioural patterns. Table 9.2 demonstrates how different behaviours produce evidence that supports inferences about students’ SRL competencies, as defined by the LDW framework document (OECD, forthcoming[12]). While this list showcases a variety of inferences that can be made from students’ resource usage, for various reasons, not all rules are implemented in the specific “I Like That!” task.
The rules in Table 9.2 apply different types of indicators that serve as evidence for different aspects of SRL. Indicators can be derived from the choice to engage with a certain resource (e.g. viewing a worked example as strategic help-seeking). Indicators can also be derived from the way the resource is used (e.g. how learners navigate worked examples can provide evidence for their awareness of knowledge gaps). Finally, indicators can be derived from the actions that follow the use of the resource (e.g. applying the provided advice or information). One approach for interpreting learners’ actions in context is coherence analysis, which looks for logical sequences of actions taken by the student (Kinnebrew, Segedy and Biswas, 2017[25]). Coherence analysis assumes that learners’ actions generate information that can then be used in subsequent actions; when learners act on this information, their actions are coherent. It is important to use patterns of actions, rather than single actions, as evidence (see also Chapter 8 of this report). For example, a pause, by itself, cannot be evidence of reflection, whereas a coherent sequence of pauses followed by modelling of the tested relationship is a positive sign of reflection that can be interpreted as evidence. It is important to emphasise that interpretation of such patterns should be validated using data, as explained below.
| SRL inferences | Evidence |
|---|---|
| Evaluating and identifying one’s knowledge gaps | |
| Planning appropriate sub-goals | |
| Monitoring progress towards one’s goals and adapting strategies | |
| Managing one’s motivation and affect, so as to persist in the face of difficulty | |
| Reflecting on one’s performance | |
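As an illustration of the coherence analysis described before Table 9.2, the sketch below makes the simplifying (and hypothetical) assumption that comparing cards on a property generates information about it and that later adding that property to the model uses this information; the coherence score is then the share of model edits that were preceded by a relevant comparison.

```python
# Hypothetical log format: (action, property) pairs in temporal order.

def coherence_score(log: list[tuple[str, str]]) -> float:
    """Share of model edits that act on information generated earlier."""
    informed = set()        # properties the learner has generated evidence about
    edits, coherent = 0, 0
    for action, prop in log:
        if action == "compare_cards":
            informed.add(prop)
        elif action == "add_to_model":
            edits += 1
            if prop in informed:
                coherent += 1
    return coherent / edits if edits else 0.0

log = [("compare_cards", "length"), ("add_to_model", "length"),
       ("add_to_model", "price")]
print(coherence_score(log))   # 0.5: one of two model edits was informed
```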
There are several considerations to keep in mind in the design of an assessment of SRL.
An essential element of assessments that target SRL is providing students with agency through choice (Bransford and Schwartz, 1999[1]; Cutumisu et al., 2015[33]). To provide meaningful agency, learners should be given a large interaction space where their choices have visible and meaningful consequences that affect the task situation. We contrast this with more constrained assessments that follow a pre-determined sequence of desired actions. Indeed, in the “I Like That!” prototype unit described earlier there are many ways for students to engage with the tasks, for example by isolating different variables, testing different movies, plotting relationships, etc.
The large design space for learners to explore does not mean that the goal state is under-defined. In fact, to support assessment, the goal state should be clearly defined and distance from the goal state should be quantifiable. For example, in the “I Like That!” prototype unit the correct solution is the underlying model that determines the movie ratings, and overall performance can be measured by looking at deviations from this model.
Table 9.2 describes rules for interpreting evidence of productive SRL behaviours. The converse is also true – rules can interpret evidence of ineffective regulation. Assessments of SRL can thus have a diagnostic value and capture common non-productive behaviours. For example, competent learners who ask for help they do not need might signal an intention to game the system.
Designing evidence rules also depends to a large degree on our model of the domain, commonly referred to as the competency model. To be able to define SRL rules, this model must be very specific (Roll et al., 2007[34]). Saying that students who struggle need help is probably true, but it is insufficient as an inference rule. Rules that include specific behaviours – such as learners who do not progress within five minutes should look at examples, as done in the “I Like That!” unit – are much more informative.
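A behaviour-level rule of this kind could be operationalised roughly as in the sketch below, where the event names and the five-minute threshold are illustrative rather than taken from the actual unit.

```python
STUCK_THRESHOLD = 5 * 60   # seconds without a progress event (illustrative)

def consulted_example_when_stuck(log: list[tuple[float, str]]) -> bool | None:
    """log holds (timestamp, action) pairs in temporal order.
    Returns True/False once the learner gets stuck, or None if never stuck."""
    last_progress = 0.0
    for t, action in log:
        if action == "progress":
            last_progress = t
        elif t - last_progress >= STUCK_THRESHOLD:
            # The learner is stuck: did they open a worked example from here on?
            return any(a == "open_example" for ts, a in log if ts >= t)
    return None
```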
A good approach for identifying relevant evidence rules (and their level of specificity) is to combine a top-down (i.e. theory first) process with a bottom-up (i.e. data first) process of knowledge discovery. Ritter and colleagues (2019[35]) describe several methods for identifying sequences of actions that yield productive learning. In the context of the “I Like That!” unit, such sequence mining can help us identify productive or efficient approaches to solutions as well as ones that were not anticipated, such as which type of information resource is useful and when. However, one should avoid the tyranny of data, in which correlational evidence is taken as sufficient. Each identified rule should have a strong theoretical justification that provides a mechanistic explanation for how a specific SRL construct manifests itself in the observed behaviour.
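The bottom-up side of this process could start from something as simple as the sketch below (with a hypothetical log format): action bigrams are counted separately for higher- and lower-performing learners and ranked by how over-represented they are among the former. Any candidate sequence surfaced this way would still need the theoretical justification described above before becoming an evidence rule.

```python
from collections import Counter

def bigram_counts(sessions: list[list[str]]) -> Counter:
    """Count consecutive action pairs across all sessions."""
    counts: Counter = Counter()
    for actions in sessions:
        counts.update(zip(actions, actions[1:]))
    return counts

def overrepresented_bigrams(high: list[list[str]],
                            low: list[list[str]]) -> list[tuple[str, str]]:
    """Rank bigrams by how much more frequent they are (relatively) among
    higher-performing learners than among lower-performing ones."""
    high_counts, low_counts = bigram_counts(high), bigram_counts(low)
    n_high = sum(high_counts.values()) or 1
    n_low = sum(low_counts.values()) or 1
    return sorted(high_counts,
                  key=lambda b: high_counts[b] / n_high - low_counts[b] / n_low,
                  reverse=True)
```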
A major challenge for any complex assessment – and assessments of SRL are no different – is that of generalisability and validity (see Chapter 1 of this report for more on these challenges in assessment, more generally). While some rules may be too broad (and thus provide inconclusive evidence), others may focus on overly specific solution approaches that do not apply to other tasks or environments. For example, is clicking a button marked “?” in a specific test environment indicative of the quality of one’s help-seeking in other contexts?
The challenge of generalisability goes back to how the competency model is defined (see Chapter 6 of this report for more on how the domain and competency model for the LDW assessment were defined). The competency model aims to define aspects of SRL, often considered a rather domain-general set of competencies. However, SRL behaviours are only meaningful in context and can only be interpreted within the specific context in which they occur (as argued in Chapter 2 of this report). Simon (1969[36]) describes the journey of an ant on a sandy beach and how interpreting the ant’s behaviour should consider the terrain of the beach. This creates a major challenge for learning about ant behaviour, and similarly, for learning about students’ SRL competencies. What are reasonable boundaries of generalisation from the instantiated behaviours in the assessment? How can tasks be designed to support such generalisation?
Several solution approaches can mitigate this challenge. The first is to triangulate inferences using several rules to infer about each construct. As seen in Table 9.2, SRL constructs can use evidence from multiple rules. Notably, when these rules are applied in the same task, they do not solve the dependency of the observations on the task topic and scenario. Another solution is to design tasks that use parallel evidence rules. Constructs that are assessed in “I Like That!” are also assessed, for each student, in at least one additional task with a different topic and set of tools. For instance, instead of a concept map for experimentation, alternative tasks use a block-based coding tool. This is similar to observing ants on multiple beaches. This approach also helps to distil key features of the task model (that are similar across tasks) from more superficial ones (that can vary across tasks).
An important risk when designing evidence rules is construct-irrelevant variance. For instance, productive use of verbose hints may be indicative of reading comprehension more than of help-seeking, and unproductive use of these resources may indicate a lack of reading comprehension rather than poor help-seeking. In fact, construct‑irrelevant variance may lead to results that are the opposite of those intended. For example, when hints are too verbose or unhelpful, students who regulate their learning well typically avoid help (Roll et al., 2014[20]). A related challenge to validity is cultural relevance, as rules may unintentionally introduce biases due to different ways of approaching challenges across cultural groups (see also Chapter 11 of this report).
As mentioned above, one approach to examining the generalisability and validity of rules relies on the interplay between theory and data. It is essential to collect data early on to validate the rules. Think-aloud protocols and cognitive labs are effective in that regard. A good evidence model has both predictive power (who learns well?) and explanatory power (what makes them learn well?).
Numerous attributes mediate the relationship between SRL and the use of feedback and resources. Identifying these attributes and assessing performance as a function of those variables is a significant challenge for interpreting student behaviours. One key dependency is that the strategic use of resources depends on students’ domain knowledge, as experts and novices manage their learning differently (Kalyuga and Singh, 2015[37]; Zohar and Barzilai, 2013[38]).
SRL assessments should consider two aspects of prior knowledge. One aspect is domain-level knowledge – for instance, knowing and understanding a domain inevitably affects one’s need for help. The second aspect is familiarity with relevant resources – for example, students who are used to concept maps or learning from examples may have an advantage when encountering these resources. The coupling between SRL behaviours and prior knowledge of the topic or tools is inherent to any task, as students regulate their learning to overcome knowledge gaps. Thus, knowledge gaps are a key motivation for choosing which SRL strategies to enact.
One approach to mitigate the effect of prior knowledge is to assess it and adapt rules accordingly. This means that rules become conditional on prior knowledge – for example, learners are expected to ask for help only when they lack the relevant knowledge and to try to correct errors on their own when they are sufficiently knowledgeable. In some cases, assessing prior knowledge is rather straightforward: when tasks build on well-defined domains such as coding or mathematics, traditional items can be used to assess domain knowledge. However, as most SRL assessments require complex tasks, assessing prior knowledge is non-trivial. For example, in the “I Like That!” prototype unit, assessing prior knowledge of modelling practices is challenging. One solution is deconstructing and testing each modelling practice separately, without resources. A similar approach can be used to evaluate prior knowledge of tools. For example, learners can be asked to perform specific tasks with the tools to assess technical fluency, although such an approach is inefficient and inauthentic.
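A rule conditioned on prior knowledge could look roughly like the sketch below, where the thresholds and labels are hypothetical: the same help request is interpreted as adaptive help-seeking for a low-knowledge learner but as possible help abuse for a learner estimated to be sufficiently knowledgeable.

```python
# Hypothetical conditional rule: interpretation of a help request depends on
# the learner's assessed prior knowledge and on how many attempts preceded it.

def interpret_help_request(prior_knowledge: float, n_attempts: int) -> str:
    """prior_knowledge in [0, 1]; n_attempts = attempts made before asking for help."""
    if prior_knowledge < 0.4:
        return "adaptive help-seeking" if n_attempts >= 1 else "premature help request"
    # Knowledgeable learners are expected to try to correct errors on their own first.
    return "adaptive help-seeking" if n_attempts >= 3 else "possible help abuse"
```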
Another approach is to minimise the relevance of learners’ prior knowledge in the task. One good example of this approach is the “I Like That!” prototype. This task is a typical scientific inquiry scenario: students are given data and are asked to identify the relationship between different variables by applying inquiry strategies (Pedaste et al., 2015[39]; Roll et al., 2018[21]). However, implementing the task in a scientific context would have created construct-irrelevant variance, namely prior exposure to the relevant scientific topics. Instead, the task uses a made-up scenario. This scenario levels the playing field in numerous ways. Intuitively, most students understand what a recommender system for movies (such as Netflix) does. Practically, though, very few have experimented with building one. In addition, learners do not have prior knowledge of the model of the specific made-up user.
A third approach seeks to minimise variability in prior knowledge. By using tutorials, examples and walkthrough problems, students can be given basic knowledge with which they can approach the task. This is especially useful regarding knowledge of the task-specific tools, such as learning to use the concept map tool in the “I Like That!” example.
This chapter focused on assessing metacognitive and cognitive strategies using feedback and resources. Such analysis overlooks the very significant role of motivational regulation in SRL. Prior work has demonstrated the potential of assessing affective states in digital environments (Calvo and D’Mello, 2010[40]; Woolf et al., 2009[41]). The provision of resources may further aid this assessment. For example, students’ use of hints can be used to identify learners who attempt to make progress without putting in the required effort (Baker et al., 2013[42]). However, the interaction between affective states and metacognitive strategies should be further studied, especially regarding resource use (Shum and Crick, 2012[43]). Using resources requires deliberation and sustained effort (Tishman, Jay and Perkins, 1993[44]). Students need to be aware of their motivational states, their ability to control them and their impact on their learning processes (Wolters, 2003[45]). This in turn affects learners’ engagement (or disengagement) (Miele and Scholer, 2017[46]; O’Brien et al., 2022[47]). Similarly, additional data sources such as self-reports may be warranted. However, such analysis is beyond the scope of the current chapter.
The assessment of SRL requires open and interactive task situations in which learners have agency through choice. The availability of interactive learning resources provides ample opportunities to assess the way learners make these choices. Capitalising on students’ choices to assess SRL is done using an inference model, triangulated across tasks, validated using data and contingent on prior knowledge. Such assessments are exciting in that they measure students’ capacity to learn, above and beyond their static knowledge state.
[26] Aleven, V. et al. (2016), “Help helps, but only so much: Research on help seeking with Intelligent Tutoring Systems”, International Journal of Artificial Intelligence in Education, Vol. 26/1, pp. 205-223, https://doi.org/10.1007/s40593-015-0089-1.
[42] Baker, R. et al. (2013), “Modeling and studying gaming the system with educational data mining”, in International Handbook of Metacognition and Learning Technologies, Springer International Handbooks of Education, Springer, New York, https://doi.org/10.1007/978-1-4419-5546-3_7.
[1] Bransford, J. and D. Schwartz (1999), “Rethinking transfer: A simple proposal with multiple implications”, Review of Research in Education, Vol. 24/1, pp. 61-100, https://doi.org/10.3102/0091732X024001061.
[40] Calvo, R. and S. D’Mello (2010), “Affect detection: An interdisciplinary review of models, methods, and their applications”, IEEE Transactions on Affective Computing, Vol. 1/1, pp. 18-37, https://doi.org/10.1109/t-affc.2010.1.
[33] Cutumisu, M. et al. (2015), “Posterlet: A game-based assessment of children’s choices to seek feedback and to revise”, Journal of Learning Analytics, Vol. 2/1, https://doi.org/10.18608/jla.2015.21.4.
[3] Cutumisu, M., D. Chin and D. Schwartz (2019), “A digital game‐based assessment of middle‐school and college students’ choices to seek critical feedback and to revise”, British Journal of Educational Technology, Vol. 50/6, pp. 2977-3003, https://doi.org/10.1111/bjet.12796.
[22] de Jong, T. et al. (2018), “Simulations, games, and modeling tools for learning”, in Fischer, F. et al. (eds.), International Handbook of the Learning Sciences, Routledge, New York, https://doi.org/10.4324/9781315617572-25.
[4] Dede, C. (2010), “Comparing frameworks for 21st century skills”, in Bellanca, J. and R. Brandt (eds.), 21st Century Skills, Solution Tree Press, Bloomington.
[23] Deeva, G. et al. (2021), “A review of automated feedback systems for learners: Classification framework, challenges and opportunities”, Computers & Education, Vol. 162, pp. 1-43, https://doi.org/10.1016/j.compedu.2020.104094.
[15] Drew, C. (2019), “Re-examining cognitive tools: New developments, new perspectives, and new opportunities for educational technology research”, Australasian Journal of Educational Technology, Vol. 35/2, https://doi.org/10.14742/ajet.5389.
[9] Flavell, J. (1979), “Metacognition and cognitive monitoring: A new area of cognitive–developmental inquiry”, American Psychologist, Vol. 34/10, p. 906.
[10] Fredricks, J., P. Blumenfeld and A. Paris (2004), “School engagement: Potential of the concept, state of the evidence”, Review of Educational Research, Vol. 74/1, pp. 59-109, https://doi.org/10.3102/00346543074001059.
[28] Ganaiem, E. and I. Roll (2022), “The effect of different sequences of examples and problems on learning experimental design”, Proceedings of the International Conference of the Learning Sciences, pp. 727-732.
[29] Glogger-Frey, I. et al. (2015), “Inventing a solution and studying a worked solution prepare differently for learning from direct instruction”, Learning and Instruction, Vol. 39, pp. 72-87, https://doi.org/10.1016/j.learninstruc.2015.05.001.
[11] Järvenoja, H. et al. (2018), “Capturing motivation and emotion regulation during a learning process”, Frontline Learning Research, Vol. 6/3, pp. 85-104, https://doi.org/10.14786/flr.v6i3.369.
[16] Jonassen, D. (1992), “What are Cognitive Tools?”, in Cognitive Tools for Learning, Springer, Berlin/Heidelberg, https://doi.org/10.1007/978-3-642-77222-1_1.
[37] Kalyuga, S. and A. Singh (2015), “Rethinking the boundaries of cognitive load theory in complex learning”, Educational Psychology Review, Vol. 28/4, pp. 831-852, https://doi.org/10.1007/s10648-015-9352-0.
[25] Kinnebrew, J., J. Segedy and G. Biswas (2017), “Integrating model-driven and data-driven techniques for analyzing learning behaviors in open-ended learning environments”, IEEE Transactions on Learning Technologies, Vol. 10/2, pp. 140-153, https://doi.org/10.1109/tlt.2015.2513387.
[5] Kirschner, P. and S. Stoyanov (2018), “Educating youth for nonexistent/not yet existing professions”, Educational Policy, Vol. 34/3, pp. 477-517, https://doi.org/10.1177/0895904818802086.
[46] Miele, D. and A. Scholer (2017), “The role of metamotivational monitoring in motivation regulation”, Educational Psychologist, Vol. 53/1, pp. 1-21, https://doi.org/10.1080/00461520.2017.1371601.
[32] Mislevy, R. (2013), “Evidence-centered design for simulation-based assessment”, Military Medicine, Vol. 178/10S, pp. 107-114, https://doi.org/10.7205/milmed-d-13-00213.
[19] Nathan, M. (1998), “Knowledge and situational feedback in a learning environment for algebra story problem solving”, Interactive Learning Environments, Vol. 5/1, pp. 135-159, https://doi.org/10.1080/1049482980050110.
[2] National Research Council (2001), Knowing What Students Know, National Academies Press, Washington, D.C., https://doi.org/10.17226/10019.
[17] Nesbit, J., H. Niu and Q. Liu (2018), “Cognitive tools for scaffolding argumentation”, in Adesope, O. and A. Rud (eds.), Contemporary Technologies in Education: Maximising Student Engagement, Motivation and Learning, Springer, Cham, https://doi.org/10.1007/978-3-319-89680-9_6.
[47] O’Brien, H. et al. (2022), “Rethinking (dis)engagement in human-computer interaction”, Computers in Human Behavior, Vol. 128, pp. 107-109, https://doi.org/10.1016/j.chb.2021.107109.
[6] OECD (2018), The Future We Want, https://www.oecd.org/education/2030-project/contact/E2030%20Position%20Paper%20(05.04.2018).pdf.
[12] OECD (forthcoming), PISA 2025 Learning in the Digital World assessment framework (draft), OECD Publishing, Paris.
[8] Panadero, E. (2017), “A Review of self-regulated learning: Six models and four directions for research”, Frontiers in Psychology, Vol. 8, https://doi.org/10.3389/fpsyg.2017.00422.
[39] Pedaste, M. et al. (2015), “Phases of inquiry-based learning: Definitions and the inquiry cycle”, Educational Research Review, Vol. 14, pp. 47-61, https://doi.org/10.1016/j.edurev.2015.02.003.
[35] Ritter, S. et al. (2019), “Identifying strategies in student problem solving”, in Sinatra, A. et al. (eds.), Design Recommendations for Intelligent Tutoring Systems, US Army Research Laboratory, Orlando.
[34] Roll, I. et al. (2007), “Designing for metacognition—applying cognitive tutor principles to the tutoring of help seeking”, Metacognition and Learning, Vol. 2/2-3, pp. 125-140, https://doi.org/10.1007/s11409-007-9010-0.
[21] Roll, I. et al. (2018), “Understanding the impact of guiding inquiry: the relationship between directive support, student attributes, and transfer of knowledge, attitudes, and behaviours in inquiry learning”, Instructional Science, Vol. 46/1, pp. 77-104, https://doi.org/10.1007/s11251-017-9437-x.
[20] Roll, I. et al. (2014), “Tutoring self- and co-regulation with Intelligent Tutoring Systems to help students acquire better learning skills”, in Sottilare, R. et al. (eds.), Design Recommendations for Intelligent Tutoring Systems, US Army Research Laboratory, Orlando.
[13] Roll, I. and P. Winne (2015), “Understanding, evaluating, and supporting self-regulated learning using learning analytics”, Journal of Learning Analytics, Vol. 2/1, pp. 7-12, https://doi.org/10.18608/jla.2015.21.2.
[7] Schunk, D. and B. Zimmerman (2013), “Self-regulation and learning”, in Reynolds, W. and G. Miller (eds.), Handbook of Psychology, John Wiley & Sons, Hoboken.
[27] Seo, K. et al. (2021), “Active learning with online video: The impact of learning context on engagement”, Computers & Education, Vol. 165, p. 104132, https://doi.org/10.1016/j.compedu.2021.104132.
[43] Shum, S. and R. Crick (2012), “Learning dispositions and transferable competencies: Pedagogy, modelling and learning analytics”, 2nd International Conference on Learning Analytics & Knowledge, http://oro.open.ac.uk/32823/1/SBS-RDC-LAK12-ORO.pdf.
[14] Shute, V. and S. Rahimi (2021), “Stealth assessment of creativity in a physics video game”, Computers in Human Behavior, Vol. 116, pp. 1-13, https://doi.org/10.1016/j.chb.2020.106647.
[36] Simon, H. (1969), The Sciences of the Artificial, The MIT Press.
[44] Tishman, S., E. Jay and D. Perkins (1993), “Teaching thinking dispositions: From transmission to enculturation”, Theory Into Practice, Vol. 32/3, pp. 147-153, https://doi.org/10.1080/00405849309543590.
[30] VanLehn, K. et al. (2007), “What’s in a step? Toward general, abstract representations of tutoring system log data”, in Conati, C., K. McCoy and G. Paliouras (eds.), User Modelling 2007. Lecture Notes in Computer Science, Springer, Berlin/Heidelberg.
[18] Wieman, C., W. Adams and K. Perkins (2008), “PhET: Simulations that enhance learning”, Science, Vol. 322/5902, pp. 682-683, https://doi.org/10.1126/science.1161948.
[24] Winstone, N. et al. (2016), “Supporting learners’ agentic engagement with feedback: A systematic review and a taxonomy of recipience processes”, Educational Psychologist, Vol. 52/1, pp. 17-37, https://doi.org/10.1080/00461520.2016.1207538.
[45] Wolters, C. (2003), “Regulation of motivation: Evaluating an underemphasized aspect of self-regulated learning”, Educational Psychologist, Vol. 38/4, pp. 189-205, https://doi.org/10.1207/s15326985ep3804_1.
[31] Wood, D. (2001), “Scaffolding, contingent tutoring and computer-supported learning”, International Journal of Artificial Intelligence in Education, Vol. 12, pp. 280-292.
[41] Woolf, B. et al. (2009), “Recognizing and responding to student affect”, in Jacko, J. (ed.), Human-Computer Interaction. Ambient, Ubiquitous and Intelligent Interaction, Lecture Notes in Computer Science, Springer, Berlin/Heidelberg, https://doi.org/10.1007/978-3-642-02580-8_78.
[38] Zohar, A. and S. Barzilai (2013), “A review of research on metacognition in science education: Current and future directions”, Studies in Science Education, Vol. 49/2, pp. 121-169, https://doi.org/10.1080/03057267.2013.847261.