Experimental Dataset on Eye-tracking Activity During Self-Regulated Learning


Background & Summary

The cognitive processing of multimedia materials that combine words and images, such as interactive simulations, videos, or digital textbooks with accompanying audio recordings, has long been a subject of scholarly inquiry. The cognitive theory of multimedia learning rests on the hypothesis that designing materials to align with the functional principles of the human mind can substantially enhance learning efficiency1. This theory stems from traditional psychological accounts of memory processes, including Paivio’s dual coding theory2, Baddeley’s working memory model, which distinguishes between channels for processing auditory and visual information3, and Sweller’s cognitive load theory4, which posits limited capacity within Baddeley’s channels. Additionally, the cognitive theory of multimedia learning portrays people as active agents who select, organize, and integrate information to construct coherent mental representations.

The ‘self-regulation’ component of learning has garnered substantial attention, resulting in definitions that are fragmented, albeit convergent in some respects. Most previous work portrays Self-Regulated Learning (SRL) as involving cognitive, metacognitive, affective, and motivational processes that enable comprehensive learning, deepen the adaptability of learning strategies, and support the setting of self-motivational goals5,6. Students who engage in SRL tend to be adept at managing their education and excelling in academic tasks, ultimately leading to scholastic success7,8. However, the scientific support for a solid relationship between self-regulatory processes and learning outcomes remains insufficient and inconclusive9. Possible explanations involve both the inability to employ appropriate regulatory strategies when studying and issues with the study materials themselves (e.g., cognitive overload arising from their multimedia content). A potential remedy comes in the form of guided studying and metacognitive prompts, which have been demonstrated to improve student learning outcomes10,11,12,13,14,15. Prompts do not provide additional information but stimulate the cognitive, metacognitive, and motivational aspects of SRL16. They can ask the learner to reflect on study materials, assess the knowledge gained using self-assessment checklists, or summarize what they have learned in their own words. Despite the clear benefits reported by numerous studies, even this approach faces a replication crisis (e.g.,17,18,19), underscoring the importance of further investigation.

A promising avenue for rigorously deciphering the mental processes behind these conflicting patterns of results is eye-tracking (ET) technology20. ET data capture various behavioral events, such as detailed insights into shifts in visual attention, known as ‘transitions,’ characterized by rapid eye movements, or saccades, into new locations of the visual field. Information about eye movements is highly advantageous when interacting with multimedia study materials that require constant shifts of visual attention, possibly influenced by metacognitive prompts.
An illustrative example of the utility of ET technology in this field is the study by Catrysse and colleagues21, which found that eye movements during reading tasks could differentiate between more and less strategic learners.

Contributing to the current body of knowledge, the present dataset offers a comprehensive collection of behavioral data on study performance influenced by metacognitive prompts, supplemented by ET recordings that capture participants’ gaze throughout the entire study session. Participants were randomly assigned to two groups, both studying text-only and multimedia materials on a scientific topic but differing in whether they received metacognitive prompts between study blocks. The dataset challenges previous findings on the usefulness of metacognitive prompts for learning outcomes.

Methods

The study employed a controlled laboratory experiment with a 2 × 2 within-between-subjects factorial design to evaluate participant performance with specific learning materials while recording eye-tracking data. Participants were randomly assigned to two equally sized groups in the between-subjects condition, with self-regulating metacognitive prompts in the study materials manipulated as the independent variable (first factor, two levels). The experimental group received metacognitive prompts at three stages: before the first learning material (orientation and planning), midway (monitoring and regulation), and after the final material (evaluation and reflection). These prompts encouraged recalling prior knowledge, adjusting study strategies, and summarizing key concepts. The initial prompt, provided before the learning began, emphasized orientation and planning: “Try to remember what you already know about the topic. Think about how you can make sure that you learn everything you need to know from the learning material.” It also included basic guidance on how participants should approach each material and which questions to consider while studying (e.g., “What points have I not yet understood? What concepts were not sufficiently explained?”). The mid-session prompt focused on monitoring and adjusting one’s learning strategy: “Summarize what you have learned so far. Would it be useful to change your current approach to studying the learning materials?” The final prompt, presented after the session, encouraged evaluation and reflection: “Repeat in your own words the most important things you learned from the materials presented. Could you explain the concepts and principles presented to someone else?” The control group received only general instructions without metacognitive prompts. All prompts and instructions were presented in a standardized PowerPoint-like format for the same duration across participants. More details on the experimental methodology can be found in a recent study by Juhaňák and colleagues22.

Simultaneously, all participants engaged with two types of study materials in the within-subjects condition: plain text and multimedia content (text accompanied by two relevant images or diagrams) as the second independent variable (second factor, two levels). Materials included four plain-text and four multimedia (text with images) resources, presented in a PowerPoint-like format on a desktop monitor. The study materials were sourced from the specialized optics domain, which was expected to be unfamiliar to the research sample of humanities and social sciences university students. Task sequencing was randomized to mitigate potential order effects.
Participants were randomly allocated to the two groups regarding the utilization of metacognitive prompts, reducing the likelihood of unintended transfer of prompt-based learning strategies to non-prompt tasks. Knowledge acquisition across the eight optics topics was assessed using a piloted didactic post-test, with each topic covered by three pre-tested questions of varying difficulty, requiring a comprehensive understanding of the content. The experiment was conducted using the Experiment Center 3.7.69 software provided by SMI (SensoMotoric Instruments). The experimental setup included an SMI RED 250 remote eye-tracking device with a sampling rate of 250 Hz. The stimuli were presented on a 22-inch LCD monitor with a resolution of 1600 × 900 pixels and a refresh rate of 60 Hz.

Participants

The participant sample consisted of 110 neurotypical university students majoring in social sciences or humanities (63.7% identified as female) with normal or corrected-to-normal vision, aged between 19 and 25 years (M = 20.7 years). Before the experiment, each participant was screened for potential medical limitations, including visual or learning difficulties. It was assumed that participants had limited or no prior knowledge of advanced optics. Before the experimental session, participants were fully informed about the study’s aims and assured that they could withdraw at any time without consequences. Participants were recruited via email invitations, posts on social and academic websites, and snowball sampling. To maintain sample homogeneity, only full-time bachelor’s students were invited. Additionally, only native Czech speakers were recruited, as the study materials were presented in Czech.

Ethics statement

The study followed the ethical standards outlined in the Declaration of Helsinki. Participants were thoroughly briefed on the experiment’s purpose and provided written informed consent before the experimental session. They were informed that they could withdraw from the study at any time without facing any consequences. The research project, which includes this study, received approval from the Research Ethics Committee of Masaryk University (project identification number: EKV-2020-037).

Procedure

The whole study procedure is depicted in Fig. 1. Students in their 2nd and 3rd year of bachelor’s studies at Masaryk University were recruited via university social media groups and university information system groups. Email invitations explaining the study’s purpose were sent to potential participants. Upon arrival at the laboratory, participants were welcomed, briefed on the procedure, and provided with an informed consent form to sign. They were explicitly informed that they could withdraw from the experiment at any time and request removal of their recorded data until data collection and anonymization were complete. Participants first completed a questionnaire addressing demographic information, potential visual impairments, and fatigue levels, followed by a short battery of items measuring selected dimensions of SRL (four dimensions in total, measured using four subscales of the Motivated Strategies for Learning Questionnaire (MSLQ) developed by Pintrich & De Groot23): intrinsic goal orientation, extrinsic goal orientation, metacognitive self-regulation, and critical thinking. Additionally, they completed a pre-test assessing their prior knowledge of optics. A minimal sketch for aggregating these subscales from the published questionnaire file follows.
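The sketch below shows one way the four MSLQ subscale scores could be aggregated from the questionnaire.csv file described under Data Records. The item-column naming pattern (e.g., goal_orient_int_01) and the use of simple item means are assumptions to be checked against the published files.

```python
# Minimal sketch: aggregate the four MSLQ subscales from questionnaire.csv.
# Assumes item columns follow the pattern <scale>_<NN> (e.g., goal_orient_int_01)
# and that a simple item mean is an acceptable subscale score.
import pandas as pd

q = pd.read_csv("questionnaire.csv")  # path within the unpacked figshare archive

scales = {
    "goal_orient_int": 4,    # intrinsic goal orientation, items 01-04
    "goal_orient_ext": 4,    # extrinsic goal orientation, items 01-04
    "self_regulation": 12,   # metacognitive self-regulation, items 01-12
    "critical_thinking": 5,  # critical thinking, items 01-05
}

for scale, n_items in scales.items():
    cols = [f"{scale}_{i:02d}" for i in range(1, n_items + 1)]
    q[f"{scale}_mean"] = q[cols].mean(axis=1)

print(q.filter(like="_mean").describe())
```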
Fig. 1: Schematic diagram of the experimental design.

After completing the questionnaire, participants were seated in front of a computer screen, their heads positioned in a chinrest to minimize movement, and the eye tracker was calibrated for accurate recording of eye movements. Instructions on the experimental tasks were provided orally and on the computer screen. The experiment was then initiated, with participants engaging in eight learning tasks involving reading texts and viewing images to acquire new knowledge. The time limit was set to 5 minutes for each task. Participants in the experimental group received materials with metacognitive prompts designed to stimulate self-regulated learning, while the control group received the same materials without prompts. Task sequencing was randomized to control for order effects, and participants were randomly assigned to conditions to prevent cross-task transfer of learning strategies. All materials were presented using SMI’s Experiment Center software in a PowerPoint-like format. See the example of a multimedia learning slide in Fig. 2.

Fig. 2: An example of a multimedia learning slide from the experiment.

The experimental session occurred in an isolated laboratory environment with consistent lighting and minimal ambient noise. Participants were seated in a height-adjustable chair, approximately 60–70 cm from the screen, and used a keyboard and mouse to navigate the experiment. During data collection, one research assistant was present, and all equipment was disinfected between sessions. Participants were tested individually, with brief breaks to allow for air circulation in the laboratory.

Following the learning session, participants completed a post-test assessing their acquired knowledge and provided judgments of learning regarding their confidence in the learned content. The knowledge post-test was created in cooperation with a psychologist and comprised 24 items derived from the learning content presented in the study materials. To ensure balanced representation, three test items were assigned to each of the eight learning materials (8 × 3 items). These items varied in difficulty, with each material including one easy, one medium, and one difficult question. Prior to the experiment, all items were piloted to confirm their suitability. Once all tasks were completed, participants were debriefed, thanked for their participation, and given a small gift (e.g., a pen or mug) before leaving. For ethical reasons, participants with inadequate calibration were allowed to repeat the experimental procedure.

Data Records

The data are openly available on figshare24. Alongside the final dataset described below, we also provide the original raw dataset for potential analysis.

Basic information about the participants of the experiment is available in the file participants.csv with the following structure (a loading sketch follows this list):

part_ID - participant ID
experiment_condition - main experimental condition (“Prompt” vs. “Non-prompt”)
gender - gender
age - age
study_degree - degree of the study program (“bachelor,” “master,” “both”)
study_form - form of the study program (“full-time,” “part-time,” “both”)
semester - semester studied
fatigue_level - reported level of fatigue on a scale of 1-10
learning_disabilities - whether the participant has been diagnosed with a learning disability and, if so, what kind (“no,” “dyslexia,” “dysorthography,” “ADHD,” “possibly”); the “possibly” option refers to cases where the participant has not been diagnosed but is convinced they have a learning disability
dominant_eye - dominant eye (“L” vs. “R”)
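As a starting point, a minimal sketch for loading participants.csv and checking the balance of the two experimental groups is shown below; the file path assumes the figshare archive has been unpacked into the working directory.

```python
# Minimal sketch: load participants.csv and inspect the sample composition.
import pandas as pd

participants = pd.read_csv("participants.csv")

# Group sizes for the between-subjects manipulation ("Prompt" vs. "Non-prompt")
print(participants["experiment_condition"].value_counts())

# Age and self-reported fatigue per condition
print(participants.groupby("experiment_condition")[["age", "fatigue_level"]]
      .agg(["mean", "std"]))
```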
The data obtained through the SRL questionnaire are available in the questionnaire.csv file with the following structure:

part_ID - participant ID
goal_orient_int (01 to 04) - intrinsic goal orientation from the MSLQ
goal_orient_ext (01 to 04) - extrinsic goal orientation from the MSLQ
self_regulation (01 to 12) - metacognitive self-regulation scale from the MSLQ
critical_thinking (01 to 05) - critical thinking from the MSLQ

Data regarding the individual stimuli used during the experiment are available in the stimuli.csv file and the related Stimuli_pic folder. The folder contains images of all the stimuli that participants were exposed to during the experiment, i.e., the initial instructions (“Intro.jpg”), the first, second, and third prompt/non-prompt stimuli (“Prompt_1.jpg” to “Prompt_3.jpg” and “Non-prompt_1.jpg” to “Non-prompt_3.jpg”), eight study materials (“Task_1.jpg” to “Task_8.jpg”), and the final instructions (“Outro.jpg”). The file has the following structure:

part_ID - participant ID
stimulus_name - stimulus name (“Task_1” to “Task_8”)
stimulus_type - stimulus type (“Text” vs. “Multimedia”)
stimulus_time - time spent on the stimulus (in seconds)
tracking_ratio - tracking ratio on the stimulus (%)
Transitions_* - several variables containing the number of transitions between individual AOIs on the stimulus

The data obtained in the knowledge post-test are available in the posttest.csv file with the following structure:

part_ID - participant ID
question_name - question name (“Question_01” to “Question_24”)
question_score - indication of a correct/incorrect answer (0 vs. 1)
question_time - time spent on the question (in seconds)
stimulus - stimulus name (“Task_1” to “Task_8”)

The ET_event_data folder contains event-based eye-tracking data from the experiment, with a separate file for each participant and each study material. The naming of each file is uniform and always contains the participant identifier (part_ID) and the name of the stimulus (stimulus_name, i.e., “Task_1” to “Task_8”). Each file in the folder contains the following eye-tracking columns (a parsing sketch follows this list):

Trial
Trial Start Raw Time [ms]
Trial Start Time of Day [h:m:s:ms]
Stimulus
Export Start Trial Time [ms]
Export End Trial Time [ms]
Participant
Tracking Ratio [%]
Eye L/R
Index
Event Start Trial Time [ms]
Event End Trial Time [ms]
Event Start Raw Time [ms]
Event End Raw Time [ms]
Event Duration [ms]
Fixation Position X [px]
Fixation Position Y [px]
Fixation Dispersion X [px]
Fixation Dispersion Y [px]
Saccade Start Position X [px]
Saccade Start Position Y [px]
Saccade End Position X [px]
Saccade End Position Y [px]
Saccade Amplitude [°]
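A minimal parsing sketch is shown below. It assumes comma-separated exports and a hypothetical file name; SMI BeGaze exports are often tab-delimited, so the separator may need adjusting. Fixation and saccade rows are distinguished by which position columns are populated.

```python
# Minimal sketch: summarise one event-based export from ET_event_data.
# "P01_Task_1.csv" is a hypothetical file name; use sep="\t" if the
# exports turn out to be tab-delimited.
import pandas as pd

events = pd.read_csv("ET_event_data/P01_Task_1.csv")

# Fixation rows carry fixation position columns; saccade rows carry a
# saccade amplitude, so each event type is selected by its non-null fields.
fixations = events[events["Fixation Position X [px]"].notna()]
saccades = events[events["Saccade Amplitude [°]"].notna()]

print("fixations:", len(fixations),
      "| mean duration [ms]:", round(fixations["Event Duration [ms]"].mean(), 1))
print("saccades:", len(saccades),
      "| mean amplitude [°]:", round(saccades["Saccade Amplitude [°]"].mean(), 2))
```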
The ET_data_raw folder contains raw eye-tracking data from the experiment, with a separate file for each participant. The naming of each file is uniform and always contains the participant identifier (part_ID). Each file in the folder contains the following eye-tracking columns:

RecordingTime [ms]
Trial
Stimulus
Export Start Trial Time [ms]
Export End Trial Time [ms]
Participant
Tracking Ratio [%]
Pupil Size Right X [px]
Pupil Size Right Y [px]
Pupil Diameter Right [mm]
Pupil Size Left X [px]
Pupil Size Left Y [px]
Pupil Diameter Left [mm]
Point of Regard Right X [px]
Point of Regard Right Y [px]
Point of Regard Left X [px]
Point of Regard Left Y [px]
AOI Name Right
AOI Group Right
AOI Scope Right
AOI Order Right
AOI Name Left
AOI Group Left
AOI Scope Left
AOI Order Binocular
Gaze Vector Right X
Gaze Vector Right Y
Gaze Vector Right Z
Gaze Vector Left X
Gaze Vector Left Y
Gaze Vector Left Z
Eye Position Right X [mm]
Eye Position Right Y [mm]
Eye Position Right Z [mm]
Eye Position Left X [mm]
Eye Position Left Y [mm]
Eye Position Left Z [mm]
Pupil Position Right X [px]
Pupil Position Right Y [px]
Pupil Position Left X [px]
Pupil Position Left Y [px]
Mouse Position X [px]
Mouse Position Y [px]

Technical Validation

Data were collected using established psychological experimental methodologies and state-of-the-art technologies. This ensured rigorous adherence to best practices across the entire data lifecycle, including data collection, pre-processing, and post-processing.

Hardware and software specifications

The eye-tracking experiment utilized the SMI RED 250 remote eye-tracking device, which operates at a sampling rate of 250 Hz. Stimuli were presented on a 22-inch LCD monitor with a resolution of 1600 × 900 pixels and a refresh rate of 60 Hz. For software, Experiment Center 3.7.69 (by SensoMotoric Instruments, SMI) was employed for stimulus presentation, while SMI BeGaze software (version 3.7) was used for data processing and analysis. The eye-tracking recordings and subsequent data processing were based on established methodologies20,25,26.

Pilot study for technical validation

A pilot study was conducted as part of the experimental preparation to validate the technical setup. This pilot study aimed not only to ensure the correctness of the experimental procedure but also to identify and mitigate potential technical issues arising from the relatively complex multi-device setup.

Ensuring high-quality eye-tracking data

Given that eye-tracking data are prone to signal loss and high attrition rates, several measures were implemented to maximize data quality:

1. Participant screening: Individuals with significant visual impairments were excluded from the study.
2. Calibration protocol: Each participant underwent an initial eye-tracker calibration before the experiment and an additional re-calibration at the session’s midpoint.

Data pre-processing and quality control

The collected data underwent systematic pre-processing and primary validation, which included the following steps (a sketch reproducing the tracking-ratio screen follows this list):

1. Exclusion of participants with severe data distortions or signal loss: Recordings whose signals were significantly distorted or wholly lost were removed.
2. Tracking ratio thresholding: Participants with a tracking ratio below 80% were excluded, as values below this threshold are typically considered insufficient for reliable analyses.
3. Manual correction of distorted data: If distortion was identified but could be corrected manually using BeGaze’s “gaze offset correction” function, the data were corrected and retained. Otherwise, the recording was excluded.
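The tracking-ratio screen in step 2 can be illustrated with the per-stimulus tracking ratios stored in stimuli.csv. Note that the published exclusion was applied per recording; averaging per participant, as below, is a simplifying assumption.

```python
# Minimal sketch: flag participants whose mean tracking ratio across the
# eight study materials falls below the 80% threshold used in validation.
import pandas as pd

stimuli = pd.read_csv("stimuli.csv")

mean_ratio = stimuli.groupby("part_ID")["tracking_ratio"].mean()
flagged = mean_ratio[mean_ratio < 80.0]
print(f"{len(flagged)} participant(s) below the 80% tracking-ratio threshold")
print(flagged.sort_values())
```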
All data processing was performed using SMI BeGaze v3.7, allowing for refinement and quality enhancement of the recorded data.

Eye-tracking event detection and data cleaning

Prior to statistical analysis, eye-tracking event detection was configured in BeGaze using high-speed saccade-based detection parameters:

Minimum saccade duration: 22 ms
Saccadic peak velocity threshold: 4°/s
Minimum fixation duration: 50 ms

Subsequently, visual evaluation and data cleaning were conducted. Any recordings with a tracking ratio below 80% were excluded.

Expert validation and final data exclusion

Each recording underwent independent visual inspection by two expert evaluators, following standard best practices for eye-tracking data validation. If systematic distortions were detected and could be corrected using gaze offset correction in BeGaze, the recording was corrected manually and retained. If the distortion was irreparable, the recording was excluded from the dataset.

Defining Areas of Interest (AOIs) and transitions

For the analysis, the eight learning slides described above were used. Each slide contained four areas of interest (AOIs) corresponding to the text paragraphs and images within the learning material (see Fig. 2). The size of each AOI was 359,550 px². Transitions, a core eye-tracking metric, were analyzed to assess gaze movement between AOIs. A transition is defined as a saccade from one AOI to another. The total number of transitions per learning slide per participant was calculated (a counting sketch appears at the end of this section).

Exclusions and final dataset

Out of 110 participants, 17 recordings were excluded due to severe signal distortion or complete data loss for technical reasons. A further 3 participants were removed from the dataset because they gave up or did not complete the assignment. Additionally, six more recordings were removed because their tracking ratio fell below 80%, resulting in 84 complete recordings. The final dataset thus represents high-quality eye-tracking data capturing visual processing during learning with metacognitive prompting and self-regulated learning. It provides a robust foundation for investigating gaze behavior and its relationship with cognitive and metacognitive processes in educational contexts.
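To make the transition metric concrete, the sketch below assigns each fixation to one of four hypothetical AOI rectangles and counts successive fixations landing in different AOIs, which approximates saccade-based transitions. The coordinates are placeholders; the actual AOIs were defined in BeGaze, and precomputed counts are already available in the Transitions_* columns of stimuli.csv.

```python
# Minimal sketch: count AOI transitions on one slide from fixation positions.
# AOI rectangles (left, top, right, bottom in px) are hypothetical placeholders
# for a 1600 x 900 slide with two text paragraphs and two images.
import pandas as pd

AOIS = {
    "text_1": (50, 100, 760, 480),
    "text_2": (50, 500, 760, 860),
    "image_1": (840, 100, 1550, 480),
    "image_2": (840, 500, 1550, 860),
}

def aoi_of(x: float, y: float) -> str | None:
    """Return the AOI containing point (x, y), or None if outside all AOIs."""
    for name, (left, top, right, bottom) in AOIS.items():
        if left <= x <= right and top <= y <= bottom:
            return name
    return None

events = pd.read_csv("ET_event_data/P01_Task_1.csv")  # hypothetical file name
fix = events[events["Fixation Position X [px]"].notna()]
seq = [aoi_of(x, y) for x, y in zip(fix["Fixation Position X [px]"],
                                    fix["Fixation Position Y [px]"])]
seq = [a for a in seq if a is not None]  # ignore fixations outside all AOIs

transitions = sum(1 for a, b in zip(seq, seq[1:]) if a != b)
print("AOI transitions on this slide:", transitions)
```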
Usage Notes

The dataset24 is publicly accessible for research and development purposes, offering valuable opportunities for both academic institutions and commercial entities, who can use it freely. As a rich source of eye-tracking data collected in a controlled laboratory study, it presents numerous opportunities for exploration across disciplines such as cognitive psychology, engineering, human-computer interaction, and virtual environment design. The dataset’s detailed structure allows researchers to investigate various aspects of self-regulated learning, cognitive load, and attentional patterns. It is particularly beneficial for studies focusing on the learning sciences and instructional design.

One of its key strengths is its potential for analyzing visual attention allocation under different instructional (experimental) conditions. The data enable a detailed examination of how metacognitive prompts influence gaze behavior and learning outcomes, and how different types of study materials, such as text-based versus multimedia content, affect cognitive engagement and subsequent performance. Moreover, the final didactic test, designed to assess learning performance, comprises items at three difficulty levels, reflecting varying degrees of comprehension. This layered structure allows nuanced analyses that account for how different cognitive demands shape learning processes.

At the level of eye-tracking data, the dataset provides an opportunity to explore the dynamics of study behavior in terms of both the quality and quantity of eye movements. Researchers can conduct detailed analyses of various eye-tracking events, such as transitions between different quartiles of the learning slides27, and generate heatmaps that visualize patterns of search activity (a minimal heatmap sketch follows below). This can be particularly useful for understanding how learners navigate and process educational content, shedding light on the cognitive strategies employed during self-regulated learning.

Beyond educational research, the dataset offers valuable applications in human-computer interaction and virtual/online learning environments (OLEs). By analyzing gaze-based interactions, researchers can optimize interface designs, improve multimedia content, and develop adaptive e-learning platforms that respond dynamically to user engagement. Furthermore, insights into fixation patterns and attentional shifts can inform the design of virtual and augmented reality applications, helping to create more intuitive and cognitively efficient digital learning experiences.
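As an example of the heatmap analyses mentioned above, the sketch below bins the binocular-averaged point of regard from one raw recording into a smoothed gaze-density map. The file name is hypothetical and the smoothing width is arbitrary.

```python
# Minimal sketch: gaze-density heatmap for one stimulus from ET_data_raw.
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from scipy.ndimage import gaussian_filter

raw = pd.read_csv("ET_data_raw/P01.csv")  # hypothetical file name
task = raw[raw["Stimulus"] == "Task_1"]

# Average left and right points of regard; pandas skips missing samples.
x = task[["Point of Regard Right X [px]", "Point of Regard Left X [px]"]].mean(axis=1)
y = task[["Point of Regard Right Y [px]", "Point of Regard Left Y [px]"]].mean(axis=1)
pts = pd.DataFrame({"x": x, "y": y}).dropna()  # drop blinks / signal loss

# Bin onto the 1600 x 900 stimulus grid and smooth.
heat, _, _ = np.histogram2d(pts["y"], pts["x"], bins=(90, 160),
                            range=[[0, 900], [0, 1600]])
plt.imshow(gaussian_filter(heat, sigma=3), extent=(0, 1600, 900, 0), cmap="hot")
plt.title("Gaze density, Task_1 (hypothetical recording)")
plt.show()
```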
Despite its broad applicability, certain methodological limitations should be considered. The dataset does not include direct assessments of cognitive capacities, such as IQ testing, as balancing and randomization within the experimental design were intended to control for variability across participants. Additionally, the final dataset may be affected by the previously discussed errors and signal inconsistencies in eye-tracking recordings, which should be considered when interpreting results. However, the extensive demographic information included in the dataset enables researchers to test hypotheses related to individual differences, further enhancing its potential for exploring diverse research questions. The dataset represents a comprehensive resource for understanding human eye-tracking behavior during learning and provides a foundation for studying the interplay between cognitive processes, instructional design, and attentional mechanisms. By leveraging these data, researchers can develop more effective learning strategies, refine educational technologies, and contribute to the growing field of cognitive and applied psychology.

Code availability

No custom code was used in the preparation of the presented data.

References

1. Mayer, R. E. Cognitive Theory of Multimedia Learning. In The Cambridge Handbook of Multimedia Learning (ed. Mayer, R. E.) 43–71 (Cambridge University Press, 2014). https://doi.org/10.1017/CBO9781139547369.005
2. Paivio, A. Mental Representations (Oxford University Press, 1990). https://doi.org/10.1093/acprof:oso/9780195066661.001.0001
3. Baddeley, A. D. & Logie, R. H. Working Memory: The Multiple-Component Model. In Models of Working Memory (eds. Miyake, A. & Shah, P.) 28–61 (Cambridge University Press, 1999). https://doi.org/10.1017/CBO9781139174909.005
4. Sweller, J. Cognitive load theory, learning difficulty, and instructional design. Learn. Instr. 4, 295–312 (1994).
5. Panadero, E. A review of self-regulated learning: six models and four directions for research. Front. Psychol. 8, 422 (2017).
6. Zimmerman, B. J. & Schunk, D. H. Self-regulated learning and performance: an introduction and an overview. In Handbook of Self-Regulation of Learning and Performance (Routledge, 2011).
7. Boekaerts, M., Zeidner, M. & Pintrich, P. R. Handbook of Self-Regulation (Elsevier, 1999).
8. McInerney, D. M., Cheng, R. W., Mok, M. M. C. & Lam, A. K. H. Academic self-concept and learning strategies: direction of effect on student academic achievement. J. Adv. Acad. 23, 249–269 (2012).
9. Sitzmann, T. & Ely, K. A meta-analysis of self-regulated learning in work-related training and educational attainment: what we know and where we need to go. Psychol. Bull. 137, 421–442 (2011).
10. Azevedo, R., Cromley, J. G., Moos, D. C., Greene, J. A. & Winters, F. I. Adaptive content and process scaffolding: a key to facilitating students’ self-regulated learning with hypermedia. Psychological Test and Assessment Modeling 53, 106–140 (2011).
11. Devolder, A., Van Braak, J. & Tondeur, J. Supporting self-regulated learning in computer-based learning environments: a systematic review of effects of scaffolding in the domain of science education. J. Comput. Assist. Learn. 28, 557–573 (2012).
12. Guo, L. Using metacognitive prompts to enhance self-regulated learning and learning outcomes: a meta-analysis of experimental studies in computer-based learning environments. J. Comput. Assist. Learn. 38, 811–832 (2022).
13. Hoch, E., Fleig, K. & Scheiter, K. Can monitoring prompts help to reduce a confidence bias when learning with multimedia? Z. Für Entwicklungspsychologie Pädagog. Psychol. 55, 77–90 (2023).
14. Manlove, S., Lazonder, A. W. & de Jong, T. Trends and issues of regulative support use during inquiry learning: patterns from three studies. Comput. Hum. Behav. 25, 795–803 (2009).
15. Sweller, J., Ayres, P. & Kalyuga, S. Intrinsic and extraneous cognitive load. In Cognitive Load Theory 57–69 (Springer, New York, NY, 2011). https://doi.org/10.1007/978-1-4419-8126-4_5
16. Bannert, M. Promoting self-regulated learning through prompts. Z. Für Pädagog. Psychol. 23, 139–145 (2009).
17. Berthold, K., Nückles, M. & Renkl, A. Do learning protocols support learning strategies and outcomes? The role of cognitive and metacognitive prompts. Learn. Instr. 17, 564–577 (2007).
18. Lehmann, T., Hähnlein, I. & Ifenthaler, D. Cognitive, metacognitive and motivational perspectives on preflection in self-regulated online learning. Comput. Hum. Behav. 32, 313–323 (2014).
19. Moser, S., Zumbach, J. & Deibl, I. The effect of metacognitive training and prompting on learning success in simulation-based physics learning. Sci. Educ. 101, 944–967 (2017).
20. Duchowski, A. T. Eye Tracking Methodology (Springer International Publishing, Cham, 2017). https://doi.org/10.1007/978-3-319-57883-5
21. Catrysse, L. et al. How are learning strategies reflected in the eyes? Combining results from self-reports and eye-tracking. Br. J. Educ. Psychol. 88, 118–137 (2018).
22. Juhaňák, L., Juřík, V., Dostálová, N. & Juříková, Z. Exploring the effects of metacognitive prompts on learning outcomes: an experimental study in higher education. Australasian Journal of Educational Technology 41(1), 42–59 (2025).
23. Pintrich, P. R. & De Groot, E. V. Motivated Strategies for Learning Questionnaire. American Psychological Association (1990). https://doi.org/10.1037/t09161-000
24. Juřík, V., Juhaňák, L., Ružičková, A., Dostálová, N. & Juříková, Z. Experimental Dataset on Eye-tracking Activity During Self-Regulated Learning. figshare https://doi.org/10.6084/m9.figshare.28304069 (2025).
25. Hessels, R. S. & Hooge, I. T. C. Eye tracking in developmental cognitive neuroscience – the good, the bad and the ugly. Dev. Cogn. Neurosci. 40, 100710 (2019).
26. Juříková, Z. Eye of the Teacher: Using Eye Tracking to Explore Professional Vision of Teachers and Teacher Gaze (Masaryk University, Faculty of Arts, Department of Educational Sciences, 2022).
27. Dostálová, N., Juhaňák, L., Juříková, Z. & Juřík, V. Gaze transitions as a mediator of the effect of metacognitive prompts on student learning: an eye-tracking experiment (unpublished manuscript).

Acknowledgements

This work was supported by the Czech Science Foundation [grant number 21-08218S, Multimodal learning analytics to study self-regulated learning processes within learning management systems]. Vojtěch Juřík was also supported by a grant from Masaryk University [grant number MUNI/A/1673/2024, Factors of successful psychological and social functioning in a changing world]. The study was also supported by the GREY Lab and HUME Lab infrastructures at the Faculty of Arts, Masaryk University, Brno.

Author information

Authors and Affiliations

Department of Educational Sciences, Faculty of Arts, Masaryk University, Brno, Czech Republic: Vojtěch Juřík, Libor Juhaňák, Nicol Dostálová & Zuzana Juříková
Department of Psychology, Faculty of Arts, Masaryk University, Brno, Czech Republic: Vojtěch Juřík & Alexandra Ružičková
Institute of Computer Aided Engineering and Computer Science, Faculty of Civil Engineering, Brno University of Technology, Brno, Czech Republic: Vojtěch Juřík
HUME Lab, Experimental Humanities Laboratory, Faculty of Arts, Masaryk University, Brno, Czech Republic: Zuzana Juříková

Contributions

The authors made contributions based on their individual skills and areas of interest. Libor Juhaňák: Funding Acquisition, Project Administration, Conceptualization, Methodology, Data curation, Formal analysis, Visualization, Writing – original draft preparation, Writing – review & editing. Vojtěch Juřík: Conceptualization, Methodology, Investigation, Data curation, Writing – original draft preparation, Writing – review & editing. Nicol Dostálová: Conceptualization, Methodology, Data curation. Zuzana Juříková: Conceptualization, Methodology, Formal analysis, Writing – review & editing.
Alexandra Ružičková: Investigation, Writing – original draft preparation, Writing – review & editing.

Corresponding author

Correspondence to Vojtěch Juřík.

Ethics declarations

Competing interests

The authors declare no competing interests.

Additional information

Publisher’s note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary information

Sensitive Data Checklist

Rights and permissions

Open Access. This article is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License, which permits any non-commercial use, sharing, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if you modified the licensed material. You do not have permission under this licence to share adapted material derived from this article or parts of it. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by-nc-nd/4.0/.