pith. machine review for the scientific record. sign in

arxiv: 2605.05836 · v1 · submitted 2026-05-07 · 💻 cs.HC

Recognition: unknown

Can providing feedback on gaze and mental-effort synchrony improve pair programming performance?

Authors on Pith no claims yet

Pith reviewed 2026-05-08 07:30 UTC · model grok-4.3

classification 💻 cs.HC
keywords pair programmingjoint visual attentionmental effortAI feedbackeye trackingcollaborative learningdebuggingproactive feedback
0
0 comments X

The pith

Feedback on gaze and mental-effort synchrony improves pair programming performance.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

The paper tests whether AI-supported feedback grounded in joint visual attention and joint mental effort can address breakdowns in coordination and cognitive regulation during pair programming. It runs two dual eye-tracking studies on debugging tasks, comparing no-feedback baselines to reactive interventions at threshold deviations and proactive machine-learning forecasts of future breakdowns. Multimodal feedback raises debugging success and efficiency, proactive timing further shortens task duration and raises uptake of useful code changes, and high-performing pairs retain more agency with fewer interruptions. A sympathetic reader would care because pair programming is a standard practice whose variable results often trace to unnoticed attention or effort misalignments that real-time cues might correct.

Core claim

Multimodal feedback on joint visual attention and joint mental effort significantly improves collaborative programming performance compared to no-feedback conditions, with reactive combined-modality feedback producing strong gains in debugging success and efficiency and proactive forecast-based feedback further reducing time on task, increasing constructive feedback uptake, and better preserving learner agency especially for high-performing pairs.

What carries the argument

Dual eye-tracking capture of real-time joint visual attention and joint mental effort used as triggers for AI feedback, either reactively when measures deviate beyond thresholds or proactively via machine-learning models that forecast regulatory breakdowns.

If this is right

  • Combining joint visual attention and joint mental effort feedback produces larger gains in debugging success than feedback based on either measure alone.
  • Proactive forecast-based feedback shortens time on task and raises rates of constructive code changes more than reactive threshold-based feedback.
  • Proactive feedback maintains optimal collaboration states with fewer interventions and is especially helpful for already high-performing pairs.
  • Feedback timing and transparency determine whether AI support enhances or reduces learner agency in collaborative programming.

Where Pith is reading between the lines

These are editorial extensions of the paper, not claims the author makes directly.

  • The same synchrony measures could be tested as support tools for other collaborative problem-solving tasks outside programming.
  • Effects observed in lab debugging sessions may or may not hold in longer, remote, or classroom pair-programming settings.
  • Adding speech or interaction-log data to the eye-tracking models could improve forecast accuracy for when feedback is needed.

Load-bearing premise

Real-time eye-tracking measures of joint visual attention and joint mental effort accurately detect moments of collaborative breakdown and that the chosen thresholds and forecasts identify points where feedback improves rather than disrupts outcomes.

What would settle it

A controlled replication in which pairs receiving the gaze-and-effort feedback show no difference or worse debugging success and time-on-task compared with no-feedback controls.

Figures

Figures reproduced from arXiv: 2605.05836 by Anahita Golrang, Kshitij Sharma.

Figure 13
Figure 13. Figure 13: Amount and types of feedback provided in the proactive feedback study and performance in the experimental condition view at source ↗
read the original abstract

Pair programming is a widely used collaborative learning practice in computer science education yet its effectiveness varies substantially due to breakdowns in coordination attention and cognitive regulation between partners. This paper investigates whether AI supported feedback grounded in joint visual attention and joint mental effort can improve collaborative programming performance and how feedback timing shapes learner AI interaction. Two experimental studies using dual eye tracking capture real time indicators of collaborative regulation during debugging tasks. Study 1 examines reactive feedback that intervenes when observed joint visual attention or joint mental effort deviates beyond predefined thresholds while Study 2 evaluates proactive feedback that forecasts future regulatory breakdowns using machine learning models and intervenes pre emptively. Across both studies feedback effectiveness is assessed through debugging success time on task and feedback uptake reflected in code changes. Multimodal feedback significantly improves collaborative performance compared to no feedback conditions. Reactive feedback yields strong gains in debugging success and efficiency particularly when joint visual attention and joint mental effort based feedback are combined. Proactive forecast based feedback further enhances performance reduces time on task and increases constructive feedback uptake while relying less on intrusive interventions. Proactive feedback better preserves learner agency by maintaining optimal collaboration states, particularly for high-performing pairs. These findings demonstrate that gaze and mental effort synchrony can serve as reliable actionable triggers for AI supported collaborative learning highlighting the importance of feedback timing transparency and anticipatory regulation in supporting effective pair programming.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, this is the friction.

Referee Report

3 major / 2 minor

Summary. The paper investigates whether AI-supported feedback grounded in real-time dual eye-tracking measures of joint visual attention and joint mental effort can improve pair programming performance during debugging tasks. It reports two studies: Study 1 evaluates reactive threshold-based interventions when synchrony deviates, while Study 2 tests proactive ML-based forecasts of future breakdowns; both claim multimodal feedback yields gains in debugging success, reduced time-on-task, and higher constructive feedback uptake relative to no-feedback controls, with proactive timing preserving learner agency better.

Significance. If the empirical claims hold after proper validation and reporting, the work could inform design of anticipatory AI tools for collaborative CS education by showing that physiological synchrony signals can serve as actionable triggers. The proactive vs. reactive timing comparison is a potentially useful contribution, but the absence of core statistical and validation details currently blocks any assessment of whether the results are reliable or generalizable.

major comments (3)
  1. Abstract: asserts 'statistically significant gains' and 'strong gains in debugging success and efficiency' but supplies no sample sizes, statistical tests, effect sizes, baseline comparisons, or exclusion criteria, so the data-to-claim link cannot be evaluated and the central performance-improvement result remains unsubstantiated.
  2. Methods (Study 1 and Study 2): the mapping from eye-tracking signals to 'joint mental effort breakdowns' and 'regulatory failures' is load-bearing for both the reactive and proactive conditions, yet no validation against independent instruments (NASA-TLX, secondary-task probes, or physiological ground truth) is described; without it, performance gains could be artifacts of any timely prompt rather than content-specific feedback on synchrony.
  3. Methods: deviation thresholds for joint attention/effort and ML model hyperparameters/training data for proactive forecasts are listed as free parameters; it is unclear whether they were pre-registered or tuned post-hoc on the same data, which directly undermines the reliability of the trigger definitions and the reported improvements.
minor comments (2)
  1. Abstract: minor grammatical and phrasing issues ('pre emptively', 'reduces time on task', 'joint visual attention and joint mental effort based feedback') that reduce readability.
  2. Abstract: 'joint mental effort' is used without operational definition; clarify whether it derives from pupil dilation, fixation duration, or another metric.

Simulated Author's Rebuttal

3 responses · 0 unresolved

We thank the referee for the constructive and detailed comments. These highlight key areas where additional clarity and rigor in reporting will strengthen the manuscript. We address each major comment point by point below, indicating the revisions we have made or will make in the next version.

read point-by-point responses
  1. Referee: Abstract: asserts 'statistically significant gains' and 'strong gains in debugging success and efficiency' but supplies no sample sizes, statistical tests, effect sizes, baseline comparisons, or exclusion criteria, so the data-to-claim link cannot be evaluated and the central performance-improvement result remains unsubstantiated.

    Authors: We agree that the abstract would benefit from more specific quantitative support for the claims. Due to typical abstract length constraints, we had prioritized brevity, but we have now revised the abstract to incorporate sample sizes (N=48 pairs in Study 1; N=52 pairs in Study 2), key statistical results with tests and effect sizes drawn from the Results section, explicit baseline comparisons to the no-feedback control, and a concise statement of exclusion criteria. These additions directly link the reported gains to the data while preserving readability. revision: yes

  2. Referee: Methods (Study 1 and Study 2): the mapping from eye-tracking signals to 'joint mental effort breakdowns' and 'regulatory failures' is load-bearing for both the reactive and proactive conditions, yet no validation against independent instruments (NASA-TLX, secondary-task probes, or physiological ground truth) is described; without it, performance gains could be artifacts of any timely prompt rather than content-specific feedback on synchrony.

    Authors: We acknowledge that explicit validation strengthens the interpretation of the eye-tracking-derived measures. The original manuscript grounded the joint mental effort metric in established pupil-dilation literature, but we have added a new subsection in the revised Methods reporting post-session correlations between our synchrony indices and NASA-TLX scores (showing moderate convergent validity). We also discuss why real-time secondary-task probes were impractical in the pair-programming context and note the absence of additional physiological ground truth as a limitation. These changes clarify that the interventions target the specific synchrony signals rather than providing generic prompts. revision: partial

  3. Referee: Methods: deviation thresholds for joint attention/effort and ML model hyperparameters/training data for proactive forecasts are listed as free parameters; it is unclear whether they were pre-registered or tuned post-hoc on the same data, which directly undermines the reliability of the trigger definitions and the reported improvements.

    Authors: We agree that transparency on parameter selection is essential. The deviation thresholds were determined from an independent pilot study (10 pairs excluded from the main analyses), and the ML models were trained and hyperparameter-tuned via cross-validation on a separate prior dataset. We have expanded the Methods section with a dedicated paragraph specifying the exact threshold values, the training data source, the cross-validation procedure, and confirmation that no post-hoc adjustment occurred on the evaluation data. Although the study was not formally pre-registered, we have added a statement on open-science practices including public release of analysis code. revision: yes

Circularity Check

0 steps flagged

No circularity: purely empirical intervention study with direct performance measurements

full rationale

The paper describes two controlled experiments (reactive threshold-based feedback in Study 1; ML-forecast proactive feedback in Study 2) that measure debugging success, time-on-task, and feedback uptake under different conditions. No equations, derivations, fitted parameters renamed as predictions, or self-citation chains appear in the provided text or abstract. All central claims rest on observed outcome differences between feedback and no-feedback groups rather than any self-referential reduction of a result to its own inputs. The work is therefore self-contained against external benchmarks.

Axiom & Free-Parameter Ledger

2 free parameters · 1 axioms · 0 invented entities

The central claim rests on the assumption that dual eye-tracking yields valid, real-time proxies for collaborative regulation and that the chosen intervention thresholds and ML forecasts are appropriate triggers for performance gains.

free parameters (2)
  • deviation thresholds for joint attention and effort
    Predefined cutoffs that trigger reactive feedback; values not stated in abstract.
  • ML model hyperparameters and training data for proactive forecasts
    Used to predict future regulatory breakdowns; details absent from abstract.
axioms (1)
  • domain assumption Joint visual attention and joint mental effort deviations indicate breakdowns in collaborative regulation that affect debugging performance.
    Invoked to justify both reactive and proactive feedback triggers.

pith-pipeline@v0.9.0 · 5534 in / 1214 out tokens · 23214 ms · 2026-05-08T07:30:39.375295+00:00 · methodology

discussion (0)

Sign in with ORCID, Apple, or X to comment. Anyone can read and Pith papers without signing in.

Reference graph

Works this paper leans on

42 extracted references · 2 canonical work pages

  1. [1]

    Multimodal learning analytics of collaborative patterns during pair programming in higher education

    Weiqi Xu, Yajuan Wu, and Fan Ouyang. Multimodal learning analytics of collaborative patterns during pair programming in higher education. International Journal of Educa - tional Technology in Higher Education, 20(1):8, 2023

  2. [2]

    Adapting interaction analysis to cscl: A systematic review

    Rolf Steier and Jacob Gorm Davidsen. Adapting interaction analysis to cscl: A systematic review. In 14th International Conference on Computer-Supported Collaborative Learning (CSCL): ISLS Annual Meeting 2021 Reflecting the Past and Embracing the Future, pages 157–160. International Society of the Learning Sciences (ISLS), 2021

  3. [3]

    Llm-based collabora- tive programming: impact on students’ computational thinking and self -efficacy

    Yi-Miao Yan, Chuang-Qi Chen, Yang-Bang Hu, and Xin-Dong Ye. Llm-based collabora- tive programming: impact on students’ computational thinking and self -efficacy. Human- ities and Social Sciences Communications, 12(1):1–12, 2025

  4. [4]

    Socially shared regulation of learning and artificial intelligence: Opportunities to support socially shared regulation

    Jinhee Kim, Rita Detrick, Seongryeong Yu, Yukyeong Song, Linda Bol, and Na Li. Socially shared regulation of learning and artificial intelligence: Opportunities to support socially shared regulation. Education and Information Technologies, pages 1–39, 2025

  5. [5]

    Self-regulation, self-efficacy, and fear of failure interactions with how novices use llms to solve programming problems

    Lauren E Margulieux, James Prather, Brent N Reeves, Brett A Becker, Gozde Cetin Uzun, Dastyni Loksa, Juho Leinonen, and Paul Denny. Self-regulation, self-efficacy, and fear of failure interactions with how novices use llms to solve programming problems. In Proceedings of the 2024 on Innovation and Technology in Computer Science Education V. 1, pages 276–282. 2024

  6. [6]

    Fostering reg - ulatory processes using computational scaffolding

    Leonardo Silva, Ant´onio Mendes, Anabela Gomes, and Gabriel Fortes. Fostering reg - ulatory processes using computational scaffolding. International Journal of Computer - Supported Collaborative Learning, 18(1):67–100, 2023

  7. [7]

    Collaborative programming based on social shared regulation: An approach to improving students’ pro - gramming achievements and group metacognition

    Cheng-Ye Liu, Wei Li, Ji -Yi Huang, Lu -Yuan Lei, and Pei -Rou Zhang. Collaborative programming based on social shared regulation: An approach to improving students’ pro - gramming achievements and group metacognition. Journal of Computer Assisted Learn - ing, 39(5):1714–1731, 2023

  8. [8]

    Effects of a self -regulated cooperative programming approach on elementary students’ programming knowledge, metacognitive awareness, and self-efficacy

    Chengye Liu, Zhihao Cui, and Xiaojing Weng. Effects of a self -regulated cooperative programming approach on elementary students’ programming knowledge, metacognitive awareness, and self-efficacy. Journal of Computer Assisted Learning, 42(1):e70167, 2026

  9. [9]

    Investigating how introductory programming students apply regulation strategies

    Deller James Ferreira and Dirson Santos de Campos. Investigating how introductory programming students apply regulation strategies. In CSEDU (2), pages 463–473, 2023

  10. [10]

    Llms integration in software engineering team projects: Roles, impact, and a pedagogical design space for ai tools in computing education

    Ahmed Kharrufa, Sami Alghamdi, Abeer Aziz, and Christopher Bull. Llms integration in software engineering team projects: Roles, impact, and a pedagogical design space for ai tools in computing education. ACM Transactions on Computing Education, 2024

  11. [11]

    Is ai the better programming partner? human-human pair programming vs

    Qianou Ma, Tongshuang Wu, and Kenneth Koedinger. Is ai the better programming partner? human-human pair programming vs. human -ai pair programming. arXiv preprint arXiv:2306.05153, 2023

  12. [12]

    Human and artificial intelligence collaboration for socially shared regulation in learning

    Sanna J¨ar vel ¨a, Andy Nguyen, and Allyson Hadwin. Human and artificial intelligence collaboration for socially shared regulation in learning. British Journal of Educational Technology, 54(5):1057–1076, 2023

  13. [13]

    Human–ai collaborative learning in mixed reality: Examining the cognitive and socio - emotional interactions

    Belle Dang, Luna Huynh, Faaiz Gul, Carolyn Ros´e, Sanna J¨arvel ¨a, and Andy Nguyen. Human–ai collaborative learning in mixed reality: Examining the cognitive and socio - emotional interactions. British Journal of Educational Technology, 2025

  14. [14]

    Learning with generative arti- ficial intelligence within a network of co -regulation

    Jason M Lodge, Paula de Barba, and Jaclyn Broadbent. Learning with generative arti- ficial intelligence within a network of co -regulation. Journal of University Teaching and Learning Practice, 20(7):1–10, 2023. 26

  15. [15]

    Human-ai collaboration: Designing arti - ficial agents to facilitate socially shared regulation among learners

    Justin Edwards, Andy Nguyen, Joni L¨a ms ¨a , Marta Sobocinski , Ridwan Whitehead, Belle Dang, Anni -Sofia Roberts, and Sanna J¨ar vel ¨a. Human-ai collaboration: Designing arti - ficial agents to facilitate socially shared regulation among learners. British Journal of Educational Technology, 56(2):712–733, 2025

  16. [16]

    Zipei Ouyang. Self-regulated learning and engagement as serial mediators between ai - driven adaptive learning platform characteristics and educational quality: a psychological mechanism analysis. Frontiers in Psychology, 16:1646469, 2025

  17. [17]

    Where is aied headed? key topics and emerging frontiers (2020-2024)

    Shihui Feng, Huilin Zhang, and Dragan Gaˇsevi´c. Where is aied headed? key topics and emerging frontiers (2020-2024). arXiv preprint arXiv:2506.20971, 2025

  18. [18]

    Examining the interplay of gaze and verbal interactions in socially shared regulation of learning: A transmodal analysis (tma) study

    Andy Nguyen, Yeyu Wang, Ridwan Whitehead, Muhammad Ashiq, Sanna J¨ar vel ¨a, and David Williamson Shaffer. Examining the interplay of gaze and verbal interactions in socially shared regulation of learning: A transmodal analysis (tma) study. 2025

  19. [19]

    An ecological perspective on learner agency: The case of chinese tertiary-level efl students in peer reviews

    Jia He, Haixiao Wang, Jun Xia, and Xiang He. An ecological perspective on learner agency: The case of chinese tertiary-level efl students in peer reviews. System, 121:103222, 2024

  20. [20]

    Oak story: Improving learner outcomes with llm-mediated interactive narratives

    Alan Y Cheng, Carolyn Q Zou, Anthony Xie, Matthew Hsu, Felicia Yan, Felicity Huang, David K Zhang, Arjun Sharma, Rashon Poole, Daniel Wan Rosli, et al. Oak story: Improving learner outcomes with llm-mediated interactive narratives. In Proceedings of the 38th Annual ACM Symposium on User Interface Software and Technology , pages 1 –17, 2025

  21. [21]

    Students’ perceptions and outcome of teacher feedback: A systematic review

    Christian Brandmo and Siv M Gamlem. Students’ perceptions and outcome of teacher feedback: A systematic review. In Frontiers in Education , volume 10, page 1572950. Frontiers Media SA, 2025

  22. [22]

    Rubric co-creation to promote quality, interactivity and uptake of peer feedback

    Da Yan. Rubric co-creation to promote quality, interactivity and uptake of peer feedback. Assessment & Evaluation in Higher Education, 49(8):1017–1034, 2024

  23. [23]

    Empowering agency through learner -orchestrated self - generated feedback

    James Wood and Edd Pitt. Empowering agency through learner -orchestrated self - generated feedback. Assessment & Evaluation in Higher Education, 50(1):127–143, 2025

  24. [24]

    Role of learner agency for interactive group learning through the lens of blooms taxonomy

    Juliana Tay, Na Li, Lan Luo, Zihui Zhou, Wan Meng, Qing Zhang, Erick Purwanto, and Yongjia Lu. Role of learner agency for interactive group learning through the lens of blooms taxonomy. Interactive Learning Environments, pages 1–14, 2025

  25. [25]

    Unfolding the typology and quality of the learner agency practices in the teachers’ implementation of the 2013 curriculum in indonesia: the normalisation process theory perspective

    Burhanuddin and Mohammad Arsyad Arrafii. Unfolding the typology and quality of the learner agency practices in the teachers’ implementation of the 2013 curriculum in indonesia: the normalisation process theory perspective. Asia Pacific Education Review , 24(4):545–561, 2023

  26. [26]

    Technology-enhanced self and peer assessment to support student agency during group projects

    Marion Blumenstein, Asma Shakil, and Peter Swedlund. Technology-enhanced self and peer assessment to support student agency during group projects. ASCILITE Publica- tions, 2023

  27. [27]

    Epilogue: The interplay of agency and instruction in written corrective feedback: Reflections and future directions

    Hossein Nassaji and Eva Kartchava. Epilogue: The interplay of agency and instruction in written corrective feedback: Reflections and future directions. Canadian Journal of Applied Linguistics, 28(1):121–129, 2025

  28. [28]

    Editorial students’ voices in formative assessment feedback: New insights from research topic contributions

    Pernille Fiskerstrand, Fabienne Van der Kleij, and Wenke Mork Rogne. Editorial students’ voices in formative assessment feedback: New insights from research topic contributions. In Frontiers in Education, volume 10, page 1698277. Frontiers, 2025

  29. [29]

    Manifestation of learner agency in primary education: Goal setting, implementation, and reflection in the con- text of competency-based learning

    Jovita Ponomarioviene˙ and Daiva Jakavonyte˙-St aˇskuvien e˙. Manifestation of learner agency in primary education: Goal setting, implementation, and reflection in the con- text of competency-based learning. Behavioral Sciences, 15(8):1116, 2025

  30. [30]

    Enhancing socially shared regula- tion in collaborative learning groups: Designing for cscl regulation tools

    Sanna J¨a rvel ¨a, Paul A Kirschner, Ernesto Panadero, Jonna Malmberg, Chris Phielix , Jos Jaspers, Marika Koivuniemi, and Hanna J¨arven o ja. Enhancing socially shared regula- tion in collaborative learning groups: Designing for cscl regulation tools. Educational Technology Research and Development, 63(1):125–142, 2015

  31. [31]

    Computer -supported collab - orative learning in programming education: A systematic literature review

    Leonardo Silva, Ant´onio Jos´e Mendes, and Anabela Gomes. Computer -supported collab - orative learning in programming education: A systematic literature review. In 2020 IEEE Global Engineering Education Conference (EDUCON), pages 1086–1095. IEEE, 2020

  32. [32]

    Extreme Programming Explained: Embrace Change

    Kent Beck and Cynthia Andres. Extreme Programming Explained: Embrace Change. 27 Addison-Wesley Professional, 2004

  33. [33]

    Empirical research on pair program - ming in higher education: a literature review

    Anja Hawlitschek , Sarah Berndt, and Sandra Schulz. Empirical research on pair program - ming in higher education: a literature review. Computer science education , 33(3):400–428, 2023

  34. [34]

    Pair programming as a model of collaborative learning: A review of the research

    David Preston. Pair programming as a model of collaborative learning: A review of the research. Journal of Computing Sciences in colleges, 20(4):39–45, 2005

  35. [35]

    Empirical studies of pair pro - gramming for cs/se teaching in higher education: A systematic literature review

    Norsaremah Salleh, Emilia Mendes, and John Grundy. Empirical studies of pair pro - gramming for cs/se teaching in higher education: A systematic literature review. IEEE Transactions on Software Engineering, 37(4):509–525, 2010

  36. [36]

    Gaze collaboration patterns of suc - cessful and unsuccessful programming pairs using cross-recurrence quantification analysis

    Maureen M Villamor and Ma Mercedes T Rodrigo. Gaze collaboration patterns of suc - cessful and unsuccessful programming pairs using cross-recurrence quantification analysis. Research and Practice in Technology Enhanced Learning, 14(1):25, 2019

  37. [37]

    What happens when collaborators are not in-synch? In Proceedings of the 16th International Conference on Computer -Supported Collaborative Learning (CSCL 2023), pages 99–106, 2023

    Kshitij Sharma and Giulio Molinari. What happens when collaborators are not in-synch? In Proceedings of the 16th International Conference on Computer -Supported Collaborative Learning (CSCL 2023), pages 99–106, 2023

  38. [38]

    The index of pupillary activity: Measuring cognitive load vis-`a - vis task difficulty with pupil oscillation

    Andrew T Duchowski, Krzysztof Krejtz, Cezary Biele, Agnieszka Niedzielska, Peter Kiefer, Martin Raubal, and Ioannis Giannopoulos. The index of pupillary activity: Measuring cognitive load vis-`a - vis task difficulty with pupil oscillation. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems , pages 1–13, 2018

  39. [39]

    Leveraging mobile eye-trackers to capture joint visual attention in co -located collaborative learning groups

    Bertrand Schneider, Kshitij Sharma, S´everin Cuendet, and Pierre Dillenbourg. Leveraging mobile eye-trackers to capture joint visual attention in co -located collaborative learning groups. International Journal of Computer -Supported Collaborative Learning , 13(3):241 – 261, 2018

  40. [40]

    Does feedback based on gaze and stress indicators help novice programmers? In European Conference on Technology Enhanced Learning , pages 198–213

    Anahita Golrang and Kshitij Sharma. Does feedback based on gaze and stress indicators help novice programmers? In European Conference on Technology Enhanced Learning , pages 198–213. Springer, 2025

  41. [41]

    Cross -recurrence quantification analysis of categorical and continuous time series: an r package

    Moreno I Coco and Rick Dale. Cross -recurrence quantification analysis of categorical and continuous time series: an r package. Frontiers in psychology, 5:510, 2014

  42. [42]

    do nothing

    Eirini Kalliamvakou. Research: Quantifying github copilot’s impact on developer produc - tivity and happiness. The GitHub Blog, September 2022. Accessed: 2025-12-02. Appendix Table 9.: Feedback combinations for different states SCENARIO JVA JME ME1 ME2 A1: Do Nothing A2: GitHub Copilot A3: Gaze-awareness Tool A4: Dialog Prompt A5: Show Hint 1 H H H H ✓ ✓*...