RoboBlockly Studio: Conversational Block Programming with Embodied Robot Feedback for Computational Thinking
Pith reviewed 2026-05-13 04:57 UTC · model grok-4.3
The pith
RoboBlockly Studio links block programming to robot actions and AI conversations to support computational thinking skills.
A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.
Core claim
The system establishes a closed loop in which students author code in blocks, run it on a physical robot, observe the outcome, and revise based on both the observed behavior and dialogue with the AI agent. This setup is intended to fulfill four objectives: maintain learner agency, make program behavior transparent, embed programming in embodied tasks, and scaffold reflection through AI dialogue. Observations from the student deployment indicate shifts in how students interacted with their code, reflected on problem-solving strategies, and grasped CT concepts.
What carries the argument
The tight iterative loop of authoring code, executing it on the robot, observing results, and revising with AI input, which ties abstract logic to physical embodiment.
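The loop described above can be sketched in code. This is a hypothetical illustration only: the class names, methods, and the `revise` callback are invented for this sketch and are not the paper's actual API.

```python
# Hypothetical sketch of the author-run-observe-revise loop the review
# describes. All names here are illustrative, not RoboBlockly Studio's API.
from dataclasses import dataclass, field


@dataclass
class Robot:
    log: list = field(default_factory=list)

    def execute(self, program):
        # "Run" each block on the robot and record what happened.
        self.log = [f"ran {block}" for block in program]
        return self.log


@dataclass
class AIAgent:
    def discuss(self, program, observation):
        # Scaffolding: pose a reflective question rather than give the fix,
        # which is how the review characterizes the pedagogical grounding.
        return f"You used {len(program)} blocks; did the robot do what you expected?"


def revise_loop(program, robot, agent, revise, rounds=2):
    """One author-run-observe-revise cycle per round. The learner-supplied
    `revise` callback keeps agency with the student, not the agent."""
    for _ in range(rounds):
        observation = robot.execute(program)          # embodied execution
        prompt = agent.discuss(program, observation)  # AI dialogue
        program = revise(program, prompt)             # student-led revision
    return program


# Example: a student who appends a turn block after each round of feedback.
final = revise_loop(["forward", "forward"], Robot(), AIAgent(),
                    revise=lambda p, q: p + ["turn_left"])
```

The point of the sketch is structural: the robot and the agent only produce observations and prompts, while every change to the program passes through the student's own revision step.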
If this is right
- Feedback from the robot and AI prompts students to interact differently with their code.
- Students reflect more on their problem-solving strategies.
- Students' understanding of computational thinking concepts improves.
- The design provides insights for building similar AI and robotics integrated learning tools.
Where Pith is reading between the lines
- This model could be tested in subjects beyond computing, such as physics or engineering, where physical outcomes matter.
- Adding pre- and post-assessments would allow measuring specific gains in computational thinking abilities.
- The conversational AI could be refined based on common student misconceptions observed during use.
Load-bearing premise
Qualitative observations from a single session with 32 high school students can adequately show that the system meets its goals of agency preservation, transparency, embodiment, and AI scaffolding, even absent numerical data or comparison groups.
What would settle it
If a follow-up experiment with a control group using only block programming finds no difference in students' ability to explain program behavior or solve problems compared to those using the full RoboBlockly Studio, the value of the added robot and AI elements would be called into question.
Original abstract
Computational thinking (CT) is increasingly promoted as a core literacy, yet learners and teachers face challenges in connecting abstract program logic to meaningful outcomes. We design and evaluate RoboBlockly Studio, an integrated interactive system that combines block-based programming, a conversational AI teaching agent, and embodied robot execution. RoboBlockly Studio creates a tight iterative loop of authoring, running, observing, and revising. Informed by interviews with five programming teachers, the system was designed to support four goals: (1) preserving learner agency in computational thinking, (2) making program behavior transparent and interpretable, (3) grounding programming in embodied, classroom-aligned tasks, and (4) scaffolding reflection through pedagogically grounded AI dialogue. We deployed RoboBlockly Studio with 32 high school students, observing how robot and AI feedback influenced students' interactions with code, reflections on problem-solving strategies, and understanding of CT concepts. We discuss design insights and implications for creating interactive, embodied learning environments that integrate AI and robotics to support CT learning in computing education.
Editorial analysis
A structured set of objections, weighed in public.
Referee Report
Summary. The paper introduces RoboBlockly Studio, an integrated system combining block-based programming, a conversational AI teaching agent, and embodied robot execution to support computational thinking (CT) education. The design, informed by interviews with five programming teachers, targets four goals: preserving learner agency, ensuring program behavior transparency, grounding tasks in embodied classroom activities, and scaffolding reflection via pedagogically grounded AI dialogue. The system is evaluated through a deployment with 32 high school students, where observations are reported on how robot and AI feedback influenced code interactions, problem-solving reflections, and CT concept understanding, followed by discussion of design insights.
Significance. If the evaluation claims can be substantiated with more rigorous data, the work could contribute to HCI and computing education by demonstrating a novel integration of conversational AI and physical robotics with block programming to bridge abstract logic and tangible outcomes. It offers potential design insights for embodied CT tools, but the current reliance on uncontrolled qualitative observations limits its immediate impact and generalizability.
Major comments (2)
- [Deployment and Observations] The central evaluation claim—that observations from the 32-student deployment demonstrate achievement of the four design goals (agency, transparency, embodiment, AI scaffolding)—is load-bearing but unsupported. No pre/post quantitative measures of CT understanding, control condition, baseline comparisons, or inter-rater reliability for qualitative coding are described, preventing attribution of student behaviors to the system rather than novelty or task structure.
- [Abstract and Evaluation] In the abstract and evaluation description, the manuscript states that robot and AI feedback 'influenced students' interactions with code, reflections on problem-solving strategies, and understanding of CT concepts,' yet supplies no specific data excerpts, coded examples, or metrics to substantiate these influences or map them directly to each of the four goals.
Minor comments (3)
- [System Description] The description of the conversational AI agent lacks details on the underlying model, prompt engineering, or how pedagogical grounding is implemented, which would aid reproducibility.
- [Design Process] No information is provided on the specific robot hardware, classroom task examples, or exact interview protocol with the five teachers, limiting readers' ability to assess alignment with the stated design goals.
- [Discussion] The paper would benefit from a table summarizing the four design goals alongside corresponding system features and observed student behaviors for clarity.
Simulated Author's Rebuttal
We thank the referee for their constructive feedback, which highlights important opportunities to strengthen the presentation of our evaluation. We agree that the exploratory nature of the 32-student deployment requires clearer documentation of observations and analysis to support the reported influences. We will revise the manuscript to address these points while preserving the qualitative, design-oriented focus of the work.
Point-by-point responses
-
Referee: [Deployment and Observations] The central evaluation claim—that observations from the 32-student deployment demonstrate achievement of the four design goals (agency, transparency, embodiment, AI scaffolding)—is load-bearing but unsupported. No pre/post quantitative measures of CT understanding, control condition, baseline comparisons, or inter-rater reliability for qualitative coding are described, preventing attribution of student behaviors to the system rather than novelty or task structure.
Authors: The evaluation is an exploratory deployment study designed to observe student interactions with the integrated RoboBlockly Studio system in a naturalistic classroom setting, rather than an experimental study claiming causal attribution or generalizability. The four design goals informed the system design based on teacher interviews, and the reported observations illustrate how robot and AI feedback manifested in student behaviors and reflections. We did not include pre/post measures or a control condition, as the study prioritized understanding the tight iterative loop of authoring, execution, and reflection in an embodied context. In revision, we will expand the methods section to detail the observation protocol, how behaviors were recorded and mapped to the design goals, and any steps taken for analysis consistency. If multiple observers were involved, inter-rater reliability will be reported; otherwise, we will note this as a limitation. revision: partial
-
Referee: [Abstract and Evaluation] In the abstract and evaluation description, the manuscript states that robot and AI feedback 'influenced students' interactions with code, reflections on problem-solving strategies, and understanding of CT concepts,' yet supplies no specific data excerpts, coded examples, or metrics to substantiate these influences or map them directly to each of the four goals.
Authors: We agree that the abstract and evaluation sections would be strengthened by concrete examples. In the revised version, we will add specific excerpts from the deployment observations—such as student quotes, described interaction sequences, or patterns in code revisions—that illustrate the influences of robot and AI feedback. These will be explicitly mapped to each of the four design goals (agency, transparency, embodiment, and AI scaffolding) to provide clearer substantiation of the reported effects on code interactions, reflections, and CT concept understanding. revision: yes
- Not addressed: pre/post quantitative measures of CT understanding and a control condition, as these elements were outside the scope of the original exploratory deployment study and cannot be added without new data collection.
Circularity Check
No circularity: empirical design and qualitative observation only
Full rationale
The paper presents a system design informed by five teacher interviews and evaluates it through direct qualitative observations from a single deployment with 32 students, noting influences on code interactions, reflections, and CT understanding. No mathematical derivations, equations, fitted parameters, predictions, or self-citations appear in the abstract or described content. The four design goals are stated as motivations for the system rather than outputs derived from any chain that reduces to the inputs by construction; the work is therefore self-contained as a straightforward design-and-observation study with no load-bearing circular steps.