pith. machine review for the scientific record.

arxiv: 2604.27349 · v1 · submitted 2026-04-30 · 💻 cs.CY · cs.AI

Recognition: unknown

Profiles of AI Dependency: A Latent Class Analysis of Filipino Students' Academic Competencies

Authors on Pith: no claims yet

Pith reviewed 2026-05-07 08:56 UTC · model grok-4.3

classification 💻 cs.CY cs.AI
keywords AI dependency · latent class analysis · academic competencies · Filipino college students · critical thinking · educational policy · AI literacy

The pith

Latent class analysis of Filipino college students identifies four AI dependency profiles, with AI-dependent learners showing the weakest academic competencies.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

The paper applies latent class analysis to survey responses from 651 students in Pampanga higher education institutions to map patterns of AI use. It finds four groups: highly engaged independent learners, selective AI users, moderate AI users, and AI-dependent learners. The AI-dependent group reports the lowest levels of critical thinking, writing skills, research skills, learning independence, and academic engagement. The authors argue this pattern signals a risk that heavy reliance on AI outputs erodes core academic abilities and call for policies that combine AI literacy with safeguards for independent skill development.

Core claim

Using latent class analysis on self-reported data, the study identifies four distinct profiles of AI dependency among Filipino students. AI-dependent learners, who rely heavily on AI-generated outputs for research and writing tasks, exhibit the weakest academic competencies across critical thinking, writing, research, independence, and engagement compared to the other profiles.

What carries the argument

Latent class analysis applied to survey items on AI usage frequency and perceived effects, which partitions students into four profiles that differ in dependency levels and reported academic competencies.
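To make the machinery concrete, here is a minimal latent class model on binary indicators fit with EM. This is a sketch, not the authors' instrument or code: the respondents, items, and class probabilities are invented, and the five binary items merely stand in for survey questions about AI use.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data standing in for the survey: 300 respondents x 5 binary items
# (e.g. "relies on AI for writing tasks"). Two planted classes with
# different endorsement probabilities; none of this is the study's data.
true_p = np.array([[0.9, 0.8, 0.85, 0.7, 0.9],    # "dependent" profile
                   [0.2, 0.3, 0.15, 0.25, 0.1]])  # "independent" profile
z = rng.integers(0, 2, size=300)
X = (rng.random((300, 5)) < true_p[z]).astype(float)

def lca_em(X, k, n_iter=200, seed=0):
    """Fit a k-class latent class model to binary data with EM.

    Returns class weights pi (k,), item-endorsement probabilities
    p (k, d), and the total log-likelihood (usable for AIC/BIC).
    """
    init = np.random.default_rng(seed)
    n, d = X.shape
    pi = np.full(k, 1.0 / k)
    p = init.uniform(0.25, 0.75, size=(k, d))
    for _ in range(n_iter):
        # E-step: class responsibilities under local independence
        log_lik = (X[:, None, :] * np.log(p) +
                   (1 - X[:, None, :]) * np.log(1 - p)).sum(axis=2)
        a = np.log(pi) + log_lik
        a -= a.max(axis=1, keepdims=True)
        resp = np.exp(a)
        resp /= resp.sum(axis=1, keepdims=True)
        # M-step: re-estimate mixing weights and item probabilities
        pi = resp.mean(axis=0)
        p = np.clip((resp.T @ X) / resp.sum(axis=0)[:, None],
                    1e-6, 1 - 1e-6)
    # total log-likelihood at the fitted parameters
    log_lik = (X[:, None, :] * np.log(p) +
               (1 - X[:, None, :]) * np.log(1 - p)).sum(axis=2)
    a = np.log(pi) + log_lik
    m = a.max(axis=1, keepdims=True)
    ll = (m.ravel() + np.log(np.exp(a - m).sum(axis=1))).sum()
    return pi, p, ll

pi, p, ll = lca_em(X, k=2)
```

On well-separated toy data like this, the recovered `pi` and `p` approximate the planted profiles (up to class-label switching); the study's four-class solution follows the same logic with more items and classes.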

If this is right

  • AI-dependent learners show significantly weaker critical thinking, writing, research skills, learning independence, and academic engagement than other groups.
  • Moderate to high AI dependency appears concentrated in research and writing tasks.
  • Educational institutions need policies that integrate AI literacy training while protecting core academic skill development.
  • Curriculum adaptations should balance technological tools with explicit instruction in independent thinking and ethical AI use.

Where Pith is reading between the lines

These are editorial extensions of the paper, not claims the authors make directly.

  • If the profiles hold, interventions could target the moderate and AI-dependent groups with skill-building modules that require unaided work before AI assistance.
  • Longitudinal tracking of the same students could test whether AI dependency predicts later declines in skill performance beyond self-report.
  • The four-profile structure might shift if the survey is repeated with objective measures such as writing samples or problem-solving tests instead of perceptions.

Load-bearing premise

Students' self-reported survey answers accurately capture their real AI usage behaviors and the effects on their skills without meaningful social desirability bias or recall error.

What would settle it

An independent study that tracks actual AI tool usage logs or measures competencies through direct tasks and finds no systematic difference between the self-reported AI-dependent profile and the other profiles.

Original abstract

The increasing dependency among Filipino college students on artificial intelligence (AI) poses concerns about the potential decline of fundamental academic competencies. This study examines the extent of AI dependency and its perceived effects on students' critical thinking, writing skills, learning independence, research skills, and academic engagement. Using a cross-sectional research design, data was collected from 651 students enrolled in higher education institutions (HEIs) in Pampanga, Philippines accredited by the Commission on Higher Education. The survey data was analyzed using Latent Class Analysis (LCA) to identify AI dependency patterns. Findings indicated that students show moderate to high AI dependency, specifically in research and writing tasks. LCA identified four distinct profiles: highly engaged independent learners, selective AI users, moderate AI users, and AI-dependent learners. Notably, AI-dependent learners demonstrated the weakest academic competencies, with significant dependency on AI-generated outputs. The study highlights the need to foster educational policies that integrate AI literacy while preserving essential academic skills. HEIs must also balance technological advancements with curriculum adaptations to promote critical thinking and ethical use of AI. Future research may explore the longitudinal impacts and intervention strategies to mitigate academic skill erosion caused by AI dependency.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, this is the friction.

Referee Report

2 major / 2 minor

Summary. This paper reports a cross-sectional survey of 651 students from accredited HEIs in Pampanga, Philippines, and applies latent class analysis (LCA) to self-reported data on AI use in research/writing tasks and perceived effects on critical thinking, writing skills, learning independence, research skills, and academic engagement. It claims to identify four distinct AI-dependency profiles (highly engaged independent learners, selective AI users, moderate AI users, and AI-dependent learners), with the AI-dependent class showing the weakest competencies, and recommends educational policies that integrate AI literacy while preserving core academic skills.

Significance. If the four-class solution and the competency gradient hold after proper validation, the work would provide descriptive evidence on heterogeneous AI-dependency patterns in a Philippine higher-education sample and could inform targeted interventions. The use of LCA to uncover subgroups is methodologically appropriate for the research question, but the absence of objective corroboration for the self-report indicators substantially reduces the actionability and generalizability of the claimed profiles.

major comments (2)
  1. [Methods] Methods / Data Analysis: The manuscript provides no information on LCA model-fit indices (AIC, BIC, aBIC), entropy, Lo-Mendell-Rubin or bootstrap likelihood-ratio tests, or the class-enumeration procedure used to select the four-class solution. Without these statistics it is impossible to evaluate whether the reported profiles are statistically preferred over three- or five-class alternatives.
  2. [Methods] Data Collection / Measures: All indicators of AI usage and perceived competency effects are drawn from a single self-report instrument with no reported use of social-desirability scales, objective skill assessments (e.g., writing samples, critical-thinking tests), behavioral logs, or cross-validation against external criteria. This is load-bearing for the central claim that “AI-dependent learners demonstrated the weakest academic competencies,” because the observed class separation and competency ordering could reflect reporting bias rather than actual behavior.
minor comments (2)
  1. [Abstract] Abstract: The statement that students show “moderate to high AI dependency” is not accompanied by any descriptive statistics (means, percentages, or item-level responses) that would allow readers to gauge the magnitude of the reported dependency.
  2. [Methods] The manuscript should cite standard LCA references (e.g., Nylund et al. on class enumeration or Asparouhov & Muthén on fit indices) to justify the analytic choices.

Simulated Authors' Rebuttal

2 responses · 0 unresolved

We thank the referee for the constructive and detailed feedback. We address each major comment below and indicate the revisions planned for the resubmission. The additional methodological transparency will strengthen the paper while we maintain an honest discussion of the study's design constraints.

Point-by-point responses
  1. Referee: [Methods] Methods / Data Analysis: The manuscript provides no information on LCA model-fit indices (AIC, BIC, aBIC), entropy, Lo-Mendell-Rubin or bootstrap likelihood-ratio tests, or the class-enumeration procedure used to select the four-class solution. Without these statistics it is impossible to evaluate whether the reported profiles are statistically preferred over three- or five-class alternatives.

    Authors: We agree that these details are essential for evaluating the four-class solution. The original submission omitted the full model-comparison table for brevity. In the revised manuscript we will add a dedicated subsection reporting AIC, BIC, aBIC, entropy, and the Lo-Mendell-Rubin and bootstrap likelihood-ratio test results for the 1- through 6-class models. The four-class solution was retained because it yielded the lowest BIC and aBIC values, entropy above 0.80, statistically significant LMR and BLRT tests, and the most interpretable and theoretically coherent profiles. revision: yes

  2. Referee: [Methods] Data Collection / Measures: All indicators of AI usage and perceived competency effects are drawn from a single self-report instrument with no reported use of social-desirability scales, objective skill assessments (e.g., writing samples, critical-thinking tests), behavioral logs, or cross-validation against external criteria. This is load-bearing for the central claim that “AI-dependent learners demonstrated the weakest academic competencies,” because the observed class separation and competency ordering could reflect reporting bias rather than actual behavior.

    Authors: We recognize that exclusive reliance on self-report data introduces the possibility of reporting bias and that the absence of objective corroboration limits causal or behavioral claims. The study was conceived as a large-scale exploratory survey of perceived AI dependency and its self-assessed academic correlates; collecting objective measures or behavioral logs was not feasible within the available resources and timeline. In the revision we will expand the limitations paragraph to explicitly address social-desirability concerns, the potential for response bias to influence class separation, and the descriptive rather than confirmatory nature of the findings. We will also recommend future mixed-methods or longitudinal designs that incorporate objective assessments to validate the profiles. revision: partial
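The class-enumeration procedure the rebuttal describes (fitting 1- through 6-class models and keeping the solution with the best information criteria) can be sketched with a Gaussian mixture, the continuous-indicator analogue of LCA. The simulated competency scores and the two planted profiles below are invented for illustration; they are not the study's data or its fitted model.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)
# Simulated continuous "competency" scores for two planted profiles,
# standing in for the survey scales (hypothetical, not the authors' data).
X = np.vstack([
    rng.normal(loc=[4.0, 4.2, 3.9], scale=0.4, size=(200, 3)),  # independent
    rng.normal(loc=[2.1, 2.0, 2.3], scale=0.4, size=(150, 3)),  # dependent
])

# Enumerate 1- through 6-class solutions and compare BIC; lower is better.
# Categorical LCA uses the same enumeration logic with its own likelihood.
bics = {}
for k in range(1, 7):
    gm = GaussianMixture(n_components=k, n_init=5, random_state=0).fit(X)
    bics[k] = gm.bic(X)

best_k = min(bics, key=bics.get)
print(best_k)  # → 2 (the planted number of profiles)
```

In the paper's setting, this table of fit indices for the 1- through 6-class models, plus entropy and the LMR/BLRT tests, is exactly what the referee asks to see before accepting the four-class solution.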

Circularity Check

0 steps flagged

No circularity: standard empirical LCA on survey data

full rationale

The paper collects cross-sectional self-report survey data from 651 students and applies Latent Class Analysis to classify AI dependency profiles and their association with perceived competencies. No equations, derivations, fitted parameters renamed as predictions, or self-citation chains appear in the abstract or described method. The four profiles emerge directly from the LCA on the observed indicators; the claim that AI-dependent learners show weakest competencies is an empirical output, not a definitional or self-referential reduction. This matches the default case of a non-circular empirical classification study.

Axiom & Free-Parameter Ledger

1 free parameter · 2 axioms · 0 invented entities

The central claim rests on standard LCA assumptions and survey measurement validity rather than new postulates. No invented entities. Free parameters are limited to the data-driven choice of four classes.

free parameters (1)
  • Number of latent classes
    Chosen as four based on model fit; this is a standard LCA hyperparameter but directly shapes the reported profiles.
axioms (2)
  • standard math Local independence assumption in latent class analysis (indicators are independent given class membership)
    Invoked implicitly by using LCA; standard in the method but not explicitly tested or discussed in the abstract.
  • domain assumption Self-reported survey items validly measure AI dependency and academic competencies
    Core measurement assumption required for interpreting the profiles as real behavioral patterns.
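The local-independence axiom can be made concrete with a toy two-class model: items are independent within each class, yet the class mixture induces the marginal item correlations that LCA exploits to recover the classes. All probabilities below are hypothetical.

```python
import numpy as np
from itertools import product

# Hypothetical two-class model over three binary items.
pi = np.array([0.4, 0.6])                # class weights
p = np.array([[0.9, 0.8, 0.85],          # item endorsement probs, class 1
              [0.2, 0.3, 0.15]])         # item endorsement probs, class 2

def joint(x):
    """P(x) = sum_c pi_c * prod_j p_cj^x_j * (1 - p_cj)^(1 - x_j)."""
    x = np.asarray(x)
    return float((pi * np.prod(p**x * (1 - p)**(1 - x), axis=1)).sum())

patterns = list(product([0, 1], repeat=3))
total = sum(joint(x) for x in patterns)
# Marginal association between items 0 and 1, induced purely by mixing:
p1 = sum(joint(x) for x in patterns if x[0] == 1)
p2 = sum(joint(x) for x in patterns if x[1] == 1)
p12 = sum(joint(x) for x in patterns if x[0] == 1 and x[1] == 1)
print(round(total, 6), p12 > p1 * p2)  # prints: 1.0 True
```

Within a class the items carry no information about each other; across the mixed population they covary, which is why violations of local independence (e.g. two near-duplicate survey items) can spuriously inflate the number of classes.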

pith-pipeline@v0.9.0 · 5555 in / 1410 out tokens · 53923 ms · 2026-05-07T08:56:02.965143+00:00 · methodology

discussion (0)


Reference graph

Works this paper leans on

26 extracted references · 20 canonical work pages

  1. [1]

    H Akaike. 1974. A new look at the statistical model identification. IEEE Trans. Automat. Contr. 19, 6 (1974), 716–723. https://doi.org/10.1109/TAC.1974.1100705

  2. [2]

    Angelo C. Arguson, Marvin P. Mabborang, and Rhonnel S. Paculanan. 2023. The Acceptability of Generative AI Tools of Selected Senior High School Teachers in Schools Divisions Office-Manila, Philippines. J. Artif. Intell. Learn. Neural Netw. 3, 06 SE-Articles (October 2023), 1–10. https://doi.org/10.55529/jaimlnn.36.1.10

  3. [3]

    Angelo C. Arguson, Rhonnel S. Paculanan, and Marvin P. Mabborang. 2024. Investigating the Acceptability of Students on Generative AI Tools: A Correlation Analysis among Selected Tertiary Schools in Metro Manila, Philippines. J. Artif. Intell. Learn. Neural Netw. 4, 5 SE-Articles (August 2024), 7–18. https://doi.org/10.55529/jaimlnn.45.7.18

  4. [4]

    Francisco Castro, Jian Gao, and Sébastien Martin. 2024. Human-AI Interactions and Societal Pitfalls. In Proceedings of the 25th ACM Conference on Economics and Computation (EC ’24), 2024. Association for Computing Machinery, New York, NY, USA, 205. https://doi.org/10.1145/3670865.3673482

  5. [5]

    Eriona Çela, Mathias Mbu Fonkam, and Rajasekhara Mouly Potluri. 2024. Risks of AI-Assisted Learning on Student Critical Thinking: A Case Study of Albania. Int. J. Risk Conting. Manag. 12, 1 (2024), 1–19. https://doi.org/10.4018/IJRCM.350185

  6. [6]

    Marlon A Diloy, Cyrix Pearl E Comparativo, John Carl T Reyes, Berna Jhane M Eusebio, and Lance Ian C Morona. 2024. Exploring the Landscape of AI Tools in Student Learning: An analysis of commonly utilized AI Tools at a university in the Philippines. In Proceedings of the 2023 6th Artificial Intelligence and Cloud Computing Conference (AICCC ’23 ), 2024. A...

  7. [7]

    Sérgio Gaitas, José Castro Silva, and António Poças. 2024. A latent class analysis on students’ beliefs about teachers’ practices enhancing their well-being. Front. Educ. 9, (2024)

  8. [8]

    Michael Gerlich. 2025. AI Tools in Society: Impacts on Cognitive Offloading and the Future of Critical Thinking. Societies 15. https://doi.org/10.3390/soc15010006

  9. [9]

    Gwo-Jen Hwang, Haoran Xie, Benjamin W Wah, and Dragan Gašević. 2020. Vision, challenges, roles and research issues of Artificial Intelligence in Education. Comput. Educ. Artif. Intell. 1, (2020), 100001. https://doi.org/10.1016/j.caeai.2020.100001

  10. [10]

    D A Junio and A A Bandala. 2023. Utilization of Artificial Intelligence in Academic Writing Class: L2 Learners Perspective. In 2023 IEEE 15th International Conference on Humanoid, Nanotechnology, Information Technology, Communication and Control, Environment, and Management, HNICEM 2023, 2023. https://doi.org/10.1109/HNICEM60674.2023.10589003

  11. [11]

    Kristen P Kremer, Michael G Vaughn, and Travis M Loux. 2018. Parent and peer social norms and youth’s post-secondary attitudes: A latent class analysis. Child. Youth Serv. Rev. 93, (2018), 411–417. https://doi.org/10.1016/j.childyouth.2018.08.026

  12. [12]

    Haozhuo Lin and Qiu Chen. 2024. Artificial intelligence (AI)-integrated educational applications and college students’ creativity and academic emotions: students and teachers’ perceptions and attitudes. BMC Psychol. 12, 1 (September 2024), 487. https://doi.org/10.1186/s40359-024-01979-0

  13. [13]

    Marzuki, Utami Widiati, Diyenti Rusdin, Darwin, and Inda Indrawati. 2023. The impact of AI writing tools on the content and organization of students’ writing: EFL teachers’ perspective. Cogent Educ. 10, 2 (December 2023), 2236469. https://doi.org/10.1080/2331186X.2023.2236469

  14. [14]

    Anja Møgelvang, Camilla Bjelland, Simone Grassini, and Kristine Ludvigsen. Gender Differences in the Use of Generative Artificial Intelligence Chatbots in Higher Education: Characteristics and Consequences. Education Sciences 14. https://doi.org/10.3390/educsci14121363

  16. [16]

    Wilter C Morales-García, Liset Z Sairitupa-Sanchez, Sandra B Morales-García, and Mardel Morales-García. 2024. Development and validation of a scale for dependence on artificial intelligence in university students. Front. Educ. 9, (2024)

  17. [17]

    Karen L Nylund, Tihomir Asparouhov, and Bengt O Muthén. 2007. Deciding on the number of classes in latent class analysis and growth mixture modeling: A Monte Carlo simulation study. Structural Equation Modeling 14, 535–569. https://doi.org/10.1080/10705510701575396

  18. [18]

    Neil Selwyn. 2016. Is Technology Good for Education? Polity Press

  19. [19]

    Sayeda Sapna Shah and Muhammad Mujtaba Asad. 2024. Impact of Critical Thinking Approach on Learners’ Dependence on Innovative Transformation Through Artificial Intelligence. In The Evolution of Artificial Intelligence in Higher Education, Miltiadis D Lytras, Afnan Alkhaldi, Sawsan Malik, Andreea Claudia Serban and Tahani Aldosemani (eds.). Emerald Publish...

  20. [20]

    Xiaolei Shen and Mark Feng Teng. 2024. Three-wave cross-lagged model on the correlations between critical thinking skills, self-directed learning competency and AI-assisted writing. Think. Ski. Creat. 52, (2024), 101524. https://doi.org/10.1016/j.tsc.2024.101524

  21. [21]

    Christian Stöhr, Amy Wanyu Ou, and Hans Malmström. 2024. Perceptions and usage of AI chatbots among students in higher education across genders, academic levels and fields of study. Comput. Educ. Artif. Intell. 7, (2024), 100259. https://doi.org/10.1016/j.caeai.2024.100259

  22. [22]

    Jenn-Yun Tein, Stefany Coxe, and Heining Cham. 2013. Statistical Power to Detect the Correct Number of Classes in Latent Profile Analysis. Struct. Equ. Modeling 20, 4 (October 2013), 640–657. https://doi.org/10.1080/10705511.2013.824781

  23. [23]

    Resti Tito Villarino. 2025. Artificial Intelligence (AI) integration in Rural Philippine Higher Education: Perspectives, challenges, and ethical considerations. IJERI Int. J. Educ. Res. Innov. 23 SE- (February 2025). https://doi.org/10.46661/ijeri.10909

  24. [24]

    Tommy T Wijaya, Qingchun Yu, Yiming Cao, Yahan He, and Frederick K S Leung. Latent Profile Analysis of AI Literacy and Trust in Mathematics Teachers and Their Relations with AI Dependency and 21st-Century Skills. Behavioral Sciences 14. https://doi.org/10.3390/bs14111008

  26. [26]

    Shunan Zhang, Xiangying Zhao, Tong Zhou, and Jang Hyun Kim. 2024. Do you have AI dependency? The roles of academic self-efficacy, academic stress, and performance expectations on problematic AI usage behavior. Int. J. Educ. Technol. High. Educ. 21, 1 (2024), 34. https://doi.org/10.1186/s41239-024-00467-0