pith. machine review for the scientific record.

arxiv: 2605.08869 · v1 · submitted 2026-05-09 · 💻 cs.DL

Recognition: no theorem link

Horizontal and Longitudinal Comparisons Among AI Subfields: A Bibliometric Perspective

Lu Yuan, Shuyu Chen, Tingxin Jiang, Xinyi Chang, Yalan Jin, Zeyu Li

Authors on Pith: no claims yet

Pith reviewed 2026-05-12 01:25 UTC · model grok-4.3

classification 💻 cs.DL
keywords AI subfields · bibliometric analysis · knowledge diffusion · structural differentiation · computer vision · machine learning · natural language processing · web and information retrieval

The pith

AI subfields have shifted to high-intensity knowledge diffusion and now show clear structural differences in impact, collaboration, and growth.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

The paper uses bibliometric analysis on 106,622 papers from 2000 to 2024 across five AI subfields to compare their development. It establishes that all subfields now feature faster knowledge spread, greater reliance on external fields, and a move from isolated work to open, interdisciplinary networks involving multiple actors. At the same time, the subfields have differentiated: computer vision leads in academic impact on a task-focused path, machine learning has tightened international ties while reducing industry links, web and information retrieval stays industry-driven with stable networks, AI grows steadily, and natural language processing holds relatively steady. This matters because it shows AI is no longer evolving as one uniform field but through distinct paths that could call for different support strategies.

Core claim

Analysis of papers classified by CSRankings into AI, CV, ML, NLP, and Web&IR reveals that these subfields have entered high-intensity knowledge diffusion: academic impact is rising, dissemination has accelerated, reliance on external disciplines is growing, and knowledge production has shifted from closed accumulation to open, interdisciplinary, multi-actor networks. This is accompanied by structural differentiation: CV leads in academic impact with a task-oriented trajectory; ML shows shrinking industry collaboration but concentrated international collaboration with a dispersed structure; Web&IR is strongly industry-driven with a stable collaboration network; AI exhibits continuous growth; and NLP remains relatively stable.

What carries the argument

A multidimensional bibliometric framework of twelve expert-selected indicators across impact and dissemination, collaboration characteristics, and author characteristics, tracked longitudinally and horizontally via violin plots, chord diagrams, and Sankey diagrams on CSRankings-classified data.

If this is right

  • Knowledge production across AI subfields has moved from closed accumulation to open interdisciplinary networks.
  • Computer vision shows the strongest academic impact and follows a task-oriented development path.
  • Machine learning displays reduced industry collaboration alongside concentrated international ties.
  • Web and information retrieval remains heavily driven by industry with stable collaboration structures.
  • AI overall continues to grow while natural language processing stays comparatively stable.

Where Pith is reading between the lines

These are editorial extensions of the paper, not claims the author makes directly.

  • Uniform policies for AI may miss the distinct collaboration and impact needs of individual subfields.
  • The move toward open networks could accelerate cross-subfield idea exchange beyond what the paper measures.
  • Extending the same indicator set to post-2024 data or emerging areas would test whether differentiation persists or intensifies.

Load-bearing premise

The CSRankings-based assignment of papers to subfields and the choice of twelve bibliometric indicators accurately and consistently capture real structural differences and evolutionary paths without major misclassification or bias over 2000-2024.

What would settle it

Repeating the full analysis with an alternative subfield classification system or a different set of indicators and obtaining no evidence of increased diffusion or the reported subfield-specific patterns in impact and collaboration would falsify the central claims.

original abstract

Recent artificial intelligence has developed rapidly with significant interdisciplinary expansion, yet existing studies often treat it as a whole, lacking systematic long-term subfield comparisons and structural analyses, thereby limiting understanding of internal differences and evolutionary mechanisms. To address this gap, we employ bibliometric methods, using expert interviews and indicator screening to construct an analytical framework. Twelve bibliometric indicators are selected across three dimensions: Impact and Dissemination, Collaboration Characteristics, and Author Characteristics. We conduct horizontal and longitudinal analyses of five subfields (AI, CV, ML, NLP, Web&IR) from 2000 to 2024. Using CSRankings classification and a dataset of 106,622 papers, we apply violin plots, chord diagrams, and Sankey diagrams to characterize structural features and evolutionary paths. Results show that these subfields have entered high-intensity knowledge diffusion: academic impact increased, knowledge dissemination accelerated, external disciplinary reliance grew, and knowledge production shifted from closed accumulation to open, interdisciplinary, multi-actor networks. On this basis, subfields exhibit significant structural differentiation: CV leads in academic impact with a task-oriented trajectory; ML shows shrinking industry collaboration but concentrated international collaboration with a relatively dispersed structure; Web&IR is strongly industry-driven with a stable collaboration network; AI shows continuous growth; NLP remains relatively stable. Overall, this study reveals artificial intelligence evolving from unified diffusion to structural differentiation, constructs an extensible multidimensional framework, and provides a quantitative approach for understanding complex technological field evolution.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, this is the friction.

Referee Report

3 major / 3 minor

Summary. The paper performs a bibliometric analysis of five AI subfields (AI, CV, ML, NLP, Web&IR) over 2000–2024 using 106,622 papers classified via CSRankings. It selects twelve indicators across Impact and Dissemination, Collaboration Characteristics, and Author Characteristics via expert input, then applies violin plots, chord diagrams, and Sankey diagrams for horizontal and longitudinal comparisons. The central claim is that the subfields have entered high-intensity knowledge diffusion with accelerated impact, dissemination, external reliance, and a shift to open interdisciplinary networks, accompanied by significant structural differentiation (CV leads impact with task-oriented path; ML shows reduced industry but concentrated international ties; Web&IR is industry-driven with stable networks; AI grows continuously; NLP is stable).

Significance. If the subfield labels and indicator framework hold, the study supplies a useful quantitative baseline for tracking internal differentiation within AI rather than treating the field monolithically. The large corpus and multi-dimensional, visualization-driven approach could inform future work on technological evolution and provide an extensible template for other domains. The descriptive patterns on collaboration shifts and impact growth are timely given rapid AI expansion.

major comments (3)
  1. [Data and Methods] The paper assigns papers to subfields solely via CSRankings venue lists but reports no validation of classification accuracy, no misclassification estimates, and no sensitivity checks for venue-based errors (especially pre-2010 papers or cross-subfield work). Because all differentiation claims (CV impact leadership, ML industry shrinkage, etc.) rest on these group comparisons, unquantified assignment error directly threatens the validity of the structural-differentiation narrative.
  2. [Results] The visualizations document indicator trends and network patterns, yet the manuscript presents no statistical tests (e.g., trend significance, between-subfield ANOVA, or confidence intervals) for the asserted “significant structural differentiation” or “high-intensity knowledge diffusion.” Descriptive plots alone do not establish that observed differences exceed what sampling or classification noise would produce.
  3. [Methods] The expert-interview and indicator-screening process is described only at a high level; the number of experts, their selection criteria, the exact screening protocol, and any measure of consensus are omitted. This information is required to evaluate whether the twelve-indicator framework is reproducible or inadvertently biased toward certain dimensions.
minor comments (3)
  1. [Abstract] In the abstract and text, “Web&IR” should be rendered consistently (e.g., “Web & IR” or “Web/IR”) to avoid LaTeX artifacts in the published version.
  2. Figures: Violin-plot and Sankey-diagram captions would benefit from explicit guidance on how to interpret the distributions and flows in relation to the three indicator dimensions.
  3. [Data and Methods] Dataset description: A brief statement on deduplication, language filtering, and handling of journal versus conference papers would improve reproducibility.

Simulated Authors' Rebuttal

3 responses · 0 unresolved

We thank the referee for the constructive feedback, which highlights important areas for improving transparency and rigor. We address each major comment below and will revise the manuscript accordingly to strengthen the presentation of our bibliometric framework and findings.

point-by-point responses
  1. Referee: Data and Methods: The paper assigns papers to subfields solely via CSRankings venue lists but reports no validation of classification accuracy, no misclassification estimates, and no sensitivity checks for venue-based errors (especially pre-2010 papers or cross-subfield work). Because all differentiation claims (CV impact leadership, ML industry shrinkage, etc.) rest on these group comparisons, unquantified assignment error directly threatens the validity of the structural-differentiation narrative.

    Authors: We agree that explicit validation and sensitivity analysis would enhance confidence in the subfield assignments. CSRankings venue lists are a standard, expert-curated resource in computer science, but the manuscript does not report sample-based checks or error estimates. In the revision, we will add a dedicated subsection on classification methodology, including a manual audit of a random sample of papers (e.g., 500 papers) to estimate misclassification rates, discussion of pre-2010 venue stability, and sensitivity checks by re-computing key indicators after excluding borderline venues or using an alternative venue-based partitioning. This will directly support the structural-differentiation claims. revision: yes

  2. Referee: Results: The visualizations document indicator trends and network patterns, yet the manuscript presents no statistical tests (e.g., trend significance, between-subfield ANOVA, or confidence intervals) for the asserted “significant structural differentiation” or “high-intensity knowledge diffusion.” Descriptive plots alone do not establish that observed differences exceed what sampling or classification noise would produce.

    Authors: The analysis is exploratory and descriptive, relying on consistent multi-indicator patterns across large-scale visualizations to characterize evolution rather than formal hypothesis testing. However, we acknowledge that adding inferential statistics would better substantiate claims of differentiation. In the revised version, we will incorporate trend significance tests (e.g., Mann-Kendall for longitudinal changes), between-subfield comparisons (e.g., Kruskal-Wallis with post-hoc tests), and bootstrap-derived confidence intervals or error bands on the violin plots and network metrics where feasible, while clarifying that the core contribution remains the visualization-driven characterization of diffusion and differentiation. revision: yes

  3. Referee: Methods: The expert-interview and indicator-screening process is described only at a high level; the number of experts, their selection criteria, the exact screening protocol, and any measure of consensus are omitted. This information is required to evaluate whether the twelve-indicator framework is reproducible or inadvertently biased toward certain dimensions.

    Authors: We recognize that the high-level description limits reproducibility assessment. The indicator selection drew on expert input to ensure relevance across Impact, Collaboration, and Author dimensions, but details were condensed. In the revision, we will expand the Methods section with specifics on the expert consultation (number of experts, their subfield expertise and selection via publication records and institutional diversity), the iterative screening protocol (initial longlist, scoring criteria, rounds of review), and consensus measures (e.g., agreement percentages or Delphi-style resolution). This will allow readers to evaluate potential bias and replicate the framework. revision: yes
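The misclassification audit promised in response 1 reduces to estimating a proportion with an error bar. A minimal sketch, with hypothetical counts (23 papers judged misclassified out of a 500-paper manual sample — illustrative numbers, not results from the paper) and a normal-approximation interval:

```python
import math

def misclassification_ci(errors, n, z=1.96):
    """Point estimate and normal-approximation 95% confidence interval
    for the misclassification rate found in an audit of n sampled papers."""
    p = errors / n
    half = z * math.sqrt(p * (1 - p) / n)  # normal-approx half-width
    return p, max(0.0, p - half), min(1.0, p + half)

# Hypothetical audit outcome: 23 of 500 sampled papers misclassified.
p, lo, hi = misclassification_ci(23, 500)
# p == 0.046; the interval spans roughly 0.028 to 0.064
```

An interval like this would let readers judge whether venue-based assignment error is small relative to the between-subfield differences the paper reports; a Wilson interval would be a reasonable alternative for rarer error rates.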
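The Mann-Kendall trend test proposed in response 2 needs no external packages. A minimal two-sided version without tie correction, run on a hypothetical rising indicator series rather than the paper's data:

```python
import math

def mann_kendall(x):
    """Two-sided Mann-Kendall trend test (no tie correction):
    returns the S statistic and an approximate normal p-value."""
    n = len(x)
    # S counts concordant minus discordant pairs over all i < j.
    s = sum(
        (x[j] > x[i]) - (x[j] < x[i])
        for i in range(n - 1)
        for j in range(i + 1, n)
    )
    var = n * (n - 1) * (2 * n + 5) / 18  # Var(S) without ties
    if s > 0:
        z = (s - 1) / math.sqrt(var)  # continuity correction
    elif s < 0:
        z = (s + 1) / math.sqrt(var)
    else:
        z = 0.0
    p = math.erfc(abs(z) / math.sqrt(2))  # two-sided normal tail
    return s, p

# Monotonically rising toy series: S is maximal and p is small.
s, p = mann_kendall([1, 2, 3, 5, 8, 13, 21, 34])
```

For a strictly increasing series of length 8, S equals the number of pairs (28) and the test rejects the no-trend null at conventional levels; applied per subfield and indicator, this is the kind of evidence that would back the "high-intensity diffusion" claim statistically.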

Circularity Check

0 steps flagged

No significant circularity: data-driven bibliometric comparisons using external classification

full rationale

The paper conducts horizontal and longitudinal comparisons of AI subfields using CSRankings venue-based classification of 106,622 papers and twelve standard bibliometric indicators selected via expert interviews and screening. No mathematical derivations, equations, fitted parameters, or predictions appear in the provided text; results consist of direct visualizations (violin plots, chord diagrams, Sankey diagrams) and descriptive comparisons of impact, collaboration, and author metrics across subfields and time. The central claims of high-intensity diffusion and structural differentiation follow from these empirical patterns without reduction to self-defined quantities or self-citation chains. The analysis is grounded in external data and an external classification scheme and does not exhibit any of the enumerated circularity patterns.

Axiom & Free-Parameter Ledger

0 free parameters · 2 axioms · 0 invented entities

The analysis rests on standard bibliometric assumptions that citation counts and collaboration metrics reflect impact and structure; no free parameters or invented entities are introduced beyond the indicator selection process.

axioms (2)
  • domain assumption CSRankings classification accurately assigns papers to subfields without substantial overlap or error
    Used to build the 106,622 paper dataset for all comparisons
  • domain assumption The twelve selected indicators across three dimensions capture the key structural features of subfield evolution
    Chosen via expert interviews and screening as the analytical framework

pith-pipeline@v0.9.0 · 5575 in / 1381 out tokens · 34152 ms · 2026-05-12T01:25:51.902423+00:00 · methodology

discussion (0)


Reference graph

Works this paper leans on

52 extracted references · 52 canonical work pages
