pith. machine review for the scientific record. sign in

arxiv: 2604.19489 · v1 · submitted 2026-04-21 · 💻 cs.CV · cs.CY

Recognition: unknown

Seeing Candidates at Scale: Multimodal LLMs for Visual Political Communication on Instagram

Authors on Pith no claims yet

Pith reviewed 2026-05-10 02:32 UTC · model grok-4.3

classification 💻 cs.CV cs.CY
keywords multimodal LLMsvisual political communicationInstagramface recognitionperson countingpolitical campaignsGPT-4oelection analysis
0
0 comments X

The pith

GPT-4o outperforms traditional computer vision models at recognizing politicians and counting people in Instagram campaign images.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

This paper compares specialized machine learning models with multimodal large language models for analyzing visual political communication on Instagram. It examines images from the 2021 German federal election to identify front-runner politicians and count individuals in stories and posts. GPT-4o achieved macro F1-scores of 0.89 for face recognition and 0.86 for person counting, surpassing FaceNet512, RetinaFace, and Google Cloud Vision. These results indicate that advanced AI can process large volumes of social media visuals that influence public views of candidates. Scaling such analysis matters because manual review of campaign imagery is too slow for comprehensive studies.

Core claim

The paper shows that the multimodal large language model GPT-4o outperforms traditional computer vision models in identifying front-runner politicians and counting individuals in Instagram stories and posts during the 2021 German federal election campaign, reaching a macro F1-score of 0.89 for face recognition and 0.86 for person counting while highlighting the potential of such systems to scale visual political communication analysis.

What carries the argument

Multimodal large language model GPT-4o applied to simultaneous face recognition of politicians and person counting in campaign imagery from Instagram.

Load-bearing premise

The manually created ground-truth labels for politician identities and person counts are accurate and free of systematic bias across the Instagram dataset.

What would settle it

Independent re-annotation of a random sample of the Instagram images by multiple human coders, where low inter-annotator agreement with the original labels would show the evaluation metrics rest on flawed ground truth.

Figures

Figures reproduced from arXiv: 2604.19489 by Christian Wolff, Mario Haim, Michael Achmann-Denkler.

Figure 1
Figure 1. Figure 1: Dimensions of personalization according to [PITH_FULL_IMAGE:figures/full_fig_p002_1.png] view at source ↗
Figure 2
Figure 2. Figure 2: To measure concentrated visibility, we use the [PITH_FULL_IMAGE:figures/full_fig_p006_2.png] view at source ↗
Figure 3
Figure 3. Figure 3: The prompt used to determine the presence of a political leader in social media images. The [PITH_FULL_IMAGE:figures/full_fig_p008_3.png] view at source ↗
Figure 4
Figure 4. Figure 4: The face recognition workflow starts with face [PITH_FULL_IMAGE:figures/full_fig_p008_4.png] view at source ↗
Figure 6
Figure 6. Figure 6: We filtered the corpus to include only images [PITH_FULL_IMAGE:figures/full_fig_p009_6.png] view at source ↗
Figure 7
Figure 7. Figure 7: Confusion matrices comparing the computational classifications (predicted labels) with the human annotations [PITH_FULL_IMAGE:figures/full_fig_p011_7.png] view at source ↗
Figure 8
Figure 8. Figure 8: Several examples of stories where the model [PITH_FULL_IMAGE:figures/full_fig_p011_8.png] view at source ↗
Figure 9
Figure 9. Figure 9: Three confusion matrices comparing the person count performance across the four labels and between the [PITH_FULL_IMAGE:figures/full_fig_p012_9.png] view at source ↗
Figure 10
Figure 10. Figure 10: Three examples of images where the RetinaFace model detected faces. In the face recog￾nition stage, human annotators identify the faces. When counting persons, human annotators agreed that the correct person count is 0. 4 Results: Case Study In this section, we present the findings from our case study analysis of concentrated visibility in visual political com￾munication. The focus is on identifying patte… view at source ↗
Figure 11
Figure 11. Figure 11: Comparison of front-runner appearances in Instagram [PITH_FULL_IMAGE:figures/full_fig_p014_11.png] view at source ↗
Figure 12
Figure 12. Figure 12: Comparison of front-runner appearances in Instagram [PITH_FULL_IMAGE:figures/full_fig_p014_12.png] view at source ↗
Figure 13
Figure 13. Figure 13: Confusion matrices comparing the computational classifications (predicted labels) with the human annota [PITH_FULL_IMAGE:figures/full_fig_p020_13.png] view at source ↗
Figure 14
Figure 14. Figure 14: Three confusion matrices comparing the person count performance across the four labels and between the [PITH_FULL_IMAGE:figures/full_fig_p020_14.png] view at source ↗
read the original abstract

This paper presents a computational case study that evaluates the capabilities of specialized machine learning models and emerging multimodal large language models for Visual Political Communication (VPC) analysis. Focusing on concentrated visibility in Instagram stories and posts during the 2021 German federal election campaign, we compare the performance of traditional computer vision models (FaceNet512, RetinaFace, Google Cloud Vision) with a multimodal large language model (GPT-4o) in identifying front-runner politicians and counting individuals in images. GPT-4o outperformed the other models, achieving a macro F1-score of 0.89 for face recognition and 0.86 for person counting in stories. These findings demonstrate the potential of advanced AI systems to scale and refine visual content analysis in political communication while highlighting methodological considerations for future research.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, this is the friction.

Referee Report

2 major / 2 minor

Summary. The paper presents an empirical case study comparing traditional computer vision models (FaceNet512, RetinaFace, Google Cloud Vision) against the multimodal LLM GPT-4o for two tasks in visual political communication: identifying front-runner politicians via face recognition and counting individuals in Instagram stories/posts from the 2021 German federal election campaign. It reports that GPT-4o achieves the highest performance with macro F1-scores of 0.89 (face recognition) and 0.86 (person counting in stories), arguing this demonstrates the potential of MLLMs to scale VPC analysis.

Significance. If the ground-truth labels prove reliable, the work provides a useful head-to-head comparison showing MLLMs can outperform specialized CV pipelines on real-world political imagery, with direct implications for scaling computational analysis in political communication and social media studies. The empirical focus and concrete F1 metrics are strengths, but the absence of dataset scale, annotation validation, and error analysis currently prevents the claims from being fully interpretable or generalizable.

major comments (2)
  1. [Methods / Data and Annotation] The manuscript reports macro F1-scores of 0.89 and 0.86 for GPT-4o but supplies no dataset size, number of images/stories, annotation protocol, number of labelers, or inter-annotator agreement statistics for the politician identity and person-count ground truth. This information is load-bearing for the central performance claims and must be added to allow evaluation of whether labeling biases could favor one model.
  2. [Results] No error analysis, confusion matrices, or discussion of edge cases (low-resolution stories, occluded faces, group shots, or ambiguous politician identities) is provided in the results. Without this, it is impossible to determine whether GPT-4o's reported advantage is robust or an artifact of the manual labeling process.
minor comments (2)
  1. [Introduction] The abstract and introduction use 'concentrated visibility' without a concise definition or citation to the political communication literature; adding one sentence would improve accessibility.
  2. [Results] Table or figure captions for the model comparison results should explicitly state the number of test instances and the exact definition of 'macro F1' used.

Simulated Author's Rebuttal

2 responses · 0 unresolved

We thank the referee for the constructive feedback. The comments identify key gaps in transparency that we will address through revision to strengthen the interpretability of our results.

read point-by-point responses
  1. Referee: [Methods / Data and Annotation] The manuscript reports macro F1-scores of 0.89 and 0.86 for GPT-4o but supplies no dataset size, number of images/stories, annotation protocol, number of labelers, or inter-annotator agreement statistics for the politician identity and person-count ground truth. This information is load-bearing for the central performance claims and must be added to allow evaluation of whether labeling biases could favor one model.

    Authors: We agree that these methodological details are essential for assessing ground-truth reliability and potential biases. The revised manuscript will add a dedicated subsection in the Methods section that reports the total number of Instagram stories and posts analyzed, the data collection period and sampling strategy from the 2021 German federal election, the annotation protocol (including guidelines provided to labelers), the number of annotators, and inter-annotator agreement statistics for both politician identity and person-count labels. revision: yes

  2. Referee: [Results] No error analysis, confusion matrices, or discussion of edge cases (low-resolution stories, occluded faces, group shots, or ambiguous politician identities) is provided in the results. Without this, it is impossible to determine whether GPT-4o's reported advantage is robust or an artifact of the manual labeling process.

    Authors: We acknowledge the value of error analysis for demonstrating robustness. The revised Results section will include confusion matrices for both tasks and a new qualitative error analysis subsection that examines performance on the specified edge cases (low-resolution stories, occluded faces, group shots, and ambiguous identities), comparing GPT-4o against the baseline models and discussing any patterns that could relate to labeling artifacts. revision: yes

Circularity Check

0 steps flagged

No circularity: pure empirical model comparison on independent annotations

full rationale

The paper conducts an empirical evaluation of off-the-shelf and multimodal models (FaceNet512, RetinaFace, Google Cloud Vision, GPT-4o) against manually created ground-truth labels for politician identification and person counting on Instagram images. No equations, derivations, parameter fitting, or predictions are presented; performance is measured via standard F1 scores on held-out annotations. No self-citations serve as load-bearing premises for any claim, and no result reduces to its own inputs by construction. The analysis is self-contained against external benchmarks (model outputs vs. human labels).

Axiom & Free-Parameter Ledger

0 free parameters · 0 axioms · 0 invented entities

This is an empirical case study with no mathematical derivations, free parameters, or postulated entities.

pith-pipeline@v0.9.0 · 5434 in / 1048 out tokens · 42611 ms · 2026-05-10T02:32:25.640723+00:00 · methodology

discussion (0)

Sign in with ORCID, Apple, or X to comment. Anyone can read and Pith papers without signing in.

Reference graph

Works this paper leans on

194 extracted references · 139 canonical work pages

  1. [1]

    doi:10.1145/3717867.3717881 , url =

    Achmann-Denkler, Michael and Haim, Mario and Wolff, Christian , year = 2025, booktitle =. doi:10.1145/3717867.3717881 , url =

  2. [2]

    doi:10.17645/pag.8727 , issn =

    Bene, Márton and Magin, Melanie and Haßler, Jörg , year = 2024, month = jul, journal =. doi:10.17645/pag.8727 , issn =

  3. [3]

    doi:10.5117/ccr2022.1.005.jurg , issn =

    Jürgens, Pascal and Meltzer, Christine E and Scharkow, Michael , year = 2022, month = feb, journal =. doi:10.5117/ccr2022.1.005.jurg , issn =

  4. [4]

    Yue, Xiang and Ni, Yuansheng and Zhang, Kai and Zheng, Tianyu and Liu, Ruoqi and Zhang, Ge and Stevens, Samuel and Jiang, Dongfu and Ren, Weiming and Sun, Yuxuan and Wei, Cong and Yu, Botao and Yuan, Ruibin and Sun, Renliang and Yin, Ming and Zheng, Boyuan and Yang, Zhenzhu and Liu, Yibo and Huang, Wenhao and Sun, Huan and Su, Yu and Chen, Wenhu , year = ...

  5. [5]

    Veneti, Anastasia , year = 2023, month = jan, publisher =

  6. [6]

    Peng, Yilang and Lu, Yingdan , year = 2023, booktitle =

  7. [8]

    doi:10.4337/9781789906769.00023 , isbn =

    Schwemmer, Carsten and Unger, Saïd and Heiberger, Raphael , year = 2023, month = mar, booktitle =. doi:10.4337/9781789906769.00023 , isbn =

  8. [9]

    doi:10.1109/34.895972 , issn =

    Smeulders, A W M and Worring, M and Santini, S and Gupta, A and Jain, R , year = 2000, journal =. doi:10.1109/34.895972 , issn =

  9. [10]

    doi:10.1080/19331681.2021.1928579 , issn =

    Schmøkel, Rasmus and Bossetta, Michael , year = 2022, month = jan, journal =. doi:10.1080/19331681.2021.1928579 , issn =

  10. [11]

    doi:10.4337/9781800376939.00021 , isbn =

    Raynauld, Vincent and Lalancette, Mireille , year = 2023, month = jan, booktitle =. doi:10.4337/9781800376939.00021 , isbn =

  11. [12]

    Political Communication , publisher =

    Bossetta, Michael and Schm. Political Communication , publisher =. doi:10.1080/10584609.2022.2128949 , issn =

  12. [13]

    doi:10.7551/mitpress/14046.001.0001 , isbn = 9780262546133, url =

    Arnold, Taylor and Tilton, Lauren , year = 2023, month = oct, publisher =. doi:10.7551/mitpress/14046.001.0001 , isbn = 9780262546133, url =

  13. [14]

    Achmann-Denkler, Michael and Fehle, Jakob and Haim, Mario and Wolff, Christian , year = 2024, journal =

  14. [15]

    Goffman, Erving , year = 1956, publisher =

  15. [16]

    doi:10.4337/9781800376939.00020 , isbn =

    Šimunjak, Maja , year = 2023, month = jan, booktitle =. doi:10.4337/9781800376939.00020 , isbn =

  16. [17]

    Marcus, G E and Neuman, W R and Mackuen, M , year = 2000, publisher =

  17. [18]

    doi:10.1080/1369118x.2021.1962942 , issn =

    Brands, Charlotte and Kruikemeier, Sanne and Trilling, Damian , year = 2021, month = oct, journal =. doi:10.1080/1369118x.2021.1962942 , issn =

  18. [19]

    doi:10.1177/1940161220937239 , issn =

    Lindholm, Jenny and Carlson, Tom and Högväg, Joachim , year = 2021, month = jan, journal =. doi:10.1177/1940161220937239 , issn =

  19. [20]

    doi:10.1177/2378023120967171 , issn =

    Schwemmer, Carsten and Knight, Carly and Bello-Pardo, Emily D and Oklobdzija, Stan and Schoonvelde, Martijn and Lockhart, Jeffrey W , year = 2020, month = nov, journal =. doi:10.1177/2378023120967171 , issn =

  20. [21]

    doi:10.1080/1369118X.2022.2027500 , issn =

    Parmelee, John H and Perkins, Stephynie C and Beasley, Berrin , year = 2023, month = jul, journal =. doi:10.1080/1369118X.2022.2027500 , issn =

  21. [22]

    doi:10.1017/pan.2022.26 , issn =

    Tarr, Alexander and Hwang, June and Imai, Kosuke , year = 2022, month = nov, journal =. doi:10.1017/pan.2022.26 , issn =

  22. [23]

    doi:10.1609/icwsm.v14i1.7338 , issn =

    Xi, Nan and Ma, Di and Liou, Marcus and Steinert-Threlkeld, Zachary C and Anastasopoulos, Jason and Joo, Jungseock , year = 2020, month = may, journal =. doi:10.1609/icwsm.v14i1.7338 , issn =

  23. [24]

    doi:10.1177/14614448211038761 , issn =

    Chen, Yan and Sherren, Kate and Smit, Michael and Lee, Kyung Young , year = 2023, month = apr, journal =. doi:10.1177/14614448211038761 , issn =

  24. [25]

    doi:10.5617/dhnbpub.10647 , issn =

    Achmann, Michael and Wolff, Christian , year = 2023, month = oct, journal =. doi:10.5617/dhnbpub.10647 , issn =

  25. [26]

    Zhan, Q., Fang, R., Panchal, H

    Ziems, Caleb and Held, William and Shaikh, Omar and Chen, Jiaao and Zhang, Zhehao and Yang, Diyi , year = 2024, month = mar, journal =. doi:10.1162/coli\_a\_00502 , issn =

  26. [27]

    doi:10.1017/pan.2023.33 , issn =

    Girbau, Andreu and Kobayashi, Tetsuro and Renoust, Benjamin and Matsui, Yusuke and Satoh, Shin'ichi , year = 2024, month = apr, journal =. doi:10.1017/pan.2023.33 , issn =

  27. [28]

    doi:10.2478/njms-2024-0008 , issn =

    Magin, Melanie and Haßler, Jörg and Larsson, Anders Olof and Skogerbø, Eli , year = 2024, month = aug, journal =. doi:10.2478/njms-2024-0008 , issn =

  28. [29]

    doi:10.1016/j.linged.2019.05.003 , issn =

    Georgakopoulou, Alex , year = 2021, month = apr, journal =. doi:10.1016/j.linged.2019.05.003 , issn =

  29. [30]

    Joo, Jungseock and Bucy, E and Seidel, Claudia , year = 2019, month = sep, journal =

  30. [31]

    doi:10.1111/comt.12048 , issn =

    Geise, Stephanie and Baden, Christian , year = 2015, month = feb, journal =. doi:10.1111/comt.12048 , issn =

  31. [32]

    Politische Werbung und koordiniertes Verhalten in Sozialen Medien im Vorfeld der Bundestagswahl 2021 , author =

  32. [33]

    doi:10.1177/08944393241227554 , issn =

    Towner, Terri L and Muñoz, Caroline L , year = 2024, month = feb, journal =. doi:10.1177/08944393241227554 , issn =

  33. [34]

    doi:10.1007/s10919-009-0082-1 , issn =

    Olivola, Christopher Y and Todorov, Alexander , year = 2010, month = jun, journal =. doi:10.1007/s10919-009-0082-1 , issn =

  34. [35]

    doi:10.15581/003.33.37241 , issn =

    Sampietro, Agnese and Sánchez-Castillo, Sebastián , year = 2020, month = jan, journal =. doi:10.15581/003.33.37241 , issn =

  35. [36]

    doi:10.1177/1461444816686103 , issn =

    McGregor, Shannon C , year = 2018, month = mar, journal =. doi:10.1177/1461444816686103 , issn =

  36. [37]

    doi:10.1080/1369118x.2016.1167228 , issn =

    McGregor, Shannon C and Lawrence, Regina G and Cardona, Arielle , year = 2017, month = feb, journal =. doi:10.1080/1369118x.2016.1167228 , issn =

  37. [38]

    doi:10.1177/2056305116662179 , issn =

    Filimonov, Kirill and Russmann, Uta and Svensson, Jakob , year = 2016, month = jul, journal =. doi:10.1177/2056305116662179 , issn =

  38. [39]

    doi:10.4337/9781800376939.00016 , isbn =

    Steffan, Dennis , year = 2023, month = jan, booktitle =. doi:10.4337/9781800376939.00016 , isbn =

  39. [40]

    Steffan, Dennis , year = 2020, month = may, journal =

  40. [41]

    doi:10.5771/9783748925415 , isbn = 9783848781249, abstract =

    Steffan, Dennis , year = 2021, month = jul, publisher =. doi:10.5771/9783748925415 , isbn = 9783848781249, abstract =

  41. [43]

    doi:10.1177/1461444812457333 , issn =

    Hermans, Liesbeth and Vergeer, Maurice , year = 2013, month = feb, journal =. doi:10.1177/1461444812457333 , issn =

  42. [44]

    doi:10.1177/1065912918786805 , issn =

    Casas, Andreu and Williams, Nora Webb , year = 2019, month = jun, journal =. doi:10.1177/1065912918786805 , issn =

  43. [45]

    doi:10.1109/iccv.2015.423 , isbn = 9781467383912, url =

    Joo, Jungseock and Steen, Francis F and Zhu, Song-Chun , year = 2015, month = dec, booktitle =. doi:10.1109/iccv.2015.423 , isbn = 9781467383912, url =

  44. [46]

    doi:10.1017/s0022381610000939 , issn =

    Valentino, Nicholas A and Brader, Ted and Groenendyk, Eric W and Gregorowicz, Krysha and Hutchings, Vincent L , year = 2011, month = jan, journal =. doi:10.1017/s0022381610000939 , issn =

  45. [47]

    doi:10.1080/15205436.2010.530385 , issn =

    Tukachinsky, Riva and Mastro, Dana and King, Aimee , year = 2011, month = nov, journal =. doi:10.1080/15205436.2010.530385 , issn =

  46. [48]

    doi:10.1177/0002716296546001008 , issn =

    Graber, Doris A , year = 1996, month = jul, journal =. doi:10.1177/0002716296546001008 , issn =

  47. [49]

    doi:10.5117/ccr2022.1.001.joo , issn =

    Joo, Jungseock and Steinert-Threlkeld, Zachary C , year = 2022, month = feb, journal =. doi:10.5117/ccr2022.1.001.joo , issn =

  48. [50]

    doi:10.1080/19312458.2023.2285765 , issn =

    Peng, Yilang and Lock, Irina and Ali Salah, Albert , year = 2024, month = apr, journal =. doi:10.1080/19312458.2023.2277956 , issn =

  49. [51]

    Poguntke, Thomas and Webb, Paul D , year = 2005, publisher =

  50. [52]

    doi:10.1080/23808985.2010.11679101 , issn =

    Adam, Silke and Maier, Michaela , year = 2010, month = jan, journal =. doi:10.1080/23808985.2010.11679101 , issn =

  51. [53]

    doi:10.1007/978-3-030-18729-3 , isbn =

  52. [54]

    Python 3 Reference Manual , author =

  53. [55]

    doi:10.1386/jvpc\_00027\_1 , issn =

    Farkas, Xénia , year = 2023, month = oct, journal =. doi:10.1386/jvpc\_00027\_1 , issn =

  54. [56]

    doi:10.1177/14614448231224031 , issn =

    Geise, Stephanie and Maubach, Katharina and Boettcher Eli, Alena , year = 2024, month = mar, journal =. doi:10.1177/14614448231224031 , issn =

  55. [57]

    A benchmark of facial recognition pipelines and co-usability performances of modules.Journal of Information Technologies, 17(2):95–107, 2024

    A Benchmark of Facial Recognition Pipelines and Co-Usability Performances of Modules , author =. Bilisim Teknolojileri Dergisi , publisher =. doi:10.17671/gazibtd.1399077 , url =

  56. [58]

    Pryzant, Reid and Iter, Dan and Li, Jerry and Lee, Yin Tat and Zhu, Chenguang and Zeng, Michael , year = 2023, month = may, url =

  57. [59]

    doi:10.1080/15358593.2011.653504 , url =

    Schill, Dan , year = 2012, month = apr, journal =. doi:10.1080/15358593.2011.653504 , url =

  58. [60]

    doi:10.1080/01972243.1994.9960162 , issn =

    Agre, Philip E , year = 1994, month = apr, journal =. doi:10.1080/01972243.1994.9960162 , issn =

  59. [61]

    doi:10.1007/BF00988269 , issn =

    Altheide, David L , year = 1987, month = mar, journal =. doi:10.1007/BF00988269 , issn =

  60. [62]

    Amancio, Marina , year = 2017, url =

  61. [63]

    doi:10.1080/19312458.2020.1810648 , issn =

    Araujo, Theo and Lock, Irina and van de Velde, Bob , year = 2020, month = oct, journal =. doi:10.1080/19312458.2020.1810648 , issn =

  62. [64]

    doi:10.1093/llc/fqz013 , issn =

    Arnold, Taylor and Tilton, Lauren , year = 2019, month = dec, journal =. doi:10.1093/llc/fqz013 , issn =

  63. [65]

    doi:10.1080/19312458.2021.2015574 , issn =

    Baden, Christian and Pipal, Christian and Schoonvelde, Martijn and van der Velden, Mariken A C G , year = 2022, month = jan, journal =. doi:10.1080/19312458.2021.2015574 , issn =

  64. [66]

    doi:10.1177/1461444820960071 , issn =

    Bainotti, Lucia and Caliandro, Alessandro and Gandini, Alessandro , year = 2020, month = sep, journal =. doi:10.1177/1461444820960071 , issn =

  65. [67]

    doi:10.1080/15377857.2021.1892901 , issn =

    Bast, Jennifer , year = 2021, month = mar, journal =. doi:10.1080/15377857.2021.1892901 , issn =

  66. [68]

    Bast, Jennifer , year = 2021, month = jul, journal =

  67. [69]

    doi:10.1080/1369118X.2015.1084349 , issn =

    Bayer, Joseph B and Ellison, Nicole B and Schoenebeck, Sarita Y and Falk, Emily B , year = 2016, month = jul, journal =. doi:10.1080/1369118X.2015.1084349 , issn =

  68. [70]

    doi:10.1177/1077699018763307 , issn =

    Bossetta, Michael , year = 2018, month = jun, journal =. doi:10.1177/1077699018763307 , issn =

  69. [71]

    doi:10.1177/14614448211009504 , issn =

    Boulianne, Shelley and Larsson, Anders Olof , year = 2021, month = apr, journal =. doi:10.1177/14614448211009504 , issn =

  70. [72]

    doi:10.1177/0170840618772611 , issn =

    Boxenbaum, Eva and Jones, Candace and Meyer, Renate E and Svejenova, Silviya , year = 2018, month = jun, journal =. doi:10.1177/0170840618772611 , issn =

  71. [73]

    Profesional de la informaci

    Coromina,. Profesional de la informaci. doi:10.3145/epi.2018.sep.05 , issn =

  72. [74]

    doi:10.1007/978-3-319-45270-8\_8 , isbn =

    Ekman, Mattias and Widholm, Andreas , year = 2017, booktitle =. doi:10.1007/978-3-319-45270-8\_8 , isbn =

  73. [75]

    doi:10.1177/0267323113516727 , issn =

    Holtz-Bacha, Christina and Langer, Ana Ines and Merkle, Susanne , year = 2014, month = apr, journal =. doi:10.1177/0267323113516727 , issn =

  74. [76]

    doi:10.1177/1354856517741132 , issn =

    Larsson, Anders Olof , year = 2019, month = dec, journal =. doi:10.1177/1354856517741132 , issn =

  75. [77]

    doi:10.1177/1940161220964769 , issn =

    Peng, Yilang , year = 2021, month = jan, journal =. doi:10.1177/1940161220964769 , issn =

  76. [78]

    doi:10.1386/nl.15.1.15\_1 , issn =

    Ekman, Mattias and Widholm, Andreas , year = 2017, journal =. doi:10.1386/nl.15.1.15\_1 , issn =

  77. [79]

    doi:10.1080/1369118x.2019.1581244 , issn =

    Metz, Manon and Kruikemeier, Sanne and Lecheler, Sophie , year = 2020, month = aug, journal =. doi:10.1080/1369118x.2019.1581244 , issn =

  78. [80]

    moco , url=

    Deng, Jiankang and Guo, Jia and Ververas, Evangelos and Kotsia, Irene and Zafeiriou, Stefanos , year = 2020, month = jun, booktitle =. doi:10.1109/cvpr42600.2020.00525 , isbn = 9781728171685, url =

  79. [81]

    doi:10.1080/1369118X.2016.1198411 , issn =

    Bene, Marton , year = 2017, month = apr, journal =. doi:10.1080/1369118X.2016.1198411 , issn =

  80. [82]

    Huang, Gary B and Mattar, Marwan and Berg, Tamara and Learned-Miller, Eric , year = 2008, month = oct, booktitle =

Showing first 80 references.