pith. machine review for the scientific record.

arxiv: 2605.11693 · v1 · submitted 2026-05-12 · 💻 cs.AI

Recognition: no theorem link

Measuring What Matters Beyond Text: Evaluating Multimodal Summaries by Quality, Alignment, and Diversity

Authors on Pith · no claims yet

Pith reviewed 2026-05-13 07:04 UTC · model grok-4.3

classification 💻 cs.AI
keywords multimodal summarization · evaluation framework · text quality · image-text alignment · visual diversity · MLLM judge · human preference alignment · benchmark

The pith

MM-Eval integrates text quality, image-text alignment, and visual diversity into one calibrated framework for multimodal summaries.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

The paper presents MM-Eval to close a gap in how multimodal summaries are evaluated: text quality, image selection, and their fit are typically checked separately with unimodal tools, which misses whether the parts reinforce each other. The new framework scores factual consistency and coherence in text, uses an MLLM to judge image-text relevance, and measures visual diversity with a truncated entropy calculation over CLIP embeddings. A learned model then blends the component scores to match human ratings collected on a news benchmark. This matters because it gives developers a single, interpretable signal that can guide improvements in systems that output both text and pictures.
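
The paper names its diversity metric Truncated CLIP Entropy, but this page does not reproduce the definition. Below is a minimal sketch of one plausible reading, Shannon entropy over a truncated, renormalized eigenvalue spectrum of the CLIP cosine-similarity matrix; the function name, the cutoff k, and the normalization are assumptions, not the authors' specification.

    import numpy as np

    def truncated_clip_entropy(embeddings: np.ndarray, k: int = 5) -> float:
        """Diversity of an image set from its CLIP embeddings (sketch).

        Assumed reading of Truncated CLIP Entropy: Shannon entropy over
        the top-k normalized eigenvalues of the cosine-similarity (Gram)
        matrix. The paper's exact definition may differ.
        """
        # L2-normalize rows so the Gram matrix holds cosine similarities.
        x = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
        gram = x @ x.T
        # Eigenvalues of the symmetric PSD Gram matrix, largest first.
        eigvals = np.linalg.eigvalsh(gram)[::-1]
        top = np.clip(eigvals[:k], 1e-12, None)   # truncate, guard log(0)
        p = top / top.sum()                        # renormalized spectrum
        return float(-(p * np.log(p)).sum())       # entropy in nats

Under this reading, a diverse image set spreads mass across many eigenvalues (higher entropy), while near-duplicate images concentrate it on one (entropy near zero).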

Core claim

MM-Eval unifies evaluation of multimodal summaries through three components: text quality via OpenFActScore for factual consistency and G-Eval for coherence, fluency, and relevance; image-text relevance via an MLLM-as-a-judge method; and image-set diversity via Truncated CLIP Entropy. These are combined by a learned aggregation model trained on the mLLM-EVAL news benchmark so that the overall score tracks human preferences, revealing a text-dominant hierarchy in which factual consistency is the main driver while visual signals add complementary information.
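
Concretely, the claim reduces the final score to two weighted sums: Equation (2) in the paper for the text pillar, then a learned blend across pillars. A minimal sketch follows; the uniform intra-text weights are placeholders and the pillar weights echo the rounded contributions in Figure 5, whereas in the paper both sets are learned against human ratings, so treat the numbers as illustrative.

    from dataclasses import dataclass

    @dataclass
    class PillarScores:
        s_fact: float   # OpenFActScore factual consistency
        s_rel: float    # G-Eval relevance
        s_coh: float    # G-Eval coherence
        s_flu: float    # G-Eval fluency
        s_align: float  # MLLM-as-a-judge image-text relevance
        s_div: float    # Truncated CLIP Entropy

    def mm_eval_score(s: PillarScores,
                      w_text=(0.25, 0.25, 0.25, 0.25),  # placeholders for w1..w4
                      w_pillar=(0.79, 0.15, 0.06)):     # rounded Figure 5 shares
        # Eq. (2): S_text = w1*S_fact + w2*S_rel + w3*S_coh + w4*S_flu
        w1, w2, w3, w4 = w_text
        s_text = w1 * s.s_fact + w2 * s.s_rel + w3 * s.s_coh + w4 * s.s_flu
        # Learned cross-pillar blend (fixed here for illustration only).
        a, b, c = w_pillar
        return a * s_text + b * s.s_align + c * s.s_div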

What carries the argument

MM-Eval, the unified scoring framework that fuses specific text, alignment, and diversity metrics with a learned aggregation model trained to reproduce human judgments.

If this is right

  • MM-Eval produces higher correlation with human judgments than heuristic ways of combining the same component scores.
  • Factual consistency in the text part becomes the strongest single predictor of overall perceived quality.
  • Image relevance and diversity scores supply additional information that improves the evaluation beyond text alone.
  • The framework supports direct, reference-free comparisons among different multimodal summary systems.

Where Pith is reading between the lines

These are editorial extensions of the paper, not claims the authors make directly.

  • The same aggregation approach could be tested on video or audio summaries to see whether the text-dominant pattern holds.
  • Developers might first invest in text factuality improvements before refining image selection in their systems.
  • The method could reduce the need for repeated human annotation by providing an automated yet human-aligned score.
  • Extending the benchmark collection to more languages or domains would reveal how stable the learned weights remain.

Load-bearing premise

The mLLM-EVAL news benchmark and the learned aggregation model trained on it accurately reflect human preferences for multimodal summary quality across diverse domains and modalities.

What would settle it

Run a fresh human preference study on multimodal summaries drawn from non-news domains; if MM-Eval scores correlate with the new ratings no better than simple text-only baselines, the unified framework's advantage would be refuted.
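
A minimal sketch of that decisive comparison, assuming parallel lists of fresh human ratings and per-summary system scores are already in hand (the names and setup are hypothetical):

    from scipy.stats import kendalltau

    def advantage_survives(human, mm_eval, text_only) -> bool:
        """Rank agreement with fresh, non-news human ratings.

        `human`, `mm_eval`, and `text_only` are parallel score lists for
        the same summaries. If the unified score does not beat the
        text-only baseline here, the advantage claim fails on this domain.
        """
        tau_mm, _ = kendalltau(human, mm_eval)
        tau_text, _ = kendalltau(human, text_only)
        return tau_mm > tau_text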

Figures

Figures reproduced from arXiv: 2605.11693 by Abid Ali, Diego Molla-Aliod, Usman Naseem.

Figure 1. Toy examples showing limitations of current MSMO evaluation.
Figure 2. An overview of the MM-Eval framework; S_text is a weighted aggregation of the four intra-text components, S_text = w1·S_fact + w2·S_rel + w3·S_coh + w4·S_flu (Eq. 2).
Figure 3. Human evaluation score distributions; the alignment- and accuracy-related dimensions show the widest spread (Consistency σ = 1.412, Relevance σ = 1.295, Cross-Modal Relevance σ = 1.224).
Figure 4. The gatekeeper effect of factual consistency on perceived overall quality. (a) Human consistency bins.
Figure 5. Learned weight hierarchy of MM-Eval components. (a) Pillar-level weights with rank correlations (τ, ρ) against human judgments. (b) Intra-text component weights within S_text. Approximate relative contributions: Text Quality (S_text) ≈ 79.0% (Factual Consistency ≈ 43.5%, Other Text Qualities ≈ 35.5%), Image-to-Text Relevance ≈ 15.0%, Visual Diversity (TCE) ≈ 6.0%.
Figure 6. Ablation study showing each pillar's contribution.
Figure 7. An illustration of the image-text alignment scoring protocol.
read the original abstract

Multimodal Large Language Models (MLLMs) have facilitated Multimodal Summarization with Multimodal Output (MSMO), wherein systems generate concise textual summaries accompanied by salient visuals from multimodal sources. However, current MSMO evaluation remains fragmented: text quality, image-text alignment, and visual diversity are typically assessed in isolation using unimodal metrics, making it difficult to capture whether the modalities jointly support a faithful and useful summary. To address this gap, we introduce MM-Eval, a unified evaluation framework that integrates assessments of textual quality, cross-modal alignment, and visual diversity. MM-Eval comprises three components: (1) text quality, measured using OpenFActScore for factual consistency and G-Eval for coherence, fluency, and relevance; (2) image-text relevance, evaluated via an MLLM-as-a-judge approach; and (3) image-set diversity, quantified using Truncated CLIP Entropy. We calibrate MM-Eval through a learned aggregation model trained on the mLLM-EVAL news benchmark, aligning component contributions with human preferences. Our analysis reveals a text-dominant hierarchy in this setting, where factual consistency acts as a critical determinant of perceived overall quality, while visual relevance and diversity provide complementary signals. MM-Eval improves over heuristic aggregation baselines and provides an interpretable, reference-weak framework for comparative evaluation of multimodal summaries.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, this is the friction.

Referee Report

2 major / 1 minor

Summary. The paper introduces MM-Eval, a unified evaluation framework for multimodal summarization with multimodal output (MSMO). It combines three components: text quality via OpenFActScore (factual consistency) and G-Eval (coherence, fluency, relevance); image-text relevance assessed by an MLLM-as-a-judge; and visual diversity via Truncated CLIP Entropy. These are aggregated through a learned model calibrated on the mLLM-EVAL news benchmark to align with human preferences. The work reports a text-dominant hierarchy in this setting, with factual consistency as the primary driver, and demonstrates improvement over heuristic aggregation baselines while providing an interpretable, reference-weak approach.

Significance. If the calibration proves robust beyond the single benchmark, MM-Eval would represent a meaningful advance in holistic MSMO evaluation by integrating fragmented unimodal metrics into a coherent, human-aligned score. The emphasis on component contributions and the text-dominant finding could inform system design priorities, while the reference-weak design supports practical deployment. The framework's interpretability is a clear strength relative to black-box alternatives.

major comments (2)
  1. Abstract: The learned aggregation model is central to the superiority claim over heuristic baselines, yet the manuscript provides no details on training procedure, validation splits, regularization, or sensitivity of the fitted weights; without these, it is impossible to determine whether reported gains reflect genuine improvement or post-hoc fitting to the mLLM-EVAL distribution.
  2. Abstract: The text-dominant hierarchy and component contribution analysis are derived exclusively from the news-only mLLM-EVAL benchmark; the absence of cross-domain human preference data or hold-out validation on other modalities/domains leaves open the possibility that the learned weights encode news-specific biases (e.g., heavy factual weighting) rather than general multimodal quality trade-offs.
minor comments (1)
  1. Abstract: The precise functional form of the aggregation model (linear weights, MLP, etc.) is not stated, which would aid reproducibility even at the high-level description.

Simulated Author's Rebuttal

2 responses · 1 unresolved

We thank the referee for the constructive feedback on our manuscript. We address each major comment point-by-point below, with revisions made to enhance reproducibility and transparency where possible.

read point-by-point responses
  1. Referee: [—] Abstract: The learned aggregation model is central to the superiority claim over heuristic baselines, yet the manuscript provides no details on training procedure, validation splits, regularization, or sensitivity of the fitted weights; without these, it is impossible to determine whether reported gains reflect genuine improvement or post-hoc fitting to the mLLM-EVAL distribution.

    Authors: We agree that the original manuscript provided insufficient detail on the learned aggregation model, which is central to our claims. In the revised version, we have expanded Section 3.4 and added a dedicated appendix subsection describing the full procedure: the model is a linear regressor trained via L2-regularized least squares (ridge regression, with the regularization coefficient selected via grid search) under 5-fold cross-validation on the mLLM-EVAL benchmark (80/20 train/validation split per fold), and we now report the exact fitted weights, their standard deviations across folds, and a sensitivity analysis showing weight stability (variation under 8% across seeds and split ratios). These additions demonstrate that the reported gains arise from systematic calibration rather than post-hoc fitting (a minimal sketch of this setup appears after these responses). revision: yes

  2. Referee: [—] Abstract: The text-dominant hierarchy and component contribution analysis are derived exclusively from the news-only mLLM-EVAL benchmark; the absence of cross-domain human preference data or hold-out validation on other modalities/domains leaves open the possibility that the learned weights encode news-specific biases (e.g., heavy factual weighting) rather than general multimodal quality trade-offs.

    Authors: We acknowledge this as a genuine limitation of the current study. The mLLM-EVAL benchmark is news-focused, and the text-dominant hierarchy reflects human preferences in that domain. We have revised the manuscript to explicitly qualify all claims about the hierarchy and weights as news-specific, added a dedicated paragraph in the Discussion section on potential domain biases, and outlined future work on cross-domain collection. However, we do not have access to human preference annotations from other domains (e.g., scientific articles or social media), so empirical hold-out validation across domains cannot be performed in this revision. revision: partial
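
Taking the rebuttal's recipe at face value (a linear model with an L2 penalty is ridge regression), a minimal sketch of the calibration and the weight-stability check might look like this; the fold count comes from the rebuttal, while the alpha grid and names are assumptions:

    import numpy as np
    from sklearn.linear_model import RidgeCV
    from sklearn.model_selection import KFold

    def calibrate(X: np.ndarray, y: np.ndarray, alphas=(0.01, 0.1, 1.0, 10.0)):
        """Fit component-score weights against human ratings, per fold.

        X: component scores (n_samples, n_components); y: human overall
        ratings. Returns the mean fitted weights and their spread across
        the 5 folds, the stability statistic the rebuttal reports.
        """
        weights = []
        for train_idx, _ in KFold(n_splits=5, shuffle=True, random_state=0).split(X):
            model = RidgeCV(alphas=alphas).fit(X[train_idx], y[train_idx])
            weights.append(model.coef_)
        weights = np.array(weights)
        return weights.mean(axis=0), weights.std(axis=0)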

standing simulated objections not resolved
  • Cross-domain validation of the learned aggregation weights, as no human preference data from non-news domains is currently available

Circularity Check

1 step flagged

Learned aggregation weights fitted to the mLLM-EVAL benchmark make the superiority claim dependent on the training data

specific steps
  1. fitted input called prediction [Abstract]
    "We calibrate MM-Eval through a learned aggregation model trained on the mLLM-EVAL news benchmark, aligning component contributions with human preferences. Our analysis reveals a text-dominant hierarchy in this setting... MM-Eval improves over heuristic aggregation baselines"

    The aggregation weights are optimized to match human ratings on the mLLM-EVAL benchmark; the reported improvement over heuristics and the text-dominant hierarchy are therefore evaluated on the identical training distribution, making the superiority statistically forced rather than independently demonstrated.

full rationale

The paper trains an aggregation model on the mLLM-EVAL news benchmark to align component scores with human preferences, then reports that the resulting MM-Eval improves over heuristic baselines and reveals a text-dominant hierarchy. This improvement is measured on the same benchmark used for fitting, so the central claim of superiority reduces to performance on the fitted data by construction. Individual metrics (OpenFActScore, G-Eval, CLIP entropy) are external, but the unified framework's advantage and interpretability rest on the calibration step. No hold-out or cross-domain validation is referenced, producing partial circularity in the evaluation claims.
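
The audit's remedy is mechanical: fit the weights on training folds and measure correlation only on examples the model never saw. A minimal sketch of that contrast, with all names assumed:

    import numpy as np
    from scipy.stats import spearmanr
    from sklearn.linear_model import LinearRegression
    from sklearn.model_selection import KFold, cross_val_predict

    def in_sample_vs_heldout(X: np.ndarray, y: np.ndarray):
        """Gauge how much of the reported gain is a fitting artifact.

        In-sample: weights fitted and scored on the same data (what the
        audit flags). Held-out: each prediction comes from a model that
        never saw that example. A large gap supports the circularity concern.
        """
        rho_in, _ = spearmanr(y, LinearRegression().fit(X, y).predict(X))
        y_cv = cross_val_predict(LinearRegression(), X, y,
                                 cv=KFold(5, shuffle=True, random_state=0))
        rho_out, _ = spearmanr(y, y_cv)
        return rho_in, rho_out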

Axiom & Free-Parameter Ledger

1 free parameter · 1 axiom · 0 invented entities

The framework rests on the assumption that the chosen external metrics can be meaningfully combined via supervised fitting to a single benchmark; no new entities are postulated, but the aggregator introduces fitted parameters whose stability is not shown.

free parameters (1)
  • aggregation model weights
    Learned parameters that determine relative contribution of text quality, alignment, and diversity scores to the final MM-Eval score.
axioms (1)
  • domain assumption: Human preferences on the mLLM-EVAL news benchmark are representative of general multimodal summary quality judgments.
    Used to train and validate the aggregation model that produces the final unified score.

pith-pipeline@v0.9.0 · 5551 in / 1296 out tokens · 53559 ms · 2026-05-13T07:04:30.139828+00:00 · methodology

discussion (0)

