Recognition: no theorem link
LatentRouter: Can We Choose the Right Multimodal Model Before Seeing Its Answer?
Pith reviewed 2026-05-13 01:39 UTC · model grok-4.3
The pith
LatentRouter selects the best multimodal model for an image-question pair by predicting counterfactual performance through latent communication between query capsules and model tokens, without running any models.
A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.
Core claim
LatentRouter formulates MLLM routing as counterfactual multimodal utility prediction. Given an image-question query, it extracts learned multimodal routing capsules, represents each candidate MLLM with a model capability token, and performs latent communication between these states to estimate how each model would perform if selected. A distributional outcome head predicts model-specific counterfactual quality, while a bounded capsule correction refines close decisions. The resulting utility-based policy supports performance-oriented and performance-cost routing and handles changing candidate pools through shared per-model scoring with availability masking.
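As a rough sketch of what latent communication between query capsules and model tokens could look like: the shapes, the attention-style message passing, and the dot-product scoring below are illustrative assumptions, not the paper's equations.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes; the paper does not specify these dimensions.
NUM_CAPSULES, NUM_MODELS, DIM = 4, 3, 8

def latent_communication(capsules, model_tokens):
    """One round of attention-style message passing: each model
    capability token attends over the query's routing capsules,
    then is scored against the aggregated message."""
    # (models, capsules) scaled dot-product attention weights
    logits = model_tokens @ capsules.T / np.sqrt(DIM)
    weights = np.exp(logits - logits.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)
    # Each model token gathers capsule information: (models, dim)
    messages = weights @ capsules
    # Utility = agreement between capabilities and query requirements
    return np.sum(messages * model_tokens, axis=1)  # (models,)

capsules = rng.standard_normal((NUM_CAPSULES, DIM))      # from the query
model_tokens = rng.standard_normal((NUM_MODELS, DIM))    # one per MLLM
utilities = latent_communication(capsules, model_tokens)
best = int(np.argmax(utilities))  # routed model, no inference run
```

The point of the sketch is the information flow: every utility estimate is produced from the query-side capsules and the model-side tokens alone, before any candidate model is executed.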
What carries the argument
Latent communication between learned multimodal routing capsules extracted from the query and model capability tokens, which produces estimates of counterfactual performance without observing any model output.
If this is right
- The utility scores enable routing policies that balance accuracy against cost or latency without retraining.
- Shared per-model scoring combined with availability masking lets the same router work when the set of available models changes.
- Improvements concentrate on task groups that require visual detail, layout understanding, or multi-step reasoning.
- Ablation results indicate that the latent communication step, rather than the capsules or tokens alone, drives most of the gain.
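The first two bullets can be sketched as a utility-minus-cost policy with availability masking; the utilities, costs, and the trade-off parameter `lam` below are hypothetical, not values from the paper.

```python
import numpy as np

# Illustrative only: 'utilities' would come from the router's outcome
# head; costs and availability are made-up deployment metadata.
utilities = np.array([0.82, 0.79, 0.74, 0.61])   # predicted quality
costs     = np.array([4.0,  1.0,  0.5,  0.2])    # e.g. $ per 1k queries
available = np.array([True, True, False, True])  # model 2 is offline

def route(utilities, costs, available, lam=0.0):
    """Pick argmax of utility - lam * cost over available models.
    lam = 0 is performance-oriented routing; lam > 0 trades
    quality for cost, with no retraining required."""
    score = utilities - lam * costs
    score = np.where(available, score, -np.inf)  # availability masking
    return int(np.argmax(score))

route(utilities, costs, available, lam=0.0)   # -> 0 (best quality)
route(utilities, costs, available, lam=0.05)  # -> 1 (cheaper, nearly as good)
```

Because masking happens at scoring time, the same shared per-model scorer serves any subset of the candidate pool.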
Where Pith is reading between the lines
- The same latent-communication pattern could be tested for routing inside a single large model that contains multiple specialized sub-networks.
- In production systems the router could be updated periodically from logged outcomes, allowing it to adapt as new models are added without full retraining.
- If the predictions remain reliable across domains, the method suggests a general way to compose heterogeneous AI components by estimating their joint utility before execution.
Load-bearing premise
The learned routing capsules and model capability tokens, when allowed to communicate in latent space, contain enough information to accurately predict how well each model would answer a query without ever seeing its actual response.
What would settle it
On a new collection of image-question pairs with known ground-truth performance for every candidate model, measure whether the router's chosen model achieves higher average accuracy or lower cost than a strong baseline router; if it does not, the counterfactual prediction claim is falsified.
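The proposed falsification test can be sketched as follows; the held-out correctness labels and the stand-in router choices are synthetic placeholders standing in for real evaluation data.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical held-out set: ground-truth correctness of EVERY candidate
# model on every query, which is what the falsification test requires.
n_queries, n_models = 200, 3
truth = rng.random((n_queries, n_models)) < np.array([0.6, 0.7, 0.5])

router_choice = rng.integers(0, n_models, n_queries)  # stand-in router
# Strong baseline: always pick the best fixed model on this set.
baseline_choice = np.full(n_queries, truth.mean(axis=0).argmax())

router_acc = truth[np.arange(n_queries), router_choice].mean()
baseline_acc = truth[np.arange(n_queries), baseline_choice].mean()

# The counterfactual-prediction claim survives only if the router's
# chosen models outperform the strong baseline on held-out queries.
claim_supported = bool(router_acc > baseline_acc)
```

With a real router in place of the random stand-in, a cost-aware variant of the same comparison (average cost at matched accuracy) would test the performance-cost side of the claim.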
Original abstract
Multimodal large language models (MLLMs) have heterogeneous strengths across OCR, chart understanding, spatial reasoning, visual question answering, cost, and latency. Effective MLLM routing therefore requires more than estimating query difficulty: a router must match the multimodal requirements of the current image-question input with the capabilities of each candidate model. We propose LatentRouter, a router that formulates MLLM routing as counterfactual multimodal utility prediction. Given an image-question query, LatentRouter extracts learned multimodal routing capsules, represents each candidate MLLM with a model capability token, and performs latent communication between these states to estimate how each model would perform if selected. A distributional outcome head predicts model-specific counterfactual quality, while a bounded capsule correction refines close decisions without allowing residual signals to dominate the prediction. The resulting utility-based policy supports performance-oriented and performance-cost routing, and handles changing candidate pools through shared per-model scoring with availability masking. Experiments on MMR-Bench and VL-RouterBench show that LatentRouter outperforms fixed-model, feature-level, and learned-router baselines. Additional analyses show that the gains are strongest on multimodal task groups where model choice depends on visual, layout-sensitive, or reasoning-oriented requirements, and that latent communication is the main contributor to the improvement. The code is available at: https://github.com/LabRAI/LatentRouter.
Editorial analysis
A structured set of objections, weighed in public.
Referee Report
Summary. The paper proposes LatentRouter for routing image-question queries to heterogeneous MLLMs by formulating the problem as counterfactual multimodal utility prediction. It extracts learned multimodal routing capsules from the query, represents each candidate MLLM via a model capability token, and uses latent communication between these to estimate per-model performance without observing outputs. A distributional outcome head and bounded capsule correction produce utility scores supporting performance-oriented and performance-cost policies, with availability masking for dynamic pools. Experiments on MMR-Bench and VL-RouterBench report outperformance over fixed-model, feature-level, and learned-router baselines, with gains strongest on visual/layout/reasoning tasks and ablations attributing improvements to latent communication.
Significance. If the experimental results and ablations hold, the work addresses a practical need for efficient MLLM selection amid heterogeneous capabilities in OCR, reasoning, cost, and latency. The counterfactual framing without output observation, combined with support for dynamic candidate pools, could enable more adaptive and cost-effective multimodal systems. Public code release strengthens the contribution by supporting reproducibility.
Major comments (3)
- [Abstract and Experiments] The reported outperformance on MMR-Bench and VL-RouterBench is presented without details on the training procedure, loss functions, error bars, statistical significance testing, or controls for post-hoc hyperparameter choices; these omissions are load-bearing because the central claim attributes the gains specifically to latent communication.
- [§3, Architecture] The bounded capsule correction is described as refining close decisions without allowing residual signals to dominate, but no equations or ablations isolating its effect relative to the distributional outcome head are provided; this mechanism is central to the counterfactual utility prediction and requires explicit validation.
- [Results and Analysis] Table or figure reporting task-group results: The claim that gains are strongest on visual, layout-sensitive, or reasoning-oriented requirements needs quantitative per-group deltas with confidence intervals to substantiate that latent communication (rather than other components) is the main contributor.
Minor comments (2)
- [Methods] The notation distinguishing multimodal routing capsules from model capability tokens would benefit from an explicit diagram or equation set in the methods to improve clarity for readers implementing the latent communication step.
- [Abstract] The abstract mentions 'the code is available' but the manuscript should include a direct link or DOI in the main text rather than only the abstract for archival purposes.
Simulated Author's Rebuttal
Thank you for the constructive feedback on our manuscript. We appreciate the referee's recognition of the practical value of LatentRouter for efficient MLLM selection. We address each major comment below and will incorporate revisions to strengthen the presentation of our results and methods.
Point-by-point responses
-
Referee: [Abstract and Experiments] The reported outperformance on MMR-Bench and VL-RouterBench is presented without details on the training procedure, loss functions, error bars, statistical significance testing, or controls for post-hoc hyperparameter choices; these omissions are load-bearing because the central claim attributes the gains specifically to latent communication.
Authors: We agree that these details are necessary to substantiate the central claims. In the revised manuscript, we will expand the Experiments section with a full description of the training procedure (including data splits, optimizer settings, and training duration), the precise loss functions used for the distributional outcome head and capsule components, error bars computed across multiple random seeds, and statistical significance tests (such as paired t-tests or Wilcoxon tests) comparing LatentRouter against baselines. We will also document the hyperparameter selection process to confirm it was not post-hoc. These additions will provide clearer evidence linking performance gains to latent communication. revision: yes
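The seed-level significance test promised in this response might look like the following paired t-statistic; the accuracy numbers are invented placeholders, not reported results.

```python
import numpy as np

# Hypothetical per-seed accuracies; real values would come from training
# LatentRouter and a baseline router with the same five random seeds.
latent   = np.array([0.714, 0.721, 0.709, 0.718, 0.725])
baseline = np.array([0.693, 0.701, 0.688, 0.699, 0.705])

# Paired t-statistic on per-seed differences (pairing controls for
# seed-to-seed variation shared by both routers).
diff = latent - baseline
t = diff.mean() / (diff.std(ddof=1) / np.sqrt(len(diff)))
# Compare t against the t-distribution with len(diff) - 1 degrees of
# freedom; |t| well above ~2 suggests the gain is not seed noise.
```

A Wilcoxon signed-rank test on the same paired differences would be the rank-based alternative when normality of the differences is doubtful.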
-
Referee: [§3, Architecture] The bounded capsule correction is described as refining close decisions without allowing residual signals to dominate, but no equations or ablations isolating its effect relative to the distributional outcome head are provided; this mechanism is central to the counterfactual utility prediction and requires explicit validation.
Authors: We acknowledge the need for more explicit formalization of this component. In the revision, we will add the mathematical equations defining the bounded capsule correction and its interaction with the distributional outcome head. We will also include a new ablation study that isolates the correction's effect by comparing the full model to variants without it and to the outcome head alone, thereby validating its role in refining counterfactual utility predictions. revision: yes
-
Referee: [Results and Analysis] Table or figure reporting task-group results: The claim that gains are strongest on visual, layout-sensitive, or reasoning-oriented requirements needs quantitative per-group deltas with confidence intervals to substantiate that latent communication (rather than other components) is the main contributor.
Authors: We agree that quantitative per-group analysis with confidence intervals is required to support the claim. We will add a dedicated table (or extended figure) in the Results and Analysis section reporting performance deltas on task-group subsets (visual, layout, reasoning) with 95% confidence intervals. We will further include an ablation that disables latent communication and shows the reduction in gains specifically on these groups, confirming it as the primary contributor over other components. revision: yes
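The per-group confidence intervals promised in this response could be computed with a paired percentile bootstrap; the correctness arrays below are synthetic stand-ins for one task group.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical per-query correctness on one task group (e.g. layout-
# sensitive) for the full router and a no-latent-communication ablation.
full_model = (rng.random(300) < 0.72).astype(float)
ablated    = (rng.random(300) < 0.64).astype(float)

def bootstrap_delta_ci(a, b, n_boot=2000, alpha=0.05):
    """Percentile bootstrap CI for mean(a) - mean(b), resampling
    queries jointly so the pairing between systems is preserved."""
    n = len(a)
    idx = rng.integers(0, n, (n_boot, n))
    deltas = a[idx].mean(axis=1) - b[idx].mean(axis=1)
    return np.quantile(deltas, [alpha / 2, 1 - alpha / 2])

lo, hi = bootstrap_delta_ci(full_model, ablated)
# A CI excluding zero on a group supports attributing that group's
# gain to latent communication; a CI overlapping zero does not.
```

Repeating this per task group yields exactly the table of deltas with 95% intervals that the referee requests.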
Circularity Check
No significant circularity detected in derivation or claims
Full rationale
The paper describes an empirical ML routing system trained via supervised learning on observed performance labels for training queries. At inference, it uses only query-derived capsules, model tokens, and latent communication to produce counterfactual utility estimates without access to model outputs. This is a standard train-then-predict setup with no equations shown that would equate the reported performance gains or counterfactual predictions to the fitted parameters by construction. Experiments evaluate on separate benchmarks (MMR-Bench, VL-RouterBench) against fixed-model, feature-level, and learned-router baselines, with ablations attributing gains to latent communication on specific task groups. No load-bearing self-citations, uniqueness theorems, or ansatzes imported from prior author work are present in the text. The central claim remains independently testable via the described architecture and held-out evaluation.
Axiom & Free-Parameter Ledger
Free parameters (2)
- routing capsule dimensions and number
- model capability token embeddings
Axioms (1)
- Domain assumption: multimodal query requirements can be captured by a fixed set of learned routing capsules that interact meaningfully with model tokens.
Invented entities (2)
- multimodal routing capsules (no independent evidence)
- model capability token (no independent evidence)
Reference graph
Works this paper leans on
- [1] Pranjal Aggarwal, Aman Madaan, Ankit Anand, Srividya Pranavi Potharaju, Swaroop Mishra, Pei Zhou, Aditya Gupta, Dheeraj Rajagopal, Karthik Kappaganthu, Yiming Yang, Shyam Upadhyay, Manaal Faruqui, and Mausam. AutoMix: Automatically mixing language models. arXiv preprint arXiv:2310.12963, 2023.
- [2] Jyoti Aneja, Michael Harrison, Neel Joshi, Tyler LaBonte, John Langford, and Eduardo Salinas. Phi-4-reasoning-vision-15b technical report. arXiv preprint arXiv:2603.03975, 2026.
- [3] Shuai Bai, Keqin Chen, Xuejing Liu, Jialin Wang, Wenbin Ge, Sibo Song, Kai Dang, Peng Wang, Shijie Wang, Jun Tang, et al. Qwen2.5-VL technical report. arXiv preprint arXiv:2502.13923, 2025.
- [4] Lingjiao Chen, Matei Zaharia, and James Zou. FrugalGPT: How to use large language models while reducing cost and improving performance. arXiv preprint arXiv:2305.05176, 2023.
- [5] Shuhao Chen, Weisen Jiang, Baijiong Lin, James Kwok, and Yu Zhang. RouterDC: Query-based router by dual contrastive learning for assembling large language models. Advances in Neural Information Processing Systems, 37:66305–66328, 2024.
- [6] Xinghao Chen, Anhao Zhao, Heming Xia, Xuan Lu, Hanlin Wang, Yanjun Chen, Wei Zhang, Jian Wang, Wenjie Li, and Xiaoyu Shen. Reasoning beyond language: A comprehensive survey on latent chain-of-thought reasoning. arXiv preprint arXiv:2505.16782, 2025.
- [7] Zhe Chen, Weiyun Wang, Yue Cao, Yutong Liu, Zhangwei Gao, Erfei Cui, et al. Expanding performance boundaries of open-source multimodal models with model, data, and test-time scaling. arXiv preprint arXiv:2412.05271, 2024.
- [8] Xueqi Cheng, Catherine Yang, Yuying Zhao, Yu Wang, Hamid Karimi, and Tyler Derr. BTS: A comprehensive benchmark for tie strength prediction. In Proceedings of the 31st ACM SIGKDD Conference on Knowledge Discovery and Data Mining V.2, pages 5345–5354, 2025.
- [9] Xueqi Cheng, Minxing Zheng, Shixiang Zhu, and Yushun Dong. MisLeader: Defending against model extraction with ensembles of distilled models. arXiv preprint arXiv:2506.02362, 2025.
- [10] Jasper Dekoninck, Maximilian Baader, and Martin Vechev. A unified approach to routing and cascading for LLMs. arXiv preprint arXiv:2410.10347, 2024.
- [11]
- [12] Tao Feng, Yanzhen Shen, and Jiaxuan You. GraphRouter: A graph-based router for LLM selections. arXiv preprint arXiv:2410.03834, 2024.
- [13] Ling Fu, Biao Yang, Zhebin Kuang, Jiajun Song, Yuzhe Li, Linghao Zhu, Qidi Luo, Xinyu Wang, Hao Lu, Mingxin Huang, Zhang Li, Guozhi Tang, Bin Shan, Chunhui Lin, Qi Liu, Binghong Wu, Hao Feng, Hao Liu, Can Huang, Jingqun Tang, Wei Chen, Lianwen Jin, Yuliang Liu, and Xiang Bai. OCRBench v2: An improved benchmark for evaluating large multimodal models on vis...
- [14] Yash Goyal, Tejas Khot, Douglas Summers-Stay, Dhruv Batra, and Devi Parikh. Making the V in VQA matter: Elevating the role of image understanding in visual question answering. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 6904–6913, 2017.
- [15] Shibo Hao, Sainbayar Sukhbaatar, DiJia Su, Xian Li, Zhiting Hu, Jason E. Weston, and Yuandong Tian. Training large language models to reason in a continuous latent space. arXiv preprint arXiv:2412.06769, 2025.
- [16] Chenwei He, Xiangzhao Hao, Tianyu Yang, Yuxiang Ma, Yuheng Jia, Lingxiang Wu, Chaoyang Zhao, Haiyun Guo, and Jinqiao Wang. PLUME: Latent reasoning based universal multimodal embedding. arXiv preprint arXiv:2604.02073, 2026.
- [17] Qitian Jason Hu, Jacob Bieker, Xiuyu Li, Nan Jiang, Benjamin Keigwin, Gaurav Ranganath, Kurt Keutzer, and Shriyash Kaustubh Upadhyay. RouterBench: A benchmark for multi-LLM routing system. arXiv preprint arXiv:2403.12031, 2024.
- [18] Zhehao Huang, Baijiong Lin, Jingyuan Zhang, Jingying Wang, Yuhang Liu, Ning Lu, Tao Li, and Xiaolin Huang. VL-RouterBench: A benchmark for vision-language model routing, 2025.
- [19] Drew A. Hudson and Christopher D. Manning. GQA: A new dataset for real-world visual reasoning and compositional question answering. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 6700–6709, 2019.
- [20] Yizhang Jin, Jian Li, Yexin Liu, Tianjun Gu, Kai Wu, Zhengkai Jiang, Muyang He, Bo Zhao, Xin Tan, Zhenye Gan, Yabiao Wang, Chengjie Wang, and Lizhuang Ma. Efficient multimodal large language models: A survey. arXiv preprint arXiv:2405.10739, 2024.
- [21] Amita Kamath, Jack Hessel, and Kai-Wei Chang. Text encoders bottleneck compositionality in contrastive vision-language models. arXiv preprint arXiv:2305.14897, 2023.
- [22] Bo Li, Yuanhan Zhang, Pengchuan Li, Zicheng Zhang, Fanyi Pu, Ziwei Liu, et al. LLaVA-OneVision: Easy visual task transfer. arXiv preprint arXiv:2408.03326, 2024.
- [23] Cheng-Xu Liang, Yaoyuan Tian, Li Ming, Tianyang Wang, Ziqian Bi, Ming Liu, et al. A comprehensive survey and guide to multimodal large language models in vision-language tasks. arXiv preprint arXiv:2411.06284, 2024.
- [24] Yuliang Liu, Zhang Li, Mingxin Huang, Biao Yang, Wenwen Yu, Chunyuan Li, Xucheng Yin, Cheng-Lin Liu, Lianwen Jin, and Xiang Bai. OCRBench: On the hidden mystery of OCR in large multimodal models. arXiv preprint arXiv:2305.07895, 2023.
- [25] Pan Lu, Hritik Bansal, Tony Xia, Jiacheng Liu, Chunyuan Li, Hannaneh Hajishirzi, Hao Cheng, Kai-Wei Chang, Michel Galley, and Jianfeng Gao. MathVista: Evaluating mathematical reasoning of foundation models in visual contexts. arXiv preprint arXiv:2310.02255, 2023.
- [26] Haoxuan Ma, Guannan Lai, and Han-Jia Ye. MMR-Bench: A comprehensive benchmark for multimodal LLM routing. arXiv preprint arXiv:2601.17814, 2026.
- [27] Isaac Ong, Amjad Almahairi, Vincent Wu, Wei-Lin Chiang, Tianhao Wu, Joseph E. Gonzalez, M. Waleed Kadous, and Ion Stoica. RouteLLM: Learning to route LLMs with preference data. arXiv preprint arXiv:2406.18665, 2024.
- [28] Haoran Qiu, Anish Biswas, Zihan Zhao, Jayashree Mohan, Alind Khare, Esha Choukse, Inigo Goiri, Zeyu Zhang, Haiying Shen, Chetan Bansal, Ramachandran Ramjee, and Rodrigo Fonseca. ModServe: Modality- and stage-aware resource disaggregation for scalable multimodal model serving. arXiv preprint arXiv:2502.00937, 2025.
- [29] Yifei Shao, Kun Zhou, Ziming Xu, Mohammad Atif Quamar, Shibo Hao, Zhen Wang, Zhiting Hu, and Biwei Huang. Learning modal-mixed chain-of-thought reasoning with latent embeddings. arXiv preprint arXiv:2602.00574, 2026.
- [30] Amanpreet Singh, Vivek Natarajan, Meet Shah, Yu Jiang, Xinlei Chen, Devi Parikh, Marcus Rohrbach, and Dhruv Batra. Towards VQA models that can read. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 8317–8326, 2019.
- [31] Xin Tang, Youfang Han, Fangfei Gou, Wei Zhao, Xin Meng, Yang Yu, Jinguo Zhang, Yuanchun Shi, Yuntao Wang, and Tengxiang Zhang. ECVL-Router: Scenario-aware routing for vision-language models. arXiv preprint arXiv:2510.27256, 2025.
- [32] Zihang Tian, Rui Li, Jingsen Zhang, Xiaohe Bo, Wei Huo, and Xu Chen. HAPS: Hierarchical LLM routing with joint architecture and parameter search. arXiv preprint arXiv:2601.05903, 2026.
- [33] Clovis Varangot-Reille, Christophe Bouvard, Antoine Gourru, Mathieu Ciancone, Marion Schaeffer, and Francois Jacquenet. Doing more with less – implementing routing strategies in large language model-based systems: An extended survey. arXiv preprint arXiv:2502.00409, 2025.
- [34] Ke Wang, Junting Pan, Weikang Shi, Zimu Lu, Mingjie Zhan, and Hongsheng Li. Measuring multimodal mathematical reasoning with MATH-Vision dataset. arXiv preprint arXiv:2402.14804, 2024.
- [35] Peng Wang, Shuai Bai, Sinan Tan, Shijie Wang, Zhihao Fan, Jinze Bai, Keqin Chen, Xuejing Liu, Jialin Wang, Wenbin Ge, Yang Fan, Kai Dang, Mengfei Du, Xuancheng Ren, Rui Men, Dayiheng Liu, Chang Zhou, Jingren Zhou, and Junyang Lin. Qwen2-VL: Enhancing vision-language model's perception of the world at any resolution. arXiv preprint arXiv:2409.12191, 2024.
- [36] Qixun Wang, Yang Shi, Yifei Wang, Yuanxing Zhang, Pengfei Wan, Kun Gai, Xianghua Ying, and Yisen Wang. Monet: Reasoning in latent visual space beyond images and language. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2026.
- [37] Wulin Xie, Yi-Fan Zhang, Chaoyou Fu, Yang Shi, Bingyan Nie, Hongkai Chen, Zhang Zhang, Liang Wang, and Tieniu Tan. MME-Unify: A comprehensive benchmark for unified multimodal understanding and generation models. arXiv preprint arXiv:2504.03641, 2025.
- [38] Rongchao Xu, Lin Jiang, Dahai Yu, Ximiao Li, and Guang Wang. SynHAT: A two-stage coarse-to-fine diffusion framework for synthesizing human activity traces. arXiv preprint arXiv:2604.14705, 2026.
- [39] Yi Xu, Chengzu Li, Han Zhou, Xingchen Wan, Caiqi Zhang, Anna Korhonen, and Ivan Vulic. Visual planning: Let's think only with images. arXiv preprint arXiv:2505.11409, 2025.
- [40] Jiaqi Xue, Qian Lou, Jiarong Xing, and Heng Huang. R2-Router: A new paradigm for LLM routing with reasoning. arXiv preprint arXiv:2602.02823, 2026.
- [41] Zhibo Yang, Jun Tang, Zhaohai Li, Pengfei Wang, Jianqiang Wan, Humen Zhong, Xuejing Liu, Mingkun Yang, Peng Wang, Lianwen Jin, and Junyang Lin. CC-OCR: A comprehensive and challenging OCR benchmark for evaluating large multimodal models in literacy. arXiv preprint arXiv:2412.02210, 2024.
- [42] Shukang Yin, Chaoyou Fu, Sirui Zhao, Ke Li, Xing Sun, Tong Xu, and Enhong Chen. A survey on multimodal large language models. arXiv preprint arXiv:2306.13549, 2023.
- [43] Renrui Zhang, Dongzhi Jiang, Yichi Zhang, Haokun Lin, Ziyu Guo, Pengshuo Qiu, Aojun Zhou, Pan Lu, Kai-Wei Chang, Peng Gao, and Hongsheng Li. MathVerse: Does your multi-modal LLM truly see the diagrams in visual math problems? arXiv preprint arXiv:2403.14624, 2024.
- [44] Tuo Zhang, Asal Mehradfar, Dimitrios Dimitriadis, and Salman Avestimehr. Leveraging uncertainty estimation for efficient LLM routing. arXiv preprint arXiv:2502.11021, 2025.
- [45] Rui-Jie Zhu, Tianhao Peng, Tianhao Cheng, Xingwei Qu, Jinfa Huang, Dawei Zhu, Hao Wang, Kaiwen Xue, Xuanliang Zhang, Yong Shan, Tianle Cai, Taylor Kergan, Assel Kembay, Andrew Smith, Chenghua Lin, Binh Nguyen, Yuqi Pan, Yuhong Chou, Zefan Cai, Zhenhe Wu, Yongchi Zhao, Tianyu Liu, Jian Yang, Wangchunshu Zhou, Chujie Zheng, Chongxuan Li, Yuyin Zhou, Zhoujun... A survey on latent reasoning. arXiv preprint arXiv:2507.06203, 2025.