Recognition: 2 Lean theorem links
On What We Can Learn from Low-Resolution Data
Pith reviewed 2026-05-13 05:22 UTC · model grok-4.3
The pith
Low-resolution data improves model performance on high-resolution tasks when high-resolution samples are scarce.
A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.
Core claim
Low-resolution observations from the same distribution contribute positively to training even when the final model is tested on high-resolution inputs. Their relative value is bounded by Kullback-Leibler divergence measures of influence change and of information loss under downsampling, yielding measurable performance gains when high-resolution data is scarce.
What carries the argument
A Kullback-Leibler divergence measure of how a data point's influence on the trained model changes with its resolution, used to derive bounds relating the contributions of high- and low-resolution observations to the information lost under downsampling.
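As a rough illustration of the kind of quantity involved, the sketch below measures information loss under downsampling as a KL divergence between intensity histograms. This is not the paper's influence measure; the average-pooling factor, bin count, and random test image are all arbitrary choices made for the example.

```python
import numpy as np

def kl_divergence(p, q, eps=1e-12):
    """KL(p || q) for discrete distributions given as 1-D arrays of weights."""
    p = np.clip(np.asarray(p, dtype=float), eps, None)
    q = np.clip(np.asarray(q, dtype=float), eps, None)
    p, q = p / p.sum(), q / q.sum()
    return float(np.sum(p * np.log(p / q)))

def downsample(x, factor):
    """Average-pool a square image by `factor`: a deterministic many-to-one map."""
    h, w = x.shape
    return x.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

# Crude proxy for information lost under downsampling: compare the intensity
# histogram of an image before and after average-pooling.
rng = np.random.default_rng(0)
img = rng.random((32, 32))
low = downsample(img, 4)
p, _ = np.histogram(img, bins=16, range=(0, 1))
q, _ = np.histogram(low, bins=16, range=(0, 1))
loss_proxy = kl_divergence(p, q)  # non-negative; 0 iff the histograms agree
```

Averaging 4x4 blocks concentrates intensities around the mean, so the low-resolution histogram narrows and the proxy divergence is strictly positive; the paper's bounds concern the analogous divergence at the level of a data point's influence on the posterior, not pixel histograms.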
If this is right
- Training sets can be usefully expanded with low-resolution samples to raise high-resolution accuracy when high-resolution data volume is limited.
- Data collection in constrained environments can favor greater volume at lower resolution without complete loss of training value.
- The performance benefit from low-resolution augmentation holds across architectures including vision transformers and convolutional networks.
- Theoretical bounds on relative contributions can guide decisions on which low-resolution samples to include in a mixed-resolution training set.
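The first bullet can be made concrete with a minimal recipe, assuming low-resolution samples are simply upsampled to the target size before mixing; the paper's exact augmentation procedure is not specified here, and nearest-neighbour upsampling is one arbitrary choice among several.

```python
import numpy as np

def upsample_nearest(x, factor):
    """Nearest-neighbour upsampling so low-res samples match the high-res size."""
    return np.repeat(np.repeat(x, factor, axis=0), factor, axis=1)

def mixed_training_set(high_res, low_res, factor):
    """Concatenate scarce high-res samples with upsampled low-res ones."""
    upsampled = [upsample_nearest(x, factor) for x in low_res]
    return np.stack(list(high_res) + upsampled)

rng = np.random.default_rng(1)
high = [rng.random((32, 32)) for _ in range(4)]   # scarce high-resolution pool
low = [rng.random((8, 8)) for _ in range(64)]     # plentiful low-resolution pool
train = mixed_training_set(high, low, factor=4)   # shape (68, 32, 32)
```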
Where Pith is reading between the lines
- In settings with data from multiple devices or institutions, this could support mixing resolutions more effectively if the shared-distribution premise holds.
- The KL-based bounds might be adapted to other modalities such as time-series or audio where downsampling is routine.
- Training procedures could incorporate the derived influence measures to dynamically weight low-resolution samples during optimization.
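The last bullet is speculative, but the simplest form of such a scheme is a scalar weight on low-resolution losses. The hypothetical `w_low` below stands in for whatever influence-derived weighting a training procedure might use; the paper derives no such rule.

```python
import numpy as np

def weighted_mixed_loss(losses_high, losses_low, w_low):
    """Mean training loss with low-resolution samples down-weighted by w_low in [0, 1].

    w_low = 1 treats both pools equally; w_low = 0 ignores low-res samples entirely.
    """
    losses_high = np.asarray(losses_high, dtype=float)
    losses_low = np.asarray(losses_low, dtype=float)
    total = losses_high.sum() + w_low * losses_low.sum()
    count = losses_high.size + w_low * losses_low.size
    return float(total / count)

loss_equal = weighted_mixed_loss([1.0, 3.0], [5.0, 7.0], w_low=1.0)      # plain mean: 4.0
loss_high_only = weighted_mixed_loss([1.0, 3.0], [5.0, 7.0], w_low=0.0)  # high-res only: 2.0
```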
Load-bearing premise
The low-resolution observations come from the same underlying distribution as the high-resolution targets, and the KL-based influence measure accurately reflects practical information loss under downsampling.
What would settle it
An experiment showing that adding low-resolution data from the same distribution either fails to improve or degrades performance on a high-resolution test set, or where measured gains fall outside the theoretical bounds derived from the KL analysis.
Original abstract
Artificial intelligence systems typically rely on large, centrally collected datasets, a premise that does not hold in many real-world domains such as healthcare and public institutions. In these settings, data sharing is often constrained by storage, privacy, or resource limitations. For example, small wearable devices may lack the bandwidth or energy capacity needed to store and transmit high-resolution data, leading to aggregation during data collection and thus a loss of information. As a result, datasets collected from different sources may consist of a mixture of high- and low-resolution samples. Despite the prevalence of this setting, it remains unclear how informative low-resolution data is when models are ultimately evaluated on high-resolution inputs. We provide a theoretical analysis based on the Kullback-Leibler divergence that characterises how the influence of a datapoint changes with resolution, and derive bounds that relate the relative contribution of high- and low-resolution observations to the information lost under downsampling. To support this analysis, we empirically demonstrate, using both a vision transformer and a convolutional neural network, that adding low-resolution data to the training set consistently improves performance when high-resolution data is scarce.
Editorial analysis
A structured set of objections, weighed in public.
Referee Report
Summary. The manuscript claims that low-resolution data remains informative for models ultimately evaluated on high-resolution inputs. It provides a KL-divergence analysis characterizing how a datapoint's influence varies with resolution, derives bounds relating the relative contributions of high- and low-resolution samples to downsampling-induced information loss, and empirically shows on both a vision transformer and a CNN that adding low-resolution samples consistently improves performance when high-resolution data is scarce.
Significance. If the theoretical bounds hold under the stated assumptions and the empirical gains are robust, the work addresses a practically relevant setting in domains such as healthcare and edge computing where mixed-resolution data arises from storage, privacy, or bandwidth constraints. The combination of a divergence-based characterization with experiments on standard architectures (ViT, CNN) offers both conceptual insight and actionable guidance for training under data scarcity.
major comments (2)
- [§3] §3 (theoretical analysis): the KL-based influence measure and subsequent bounds are derived under the assumption that low-resolution observations are drawn from the same underlying measure as the high-resolution targets (i.e., a simple marginal). The manuscript does not analyze the effect of a deterministic many-to-one downsampling operator inducing a pushforward measure, nor does it show that the scalar KL term tracks the scale-specific features a neural network can still exploit; this assumption is load-bearing for the claim that the derived bounds quantify usable training signal.
- [Experiments] Experimental section and associated tables: the reported consistent improvements lack explicit statements of the number of independent runs, the precise rule for selecting or excluding low-resolution samples, and whether error bars or statistical tests support the 'consistently improves' statement across different scarcity levels; without these controls it is unclear whether post-hoc choices affect the central empirical claim.
minor comments (2)
- [§2] Notation for the downsampling operator and the induced distributions could be introduced earlier and used consistently to avoid ambiguity when relating the KL term to practical information loss.
- [Abstract] The abstract states the architectures used but the main text would benefit from a brief reminder of the exact ViT and CNN variants and the resolution pairs tested.
Simulated Author's Rebuttal
We thank the referee for the constructive and detailed report. We address each major comment point-by-point below, indicating where revisions will be made.
Point-by-point responses
Referee: [§3] §3 (theoretical analysis): the KL-based influence measure and subsequent bounds are derived under the assumption that low-resolution observations are drawn from the same underlying measure as the high-resolution targets (i.e., a simple marginal). The manuscript does not analyze the effect of a deterministic many-to-one downsampling operator inducing a pushforward measure, nor does it show that the scalar KL term tracks the scale-specific features a neural network can still exploit; this assumption is load-bearing for the claim that the derived bounds quantify usable training signal.
Authors: We thank the referee for highlighting this foundational aspect of the analysis. The low-resolution observations are generated by applying a deterministic downsampling operator to high-resolution samples, which by definition induces the pushforward measure; our reference to a 'simple marginal' is intended to denote exactly this pushforward distribution of the low-resolution data. We agree that the manuscript would benefit from an explicit discussion of this equivalence and of the relationship between the scalar KL term and the features retained at lower resolution. In the revision we will add a short subsection in §3 that (i) formally identifies the low-resolution distribution as the pushforward measure and (ii) clarifies that the derived bounds quantify the relative contribution of this marginal without claiming to isolate scale-specific features; the empirical results on ViT and CNN are then presented as evidence that the retained signal remains usable by standard architectures. This is a partial revision: the core KL bounds themselves are unchanged, but their interpretation and grounding are strengthened.
Revision: partial
Referee: [Experiments] Experimental section and associated tables: the reported consistent improvements lack explicit statements of the number of independent runs, the precise rule for selecting or excluding low-resolution samples, and whether error bars or statistical tests support the 'consistently improves' statement across different scarcity levels; without these controls it is unclear whether post-hoc choices affect the central empirical claim.
Authors: We agree that these experimental details are necessary for reproducibility and to substantiate the central claim. In the revised manuscript we will add: (1) an explicit statement that all reported results are averages over 5 independent runs with distinct random seeds; (2) the precise selection rule: low-resolution samples are drawn uniformly at random from the low-resolution pool to reach the target scarcity ratio, with no post-hoc exclusion or cherry-picking; and (3) error bars showing standard deviation across runs together with paired t-test p-values confirming that the observed improvements are statistically significant (p < 0.05) at the majority of scarcity levels examined. These additions will be incorporated into the experimental section and the associated tables/figures.
Revision: yes
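The protocol promised in points (1)-(3) can be sketched as follows. The accuracy numbers are placeholders, not results from the paper; `scipy.stats.ttest_rel` implements the paired t-test the authors describe.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n_runs = 5  # matches the "5 independent runs with distinct random seeds" protocol

# Hypothetical per-run test accuracies at one scarcity level (placeholder numbers).
baseline = 0.70 + 0.01 * rng.standard_normal(n_runs)              # high-res data only
augmented = baseline + 0.03 + 0.01 * rng.standard_normal(n_runs)  # plus low-res data

# Paired t-test across runs, pairing on the shared random seed.
t_stat, p_value = stats.ttest_rel(augmented, baseline)
mean_gain = augmented.mean() - baseline.mean()
err_bar = augmented.std(ddof=1)      # standard deviation across runs, as promised
significant = p_value < 0.05
```

Pairing on seeds rather than running an unpaired test removes between-run variance that both conditions share, which is the appropriate design when each augmented run reuses a baseline run's seed.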
Circularity Check
No circularity: theory uses standard KL properties; empirical results are independent validation
Full rationale
The paper derives bounds on high- versus low-resolution contributions via KL divergence between distributions under downsampling, starting from the standard definition of KL and the assumption that low-res samples are pushforwards of the same underlying measure. This is not self-referential: the influence characterization follows directly from the KL formula without fitting to the performance claim or importing uniqueness from prior self-work. The empirical section on ViT/CNN training then tests the predicted improvement when high-res data is scarce, rather than re-deriving the same quantity from the fitted parameters. No step reduces by construction to its inputs, and the derivation chain remains self-contained against external benchmarks.
Axiom & Free-Parameter Ledger
axioms (1)
- Domain assumption: Kullback-Leibler divergence quantifies information loss under downsampling in a manner relevant to model influence.
Lean theorems connected to this paper
- IndisputableMonolith/Cost/FunctionalEquation.lean: washburn_uniqueness_aczel (tagged unclear)
  The relation between the paper passage and the cited Recognition theorem is unclear.
  Linked passage: Proposition 3 (Exact KL-divergence) … KL_h / KL_l = (log E[exp(−ℓ(θ, x_h))] + E[ℓ(θ, x_h)]) / …
- IndisputableMonolith/Foundation/RealityFromDistinction.lean: reality_from_one_distinction (tagged unclear)
  The relation between the paper passage and the cited Recognition theorem is unclear.
  Linked passage: Assumption 1 (Gibbs distribution): p(θ|X) ∝ exp(−∑ ℓ(θ, x))
What do these tags mean?
- matches: The paper's claim is directly supported by a theorem in the formal canon.
- supports: The theorem supports part of the paper's argument, but the paper may add assumptions or extra steps.
- extends: The paper goes beyond the formal theorem; the theorem is a base layer rather than the whole result.
- uses: The paper appears to rely on the theorem as machinery.
- contradicts: The paper's claim conflicts with a theorem or certificate in the canon.
- unclear: Pith found a possible connection, but the passage is too broad, indirect, or ambiguous to say the theorem truly supports the claim.