pith. machine review for the scientific record.

arxiv: 2605.13942 · v1 · submitted 2026-05-13 · 💻 cs.LG · cs.DC · cs.NI

Recognition: 2 theorem links · Lean Theorem

EMA: Efficient Model Adaptation for Learning-based Systems

Authors on Pith: no claims yet

Pith reviewed 2026-05-15 06:14 UTC · model grok-4.3

classification 💻 cs.LG · cs.DC · cs.NI
keywords model adaptation · learning-based systems · state transformers · data labeling · dynamic environments · resource management · network optimization · efficient retraining

The pith

EMA lets learning-based systems adapt to changing environments by aligning new states to past ones and prioritizing useful data labels, cutting retraining costs.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

The paper presents EMA as a system that helps machine learning models used in networked and resource-management tasks adjust when input conditions and goals evolve, without full retraining from scratch each time. It achieves this through state transformers that map fresh inputs onto similar previously seen states so models can start from a warmer point, plus a utility-based scheme that decides which data points to label next while weighing labeling effort against training gains. A sympathetic reader would care because current learning-based systems incur high ongoing costs in data collection, labeling, and GPU time whenever environments shift, leading to slow responses and degraded performance. EMA claims to cut these overheads across varied system designs while still raising overall system metrics such as throughput.
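As a rough mental model of that two-step pipeline (every name, the similarity stand-in, and the toy utility below are this review's illustrative assumptions, not the paper's actual API):

```python
# Toy sketch of an EMA-style adaptation step (all names and formulas here are
# illustrative assumptions, not the paper's API). On an environment shift, we
# warm-start from the most similar previously seen state, then spend the
# labeling budget on the highest-utility samples only.
import math

def similarity(a, b):
    # Stand-in state similarity: negative Euclidean distance between feature
    # summaries (the paper reports an MMD-style similarity; this is simpler).
    return -math.dist(a, b)

def adapt(new_state, prior_states, unlabeled, utility, budget):
    # 1. State alignment: pick the prior state closest to the new one and
    #    reuse its checkpoint as the warm start.
    source = max(prior_states, key=lambda s: similarity(s["features"], new_state))

    # 2. Utility-based labeling: rank unlabeled samples and label only the
    #    top `budget`, trading labeling cost against training gain.
    to_label = sorted(unlabeled, key=utility, reverse=True)[:budget]

    return source["checkpoint"], to_label

prior = [{"features": [0.1, 0.9], "checkpoint": "ckpt_A"},
         {"features": [0.8, 0.2], "checkpoint": "ckpt_B"}]
ckpt, picks = adapt([0.75, 0.25], prior,
                    unlabeled=[3, 1, 4, 1, 5],
                    utility=lambda x: x,  # toy utility: larger is better
                    budget=2)
print(ckpt, picks)  # ckpt_B [5, 4]
```

Fine-tuning from `ckpt` on only the picked samples, rather than retraining from scratch on everything, is the source of the claimed cost savings.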

Core claim

EMA is the first model adaptation system for learning-based systems that uses state transformers to align the input state of a new environment with previously similar states for warm-start adaptation and applies utility-based labeling prioritization to balance the tradeoff between training and labeling costs, thereby reducing adaptation overhead in heterogeneous, long-running, and dynamic settings.

What carries the argument

State transformers that map new environment inputs onto similar prior states combined with utility-based labeling prioritization that selects high-utility data while trading off training versus labeling expense.
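One way to picture that tradeoff, under the assumption (ours, not a definition the paper states in this summary) that utility can be scored as estimated training gain minus labeling cost:

```python
# Hypothetical utility-based labeling prioritization: score each candidate by
# estimated training gain minus labeling cost, keep only positive-utility
# candidates, highest first. The scoring rule is an assumption for
# illustration, not the paper's algorithm.

def prioritize(candidates, gain, cost):
    # candidates: sample ids; gain/cost: dicts mapping id -> estimated value
    scored = sorted(candidates, key=lambda c: gain[c] - cost[c], reverse=True)
    return [c for c in scored if gain[c] - cost[c] > 0]

gain = {"a": 0.9, "b": 0.4, "c": 0.2}
cost = {"a": 0.3, "b": 0.5, "c": 0.1}
print(prioritize(["a", "b", "c"], gain, cost))  # ['a', 'c']
```

Sample "b" is skipped because its label would cost more than the training benefit it is expected to buy, which is the tradeoff the referee asks the authors to pin down precisely.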

If this is right

  • Adaptation costs such as GPU training time fall by 14.9 to 42.4 percent across the eight tested learning-based systems.
  • System-level metrics such as network throughput rise by 6.9 to 31.3 percent after adaptation.
  • The approach works with diverse existing system and model architectures without requiring major redesigns.
  • Both expensive model retraining and the often-overlooked data-labeling step are addressed in one integrated pipeline.
  • Long-running systems become more responsive to ongoing changes in load or objectives.

Where Pith is reading between the lines

These are editorial extensions of the paper, not claims the author makes directly.

  • The same state-alignment idea could be tested in non-network domains such as cloud autoscaling or robotic control where environments also drift gradually.
  • If labeling prioritization proves robust, it might lower the human effort needed to maintain deployed learning systems over months or years.
  • Designers of online learning pipelines might adopt state similarity checks as a lightweight alternative to full continual-learning retraining loops.
  • A follow-up experiment could measure how well the utility scores generalize when the underlying model architecture changes after initial deployment.

Load-bearing premise

State transformers can reliably map new inputs to similar earlier states across different system designs, and the utility scoring will not skip critical new decision data needed for accurate model updates.
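Figure 6 suggests the similarity measure is MMD-based. A minimal linear-kernel MMD sketch (an illustrative stand-in, not the paper's implementation) shows how distribution closeness could select a warm-start source:

```python
# Linear-kernel MMD^2 between two sample sets reduces to the squared distance
# between their feature means. Small MMD means the new environment's input
# distribution resembles a stored state, making that state's checkpoint a
# plausible warm start. Data below is invented for illustration.

def mmd_linear(X, Y):
    # With k(x, y) = x . y, MMD^2(X, Y) = ||mean(X) - mean(Y)||^2.
    mx = [sum(col) / len(X) for col in zip(*X)]
    my = [sum(col) / len(Y) for col in zip(*Y)]
    return sum((a - b) ** 2 for a, b in zip(mx, my))

old = [[1.0, 2.0], [1.2, 1.8]]   # samples from a stored prior state
near = [[1.1, 1.9], [1.0, 2.1]]  # new environment, similar distribution
far = [[5.0, 0.2], [4.8, 0.0]]   # new environment, dissimilar

print(mmd_linear(old, near) < mmd_linear(old, far))  # True
```

The failure mode named in the premise is exactly when this kind of score is small yet the decision-relevant data has shifted, so the utility scorer must still surface the new critical samples.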

What would settle it

Run EMA on one of the evaluated systems in a controlled environment shift where the state transformers produce poor alignments, then measure whether the claimed cost reductions and performance gains disappear.

Figures

Figures reproduced from arXiv: 2605.13942 by Daiyang Yu, Fan Lai, Xinyu Chen, Yan Liang, Yaqi Qiao, Yihan Zhang.

Figure 3. Data labeling is costly in learning-based sys…

Figure 2. In FLUX (flow size prediction for network…

Figure 4. EMA repurposes operational knowledge of similar environments to optimize systems adaptation.

Figure 5. EMA identifies a source state from prior de…

Figure 6. MMD similarity exhibits a stronger positive…

Figure 9. Too frequent or infrequent collection incurs suboptimal costs. (Caption continues: feedback incurs resource usage and user experience penalties [17, 23]; simulator-based feedback is limited by simulation latency [37, 39]; and in some tasks, labels require costly human annotation [38]. EMA augments existing in-network learning systems like Caravan [38], which leverage different labeling agencies, e.g., combining LLMs an…)

Figure 11. In streaming environments with dynamic attack arrivals, an IDS-LSTM adapts to distribution shifts in network traffic with EMA, achieving better online intrusion detection. (Panels: (a) time to accuracy on DOTE, normalized network throughput vs. training epoch, comparing DOTE + Caravan against + Caravan + EMA; (b) time to accuracy on Flux, flow-prediction R-score vs. training epoch.)

Figure 12. EMA enables faster adaptation and better…

Figure 14. Performance breakdown of EMA design. (Latency overhead: DOTE 2.9%, MimicNet 1.4%, Flux 0.3%, FIRM 0.1%.)

Figure 16. Accounting for heterogeneous data labeling costs is important. (Adjacent text describes two ablation variants: EMA w/o State Transformer, which bypasses the State Transformer and starts tuning from the current global model, and EMA w/o Labeling Agent, which disables the Labeling Agent during adaptation.)

Figure 17. EMA improves cost-effectiveness under different data labeling settings. (Panels: (a) Flux, flow-prediction R-score and improvement factor vs. full sampling size; (b) DOTE, normalized network throughput and improvement factor vs. full sampling size ×10³.)

Figure 18. Light-weight data transformer samples data…

Figure 21. EMA's Training-Phase Adaptation Modules improve NetLLM's performance under the same cost on the cluster job scheduling task. (Adjacent listing shows the EMA Python API: a local EMA agent is created from a task config via EMA.create_agent and connected to the EMA Orchestrator, and input state is transformed by the EMA pre-training module.)

Figure 22. EMA offers friendly APIs to enable efficient…
read the original abstract

Machine learning (ML) is increasingly applied to optimize system performance in tasks such as resource management and network simulation. Unlike traditional ML tasks (e.g., image classification), networked systems often operate in heterogeneous, long-running, and dynamic environment states, where input conditions (e.g., network loads) and operational objectives can shift over time and across settings. Existing learning-based systems offer little support for adaptation, resulting in costly model training, extensive data collection, degraded system performance, and slow responsiveness. This paper presents EMA, the first model adaptation system supporting learning-based systems to adapt to evolving environments with minimal operational overhead. EMA takes a system-driven, data-centric approach that accommodates diverse system and model designs while addressing two key deployment challenges. First, it reduces expensive model training by introducing state transformers that align the input state of a new environment with previously similar states, allowing models to warm-start adaptation. Second, it addresses the often-overlooked yet costly process of data labeling--collecting ground truth for exploring and training on various system decisions--by prioritizing labeling high-utility data while balancing the tradeoff between training and labeling cost. Evaluations on eight representative learning-based systems show that EMA reduces adaptation costs (e.g., GPU training time) by 14.9-42.4% while improving system performance (e.g., network throughput) by 6.9-31.3%.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, this is the friction.

Referee Report

2 major / 2 minor

Summary. The paper proposes EMA, the first model adaptation system for learning-based systems operating in heterogeneous, long-running, and dynamic environments. It uses state transformers to align new environment inputs with previously observed similar states, enabling warm-start model adaptation to reduce training overhead, and introduces a high-utility data prioritization mechanism to balance labeling and training costs. Evaluations on eight representative systems report adaptation cost reductions of 14.9-42.4% (e.g., GPU time) and system performance improvements of 6.9-31.3% (e.g., network throughput).

Significance. If the state transformers prove reliable for cross-system alignment and the prioritization avoids missing critical data, EMA could meaningfully lower barriers to maintaining ML-optimized systems under environmental drift, a practical gap in current deployments. The data-centric design that accommodates diverse models is a positive aspect, and the multi-system empirical evaluation provides a starting point for assessing real-world utility, though stronger validation would increase its impact.

major comments (2)
  1. [Abstract and Evaluation] Abstract and evaluation sections: quantitative gains are reported on eight systems without specifying baselines, statistical tests, exact experimental setups, or controls for confounds, which limits substantiation of the central claims on cost reduction and performance improvement.
  2. [Methodology (state transformers)] State transformers section: the mechanism for mapping new inputs to similar prior states lacks reported details on distance metrics, embedding construction, or failure cases for structural divergence, and no per-system ablation on alignment accuracy is provided despite this being load-bearing for the warm-start benefit and reported cost savings.
minor comments (2)
  1. [Data prioritization mechanism] Clarify the precise definition of utility in the labeling prioritization and how the training-labeling tradeoff is quantified in the algorithm.
  2. [Figures] Ensure all figures include error bars or variance measures to support the reported percentage ranges.

Simulated Author's Rebuttal

2 responses · 0 unresolved

We thank the referee for their thorough review and valuable feedback on our manuscript. We address each of the major comments below, providing clarifications and outlining the revisions we plan to make to strengthen the paper.

read point-by-point responses
  1. Referee: [Abstract and Evaluation] Abstract and evaluation sections: quantitative gains are reported on eight systems without specifying baselines, statistical tests, exact experimental setups, or controls for confounds, which limits substantiation of the central claims on cost reduction and performance improvement.

    Authors: We acknowledge that the abstract and evaluation sections would benefit from more explicit details to support our quantitative claims. In the revised manuscript, we will specify the baselines used in our experiments, such as adaptation without state alignment and without prioritized labeling. We will also incorporate statistical tests to validate the significance of the reported cost reductions and performance improvements. Additionally, we will provide more precise descriptions of the experimental setups, including environment parameters and hardware configurations, and discuss how we controlled for potential confounding factors like varying rates of environmental change. These changes will be reflected in an updated abstract and expanded evaluation section. revision: yes

  2. Referee: [Methodology (state transformers)] State transformers section: the mechanism for mapping new inputs to similar prior states lacks reported details on distance metrics, embedding construction, or failure cases for structural divergence, and no per-system ablation on alignment accuracy is provided despite this being load-bearing for the warm-start benefit and reported cost savings.

    Authors: We agree that additional details on the state transformers are needed for reproducibility and to fully substantiate their contribution. In the revision, we will elaborate on the distance metrics employed for state similarity, the methods used to construct embeddings from system states, and potential failure cases when there is significant structural divergence between environments. We will also add per-system ablation studies measuring alignment accuracy and its correlation with the observed adaptation cost savings. This will better illustrate why the warm-start approach is effective across the evaluated systems. revision: yes

Circularity Check

0 steps flagged

No circularity: EMA is an empirical engineering system with independent evaluation results

full rationale

The paper introduces EMA as a practical adaptation framework using state transformers for input alignment and utility-based labeling prioritization. No equations, derivations, or predictions are presented that reduce by construction to fitted parameters or self-citations. The central claims rest on empirical measurements across eight heterogeneous systems (14.9-42.4% cost reduction, 6.9-31.3% performance gains), which are externally falsifiable and not forced by any internal definition or prior self-citation chain. The approach is data-centric and system-driven without renaming known results or smuggling ansatzes. This is a standard systems contribution whose validity depends on the reported experiments rather than any self-referential logic.

Axiom & Free-Parameter Ledger

0 free parameters · 2 axioms · 2 invented entities

The central claim rests on the unverified effectiveness of two new components introduced by the paper: state transformers and the labeling prioritization logic. These are treated as domain assumptions without independent evidence beyond the reported evaluations.

axioms (2)
  • domain assumption State transformers can align inputs from new environments with previously similar states across diverse system and model designs
    Invoked to enable warm-start adaptation without full retraining.
  • domain assumption Prioritizing high-utility data for labeling balances training and labeling costs effectively in dynamic settings
    Core of the data-centric approach described.
invented entities (2)
  • state transformers no independent evidence
    purpose: Map new environment states to similar prior states for model warm-start
    New component introduced to reduce training overhead
  • high-utility data prioritization mechanism no independent evidence
    purpose: Select data points to label while trading off labeling vs training cost
    New data-centric technique for the adaptation pipeline

pith-pipeline@v0.9.0 · 5559 in / 1383 out tokens · 48052 ms · 2026-05-15T06:14:30.849439+00:00 · methodology


Lean theorems connected to this paper

Citations machine-checked in the Pith Canon. Every link opens the source theorem in the public Lean library.

What do these tags mean?
matches — the paper's claim is directly supported by a theorem in the formal canon.
supports — the theorem supports part of the paper's argument, but the paper may add assumptions or extra steps.
extends — the paper goes beyond the formal theorem; the theorem is a base layer rather than the whole result.
uses — the paper appears to rely on the theorem as machinery.
contradicts — the paper's claim conflicts with a theorem or certificate in the canon.
unclear — Pith found a possible connection, but the passage is too broad, indirect, or ambiguous to say the theorem truly supports the claim.

Reference graph

Works this paper leans on

43 extracted references · 43 canonical work pages · 1 internal anchor

  1. [1]

    Brendan McMahan, Ilya Mironov, Kunal Talwar, and Li Zhang

    Martin Abadi, Andy Chu, Ian Goodfellow, H. Brendan McMahan, Ilya Mironov, Kunal Talwar, and Li Zhang. 2016. Deep Learning with Differential Privacy(CCS)

  2. [2]

    Venkat Arun and Hari Balakrishnan. 2018. Copa: Practical Delay-Based Congestion Control for the Internet. InNSDI

  3. [3]

    Simon Eismann, Long Bui, Johannes Grohmann, Cristina Abad, Nikolas Herbst, and Samuel Kounev. 2021. Sizeless: Predicting the optimal size of serverless functions. InMiddleware. 248–259

  4. [4]

    Xianghong Fang, Haoli Bai, Ziyi Guo, Bin Shen, Steven Hoi, and Zenglin Xu. 2020. DART: Domain-adversarial residual-transfer net- works for unsupervised cross-domain image classification.Neural Networks127 (2020), 182–192

  5. [5]

    Guy Hacohen, Avihu Dekel, and Daphna Weinshall. 2022. Active learning on a budget: Opposite strategies suit high and low budgets. arXiv preprint arXiv:2202.02794(2022)

  6. [6]

    Syed Usman Jafri, Sanjay Rao, Vishal Shrivastav, and Mohit Tawar- malani. 2024. Leo: Online ML-based Traffic Classification at Multi- Terabit Line Rate. InNSDI

  7. [7]

    Ajay J Joshi, Fatih Porikli, and Nikolaos Papanikolopoulos. 2009. Multi- class active learning for image classification. InCVPR

  8. [8]

    Madhyastha, and Mosharaf Chowd- hury

    Fan Lai, Yinwei Dai, Harsha V. Madhyastha, and Mosharaf Chowd- hury. 2023. ModelKeeper: Accelerating DNN Training via Automated Training Warmup. InNSDI

  9. [9]

    Madhyastha, and Mosharaf Chowd- hury

    Fan Lai, Xiangfeng Zhu, Harsha V. Madhyastha, and Mosharaf Chowd- hury. 2021. Oort: Efficient Federated Learning via Guided Participant Selection. InOSDI

  10. [10]

    Parnell, Andreea Anghel, and Haralam- pos Pozidis

    Malgorzata Lazuka, Thomas P. Parnell, Andreea Anghel, and Haralam- pos Pozidis. 2022. Search-based Methods for Multi-Cloud Configura- tion. InCLOUD

  11. [11]

    David D Lewis. 1995. A sequential algorithm for training text classifiers: Corrigendum and additional data. InAcm Sigir Forum, Vol. 29. ACM New York, NY, USA, 13–19

  12. [12]

    Wenxin Li, Xin He, Yuan Liu, Keqiu Li, Kai Chen, Zhao Ge, Zewei Guan, Heng Qi, Song Zhang, and Guyue Liu. 2024. Flow scheduling with imprecise knowledge. InNSDI

  13. [13]

    Chieh-Jan Mike Liang, Zilin Fang, Yuqing Xie, Fan Yang, Zhao Lucis Li, Li Lyna Zhang, Mao Yang, and Lidong Zhou. 2023. On Modular Learning of Distributed Systems for Predicting End-to-End Latency. InNSDI

  14. [14]

    Chieh-Jan Mike Liang, Hui Xue, Mao Yang, Lidong Zhou, Lifei Zhu, Zhao Lucis Li, Zibo Wang, Qi Chen, Quanlu Zhang, Chuanjie Liu, and Wenjun Dai. 2020. AutoSys: The Design and Operation of Learning- Augmented Systems. InATC

  15. [15]

    Xudong Liao, Han Tian, Chaoliang Zeng, Xinchen Wan, and Kai Chen

  16. [16]

    InEuroSys

    Astraea: Towards Fair and Efficient Learning-based Congestion Control. InEuroSys

  17. [17]

    Mingsheng Long, Yue Cao, Jianmin Wang, and Michael Jordan. 2015. Learning transferable features with deep adaptation networks. In ICML

  18. [18]

    Hongzi Mao, Ravi Netravali, and Mohammad Alizadeh. 2017. Neural Adaptive Video Streaming with Pensieve. InSIGCOMM

  19. [19]

    Hongzi Mao, Malte Schwarzkopf, Shaileshh Bojja Venkatakrishnan, Zili Meng, and Mohammad Alizadeh. 2019. Learning scheduling algo- rithms for data processing clusters. InSIGCOMM

  20. [20]

    James Newling and François Fleuret. 2017. K-Medoids For K-Means Seeding. InNIPS

  21. [21]

    S. J. Pan, I. W. Tsang, J. T. Kwok, and Q. Yang. 2011. Domain Adapta- tion via Transfer Component Analysis.IEEE Transactions on Neural Networks22, 2 (Feb. 2011), 199–210

  22. [22]

    Lorenzo Pappone, Alessio Sacco, Flavio Esposito, et al . 2025. Mu- tant: Learning Congestion Control from Existing Protocols via Online Reinforcement Learning. InNSDI

  23. [23]

    Yarin Perry, Felipe Vieira Frujeri, Chaim Hoch, Srikanth Kandula, Ishai Menache, Michael Schapira, and Aviv Tamar. 2023. DOTE: Rethinking (Predictive) WAN Traffic Engineering. InNSDI

  24. [24]

    Banerjee, Saurabh Jha, Zbigniew T

    Haoran Qiu, Subho S. Banerjee, Saurabh Jha, Zbigniew T. Kalbarczyk, and Ravishankar K. Iyer. 2020. FIRM: An Intelligent Fine-grained Resource Management Framework for SLO-Oriented Microservices. InOSDI

  25. [25]

    Haoran Qiu, Weichao Mao, Archit Patke, Shengkun Cui, Chen Wang, Hubertus Franke, Zbigniew Kalbarczyk, Tamer Basar, and Ravi K. Iyer

  26. [26]

    FLASH: Fast Model Adaptation in ML-Centric Cloud Platforms. InMLSys

  27. [27]

    Haoran Qiu, Weichao Mao, Chen Wang, Hubertus Franke, Alaa Youssef, Zbigniew T Kalbarczyk, Tamer Başar, and Ravishankar K Iyer. 2023. AWARE: Automate workload autoscaling with reinforcement learning in production cloud systems. InATC

  28. [28]

    Ozan Sener and Silvio Savarese. 2017. Active learning for convolutional neural networks: A core-set approach.arXiv preprint arXiv:1708.00489 (2017)

  29. [29]

    Burr Settles. 2009. Active learning literature survey. University of Wisconsin-Madison Department of Computer Sciences. 13

  30. [30]

    Ghorbani

    Iman Sharafaldin, Arash Habibi Lashkari, and Ali A. Ghorbani. 2018. Toward Generating a New Intrusion Detection Dataset and Intrusion Traffic Characterization. InInternational Conference on Information Systems Security and Privacy

  31. [31]

    Han Tian, Xudong Liao, Decang Sun, Chaoliang Zeng, Yilun Jin, Junxue Zhang, Xinchen Wan, Zilong Wang, Yong Wang, and Kai Chen. 2025. Achieving Fairness Generalizability for Learning-based Congestion Control with Jury. InEuroSys

  32. [32]

    Vojislav Ðukić, Sangeetha Abdu Jyothi, Bojan Karlaš, Muhsen Owaida, Ce Zhang, and Ankit Singla. 2019. Is advance knowledge of flow sizes a plausible assumption?. InNSDI

  33. [33]

    Zibo Wang, Pinghe Li, Chieh-Jan Mike Liang, Feng Wu, and Francis Y. Yan. 2024. Autothrottle: A Practical Bi-Level Approach to Resource Management for SLO-Targeted Microservices. InNSDI

  34. [34]

    Zhaodong Wang, Samuel Lin, Guanqing Yan, Soudeh Ghorbani, Min- lan Yu, Jiawei Zhou, Nathan Hu, Lopa Baruah, Sam Peters, Srikanth Kamath, Jerry Yang, and Ying Zhang. 2025. Intent-Driven Network Management with Multi-Agent LLMs: The Confucius Framework. In SIGCOMM

  35. [35]

    Duo Wu, Xianda Wang, Yaqi Qiao, Zhi Wang, Junchen Jiang, Shuguang Cui, and Fangxin Wang. 2024. NetLLM: Adapting Large Language Models for Networking. InSIGCOMM

  36. [36]

    Zhiying Xu, Francis Y Yan, Rachee Singh, Justin T Chiu, Alexander M Rush, and Minlan Yu. 2023. Teal: Learning-accelerated optimization of WAN traffic engineering. InSIGCOMM

  37. [37]

    Yan, Hudson Ayers, Chenzhi Zhu, Sadjad Fouladi, James Hong, Keyi Zhang, Philip Levis, and Keith Winstein

    Francis Y. Yan, Hudson Ayers, Chenzhi Zhu, Sadjad Fouladi, James Hong, Keyi Zhang, Philip Levis, and Keith Winstein. 2020. Learning in situ: a randomized experiment in video streaming. InNSDI

  38. [38]

    Ying Yan, Liang Jeff Chen, and Zheng Zhang. 2014. Error-bounded sampling for analytics on big sparse data.VLDB7, 13 (2014), 1508– 1519

  39. [39]

    Qingqing Yang, Xi Peng, Li Chen, Libin Liu, Jingze Zhang, Hong Xu, Baochun Li, and Gong Zhang. 2022. Deepqueuenet: Towards scalable and generalized network performance estimation with packet-level visibility. InSIGCOMM

  40. [40]

    Qizheng Zhang, Ali Imran, Enkeleda Bardhi, Tushar Swamy, Nathan Zhang, Muhammad Shahbaz, and Kunle Olukotun. 2024. Caravan: Practical Online Learning of In-Network ML Models with Labeling Agents. InOSDI

  41. [41]

    Qizhen Zhang, Kelvin K. W. Ng, Charles Kazer, Shen Yan, João Sedoc, and Vincent Liu. 2021. MimicNet: fast performance estimates for data center networks with machine learning. InSIGCOMM

  42. [42]

    Edward Suh, and Christina Delimitrou

    Yanqi Zhang, Weizhe Hua, Zhuangzhuang Zhou, G. Edward Suh, and Christina Delimitrou. 2021. Sinan: ML-based and QoS-aware resource management for cloud microservices. InASPLOS

  43. [43]

    Haizhong Zheng, Rui Liu, Fan Lai, and Atul Prakash. 2023. Coverage- centric Coreset Selection for High Pruning Rates. InICLR. 14 A Privacy Concern in EMA In this section, we prove that adding small noise to each data sample in the source dataset does not compromise the performance of TCA. Since TCA minimizes the Maximum Mean Discrepancy (MMD) between the ...