KubePACS: Kubernetes Cluster Using Performant, Highly Available, and Cost Efficient Spot Instances
Pith reviewed 2026-05-08 01:37 UTC · model grok-4.3
The pith
KubePACS picks Kubernetes spot instances by jointly optimizing real-time prices, workload performance benchmarks, and multi-node availability scores.
A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.
Core claim
KubePACS formulates instance-type selection as a multi-objective optimization that incorporates spot prices, performance benchmarks, and multi-node Spot Placement Scores, solves the problem efficiently with an Integer Linear Programming model guided by Golden Section Search, and integrates the outcome with Karpenter to jointly decide instance types and scaling while preserving availability.
What carries the argument
Multi-objective Integer Linear Programming model guided by Golden Section Search that balances cost, performance, and availability using real-time spot prices, benchmarks, and multi-node SPS data.
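To make the mechanism concrete, here is a minimal sketch of the shape such a pipeline could take: an outer Golden Section Search over a scalar trade-off weight, with each probe solved as a small integer program. PuLP appears in the paper's reference list, so it is used here, but the instance data, objective blend, and constraints are illustrative assumptions, not the authors' actual formulation.

```python
import pulp

PHI = (5 ** 0.5 - 1) / 2  # golden-ratio conjugate used by GSS, ~0.618

# Illustrative inputs: name -> (spot price $/h, benchmark score, availability in [0, 1]).
INSTANCES = {
    "m5.large":  (0.035, 100, 0.90),
    "c5.xlarge": (0.068, 230, 0.75),
    "r5.large":  (0.041, 110, 0.85),
}
REQUIRED_PERF = 400  # total benchmark score the node pool must supply
MIN_AVAIL = 0.80     # floor on the count-weighted mean availability score

def pool_perf_per_dollar(lam: float) -> float:
    """Solve one ILP for trade-off weight lam, then score the chosen pool
    by benchmark score per dollar of hourly spot spend."""
    prob = pulp.LpProblem("node_pool", pulp.LpMinimize)
    n = {k: pulp.LpVariable(k.replace(".", "_"), lowBound=0, upBound=10, cat="Integer")
         for k in INSTANCES}
    price = pulp.lpSum(p * n[k] for k, (p, _, _) in INSTANCES.items())
    risk = pulp.lpSum((1 - av) * p * n[k] for k, (p, _, av) in INSTANCES.items())
    prob += lam * price + (1 - lam) * risk  # blended cost/interruption-risk objective
    prob += pulp.lpSum(s * n[k] for k, (_, s, _) in INSTANCES.items()) >= REQUIRED_PERF
    # Linearized form of "mean availability of the chosen nodes >= MIN_AVAIL".
    prob += pulp.lpSum((av - MIN_AVAIL) * n[k] for k, (_, _, av) in INSTANCES.items()) >= 0
    prob.solve(pulp.PULP_CBC_CMD(msg=False))
    perf = sum(s * n[k].value() for k, (_, s, _) in INSTANCES.items())
    cost = sum(p * n[k].value() for k, (p, _, _) in INSTANCES.items())
    return perf / cost if cost else 0.0

# Outer Golden Section Search over lam in [0, 1], maximizing perf per dollar.
lo, hi = 0.0, 1.0
x1, x2 = hi - PHI * (hi - lo), lo + PHI * (hi - lo)
while hi - lo > 1e-2:
    if pool_perf_per_dollar(x1) > pool_perf_per_dollar(x2):
        hi, x2 = x2, x1
        x1 = hi - PHI * (hi - lo)
    else:
        lo, x1 = x1, x2
        x2 = lo + PHI * (hi - lo)
print(f"selected trade-off weight ~ {(lo + hi) / 2:.3f}")
```

The sketch re-solves the ILP at both probes on every iteration for clarity; a production loop would cache one probe per step, as standard GSS does.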
If this is right
- Kubernetes operators can run the same workloads on spot instances with materially higher throughput per dollar spent.
- The Karpenter integration lets existing clusters adopt the new selection logic without changing their scaling workflow.
- Workload-specific scaling of performance metrics lets the same system handle both general and specialized instance preferences (see the sketch after this list).
- Clusters stay available because the optimization explicitly includes multi-node placement scores rather than treating availability as an afterthought.
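A hypothetical illustration of that workload-specific scaling heuristic, assuming a simple multiplicative boost for instance families that match the workload's specialization; the boost factor and family tags are invented for illustration, not taken from the paper.

```python
# Boost the benchmark score of matching instance families before the scores
# enter the optimizer. Factor and family tags are assumptions.
SPECIALIZATION_BOOST = 1.5

def scaled_score(base_score: float, instance_type: str, workload_kind: str) -> float:
    """Scale the performance metric up when the instance family matches the
    workload's specialization (e.g., 'c' family for compute-bound jobs)."""
    family = instance_type.split(".")[0]
    matches = (workload_kind == "compute" and family.startswith("c")) or \
              (workload_kind == "memory" and family.startswith("r"))
    return base_score * SPECIALIZATION_BOOST if matches else base_score

print(scaled_score(230, "c5.xlarge", "compute"))  # 345.0 -- boosted
print(scaled_score(110, "r5.large", "compute"))   # 110 -- unchanged
```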
Where Pith is reading between the lines
- The same optimization structure could be applied to other container platforms or to on-demand instances when performance data is available.
- Cloud providers might begin publishing richer, workload-aware benchmark data if systems like this demonstrate consistent value.
- Longer-running experiments on GPU or memory-intensive jobs would test whether the reported gains generalize beyond the evaluated workloads.
Load-bearing premise
That current spot prices, performance benchmarks, and Spot Placement Scores remain reliable predictors of long-term cost, speed, and interruption risk once instances are actually running.
What would settle it
Deploy KubePACS and a price-only baseline on identical production workloads for multiple weeks, then compare measured total cost of ownership, actual throughput, and interruption frequency against the predictions made at provisioning time.
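A sketch of the bookkeeping such a head-to-head would need, with hypothetical field names and numbers; the point is that provisioning-time predictions and multi-week measurements of performance per dollar, plus observed interruption rate, are compared directly.

```python
from dataclasses import dataclass

@dataclass
class RunOutcome:
    dollars: float        # total measured spend over the multi-week run
    work_units: float     # total measured throughput (e.g., jobs completed)
    interruptions: int    # spot reclamations observed
    node_hours: float     # aggregate node uptime

def report(name: str, predicted_ppd: float, o: RunOutcome) -> None:
    """Compare provisioning-time predicted perf/$ against measured outcomes."""
    measured_ppd = o.work_units / o.dollars
    drift = (measured_ppd - predicted_ppd) / predicted_ppd
    interrupts_per_khr = 1000 * o.interruptions / o.node_hours
    print(f"{name}: perf/$ measured={measured_ppd:.2f} "
          f"(predicted {predicted_ppd:.2f}, drift {drift:+.1%}), "
          f"interruptions per 1000 node-hours={interrupts_per_khr:.1f}")

report("KubePACS",   3.1, RunOutcome(812.0, 2440.0, 7, 5020.0))
report("price-only", 2.9, RunOutcome(655.0, 1520.0, 19, 4100.0))
```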
Original abstract
Cloud users aim to minimize cost while maximizing performance by selecting the most suitable instance types for their workloads. To reduce expenses, spot instances have been widely adopted due to their steep discounts compared to on-demand pricing. However, their use introduces reliability risks due to potential interruptions, and existing research has primarily focused on mitigating this trade-off from a cost or availability perspective alone. Despite the diversity in hardware capabilities among instance types, current provisioning systems tend to ignore performance variation, selecting nodes solely based on minimum resource requirements. In this paper, we present KubePACS, a Kubernetes-native spot instance provisioning system that constructs node pools optimized for both cost and performance while guaranteeing high availability. KubePACS formulates the node selection process as a multi-objective optimization problem, incorporating real-time data such as spot prices, performance benchmarks, and availability scores, including the multi-node Spot Placement Score (SPS). It solves this problem efficiently using an Integer Linear Programming (ILP) approach guided by the Golden Section Search (GSS) algorithm to find the optimal configuration. By integrating with the Karpenter node autoscaler, KubePACS jointly optimizes instance-type selection and node scaling decisions within a standard provisioning workflow. KubePACS also adopts a novel heuristic to support workload-specific preferences by scaling performance metrics for specialized instances. Through extensive evaluation across synthetic and real-world workloads, KubePACS demonstrates on average 55.09% and up to 81.06% higher performance per dollar over state-of-the-art solutions such as Karpenter, SpotVerse, and SpotKube, which only reference the spot instance prices and limited availability data.
Editorial analysis
A structured set of objections, weighed in public.
Referee Report
Summary. The paper presents KubePACS, a Kubernetes-native spot instance provisioning system that formulates node selection as a multi-objective ILP problem solved using the Golden Section Search algorithm. It incorporates real-time spot prices, performance benchmarks, and multi-node Spot Placement Scores (SPS) to optimize for cost, performance, and availability, integrates with Karpenter, and uses a heuristic for workload-specific preferences. Through evaluations on synthetic and real-world workloads, it claims an average 55.09% and up to 81.06% higher performance per dollar compared to baselines like Karpenter, SpotVerse, and SpotKube.
Significance. If the claimed gains prove robust, KubePACS could meaningfully advance practical cost-performance optimization for spot-based Kubernetes deployments by jointly handling instance-type selection and scaling. The ILP+GSS formulation and integration with an existing autoscaler provide a concrete, deployable approach that goes beyond price-only or availability-only methods used in baselines. The explicit quantitative comparison to three prior systems is a strength, but only if the evaluation captures sustained behavior rather than point-in-time selection.
major comments (1)
- [Evaluation (abstract claims and implied experimental section)] The headline result (55.09% average and 81.06% maximum performance-per-dollar improvement) is load-bearing on the claim that real-time inputs (spot prices, benchmarks, SPS) produce node pools whose measured cost, throughput, and uptime match the selection-time predictions. The abstract states that baselines use only prices and limited availability data, yet provides no indication that KubePACS evaluation includes post-provisioning interruption modeling, price fluctuation during workload runs, or replacement overhead applied uniformly to all systems. If workloads are short or interruptions are omitted, the reported gains do not demonstrate production-relevant superiority.
minor comments (1)
- [Abstract] The abstract refers to 'extensive evaluation' and 'real-world workloads' without defining workload durations, interruption rates, statistical tests, or error bars; adding these details would strengthen verifiability of the 55.09%/81.06% figures.
Simulated Author's Rebuttal
We thank the referee for the constructive feedback on our manuscript. We address the major comment on the evaluation below and clarify the methodology while strengthening the presentation of results where appropriate.
Point-by-point responses
Referee: [Evaluation (abstract claims and implied experimental section)] The headline result (55.09% average and 81.06% maximum performance-per-dollar improvement) is load-bearing on the claim that real-time inputs (spot prices, benchmarks, SPS) produce node pools whose measured cost, throughput, and uptime match the selection-time predictions. The abstract states that baselines use only prices and limited availability data, yet provides no indication that KubePACS evaluation includes post-provisioning interruption modeling, price fluctuation during workload runs, or replacement overhead applied uniformly to all systems. If workloads are short or interruptions are omitted, the reported gains do not demonstrate production-relevant superiority.
Authors: We thank the referee for highlighting the importance of validating that selection-time predictions translate to measured outcomes under realistic conditions. Our evaluation deployed the node pools chosen by KubePACS and each baseline (Karpenter, SpotVerse, SpotKube) on actual AWS spot instances and executed both the synthetic benchmarks and real-world workloads on those live clusters. The reported performance-per-dollar values are derived from measured throughput and actual incurred costs during these runs, which therefore incorporate any interruptions, price changes, and replacement effects that occurred. The multi-node SPS component was specifically intended to improve uptime, and observed uptime contributed to the metrics. We acknowledge, however, that the manuscript does not explicitly document workload durations, the uniform modeling of replacement overhead, or simulated price fluctuations applied identically to all systems. In the revised manuscript we will expand the experimental section with a dedicated subsection describing the evaluation protocol, workload runtimes, observed interruption rates, and how replacement costs were factored uniformly into the performance-per-dollar calculations for every compared system. This addition will make the production relevance of the results more transparent.
Revision: partial
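One way the promised protocol subsection could make the uniform replacement accounting concrete, sketched here with an assumed fixed work-unit penalty per interruption; the penalty value and run numbers are illustrative, not the paper's data.

```python
# Charging every compared system the same fixed replacement penalty per
# interruption keeps the perf/$ comparison uniform across systems.
REPLACEMENT_PENALTY = 5.0  # work units forfeited per interruption (assumed uniform)

def adjusted_perf_per_dollar(work_units: float, interruptions: int, dollars: float) -> float:
    """Performance per dollar with a uniform replacement-overhead deduction."""
    effective_work = max(work_units - REPLACEMENT_PENALTY * interruptions, 0.0)
    return effective_work / dollars

print(adjusted_perf_per_dollar(2440.0, 7, 812.0))   # ~2.96, lightly interrupted run
print(adjusted_perf_per_dollar(1520.0, 19, 655.0))  # ~2.18, heavily interrupted run
```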
Circularity Check
No circularity: standard ILP+GSS on external inputs with empirical evaluation
full rationale
The paper's core derivation formulates node-pool selection as a multi-objective ILP incorporating external real-time spot prices, performance benchmarks, and multi-node SPS, then solves it with the standard GSS algorithm before integrating with Karpenter. The reported 55.09% average (up to 81.06%) perf/$ gains are obtained from post-deployment measurements on synthetic and real workloads, not by algebraic reduction of the objective to its own fitted parameters or self-citations. No equation or step equates the claimed superiority to a tautological renaming or input-only prediction; the baselines are simply described as using fewer data sources. The chain is therefore grounded in external cloud APIs and benchmarks rather than in its own outputs.
Axiom & Free-Parameter Ledger
axioms (1)
- domain assumption: Real-time spot prices, performance benchmarks, and multi-node Spot Placement Scores are reliable and stable enough to drive provisioning decisions.
Reference graph
Works this paper leans on
- [1] Bilge Acun, Benjamin Lee, Fiodar Kazhamiaka, Kiwan Maeng, Udit Gupta, Manoj Chakkaravarthy, David Brooks, and Carole-Jean Wu. 2023. Carbon Explorer: A Holistic Framework for Designing Carbon Aware Datacenters. In Proceedings of the 28th ACM International Conference on Architectural Support for Programming Languages and Operating Systems, Volume 2 (Vancouver...
- [2] Orna Agmon Ben-Yehuda, Muli Ben-Yehuda, Assaf Schuster, and Dan Tsafrir. 2013. Deconstructing Amazon EC2 Spot Instance Pricing. ACM Trans. Econ. Comput. 1, 3, Article 16 (Sep. 2013), 20 pages. doi:10.1145/2509413.2509416
- [4] Anthropic. 2026. Claude Code. https://www.anthropic.com/claude-code
- [5] AWS. 2024. EC2 Fleet and Spot Fleet. https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/Fleets.html
- [6] Azure. 2025. Spot Placement Score. https://learn.microsoft.com/en-us/azure/virtual-machine-scale-sets/spot-placement-score
- [7] Microsoft Azure. 2025. Compute benchmark scores for Azure Linux VMs. https://learn.microsoft.com/en-us/azure/virtual-machines/linux/compute-benchmark-scores
- [8] Luciano Baresi, Davide Yi Xian Hu, Giovanni Quattrocchi, and Luca Terracciano. 2021. KOSMOS: Vertical and Horizontal Resource Autoscaling for Kubernetes. In Service-Oriented Computing: 19th International Conference, ICSOC 2021, Virtual Event, November 22–25, 2021, Proceedings (Dubai, United Arab Emirates). Springer-Verlag, Berlin, Heidelberg, 821–829. doi:10.1007/978-3-030-91431-8_59
- [10] Matt Baughman, Simon Caton, Christian Haas, Ryan Chard, Rich Wolski, Ian Foster, and Kyle Chard. 2019. Deconstructing the 2017 Changes to AWS Spot Market Pricing. In Proceedings of the 10th Workshop on Scientific Cloud Computing (Phoenix, AZ, USA) (ScienceCloud ’19). Association for Computing Machinery, New York, NY, USA, 19–26. doi:10.1145/3322795.3331465
- [11] Tzu-Tao Chang and Shivaram Venkataraman. 2025. Eva: Cost-Efficient Cloud-Based Cluster Scheduling. In Proceedings of the Twentieth European Conference on Computer Systems (Rotterdam, Netherlands) (EuroSys ’25). Association for Computing Machinery, New York, NY, USA, 1399–1416. doi:10.1145/3689031.3717483
- [12] Yen-Ching Chang. 2009. N-Dimension Golden Section Search: Its Variants and Limitations. In 2009 2nd International Conference on Biomedical Engineering and Informatics. 1–6. doi:10.1109/BMEI.2009.5304779
- [13] Sungkyu Cheon, Kyumin Kim, Kyunghwan Kim, Moohyun Song, and Kyungyong Lee. 2025. Multi-Node Spot Instances Availability Score Collection System. In Proceedings of the 34th International Symposium on High-Performance Parallel and Distributed Computing (HPDC ’25). ACM.
- [14] Andrew Chung, Jun Woo Park, and Gregory R. Ganger. 2018. Stratus: cost-aware container scheduling in the public cloud. In Proceedings of the ACM Symposium on Cloud Computing (Carlsbad, CA, USA) (SoCC ’18). Association for Computing Machinery, New York, NY, USA, 121–134. doi:10.1145/3267809.3267819
- [15] Karpenter community. 2025. Karpenter: Just-in-time Nodes for Any Kubernetes Cluster. https://karpenter.sh/
- [16] Standard Performance Evaluation Corporation. 2023. SPEC Benchmarks and Tools. https://www.spec.org/benchmarks.html
- [17] Edsger W. Dijkstra. 1959. A note on two problems in connexion with graphs. Numerische Mathematik 1, 1 (1959), 269–271.
- [18] Dasith Edirisinghe, Kavinda Rajapakse, Pasindu Abeysinghe, and Sunimal Rathnayake. 2024. SpotKube: Cost-Optimal Microservices Deployment with Cluster Autoscaling and Spot Pricing. In 2024 IEEE International Conference on Cloud Computing Technology and Science (CloudCom). 87–94. doi:10.1109/CloudCom62794.2024.00026
- [19] EEMBC. 2025. CPU Benchmark – MCU Benchmark – CoreMark – EEMBC Embedded Microprocessor Benchmark Consortium. https://www.eembc.org/coremark/
- [20] Nnamdi Ekwe-Ekwe and Adam Barker. 2018. Location, Location, Location: Exploring Amazon EC2 Spot Instance Pricing Across Geographical Regions. In 2018 18th IEEE/ACM International Symposium on Cluster, Cloud and Grid Computing (CCGRID). 370–373. doi:10.1109/CCGRID.2018.00059
- [21] Apache Software Foundation. 2004. Apache Hadoop. http://hadoop.apache.org/
- [22] David E. Goldberg. 1989. Genetic Algorithms in Search, Optimization, and Machine Learning. Addison-Wesley, New York.
- [23] Robert Grandl, Ganesh Ananthanarayanan, Srikanth Kandula, Sriram Rao, and Aditya Akella. 2014. Multi-resource packing for cluster schedulers. In Proceedings of the 2014 ACM Conference on SIGCOMM (Chicago, Illinois, USA) (SIGCOMM ’14). Association for Computing Machinery, New York, NY, USA, 455–466. doi:10.1145/2619239.2626334
- [24] Murli Gupta. 1991. Numerical Methods and Software (David Kahaner, Cleve Moler, and Stephen Nash). SIAM Review 33 (1991). doi:10.1137/1033033
- [25] Udit Gupta, Young Geun Kim, Sylvia Lee, Jordan Tse, Hsien-Hsin S. Lee, Gu-Yeon Wei, David Brooks, and Carole-Jean Wu. 2022. Chasing Carbon: The Elusive Environmental Footprint of Computing. IEEE Micro 42, 4 (July 2022), 37–47. doi:10.1109/MM.2022.3163226
- [26] Aaron Harlap, Andrew Chung, Alexey Tumanov, Gregory R. Ganger, and Phillip B. Gibbons. 2018. Tributary: spot-dancing for elastic services with latency SLOs. In 2018 USENIX Annual Technical Conference (USENIX ATC 18). USENIX Association, Boston, MA, 1–14. https://www.usenix.org/conference/atc18/presentation/harlap
- [27] Aaron Harlap, Alexey Tumanov, Andrew Chung, Gregory R. Ganger, and Phillip B. Gibbons. 2017. Proteus: agile ML elasticity through tiered reliability in dynamic resource markets. In Proceedings of the Twelfth European Conference on Computer Systems (Belgrade, Serbia) (EuroSys ’17). Association for Computing Machinery, New York, NY, USA, 589–604. doi:10.1145/3...
- [28] Darong Huang, Luis Costero, Ali Pahlevan, Marina Zapater, and David Atienza. 2024. CloudProphet: A Machine Learning-Based Performance Prediction for Public Clouds. IEEE Transactions on Sustainable Computing 9, 4 (2024), 661–676. doi:10.1109/TSUSC.2024.3359325
- [31] David Irwin, Prashant Shenoy, Pradeep Ambati, Prateek Sharma, Supreeth Shastri, and Ahmed Ali-Eldin. 2019. The Price Is (Not) Right: Reflections on Pricing for Transient Cloud Servers. In 2019 28th International Conference on Computer Communication and Networks (ICCCN). 1–9. doi:10.1109/ICCCN.2019.8846933
- [32] AWS What’s New. 2021. Introducing Amazon EC2 Spot placement score. https://aws.amazon.com/about-aws/whats-new/2021/10/amazon-ec2-spot-placement-score/
- [33] Bahman Javadi, Ruppa K. Thulasiram, and Rajkumar Buyya. 2013. Characterizing spot price dynamics in public cloud environments. Future Generation Computer Systems 29, 4 (2013), 988–999. doi:10.1016/j.future.2012.06.012 (Special Section: Utility and Cloud Computing)
- [34] Eric Jonas, Qifan Pu, Shivaram Venkataraman, Ion Stoica, and Benjamin Recht. 2017. Occupy the Cloud: Distributed Computing for the 99%. In Proceedings of the 2017 Symposium on Cloud Computing (Santa Clara, California) (SoCC ’17). ACM, New York, NY, USA, 445–451. doi:10.1145/3127479.3128601
- [36] Norman P. Jouppi, Cliff Young, Nishant Patil, David Patterson, et al. 2017. In-Datacenter Performance Analysis of a Tensor Processing Unit. SIGARCH Comput. Archit. News 45, 2 (June 2017), 1–12. doi:10.1145/3140659.3080246
- [37] Cinar Kilcioglu, Justin M. Rao, Aadharsh Kannan, and R. Preston McAfee. 2017. Usage Patterns and the Economics of the Public Cloud. In Proceedings of the 26th International Conference on World Wide Web (Perth, Australia) (WWW ’17). International World Wide Web Conferences Steering Committee, Republic and Canton of Geneva, CHE, 83–91. doi:10.1145/3038912.3052707
- [38] KyungHwan Kim and Kyungyong Lee. 2024. Making Cloud Spot Instance Interruption Events Visible. In Proceedings of the ACM Web Conference 2024 (Singapore, Singapore) (WWW ’24). Association for Computing Machinery, New York, NY, USA, 2998–3009. doi:10.1145/3589334.3645548
- [39] Kyunghwan Kim, Subin Park, Jaeil Hwang, Hyeonyoung Lee, Seokhyeon Kang, and Kyungyong Lee. 2023. Public Spot Instance Dataset Archive Service. In Companion Proceedings of the ACM Web Conference 2023 (Austin, TX, USA) (WWW ’23 Companion). Association for Computing Machinery, New York, NY, USA, 69–72. doi:10.1145/3543873.3587314
- [40] Primate Labs. 2025. Geekbench 6 - Cross-Platform Benchmark. https://www.geekbench.com/
- [41] C. C. Lee and D. T. Lee. 1985. A Simple On-Line Bin-Packing Algorithm. J. ACM 32, 3 (July 1985), 562–572. doi:10.1145/3828.3833
- [43] S. Lee, J. Hwang, and K. Lee. 2022. SpotLake: Diverse Spot Instance Dataset Archive Service. In 2022 IEEE International Symposium on Workload Characterization (IISWC). IEEE Computer Society, Los Alamitos, CA, USA, 242–255. doi:10.1109/IISWC55918.2022.00029
- [44] Aniruddha Marathe, Rachel Harris, David Lowenthal, Bronis R. de Supinski, Barry Rountree, and Martin Schulz. 2014. Exploiting Redundancy for Cost-Effective, Time-Constrained Execution of HPC Applications on Amazon EC2. In Proceedings of the 23rd International Symposium on High-Performance Parallel and Distributed Computing (Vancouver, BC, Canada) (HPDC ’14)....
- [45] Robert McGill, John W. Tukey, and Wayne A. Larsen. 1978. Variations of box plots. The American Statistician 32, 1 (1978), 12–16.
- [46] Stuart A. Mitchell. 2003. PuLP. https://github.com/coin-or/pulp
- [47] Jayashree Mohan, Amar Phanishayee, Janardhan Kulkarni, and Vijay Chidambaram. 2022. Looking Beyond GPUs for DNN Scheduling on Multi-Tenant Clusters. In 16th USENIX Symposium on Operating Systems Design and Implementation (OSDI 22). USENIX Association, Carlsbad, CA, 579–596. https://www.usenix.org/conference/osdi22/presentation/mohan
- [48] Danielle Movsowitz Davidow, Orna Agmon Ben-Yehuda, and Orr Dunkelman. 2023. Deconstructing Alibaba Cloud’s Preemptible Instance Pricing. In Proceedings of the 32nd International Symposium on High-Performance Parallel and Distributed Computing (Orlando, FL, USA) (HPDC ’23). Association for Computing Machinery, New York, NY, USA, 253–265. https://dl.acm.org/doi/pdf/10.1145/3588195.3593001
- [50] Lawrence Page, Sergey Brin, Rajeev Motwani, and Terry Winograd. 1999. The PageRank Citation Ranking: Bringing Order to the Web. Technical Report 1999-. Stanford InfoLab. http://ilpubs.stanford.edu:8090/422/ (previous number: SIDL-WP-1999-0120)
- [52] S. Gopal Krishna Patro and Kishore Kumar Sahu. 2015. Normalization: A Preprocessing Stage. CoRR abs/1503.06462 (2015). arXiv:1503.06462 http://arxiv.org/abs/1503.06462
- [53] Google Cloud Platform. 2025. CoreMark scores of VM instances by family. https://cloud.google.com/compute/docs/coremark-scores-of-vm-instances
- [54] Gustavo Portella, Genaina N. Rodrigues, Eduardo Nakano, and Alba C.M.A. Melo. 2019. Statistical analysis of Amazon EC2 cloud pricing models. Concurrency and Computation: Practice and Experience 31, 18 (2019), e4451. doi:10.1002/cpe.4451
- [56] William H. Press, Saul A. Teukolsky, William T. Vetterling, and Brian P. Flannery. 2007. Numerical Recipes 3rd Edition: The Art of Scientific Computing. Chapter 10.1, Golden Section Search in One Dimension (3rd ed.). Cambridge University Press, USA.
- [57] Ana Radovanović, Ross Koningstein, Ian Schneider, Bokan Chen, Alexandre Duarte, Binz Roy, Diyue Xiao, Maya Haridasan, Patrick Hung, Nick Care, et al. 2022. Carbon-aware computing for datacenters. IEEE Transactions on Power Systems 38, 2 (2022), 1270–1280.
- [59] David K. Rensin. 2015. Kubernetes - Scheduling the Future at Cloud Scale. 1005 Gravenstein Highway North, Sebastopol, CA 95472. All pages. http://www.oreilly.com/webops-perf/free/kubernetes.csp
- [60] Josep Sampé, Marc Sánchez-Artigas, Gil Vernik, Ido Yehekzel, and Pedro García-López. 2023. Outsourcing Data Processing Jobs With Lithops. IEEE Transactions on Cloud Computing 11, 1 (2023), 1026–1037. doi:10.1109/TCC.2021.3129000
- [62] Prateek Sharma, David Irwin, and Prashant Shenoy. 2017. Portfolio-driven resource management for transient cloud servers. Proceedings of the ACM on Measurement and Analysis of Computing Systems 1, 1 (2017), 1–23.
- [63] Supreeth Shastri and David Irwin. 2017. HotSpot: automated server hopping in cloud spot markets. In Proceedings of the 2017 Symposium on Cloud Computing (Santa Clara, California) (SoCC ’17). Association for Computing Machinery, New York, NY, USA, 493–505. doi:10.1145/3127479.3132017
- [64] Myungjun Son, Gulsum Gudukbay Akbulut, and Mahmut Taylan Kandemir. 2024. SpotVerse: Optimizing Bioinformatics Workflows with Multi-Region Spot Instances in Galaxy and Beyond. In Proceedings of the 25th International Middleware Conference (Hong Kong, Hong Kong) (Middleware ’24). Association for Computing Machinery, New York, NY, USA, 74–87. doi:10.1145/3652892.3700750
- [66] M. Son and K. Lee. 2018. Distributed Matrix Multiplication Performance Estimator for Machine Learning Jobs in Cloud Computing. In 2018 IEEE 11th International Conference on Cloud Computing (CLOUD), Vol. 00. 638–645. doi:10.1109/CLOUD.2018.00088
- [67] Suramya Tomar. 2006. Converting video formats with FFmpeg. Linux J. 2006, 146 (June 2006), 10.
- [68] Cheng Wang, Qianlin Liang, and Bhuvan Urgaonkar. 2017. An Empirical Analysis of Amazon EC2 Spot Instance Features Affecting Cost-Effective Resource Procurement. In Proceedings of the 8th ACM/SPEC on International Conference on Performance Engineering (L’Aquila, Italy) (ICPE ’17). Association for Computing Machinery, New York, NY, USA, 63–74. doi:10.1145/30...
- [69] Rich Wolski, John Brevik, Ryan Chard, and Kyle Chard. 2017. Probabilistic Guarantees of Execution Duration for Amazon Spot Instances. In Proceedings of the International Conference for High Performance Computing, Networking, Storage and Analysis (Denver, Colorado) (SC ’17). Association for Computing Machinery, New York, NY, USA, Article 18, 11 pages. doi:10....
- [70] Zhanghao Wu, Wei-Lin Chiang, Ziming Mao, Zongheng Yang, Eric Friedman, Scott Shenker, and Ion Stoica. 2024. Can’t Be Late: Optimizing Spot Instance Savings under Deadlines. In 21st USENIX Symposium on Networked Systems Design and Implementation (NSDI 24). USENIX Association, Santa Clara, CA, 185–203. https://www.usenix.org/conference/nsdi24/presentation/wu...