pith. machine review for the scientific record

arxiv: 2604.24027 · v1 · submitted 2026-04-27 · 💻 cs.DC

Recognition: unknown

KubePACS: Kubernetes Cluster Using Performant, Highly Available, and Cost Efficient Spot Instances

Authors on Pith · no claims yet

Pith reviewed 2026-05-08 01:37 UTC · model grok-4.3

classification 💻 cs.DC
keywords Kubernetes · spot instances · cloud provisioning · cost optimization · performance per dollar · autoscaling · availability

The pith

KubePACS picks Kubernetes spot instances by jointly optimizing real-time prices, workload performance benchmarks, and multi-node availability scores.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

KubePACS is a Kubernetes-native system that selects spot instances to minimize cost while maximizing performance and maintaining high availability. It treats node selection as a multi-objective optimization problem that pulls in current spot prices, measured performance numbers for the target workload, and multi-node Spot Placement Scores. The system solves this with Integer Linear Programming guided by the Golden Section Search algorithm and plugs the results into the Karpenter autoscaler. Evaluations on synthetic and real workloads show average gains of 55 percent and peaks above 80 percent in performance per dollar compared with price-only or limited-availability baselines.
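
To make the headline metric concrete: performance per dollar is just measured throughput divided by the hourly price, so a cheaper instance only wins the ranking if its benchmark score does not fall faster than its price. The sketch below is illustrative only; the instance names, scores, and prices are hypothetical, not figures from the paper.

    # Hypothetical illustration of the performance-per-dollar ranking.
    # Benchmark scores and spot prices are made-up numbers, not from the paper.
    candidates = {
        "c5.xlarge": {"benchmark": 80_000, "spot_price": 0.068},  # score, $/hour
        "m5.xlarge": {"benchmark": 70_000, "spot_price": 0.054},
        "r5.xlarge": {"benchmark": 72_000, "spot_price": 0.091},
    }

    def perf_per_dollar(c):
        return c["benchmark"] / c["spot_price"]

    for name, c in sorted(candidates.items(),
                          key=lambda kv: perf_per_dollar(kv[1]), reverse=True):
        print(f"{name}: {perf_per_dollar(c):,.0f} benchmark points per dollar-hour")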

Core claim

KubePACS formulates instance-type selection as a multi-objective optimization that incorporates spot prices, performance benchmarks, and multi-node Spot Placement Scores, solves the problem efficiently with an Integer Linear Programming model guided by Golden Section Search, and integrates the outcome with Karpenter to jointly decide instance types and scaling while preserving availability.

What carries the argument

Multi-objective Integer Linear Programming model guided by Golden Section Search that balances cost, performance, and availability using real-time spot prices, benchmarks, and multi-node SPS data.
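
Read literally, that machinery admits a compact sketch: an inner ILP chooses integer node counts for a fixed cost-performance weight α, and an outer golden-section search tunes α, assuming the overall efficiency is unimodal in α. The paper's reference list includes PuLP, so the sketch uses it, but everything below (instance data, objective scaling, the availability floor, and the exact role of GSS) is an assumed reading, not the paper's implementation.

    # Minimal ILP-plus-GSS sketch; all data and scalings are hypothetical.
    import math
    import pulp

    TYPES = {
        # name: (benchmark score, spot $/hour, vCPUs, availability score in [0, 1])
        "c5.xlarge": (80_000, 0.068, 4, 0.90),
        "m5.xlarge": (70_000, 0.054, 4, 0.95),
        "r5.xlarge": (72_000, 0.091, 4, 0.80),
    }
    DEMAND_VCPUS = 32
    MIN_AVAILABILITY = 0.85  # hypothetical stand-in for a multi-node SPS floor

    def solve_ilp(alpha):
        """Best weighted cost-performance score achievable at a given alpha."""
        prob = pulp.LpProblem("node_pool", pulp.LpMaximize)
        x = {t: pulp.LpVariable(t.replace(".", "_"), lowBound=0, upBound=16,
                                cat="Integer") for t in TYPES}
        perf = pulp.lpSum(x[t] * TYPES[t][0] for t in TYPES)
        cost = pulp.lpSum(x[t] * TYPES[t][1] for t in TYPES)
        prob += alpha * 1e-5 * perf - (1 - alpha) * cost  # hypothetical scaling
        prob += pulp.lpSum(x[t] * TYPES[t][2] for t in TYPES) >= DEMAND_VCPUS
        for t in TYPES:  # exclude types below the availability floor
            if TYPES[t][3] < MIN_AVAILABILITY:
                prob += x[t] == 0
        prob.solve(pulp.PULP_CBC_CMD(msg=False))
        return pulp.value(prob.objective)

    def golden_section_max(f, lo=0.0, hi=1.0, tol=1e-2):
        """Golden-section search for the alpha maximizing f (assumes unimodality)."""
        invphi = (math.sqrt(5) - 1) / 2
        a, b = lo, hi
        while b - a > tol:
            c, d = b - invphi * (b - a), a + invphi * (b - a)
            if f(c) > f(d):
                b = d
            else:
                a = c
        return (a + b) / 2

    best_alpha = golden_section_max(solve_ilp)
    print(f"best alpha ~ {best_alpha:.2f}")

One payoff of this shape, if it matches the paper: the number of ILP solves grows only logarithmically with the precision demanded of α, which would explain describing GSS as guiding the ILP rather than the other way around.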

If this is right

  • Kubernetes operators can run the same workloads on spot instances with materially higher throughput per dollar spent.
  • The Karpenter integration lets existing clusters adopt the new selection logic without changing their scaling workflow (a sketch of that handoff follows this list).
  • Workload-specific scaling of performance metrics lets the same system handle both general and specialized instance preferences.
  • Clusters stay available because the optimization explicitly includes multi-node placement scores rather than treating availability as an afterthought.
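
How that handoff could look in practice (an assumed sketch, not the paper's code): the optimizer's chosen types become a requirements constraint on a Karpenter NodePool, so Karpenter's ordinary provisioning loop keeps doing the scaling. Field names follow the public Karpenter v1 NodePool schema as we understand it; the pool name and chosen types are hypothetical.

    # Assumed integration sketch: pin a Karpenter NodePool to the optimizer's
    # selected spot instance types. Not taken from the paper.
    import json

    def nodepool_manifest(selected_types):
        """Karpenter NodePool fragment restricted to the chosen spot types."""
        return {
            "apiVersion": "karpenter.sh/v1",
            "kind": "NodePool",
            "metadata": {"name": "kubepacs-pool"},  # hypothetical name
            "spec": {"template": {"spec": {"requirements": [
                {"key": "node.kubernetes.io/instance-type",
                 "operator": "In", "values": selected_types},
                {"key": "karpenter.sh/capacity-type",
                 "operator": "In", "values": ["spot"]},
            ]}}},
        }

    print(json.dumps(nodepool_manifest(["c5.xlarge", "m5.xlarge"]), indent=2))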

Where Pith is reading between the lines

These are editorial extensions of the paper, not claims the author makes directly.

  • The same optimization structure could be applied to other container platforms or to on-demand instances when performance data is available.
  • Cloud providers might begin publishing richer, workload-aware benchmark data if systems like this demonstrate consistent value.
  • Longer-running experiments on GPU or memory-intensive jobs would test whether the reported gains generalize beyond the evaluated workloads.

Load-bearing premise

That current spot prices, performance benchmarks, and Spot Placement Scores remain reliable predictors of long-term cost, speed, and interruption risk once instances are actually running.

What would settle it

Deploy KubePACS and a price-only baseline on identical production workloads for multiple weeks, then compare measured total cost of ownership, actual throughput, and interruption frequency against the predictions made at provisioning time.
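
A sketch of the bookkeeping such an experiment would need (an assumed protocol; the fields and numbers below are hypothetical, and the paper reports no such multi-week deployment): record the perf-per-dollar prediction at provisioning time, then compare it with work actually completed per dollar actually spent, replacements and interruptions included.

    # Hypothetical settling-experiment bookkeeping; all numbers are made up.
    from dataclasses import dataclass

    @dataclass
    class Run:
        system: str            # "KubePACS" or "price-only baseline"
        predicted_ppd: float   # perf per dollar predicted at provisioning time
        work_done: float       # measured throughput units over the whole run
        total_cost: float      # measured dollars, replacement costs included
        interruptions: int     # observed spot reclaims

    def report(runs):
        for r in runs:
            measured_ppd = r.work_done / r.total_cost
            drift = (measured_ppd - r.predicted_ppd) / r.predicted_ppd
            print(f"{r.system}: measured {measured_ppd:.1f} per dollar, "
                  f"prediction drift {drift:+.1%}, "
                  f"{r.interruptions} interruptions")

    report([
        Run("KubePACS", 120.0, 4_800.0, 42.0, 3),
        Run("price-only baseline", 110.0, 3_600.0, 40.0, 9),
    ])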

Figures

Figures reproduced from arXiv: 2604.24027 by Enrique Molina-Giménez, Kyumin Kim, Kyungyong Lee, Pedro García-López, Taeyoon Kim.

Figure 1: Comparing benchmark score (CoreMark) and spot instance price variation. Different instance configurations show …
Figure 2: Different multiple spot instance availability for …
Figure 4: Implementation of KubePACS to provision Kubernetes …
Figure 5: Comparing KubePACS with related works shows superb performance for cost and performance efficiency.
Figure 6: Overall efficiency (E_Total) changes with α, the cost-performance trade-off parameter.
Figure 7: ILP solver latency and efficiency changes for differ…
Figure 8: Effectiveness of special feature instance selection.
Figure 10: Comparison of Cost, Performance, and Availability between KubePACS and Karpenter.
Figure 11: Execution times of graph analysis application.
Figure 12: The effectiveness of KubePACS interrupt handling.
read the original abstract

Cloud users aim to minimize cost while maximizing performance by selecting the most suitable instance types for their workloads. To reduce expenses, spot instances have been widely adopted due to their steep discounts compared to on-demand pricing. However, their use introduces reliability risks due to potential interruptions, and existing research has primarily focused on mitigating this trade-off from a cost or availability perspective alone. Despite the diversity in hardware capabilities among instance types, current provisioning systems tend to ignore performance variation, selecting nodes solely based on minimum resource requirements. In this paper, we present KubePACS, a Kubernetes-native spot instance provisioning system that constructs node pools optimized for both cost and performance while guaranteeing high availability. KubePACS formulates the node selection process as a multi-objective optimization problem, incorporating real-time data such as spot prices, performance benchmarks, and availability scores, including the multi-node Spot Placement Score (SPS). It solves this problem efficiently using an Integer Linear Programming (ILP) approach guided by the Golden Section Search (GSS) algorithm to find the optimal configuration. By integrating with the Karpenter node autoscaler, KubePACS jointly optimizes instance-type selection and node scaling decisions within a standard provisioning workflow. KubePACS also adopts a novel heuristic to support workload-specific preferences by scaling performance metrics for specialized instances. Through extensive evaluation across synthetic and real-world workloads, KubePACS demonstrates on average 55.09% and up to 81.06% higher performance per dollar over state-of-the-art solutions such as Karpenter, SpotVerse, and SpotKube, which only reference the spot instance prices and limited availability data.

Editorial analysis

A structured set of objections, weighed in public.

Referee report, simulated author's rebuttal, circularity audit, and an axiom and free-parameter ledger. Tearing a paper down is the easy half of reading it; the pith above is the substance, this is the friction.

Referee Report

1 major / 1 minor

Summary. The paper presents KubePACS, a Kubernetes-native spot instance provisioning system that formulates node selection as a multi-objective ILP problem solved using the Golden Section Search algorithm. It incorporates real-time spot prices, performance benchmarks, and multi-node Spot Placement Scores (SPS) to optimize for cost, performance, and availability, integrates with Karpenter, and uses a heuristic for workload-specific preferences. Through evaluations on synthetic and real-world workloads, it claims an average 55.09% and up to 81.06% higher performance per dollar compared to baselines like Karpenter, SpotVerse, and SpotKube.

Significance. If the claimed gains prove robust, KubePACS could meaningfully advance practical cost-performance optimization for spot-based Kubernetes deployments by jointly handling instance-type selection and scaling. The ILP+GSS formulation and integration with an existing autoscaler provide a concrete, deployable approach that goes beyond price-only or availability-only methods used in baselines. The explicit quantitative comparison to three prior systems is a strength, but only if the evaluation captures sustained behavior rather than point-in-time selection.

major comments (1)
  1. [Evaluation (abstract claims and implied experimental section)] The headline result (55.09% average and 81.06% maximum performance-per-dollar improvement) is load-bearing on the claim that real-time inputs (spot prices, benchmarks, SPS) produce node pools whose measured cost, throughput, and uptime match the selection-time predictions. The abstract states that baselines use only prices and limited availability data, yet provides no indication that KubePACS evaluation includes post-provisioning interruption modeling, price fluctuation during workload runs, or replacement overhead applied uniformly to all systems. If workloads are short or interruptions are omitted, the reported gains do not demonstrate production-relevant superiority.
minor comments (1)
  1. [Abstract] The abstract refers to 'extensive evaluation' and 'real-world workloads' without defining workload durations, interruption rates, statistical tests, or error bars; adding these details would strengthen verifiability of the 55.09%/81.06% figures.

Simulated Author's Rebuttal

1 response · 0 unresolved

We thank the referee for the constructive feedback on our manuscript. We address the major comment on the evaluation below and clarify the methodology while strengthening the presentation of results where appropriate.

read point-by-point responses
  1. Referee: [Evaluation (abstract claims and implied experimental section)] The headline result (55.09% average and 81.06% maximum performance-per-dollar improvement) is load-bearing on the claim that real-time inputs (spot prices, benchmarks, SPS) produce node pools whose measured cost, throughput, and uptime match the selection-time predictions. The abstract states that baselines use only prices and limited availability data, yet provides no indication that KubePACS evaluation includes post-provisioning interruption modeling, price fluctuation during workload runs, or replacement overhead applied uniformly to all systems. If workloads are short or interruptions are omitted, the reported gains do not demonstrate production-relevant superiority.

    Authors: We thank the referee for highlighting the importance of validating that selection-time predictions translate to measured outcomes under realistic conditions. Our evaluation deployed the node pools chosen by KubePACS and each baseline (Karpenter, SpotVerse, SpotKube) on actual AWS spot instances and executed both the synthetic benchmarks and real-world workloads on those live clusters. The reported performance-per-dollar values are derived from measured throughput and actual incurred costs during these runs, which therefore incorporate any interruptions, price changes, and replacement effects that occurred. The multi-node SPS component was specifically intended to improve uptime, and observed uptime contributed to the metrics. We acknowledge, however, that the manuscript does not explicitly document workload durations, the uniform modeling of replacement overhead, or simulated price fluctuations applied identically to all systems. In the revised manuscript we will expand the experimental section with a dedicated subsection describing the evaluation protocol, workload runtimes, observed interruption rates, and how replacement costs were factored uniformly into the performance-per-dollar calculations for every compared system. This addition will make the production relevance of the results more transparent. revision: partial

Circularity Check

0 steps flagged

No circularity: standard ILP+GSS on external inputs with empirical evaluation

full rationale

The paper's core derivation formulates node-pool selection as a multi-objective ILP incorporating external real-time spot prices, performance benchmarks, and multi-node SPS, then solves it with the standard GSS algorithm before integrating with Karpenter. The reported 55.09% average (up to 81.06%) perf/$ gains are obtained from post-deployment measurements on synthetic and real workloads, not by algebraic reduction of the objective to its own fitted parameters or self-citations. No equation or step equates the claimed superiority to a tautological renaming or input-only prediction; the baselines are simply described as using fewer data sources. The chain is therefore grounded in external cloud APIs and benchmarks rather than in its own outputs.

Axiom & Free-Parameter Ledger

0 free parameters · 1 axiom · 0 invented entities

The approach rests on the assumption that external real-time cloud signals (prices, benchmarks, SPS) are trustworthy inputs and that the ILP solver plus GSS will find usable configurations within practical time limits; no free parameters or invented entities are explicitly named in the abstract.

axioms (1)
  • domain assumption Real-time spot prices, performance benchmarks, and multi-node Spot Placement Scores are reliable and stable enough to drive provisioning decisions
    Invoked as inputs to the multi-objective optimization problem.

pith-pipeline@v0.9.0 · 5623 in / 1219 out tokens · 44106 ms · 2026-05-08T01:37:31.348057+00:00 · methodology


Reference graph

Works this paper leans on

70 extracted references · 36 canonical work pages

  1. [1]

    Bilge Acun, Benjamin Lee, Fiodar Kazhamiaka, Kiwan Maeng, Udit Gupta, Manoj Chakkaravarthy, David Brooks, and Carole-Jean Wu. 2023. Carbon Explorer: A Holistic Framework for Designing Carbon Aware Datacenters. In Proceedings of the 28th ACM International Conference on Architectural Support for Programming Languages and Operating Systems, Volume 2 (Vancouver...

  2. [2]–[3]

    Orna Agmon Ben-Yehuda, Muli Ben-Yehuda, Assaf Schuster, and Dan Tsafrir. 2013. Deconstructing Amazon EC2 Spot Instance Pricing. ACM Trans. Econ. Comput. 1, 3, Article 16 (Sep 2013), 20 pages. doi:10.1145/2509413.2509416

  4. [4]

    Anthropic. 2026. Claude Code. https://www.anthropic.com/claude-code

  5. [5]

    AWS. 2024. EC2 Fleet and Spot Fleet. https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/Fleets.html

  6. [6]

    Azure. 2025. Spot Placement Score. https://learn.microsoft.com/en-us/azure/virtual-machine-scale-sets/spot-placement-score

  7. [7]

    Microsoft Azure. 2025. Compute benchmark scores for Azure Linux VMs. https://learn.microsoft.com/en-us/azure/virtual-machines/linux/compute-benchmark-scores

  8. [8]–[9]

    Luciano Baresi, Davide Yi Xian Hu, Giovanni Quattrocchi, and Luca Terracciano. 2021. KOSMOS: Vertical and Horizontal Resource Autoscaling for Kubernetes. In Service-Oriented Computing: 19th International Conference, ICSOC 2021, Virtual Event, November 22–25, 2021, Proceedings (Dubai, United Arab Emirates). Springer-Verlag, Berlin, Heidelberg, 821–829. doi:10.1007/978-3-030-91431-8_59

  10. [10]

    Matt Baughman, Simon Caton, Christian Haas, Ryan Chard, Rich Wolski, Ian Foster, and Kyle Chard. 2019. Deconstructing the 2017 Changes to AWS Spot Market Pricing. In Proceedings of the 10th Workshop on Scientific Cloud Computing (Phoenix, AZ, USA) (ScienceCloud '19). Association for Computing Machinery, New York, NY, USA, 19–26. doi:10.1145/3322795.3331465

  11. [11]

    Tzu-Tao Chang and Shivaram Venkataraman. 2025. Eva: Cost-Efficient Cloud-Based Cluster Scheduling. In Proceedings of the Twentieth European Conference on Computer Systems (Rotterdam, Netherlands) (EuroSys '25). Association for Computing Machinery, New York, NY, USA, 1399–1416. doi:10.1145/3689031.3717483

  12. [12]

    Yen-Ching Chang. 2009. N-Dimension Golden Section Search: Its Variants and Limitations. In 2009 2nd International Conference on Biomedical Engineering and Informatics. 1–6. doi:10.1109/BMEI.2009.5304779

  13. [13]

    Sungkyu Cheon, Kyumin Kim, Kyunghwan Kim, Moohyun Song, and Kyungyong Lee. 2025. Multi-Node Spot Instances Availability Score Collection System. In Proceedings of the 34th International Symposium on High-Performance Parallel and Distributed Computing (HPDC ’25). ACM

  14. [14]

    Andrew Chung, Jun Woo Park, and Gregory R. Ganger. 2018. Stratus: cost-aware container scheduling in the public cloud. In Proceedings of the ACM Symposium on Cloud Computing (Carlsbad, CA, USA) (SoCC '18). Association for Computing Machinery, New York, NY, USA, 121–134. doi:10.1145/3267809.3267819

  15. [15]

    Karpenter community. 2025. Karpenter: Just-in-time Nodes for Any Kubernetes Cluster. https://karpenter.sh/

  16. [16]

    Standard Performance Evaluation Corporation. 2023. SPEC Benchmarks and Tools. https://www.spec.org/benchmarks.html

  17. [17]

    Edsger W. Dijkstra. 1959. A note on two problems in connexion with graphs. Numerische Mathematik 1, 1 (1959), 269–271

  18. [18]

    Dasith Edirisinghe, Kavinda Rajapakse, Pasindu Abeysinghe, and Sunimal Rathnayake. 2024. SpotKube: Cost-Optimal Microservices Deployment with Cluster Autoscaling and Spot Pricing. In 2024 IEEE International Conference on Cloud Computing Technology and Science (CloudCom). 87–94. doi:10.1109/CloudCom62794.2024.00026

  19. [19]

    EEMBC. 2025. CPU Benchmark – MCU Benchmark – CoreMark – EEMBC Embedded Microprocessor Benchmark Consortium. https://www.eembc.org/coremark/

  20. [20]

    Nnamdi Ekwe-Ekwe and Adam Barker. 2018. Location, Location, Location: Exploring Amazon EC2 Spot Instance Pricing Across Geographical Regions. In 2018 18th IEEE/ACM International Symposium on Cluster, Cloud and Grid Computing (CCGRID). 370–373. doi:10.1109/CCGRID.2018.00059

  21. [21]

    Apache Software Foundation. 2004. Apache Hadoop. http://hadoop.apache.org/

  22. [22]

    David E. Goldberg. 1989. Genetic Algorithms in Search, Optimization, and Machine Learning. Addison-Wesley, New York

  23. [23]

    Robert Grandl, Ganesh Ananthanarayanan, Srikanth Kandula, Sriram Rao, and Aditya Akella. 2014. Multi-resource packing for cluster schedulers. In Proceedings of the 2014 ACM Conference on SIGCOMM (Chicago, Illinois, USA) (SIGCOMM '14). Association for Computing Machinery, New York, NY, USA, 455–466. doi:10.1145/2619239.2626334

  24. [24]

    Murli Gupta. 1991. Numerical Methods and Software (David Kahaner, Cleve Moler, and Stephen Nash). SIAM Review 33 (March 1991). doi:10.1137/1033033

  25. [25]

    Udit Gupta, Young Geun Kim, Sylvia Lee, Jordan Tse, Hsien-Hsin S. Lee, Gu-Yeon Wei, David Brooks, and Carole-Jean Wu. 2022. Chasing Carbon: The Elusive Environmental Footprint of Computing. IEEE Micro 42, 4 (July 2022), 37–47. doi:10.1109/MM.2022.3163226

  26. [26]

    Aaron Harlap, Andrew Chung, Alexey Tumanov, Gregory R. Ganger, and Phillip B. Gibbons. 2018. Tributary: spot-dancing for elastic services with latency SLOs. In 2018 USENIX Annual Technical Conference (USENIX ATC 18). USENIX Association, Boston, MA, 1–14. https://www.usenix.org/conference/atc18/presentation/harlap

  27. [27]

    Aaron Harlap, Alexey Tumanov, Andrew Chung, Gregory R. Ganger, and Phillip B. Gibbons. 2017. Proteus: agile ML elasticity through tiered reliability in dynamic resource markets. In Proceedings of the Twelfth European Conference on Computer Systems (Belgrade, Serbia) (EuroSys '17). Association for Computing Machinery, New York, NY, USA, 589–604. doi:10.1145/3...

  28. [28]–[29]

    Darong Huang, Luis Costero, Ali Pahlevan, Marina Zapater, and David Atienza. 2024. CloudProphet: A Machine Learning-Based Performance Prediction for Public Clouds. IEEE Transactions on Sustainable Computing 9, 4 (2024), 661–676. doi:10.1109/TSUSC.2024.3359325

  30. [30]

    Yoonseo Hur and Kyungyong Lee. 2024. CNN Training Latency Prediction Using Hardware Metrics on Cloud GPUs. In 2024 IEEE 24th International Symposium on Cluster, Cloud and Internet Computing (CCGrid). 216–226. doi:10.1109/CCGrid59990.2024.00033

  31. [31]

    David Irwin, Prashant Shenoy, Pradeep Ambati, Prateek Sharma, Supreeth Shastri, and Ahmed Ali-Eldin. 2019. The Price Is (Not) Right: Reflections on Pricing for Transient Cloud Servers. In 2019 28th International Conference on Computer Communication and Networks (ICCCN). 1–9. doi:10.1109/ICCCN.2019.8846933

  32. [32]

    AWS What is New. 2021. Introducing Amazon EC2 Spot placement score. https://aws.amazon.com/about-aws/whats-new/2021/10/amazon-ec2-spot-placement-score/

  33. [33]

    Bahman Javadi, Ruppa K. Thulasiram, and Rajkumar Buyya. 2013. Characterizing spot price dynamics in public cloud environments. Future Generation Computer Systems 29, 4 (2013), 988–999. doi:10.1016/j.future.2012.06.012. Special Section: Utility and Cloud Computing

  34. [34]–[35]

    Eric Jonas, Qifan Pu, Shivaram Venkataraman, Ion Stoica, and Benjamin Recht. 2017. Occupy the Cloud: Distributed Computing for the 99%. In Proceedings of the 2017 Symposium on Cloud Computing (Santa Clara, California) (SoCC '17). ACM, New York, NY, USA, 445–451. doi:10.1145/3127479.3128601

  36. [36]

    Norman P. Jouppi, Cliff Young, Nishant Patil, David Patterson, et al. 2017. In-Datacenter Performance Analysis of a Tensor Processing Unit. SIGARCH Comput. Archit. News 45, 2 (June 2017), 1–12. doi:10.1145/3140659.3080246

  37. [37]

    Cinar Kilcioglu, Justin M. Rao, Aadharsh Kannan, and R. Preston McAfee. 2017. Usage Patterns and the Economics of the Public Cloud. In Proceedings of the 26th International Conference on World Wide Web (Perth, Australia) (WWW '17). International World Wide Web Conferences Steering Committee, Republic and Canton of Geneva, CHE, 83–91. doi:10.1145/3038912.3052707

  38. [38]

    KyungHwan Kim and Kyungyong Lee. 2024. Making Cloud Spot Instance Interruption Events Visible. In Proceedings of the ACM Web Conference 2024 (Singapore, Singapore) (WWW '24). Association for Computing Machinery, New York, NY, USA, 2998–3009. doi:10.1145/3589334.3645548

  39. [39]

    Kyunghwan Kim, Subin Park, Jaeil Hwang, Hyeonyoung Lee, Seokhyeon Kang, and Kyungyong Lee. 2023. Public Spot Instance Dataset Archive Service. In Companion Proceedings of the ACM Web Conference 2023 (Austin, TX, USA) (WWW '23 Companion). Association for Computing Machinery, New York, NY, USA, 69–72. doi:10.1145/3543873.3587314

  40. [40]

    Primate Labs. 2025. Geekbench 6 - Cross-Platform Benchmark. https://www.geekbench.com/

  41. [41]

    C. C. Lee and D. T. Lee. 1985. A Simple On-Line Bin-Packing Algorithm. J. ACM 32, 3 (Jul 1985), 562–572. doi:10.1145/3828.3833

  42. [42]

    Sungjae Lee, Yoonseo Hur, Subin Park, and Kyungyong Lee. 2022. PROFET: PROFiling-based CNN Training Latency ProphET for GPU Cloud Instances. In 2022 IEEE International Conference on Big Data (Big Data). 186–193. doi:10.1109/BigData55660.2022.10020212

  43. [43]

    S. Lee, J. Hwang, and K. Lee. 2022. SpotLake: Diverse Spot Instance Dataset Archive Service. In 2022 IEEE International Symposium on Workload Characterization (IISWC). IEEE Computer Society, Los Alamitos, CA, USA, 242–255. doi:10.1109/IISWC55918.2022.00029

  44. [44]

    Aniruddha Marathe, Rachel Harris, David Lowenthal, Bronis R. de Supinski, Barry Rountree, and Martin Schulz. 2014. Exploiting Redundancy for Cost-Effective, Time-Constrained Execution of HPC Applications on Amazon EC2. In Proceedings of the 23rd International Symposium on High-Performance Parallel and Distributed Computing (Vancouver, BC, Canada) (HPDC '14)....

  45. [45]

    Robert McGill, John W. Tukey, and Wayne A. Larsen. 1978. Variations of box plots. The American Statistician 32, 1 (1978), 12–16

  46. [46]

    Stuart A. Mitchell. 2003. PuLP. https://github.com/coin-or/pulp

  47. [47]

    Jayashree Mohan, Amar Phanishayee, Janardhan Kulkarni, and Vijay Chidambaram. 2022. Looking Beyond GPUs for DNN Scheduling on Multi-Tenant Clusters. In 16th USENIX Symposium on Operating Systems Design and Implementation (OSDI 22). USENIX Association, Carlsbad, CA, 579–596. https://www.usenix.org/conference/osdi22/presentation/mohan

  48. [48]–[49]

    Danielle Movsowitz Davidow, Orna Agmon Ben-Yehuda, and Orr Dunkelman. 2023. Deconstructing Alibaba Cloud's Preemptible Instance Pricing. In Proceedings of the 32nd International Symposium on High-Performance Parallel and Distributed Computing (Orlando, FL, USA) (HPDC '23). Association for Computing Machinery, New York, NY, USA, 253–265. https://dl.acm.org/doi/pdf/10.1145/3588195.3593001

  50. [50]–[51]

    Lawrence Page, Sergey Brin, Rajeev Motwani, and Terry Winograd. 1999. The PageRank Citation Ranking: Bringing Order to the Web. Technical Report 1999-. Stanford InfoLab. http://ilpubs.stanford.edu:8090/422/ (previous number SIDL-WP-1999-0120)

  52. [52]

    S. Gopal Krishna Patro and Kishore Kumar Sahu. 2015. Normalization: A Preprocessing Stage. CoRR abs/1503.06462 (2015). arXiv:1503.06462 http://arxiv.org/abs/1503.06462

  53. [53]

    Google Cloud Platform. 2025. CoreMark scores of VM instances by family. https://cloud.google.com/compute/docs/coremark-scores-of-vm-instances

  54. [54]–[55]

    Gustavo Portella, Genaina N. Rodrigues, Eduardo Nakano, and Alba C.M.A. Melo. 2019. Statistical analysis of Amazon EC2 cloud pricing models. Concurrency and Computation: Practice and Experience 31, 18 (2019), e4451. doi:10.1002/cpe.4451

  56. [56]

    William H. Press, Saul A. Teukolsky, William T. Vetterling, and Brian P. Flannery. 2007. Numerical Recipes 3rd Edition: The Art of Scientific Computing (3rd ed.), Chapter 10.1: Golden Section Search in One Dimension. Cambridge University Press, USA

  57. [57]–[58]

    Ana Radovanović, Ross Koningstein, Ian Schneider, Bokan Chen, Alexandre Duarte, Binz Roy, Diyue Xiao, Maya Haridasan, Patrick Hung, Nick Care, et al. 2022. Carbon-aware computing for datacenters. IEEE Transactions on Power Systems 38, 2 (2022), 1270–1280

  59. [59]

    David K. Rensin. 2015. Kubernetes - Scheduling the Future at Cloud Scale. O'Reilly Media, Sebastopol, CA. http://www.oreilly.com/webops-perf/free/kubernetes.csp

  60. [60]

    Josep Sampé, Marc Sánchez-Artigas, Gil Vernik, Ido Yehekzel, and Pedro García-López. 2023. Outsourcing Data Processing Jobs With Lithops. IEEE Transactions on Cloud Computing 11, 1 (2023), 1026–1037. doi:10.1109/TCC.2021.3129000

  61. [61]

    Shazibul Islam Shamim, Jonathan Alexander Gibson, Patrick Morrison, and Akond Rahman. 2022. Benefits, Challenges, and Research Topics: A Multi-vocal Literature Review of Kubernetes. arXiv:2211.07032 [cs.SE] https://arxiv.org/abs/2211.07032

  62. [62]

    Prateek Sharma, David Irwin, and Prashant Shenoy. 2017. Portfolio-driven resource management for transient cloud servers. Proceedings of the ACM on Measurement and Analysis of Computing Systems 1, 1 (2017), 1–23

  63. [63]

    Supreeth Shastri and David Irwin. 2017. HotSpot: automated server hopping in cloud spot markets. In Proceedings of the 2017 Symposium on Cloud Computing (Santa Clara, California) (SoCC '17). Association for Computing Machinery, New York, NY, USA, 493–505. doi:10.1145/3127479.3132017

  64. [64]–[65]

    Myungjun Son, Gulsum Gudukbay Akbulut, and Mahmut Taylan Kandemir. 2024. SpotVerse: Optimizing Bioinformatics Workflows with Multi-Region Spot Instances in Galaxy and Beyond. In Proceedings of the 25th International Middleware Conference (Hong Kong, Hong Kong) (Middleware '24). Association for Computing Machinery, New York, NY, USA, 74–87. doi:10.1145/3652892.3700750

  66. [66]

    M. Son and K. Lee. 2018. Distributed Matrix Multiplication Performance Estimator for Machine Learning Jobs in Cloud Computing. In 2018 IEEE 11th International Conference on Cloud Computing (CLOUD), Vol. 00. 638–645. doi:10.1109/CLOUD.2018.00088

  67. [67]

    Suramya Tomar. 2006. Converting video formats with FFmpeg. Linux J. 2006, 146 (June 2006), 10

  68. [68]

    Cheng Wang, Qianlin Liang, and Bhuvan Urgaonkar. 2017. An Empirical Analysis of Amazon EC2 Spot Instance Features Affecting Cost-Effective Resource Procurement. In Proceedings of the 8th ACM/SPEC on International Conference on Performance Engineering (L'Aquila, Italy) (ICPE '17). Association for Computing Machinery, New York, NY, USA, 63–74. doi:10.1145/30...

  69. [69]

    Rich Wolski, John Brevik, Ryan Chard, and Kyle Chard. 2017. Probabilistic Guarantees of Execution Duration for Amazon Spot Instances. In Proceedings of the International Conference for High Performance Computing, Networking, Storage and Analysis (Denver, Colorado) (SC '17). Association for Computing Machinery, New York, NY, USA, Article 18, 11 pages. doi:10....

  70. [70]

    Zhanghao Wu, Wei-Lin Chiang, Ziming Mao, Zongheng Yang, Eric Friedman, Scott Shenker, and Ion Stoica. 2024. Can't Be Late: Optimizing Spot Instance Savings under Deadlines. In 21st USENIX Symposium on Networked Systems Design and Implementation (NSDI 24). USENIX Association, Santa Clara, CA, 185–203. https://www.usenix.org/conference/nsdi24/presentation/wu...