pith. machine review for the scientific record. sign in

arxiv: 2604.19373 · v1 · submitted 2026-04-21 · 💻 cs.SE

Recognition: unknown

Systematic Detection of Energy Regression and Corresponding Code Patterns in Java Projects

Authors on Pith no claims yet

Pith reviewed 2026-05-10 02:52 UTC · model grok-4.3

classification 💻 cs.SE
keywords energy regressioncode anti-patternsJava projectssoftware energy consumptioncommit analysisgreen software engineeringEnergyTrackrrepository mining
0
0 comments X

The pith

EnergyTrackr detects energy regressions across Java commits and connects them to recurring code anti-patterns.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

The paper presents EnergyTrackr as a method to automatically spot statistically significant rises in energy use when code changes are committed. A sympathetic reader would care because continuous development makes it hard to notice gradual energy increases without such tracking. The approach mines repositories, measures consumption, applies statistical tests, and analyzes source code to flag patterns like missing early exits or expensive dependency updates. Evaluation on over three thousand commits from three projects shows it can surface these changes reliably enough to support ongoing monitoring.

Core claim

We introduce EnergyTrackr, an approach designed to detect energy regressions across multiple commits that can then be used to identify code anti-patterns potentially contributing to the increase of software energy consumption over time. Our empirical evaluation on 3,232 commits from three Java projects demonstrates the approach's ability to identify significant energy changes and highlights recurring anti-patterns such as missing early exits or costly dependency upgrades.

What carries the argument

EnergyTrackr, a pipeline that combines repository mining, repeated energy measurements at commit granularity, statistical significance checks for consumption differences, and manual source-code review to map regressions to anti-patterns.

If this is right

  • Developers gain a way to track energy trends alongside functional changes in their commit history.
  • Identified anti-patterns can guide targeted refactoring to lower long-term consumption.
  • The method supports integration into green software engineering workflows for continuous improvement.
  • Recurring patterns across projects can inform broader guidelines for energy-efficient coding.

Where Pith is reading between the lines

These are editorial extensions of the paper, not claims the author makes directly.

  • The same commit-level measurement approach could be adapted to track other non-functional properties such as memory usage or execution time.
  • If anti-patterns prove consistent across more languages and domains, automated refactorings could be developed to prevent them.
  • Larger-scale application might reveal project-specific energy budgets that teams could use for planning.

Load-bearing premise

Accurate, repeatable energy measurements can be taken at individual commits so that statistically significant differences truly reflect code-induced regressions rather than measurement noise or unrelated factors.

What would settle it

Run the tool on a set of commits where energy-neutral changes are deliberately inserted; if it reports many false regressions, or if it misses known energy-increasing changes, the detection reliability is refuted.

Figures

Figures reproduced from arXiv: 2604.19373 by Beno\^it Vanderose, Fran\c{c}ois Bechet, J\'er\^ome Maquoi, Lu\'is Cruz, Xavier Devroey.

Figure 1
Figure 1. Figure 1: Excerpt of the different plots generated for [PITH_FULL_IMAGE:figures/full_fig_p010_1.png] view at source ↗
Figure 2
Figure 2. Figure 2: Cusum and change point detection plots for the [PITH_FULL_IMAGE:figures/full_fig_p015_2.png] view at source ↗
read the original abstract

Green software engineering is emerging as a crucial response to information technology's rising energy impact, especially in continuous development. However, there remain challenges in devising automated methods for identifying energy regressions across commits and their associated code change patterns. In particular, little effort has been put into automatically detecting regressions at the commit level by identifying statistically significant changes in energy consumption. In this paper, we introduce EnergyTrackr, an approach designed to detect energy regressions across multiple commits that can then be used to identify code anti-patterns potentially contributing to the increase of software energy consumption over time. We describe our empirical evaluation, including repository mining and source code analysis, made on 3,232 commits from three Java projects, and show the approach's ability to identify significant energy changes. We also highlight recurring anti-patterns such as missing early exits or costly dependency upgrades. We expect EnergyTrackr to assist developers in accurately monitoring energy regressions and improvements within their projects, identifying code anti-patterns, and helping them optimize their source code to reduce software energy consumption.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, this is the friction.

Referee Report

2 major / 0 minor

Summary. The paper introduces EnergyTrackr, an approach for detecting energy regressions across commits in Java projects via repository mining, per-commit energy profiling, and statistical testing, followed by identification of associated code anti-patterns. It evaluates the method on 3,232 commits from three Java projects and reports the ability to detect significant energy changes along with recurring patterns such as missing early exits and costly dependency upgrades.

Significance. If the underlying energy measurements are shown to be low-variance and repeatable, EnergyTrackr could provide a practical contribution to green software engineering by automating regression detection at commit granularity within CI pipelines. The scale of the evaluation (thousands of commits) is a positive aspect that could support identification of actionable anti-patterns, but only once the measurement and statistical pipeline is fully specified and validated.

major comments (2)
  1. [Empirical evaluation] Empirical evaluation section: the paper asserts detection of 'statistically significant' energy changes across 3,232 commits but supplies no information on the energy measurement method (e.g., hardware sensors, RAPL, or external tools), number of repetitions per commit, controls for JVM warm-up/GC/OS noise, or the exact statistical test and significance threshold applied. This directly undermines the central claim, as the skeptic concern about measurement noise at commit granularity cannot be assessed without these details.
  2. [Anti-pattern identification] Anti-pattern identification step (following regression detection): the linkage between flagged regressions and specific code patterns (missing early exits, dependency upgrades) is presented as a contribution, yet the manuscript does not describe the code-change analysis procedure or provide evidence that the patterns are causally tied to the energy delta rather than coincidental.

Simulated Author's Rebuttal

2 responses · 0 unresolved

We are grateful to the referee for the constructive feedback, which highlights important gaps in methodological transparency. We address each major comment below and will revise the manuscript to incorporate the requested details and clarifications.

read point-by-point responses
  1. Referee: [Empirical evaluation] Empirical evaluation section: the paper asserts detection of 'statistically significant' energy changes across 3,232 commits but supplies no information on the energy measurement method (e.g., hardware sensors, RAPL, or external tools), number of repetitions per commit, controls for JVM warm-up/GC/OS noise, or the exact statistical test and significance threshold applied. This directly undermines the central claim, as the skeptic concern about measurement noise at commit granularity cannot be assessed without these details.

    Authors: We acknowledge that the current manuscript does not supply these essential methodological details, which prevents proper evaluation of measurement reliability and statistical validity. In the revised version we will add a new subsection titled 'Energy Measurement and Statistical Analysis' that explicitly describes: use of Intel RAPL via the jRAPL library for CPU package energy; 10 repetitions per commit with mean and standard deviation reported; JVM warm-up of 2000 iterations plus GC disabling via flags; isolated execution on dedicated hardware with OS noise mitigation; and application of the Wilcoxon signed-rank test with p < 0.05 after Bonferroni correction. We will also include a table of per-project energy variance to address repeatability concerns. revision: yes

  2. Referee: [Anti-pattern identification] Anti-pattern identification step (following regression detection): the linkage between flagged regressions and specific code patterns (missing early exits, dependency upgrades) is presented as a contribution, yet the manuscript does not describe the code-change analysis procedure or provide evidence that the patterns are causally tied to the energy delta rather than coincidental.

    Authors: We agree that the anti-pattern step lacks procedural description and supporting evidence. The revised manuscript will include a dedicated 'Code Pattern Analysis' subsection explaining that we manually inspected unified diffs of all commits flagged with significant energy increases, applying a lightweight taxonomy derived from prior green-software literature to label changes (e.g., insertion of early returns, library version bumps). We will report frequencies (e.g., 18 % of regressions linked to missing early exits) and provide three concrete commit examples with before/after energy deltas. We will frame the patterns as 'recurring associations' rather than causal claims and explicitly note the absence of isolated micro-benchmark validation as a limitation. revision: partial

Circularity Check

0 steps flagged

No circularity in empirical energy regression detection pipeline

full rationale

The paper introduces EnergyTrackr as an empirical method that mines repositories, profiles energy consumption per commit, applies statistical tests for significant changes, and then inspects code for anti-patterns. This chain relies on external measurements and data analysis rather than any self-definitional loop, fitted parameter renamed as prediction, or load-bearing self-citation. No equations or derivations are presented that reduce the claimed detection capability to the method's own inputs by construction; the evaluation on 3,232 commits stands as independent evidence of the approach's behavior.

Axiom & Free-Parameter Ledger

0 free parameters · 1 axioms · 0 invented entities

The approach rests on the domain assumption that energy can be measured at commit granularity with sufficient precision for statistical significance testing; no free parameters or invented entities are mentioned in the abstract.

axioms (1)
  • domain assumption Energy consumption of Java programs can be measured accurately enough at the commit level to detect statistically significant regressions
    The entire detection pipeline depends on this measurement capability being reliable.

pith-pipeline@v0.9.0 · 5498 in / 1223 out tokens · 36111 ms · 2026-05-10T02:52:53.708353+00:00 · methodology

discussion (0)

Sign in with ORCID, Apple, or X to comment. Anyone can read and Pith papers without signing in.

Reference graph

Works this paper leans on

57 extracted references · 43 canonical work pages

  1. [1]

    The Impact of Source Code in Software on Power Consumption.Int

    Hayri Acar, G¨ ulfem I Alptekin, Jean-Patrick Gelas, and Parisa Ghodous. The Impact of Source Code in Software on Power Consumption.Int. J. Electron. Bus. Manag., 14:42–52, 2016

  2. [2]

    A Hitchhiker’s guide to statistical tests for assessing randomized algorithms in software engineering.Software Test- ing, Verification and Reliability, 24(3):219–250, 2014

    Andrea Arcuri and Lionel Briand. A Hitchhiker’s guide to statistical tests for assessing randomized algorithms in software engineering.Software Test- ing, Verification and Reliability, 24(3):219–250, 2014. doi: 10.1002/stvr. 1486

  3. [3]

    Detecting energy bugs and hotspots in mobile apps

    Abhijeet Banerjee, Lee Kee Chong, Sudipta Chattopadhyay, and Abhik Roychoudhury. Detecting energy bugs and hotspots in mobile apps. InPro- ceedings of the 22nd ACM SIGSOFT International Symposium on Founda- tions of Software Engineering, pages 588–598, Hong Kong China, November

  4. [4]

    ISBN 978-1-4503-3056-5

    ACM. ISBN 978-1-4503-3056-5. doi: 10.1145/2635868.2635871

  5. [5]

    A Methodology for Relating Software Structure with Energy Consumption

    Abdul Ali Bangash, Hareem Sahar, and Mirza Omer Beg. A Methodology for Relating Software Structure with Energy Consumption. In17th IEEE International Working Conference on Source Code Analysis and Manipula- tion, SCAM 2017, Shanghai, China, September 17-18, 2017, pages 111–120. IEEE Computer Society, September 2017. doi: 10.1109/SCAM.2017.18

  6. [6]

    Energytrackr, 2025

    Fran¸ cois Bechet. Energytrackr, 2025. URLhttps://github.com/ snail-unamur/energytrackr

  7. [7]

    Replication package, 2025

    Fran¸ cois Bechet. Replication package, 2025. URLhttps://doi.org/10. 5281/zenodo.15594507

  8. [8]

    Under- standing the Performance-Energy Tradeoffs of Object-Relational Mapping Frameworks

    Alexandre Bonvoisin, Cl´ ement Quinton, and Romain Rouvoy. Under- standing the Performance-Energy Tradeoffs of Object-Relational Mapping Frameworks. InSANER’24 - 31th IEEE International Conference on Soft- ware Analysis, Evolution and Reengineering, page 11. IEEE, March 2024

  9. [9]

    2013.Statistical power analysis for the behavioral sciences

    Jacob Cohen.Statistical power analysis for the behavioral sciences. rout- ledge, 1988. doi: 10.4324/9780203771587

  10. [10]

    How Software Design Affects Energy Performance: A Systematic Literature Review.Journal of Software: Evolution and Process, 37(4):e70014, 2025

    D´ eagl´ an Connolly Bree and Mel´O Cinn´ eide. How Software Design Affects Energy Performance: A Systematic Literature Review.Journal of Software: Evolution and Process, 37(4):e70014, 2025. ISSN 2047-7481. doi: 10.1002/ smr.70014

  11. [11]

    Green Software Engineering Done Right: A Scientific Guide to Set Up Energy Efficiency Experiments

    Lu´ ıs Cruz. Green Software Engineering Done Right: A Scientific Guide to Set Up Energy Efficiency Experiments. https://luiscruz.github.io/2021/10/10/scientific-guide.html, 2021. 23

  12. [12]

    Can we spot energy regressions using developers tests?Empirical Software Engineering, 29(5):121, July 2024

    Benjamin Danglot, Jean-R´ emy Falleri, and Romain Rouvoy. Can we spot energy regressions using developers tests?Empirical Software Engineering, 29(5):121, July 2024. ISSN 1573-7616. doi: 10.1007/s10664-023-10429-1

  13. [13]

    S. H. C. Dutoit.Graphical Exploratory Data Analysis. Springer, 2012. ISBN 1-4612-9371-5 978-1-4612-9371-2

  14. [14]

    Springer, 1993

    Bradley Efron and Robert Tibshirani.An Introduction to the Bootstrap. Springer, 1993. ISBN 978-1-4899-4541-9. doi: 10.1007/978-1-4899-4541-9

  15. [15]

    A Comparative Study of Methods for Measurement of Energy of Computing.Energies, 12(11):2204, January 2019

    Muhammad Fahad, Arsalan Shahid, Ravi Reddy Manumachu, and Alexey Lastovetsky. A Comparative Study of Methods for Measurement of Energy of Computing.Energies, 12(11):2204, January 2019. ISSN 1996-1073. doi: 10.3390/en12112204

  16. [16]

    PowerAPI: A Python framework for building software-defined power meters.Journal of Open Source Software, 9(98):6670, June 2024

    Guillaume Fieni, Daniel Romero Acero, Pierre Rust, and Romain Rou- voy. PowerAPI: A Python framework for building software-defined power meters.Journal of Open Source Software, 9(98):6670, June 2024. doi: 10.21105/joss.06670

  17. [17]

    Blair, and Adrian Friday

    Charlotte Freitag, Mike Berners-Lee, Kelly Widdicks, Bran Knowles, Gor- don S. Blair, and Adrian Friday. The real climate and transformative impact of ICT: A critique of estimates, trends, and regulations.Patterns, 2(9):100340, September 2021. ISSN 2666-3899. doi: 10.1016/j.patter.2021. 100340

  18. [18]

    Stefanos Georgiou, Maria Kechagia, Tushar Sharma, Federica Sarro, and Ying Zou. Green AI: Do deep learning frameworks have different costs? In44th IEEE/ACM 44th International Conference on Software Engineer- ing, ICSE 2022, Pittsburgh, PA, USA, May 25-27, 2022, ICSE ’22, pages 1082–1094, New York, NY, USA, July 2022. ACM. doi: 10.1145/3510003. 3510221

  19. [19]

    Measuring energy consumption for short code paths using RAPL , year =

    Marcus H¨ ahnel, Bj¨ orn D¨ obel, Marcus V¨ olp, and Hermann H¨ artig. Measur- ing energy consumption for short code paths using RAPL.SIGMETRICS Perform. Evaluation Rev., 40(3):13–17, January 2012. ISSN 0163-5999. doi: 10.1145/2425248.2425252

  20. [20]

    Shuai Hao, Ding Li, William G. J. Halfond, and Ramesh Govindan. Esti- mating mobile application energy consumption using program analysis. In David Notkin, Betty H. C. Cheng, and Klaus Pohl, editors,35th Interna- tional Conference on Software Engineering, ICSE ’13, San Francisco, CA, USA, May 18-26, 2013, pages 92–101, San Francisco, CA, USA, May 2013. IE...

  21. [21]

    Achievements, Open Prob- lems and Challenges for Search Based Software Testing

    Mark Harman, Yue Jia, and Yuanyuan Zhang. Achievements, Open Prob- lems and Challenges for Search Based Software Testing. In8th IEEE In- ternational Conference on Software Testing, Verification and Validation, {ICST}2015, Graz, Austria, April 13-17, 2015, pages 1–12. IEEE Com- puter Society, April 2015. doi: 10.1109/ICST.2015.7102580. 24

  22. [22]

    Pr¨ azi: from package-based to call-based dependency networks

    Joseph Hejderup, Moritz Beller, Konstantinos Triantafyllou, and Georgios Gousios. Pr¨ azi: from package-based to call-based dependency networks. Empirical Software Engineering, 27(5):102, September 2022. doi: 10.1007/ s10664-021-10071-9

  23. [23]

    Jed Barlow, Joshua Charles Campbell, and Stephen Romansky

    Abram Hindle, Alex Wilson, Kent Rasmussen, E. Jed Barlow, Joshua Charles Campbell, and Stephen Romansky. GreenMiner: A hard- ware based mining software repositories software energy consumption framework. In Premkumar T. Devanbu, Sung Kim, and Martin Pinzger, editors,11th Working Conference on Mining Software Repositories, MSR 2014, Proceedings, May 31 - J...

  24. [24]

    Hintze and Ray D

    Jerry L. Hintze and Ray D. Nelson. Violin plots: A box plot-density trace synergism.The American Statistician, 52(2):181–184, 1998. doi: 10.1080/ 00031305.1998.10480559

  25. [25]

    Hoaglin.Volume 16: How to Detect and Handle Outliers

    Boris Iglewicz and David C. Hoaglin.Volume 16: How to Detect and Handle Outliers. Quality Press, January 1993. ISBN 978-0-87389-260-5

  26. [26]

    In: 2023 IEEE/ACM 23rd International Symposium on Cluster, Cloud and Internet Computing (CCGrid), pp

    Mathilde Jay, Vladimir Ostapenco, Laurent Lefevre, Denis Trystram, Anne-C´ ecile Orgerie, and Benjamin Fichel. An experimental compari- son of software-based power meters: Focus on CPU and GPU. In2023 IEEE/ACM 23rd International Symposium on Cluster, Cloud and Internet Computing (CCGrid), pages 106–118, Bangalore, India, May 2023. IEEE. ISBN 979-8-3503-01...

  27. [27]

    Rapl in action: Experiences in using rapl for power measurements,

    Kashif Nizam Khan, Mikael Hirki, Tapio Niemi, Jukka K. Nurminen, and Zhonghong Ou. RAPL in action: Experiences in using RAPL for power measurements.ACM Trans. Model. Perform. Eval. Comput. Syst., 3(2): 9:1–9:26, March 2018. ISSN 2376-3639. doi: 10.1145/3177754

  28. [28]

    ecoCode: A SonarQube Plugin to Remove Energy Smells from Android Projects

    Olivier Le Goaer and Julien Hertout. ecoCode: A SonarQube Plugin to Remove Energy Smells from Android Projects. InProceedings of the 37th IEEE/ACM International Conference on Automated Software Engineering, ASE ’22, pages 1–4, New York, NY, USA, January 2023. Association for Computing Machinery. ISBN 978-1-4503-9475-8. doi: 10.1145/3551349. 3559518

  29. [29]

    Ding Li, Shuai Hao, William G. J. Halfond, and Ramesh Govindan. Cal- culating source line level energy information for Android applications. In Mauro Pezz` e and Mark Harman, editors,International Symposium on Soft- ware Testing and Analysis, ISSTA ’13, Lugano, Switzerland, July 15-20, 2013, pages 78–89, Lugano Switzerland, July 2013. ACM. ISBN 978-1- 450...

  30. [30]

    Data-Oriented Characteri- zation of Application-Level Energy Optimization

    Kenan Liu, Gustavo Pinto, and Yu David Liu. Data-Oriented Characteri- zation of Application-Level Energy Optimization. In Alexander Egyed and Ina Schaefer, editors,Fundamental Approaches to Software Engineering, Lecture Notes in Computer Science, pages 316–331, Berlin, Heidelberg,

  31. [31]

    ISBN 978-3-662-46675-9

    Springer. ISBN 978-3-662-46675-9. doi: 10.1007/978-3-662-46675-9 21. 25

  32. [32]

    Does maintainability relate to the energy consumption of software? A case study.Softw

    Javier Mancebo, Coral Calero, and F´ elix Garc´ ıa. Does maintainability relate to the energy consumption of software? A case study.Softw. Qual. J., 29(1):101–127, March 2021. ISSN 1573-1367. doi: 10.1007/ S11219-020-09536-9

  33. [33]

    Energy codesumption, leveraging test execution for source code energy con- sumption analysis

    J´ erˆ ome Maquoi, Maxime Cauz, Benoˆ ıt Vanderose, and Xavier Devroey. Energy codesumption, leveraging test execution for source code energy con- sumption analysis. In Leonardo Montecchi, Jingyue Li, Denys Poshyvanyk, and Dongmei Zhang, editors,Proceedings of the 33rd ACM International Conference on the Foundations of Software Engineering, FSE Companion ...

  34. [34]

    McCullough, Yuvraj Agarwal, Jaideep Chandrashekar, Sathya- narayan Kuppuswamy, Alex C

    John C. McCullough, Yuvraj Agarwal, Jaideep Chandrashekar, Sathya- narayan Kuppuswamy, Alex C. Snoeren, and Rajesh K. Gupta. Evaluating the effectiveness of model-based power characterization. InProceedings of the 2011 USENIX Conference on USENIX Annual Technical Conference, USENIXATC’11, page 12, USA, 2011. USENIX Association

  35. [35]

    PowerJoular and JoularJX: Multi-Platform Software Power Monitoring Tools

    Adel Noureddine. PowerJoular and JoularJX: Multi-Platform Software Power Monitoring Tools. In18th International Conference on Intelligent Environments, pages 1–4, Biarritz, France, June 2022. IEEE. doi: 10.1109/ IE54923.2022.9826760

  36. [36]

    A Survey of Power and Energy Predictive Models in HPC Systems and Applications.ACM Comput

    Kenneth O’brien, Ilia Pietri, Ravi Reddy, Alexey Lastovetsky, and Rizos Sakellariou. A Survey of Power and Energy Predictive Models in HPC Systems and Applications.ACM Comput. Surv., 50(3):37:1–37:38, June

  37. [37]

    doi: 10.1145/3078811

    ISSN 0360-0300. doi: 10.1145/3078811

  38. [38]

    Evaluating the Impact of Java Virtual Machines on Energy Consumption

    Zakaria Ournani, Mohammed Chakib Belgaid, Romain Rouvoy, Pierre Rust, and Jo¨ el Penhoat. Evaluating the Impact of Java Virtual Machines on Energy Consumption. In Filippo Lanubile, Marcos Kalinowski, and Maria Teresa Baldassarre, editors,ESEM ’21: ACM / IEEE International Sym- posium on Empirical Software Engineering and Measurement, Bari, Italy, October ...

  39. [39]

    Evalu- ating The Energy Consumption of Java I/O APIs

    Zakaria Ournani, Romain Rouvoy, Pierre Rust, and Joel Penhoat. Evalu- ating The Energy Consumption of Java I/O APIs. InICSME 2021 - 37th International Conference on Software Maintenance and Evolution, Pro- ceedings of the 37th International Conference on Software Maintenance and Evolution (ICSME), pages 1–11, Luxembourg / Virtual, Luxembourg, September 20...

  40. [40]

    Candy Pang, Abram Hindle, Bram Adams, and Ahmed E. Hassan. What Do Programmers Know about Software Energy Consumption?IEEE Soft., 33(3):83–89, May 2016. ISSN 0740-7459, 1937-4194. doi: 10.1109/MS.2015. 83

  41. [41]

    Mutation Testing Advances: An Analysis and Survey

    Mike Papadakis, Marinos Kintis, Jie Zhang, Yue Jia, Yves Le Traon, and Mark Harman. Mutation Testing Advances: An Analysis and Survey. In Advances in Computers, volume 112, pages 275–378. Elsevier, 2019. doi: 10.1016/bs.adcom.2018.03.015. 26

  42. [42]

    Priyavanshi Pathania, Nikhil Bamby, Rohit Mehra, Samarth Sikand, Vibhu Saujanya Sharma, Vikrant Kaulgud, Sanjay Podder, and Adam P. Burden. Calculating Software’s Energy Use and Carbon Emissions: A Survey of the State of Art, Challenges, and the Way Ahead. In9th IEEE/ACM International Workshop on Green and Sustainable Software, GREENS@ICSE 2025, Ottawa, O...

  43. [43]

    Penzenstadler, V

    B. Penzenstadler, V. Bauer, C. Calero, and X. Franch. Sustainability in software engineering: A systematic literature review. In16th International Conference on Evaluation & Assessment in Software Engineering (EASE 2012), pages 32–41, Ciudad Real, 2012. Institution of Engineering and Technology. ISBN 978-1-84919-541-6. doi: 10.1049/ic.2012.0004

  44. [44]

    Rui Pereira, Marco Couto, Francisco Ribeiro, Rui Rua, J´ acome Cunha, Jo˜ ao Paulo Fernandes, and Jo˜ ao Saraiva. Energy efficiency across pro- gramming languages: How do energy, time, and memory relate? InPro- ceedings of the 10th ACM SIGPLAN International Conference on Software Language Engineering, SLE 2017, pages 256–267, New York, NY, USA, October 20...

  45. [45]

    Impact on energy consumption of design patterns, code smells and refactoring tech- niques: A systematic mapping study.Journal of Systems and Software, 222:112303, April 2025

    Olivia Poy, M ´Angeles Moraga, F´ elix Garc´ ıa, and Coral Calero. Impact on energy consumption of design patterns, code smells and refactoring tech- niques: A systematic mapping study.Journal of Systems and Software, 222:112303, April 2025. ISSN 0164-1212. doi: 10.1016/j.jss.2024.112303

  46. [46]

    Dissecting the software-based mea- surement of CPU energy consumption: A comparative analysis.IEEE Transactions on Parallel and Distributed Systems, 36:96–107, 2025

    Guillaume Raffin and Denis Trystram. Dissecting the software-based mea- surement of CPU energy consumption: A comparative analysis.IEEE Transactions on Parallel and Distributed Systems, 36:96–107, 2025. doi: 10.1109/TPDS.2024.3492336

  47. [47]

    Devanbu, and Michael Pradel

    Enrique Barba Roque, Luis Cruz, and Thomas Durieux. Unveiling the Energy Vampires: A Methodology for Debugging Software Energy Con- sumption. In47th IEEE/ACM International Conference on Software En- gineering, ICSE 2025, Ottawa, ON, Canada, April 26 - May 6, 2025, pages 2406–2418. IEEE, April 2025. doi: 10.1109/ICSE55347.2025.00118

  48. [48]

    Simon Schubert, Dejan Kostic, Willy Zwaenepoel, and Kang G. Shin. Pro- filing Software for Energy Consumption. In2012 IEEE International Con- ference on Green Computing and Communications, pages 515–522. IEEE Computer Society, November 2012. doi: 10.1109/GreenCom.2012.86

  49. [49]

    MANAi – An IntelliJ Plugin for Soft- ware Energy Consumption Profiling.CoRR, abs/2205.03120, May 2022

    Andreas Schuler and Gabriele Kotsis. MANAi – An IntelliJ Plugin for Soft- ware Energy Consumption Profiling.CoRR, abs/2205.03120, May 2022. doi: 10.48550/ARXIV.2205.03120

  50. [50]

    Additivity: A Selection Criterion for Performance Events for Reliable En- ergy Predictive Modeling.Supercomputing Frontiers and Innovations, 4(4): 50–65, November 2017

    Arsalan Shahid, Muhammad Fahad, Ravi Reddy, and Alexey Lastovetsky. Additivity: A Selection Criterion for Performance Events for Reliable En- ergy Predictive Modeling.Supercomputing Frontiers and Innovations, 4(4): 50–65, November 2017. ISSN 2313-8734. doi: 10.14529/jsfi170404. 27

  51. [51]

    S. S. Shapiro and M. B. Wilk. An analysis of variance test for normality (complete samples).Biometrika, 52(3-4):591–611, 12 1965. ISSN 0006-3444. doi: 10.1093/biomet/52.3-4.591

  52. [52]

    BoaviztAPI: A Bottom-Up Model to Assess the Environmental Impacts of Cloud Services.SIGENERGY Energy Inform

    Thibault Simon, David Ekchajzer, Adrien Berthelot, Eric Fourboul, Samuel Rince, and Romain Rouvoy. BoaviztAPI: A Bottom-Up Model to Assess the Environmental Impacts of Cloud Services.SIGENERGY Energy Inform. Rev., 4(5):84–90, April 2025. doi: 10.1145/3727200.3727213

  53. [53]

    Spencer Desrochers, Chad Paradis, and Vincent M. Weaver. A Validation of DRAM RAPL Power Measurements. In Bruce L. Jacob, editor,Proceedings of the Second International Symposium on Memory Systems, MEMSYS 2016, Alexandria, VA, USA, October 3-6, 2016, pages 455–470, New York, NY, USA, October 2016. ACM. ISBN 978-1-4503-4305-3. doi: 10.1145/ 2989081.2989088

  54. [54]

    John W. Tukey. Comparing Individual Means in the Analysis of Variance. Biometrics, 5(2):99–114, 1949. ISSN 0006-341X. doi: 10.2307/3001913

  55. [55]

    B. L. Welch. The generalization of ‘student’s’ problem when several differ- ent population variances are involved.Biometrika, 34(1-2):28–35, 01 1947. ISSN 0006-3444. doi: 10.1093/biomet/34.1-2.28

  56. [56]

    M. B. Wilk and R. Gnanadesikan. Probability plotting methods for the analysis for the analysis of data.Biometrika, 55(1):1–17, March 1968. ISSN 0006-3444. doi: 10.1093/biomet/55.1.1

  57. [57]

    In: 2025 IEEE 22nd International Conference on Software Architecture (ICSA)

    Xingwen Xiao, Chushu Gao, and Justus Bogner. On the Effectiveness of Microservices Tactics and Patterns to Reduce Energy Consumption: An Experimental Study on Trade-Offs. In22nd IEEE International Conference on Software Architecture, ICSA 2025, Odense, Denmark, March 31 - April 4, 2025, pages 164–175. IEEE, March 2025. doi: 10.1109/ICSA65012.2025. 00025. 28