pith. machine review for the scientific record.

arxiv: 2605.05426 · v1 · submitted 2026-05-06 · 💻 cs.NI

Recognition: unknown

Performance Characterization of dApps in Open Radio Access Networks


Pith reviewed 2026-05-08 15:39 UTC · model grok-4.3

classification 💻 cs.NI
keywords: O-RAN, dApps, performance characterization, containerization, bare-metal, smart NIC, latency, scalability

The pith

Containerized deployments of dApps in O-RAN reveal latency and resource trade-offs that offloading to smart NICs can resolve.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

This paper implements representative dApps for Open Radio Access Networks and tests them in both bare-metal and containerized environments. It measures the resulting differences in latency, scalability, and resource utilization to guide deployment choices. The evaluation identifies performance bottlenecks in these setups. It further shows that moving dApp processing to smart Network Interface Cards reduces these issues and supports better real-time operation.

Core claim

By implementing and evaluating representative dApps across bare-metal and container deployment scenarios, this work characterizes the trade-offs in latency, scalability, and resource utilization for both intelligent and non-intelligent applications in O-RAN. Key performance bottlenecks are identified, and offloading dApps to smart NICs is demonstrated to alleviate these limitations and improve real-time responsiveness.

What carries the argument

Comparative performance evaluation of dApps in bare-metal servers versus containers, combined with hardware offloading to smart NICs for bottleneck alleviation.
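The measurement side of that comparison can be sketched in a few lines. The harness below is an illustrative assumption, not the paper's actual instrumentation: the phase names (`e2_receive`, `inference`, `control_reply`) stand in for whatever steps the authors time, and each phase is just a zero-argument callable.

```python
import time
import statistics

def run_control_loop(phases):
    """Time one control-loop iteration, phase by phase.

    `phases` maps a phase name to a zero-argument callable; the return
    value maps each phase name to its latency in milliseconds.
    """
    timings = {}
    for name, fn in phases.items():
        start = time.perf_counter()
        fn()
        timings[name] = (time.perf_counter() - start) * 1e3
    return timings

def characterize(phases, iterations=100):
    """Aggregate mean and p99 latency per phase over repeated loop runs."""
    samples = {name: [] for name in phases}
    for _ in range(iterations):
        for name, ms in run_control_loop(phases).items():
            samples[name].append(ms)
    return {
        name: {
            "mean_ms": statistics.fmean(vals),
            "p99_ms": sorted(vals)[int(0.99 * (len(vals) - 1))],
        }
        for name, vals in samples.items()
    }
```

Running the same `characterize` call on bare metal and inside a container, with identical phase callables, yields the kind of per-phase deltas the comparison turns on.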

Load-bearing premise

That the representative dApps selected accurately mirror the performance characteristics of actual dApps used in real-world O-RAN deployments.

What would settle it

Running the same evaluation on a production O-RAN testbed with live intelligent dApps and observing no meaningful latency difference between bare-metal and containers, or no improvement from smart NIC offloading.
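The "no meaningful latency difference" arm of that test needs an explicit criterion. A minimal sketch, assuming a stand-in threshold of a 5% shift in median latency (the threshold is this sketch's invention, not the paper's):

```python
import statistics

def meaningful_difference(baseline_ms, candidate_ms, threshold_pct=5.0):
    """Compare two latency samples (in milliseconds).

    Reports whether the candidate's median shifts by more than
    `threshold_pct` percent of the baseline median.
    Returns (is_meaningful, shift_pct).
    """
    base = statistics.median(baseline_ms)
    cand = statistics.median(candidate_ms)
    shift_pct = abs(cand - base) / base * 100.0
    return shift_pct > threshold_pct, shift_pct
```

Under this criterion, the claim would be settled against the paper if the bare-metal-versus-container comparison and the analogous smart-NIC comparison both came back negative on a production testbed.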

Figures

Figures reproduced from arXiv: 2605.05426 by Andrea Lacava, Conrado Boeira, Dimitrios Koutsonikolas, Eduardo Baena, Israat Haque, Tommaso Melodia.

Figure 1. Message exchange for the control loop while utilizing …
Figure 2. Execution time of dApps on a bare-metal server without …
Figure 3. The impact of computing sources and AI platforms on neural network-based dApps in a bare-metal deployment.
Figure 4. Deployment comparison across four workloads, divided into the four latency phases. Co-located containers (2-container …
Figure 5. dApp latency vs. allocated CPU cores in a 2-container deployment. Only Xception benefits from multi-core allocation; …
Figure 6. CPU-based FCN running at up to 16 instances simultaneously …
Figure 7. GPU-accelerated Xception at up to 16 instances.
Figure 8. Resource utilization reveals the bottleneck: GPU idle …
Figure 9. Comparison between traditional dApp deployment and …
Figure 10. Smart NIC deployment outperforms bare-metal in terms of latency despite slower CPUs (2.6 GHz ARM vs. 3.2 GHz …
Figure 11. The average response time reduces by 35%, worst …
Original abstract

Despite recommendations to deploy real-time Open Radio Access Network (O-RAN) applications (dApps) in containerized environments, existing approaches predominantly rely on bare-metal servers. Moreover, current dApp deployments offer limited visibility into the resource usage patterns of both intelligent and non-intelligent dApps, hindering informed deployment decisions. This work addresses these gaps by implementing and evaluating representative dApps across multiple deployment scenarios (bare-metal and containers) to characterize the trade-offs in latency, scalability, and resource utilization. Additionally, we identify key performance bottlenecks and demonstrate how offloading dApps to emerging hardware accelerators, such as smart Network Interface Cards (NICs), can alleviate these limitations and improve real-time responsiveness in O-RAN systems.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, this is the friction.

Referee Report

1 major / 0 minor

Summary. The paper implements and evaluates representative dApps for O-RAN across bare-metal and containerized deployments to characterize trade-offs in latency, scalability, and resource utilization; it identifies performance bottlenecks and demonstrates that offloading to smart NICs can alleviate them and improve real-time responsiveness.

Significance. If the chosen dApps accurately reflect production workloads, the empirical characterization could guide deployment choices between bare-metal and containers in O-RAN and quantify the benefits of emerging hardware accelerators like smart NICs for latency-sensitive applications.

major comments (1)
  1. The central claim that the experiments reveal generalizable trade-offs and that NIC offloading alleviates bottlenecks rests on the representativeness of the selected dApps. The manuscript must explicitly map the computational profiles, I/O patterns, ML inference loads (if any), and real-time constraints of the implemented dApps to those of actual intelligent and non-intelligent dApps interacting with the RIC, E2 interface, and fronthaul; without this mapping or validation against production traces, the measured container-vs-bare-metal deltas and offload gains may not transfer.

Simulated Author's Rebuttal

1 response · 1 unresolved

We thank the referee for highlighting the need to substantiate the representativeness of our dApps. We address this point directly below and will revise the manuscript accordingly.

Point-by-point responses
  1. Referee: The central claim that the experiments reveal generalizable trade-offs and that NIC offloading alleviates bottlenecks rests on the representativeness of the selected dApps. The manuscript must explicitly map the computational profiles, I/O patterns, ML inference loads (if any), and real-time constraints of the implemented dApps to those of actual intelligent and non-intelligent dApps interacting with the RIC, E2 interface, and fronthaul; without this mapping or validation against production traces, the measured container-vs-bare-metal deltas and offload gains may not transfer.

    Authors: We agree that an explicit mapping is required to support claims of generalizability. Our dApps were selected to reflect documented O-RAN use cases: the non-intelligent dApp performs periodic KPI collection and reporting over the E2 interface (I/O-bound with low compute, sub-5 ms polling cycles matching fronthaul timing), while the intelligent dApp executes lightweight ML inference for anomaly detection (CPU-bound inference with occasional GPU offload, targeting <10 ms end-to-end latency per RIC control-loop requirements). We will add a dedicated subsection (new Section 3.2) that tabulates these profiles against O-RAN Alliance specifications for RIC, E2, and fronthaul interactions, including compute intensity, I/O patterns, and real-time constraints. The observed containerization overheads and NIC-offload gains are presented as illustrative of these representative workloads rather than universally quantified. We will also moderate the abstract and conclusion to avoid implying broad transferability without production-trace validation.

    revision: partial
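The two workload profiles described in the response can be made concrete with a hedged sketch. The 5 ms polling cycle and the 10 ms inference deadline follow the numbers in the rebuttal; the function names, the stand-in callables, and the deadline-miss accounting are this sketch's illustrative assumptions:

```python
import time

def kpi_polling_dapp(read_kpis, cycle_ms=5.0, cycles=50):
    """Non-intelligent profile: periodic KPI collection on a fixed cycle.

    `read_kpis` stands in for the I/O-bound E2 read; returns the fraction
    of cycles whose work overran the cycle budget.
    """
    misses = 0
    for _ in range(cycles):
        start = time.perf_counter()
        read_kpis()
        elapsed_ms = (time.perf_counter() - start) * 1e3
        if elapsed_ms > cycle_ms:
            misses += 1
        else:
            time.sleep((cycle_ms - elapsed_ms) / 1e3)  # wait out the cycle
    return misses / cycles

def inference_dapp(infer, deadline_ms=10.0, requests=50):
    """Intelligent profile: per-request ML inference under an end-to-end
    latency deadline. Returns the fraction of deadline misses.
    """
    misses = 0
    for _ in range(requests):
        start = time.perf_counter()
        infer()
        if (time.perf_counter() - start) * 1e3 > deadline_ms:
            misses += 1
    return misses / requests
```

Swapping the stand-in callables for real E2 reads and model inference, then running the same loops bare-metal, containerized, and NIC-offloaded, is the kind of profile-to-measurement mapping the referee asks for.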

standing simulated objections not resolved
  • Direct validation against proprietary production traces remains out of reach, since the authors lack access to operator-internal dApp workloads or E2/fronthaul logs.

Circularity Check

0 steps flagged

No circularity: purely empirical characterization of dApp deployments

full rationale

The paper performs an experimental study by implementing representative dApps and measuring latency, scalability, and resource utilization across bare-metal, containerized, and smart-NIC offload scenarios. No equations, derivations, fitted parameters, or predictions appear in the provided abstract or description. The central claims rest on direct measurements rather than any self-referential reduction, self-citation chain, or ansatz smuggled in via prior work. The representativeness assumption is a standard empirical limitation but does not create circularity under the defined criteria, as no result is forced by construction from the inputs.

Axiom & Free-Parameter Ledger

0 free parameters · 0 axioms · 0 invented entities

This is an empirical performance evaluation paper. No free parameters, mathematical axioms, or new invented entities are described in the abstract.

pith-pipeline@v0.9.0 · 5431 in / 1162 out tokens · 71544 ms · 2026-05-08T15:39:09.587433+00:00 · methodology


Reference graph

Works this paper leans on

30 extracted references · 2 canonical work pages

  1. S. D'Oro, M. Polese, L. Bonati, H. Cheng, and T. Melodia, "dApps: Distributed applications for real-time inference and control in O-RAN," IEEE Communications Magazine, vol. 60, no. 11, pp. 52–58, 2022.

  2. R. Gangula, A. Lacava, M. Polese, S. D'Oro, L. Bonati, F. Kaltenberger, P. Johari, and T. Melodia, "Listen-while-talking: Toward dApp-based real-time spectrum sharing in O-RAN," in MILCOM 2024 – IEEE Military Communications Conference (MILCOM). IEEE, 2024, pp. 651–652.

  3. A. Scalingi, S. D'Oro, F. Restuccia, T. Melodia, and D. Giustiniano, "DET-RAN: Data-driven cross-layer real-time attack detection in 5G open RANs," in IEEE INFOCOM 2024 – IEEE Conference on Computer Communications. IEEE, 2024, pp. 41–50.

  4. O-RAN next Generation Research Group (nGRG), "dApps for Real-Time RAN Control: Use Cases and Requirements," 2024. Accessed: Nov. 19, 2024. [Online]. Available: https://mediastorage.o-ran.org/ngrg-rr/nGRG-RR-2024-10-dApp%20use%20cases%20and%20requirements.pdf

  5. A. Lacava, L. Bonati, N. Mohamadi, R. Gangula, F. Kaltenberger, P. Johari, S. D'Oro, F. Cuomo, M. Polese, and T. Melodia, "dApps: Enabling Real-Time AI-Based Open RAN Control," Computer Networks, p. 111342, 2025. [Online]. Available: https://www.sciencedirect.com/science/article/pii/S1389128625003093

  6. F. Olimpieri, N. Giustini, A. Lacava, S. D'Oro, T. Melodia, and F. Cuomo, "LibIQ: Toward Real-Time Spectrum Classification in O-RAN dApps," in 2025 23rd Mediterranean Communication and Computer Networking Conference (MedComNet), 2025, pp. 1–6.

  7. F. Kaltenberger, A. P. Silva, A. Gosain, L. Wang, and T.-T. Nguyen, "OpenAirInterface: Democratizing innovation in the 5G era," Computer Networks, vol. 176, p. 107284, 2020.

  8. "dApp-library," 2025. [Online]. Available: https://github.com/PINetDalhousie/dApp-library

  9. M. Polese, L. Bonati, S. D'Oro, S. Basagni, and T. Melodia, "Understanding O-RAN: Architecture, interfaces, algorithms, security, and research challenges," IEEE Communications Surveys & Tutorials, vol. 25, no. 2, pp. 1376–1411, 2023.

  10. C. Wei, A. Kak, N. Choi, and T. Wood, "5GPerf: Profiling open source 5G RAN components under different architectural deployments," in Proceedings of the ACM SIGCOMM Workshop on 5G and Beyond Network Measurements, Modeling, and Use Cases, 2022, pp. 43–49.

  11. M. Hervás-Gutiérrez, E. Baena, C. Baena, J. Villegas, R. Barco, and S. Fortes, "Impact of CPU resource allocation on vRAN performance in O-Cloud," Authorea Preprints, 2023.

  12. L. L. Schiavo, J. A. Ayala-Romero, A. Garcia-Saavedra, M. Fiore, and X. Costa-Perez, "YinYangRAN: Resource multiplexing in GPU-accelerated virtualized RANs," in IEEE INFOCOM 2024 – IEEE Conference on Computer Communications. IEEE, 2024, pp. 721–730.

  13. N. N. Santhi, D. Villa, M. Polese, and T. Melodia, "InterfO-RAN: Real-time in-band cellular uplink interference detection with GPU-accelerated dApps," arXiv preprint arXiv:2507.23177, 2025.

  14. J. X. Salvat, J. A. Ayala-Romero, L. Zanzi, A. Garcia-Saavedra, and X. Costa-Perez, "Open radio access networks (O-RAN) experimentation platform: Design and datasets," IEEE Communications Magazine, vol. 61, no. 9, pp. 138–144, 2023.

  15. D. Johnson, D. Maas, and J. Van Der Merwe, "NexRAN: Closed-loop RAN slicing in POWDER, a top-to-bottom open-source Open RAN use case," in Proceedings of the 15th ACM Workshop on Wireless Network Testbeds, Experimental evaluation & CHaracterization, 2022, pp. 17–23.

  16. W.-H. Ko, U. Ghosh, U. Dinesha, R. Wu, S. Shakkottai, and D. Bharadia, "EdgeRIC: Empowering real-time intelligent optimization and control in NextG cellular networks," in 21st USENIX Symposium on Networked Systems Design and Implementation (NSDI 24), 2024, pp. 1315–1330.

  17. H. Tajbakhsh, R. Parizotto, M. Neves, A. Schaeffer-Filho, and I. Haque, "Accelerator-aware in-network load balancing for improved application performance," in 2022 IFIP Networking Conference (IFIP Networking). IEEE, 2022, pp. 1–9.

  18. H. Tajbakhsh, R. Parizotto, A. Schaeffer-Filho, and I. Haque, "P4Hauler: An accelerator-aware in-network load balancer for applications performance boosting," IEEE Transactions on Cloud Computing, vol. 12, no. 2, pp. 697–711, 2024.

  19. J. Zhao, M. Neves, and I. Haque, "On the (dis)advantages of programmable NICs for network security services," in 2023 IFIP Networking Conference (IFIP Networking). IEEE, 2023, pp. 1–9.

  20. T. Xing, H. Tajbakhsh, I. Haque, M. Honda, and A. Barbalace, "Towards portable end-to-end network performance characterization of SmartNICs," in Proceedings of the 13th ACM SIGOPS Asia-Pacific Workshop on Systems (APSys '22). New York, NY, USA: Association for Computing Machinery, 2022, pp. 46–52. [Online]. Available: https://doi.org/10.1145/3546591.3547528

  21. S. Panda, K. Ramakrishnan, and L. N. Bhuyan, "Synergy: A SmartNIC accelerated 5G dataplane and monitor for mobility prediction," in 2022 IEEE 30th International Conference on Network Protocols (ICNP). IEEE, 2022, pp. 1–12.

  22. E. F. Kfoury, S. Choueiri, A. Mazloum, A. AlSabeh, J. Gomez, and J. Crichigno, "A comprehensive survey on SmartNICs: Architectures, development models, applications, and research directions," IEEE Access, 2024.

  23. J. C. Borromeo, K. Kondepu, N. Andriolli, and L. Valcarenghi, "FPGA-accelerated SmartNIC for supporting 5G virtualized radio access network," Computer Networks, vol. 210, p. 108931, 2022.

  24. A. Madhavapeddy, R. Mortier, C. Rotsos, D. Scott, B. Singh, T. Gazagnaire, S. Smith, S. Hand, and J. Crowcroft, "Unikernels: Library operating systems for the cloud," ACM SIGARCH Computer Architecture News, vol. 41, no. 1, pp. 461–472, 2013.

  25. J. Liedtke, "Toward real microkernels," Communications of the ACM, vol. 39, no. 9, pp. 70–77, 1996.

  26. Wineslab, "dApp-openairinterface5g," 2025. [Online]. Available: https://github.com/wineslab/dApp-openairinterface5g

  27. D. Uvaydov, S. D'Oro, F. Restuccia, and T. Melodia, "DeepSense: Fast wideband spectrum sensing through real-time in-the-loop deep learning," in IEEE INFOCOM 2021 – IEEE Conference on Computer Communications. IEEE, 2021, pp. 1–10.

  28. F. Chollet, "Xception: Deep learning with depthwise separable convolutions," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2017, pp. 1251–1258.

  29. NVIDIA, "NVIDIA TensorRT," Online, 2024. [Online]. Available: https://developer.nvidia.com/tensorrt#section-what-is-nvidia-tensorrt

  30. Google, "LiteRT overview — Google AI Edge — Google AI for Developers," Online, 2025. [Online]. Available: https://www.tensorflow.org/lite