Lightweight Kubernetes Distributions: A Performance Comparison of MicroK8s, k3s, k0s, and MicroShift
Three Pith papers cite this work (2026); polarity classification is still indexing. Representative citing papers:
-
LEO: Tracing GPU Stall Root Causes via Cross-Vendor Backward Slicing
LEO performs cross-vendor backward slicing from stalled GPU instructions to attribute root causes to source code, enabling optimizations that produce geometric-mean speedups of 1.73-1.82x on 21 workloads.
-
KEET: Explaining Performance of GPU Kernels Using LLM Agents
KEET uses LLM agents to generate data-grounded natural language explanations of performance issues in GPU kernels from Nsight Compute profiles and shows these improve downstream LLM-based optimization tasks.
-
Optimizing OpenFaaS on Kubernetes: Comparative Analysis of Language Runtimes and Cluster Distributions
Go outperforms Python and Node.js in throughput and CPU efficiency on OpenFaaS; K3s and Kubeadm deliver higher throughput and lower latency than MicroK8s or K0s under concurrent load.