MLIR: Scaling Compiler Infrastructure for Domain Specific Computation
8 Pith papers cite this work.
Citing papers
- LLM Translation of Compiler Intermediate Representation
  IRIS-14B is the first LLM trained explicitly for GIMPLE-to-LLVM IR translation and outperforms much larger models by up to 44 percentage points on real-world C code.
- SkCC: Portable and Secure Skill Compilation for Cross-Framework LLM Agents
  SkCC compiles LLM skills via SkIR to achieve portability across agent frameworks, reduce adaptation effort from O(m×n) to O(m+n), and enforce security, with reported gains in task success rates and token efficiency.
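The O(m×n) → O(m+n) claim is the classic IR factoring argument: instead of a direct adapter for every skill/framework pair, each of m skills is lowered once into a shared IR and each of n frameworks gets one backend that emits from it. SkCC's actual SkIR format and API are not shown here; the names below (lower_to_ir, emit_for, the dict-based IR) are purely illustrative.

```python
# Schematic sketch of the O(m+n) factoring: m skill frontends lower into
# one shared IR, and n framework backends emit from that IR, instead of
# writing m*n direct skill-to-framework adapters. All names here are
# illustrative, not SkCC's real API.

def lower_to_ir(skill_name, steps):
    """Frontend: one per skill (m total). Produces a framework-neutral IR."""
    return {"skill": skill_name, "ops": [{"op": s} for s in steps]}

def emit_for(framework, ir):
    """Backend: one per framework (n total). Renders the IR for that framework."""
    header = f"# {framework} adapter for skill '{ir['skill']}'"
    body = [f"{framework}.run({op['op']!r})" for op in ir["ops"]]
    return "\n".join([header] + body)

skills = {"web_search": ["query", "rank"], "summarize": ["chunk", "reduce"]}
frameworks = ["langchain", "autogen", "crewai"]

# 2 frontends + 3 backends cover all 2*3 = 6 pairings:
artifacts = {
    (name, fw): emit_for(fw, lower_to_ir(name, steps))
    for name, steps in skills.items()
    for fw in frameworks
}
print(len(artifacts))  # 6
```

Adding a new framework then costs one backend, not one adapter per existing skill.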
- Demonstrating a Future for MLIR-native DSL Compilers on a NumPy-like Example
  An MLIR-native, NumPy-like DSL with a new dialect-agnostic type checker and parallel-first lowering to a dataflow dialect, shown on weather modeling and CFD workloads in Fortran.
- SSA without Dominance for Higher-Order Programs
  Free-variable sets and a nesting tree can replace dominance relations in SSA for higher-order programs, improving precision without requiring explicit control-flow graphs.
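Free-variable sets are cheap to compute directly from a term's structure, which is what makes them attractive as a substitute for dominance information. A minimal sketch over a toy higher-order AST (the tuple encoding — ("var", x), ("lam", x, body), ("app", f, a), ("let", x, rhs, body) — is my own, not the paper's representation):

```python
# Free variables of a toy lambda-calculus term: a variable is free if it
# is used without being bound by an enclosing lam or let. This per-term,
# syntax-directed computation needs no control-flow graph.

def free_vars(e):
    tag = e[0]
    if tag == "var":
        return {e[1]}
    if tag == "lam":                      # ("lam", x, body): x is bound in body
        _, x, body = e
        return free_vars(body) - {x}
    if tag == "app":                      # ("app", f, a): union of both sides
        _, f, a = e
        return free_vars(f) | free_vars(a)
    if tag == "let":                      # ("let", x, rhs, body): x bound in body only
        _, x, rhs, body = e
        return free_vars(rhs) | (free_vars(body) - {x})
    raise ValueError(f"unknown node: {tag}")

# \f. \x. f (g x)  -- only g is free
term = ("lam", "f", ("lam", "x",
        ("app", ("var", "f"), ("app", ("var", "g"), ("var", "x")))))
print(free_vars(term))  # {'g'}
```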
- LEO: Tracing GPU Stall Root Causes via Cross-Vendor Backward Slicing
  LEO performs cross-vendor backward slicing from stalled GPU instructions to attribute root causes to source code, enabling optimizations that produce geometric-mean speedups of 1.73–1.82× on 21 workloads.
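The core slicing step can be illustrated on straight-line code: starting from the stalled instruction, follow def-use edges backward and collect every instruction that contributed to its operands. LEO does this on real GPU binaries across vendors; the (dst, srcs) tuple encoding below is a toy stand-in.

```python
# Toy backward slice over straight-line pseudo-assembly. Each instruction
# is (dst, srcs); the slice of a stalled instruction is every earlier
# instruction whose result it transitively depends on.

def backward_slice(insts, start):
    """Return sorted indices of instructions in the slice of insts[start]."""
    needed = set(insts[start][1])        # registers the stalled inst reads
    sliced = {start}
    for i in range(start - 1, -1, -1):
        dst, srcs = insts[i]
        if dst in needed:                # this inst defines a needed value
            sliced.add(i)
            needed.discard(dst)          # definition found...
            needed.update(srcs)          # ...so its inputs become needed
    return sorted(sliced)

prog = [
    ("r1", ["mem_a"]),     # 0: load r1 <- a   (long-latency load)
    ("r2", ["mem_b"]),     # 1: load r2 <- b
    ("r3", ["r1", "r2"]),  # 2: r3 = r1 * r2
    ("r4", ["r9"]),        # 3: unrelated work, excluded from the slice
    ("r5", ["r3"]),        # 4: stalled consumer of r3
]
print(backward_slice(prog, 4))  # [0, 1, 2, 4]
```

The slice attributes the stall at instruction 4 back to the two loads, which is the attribution a tool would then map to source lines.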
- EquivFusion: Unifying Hardware Equivalence Checking from Algorithms to Netlists via MLIR
  EquivFusion unifies equivalence checking across hardware design levels by lowering PyTorch, C/C++, Chisel, Verilog, and netlists via MLIR into SMT-LIB, BTOR2, and AIGER formats.
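The question every one of those solver backends ultimately decides is the miter check: two combinational designs are equivalent iff their outputs agree on every input assignment. EquivFusion hands that to SMT/BTOR2/AIGER solvers; for a handful of inputs the same check can be done exhaustively, as a sketch:

```python
# Brute-force "miter" equivalence check: two combinational circuits are
# equivalent iff no input assignment distinguishes them. Real flows emit
# SMT-LIB/BTOR2/AIGER and let a solver search; this exhaustive loop is
# the 3-input toy version of the same decision problem.
from itertools import product

def equivalent(f, g, n_inputs):
    return all(f(*bits) == g(*bits) for bits in product([0, 1], repeat=n_inputs))

# Two implementations of a full adder's sum bit, at different "levels":
sum_gates = lambda a, b, c: a ^ b ^ c        # gate-level XOR chain
sum_arith = lambda a, b, c: (a + b + c) % 2  # algorithmic description

print(equivalent(sum_gates, sum_arith, 3))  # True
```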
- KEET: Explaining Performance of GPU Kernels Using LLM Agents
  KEET uses LLM agents to generate data-grounded natural-language explanations of performance issues in GPU kernels from Nsight Compute profiles, and shows that these explanations improve downstream LLM-based optimization tasks.
- AutoLALA: Automatic Loop Algebraic Locality Analysis for AI and HPC Kernels
  AutoLALA automatically generates symbolic formulas for reuse distance and data movement complexity in affine loop programs using polyhedral lowering and Barvinok counting.
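Reuse distance, the quantity AutoLALA derives closed forms for, is the number of distinct addresses touched between two consecutive accesses to the same address. Measuring it by trace simulation, as sketched below, is exactly the brute-force approach that symbolic polyhedral formulas replace:

```python
# Reuse distance by direct trace simulation: for each access, count the
# distinct addresses touched since the previous access to the same one.
# None marks a cold (first-ever) access, i.e. infinite reuse distance.

def reuse_distances(trace):
    last_pos = {}
    out = []
    for i, addr in enumerate(trace):
        if addr in last_pos:
            out.append(len(set(trace[last_pos[addr] + 1 : i])))
        else:
            out.append(None)
        last_pos[addr] = i
    return out

# Access trace of a[j] in: for i in 0..1: for j in 0..2: use a[j]
trace = ["a0", "a1", "a2", "a0", "a1", "a2"]
print(reuse_distances(trace))  # [None, None, None, 2, 2, 2]
```

The constant reuse distance 2 in the second outer iteration is what a symbolic analysis would report as a closed-form function of the loop bounds, without enumerating the trace.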