
PyTorch 2: Faster machine learning through dynamic Python bytecode transformation and graph compilation

20 Pith papers cite this work. Polarity classification is still in progress.
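As context for the hub paper: the mechanism it describes is exposed through the one-line torch.compile opt-in, where TorchDynamo captures Python bytecode into graphs and TorchInductor compiles them. A minimal sketch; the toy model and shapes below are illustrative.

```python
import torch

# Toy model; sizes are illustrative.
model = torch.nn.Sequential(
    torch.nn.Linear(64, 128),
    torch.nn.ReLU(),
    torch.nn.Linear(128, 10),
)

# One-line opt-in: TorchDynamo rewrites Python bytecode to capture graphs,
# which TorchInductor then compiles to fused kernels.
compiled_model = torch.compile(model)

x = torch.randn(8, 64)
out = compiled_model(x)  # first call compiles; later calls reuse the compiled graph
```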

hub tools

citation-role summary: background (1)
citation-polarity summary: background (1)
years: 2026 (20)

representative citing papers

Locking Pretrained Weights via Deep Low-Rank Residual Distillation

cs.LG · 2026-05-11 · unverdicted · novelty 7.0

DLR-Lock locks open-weight LLMs against unauthorized fine-tuning by swapping MLPs for deep low-rank residual networks that inflate backprop memory and complicate optimization, yet preserve original capabilities via module-wise distillation.
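A hypothetical sketch of the kind of deep low-rank residual replacement the DLR-Lock summary describes; the block structure, names, and sizes are assumptions, not the paper's code.

```python
import torch

class LowRankResidualBlock(torch.nn.Module):
    # Hypothetical block: a rank-r bottleneck with a residual connection.
    def __init__(self, dim: int, rank: int):
        super().__init__()
        self.down = torch.nn.Linear(dim, rank, bias=False)  # dim -> rank
        self.up = torch.nn.Linear(rank, dim, bias=False)    # rank -> dim
        self.act = torch.nn.GELU()

    def forward(self, x):
        return x + self.up(self.act(self.down(x)))

# Replacing one wide MLP with a deep stack of narrow residual blocks lengthens
# the computation graph; per the summary, this inflates activation memory during
# backprop for would-be fine-tuners, while distillation preserves the original mapping.
mlp_replacement = torch.nn.Sequential(
    *[LowRankResidualBlock(dim=768, rank=16) for _ in range(32)]
)
```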

VNN-LIB 2.0: Rigorous Foundations for Neural Network Verification

cs.LG · 2026-05-08 · unverdicted · novelty 7.0

VNN-LIB 2.0 defines a network theory abstraction, formal query syntax, type system over numeric domains, and Agda-mechanized semantics to provide rigorous foundations for neural network verification independent of evolving model formats.
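Not VNN-LIB syntax, but a toy interval-bound propagation check illustrating the kind of query such specifications formalize: whether every input in a box yields outputs satisfying a property. The network and bounds below are made up.

```python
import torch

def interval_linear(W, b, lo, hi):
    # Propagate an axis-aligned box through y = W x + b.
    W_pos, W_neg = W.clamp(min=0), W.clamp(max=0)
    return W_pos @ lo + W_neg @ hi + b, W_pos @ hi + W_neg @ lo + b

# Made-up 2-layer ReLU network and input box.
W1, b1 = torch.randn(8, 4), torch.randn(8)
W2, b2 = torch.randn(2, 8), torch.randn(2)
lo, hi = torch.zeros(4), 0.1 * torch.ones(4)

l1, u1 = interval_linear(W1, b1, lo, hi)
l1, u1 = l1.relu(), u1.relu()            # ReLU is monotone, so bounds pass through
l2, u2 = interval_linear(W2, b2, l1, u1)

# Example property: output 0 exceeds output 1 for every input in the box.
verified = bool(l2[0] > u2[1])  # sound but incomplete: True proves it, False is inconclusive
print(verified)
```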

Sarus Suite: Cloud-native Containers for HPC

cs.DC · 2026-04-18 · unverdicted · novelty 7.0

Sarus Suite shows that HPC systems can match production container performance using an unmodified Podman engine plus explicit system layers for scheduling, scalable image handling, and host integration.

Neuro-Symbolic ODE Discovery with Latent Grammar Flow

cs.LG · 2026-04-17 · unverdicted · novelty 7.0

Latent Grammar Flow discovers ODEs by placing grammar-based equation representations in a discrete latent space, using a behavioral loss to cluster similar equations, and sampling via a discrete flow model guided by data fit and constraints.
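A toy illustration of the grammar-based equation representation the summary mentions: sampling ODE right-hand sides from a small context-free grammar. The paper's discrete latent space, behavioral loss, and flow model are omitted; the grammar below is an assumption for illustration only.

```python
import random

# Tiny context-free grammar over ODE right-hand sides dx/dt = f(x, t);
# symbols and production rules are made up.
GRAMMAR = {
    "E": [["E", " + ", "E"], ["E", " * ", "E"], ["T"]],
    "T": [["x"], ["t"], ["c"], ["sin(", "E", ")"]],
    "c": [["0.5"], ["1.0"], ["2.0"]],
}

def sample(symbol="E", depth=0, max_depth=3):
    if symbol not in GRAMMAR:
        return symbol  # terminal token
    rules = GRAMMAR[symbol]
    if depth >= max_depth:
        # Force termination: drop rules that re-introduce the recursive symbol "E".
        rules = [r for r in rules if "E" not in r]
    rule = random.choice(rules)
    return "".join(sample(s, depth + 1, max_depth) for s in rule)

print(sample())  # e.g. "x * sin(t) + 0.5"
```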

Doubly Robust Proxy Causal Learning with Neural Mean Embeddings

cs.LG · 2026-05-10 · unverdicted · novelty 6.0

A neural doubly robust proxy causal learning framework, using mean embeddings for treatment bridge functions, provides consistent estimators of causal dose-response functions under unobserved confounding, for both continuous and structured treatments.
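For readers unfamiliar with the "doubly robust" in the title, the textbook augmented inverse-propensity estimator for a binary treatment is shown below; the paper's proxy-bridge, continuous-treatment generalization differs, and this formula is only the classical form the name comes from.

$$
\hat{\tau}_{\mathrm{DR}} = \frac{1}{n}\sum_{i=1}^{n}\left[\hat{\mu}_1(X_i)-\hat{\mu}_0(X_i)+\frac{T_i\,\bigl(Y_i-\hat{\mu}_1(X_i)\bigr)}{\hat{\pi}(X_i)}-\frac{(1-T_i)\,\bigl(Y_i-\hat{\mu}_0(X_i)\bigr)}{1-\hat{\pi}(X_i)}\right]
$$

It is consistent if either the outcome model $\hat{\mu}$ or the propensity model $\hat{\pi}$ is correctly specified, hence "doubly" robust.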

Can Muon Fine-tune Adam-Pretrained Models?

cs.LG · 2026-05-11 · unverdicted · novelty 4.0

Constraining fine-tuning updates with LoRA mitigates performance degradation when switching from Adam to Muon on pretrained models.
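A minimal sketch of the LoRA constraint the summary refers to: base weights are frozen and only a low-rank update is trained, so any optimizer's step (Adam or Muon alike) can move the effective weights only within a rank-r subspace. Class names and hyperparameters are illustrative.

```python
import torch

class LoRALinear(torch.nn.Module):
    # Frozen base weight plus a trainable low-rank update B @ A (illustrative).
    def __init__(self, base: torch.nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad_(False)  # pretrained weights stay fixed
        self.A = torch.nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.B = torch.nn.Parameter(torch.zeros(base.out_features, rank))
        self.scale = alpha / rank

    def forward(self, x):
        return self.base(x) + self.scale * (x @ self.A.T @ self.B.T)

layer = LoRALinear(torch.nn.Linear(768, 768))
# Only A and B receive gradients, so every update is rank-limited.
trainable = [p for p in layer.parameters() if p.requires_grad]
```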

Quantum-inspired tensor networks in machine learning models

cs.LG · 2026-04-15 · unverdicted · novelty 2.0

Tensor networks developed for quantum states are reviewed as tools for machine learning models, with assessment of their potential computational, explanatory, and privacy advantages alongside remaining challenges.
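A toy example of the quantum-inspired representation under review: a three-site matrix product state contracted back to a full tensor with einsum. Shapes and bond dimension are made up.

```python
import numpy as np

# Matrix product state (MPS) for 3 sites with physical dimension 2 and bond
# dimension 3: the 2**3 = 8 amplitudes are factored into small cores.
d, chi = 2, 3
A1 = np.random.randn(d, chi)        # (physical, right bond)
A2 = np.random.randn(chi, d, chi)   # (left bond, physical, right bond)
A3 = np.random.randn(chi, d)        # (left bond, physical)

# Contract the shared bond indices to recover the full order-3 tensor.
full = np.einsum("ia,ajb,bk->ijk", A1, A2, A3)
print(full.shape)  # (2, 2, 2)

# For n sites the cores hold O(n * d * chi**2) numbers while the full tensor
# has d**n entries, which is the source of the claimed computational advantages.
```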
