Neutrino Production via e^-e^+ Collision at Z-boson Peak
The production of the three standard neutrinos via $e^-e^+$ collision at the $Z$-boson peak (neutrino production in a Z-factory) is investigated thoroughly. The differences between $\nu_e$-pair production and $\nu_\mu$- and $\nu_\tau$-pair production are presented in various aspects: the total cross sections, the relevant differential cross sections, the forward-backward asymmetry, etc., given in figures as well as numerical tables. The constraint that refined measurements of the invisible width of the $Z$-boson place on the mixing of the three species of light neutrinos with possible extra states (heavy neutral leptons and/or sterile neutrinos) is discussed.
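The two observables named in the abstract can be summarized by standard relations (a sketch, not taken from the paper itself; the numerical value quoted below is the well-known LEP combination, cited here only for context):

```latex
% Forward-backward asymmetry: excess of events in the forward
% hemisphere (relative to the incoming e^- direction) over the
% backward hemisphere, normalized to the total rate.
A_{FB} = \frac{\sigma_F - \sigma_B}{\sigma_F + \sigma_B}

% Counting light neutrino species from the Z invisible width:
% each standard neutrino contributes one partial width
% \Gamma_{\nu\bar\nu}, so
N_\nu = \frac{\Gamma_{\mathrm{inv}}}{\Gamma_{\nu\bar\nu}}
```

A measured $N_\nu$ below 3 (the LEP combination gives $N_\nu \approx 2.984 \pm 0.008$) is what limits the room for light neutrinos to mix with heavy neutral leptons or sterile states, since such mixing reduces the invisible width.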
Forward citations
Cited by 12 Pith papers
- Accelerating Intra-Node GPU-to-GPU Communication Through Multi-Path Transfers with CUDA Graphs
  Embedding CUDA Graphs in UCX for multi-path intra-node GPU communication yields up to 2.95x bandwidth improvement over single-path UCX on a four-GPU node for large messages.
- JetSCI: A Hybrid JAX-PETSc Framework for Scalable Differentiable Simulation
  JetSCI is a hybrid JAX-PETSc framework that delivers scalable differentiable finite element simulations and outperforms pure JAX implementations on heterogeneous micromechanics problems.
- A Fully GPU-Accelerated Framework for High-Performance Configuration Interaction Selection with Neural Network Quantum States
  QiankunNet-cuSCI achieves up to 2.32x end-to-end speedup on 64 A100 GPUs for NNQS-SCI while preserving chemical accuracy by fully accelerating global de-duplication and coupled-configuration generation on the device.
- Nautilus: An Auto-Scheduling Tensor Compiler for Efficient Tiled GPU Kernels
  Nautilus auto-compiles math-like tensor descriptions into optimized GPU kernels, delivering up to 42% higher throughput than prior compilers on transformer models across NVIDIA GPUs.
- NestPipe: Large-Scale Recommendation Training on 1,500+ Accelerators via Nested Pipelining
  NestPipe achieves up to 3.06x speedup and 94.07% scaling efficiency on 1,536 workers via dual-buffer inter-batch and frozen-window intra-batch pipelining that overlaps communication with computation.
- GTaP: A GPU-Resident Fork-Join Task-Parallel Runtime with a Pragma-Based Interface
  GTaP delivers a GPU-resident fork-join task-parallel runtime with pragma support and EPAQ that outperforms CPU OpenMP on several irregular applications.
- A Switch-Centric In-Network Architecture for Accelerating LLM Inference in Shared-Memory Network
  SCIN uses an in-switch accelerator for direct memory access and 8-bit in-network quantization during All-Reduce, delivering up to 8.7x faster small-message reduction and 1.74x TTFT speedup on LLaMA-2 models.
- EnergyLens: Predictive Energy-Aware Exploration for Multi-GPU LLM Inference Optimization
  EnergyLens predicts multi-GPU LLM inference energy consumption with 9-13% MAPE and identifies configurations with up to 52x energy efficiency differences.
- KV-RM: Regularizing KV-Cache Movement for Static-Graph LLM Serving
  KV-RM regularizes KV-cache movement in static-graph LLM serving via block paging and merge-staged transport to improve throughput, tail latency, and memory use for variable-length decoding.
- Where did we fail? -- Reproducing build failures in embedded open source software
  PhantomRun standardizes CI build log retrieval and reproduction for embedded systems, reconstructing 91.8% of 4628 failing runs while preserving outcomes in 98% of cases.
- Preserving Clusters in Error-Bounded Lossy Compression of Particle Data
  A clustering-aware correction algorithm using spatial partitioning and projected gradient descent preserves single-linkage clusters in lossy-compressed particle data while keeping competitive compression ratios.
- Sustaining Exascale Performance: Lessons from HPL and HPL-MxP on Aurora
  Aurora reached 1.01 EF/s FP64 HPL and 11.64 EF/s HPL-MxP through locality-aware mapping, CPU-GPU pipelining, mixed-precision orchestration, and hybrid resilience on a large Intel GPU-based system.