pith. machine review for the scientific record.

quant-ph

Quantum Physics

quant-ph 2026-05-13 2 theorems

Library supplies scalable patterns for measurement-based quantum simulation

Scalable Measurement-Based Quantum Simulation Patterns for Benchmarking

QPatLib provides pattern-generation workflows and commuting-subset conventions for Pauli-string unitaries, to test pattern optimization on near-term devices.

Figure from the paper.
Abstract:
Measurement-based quantum computing uses measurement patterns on predefined quantum resource states to execute quantum logic. Quantum simulation offers an important use case on near-term devices. However, pattern optimization depends on the multivariable interplay between hardware and software constraints and is therefore use-dependent and highly non-trivial. Optimization of large-scale patterns under realistic assumptions remains a barrier. We announce the release of the quantum measurement pattern library QPatLib, a dataset that, in v1.0, presents patterns for use in measurement-based quantum simulation. We present the workflow for generating patterns that execute Pauli-string unitaries needed for many quantum algorithms. We provide benchmark patterns for measurement-based quantum unitary evolution. The measurement patterns are defined with different conventions for commuting Pauli-string subsets to allow scaling of pattern size and complexity. The purpose of the library is to (i) serve as a standardized testbed for pattern-optimization protocols for measurement-based quantum simulation routines, (ii) offer a suite of patterns for direct use on hardware, (iii) provide data to empirically justify pattern design principles, and (iv) provide a flexible resource for future storage and use of measurement-based patterns beyond quantum simulation.
quant-ph 2026-05-13 Recognition

Coherent spin control shown for silicon G centers via ODMR

Optical detection of the electron spin resonances of G centers in silicon

The metastable triplet state allows optical readout of spin resonances, with optimal conditions and coherence times now measured.

Figure from the paper.
Abstract:
Color centers in silicon are emerging as promising platforms for quantum technologies. Among them, the G center has attracted considerable interest owing to its bright telecom O-band single-photon emission and its optically addressable metastable electron-spin triplet state. Here we investigate the spin properties of ensembles of G centers under above-band-gap excitation. We elucidate the spin photo-dynamics giving rise to the optically detected magnetic resonance (ODMR) response of G centers. The optimal pulse sequence for measuring the ODMR spectrum of the G defects is identified, along with the temperature and optical-power regimes maximizing the spin readout contrast. Through magneto-optical measurements, we detect a level-anticrossing of the G center electron spin states. Finally, we demonstrate coherent spin control of the defects and characterize their spin-coherence properties. Unveiling the spin degree of freedom of the G center opens new avenues for the realization of quantum memories and quantum registers based on silicon color centers.
quant-ph 2026-05-13 2 theorems

Bivariate QSP gives optimal simulation of non-Hermitian Hamiltonians

Simulation of Non-Hermitian Hamiltonians with Bivariate Quantum Signal Processing

Additive query costs from real and imaginary parts match the information-theoretic lower bound in the separate-oracle model.

Figure from the paper.
Abstract:
We achieve query-optimal quantum simulations of non-Hermitian Hamiltonians $H_{\mathrm{eff}} = H_R + iH_I$, where $H_R$ is Hermitian and $H_I \succeq 0$, using a bivariate extension of quantum signal processing (QSP) with non-commuting signal operators. The algorithm encodes the interaction-picture Dyson series as a polynomial on the bitorus, implemented through a structured multivariable QSP (M-QSP) circuit. A constant-ratio condition guarantees scalar angle-finding for M-QSP circuits with arbitrary non-commuting signal operators. A degree-preserving sum-of-squares spectral factorization permits scalar complementary polynomials in two variables. Angles are deterministically calculated in a classical precomputation step, running in $\mathcal{O}(d_R \cdot d_I)$ classical operations. Operator norms $\alpha_R, \beta_I$ contribute additively with query complexity $\mathcal{O}((\alpha_R + \beta_I)T + \log(1/\varepsilon)/\log\log(1/\varepsilon))$ matching an information-theoretic lower bound in the separate-oracle model, where $H_R$ and $H_I$ are accessed through independent block encodings. The postselection success probability is $e^{-2\beta_I T}\|e^{-iH_{\mathrm{eff}}T}|\psi_0\rangle\|^2\cdot (1 - \mathcal{O}(\varepsilon))$, decomposing into a state-dependent factor $\|e^{-iH_{\mathrm{eff}}T}|\psi_0\rangle\|^2$ from the intrinsic barrier and an $e^{-2\beta_I T}$ overhead from polynomial block-encoding.
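The quoted success probability can be sanity-checked numerically (a sketch, not the paper's algorithm: the random instance, the dimension, and the Taylor-series matrix exponential are illustrative assumptions). Since $\mathrm{d}/\mathrm{d}t\,\|\psi\|^2 = 2\langle\psi|H_I|\psi\rangle \le 2\beta_I\|\psi\|^2$ when $\beta_I$ bounds the spectrum of $H_I$, the prefactor $e^{-2\beta_I T}$ always renormalizes the non-unitary growth into a valid probability:

```python
# Numerical sanity check of p = exp(-2*beta_I*T) * ||exp(-i*H_eff*T)|psi0>||^2
# for H_eff = H_R + i*H_I with H_R Hermitian and H_I >= 0 (illustrative instance).
import numpy as np

def expm_taylor(M, terms=40):
    """Plain Taylor-series matrix exponential (adequate for small ||M||)."""
    out = np.eye(M.shape[0], dtype=complex)
    term = np.eye(M.shape[0], dtype=complex)
    for k in range(1, terms):
        term = term @ M / k
        out = out + term
    return out

rng = np.random.default_rng(7)
n = 4
A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
H_R = (A + A.conj().T) / 2                  # Hermitian part
B = rng.standard_normal((n, n))
H_I = B @ B.T / n                           # real symmetric, positive semidefinite
beta_I = float(np.linalg.norm(H_I, 2))      # spectral norm bounds the spectrum of H_I

T = 1.0
psi0 = np.zeros(n, dtype=complex)
psi0[0] = 1.0
U = expm_taylor(-1j * T * (H_R + 1j * H_I)) # non-unitary propagator exp(-i*H_eff*T)
p = np.exp(-2 * beta_I * T) * np.linalg.norm(U @ psi0) ** 2
print(f"postselection probability p = {p:.4f} (must be <= 1)")
```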
quant-ph 2026-05-13 2 theorems

Reservoirs stabilize entangled qubits to 90.8% fidelity

Entangling Superconducting Qubits via Energy-Selective Local Reservoirs

Parametric driving of readout resonators supplies energy-selective incoherent pumps and losses that autonomously prepare and maintain single-excitation entangled states.

Figure from the paper.
Abstract:
Engineered dissipation provides a powerful route to controlling and stabilizing quantum states in open systems. Superconducting circuits are particularly suited to this approach due to their tunable coupling to dissipative environments. Here we realize programmable local reservoirs for superconducting qubits through parametrically driven coupling to readout resonators, creating energy-selective incoherent pump and loss. Using coupled superconducting qubits, we autonomously stabilize entangled single-excitation states with fidelity up to 90.8%. We probe the stabilization dynamics under varying initial conditions and bath parameters, and implement robust classical shadow estimation for accurate and scalable state characterization. Finally, we numerically study a configuration where the engineered pump and loss share a common dissipative mode, leading to reservoir-mediated interference and classically correlated steady states. Our results demonstrate a scalable and hardware-efficient framework for dissipative preparation and control of correlated many-body states in superconducting circuits.
quant-ph 2026-05-13 Recognition

Continuous excitation permits sub-shot-noise single-photon emission

Sub-shot-noise emission statistics of a CW-excited single photon source

Model shows photon variance drops below Poisson level when excitation rate approaches spontaneous decay rate.

Figure from the paper.
Abstract:
Shot noise sets a fundamental limit on the sensitivity of classical optical measurements, with coherent emitters achieving the lowest possible shot-noise level. Emission from sub-Poissonian light provides a pathway to surpass this limit, and single-photon sources provide a natural platform for generating such light. However, it is commonly assumed that continuously excited single-photon sources exhibit Poissonian statistics. In this work, a theoretical model of a continuously driven two-level single-photon source is developed, treating both excitation and radiative decay as stochastic processes. The analysis demonstrates that photon emission can display sub-Poissonian statistics when excitation and decay rates are comparable, showing that continuous excitation does not inherently preclude nonclassical emission. The model is further extended to include finite detection efficiency and detector dead time, illustrating how these practical non-idealities can affect the experimental observation of sub-Poissonian statistics.
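The claim that sub-Poissonian statistics appear when excitation and decay rates are comparable is consistent with a textbook renewal-process estimate (a sketch under the assumption of one exponential excitation step at rate R and one exponential decay step at rate gamma per emission cycle; this is not the paper's full model). Asymptotically, the Fano factor of the count statistics equals the squared coefficient of variation of the cycle time, (R^2 + gamma^2)/(R + gamma)^2, which is minimized at 1/2 when R = gamma and tends to 1 (Poissonian) when the rates are very different:

```python
# Long-time Fano factor of an idealized CW-driven two-level emitter, modeled
# as a renewal process: each emission cycle is an exponential excitation step
# (rate R) followed by an exponential decay step (rate gamma).
def fano(R, gamma):
    mean = 1.0 / R + 1.0 / gamma          # mean cycle duration
    var = 1.0 / R**2 + 1.0 / gamma**2     # variance: independent exponential steps
    return var / mean**2                  # asymptotic Var(counts)/Mean(counts)

print(fano(1.0, 1.0))    # rates equal: maximally sub-Poissonian, 0.5
print(fano(0.01, 1.0))   # weak driving: approaches Poissonian, ~1
```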
quant-ph 2026-05-13 Recognition

Distance-four code matches surface-code protection with one-tenth the qubits

Lower overhead fault-tolerant building blocks for noisy quantum computers

Combinatorial flag methods and a compact six-qubit block cut both qubit count and gate time in fault-tolerant quantum circuits.

Figure from the paper.
Abstract:
Quantum computation holds the promise of solving certain complex problems exponentially faster than classical computers. However, the high level of noise in current quantum devices impedes the accurate execution of even basic algorithms. This can be remedied by protecting quantum information with a quantum error-correcting code, where the logical information of an algorithmic qubit is spread across multiple physical qubits. Individual quantum errors are then located and corrected by the fault-tolerant measurement of multi-qubit stabilizer operators (parity checks). Unfortunately, error correction and fault tolerance both impose large demands on the qubit overhead: hundreds to thousands of physical qubits per logical qubit. We reduce the spacetime cost of fault tolerance by redesigning key building blocks of an error-corrected quantum computer. First, we develop combinatorial proofs for flag fault tolerance that exponentially reduce the extra qubits needed to measure a stabilizer of any size, while tolerating one fault. We leverage these proofs to design state preparation circuits for the Steane and Golay codes with 100% yield. Next, we improve error correction on a planar layout by showing that a distance-four code encoding six logical qubits protects information as well as the distance-five surface code, using one-tenth as many physical qubits. Finally, we optimize the time overhead of logical gates in surface code quantum computers by protecting measurement results with a classical code, cutting computation time by a factor of two to six. Our hardware-agnostic optimizations of fault-tolerance overheads thus suggest new routes to advance the timeline of error-free quantum computing.
quant-ph 2026-05-13 Recognition

Dynamic QAP model for qubit routing cuts CNOT counts 12-30 percent

QAP-Router: Tackling Qubit Routing as Dynamic Quadratic Assignment with Reinforcement Learning

Flow-distance objective plus transformer policy with lookahead outperforms industry compilers on 1831 real circuits.

Figure from the paper.
Abstract:
Qubit routing is a fundamental problem in quantum compilation, known to be NP-hard. Its dynamic nature makes local routing decisions propagate and compound over time, making globally efficient solutions challenging. Existing heuristic methods rely on local rules with limited lookahead, while recent learning-based approaches often treat routing as a generic sequential decision problem without fully exploiting its underlying structure. In this paper, we introduce QAP-Router, which frames qubit routing as a dynamic Quadratic Assignment Problem (QAP). By modeling logical interactions, or quantum gates, as flow matrices and hardware topology as a distance matrix, our approach captures the interaction-distance coupling in a unified objective, which defines the reward in the reinforcement learning environment. To further exploit this structure, the policy network employs a solution-aware Transformer backbone that encodes the interaction between the flow matrix and the distance matrix into the attention mechanism. We also integrate a lookahead mechanism that blends naturally into the QAP framework, preventing myopic decisions. Extensive experiments on 1,831 real-world quantum circuits from the MQTBench, AgentQ and QUEKO datasets show that our method substantially reduces the CNOT gate count of routed circuits by 15.7%, 30.4% and 12.1%, respectively, relative to existing industry compilers.
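The flow-distance objective at the core of a QAP formulation can be sketched in a few lines (the toy matrices and the permutation encoding below are illustrative assumptions, not the paper's implementation): the cost couples each pair's interaction strength with the physical distance their placement induces, so placements that keep heavily interacting qubits adjacent score lower.

```python
# Illustrative QAP objective: gate interactions as a flow matrix F, hardware
# topology as a distance matrix D; placement perm maps logical qubit i to
# physical qubit perm[i].
import numpy as np

def qap_cost(F, D, perm):
    """sum_{i,j} F[i,j] * D[perm[i], perm[j]]"""
    P = np.asarray(perm)
    return float(np.sum(F * D[np.ix_(P, P)]))

# Toy instance: 3 logical qubits on a 3-node line (shortest-path distances).
F = np.array([[0, 5, 1],
              [5, 0, 0],
              [1, 0, 0]])          # logical interaction (gate) counts
D = np.array([[0, 1, 2],
              [1, 0, 1],
              [2, 1, 0]])          # distances on a 0-1-2 line
print(qap_cost(F, D, [0, 1, 2]))   # heavy pair (0,1) placed adjacent
print(qap_cost(F, D, [0, 2, 1]))   # heavy pair (0,1) placed at distance 2
```

A router minimizes this cost as the circuit's flow matrix evolves, which is what makes the problem a *dynamic* QAP.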
quant-ph 2026-05-13 Recognition

Entropy nondecrease prevents second-law violations in generalized theories

Information Thermodynamics in Generalized Probabilistic Theories

Unified framework shows consistent measurements block contradictory work extraction, while entropy decrease allows explicit counterexamples.

Figure from the paper.
Abstract:
Generalized Probabilistic Theories (GPTs) provide a unified framework for describing probabilistic physical theories, encompassing classical and quantum theories as well as hypothetical theories beyond quantum mechanics. Since most GPTs are highly unrealistic and far removed from known physical theories, it is important to constrain them by physically reasonable principles. One of the most important such principles is consistency with thermodynamics, which has been extensively studied through toy models involving semipermeable membranes (SPMs) implementing measurements. On the other hand, information thermodynamics, which plays a central role in understanding the relationship between measurement and thermodynamics in classical and quantum theory, has remained largely undeveloped in GPTs. In this work, we construct information thermodynamics in GPTs and provide a unified framework for analyzing the relationship between measurement, feedback, information erasure, and the second law of thermodynamics. We also formulate a general framework for SPM models and analyze the thermodynamic cost of measurements implemented by SPMs. As a result, we show that no work can be extracted in contradiction with the second law as long as the measurement processes are consistent with entropy nondecrease, and derive sufficient conditions for this property for several entropy definitions proposed in GPTs. Moreover, by considering measurement processes violating these conditions, we construct explicit GPT systems realizing isothermal SPM cycles from which positive work can be extracted. These results demonstrate that violations of the second law can arise from the lack of fundamental entropy properties or discrepancies between entropy definitions, and provide a unified and model-independent foundation for understanding the relationship between thermodynamics and measurement in GPTs.
quant-ph 2026-05-13 2 theorems

Cryogenic hardware advances to closed-cycle systems for quantum photonics

Cryogenic Systems for Quantum Photonic Technologies: A Practical Review

Review explains principles of modern cooling for solid-state devices used in quantum communication and computing.

Abstract:
While nonclassical light sources are fundamental to quantum communication and computing, solid-state platforms like color centers and quantum dots require cryogenic temperatures to reach the performance levels necessary for practical applications. Over the past decade, low-temperature engineering has transitioned from manual handling of liquid cryogens to automated closed-cycle cryostats. This review details the principles behind modern cooling hardware ranging from flow cryostats to mechanical cryocoolers and dilution refrigerators, with a specific focus on the requirements of optical quantum devices. Aimed at the practicing scientist, this overview provides the technical insights and historical context needed to navigate the current cryogenic landscape and evaluate its role in the future of quantum technology deployment.
quant-ph 2026-05-13 2 theorems

Gravitomagnetic potential adds rotational collapse terms

A post-Newtonian Gravitational Collapse Model from Linearized Gravity

Linearized gravity couples mass currents to quantum states and extends the Diósi-Penrose model beyond position.

Figure from the paper.
Abstract:
We introduce a general gravity-related collapse mechanism based on linearized gravity. Starting from the weak-field limit of general relativity, gravitoelectromagnetism suggests an effective coupling between the gravitoelectric potential and the mass density distribution. At the same time, it provides a similar relation for the gravitomagnetic vector potential and the mass current. Following a hybrid (classical-quantum) dynamics approach, these couplings lead to a master equation whose non-unitary part is determined by the underlying mass distribution and currents. When the gravitoelectric potential coupling is considered, the well-known Di\'osi-Penrose collapse model acting on positional degrees of freedom is recovered. However, upon including the gravitomagnetic vector potential, additional collapse mechanisms emerge for rotational degrees of freedom as well as for mixed mass-rotation contributions.
quant-ph 2026-05-13 2 theorems

Optimal driving halves variance for impulse estimation in quantum systems

Optimal State Preparation for Impulse Estimation in Gaussian Quantum Systems

Targeted parametric modulation around a known disturbance time beats steady-state operation in linear monitored systems.

Figure from the paper.
Abstract:
We present an optimal control-based strategy to enhance the estimation of impulse-like disturbances in continuously monitored linear classical and quantum systems by exploiting non-equilibrium states. Using optimal estimation techniques for linear Gaussian systems to collect information from the temporal vicinity of the disturbance, we cast the minimization of disturbance estimation uncertainty as a nonlinear optimal control problem over time-dependent system parameters. The resulting method dynamically shapes the estimation covariances through parametric modulation, maximizing information gain at a known impulse time. This differs fundamentally from conventional squeezing protocols using periodic modulation that effectively degrade inference of impulse-like disturbances. Applied to nanomechanical resonators and levitated nanoparticles, optimal parametric driving reduces estimation variance by up to a factor of two relative to steady-state operation.
quant-ph 2026-05-13 Recognition

Post-selection cuts PEC overhead by 1000x for 200-qubit logical GHZ

Zeno-Enhanced Probabilistic Error Cancellation with Quantum Error Detection Codes

Error detection maps physical noise to a weaker logical channel whose perturbative inverse requires far fewer samples to cancel.

Figure from the paper.
Abstract:
Probabilistic error cancellation (PEC) is unbiased but suffers exponential sampling overhead set by noise-weighted circuit volume, whereas quantum error-detecting codes (QEDCs) remove many physical faults by stabilizer post-selection but leave an undetectable logical residue. We exploit this complementarity by using post-selection to map physical noise to a weaker accepted logical channel, and then applying PEC only to the residual channel. The resulting feedback-free QED+PEC scheme interleaves Clifford logical blocks, stabilizer measurements, post-selection, and probabilistic cancellation on accepted trajectories, without real-time decoding or active recovery. A key complication is that post-selection correlates accepted fault branches through stabilizer-commutation constraints, so the sparse Pauli-Lindblad factorization underlying bare PEC no longer applies directly. We therefore construct the inverse channel perturbatively: for fixed order $K$, only accepted fault branches up to order $K$ are retained, reducing preprocessing from $2^m$ branches to $O(m^K)$ per block. The order-$K$ protocol cancels the normalized post-selected channel through degree $K$, leaving a per-block error $O(W^{K+1})$ that accumulates at most linearly. For logical GHZ-state preparation with the $[[n,n-2,2]]$ Iceberg code under circuit-level depolarizing noise and ideal stabilizer measurements, first-order QED+PEC reaches $n=200$ physical qubits and lowers sampling overhead by three to four orders of magnitude relative to standard PEC while maintaining $F\simeq0.956$. Syndrome-noise tests show that readout-only flips mainly increase post-selection cost, whereas noisy GHZ-assisted global stabilizer extraction can remove the advantage. This identifies a discrete-Zeno trade-off: cheap detection reshapes the effective channel PEC must invert, rather than simply adding overhead.
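Why mapping physical noise to a weaker accepted channel slashes the PEC sampling cost can be seen in the textbook single-qubit case (a depolarizing-channel sketch, not the paper's Pauli-Lindblad model; the rates below are illustrative assumptions): the inverse of a depolarizing channel of rate p has quasi-probability 1-norm gamma(p) = 1 + 2p + O(p^2), and the total overhead multiplies as gamma^(2N) over N noisy sites, so any reduction of the residual rate compounds exponentially with circuit volume.

```python
# Textbook single-qubit PEC overhead (not the paper's model): inverting a
# depolarizing channel rho -> (1-p) rho + (p/3)(X rho X + Y rho Y + Z rho Z).
def gamma(p):
    lam = 1.0 - 4.0 * p / 3.0       # Pauli-transfer eigenvalue of the channel
    return (3.0 / lam - 1.0) / 2.0  # 1-norm of the inverse's quasi-probabilities

def overhead(p, n_sites):
    return gamma(p) ** (2 * n_sites)  # sampling overhead over n noisy sites

p_phys, p_resid = 1e-3, 1e-4        # illustrative: detection leaves a weaker residue
N = 200
print(f"overhead at physical rate: {overhead(p_phys, N):.3f}")
print(f"overhead at residual rate: {overhead(p_resid, N):.3f}")
```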
quant-ph 2026-05-13 2 theorems

17 nm oxide thickness minimizes quantum dot variability

Understanding oxide-thickness-dependent variability in dense Si-MOS quantum dot arrays

Competing disorder sources produce a non-monotonic trend, setting a practical target for uniform silicon spin qubit arrays.

Figure from the paper.
Abstract:
Achieving uniform and scalable control of semiconductor spin qubits remains a key challenge for large-scale quantum computing. In this work, we investigate how gate oxide thickness influences uniformity in dense two-dimensional silicon quantum dot arrays. Using a 7 x 7 array fabricated in a 300 mm CMOS process patterned by EUV lithography, we statistically characterize 392 quantum dots across four different oxide thicknesses. The threshold voltages, capacitances, lever arms, and charging energies are extracted using parallel row-based measurements, and we identify an optimal SiO2 thickness of 17 nm that minimizes threshold voltage variability below a 63 mV standard deviation. Our observations illustrate how multiple sources of disorder can introduce competing oxide-thickness dependencies, resulting in non-monotonic trends. These results provide key design guidelines for dense, scalable silicon spin qubit architectures.
quant-ph 2026-05-13 Recognition

Partial Bell entanglement still yields unit-fidelity teleportation

Quantum teleportation with coherent error in Bell-state measurement

Exact relation among measurement entanglement, channel entanglement, and success probability recovers perfect fidelity.

Figure from the paper.
Abstract:
Quantum teleportation is a fundamental protocol in quantum information science, whose performance is conventionally evaluated under the assumption of ideal Bell-state measurements. In realistic implementations, however, joint measurements are often imperfect and can deviate from maximally entangled bases due to coherent errors in entangling operations. In this work, we analytically show how the entanglement of joint measurements determines teleportation performance and propose a strategy to overcome the limitations imposed by partially entangled joint measurements to recover the unit teleportation fidelity. We then derive an exact equation revealing a quantitative relation between measurement entanglement, channel entanglement, and the success probability to realize the unit-fidelity teleportation. We illustrate our results using elegant joint measurements and realistic coherent error models arising from imperfect entangling operations in quantum systems. Our work provides fundamental insight into the role of measurement entanglement in quantum teleportation and establishes a practical framework for achieving faithful teleportation without requiring substantial modifications to existing hardware.
quant-ph 2026-05-13 2 theorems

Neurons can violate classical time-correlation limits

Leggett-Garg Tests in Neural Dynamics: Probing Non-Diffusive Stochastic Structure in Single Neurons

Leggett-Garg tests on single cells distinguish persistent stochastic models from simple diffusion.

Abstract:
We propose an experimental programme to test Leggett-Garg-type temporal correlations in single-neuron dynamics. The goal is to distinguish between diffusive (Wiener/cable-equation) models and non-diffusive persistent stochastic models based on Kac-type finite-velocity processes leading to the Telegrapher's equation. We show that while purely diffusive dynamics satisfies Leggett-Garg inequalities, persistent stochastic dynamics can produce oscillatory temporal correlations capable of violating these inequalities. The Leggett-Garg inequality may be viewed as a temporal analogue of Bell-type constraints. In the present context, however, violation is interpreted conservatively not as evidence of microscopic quantum coherence, but as evidence against a simple trajectory-based diffusive description. The resulting temporal correlations indicate persistence, memory, and contextual temporal structure mathematically analogous to that encountered in quantum systems. Using the analytic continuation connecting Kac processes to Dirac-like envelope equations, we argue that finite-velocity persistent stochastic transport provides a natural mechanism for such non-diffusive temporal correlations. These tests therefore offer a possible experimental probe of contextual and non-Markovian structure in neural dynamics without requiring claims of microscopic quantum coherence in the brain.
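The diffusive-vs-persistent dichotomy can be checked directly on assumed correlation functions (the exponential and damped-cosine forms below are illustrative stand-ins, not the paper's neuron model). For a dichotomic signal measured at three equally spaced times, the Leggett-Garg parameter K3 = 2C(t) - C(2t) satisfies K3 <= 1 for any exponentially decaying C(t), since 2x - x^2 = 1 - (1-x)^2 <= 1; a damped-oscillatory C(t) can exceed 1:

```python
# Three-time Leggett-Garg parameter for equally spaced measurements,
# evaluated on two assumed stationary correlation functions.
import math

def k3(C, t):
    return 2.0 * C(t) - C(2.0 * t)

markov = lambda t: math.exp(-t)                          # diffusive/Markovian decay
persistent = lambda t: math.exp(-0.1 * t) * math.cos(t)  # damped-oscillatory (Kac-like)

t = math.pi / 3  # spacing where the oscillatory case is near its maximum
print(f"Markovian  K3 = {k3(markov, t):.3f}  (<= 1)")
print(f"persistent K3 = {k3(persistent, t):.3f}  (> 1: LG violated)")
```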
quant-ph 2026-05-13 2 theorems

Unified methods link squeezing to adiabaticity breaking in driven oscillators

Squeezing and adiabaticity breaking in time-dependent quantum harmonic oscillators

Invariants, transformations, and a differential equation explain excitations under arbitrary frequency changes.

Figure from the paper.
Abstract:
The quantum harmonic oscillator with time-dependent frequency is a paradigmatic model of driven quantum dynamics and one of the few nontrivial systems that admits an exact analytical solution. In this review paper, we present a unified treatment of the time-dependent oscillator based on the Lewis-Riesenfeld invariant method, Bogoliubov transformations and the Ermakov-Pinney equation. We show how these approaches naturally connect to squeezing for the description of excitation production, and to the breakdown of adiabaticity under generic frequency protocols. Exact results for sudden quenches and smooth ramps are discussed in detail. By explicitly bridging invariant methods and squeezing formalism, this review is meant to provide a comprehensive framework for understanding nonequilibrium dynamics in quadratic potentials, with applications ranging from thermodynamics and condensed matter to quantum control theory.
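As a worked instance of the exact sudden-quench results the review covers, the standard Bogoliubov-coefficient formula for an oscillator prepared in the ground state of frequency w0 whose frequency jumps instantaneously to w1 gives a mean excitation number <n> = (w0 - w1)^2 / (4 w0 w1) (a sketch of the textbook result; variable names are ours):

```python
# Mean number of excitations after a sudden frequency quench w0 -> w1,
# starting from the ground state of the w0 oscillator: |beta|^2 of the
# Bogoliubov transformation relating the two mode bases.
def mean_excitations(w0, w1):
    return (w0 - w1) ** 2 / (4.0 * w0 * w1)

print(mean_excitations(1.0, 1.0))  # no quench: no excitations
print(mean_excitations(1.0, 2.0))  # frequency doubled
print(mean_excitations(2.0, 1.0))  # symmetric under w0 <-> w1
```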
quant-ph 2026-05-13 3 theorems

AL-QHD benchmarks reach tens of millions of gates on power problems

Benchmarking and Resource Analysis for Augmented-Lagrangian Quantum Hamiltonian Descent

Resource analysis on ACOPF instances shows steep scaling, indicating fault-tolerant hardware is needed for practical constrained quantum optimization.

Figure from the paper.
Abstract:
Quantum Hamiltonian Descent (QHD) is a continuous optimization algorithm based on simulating a time-dependent quantum Hamiltonian whose potential energy encodes the objective function and whose kinetic energy promotes exploration through quantum interference and tunneling. While QHD is formulated for unconstrained optimization, many real-world optimization problems are constrained and highly nonconvex. In this paper, we benchmark AL-QHD, a hybrid framework that embeds QHD within the Augmented Lagrangian Method (ALM), thereby solving a sequence of unconstrained subproblems while using ALM to enforce constraints. We evaluate AL-QHD on standard nonconvex test functions and use iterative refinement to improve solution accuracy at fixed per-run qubit cost. We also perform a gate-based resource analysis on ACOPF-derived power system subproblems constructed from power-network data to estimate the quantum-computer scale required for practical applications. Resource estimates on Texas7k-derived ACOPF instances show steep hard-gate scaling, reaching $\sim 4.46 \times 10^7$ entangling gates in a NISQ-oriented model and $\sim 9.42 \times 10^8$ T gates in a fault-tolerant model at $\sim 5.3 \times 10^2$ active variables. These results suggest that AL-QHD is a useful framework for studying constrained nonconvex optimization with QHD, but that practical ACOPF-scale applications would likely require large-scale fault-tolerant quantum hardware.
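The ALM outer loop that AL-QHD builds on can be sketched classically, with plain gradient descent standing in for the QHD subproblem solver (all parameters and the toy problem below are illustrative assumptions, not the paper's setup): each outer iteration minimizes the augmented Lagrangian L(x) = f(x) + lam*c(x) + (rho/2)*c(x)^2 as an unconstrained subproblem, then updates the multiplier lam toward feasibility.

```python
# Classical sketch of the Augmented Lagrangian Method with a gradient-descent
# inner solver playing the role of the QHD subproblem solver.
def alm(f_grad, c, c_grad, x, lam=0.0, rho=10.0, outer=20, inner=500, lr=1e-3):
    for _ in range(outer):
        for _ in range(inner):  # unconstrained subproblem (QHD's role in AL-QHD)
            g = f_grad(x) + (lam + rho * c(x)) * c_grad(x)
            x -= lr * g
        lam += rho * c(x)       # multiplier update toward feasibility
    return x, lam

# Toy problem: minimize (x - 2)^2 subject to x - 1 = 0  ->  x* = 1, lam* = 2.
x_star, lam_star = alm(
    f_grad=lambda x: 2.0 * (x - 2.0),
    c=lambda x: x - 1.0,
    c_grad=lambda x: 1.0,
    x=0.0,
)
print(f"x* = {x_star:.4f}, lambda* = {lam_star:.4f}")
```

The multiplier converging to 2 matches the KKT condition f'(x*) + lam*c'(x*) = 0 at x* = 1.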
quant-ph 2026-05-13 2 theorems

Data scale beats architecture for neural QEC decoders

Rethink the Role of Neural Decoders in Quantum Error Correction

Surface-code tests to distance 9 show training data improves accuracy more than complex designs, with INT4 quantization required for microsecond-scale latency.

Figure from the paper.
Abstract:
Quantum error correction (QEC) is essential for enabling quantum advantages, with decoding as a central algorithmic primitive. Owing to its importance and intrinsic difficulty, substantial effort has been devoted to QEC decoder design, among which neural decoders have recently emerged as a promising data-driven paradigm. Despite this progress, practical deployment remains hindered by a fundamental accuracy-latency tradeoff, often on the microsecond timescale. To address this challenge, here we revisit neural decoders for surface-code decoding under explicit accuracy-latency constraints, considering code distances up to d=9 (161 physical qubits). We unify and redesign representative neural decoders into five architectural paradigms and develop an end-to-end compression pipeline to evaluate their deployability and performance on FPGA hardware. Through systematic experiments, we reveal several previously underexplored insights: (i) near-term decoding performance is driven more by data scale than architectural complexity; (ii) appropriate inductive bias is essential for achieving high decoding accuracy; and (iii) INT4 quantization is a prerequisite for meeting microsecond-scale latency requirements on FPGAs. Together, these findings provide concrete guidance toward scalable and real-time neural QEC decoding.
quant-ph 2026-05-13 2 theorems

CHSH value sets extractable work from side info in Szilard engine

Thermodynamic value of CHSH-induced side-information channels in a Szilard engine

Reversible work equals k_B T ln 2 [1 - h_2(1/2 + S(P)/8)] and strictly orders classical, quantum and nonsignaling cases, while full cycles yield non-positive net work.

Figure from the paper.
Abstract:
We study the thermodynamic value of side-information channels induced by Bell-type correlations through a CHSH prediction task embedded into a Szilard-type feedback engine. A thermal two-level system supplies a uniformly random physical microstate $X$, and a trusted referee encoding together with a nonsignalling correlation resource induces a controller bit $G$ that acts as side information about $X$. We show that the maximal average feedback work satisfies $\langle W_{\max}\rangle \le k_B T \ln 2 \, I(X:G)$, with equality achievable in the ideal quasistatic limit. For the CHSH embedding considered here, the induced channel $X \to G$ is binary symmetric with success probability $p_{\rm win}=1/2+S(P)/8$, where $S(P)$ is the CHSH value. The corresponding reversible feedback work is $k_B T \ln 2 \,[1-h_2(p_{\rm win})]$, yielding a strict ordering of the optimal classical, quantum, and nonsignalling cases. The result should be interpreted as a thermodynamic valuation of CHSH-induced side information available to the controller, not as evidence that Bell nonlocality itself is a source of free energy. The analysis assumes that the controller receives only the compressed bit $G$ and does not include the thermodynamic cost of implementing the referee, the correlation resource, or the auxiliary preprocessing. A full-cycle analysis including controller-memory reset gives non-positive net work, consistent with the second law.
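The abstract's formulas can be evaluated directly (an illustrative sketch in units where k_B T = 1): with p_win = 1/2 + S/8 and W = ln(2) [1 - h_2(p_win)], the optimal classical (S = 2), quantum/Tsirelson (S = 2 sqrt(2)), and nonsignalling PR-box (S = 4) values give the strict ordering claimed.

```python
# Reversible feedback work as a function of the CHSH value S,
# in units of k_B * T, following the abstract's formulas.
import math

def h2(p):
    """Binary entropy in bits."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def work(S):
    p_win = 0.5 + S / 8.0
    return math.log(2.0) * (1.0 - h2(p_win))

for label, S in [("classical", 2.0), ("Tsirelson", 2 * math.sqrt(2)), ("PR box", 4.0)]:
    print(f"{label:9s} S = {S:.3f}  W/kT = {work(S):.4f}")
```

At S = 4 the channel is noiseless (p_win = 1), so the work saturates at the full k_B T ln 2 of a Szilard engine.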
quant-ph 2026-05-13 2 theorems

Giant atoms achieve passive qubit state transfer above 99 percent

Enabling Deterministic Passive Quantum State Transfer with Giant Atoms

Multiple coupling points create time-symmetric wavepackets so states move deterministically without active control.

Figure from the paper.
Abstract:
Achieving quantum state transfer in passive ways can become a powerful asset for scalable quantum networks. Here, we demonstrate how giant atoms coupled to 1D waveguides provide a platform for such a passive, deterministic transfer. Engineering the position and strength of coupling points, we show that the nonlocal interaction can be utilized for the emission of time-reversal-symmetric single-photon wavepackets by spontaneous decay. We first derive general analytical conditions under which arbitrary qubit decays can be mapped to wavevector-dependent couplings that guarantee perfect state transfer in the continuum limit of infinitely many coupling points. Then, for experimentally relevant configurations with a finite number of coupling points, we demonstrate that high transfer fidelities can still be achieved by optimization, reaching 87% with only two coupling points and exceeding 99% with ten or more. We further analyze the robustness of the protocol against disorder in leg positioning and extend the formalism to environments with nonlinear dispersion, showing that dispersion-induced distortions can be fully compensated by judiciously chosen setups. Our results establish giant atoms as a powerful platform for realizing high-fidelity quantum state transfer in a setting without time-dependent control, opening new avenues for scalable quantum networks and engineered light-matter interfaces.
0
0
quant-ph 2026-05-13 2 theorems

Tuned measurements prepare versatile qubit probes for sensing

Versatile probe state preparation via generalized measurements for quantum sensing and thermometry

Two generalized measurements on thermal states modulate quantum Fisher information for decay rate and temperature estimation.

Figure from the paper
Abstract
We investigate a probe state preparation protocol based on two non-selective generalized quantum measurements to enhance parameter estimation in single-qubit systems. By fine-tuning the measurement strengths, we demonstrate the ability to design a broad class of probe states, initially prepared in a thermal state, which can be optimized for specific estimation tasks. We apply this framework to characterize the decay rate and the temperature of a generalized amplitude damping channel. Our results show that the preparation protocol significantly modulates the quantum Fisher information for both parameters. Furthermore, we derive a general analytical relationship between the quantum Fisher information, thermodynamic susceptibilities, and Hamiltonian variance, valid even in the transient regime. This connection highlights the role of energy fluctuations and kinetic response in determining metrological precision. Finally, we briefly discuss a quantum circuit for experimental implementation using nuclear magnetic resonance techniques.
0
0
quant-ph 2026-05-13 Recognition

Joint Realizability Tradeoffs Bounded by Quantum Channel Incompatibility

Generalized robustness sets a lower limit on total error when approximating joint implementations of incompatible quantum channels.

Figure from the paper
Abstract
Incompatible quantum channels cannot be jointly and exactly realized, meaning that any approximate joint realization inevitably entails a tradeoff in implementation accuracy. While this notion of channel incompatibility unifies fundamental limitations such as measurement uncertainty, the no information without disturbance principle, and the no-cloning and no-broadcasting theorems, connecting these traditional relations directly to the resource-theoretic strength of incompatibility has remained elusive. In this Letter, we show that generalized robustness, a typical resource quantifier of channel incompatibility, lower bounds the total error of any approximate joint realization. Applying this result to measurement channels provides a unified, model-independent framework encompassing error-error and information-error-disturbance tradeoffs. Furthermore, our robustness-based evaluation of disturbance outperforms an algebraic bound for all POVMs in dimensions up to six.
1 0
0
quant-ph 2026-05-13 2 theorems

Some postselection preserves polynomial gradient scaling in photonic circuits

Pre-Asymptotic Trainability in Photonic Variational Circuits under Postselection

Simulations show allow-bunching and collision-free regimes avoid exponential decay up to tested sizes, unlike dual-rail.

Figure from the paper
Abstract
Barren plateaus in variational quantum circuits are commonly attributed to strong mixing dynamics that cause gradient variance to vanish exponentially with system size. Passive photonic circuits, central to linear optical quantum computing, challenge this picture: although their Hilbert space can be exponentially large, their dynamics are constrained to a Lie algebra whose dimension scales as the square of the number of modes. In photonic systems, postselection also plays a central role, with gradient concentration governed not by the Hilbert-space dimension but by how postselection reshapes the effective observable. Through exact statevector simulations, we compare allow-bunching evolution, collision-free filtering, and dual-rail postselection. In the allow-bunching and collision-free regimes, gradient variance remains consistent with polynomial rather than exponential decay over the tested system sizes. By contrast, dual-rail postselection induces exponential concentration beyond moderate system sizes, robustly across three initialization ensembles. These results indicate that photonic barren plateaus are governed by the interplay between passive linear-optical dynamics, postselection geometry, and task observables, offering practical guidance for designing near-term photonic variational architectures.
0
0
quant-ph 2026-05-13 Recognition

Trace and determinant fix covariance matrices for discrete position-momentum

The uncertainty geometry of finite-dimensional position and momentum

The complete geometric region of attainable matrices identifies extremal states and supplies bounds for estimation and entanglement tests.

Figure from the paper
Abstract
Uncertainty relations are usually stated as bounds on selected combinations of variances, but the full covariance matrix contains substantially richer information about the geometry of quantum state space and about the operational capabilities of quantum systems. Here we characterize the covariance matrices attainable by a finite-dimensional canonical pair of observables related by the discrete Fourier transform, the natural analogue of position and momentum in a finite Hilbert space. We combine analytic arguments with convex-geometric and semidefinite-programming methods based on joint numerical ranges to describe the admissible region through unitary invariants, in particular the trace and determinant of the covariance matrix. This provides a systematic way to identify extremal states, generalizing the notion of minimum-uncertainty states, and to quantify how the discrete uncertainty geometry approaches its continuous counterpart with increasing dimension. We further show that the resulting covariance-matrix characterization has direct consequences for applications: it yields accuracy bounds for multiparameter estimation protocols and separability criteria for finite-dimensional bipartite systems, including discrete analogues of continuous-variable EPR-type witnesses. Our results establish a systematic and versatile platform for connecting uncertainty relations, convex quantum geometry, metrology, and entanglement detection in finite-dimensional systems.
0
0
quant-ph 2026-05-13 Recognition

Calibration feedback control cuts optimization gaps in local and tight-loop regimes

Runtime Calibration as State-Trajectory Feedback Control in Quantum-Classical Workflows

Drifting equivalent-age state modeling shows positive gains over static baselines that grow with quality sensitivity and initial age, with a modest gap between local and tight-loop control for single-target recovery.

Figure from the paper
Abstract
In superconducting devices running variational workloads, gate and readout fidelities drift on hour timescales, while existing runtime schedulers treat backend quality as static. The temporal dimension of calibration remains unresolved. We formulate runtime calibration as a state-trajectory feedback-control problem under a fixed wall-clock budget, and investigate whether spending time on calibration now can improve the future optimization trajectory. The calibration-quality proxy is represented as a drifting equivalent-age state, the recovery action is modeled as a costly state reset, and policies are evaluated by the time-integrated optimization gap over the full execution window. Using a finite-horizon rollout controller, we compare feedback calibration against a strengthened family of open-loop baselines across three latency regimes: cloud-like (25 ms), local-millisecond (1 ms), and tight-loop (4 $\mathrm{\mu}$s). The results show a clear ordering: cloud-like feedback is generally uncompetitive, while the local-ms and tight-loop regimes open a positive-gain region that grows with workload quality-sensitivity and initial calibration age. Crucially, the gap between local-ms and tight-loop control is modest for single-target recovery. The advantage of tight-loop integration emerges under capacity pressure, when many calibration targets must be processed within the same control window.
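The control problem described above can be sketched as a tiny finite-horizon dynamic program over the equivalent-age state, standing in for the paper's rollout controller. All dynamics here are hypothetical (linear gap rate in the age, a one-step reset with fixed cost); the point is only that reset actions can strictly lower the time-integrated gap versus a never-recalibrate baseline:

```python
def open_loop_gap(a0, horizon, kappa=1.0):
    """Integrated optimization gap with no recalibration: age grows each step."""
    return sum(kappa * (a0 + t) for t in range(horizon))

def optimal_feedback_gap(a0, horizon, kappa=1.0, reset_cost=30.0):
    """Finite-horizon DP over the equivalent-age state: work vs. one-step reset.
    Toy dynamics, not the paper's model: gap accrues at rate kappa * age."""
    max_age = a0 + horizon
    V = [0.0] * (max_age + 2)                       # terminal values are zero
    for _ in range(horizon):
        V = [min(kappa * a + V[min(a + 1, max_age + 1)],  # keep optimizing
                 reset_cost + V[0])                        # recalibrate now
             for a in range(max_age + 2)]
    return V[a0]

static = open_loop_gap(10, 50)       # never-recalibrate baseline, initial age 10
feedback = optimal_feedback_gap(10, 50)
assert feedback < static             # feedback control beats the static baseline
```

Resetting early pays off exactly when the saved drift over the remaining horizon exceeds the fixed recalibration cost, which is the "positive-gain region" intuition from the abstract.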
0
0
quant-ph 2026-05-13 2 theorems

Quantum circuits show SSH topology survives weak interactions

Adiabatic Quantum Simulation of the Topological Su-Schrieffer-Heeger-Hubbard Model

Simulations extract the many-body Berry phase and find breakdown only after the symmetry-breaking Hubbard term passes a threshold.

Figure from the paper
Abstract
We develop an adiabatic quantum simulation framework on gate-based quantum computers to probe topological signatures of the one-dimensional fermionic Su-Schrieffer-Heeger-Hubbard (SSHH) model. We present explicit quantum-circuit constructions for initial-state preparation and time evolution, together with a practical measurement protocol and classical post-processing procedure for extracting the many-body Berry phase and the spatial profile of the sublattice polarization. Using classical simulations of the proposed circuits, we demonstrate, for the first time within a genuine many-body framework, that the topological characteristics of the SSH model remain robust against weak Hubbard interactions but eventually break down as the chiral-symmetry-breaking component of the interaction exceeds a threshold. The required qubit number, gate complexity, measurement shots, and classical pre- and post-processing costs all scale polynomially with system size. Our results provide a proof-of-concept framework for probing topological properties of interacting many-body systems via adiabatic quantum simulation on future large-scale quantum computers.
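Berry-phase extraction of the kind mentioned above typically relies on the standard discretized formula $\gamma = -\mathrm{Im}\,\ln \prod_k \langle\psi_k|\psi_{k+1}\rangle$ over a closed loop of states. A minimal single-spin sketch (not the paper's SSHH circuits): a spin-1/2 eigenstate on a cone of polar angle $\alpha$ acquires the exact Berry phase $-\pi(1-\cos\alpha)$ as the azimuth sweeps $2\pi$.

```python
import numpy as np

def berry_phase(states):
    """Discrete Berry phase -Im ln prod_k <psi_k|psi_{k+1}> over a closed loop."""
    prod = 1.0 + 0.0j
    for k in range(len(states)):
        prod *= np.vdot(states[k], states[(k + 1) % len(states)])
    return -np.angle(prod)

# Spin-1/2 eigenstate at fixed polar angle alpha, azimuth swept through 2*pi.
alpha, N = np.pi / 3, 2000
thetas = np.linspace(0.0, 2 * np.pi, N, endpoint=False)
loop = [np.array([np.cos(alpha / 2), np.exp(1j * t) * np.sin(alpha / 2)])
        for t in thetas]

gamma = berry_phase(loop)
exact = -np.pi * (1 - np.cos(alpha))   # = -pi/2 for alpha = pi/3
assert abs(gamma - exact) < 1e-2
```

The discretization error vanishes as $O(1/N^2)$, which is why overlap-product estimators of this form are practical even with modest loop resolutions.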
0
0
quant-ph 2026-05-13 2 theorems

Backward retrieval achieved in Stark-modulated spin-wave memory

Realization of Backward Retrieval in a Stark-modulated Spin-wave Quantum Memory

First demonstration preserves full optical depth for over 97% fidelity and points to efficiencies beyond forward reabsorption limits.

Figure from the paper
Abstract
We report the first experimental realization of backward retrieval in a spin-wave quantum memory based on a Stark-echo-modulated protocol in Eu$^{3+}$:Y$_2$SiO$_5$. By using Stark control, we preserve the full optical depth of the ensemble while suppressing coherent noise, enabling conditional storage fidelities above 97%. Our analysis shows that the present backward-retrieval efficiency is mainly limited by technical imperfections rather than by fundamental constraints. With realistic engineering improvements, backward retrieval in this protocol could move beyond the reabsorption-limited forward-emission regime. The protocol is also compatible with cavity-enhanced operation, offering an additional route toward higher efficiencies. These findings establish Stark-echo modulation as a practical and scalable route to high-efficiency, long-lived solid-state quantum memories.
0
0
quant-ph 2026-05-13 2 theorems

Security proof secures decoy-state QKD despite encoder correlations

Security of decoy-state quantum key distribution with correlated bit-and-basis encoders

Finite-key bounds against coherent attacks hold with only partial knowledge of bit-and-basis encoder memory effects.

Abstract
Practical quantum key distribution (QKD) modulators inevitably introduce correlations, causing the state emitted in a given round to depend on the setting choices made in previous rounds. These correlations break the round-by-round independence structure on which many widely used security proof techniques rely, leaving a significant gap between available theoretical guarantees and the reality of practical implementations. In this work, we develop a finite-key security proof for decoy-state BB84 against general coherent attacks that rigorously incorporates correlations introduced by Alice's bit-and-basis encoder, while requiring only partial characterization of such correlations.
0
0
quant-ph 2026-05-13 2 theorems

Chaos emerges through exceptional points in reset-driven Floquet channels

Chaos Emerges with Exceptional Points in Reset-Driven Floquet Dynamics

Tuning the chaos parameter splits channel eigenvalues at exceptional points, breaking symmetry constraints and distinguishing dynamical regimes.

Figure from the paper
Abstract
We investigate the spectral structure of reset-driven Floquet quantum channels generated by the Hamiltonian evolution of a many-body system followed by periodic resetting of a bath. By tuning a chaos-controlling parameter in the underlying Hamiltonian, we uncover an exceptional-point-induced spectral transition from a symmetry-constrained ergodic regime to a fully chaotic regime. Across this transition, increasing the chaos parameter causes the real eigenvalues of the channel to drift, coalesce at exceptional points, and bifurcate into complex-conjugate pairs, signaling the progressive breaking of symmetry constraints in operator space. We further show that the channel spectrum sharply distinguishes chaotic, ergodic, many-body localized, and scarred dynamical regimes. Finally, we connect the leading channel eigenvalues to experimentally accessible probes based on quantum mutual information, establishing a link between the spectral organization of reset-driven quantum channels and observable relaxation dynamics.
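The coalesce-and-bifurcate scenario described above can be illustrated with the textbook 2x2 family $M(\varepsilon) = \begin{pmatrix} 1 & 1 \\ \varepsilon & 1 \end{pmatrix}$, whose eigenvalues $1 \pm \sqrt{\varepsilon}$ are real for $\varepsilon > 0$, coalesce at the exceptional point $\varepsilon = 0$, and split into a complex-conjugate pair for $\varepsilon < 0$. This is a generic sketch of exceptional-point spectra, not the paper's reset-driven channel:

```python
import numpy as np

def eigs(eps):
    # Eigenvalues of [[1, 1], [eps, 1]] are 1 +/- sqrt(eps).
    return np.linalg.eigvals(np.array([[1.0, 1.0], [eps, 1.0]]))

real_pair = eigs(0.09)      # 1 +/- 0.3: two distinct real eigenvalues
degenerate = eigs(0.0)      # exceptional point: eigenvalues (and eigenvectors) coalesce
complex_pair = eigs(-0.09)  # 1 +/- 0.3i: complex-conjugate pair

assert np.allclose(sorted(real_pair.real), [0.7, 1.3]) and np.allclose(real_pair.imag, 0)
assert np.allclose(degenerate, [1.0, 1.0])
assert np.allclose(sorted(complex_pair.imag), [-0.3, 0.3])
```

At $\varepsilon = 0$ the matrix is defective (a single Jordan block), which is what distinguishes an exceptional point from an ordinary degeneracy.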
0
0
quant-ph 2026-05-13 Recognition

Congruences decide leakage in qudit encrypted cloning

Classification of informative subsets in quantum encrypted cloning on qudits

Unauthorized subsets of size n leak input dependence exactly when their congruence system has nontrivial solutions, via a GCD condition that extends the qubit parity classification to arbitrary dimensions.

Abstract
Encrypted cloning offers a means of introducing redundancy into quantum storage while respecting the no-cloning theorem: an unknown state is encoded into multiple signal-noise pairs, and only authorized subsets can recover the original information. However, the leakage properties of unauthorized subsets, particularly for higher-dimensional systems (qudits), have remained unexplored. In this work, we systematically classify the informative subsets of the storage register in the qudit encrypted-cloning protocol. We focus on unauthorized subsets of size $n$ that contain exactly one qudit from each signal-noise pair. We show that the presence or absence of information leakage is determined by the solution set of a system of congruences whose coefficients depend on the dimension $d$ and on the numbers of signal and noise qudits in the subset. The reduced state is completely uninformative if and only if the congruence system admits only the trivial solution; otherwise, it retains a residual dependence on the input state through specific generalized Pauli operators. Low-dimensional examples ($n=1,2,3$) are worked out explicitly, and the complete classification is expressed in terms of a greatest-common-divisor condition. Our results extend the parity-based classification known for qubits ($d=2$) to arbitrary finite dimensions, revealing a dimension-dependent boundary of confidentiality in encrypted cloning.
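A GCD criterion of the kind invoked above rests on the elementary fact that a single linear congruence $a x \equiv 0 \pmod d$ has exactly $\gcd(a, d)$ solutions in $\mathbb{Z}_d$, hence only the trivial one iff $\gcd(a, d) = 1$. A small check of that fact alone (the paper's multi-variable congruence system is not reproduced here):

```python
from math import gcd

def zero_solutions(a, d):
    """Solutions x in Z_d of a*x == 0 (mod d)."""
    return [x for x in range(d) if (a * x) % d == 0]

for a, d in [(3, 7), (2, 6), (4, 6), (5, 10)]:
    sols = zero_solutions(a, d)
    assert len(sols) == gcd(a, d)                 # solution count equals gcd(a, d)
    assert (len(sols) == 1) == (gcd(a, d) == 1)   # trivial only iff coprime
```

For $d = 2$ every nonzero coefficient is coprime to $d$, which is why the qubit case collapses to a parity condition.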
0
0
quant-ph 2026-05-13 Recognition

Nanophotonic memory stores single photons over 1 microsecond

Telecom quantum memory over one microsecond in nanophotonic lithium niobate

Erbium-doped lithium niobate chip holds telecom light far longer than propagation allows, with verified phase coherence and low noise.

Figure from the paper
Abstract
Nanophotonic quantum memory is a vital component for scalable quantum information processing in quantum computing, networking, and sensing. Here we store single-photon-level telecom-band optical pulses for more than 1 microsecond using an atomic frequency comb in erbium-doped thin-film lithium niobate, far exceeding what is practically achievable by propagation in even the best nanophotonic devices because of propagation losses. We verify the quantum nature of this storage by demonstrating phase coherence and sub-single-photon noise upon retrieval. We also show the flexibility of our platform by storing up to 20 temporal modes and demonstrating an acceptance bandwidth up to 2.2 GHz. These results establish erbium-doped thin-film lithium niobate as a practical platform for on-chip quantum memory at telecom wavelengths, a key missing element for photonic quantum computing and quantum networking.
0
0
quant-ph 2026-05-13 Recognition

Scaling threshold of 1/2 separates easy from hard quantum kernels

Wavelet Variance Equipartition as a Threshold for World-Model Quality and Quantum Kernel TN-Simulability

World-model latents with scaling exponent above 1/2 stay area-law entangled and classically simulable; real models fall below into exponential hardness.

Figure from the paper
Abstract
While world models learn compact representations of complex environments, they lack a physics-grounded metric to assess the structural fidelity of their latent spaces. We identify the wavelet scaling exponent $\alpha$ as a critical diagnostic, proposing that optimal representations satisfy variance equipartition ($\alpha \approx 1/2$), mirroring Kolmogorov's inertial range. We establish $\alpha = 1/2$ as a sharp transition boundary for the classical simulability of amplitude-encoded quantum kernels. Using tensor-network theory, we prove latents with $\alpha > 1/2$ reside in an area-law phase admitting efficient classical emulation, while $\alpha < 1/2$ triggers a volume-law phase where the Matrix Product State bond dimension $\chi$ grows exponentially with qubit count $n$. Analyzing pre-trained VideoMAE latents reveals a dichotomy: spatial tokens approach the equipartition limit ($\alpha \approx 0.423$), but permutation-invariant feature channels exhibit unstructured disorder ($\alpha \approx -0.123$). This forces real-world latents deep into the volume-law phase, providing a data-driven necessary condition for simulation hardness. Finally, we apply Weingarten calculus to derive the exact variance of the scrambled transition probability under a 2-design ensemble. We prove this variance scales strictly as $\mathrm{Var}[X] = \Theta(d^{-2})$. We confirm this numerically with a log-log slope of $-1.881$ ($R^2 = 0.999$), identifying a formidable shot-noise wall demanding a measurement budget of $M = \Omega(d^2)$ that constrains quantum machine learning scalability.
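The $\Theta(d^{-2})$ scaling claimed above matches what one already gets for Haar-random pure states (Haar measure is a 2-design): the transition probability $p = |\langle 0|\psi\rangle|^2$ has known moments $\mathbb{E}[p] = 1/d$ and $\mathbb{E}[p^2] = 2/(d(d+1))$, so $\mathrm{Var}[p] = (d-1)/(d^2(d+1))$. A quick check of that closed form (the paper's Weingarten computation for scrambled observables is not reproduced here):

```python
def haar_variance(d):
    """Var of p = |<0|psi>|^2 over Haar-random pure states in dimension d."""
    mean_p = 1.0 / d
    mean_p2 = 2.0 / (d * (d + 1))
    return mean_p2 - mean_p ** 2          # = (d - 1) / (d^2 * (d + 1))

# d^2 * Var = (d - 1)/(d + 1) approaches 1, i.e. Var = Theta(d^-2).
scaled = {d: d * d * haar_variance(d) for d in (2, 8, 64, 1024)}
assert scaled[2] < scaled[8] < scaled[64] < scaled[1024] < 1.0
assert scaled[1024] > 0.99
```

Resolving a signal whose variance is $\Theta(d^{-2})$ against shot noise then requires $M = \Omega(d^2)$ samples, which is the measurement-budget wall the abstract refers to.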
0
0
quant-ph 2026-05-13 2 theorems

Layer ablation shows qubit choice sets quantum teleportation fidelity floor

QuBridge: Layer-wise Fidelity Decomposition in Quantum Computation Pipeline

Isolating each pipeline decision narrows worst-case error from 11.8 percent to under 2 percent, while later stages add a smaller, conditional 0.9 percent gain.

Figure from the paper
Abstract
Running a quantum circuit on current hardware involves a sequence of engineering decisions, each with tunable parameters and distinct error characteristics. Existing tools optimize each decision in isolation, leaving practitioners unable to determine how much each decision contributes to final output quality. We present QuBridge, a pipeline analysis tool that decomposes quantum computation into three decision layers and measures each layer's fidelity contribution through progressive ablation and isolation experiments. Applied to quantum teleportation under IBM-calibrated noise models, the framework surfaces three phenomena that end-to-end measurement obscures. Qubit selection narrows the worst-case fidelity band from 11.8% to under 2% with downstream layers held fixed, without changing the peak. Per-gate pulse-shape assignment adds a +0.9% residual gain whose attributed magnitude depends on upstream layout. Error-detection encoding is not uniformly advantageous, and its conditional benefit emerges for input states whose dominant error channel is detectable by the chosen code. QuBridge operates on cached calibration data without requiring live hardware access.
0
0
quant-ph 2026-05-13 Recognition

Digital annealer cuts average CNOT count by 13.7% in circuit transpilation

Digital Annealer-Assisted Accuracy-First Quantum Circuit Transpilation with Integrated QUBO Mapping and Routing

Hybrid DA mapping plus heuristic routing beats Qiskit optimization on structured quantum circuits, accepting longer compile times for fewer CNOT gates.

Figure from the paper
Abstract
In the Noisy Intermediate-Scale Quantum (NISQ) era, limited qubit counts and high gate error rates directly constrain circuit fidelity, making the minimization of CNOT gate counts crucial. While conventional compilers prioritize heuristic efficiency, there is a compelling need for "accuracy-first" transpilation that prioritizes gate reduction over compilation latency. We propose a framework leveraging the Digital Annealer (DA) via two complementary strategies: (1) Hybrid, which uses DA-driven global initial mapping combined with high-speed heuristic routing by Qiskit, and (2) Full DA, which solves mapping and routing as separate DA-assisted QUBO subproblems within an iterative workflow. Benchmarks demonstrate that our Hybrid approach achieves an average CNOT reduction of 13.7 % (up to 57.4 %) compared to Qiskit's highest optimization level, with the largest gains on structured circuits such as GHZ and ASP where the initial layout is decisive. The Full DA approach matches Hybrid on structured circuits and outperforms ISAAQ by 23.1 % on average (maximum 90.8 %), but degrades on circuits with random or concentrated connectivity - exposing a trade-off between QUBO size and solution quality when the entire circuit is encoded in a single annealing pass. Although these global optimizations incur higher computational overhead than pure heuristics, our results indicate that for high-precision workflows where gate noise is the primary bottleneck, DA-assisted global initial placement provides a practical "time-for-quality" trade-off for enhancing the utility of near-term quantum hardware.
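The QUBO view of initial mapping can be sketched on a toy instance: one-hot variables $x_{q,p}$ place logical qubit $q$ on physical qubit $p$, penalty terms enforce a permutation, and interaction weights times coupling-graph distance stand in for routing cost. Everything here is hypothetical (weights, penalty, a 3-qubit line), and exhaustive search stands in for the Digital Annealer; it is not the paper's encoding:

```python
from itertools import product

# Toy initial-mapping QUBO: logical qubits {0,1,2} onto a 3-site line 0-1-2.
W = {(0, 1): 3, (1, 2): 1}                 # hypothetical two-qubit-gate counts
DIST = [[abs(i - j) for j in range(3)] for i in range(3)]  # line-graph distance
A = 10.0                                    # one-hot penalty, > any routing cost here

def qubo_energy(x):
    """x[q][p] = 1 iff logical q is placed on physical p."""
    e = sum(A * (sum(row) - 1) ** 2 for row in x)                           # each logical once
    e += sum(A * (sum(x[q][p] for q in range(3)) - 1) ** 2 for p in range(3))  # each site once
    for (qa, qb), w in W.items():                                           # routing-cost proxy
        for pa in range(3):
            for pb in range(3):
                e += w * DIST[pa][pb] * x[qa][pa] * x[qb][pb]
    return e

# Exhaustive search over all 2^9 bitstrings stands in for the annealer.
best = min(product((0, 1), repeat=9),
           key=lambda b: qubo_energy([b[0:3], b[3:6], b[6:9]]))
x = [best[0:3], best[3:6], best[6:9]]

assert all(sum(row) == 1 for row in x)     # the minimizer is a valid placement
assert x[1][1] == 1                        # the busiest logical qubit sits mid-line
```

With the penalty weight larger than the maximum routing cost, the unconstrained minimizer is automatically a permutation, which is the usual way one-hot constraints are folded into a single QUBO objective.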
0
0
quant-ph 2026-05-13 2 theorems

Vertical couplers link stacked qubits at 97.5% CZ fidelity

Breaking the scalability barrier via a vertical tunable coupler in 3D integrated transmon system

3D architecture achieves matching intra- and inter-chip gate performance plus cross-chip entanglement, bypassing planar size limits.

Figure from the paper
Abstract
Scaling superconducting quantum processors beyond the constraints of monolithic planar architectures is essential for fault-tolerant quantum computation. Here we demonstrate a three-dimensional (3D) integrated superconducting quantum processor in which two qubit chips are vertically stacked on opposing sides of a carrier chip and galvanically connected via multilayer flip-chip bonding. Intrachip qubit coupling is mediated by planar tunable couplers, whereas interchip coupling is enabled by vertical tunable couplers embedded in the carrier chip. Randomized benchmarking reveals simultaneous single-qubit gate fidelities of 99.87 % with negligible crosstalk, and controlled-Z gates achieve an average fidelity of 97.5 % for both intrachip and interchip operations. We further demonstrate high-fidelity Bell-state preparation and coherent generation of a four-qubit $W$ state, confirming the architecture's capability for interchip entanglement distribution. These results establish vertical coupling as a promising pathway toward scalable quantum processors compatible with advanced quantum error-correcting codes.
0
0
quant-ph 2026-05-13 Recognition

Loss turns into a tool for nonreciprocal qubit entanglement

Loss-induced quantum nonreciprocity and entanglement in superconducting qubits

Two lossy cavities between remote transmons create direction-dependent couplings, enabling tunable nonreciprocal entanglement.

Figure from the paper
Abstract
Losses are ubiquitous in physics and are usually regarded as harmful in quantum information processing. Here, we propose a loss-induced scheme to achieve nonreciprocity and nonreciprocal entanglement in a superconducting platform, where two remote superconducting transmon qubits are connected via two lossy auxiliary cavities. The nonreciprocity in our scheme originates from interference between multiple lossy coupling paths. The coherent phases associated with the qubit-resonator couplings reverse sign under propagation reversal, while the loss-induced phases remain direction independent. Their combined effect leads to different interference conditions in the opposite directions, resulting in unequal effective couplings. We show that this loss-induced scheme can generate nonreciprocal quantum entanglement, indicating that loss can be utilized as a resource. Moreover, the nonreciprocity and nonreciprocal entanglement in our scheme can be tuned via the loss-induced relative phase, allowing one to tailor both reciprocal and nonreciprocal behaviors. Our results establish a direct link between engineered loss and nonreciprocal entanglement in quantum information processing and offer potential applications in scalable quantum networks.
0
0
quant-ph 2026-05-13 3 theorems

String diagrams recast constructor theory as quantum processes

String Diagrams for Quantum Foundations, Computing and Natural Language Processing

They expose a locality-composition conflict and also enable phase-encoded wave logic plus cross-language circuit equivalence in DisCoCirc.

Abstract
Applied category theory provides powerful mathematical tools for modelling processes and their composition. Symmetric monoidal categories, which involve series and parallel composition, are particularly well-suited for describing the composition of processes in space and time. Also called process theories, they admit string diagrams, which constitute a visually intuitive, mathematically rigorous, expressive and flexible syntax that is applicable to wide-ranging scientific domains. In this thesis, we employ string diagrams to investigate a selection of topics in the areas of quantum foundations, computing, and natural language processing: (1) We formalise constructor theory as a process theory. In the context of quantum physics, we also demonstrate the conflict between constructor-theoretic principles of locality and composition. Moreover, we argue that if the principle of locality is rejected, categorical quantum mechanics (CQM) can be conceived as a constructor theory of quantum physics. (2) We develop a formalism for wave-based logic circuits with phase encoding. We motivate the formalism using the example of spin-wave circuits, and then demonstrate its utility in design, analysis and optimisation of Boolean logic circuits. (3) We investigate the elimination of inter-language grammatical bureaucracy in the distributional compositional circuits (DisCoCirc) framework. In particular, we develop a hybrid grammar for a restricted fragment of the Urdu language, and show that Urdu text endowed with this hybrid grammar maps surjectively to DisCoCirc text circuits. Furthermore, we show that for the same language fragment, Urdu and English text circuits become the same up to gate-level translation. The aforementioned work supports the view that a process-relational outlook in science is well-supported by applied category-theoretic tools, particularly string diagrams.
0
0
quant-ph 2026-05-13 2 theorems

Two-qubit battery capacity decreases with rising entanglement

Correlations Between Quantum Battery Capacity and Quantum Resources for Two-qubit System

It peaks when entanglement, steering, nonlocality and coherence vanish, while residual gap tracks entanglement positively.

Figure from the paper
Abstract
We investigate the relationship between quantum battery capacity and quantum resources in a two-qubit system consisting of mutually coupled battery and charger subsystems. We find that the battery capacity decreases monotonically with the quantum entanglement, steering, Bell nonlocality and coherence, and peaks when these four quantum resources vanish. Moreover, we reveal the capacity gap between the total system capacity and the sum of the battery and charger spin capacities, which is the residual battery capacity, and establish its positive correlation with entanglement. Furthermore, unlike the first four resources, although the battery capacity decreases monotonically with quantum imaginarity, its disappearance under system detuning does not guarantee a peak capacity, and this effect becomes more pronounced as the detuning increases. In contrast to the first five resources, the quantum state texture shows a positive correlation with battery capacity, but a negative correlation with entanglement, steering, Bell nonlocality, coherence, imaginarity, and residual battery capacity. These monotonic relationships are independent of the choice of system parameters. Our findings reveal the relationship between quantum battery capacity and quantum resources during the dynamic evolution of a quantum battery system, and advance the theory of quantum batteries and the development of quantum energy storage systems.
0
0
quant-ph 2026-05-13 Recognition

Task runtime dispatches QIR programs to multiple quantum processors

Classic and Quantum Task-Based Intelligent Runtime for QIRs Running on Multiple QPUs

Circuit cutting and classical merging allow parallel sub-task execution while recovering full results on one node.

Figure from the paper
Abstract
High-performance computing systems are rapidly evolving into heterogeneous platforms that fuse quantum accelerators with traditional classical processing units (CPUs) and graphical processing units (GPUs). This convergence calls for runtimes capable of managing both classical and quantum workloads in a unified manner. We introduce an intelligent, task-based runtime that marries the Intelligent RuntIme System (IRIS) asynchronous scheduler with a quantum programming stack through the Quantum Intermediate Representation Execution Engine (QIR-EE). Our design allows programs written in the quantum intermediate representation (QIR) to be dispatched concurrently to a variety of back-ends, including multiple quantum simulators and nascent quantum processors, enabling genuine hybrid execution on a single node. To illustrate its practicality, we partition a 4-qubit and 20-qubit circuit into three sub-circuits using quantum circuit cutting via the QCut library. Each sub-circuit is simulated independently by the QIR-EE driver within IRIS, after which a classical post-processing step merges the simulation results to recover the outcome of the original full-circuit computation. This case study demonstrates how finer task granularity can enable the parallel execution and lower the simulation burden per quantum task while preserving overall accuracy, highlighting the feasibility of our hybrid approach.
0
0
quant-ph 2026-05-13 2 theorems

RL policy selects quantum passes for higher fidelity and speed

TuniQ: Autotuning Compilation Passes for Quantum Workloads at Scale for Effectiveness and Efficiency

Adapts to each circuit and backend without retraining, beating fixed Qiskit sequences with gains that grow on large workloads.

Figure from the paper
Abstract
Quantum processors are being integrated into HPC ecosystems as co-processors, where compilation of quantum circuits into hardware-executable form determines both output fidelity and runtime. Current compilers use a fixed pass sequence and ignore the fact that optimal pass selection varies with circuit, hardware, and noise conditions. We present TuniQ, a reinforcement learning-based system that selects compilation passes at each pipeline stage, adapting to circuit, backend, and current noise profile. TuniQ introduces several novel design components like a dual-encoder for stage-aware representation, shaped rewards for cross-stage credit assignment, and dynamic action masking for valid compilation. Evaluated across diverse quantum workloads on multiple IBM Quantum Cloud processors, TuniQ improves fidelity and reduces compilation time over the state-of-the-art IBM Qiskit transpiler, generalizes across backends without retraining, and scales strongly to utility-scale circuits with growing advantage.
quant-ph 2026-05-12 2 theorems

Teleportation in top quark pairs beats classical limit under noise

Characterizing quantum correlations and quantum teleportation in $gg \to t\bar{t}$ and $q\bar{q} \to t\bar{t}$ processes under noisy channels

Spin correlations from gg and qqbar processes resist amplitude damping and phase flip, keeping fidelity above two thirds.

The measurement of top-quark spin correlations provides a key tool for probing the top quark's interactions with high precision. Owing to its extremely short lifetime ($\tau \sim 10^{-25}$ s), the top quark preserves its spin polarization information, making the $t\bar{t}$ system an ideal framework for investigating quantum correlations in high-energy physics. In this work, we analyze quantum correlations in $t\bar{t}$ pairs produced in QCD using several quantum information-theoretic measures, including Bell nonlocality, quantum steering, concurrence, and geometric quantum discord. Their dependence on kinematic variables is examined in both the $gg \to t\bar{t}$ and $q\bar{q} \to t\bar{t}$ channels, with convergence toward the $gg \to t\bar{t}$ dominated regime in the ultra-relativistic limit ($\beta = 1$). We also investigate the effect of three effective decoherence channels: amplitude damping (AD), phase damping (PD), and phase flip (PF). The AD and PD channels lead to a monotonic degradation of correlations as the decoherence parameter $p$ increases, while the PF channel exhibits a symmetric behavior around $p=1/2$. The impact of these channels on quantum teleportation is analyzed, showing that it remains above the classical threshold of $2/3$ even in the presence of noise. These results indicate that certain quantum resources can persist despite decoherence, opening new perspectives at the interface of quantum information and particle physics.
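The quoted $2/3$ threshold comes from the standard relation between average teleportation fidelity and the maximal singlet fraction of the two-qubit resource state, a textbook result stated here for orientation rather than taken from the paper:

```latex
F_{\mathrm{tel}}(p) \;=\; \frac{2\, f_{\max}(p) + 1}{3},
\qquad
f_{\max}(p) \;=\; \max_{\,|\Phi\rangle \ \text{max.\ entangled}} \; \langle \Phi |\, \rho_{t\bar{t}}(p)\, | \Phi \rangle ,
```

so the teleportation channel beats every classical strategy ($F_{\mathrm{tel}} > 2/3$) exactly when the noisy $t\bar{t}$ spin state retains singlet fraction $f_{\max} > 1/2$.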
quant-ph 2026-05-12 3 theorems

Hypergraph product codes shrink while keeping distances

Spatial overhead reduction for 2D hypergraph product codes

Spatial reduction cuts physical qubits in 2D quantum codes without changing dimension, distances, or subthreshold noise performance.

The hypergraph product creates a quantum stabilizer code from two input classical linear codes; a paradigmatic example being the surface code as a hypergraph product of two classical repetition codes. Many properties of the hypergraph product code can be inherited from those of the classical codes such as the code dimension, minimum distance and certain fault-tolerant gadgets. We investigate ways to reduce the number of physical qubits in hypergraph product codes while maintaining some of their useful properties for fault tolerance. We show that the code dimension, canonical logical basis, and minimum distances of the hypergraph product code are preserved through this reduction. We also provide distance-preserving syndrome measurement schedules as well as examples of reduced hypergraph product codes with parameter improvements such as $[\![610,64,6]\!] \rightarrow [\![441,64,6]\!]$ and $[\![1225,49,11]\!] \rightarrow [\![931,49,11]\!]$. In memory simulations with circuit-level depolarizing noise, we observe that the reduced codes can have similar subthreshold performance as their unreduced versions, but using fewer physical qubits. Finally, we show how overhead reduction can be compatible with homomorphic measurement gadgets, fold-transversal gates and automorphisms, which extends the savings to logical computation.
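The construction the abstract starts from can be written down in a few lines. A minimal sketch using the standard CSS check-matrix form of the hypergraph product (block ordering conventions vary between papers); two distance-3 repetition codes reproduce the 13-qubit distance-3 surface code:

```python
import numpy as np

def repetition_code(n):
    """Parity checks of the length-n classical repetition code."""
    H = np.zeros((n - 1, n), dtype=int)
    for i in range(n - 1):
        H[i, i] = H[i, i + 1] = 1
    return H

def hypergraph_product(H1, H2):
    """CSS check matrices of the hypergraph product of two classical codes."""
    r1, n1 = H1.shape
    r2, n2 = H2.shape
    # Qubits split into an n1*n2 block and an r1*r2 block.
    HX = np.hstack([np.kron(H1, np.eye(n2, dtype=int)),
                    np.kron(np.eye(r1, dtype=int), H2.T)]) % 2
    HZ = np.hstack([np.kron(np.eye(n1, dtype=int), H2),
                    np.kron(H1.T, np.eye(r2, dtype=int))]) % 2
    return HX, HZ

H = repetition_code(3)
HX, HZ = hypergraph_product(H, H)
assert np.all(HX @ HZ.T % 2 == 0)  # CSS condition: X- and Z-checks commute
print(HX.shape[1])  # physical qubits: 3*3 + 2*2 = 13
```

Commutation holds identically because $H_X H_Z^T = H_1 \otimes H_2^T + H_1 \otimes H_2^T = 0 \pmod 2$, which is the structural fact the paper's qubit-reduction maps must preserve.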
quant-ph 2026-05-12 Recognition

Weaker gadgets simulate non-critical quantum systems via extrapolation

Analogue quantum simulation with polylogarithmic interaction strengths by extrapolating within phases of matter

Polylogarithmic interaction strengths suffice when observables are extrapolated within stable phases instead of simulated at full strength.

Simple families of quantum Hamiltonians can simulate general many-body systems at arbitrary precision through the use of perturbative gadgets; however, this generally requires interaction strengths spanning many orders of magnitude which scale polynomially in the system size and inverse precision, resulting in physically unrealisable systems. In this work, we show that for non-critical systems these required scalings can be exponentially reduced through classical post-processing, by simulating the model at smaller energy scales and extrapolating observables to the perturbative limit. In particular, we show that both local and extensive properties of thermal states with exponentially decaying correlations and ground states with a sufficiently stable gap can be simulated using gadgets whose interaction strengths scale only polylogarithmically in the inverse precision and the system size. As a key tool, we develop a generalised treatment of the local Schrieffer-Wolff transformation for geometrically quasi-local Hamiltonians over many energy scales, facilitating the analysis of perturbative gadget Hamiltonians without extensive global energy penalties, which may be of independent interest.
quant-ph 2026-05-12 2 theorems

Classical action branches fail to reconstruct wave functions in tunneling

Quantum tunneling, global phases and the limits of classical action reconstructions

The growing component inside barriers, fixed by global boundary conditions, cannot arise from local real trajectories alone.

It was proposed recently that the Schr\"odinger wave function can be reconstructed exactly from a discrete superposition of classical action branches weighted by associated classical densities, without semiclassical approximations. We examine this construction for quantum tunneling through finite potential barriers and for quantum phase phenomena. Although formally consistent when the Hamilton-Jacobi equation admits globally defined real branches, the construction breaks down in classically forbidden regions where no real classical action exists. Using rectangular and Coulomb barrier tunneling in alpha decay and nuclear fusion, we show that the wave function requires either a non-vanishing quantum potential or complex-valued action. The growing barrier component fixed by global boundary conditions is essential for transmission and cannot arise from local real classical trajectories alone. Berry phase, flux quantization, Josephson tunneling, and dc SQUID interference likewise impose global phase constraints absent from local classical action transport.
quant-ph 2026-05-12 1 theorem

Quantum walks identify hidden graphs with O(n²/log n) measurements

Quantum Algorithm for Identifying Hidden Graphs: Spectral Theory and Numerical Evidence

Spectral theory and numerics up to n=10242 support conjectured exponential speedup over classical query methods

We give a quantum algorithm for a novel type of black-box problem: identifying a hidden $d$-regular base graph $G$ on $n$ vertices from oracle access to an obfuscated version of it, rather than traversing it. From $G$ we build the spired graph $G_{\rm spire}$ in three steps: each vertex is lifted into an exponentially large cluster, with adjacent clusters joined by a random bipartite graph; each cluster is then crowned with a balanced spire; finally, all vertices are randomly relabelled. Specializing to $G=K_2$ recovers the welded-trees graph. Our algorithm is conceptually simple: a continuous-time quantum walk on $G_{\rm spire}$, followed by a single Hadamard test at a classically precomputed time $t^*$; the algorithm returns the candidate whose predicted amplitude is closest to the measurement. The design rests on a rigorous spectral theory: from the apex of any spire, the walk is confined to a polynomial-dimensional invariant subspace evolving under the adjacency matrix of a simpler towered graph $G_{\rm tower}$; that matrix block-diagonalizes into $n$ independent tridiagonal systems of size $n$, each solved in closed form by a Chebyshev secular equation. Efficient numerics enabled by this decomposition supply $t^*$ and the predicted amplitudes. On the prism graphs $Y_m$ versus the M\"obius ladders $M_m$ (each on $n=2m$ vertices), the numerical study supports a precise conjecture that $\widetilde O(n^2/\log n)$ measurements at evolution time of order $m^2$ suffice to distinguish the two families; we have tested $4 \le m \le 5121$ ($n$ up to $10242$). By analogy with the welded-trees lower bounds, we further conjecture that any classical algorithm requires queries exponential in $n$. Together these conjectures point to an exponential quantum speedup for the identification of an obfuscated base graph.
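The algorithm's core quantity, the return amplitude of a continuous-time quantum walk, is easy to compute directly for small graphs. A toy sketch on a 6-cycle standing in for the paper's (far larger) spired graphs, with `scipy`'s matrix exponential playing the role of the physical evolution; a Hadamard test estimates the real part of this amplitude on hardware:

```python
import numpy as np
from scipy.linalg import expm

def walk_amplitude(A, start, t):
    """Return amplitude <start| e^{-iAt} |start> of a continuous-time
    quantum walk generated by adjacency matrix A."""
    return expm(-1j * A * t)[start, start]

# Adjacency matrix of the 6-cycle (toy stand-in for a spired graph).
n = 6
A = np.zeros((n, n))
for i in range(n):
    A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1.0

amp = walk_amplitude(A, start=0, t=1.0)
print(abs(amp) <= 1.0 + 1e-9)  # True: unitarity bounds the amplitude by 1
```

The paper's contribution is precisely to make this classically intractable-looking amplitude predictable: the spectral decomposition into tridiagonal blocks supplies the evolution time $t^*$ and reference amplitudes without exponentiating an exponentially large matrix.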
quant-ph 2026-05-12 Recognition

Learned parity bases raise accuracy 24-42% on binary tasks

Quantum Parity Representations: Learnable Basis Discovery, Encoders, and Shadow Deployment

Hybrid training finds bit-product features that separate labels better than standard classifiers, then runs entirely classically at test time.

We study parity features as representations that can be evaluated entirely classically once the binary or quantized input representation and parity words are fixed, particularly when labels depend on higher-order feature interactions or when discrete inference interfaces support perturbation robustness. A parity feature is a signed product over selected bits of a binary input: once the participating bits are known, evaluation requires no quantum resources. Reaching a useful parity representation requires solving two challenges. When the input is parity-ready (a meaningful binary string), the challenge is basis discovery: selecting useful parity words from a combinatorial search space. Otherwise, the challenge is encoding: constructing a binary vector on which parity computation is meaningful. We use hybrid quantum-classical training pipelines to address these: learnable Pauli word selection for basis discovery, learned projection encodings for continuous embeddings, and sPQC-Parity for discrete inputs. On three native-binary parity tasks with 5-10 qubits, the learned parity basis improves mean accuracy by 23.9% to 41.7% over logistic-regression and support-vector baselines. A model comparison shows that the improvement comes primarily from discovering the right parity basis, rather than from quantum moment computation at inference. On five continuous text benchmarks, learned projection recovers much of the loss introduced by dimensionality reduction and fixed binarization, exceeding the full continuous baseline on CR, SST-2, and SST-5. On three encoding-limited discrete datasets, when compared with PCA-bin as the baseline, sPQC-Parity reaches 94.6% improvement on mushroom, 3.0% on splice, and matches PCA-bin on promoter. We also analyze inference robustness under binary or quantized inference, where rounding gives exact invariance below half the quantization step.
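A parity feature as defined here is classically trivial to evaluate once the word is fixed; the hard part, which the paper addresses with hybrid training, is choosing the word. A sketch with a hand-picked word on a 3-bit XOR task, where no linear function of the raw bits separates the labels but a single parity feature does:

```python
import numpy as np

def parity_feature(x, S):
    """Signed product over the selected bits: chi_S(x) = prod_{i in S} (-1)^x_i."""
    return int(np.prod([(-1) ** int(x[i]) for i in S]))

rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(16, 3))
y = X.sum(axis=1) % 2                      # XOR label: not linearly separable
feats = np.array([parity_feature(x, [0, 1, 2]) for x in X])
# chi_S(x) = +1 exactly when the XOR of the selected bits is 0.
assert np.all((feats == 1) == (y == 0))
```

This is the sense in which "the improvement comes primarily from discovering the right parity basis": once $S = \{0, 1, 2\}$ is known, inference needs no quantum resources at all.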
quant-ph 2026-05-12 2 theorems

Toolkit lets users simulate quantum Hamiltonians via error tolerances

The Quantum Hamiltonian Analysis Toolkit: Lowering the Barrier to Quantum Computing with Hamiltonians

Inputs focus on maximum allowable error and system descriptions instead of algorithm steps or orders, lowering the barrier to analysis and research.

We present the Quantum Hamiltonian Analysis Toolkit (QHAT), a newly developed application that provides a user-friendly interface for studying Hamiltonians and performing Hamiltonian simulation on fault-tolerant quantum computers. QHAT enables the generation and analysis of Hamiltonians through a powerful and feature-rich application, driven by simple inputs designed to reflect user needs rather than algorithmic details, so that productive research on your application of interest can be done without needing a deep understanding of quantum computing algorithms. QHAT enables a streamlined workflow to analyze Hamiltonians and Hamiltonian simulation, supporting multiple choices of algorithms and analyses. It supports Hamiltonians from multiple sources but can also generate Hamiltonians based on a simple description of the system, saving intermediate data files for re-use when generating related Hamiltonians. Deriving the parameters for quantum computing algorithms can be a challenge, so QHAT is built around user-facing concepts such as maximum allowable error, rather than being built around algorithmic details such as step counts or order parameters. An emphasis on user-friendly interfaces and efficient analysis means that the barrier to entry is low while rapidly providing results useful for a broad scope of studies.
quant-ph 2026-05-12 2 theorems

Linearized GST scales error checks to ten qubits

Scalable linearized gate set tomography

Sparse models plus a linear fit on shallow circuits recover coherent, stochastic and crosstalk errors accurately.

Characterizing errors on many-qubit quantum computers remains a key challenge to understanding and improving the performance of these devices. Current characterization methods either don't scale beyond a few qubits, or make simplifying assumptions (such as assuming stochastic Pauli errors) that obscure the underlying physical error mechanisms. In this work, we present a scalable extension to gate set tomography, called linearized gate set tomography, that enables characterization of many-qubit systems. Linearized gate set tomography relies on sparse error models, a linear approximation to enable efficient data fitting, and data from shallow circuits, so that the systematic error in the linear approximation is small. We demonstrate the accuracy of our technique using simulations of a ten-qubit system with coherent and stochastic errors, including coherent crosstalk, and we demonstrate that it is robust in the presence of additional errors that are not included within the sparse error model ansatz.
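The "linear approximation" step reduces data fitting to ordinary least squares: circuit outcome probabilities are expanded to first order in the error-model parameters, $p \approx p_0 + J\theta$, and $\theta$ is recovered from the residuals. A toy noiseless sketch; the Jacobian here is random rather than derived from an actual gate set:

```python
import numpy as np

rng = np.random.default_rng(2)
n_circuits, n_params = 50, 4

J = rng.normal(size=(n_circuits, n_params))        # sensitivity of each circuit
theta_true = np.array([0.01, -0.02, 0.005, 0.0])   # small error-model parameters
p0 = rng.uniform(0.2, 0.8, size=n_circuits)        # ideal outcome probabilities
p_obs = p0 + J @ theta_true                        # first-order 'data'

# Linear fit: recover the error parameters by least squares.
theta_hat, *_ = np.linalg.lstsq(J, p_obs - p0, rcond=None)
print(np.allclose(theta_hat, theta_true))  # True
```

Shallow circuits matter because the expansion is only accurate while $J\theta$ terms beyond first order stay small; deep circuits would accumulate error and bias the fit.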
quant-ph 2026-05-12 2 theorems

Random circuit averages reduce to classical tensor networks

Lecture Notes on Replica Tensor Networks for Random Quantum Circuits

Multi-copy observables become partition functions whose boundary conditions set the measured quantity, turning quantum averages into tractable classical contractions.

We present a pedagogical, hands-on tutorial on \emph{replica tensor-network} techniques for random quantum circuits. At its core, the method recasts circuit-averaged observables acting on multiple copies of the system as the contraction of a classical tensor network, equivalently the partition function of a statistical-mechanics model whose effective spins live in the commutant of the gate ensemble. The framework is general: changing the observable or the initial state modifies only the replica boundary conditions, while changing the ensemble modifies the bulk tensors. Focusing on quantum-information diagnostics, from metrics of wavefunction spreadings to entanglement quantifiers, we illustrate the approach in both clean and noisy random unitary circuits. We then briefly explain how the methodology extends to other ensembles, such as orthogonal or Clifford circuits. The lecture notes are accompanied by \texttt{ReplicaTN}, a self-contained C++/Python library and pedagogical notebooks.
quant-ph 2026-05-12 Recognition

Symmetry-aware trajectories cut N-emitter simulation cost to O(N^2)

Permutation-symmetric quantum trajectories

Exact quantum dynamics of large emitter arrays coupled to a shared mode become feasible without approximating the master equation.

We show how one may perform a stochastic unraveling which respects weak permutation symmetry for models of $N$ emitters coupled to a common system (e.g. a cavity mode). For problems involving 2-level emitters, such an unraveling reduces the computational cost from $\mathcal{O}(N^5)$ to $\mathcal{O}(N^2)$, and with additional refinements, allows reduction to $\mathcal{O}(N)$. This significantly increases the range of system sizes for which one can model exact quantum dynamics of such systems. We further show how the method can also be applied to $d$-level systems, with computational effort scaling as $\mathcal{O}(N^{d(d-1)/2})$, and we show it allows large-$N$ simulations for $d=3$.
quant-ph 2026-05-12 2 theorems

Punctured surface code turns many couplings into one logical signal

Distributed estimation of many-body Hamiltonians via punctured surface code

It enables robust distributed estimation of weighted averages of Z-type interactions via topological protection.

We study how a punctured surface code can turn many local $Z$-type couplings into one protected logical signal for distributed quantum metrology, where the goal is to estimate a weighted average of the coupling strengths. We consider an ordinary planar patch with two $X$-cut holes and provide a distributed sensing protocol where all $Z$-type couplings correspond to the same nontrivial logical $\bar{Z}$ for the punctured surface code. When the couplings are disjoint, we show that the relevant global condition is equivalent to the existence of a closed dual loop, called a witness, that has an odd number of intersections with every chain. Together with a local clean opening condition, this witness criterion gives a concrete punctured-code construction in which all signal chains implement the same nontrivial logical $\bar Z$. For three-body interactions with overlapping supports, we also identify the class of interactions where our punctured surface code protocol applies. Overall, our results provide a novel, noise-robust distributed sensing protocol for many-body interactions, with corresponding topological design criteria.
quant-ph 2026-05-12 2 theorems

Distributed toric code beats monolithic below 0.05% error

Tolerating Device Failure in Distributed Quantum Computing

Modular networks with node failures at probability p/100 keep lower logical errors than a single device.

It is desirable that a distributed quantum computer can operate despite the replacement or failure of its constituent components, allowing the reliability of the distributed system to exceed that of its subcomponents. We first show that when quantum error correction is performed over a modular quantum network, quantum devices can be swapped out or replaced, during operation, with minimal impact on logical error rates. We also investigate the ability of the toric and hyperbolic Floquet quantum error correcting codes to protect logical information under low rates of modular node failure. In particular, we show that under the proposed distributed quantum error correction scheme, the selected codes are able to maintain good logical error suppression during the failure of entire nodes. For catastrophic node failure of probability p/100, we suggest that a distributed toric code would outperform one implemented on a monolithic device below a physical error rate of 0.05%.
quant-ph 2026-05-12 2 theorems

Graph state choice sets entanglement and scrambling speeds

Graph-State Circuit Blocks control Entanglement and Scrambling Velocities

LC-inequivalent graph-state blocks yield distinct v_E and v_B in identical random circuits, driven by internal entanglement distribution and graph connectivity across bipartitions.

Random circuit models often describe local dynamics using generic two-qubit gates, which have proven successful in capturing entanglement growth and operator spreading in many contexts. This approach naturally leads to the expectation that detailed gate structure plays only a limited role in coarse-grained entanglement and scrambling diagnostics. We show that the internal structure of multipartite circuit primitives can significantly influence these dynamical rates, even within a fixed random-circuit architecture. To investigate this, we study an exactly simulable family of Clifford quantum circuits built from fixed $n$-qubit graph-state preparation unitaries, which we treat as elementary building blocks. Specifically, we consider a one-dimensional chain of $N$ qubits initialized in a product state and evolved by layers in which nonoverlapping length-$n$ blocks are placed at uniformly random positions with sparsity $\alpha$. We find that different choices of graph-state building blocks lead to strongly varying dynamical rates. Graph states that are inequivalent under local Clifford (LC) transformations generate sharply different entanglement velocities $v_E$ and butterfly velocities $v_B$, even though the circuits are drawn from the same ensemble with identical architecture and randomness parameters. We further show that this hierarchy is captured by two complementary block-level characteristics: the distribution of entanglement across internal bipartitions of the graph state, which correlates with $v_E$, and a graph-theoretic connectivity profile across bipartitions, which correlates with $v_B$. Neither descriptor alone fully determines the dynamics; rather, entanglement growth and operator spreading are controlled by distinct structural features of the local circuit blocks. Notably, AME states appear among the fastest scrambling building blocks within the ensembles studied here.
quant-ph 2026-05-12 Recognition

Shared oscillator gives constant-depth fanout for any number of qubits

Quantum Fanout Gates in Constant Depth via Resonance Engineering

Resonance-tuned Jaynes-Cummings dynamics produce linear error growth instead of the usual quadratic penalty from CNOT decompositions.

We present a novel implementation of an n-qubit fanout gate using resonance engineering. Our proposed mechanism uses Jaynes-Cummings interactions between multiple qubits and a common harmonic oscillator to realize a fanout gate at the system-level. Our theoretical analysis establishes upper bounds on the gate error, demonstrating linear infidelity scaling in constant time -- a favorable trade-off compared to a conventional CNOT decomposition. To validate the performance of our scheme at large system sizes, we exploit permutation symmetry to reduce the simulation complexity from exponential to polynomial in the number of qubits, enabling simulation up to 100 qubits. The results of this numerical analysis are consistent with our theoretical findings and allow us to characterize the performance well. Our gate will enable faster stabilizer readouts and could provide polynomial speedups in many quantum algorithms.
quant-ph 2026-05-12 2 theorems

3D Hamiltonian encodes qubit for exponential time at finite temperature

A passive self-correcting quantum memory in three dimensions

Recursive transformations on a seed model keep the code local in three dimensions while extending thermal memory lifetime.

We construct a 3D Pauli stabilizer Hamiltonian whose ground state space can encode a qubit for exponential time when coupled to a bath at non-zero temperature. Our construction recursively applies a sequence of transformations to a seed Hamiltonian that increases the memory lifetime of the encoded qubit while maintaining geometric locality in $\mathbb{R}^3$.
quant-ph 2026-05-12 3 theorems

Symmetry fixes strain interaction for biased-erasure phononic gates

Crystallographic Symmetry Generates Phononic Holonomic Gates with Biased-Erasure Channels

Shared irreducible representation yields holonomic gates with 0.47% erasure probability and 64% fewer data qubits in XZZX simulations.

Solid-state processors require control layers whose errors are legible to quantum-error-correction decoders. We show that crystallographic symmetry can provide such a layer in strain-active Lambda manifolds. When the projected strain tensor and Lambda-transition operators share a multiplicity-one two-dimensional irreducible representation, symmetry fixes the linear strain interaction to a scalar dot product. Two phase-locked mechanical modes synthesize a circular strain field, enabling complex phononic Lambda-leg control without local microwave near fields. On this manifold we construct a superadiabatic echo-lune holonomic gate using Lambda-leg control and a resonant double-quantum counterdiabatic tone. Rotating-frame simulations of a nitrogen-vacancy center give 99.88% conditional average fidelity in 1.833 microseconds, or 99.40% when leakage is counted as error. A resonant gigahertz high-overtone bulk acoustic resonator analysis translates the Hamiltonian into Rabi-rate, linewidth, and envelope-tracking requirements. The bright-state structure organizes noise: A2-sector perturbations are parity-filtered into an optically distinguishable auxiliary state, whereas transverse E-sector faults are echo suppressed and retained as a decoder stress axis. The extracted channel has 0.47% erasure probability and 0.168% residual Z error. In XZZX code-capacity simulations, this biased-erasure model yields a nominal 64% fit-extrapolated data-qubit reduction relative to an unstructured Rabi baseline. Repeated-round detector-model diagnostics preserve the nominal distance-9 proxy and identify missed erasures, transverse floors, leakage/flag timing, and strong crosstalk as validation limits. Extensions to orbital Lambda systems and bright-projector phonon-bus diagnostics identify crystallographic symmetry as a principle for co-designing phononic actuation, leakage, noise bias, and quantum decoding.
quant-ph 2026-05-12 Recognition

Global pulses read multi-qubit stabilizers in dual-species arrays

Multi-Qubit Stabilizer Readout on a Dual-Species Rydberg Array

Tuning the Rydberg drive cancels interspecies phase errors to enable simultaneous non-destructive measurements on cesium plaquettes.

The ability to locally control and measure subsets of ancilla qubits in an efficient and crosstalk-free manner is a key ingredient in quantum error correction (QEC). Dual-species neutral atom arrays offer an ideal implementation of these capabilities, enabling independent state preparation, manipulation, and detection on each species. In this work, we realize such a dual-species Rydberg array of Na and Cs atoms trapped in co-localized 2D optical tweezer arrays, using Na as an ancilla to measure stabilizers of surrounding Cs data qubits. We identify the finite interspecies Rydberg-Rydberg interaction strength as a practical obstacle to high-fidelity multi-body entanglement and show that, by tuning the Rabi frequency and the detuning of the Rydberg driving field, the resulting geometric phase error can be compensated. This yields a protocol for simultaneous, non-destructive, in situ stabilizer readout of multiple data qubits via global pulses alone. Using this protocol, we demonstrate non-destructive measurement of Pauli-Z stabilizers on four-qubit Cs plaquettes via a single global Rydberg pulse sequence. Our results demonstrate dual-species tweezer arrays as a promising route towards scalable QEC and open the door to new quantum control protocols leveraging both interspecies and intraspecies interactions.
quant-ph 2026-05-12 Recognition

RL agent scales Clifford synthesis to 30 qubits after 10-qubit training

Equivariant Reinforcement Learning for Clifford Quantum Circuit Synthesis

Equivariant policy finds shorter circuits than Qiskit methods on large unseen tableaus

We consider the problem of synthesizing Clifford quantum circuits for devices with all-to-all qubit connectivity. We approach this task as a reinforcement learning problem in which an agent learns to discover a sequence of elementary Clifford gates that reduces a given symplectic matrix representation of a Clifford circuit to the identity. This formulation permits a simple learning curriculum based on random walks from the identity. We introduce a novel neural network architecture that is equivariant to qubit relabelings of the symplectic matrix representation, and which is size-agnostic, allowing a single learned policy to be applied across different qubit counts without circuit splicing or network reparameterization. On six-qubit Clifford circuits, the largest regime for which optimal references are available, our agent finds circuits within one two-qubit gate of optimality in milliseconds per instance, and finds optimal circuits in 99.2% of instances within seconds per instance. After continued training on ten-qubit instances, the agent scales to unseen Clifford tableaus with up to thirty qubits, including targets generated from circuits with over a thousand Clifford gates, where it achieves lower average two-qubit gate counts than Qiskit's Aaronson-Gottesman and greedy Clifford synthesizers.
quant-ph 2026-05-12 Recognition

Hybrid method outperforms random addition in discrete search

Improving search efficiency via adaptive acquisition function selection in discrete black-box optimization

BOCS plus adaptive GP fallback on stagnation yields better QUBO and HUBO solutions by advancing local neighborhoods.

In discrete-variable black-box optimization, the number of candidate solutions grows combinatorially, while each evaluation is often expensive. Therefore, it is important to identify promising solutions efficiently within a limited number of trials. Bayesian Optimization of Combinatorial Structures (BOCS), an existing parametric method, works effectively when only a small amount of data is available. However, as the number of observations increases, BOCS tends to repeatedly propose points that have already been evaluated, which leads to search stagnation. A random-point addition strategy has been proposed to address this issue when an evaluated point is proposed, but it cannot sufficiently exploit information from promising data obtained so far. In this study, we propose a hybrid method that uses BOCS as the main search framework and generates alternative unevaluated points using a Gaussian process only when search stagnation is detected. In the Gaussian-process-based component, multiple Lower Confidence Bound (LCB) acquisition functions are adaptively selected to dynamically control the balance between exploitation and exploration. Numerical experiments using fully connected Quadratic Unconstrained Binary Optimization (QUBO) and Higher-order Unconstrained Binary Optimization (HUBO) as black-box functions show that the proposed method finds solutions with better objective values than the conventional random-point addition method in both settings. Additional analyses show that its effectiveness comes from selecting points that promote search progress within Hamming-distance neighborhoods, rather than simply adding low-energy points near promising solutions. Experiments with sparse surrogate models for quantum annealer applications further suggest the importance of retaining near-fully connected representational capacity.
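The stagnation fallback can be sketched generically: when the main proposal has already been evaluated, score candidates with lower-confidence bounds $\mu - \kappa\sigma$ at several $\kappa$ values and take the first unevaluated point. This is a simplified illustration of the adaptive-acquisition idea, with made-up surrogate means and variances, not the paper's BOCS-plus-Gaussian-process pipeline:

```python
import numpy as np

def adaptive_lcb_pick(mu, sigma, evaluated, kappas=(0.5, 1.0, 2.0)):
    """On stagnation, widen exploration: try LCB with growing kappa until
    an unevaluated candidate is proposed (minimization convention)."""
    for kappa in kappas:
        order = np.argsort(mu - kappa * sigma)   # smallest LCB first
        for idx in order:
            if idx not in evaluated:
                return int(idx), kappa
    return None, None

mu = np.array([0.1, 0.2, 0.9, 0.8])      # surrogate means (toy values)
sigma = np.array([0.01, 0.02, 0.5, 0.6]) # surrogate uncertainties
idx, kappa = adaptive_lcb_pick(mu, sigma, evaluated={0, 1})
print(idx, kappa)  # 3 0.5
```

Growing $\kappa$ shifts weight from exploitation (low mean) toward exploration (high uncertainty), which is the dynamic balance the proposed method controls by selecting among multiple LCB acquisition functions.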
quant-ph 2026-05-12 Recognition

Sharp density jump found in exactly solvable dissipative fermions

Exact steady states of interacting driven dissipative fermionic systems with hidden time-reversal symmetry

Generalizing the absorber technique uncovers hidden time-reversal symmetry that keeps the first-order transition intact at finite loss rates

We present exact solutions for the non-equilibrium steady states of a class of dissipative spinless fermionic systems with arbitrary Hamiltonian pairing terms, global charging energy interactions, and uniform single particle loss on every site. Our exact solution is found by generalizing the coherent quantum absorber technique to fermionic systems, and our result establishes the existence of hidden time-reversal symmetry in driven-dissipative fermionic models. The steady state exhibits a first order phase transition in the particle density, with the resulting jump discontinuity in density persisting even for finite dissipation rates. A mean-field description of the model exhibits a bistable regime that encompasses the first-order transition line yet which fails to accurately predict its precise location via a Maxwell construction. We also show that the model's hidden time-reversal symmetry results in an Onsager symmetry of certain two-time correlation functions.
quant-ph 2026-05-12 2 theorems

Quantum currents cluster unlabeled data without tomography

Qlustering for Data Clustering via Network-Based Quantum Transport

Steady-state transport in open networks supplies cluster assignments directly, matching performance on Iris and QM9 while avoiding full-state tomography.

Figure from the paper
abstract
Analog quantum computation offers a route to machine learning using controllable physical dynamics as a computational resource. However, many existing approaches rely on task-specific protocols or observables that are difficult to access experimentally, limiting generality and implementation. Here we introduce Qlustering, an unsupervised clustering framework based on steady-state quantum transport in quantum networks governed by the GKSL master equation, developed through algorithm-hardware co-design. Data are encoded as input states, and cluster assignments are inferred from steady-state output currents, avoiding full state tomography in favor of accessible transport observables. The method realizes a hybrid classical-quantum workflow in which data preparation and training are performed classically, while clustering is carried out by transport dynamics. We benchmark the method on synthetic datasets, localization, and QM9 and Iris, finding competitive performance and stability over a broad range of dephasing strengths. These results show that unlabeled data structure can be extracted directly from steady-state transport observables, identifying terminal-current readout as a native, tomography-free mechanism for unsupervised learning in open quantum networks.
quant-ph 2026-05-12 2 theorems

Perturbations create synthetic twist defects in surface code

Emergence of synthetic twist defects in the surface code under local perturbation

Simplified spin and Majorana models locate the phase transition where non-Abelian defects emerge under local changes.

Figure from the paper
abstract
Topologically-ordered quantum states with Abelian excitations can host defects that obey effective non-Abelian statistics, in principle allowing for quantum information processing via defect braiding. These extrinsic defects (or twists) are typically studied as static features of the lattice. However, an alternative proposal considers how an underlying topologically ordered quantum substrate can be locally perturbed to create and manipulate synthetic defects \cite{you_synthetic_2013}. Unfortunately, while widely referenced, elements of this proposal were never systematically studied. Understanding the energy spectrum is particularly important in finite-size and finitely perturbed systems, which are crucial for experimental realizations. In this work we announce a significant step in this direction by explicitly constructing, simplifying, and numerically studying the spectral properties of synthetic defects in a model system. First, we introduce two alternative representations of this problem in both spin and Majorana languages. In the former we describe emergent virtual symmetries which constrain and simplify the problem, and in the latter we show a direct connection to Kitaev's well-known Majorana chain. We utilize these simplifications to perform numerical calculations that indicate the location of the quantum phase transition driving the emergence of the synthetic defects. We conclude by discussing key steps for future work to more clearly and completely study this phenomenon.
quant-ph 2026-05-12 Recognition

Two-parameter quantum net beats classical nets needing eight parameters

Algorithmic Advantage on a Gate-Based Photonic Quantum Neural Network

Photonic QNN reaches 100 percent accuracy on nonlinear problems where matched ANNs fail, showing efficiency gains on current hardware.

Figure from the paper
abstract
We report on a gate-based variational quantum classifier implemented with single photons and probabilistic gates, to emulate the standard quantum circuit model framework. We evaluate the expressive power of two deployable quantum neural networks (QNNs) by computing their effective dimension, a capacity measure grounded in a proven generalization-error bound, and compare them with classical artificial neural networks (ANNs) of equivalent trainable-parameter count. Supervised binary classification tasks are used to benchmark performance across photonic and superconducting QNNs, both of which exhibit superior converged (lower) cross-entropy loss and (higher) prediction accuracy relative to matched-parameter ANNs. For a nonlinearly separable task, our photonic QNN with a single pair of trainable parameters successfully converged (loss 0.04 and accuracy 100%), whereas the equivalent ANN failed to learn the decision boundary, saturating at random-guessing performance. We simulate photonic quantum circuits, training them on the XOR problem and a two-class Iris subset using gradient-free optimization, and assess their robustness to sampling errors under realistic noise processes including photon loss and phase-shifter imperfections. Circuits with comparatively high effective dimension were deployed remotely on a six-qubit photonic quantum processor, achieving classification accuracies of up to 100% in both online and offline learning settings. Notably, even the simplest QNN deployed, with just two trainable parameters, successfully solved tasks that classically require ANNs with at least quadruple the number of parameters, suggesting an emergent algorithmic advantage. Overall, these results demonstrate a clear proof-of-principle that gate-based QNNs can be realized and trained effectively on current photonic hardware, providing proof of algorithmic advantage on a gate-based photonic QNN.
quant-ph 2026-05-12 2 theorems

Dissipation mismatch induces holonomy in qubit work cycles

Holonomy and Complementarity in Open Quantum Systems

Quasistatic driving maps complementarity to geometry on the Bloch sphere, with curvature depending on pointer alignment.

Figure from the paper
abstract
Complementarity relations constrain the distribution of coherence, predictability, and openness in quantum systems. Here we show that, in open quantum systems, these local constraints acquire a geometric interpretation through quasistatic transport. For a driven dissipative qubit, the complementarity variables define cylindrical coordinates on the Bloch sphere, while openness appears geometrically as a radial deficit associated with reduction from a larger Hilbert space. Quasistatic driving induces a work connection on the resulting steady-state manifold whose curvature determines the cyclic response. Hamiltonian-aligned dissipation produces an exact work connection and vanishing cyclic work, whereas fixed pointer-basis dissipation generates non-integrable transport, finite curvature, and holonomic response. The resulting curvature admits a phase-resolved representation on the triality manifold and develops perturbatively with pointer--Hamiltonian mismatch. In the weak-mismatch limit, the curvature is governed by a competition between coherence-preserving and pure-dephasing channels, producing symmetry-related positive- and negative-curvature sectors. These results establish a direct connection between complementarity, dissipation, and geometric thermodynamic response, and show that cyclic quasistatic work provides an operational probe of nonequilibrium quantum geometry.
quant-ph 2026-05-12 2 theorems

Local two-qubit equivalence skips the Weyl chamber

On the KAK Decomposition and Equivalence Classes

The standard geometric picture holds only when global phases are ignored; proper local classes use a different region.

Figure from the paper
abstract
The KAK decomposition is a fundamental tool in Lie theory and quantum computing. Despite its widespread use, the mathematical foundations remain incomplete, particularly regarding the precise conditions for the decomposition and the characterization of equivalence classes under multiplication by elements of $K$. Here, we present a mathematical theory of the KAK decomposition for connected compact semisimple Lie groups and derive the decomposition for $\mathrm{SU}(4)$. In particular, we clarify the relationship between various definitions of a Cartan decomposition in the literature and give a complete proof of a general KAK decomposition theorem. We then distinguish two distinct notions of KAK equivalence classes, double coset equivalence and projective equivalence, thereby addressing mathematical inconsistencies regarding KAK classification in the literature. Specifically, for $\mathrm{SU}(4)$, we show that local equivalence classes under multiplication by $\mathrm{SU}(2)\otimes \mathrm{SU}(2)$ are geometrically represented not by the usual "Weyl chamber" as claimed in the existing literature. Instead, the "Weyl chamber" is only recovered by the projective-local equivalence which disregards global phases. We develop a systematic theory for determining equivalence and uniqueness for both notions of equivalence. Our work establishes a rigorous Lie-theoretic foundation for the theory of quantum gates and circuits.
quant-ph 2026-05-12 2 theorems

Unitaria composes quantum block encodings like NumPy arrays

Unitaria: Quantum Linear Algebra via Block Encodings

High-level operations yield circuits automatically and support classical verification without simulation or extra qubits.

Figure from the paper
abstract
We introduce Unitaria, a Python library that brings the simplicity of classical linear algebra toolkits such as NumPy and SciPy to the implementation of quantum algorithms based on block encodings, a general-purpose abstraction in which a matrix is embedded as a sub-block of a larger unitary operator. Their implementation has so far required deep knowledge of low-level circuit construction, which Unitaria aims to eliminate. The library provides a composable, array-like interface through which users can define block encodings of matrices and vectors, combine them through standard operations such as addition, multiplication, tensor products, and the Quantum Singular Value Transformation, and extract the resulting quantum circuits automatically. A key feature is a matrix-arithmetic evaluation path in which every operation can be computed directly on encoded vectors and matrices without dependence on ancilla qubits or circuit simulation. This enables correctness verification and classical simulation that scale well beyond what state vector simulation permits and also allows resource estimation, including gate counts, qubit counts, and normalization constants, without executing any circuit. Together, these capabilities allow researchers to develop, verify, and analyze quantum linear algebra algorithms today, ahead of the availability of error-corrected hardware. Unitaria is open source and available at https://github.com/tequilahub/unitaria.
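As a sketch of the core abstraction the library builds on (not Unitaria's actual API; the function name here is illustrative), the following shows the standard unitary dilation that embeds a matrix with spectral norm at most 1 as the top-left block of a larger unitary, here specialized to real symmetric matrices:

```python
import numpy as np

def block_encode(A):
    """Embed a real symmetric matrix A with spectral norm <= 1 as the
    top-left block of the orthogonal dilation
        U = [[A, S], [S, -A]],  with  S = (I - A^2)^(1/2).
    Because A is symmetric and commutes with S, U @ U.T = I exactly."""
    w, V = np.linalg.eigh(A)                 # symmetric eigendecomposition
    assert np.max(np.abs(w)) <= 1.0, "need ||A|| <= 1 (rescale first)"
    S = V @ np.diag(np.sqrt(1.0 - w**2)) @ V.T   # matrix square root of I - A^2
    return np.block([[A, S], [S, -A]])

A = np.array([[0.3, 0.2],
              [0.2, -0.1]])
U = block_encode(A)
n = A.shape[0]
assert np.allclose(U @ U.T, np.eye(2 * n))   # U is unitary (orthogonal)
assert np.allclose(U[:n, :n], A)             # top-left block recovers A
```

The "matrix-arithmetic evaluation path" mentioned in the abstract corresponds to manipulating `A` directly while tracking normalization, rather than simulating the full `U`.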
quant-ph 2026-05-12 2 theorems

Quantum receivers surpass diffraction limit for sub-Rayleigh sources

Passive optical superresolution at the quantum limit

Reformulating imaging as quantum estimation yields optimal detection strategies that extract more spatial information than direct intensity detection.

Figure from the paper
abstract
For more than a century, the diffraction limit has defined the resolution achievable by passive optical imaging systems. Although some resolution improvement can be gained through classical data processing of the image, it is limited by the noise arising from quantum nature of light. Minimizing the effect of this noise requires quantum treatment of optical imaging. By reformulating imaging as a problem of quantum measurement and estimation, it becomes possible to identify optimal detection strategies that recover spatial information previously thought inaccessible. This review summarizes the theoretical framework that underpins this development, from the formulation of quantum Cram\'er-Rao bounds and Chernoff bounds to the construction of receivers that attain them, such as those based on spatial-mode demultiplexing. We show how these methods can beat conventional imaging in the classification, localization, and imaging of sub-Rayleigh incoherent sources. We then discuss extensions to multiparameter and partially coherent scenarios, and highlight the unifying connections between estimation and discrimination tasks. Finally, we survey recent experimental demonstrations that approach quantum-limited resolution and outline emerging applications in microscopy, astronomy, and optical sensing.
quant-ph 2026-05-12 2 theorems

No measurement-induced transition in monitored disordered fermions

No measurement induced phase transition in the entanglement dynamics of monitored non-interacting one-dimensional fermions in a disordered or quasiperiodic potential

Large-scale simulations and sigma-model analysis find critical monitoring strength is zero for any disorder or quasiperiodicity.

Figure from the paper
abstract
We show that the entanglement entropy (EE) of one-dimensional (1d) non-interacting fermions with $U(1)$ symmetry in the presence of a quasi-periodic or disordered potential, in which the occupation number is being monitored by homodyne or quantum jump protocols, is always in an area-law phase, so no measurement induced phase transition (MIPT) occurs. The reason for the previously claimed MIPT in these systems was a finite-size effect related to the fact that the maximum lattice size $L \sim 500$ was of the order of the correlation length. By increasing the system size up to $L \leq 18000$, employing Graphics Processing Units (GPUs), and performing a careful finite-size scaling analysis, we find that the critical monitoring strength is consistent with zero, so no MIPT occurs. For the disordered case, these numerical results are fully supported by an analytical calculation based on mapping the problem onto a nonlinear sigma model (NLSM) with an additional mass-like term that confirms the absence of the MIPT for any monitoring or disorder strength. Another salient feature of the disordered case, in part related to a different symmetry in the NLSM, is that the correlation length in the weak disorder limit is longer than in the clean limit and increases with the disorder strength.
quant-ph 2026-05-12 2 theorems

Coupled rings suppress parasitic noise for near-unit squeezing

Squeezing Enhancement Through Resonant Interference in Multi-ring Resonators

Hybridization splits unwanted resonances in two-ring structures, yielding near-unit fidelity in degenerate squeezed light.

Figure from the paper
abstract
We develop a non-perturbative description of squeezed light generation in an arbitrary lossy structure consisting of multiple coupled microring resonators. This is applied to two ring photonic molecules where the interference of the fields in the coupled rings leads to a modification in the resonance spectrum near a shared resonance. Considering a dual-pump degenerate squeezing scheme under a five resonance approximation, we investigate two methods to suppress parasitic four-wave mixing contributions and compensate for group velocity dispersion within a primary resonator through hybridization effects with a second auxiliary resonator. In the former case, this comes from an effective splitting of the unwanted resonances supporting parasitic four-wave mixing interactions that add thermal noise to the desired degenerate squeezed state. For sufficiently strong coupling between the resonators, we demonstrate near complete suppression of such parasitic processes, resulting in near unit fidelities with the corresponding output state that would arise were the parasitic interactions neglected. In the latter case, the hybridization effectively shifts a pump resonance, realigning the desired dual-pump four-wave mixing process and leading to a significant enhancement of the signal generation and output squeezing.
quant-ph 2026-05-12 Recognition

Hollow-core upgrades cut QKD modules by 49% in metro networks

Selective Placement of Hollow-Core Fibers for QKD and Classical Communication Coexistence

Upgrading just 40% of links allows classical and quantum traffic to share more efficiently, reducing hardware needs.

Figure from the paper
abstract
We investigate the benefits of partially upgrading optical networks with hollow-core fibers for QKD-classical communication coexistence. Results show that upgrading 40% of links in a metro topology can reduce the number of quantum modules by up to 49%.
quant-ph 2026-05-12 Recognition

Pruning cuts distributed iQFT communication to linear scaling

Communication-Efficient Distributed Inverse Quantum Fourier Transform

Entanglement consumed per node stays constant as the number of processors grows instead of rising quadratically.

Figure from the paper
abstract
The scalability of quantum computing is currently limited by physical, technological, and architectural constraints that hinder the integration of a large number of qubits within a single quantum processor. Distributed quantum computing (DQC) has therefore emerged as a viable alternative, aiming to interconnect multiple smaller quantum processing units (QPUs) to jointly operate on a global quantum state. While this paradigm enables scalable architectures, it introduces significant communication overhead due to the cost of non-local quantum operations across distant nodes. In this work we propose a distributed formulation of the iQFT over a quantum network composed of $P$ nodes, each hosting $Q$ qubits, enabling the execution on a logical register of size $n = P \cdot Q$. Furthermore, we introduce a communication-efficient variant based on a threshold-driven pruning strategy, referred to as a \emph{communication horizon}, which exploits the exponentially decreasing significance of controlled-phase rotations to safely omit remote gates with negligible impact. By reducing the number of inter-node quantum interactions, the proposed approach significantly lowers the quantum communication requirements of the distributed iQFT while preserving its functional correctness. Crucially, we show that this approach fundamentally alters the scaling of the algorithm: the entanglement resource consumption per node saturates to a constant value, reducing the global communication complexity from quadratic $\mathcal{O}(P^2)$ to linear $\mathcal{O}(P)$. As the iQFT constitutes a critical building block in many quantum algorithms, the techniques presented in this paper directly contribute to improving the practicality and scalability of distributed quantum computation.
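The scaling change claimed above can be checked with a back-of-the-envelope gate count (illustrative counting only, with a hypothetical helper name): in the QFT the controlled-phase angle between qubits $i$ and $j$ is $\pi/2^{j-i}$, so it decays exponentially with distance, and pairs beyond a distance "horizon" $h$ can be pruned.

```python
def remote_gates(P, Q, horizon=None):
    """Count controlled-phase pairs (i, j), i < j, that cross a node
    boundary in a distributed (i)QFT on n = P * Q qubits, optionally
    pruning pairs with j - i > horizon (angle pi / 2**(j-i) is negligible)."""
    n = P * Q
    count = 0
    for i in range(n):
        for j in range(i + 1, n):
            if i // Q == j // Q:      # same QPU: no communication needed
                continue
            if horizon is not None and j - i > horizon:
                continue              # pruned by the communication horizon
            count += 1
    return count

# Without pruning, remote gates grow quadratically in the node count P;
# with a fixed horizon, each node boundary contributes a constant cost.
full = [remote_gates(P, 4) for P in (4, 8)]               # [96, 448]
pruned = [remote_gates(P, 4, horizon=4) for P in (4, 8)]  # [30, 70]
```

With `Q = 4` and `horizon = 4`, the pruned count is exactly `10 * (P - 1)` (ten cross-boundary pairs per adjacent node pair), reproducing the $\mathcal{O}(P^2) \to \mathcal{O}(P)$ reduction described in the abstract.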
quant-ph 2026-05-12 Recognition

Hybrid quantum solver drops ancilla qubit overhead for ODEs

Quantum Differential Equation Solver via Hybrid Oscillator-Qubit Linear Combination of Hamiltonian Simulations

Oscillator kernel encoding reaches 99.9 percent fidelity on heat equations with a more compact oracle than qubit-only LCHS

Figure from the paper
abstract
We introduce a hybrid oscillator-qubit formulation of linear combination of Hamiltonian simulation (LCHS) for solving linear ordinary differential equations. Instead of representing the quadrature rule with a discrete-variable (DV) ancilla register as in qubit-only LCHS, the method encodes the LCHS kernel in a continuous-variable (CV) ancillary mode, thereby eliminating the explicit $O(\log M_a)$ ancilla-qubit overhead, where $M_a$ is the number of discretized integral terms in the DV quadrature rule. We derive analytical error bounds for two main approximation mechanisms for the ideal kernel state preparation, showing superalgebraic convergence for Schwartz-class kernels in the truncation cutoff $N$. The required CV non-Gaussianity is captured by the finite squeezed-Fock kernel state, which generically has stellar rank $N-1$, identifying the truncation cutoff as a discrete measure of the oracle's non-Gaussian resource. For the hybrid oscillator-qubit evolution, we also obtain a product-formula bound showing that a $p$th-order formula requires $O(t^{1+1/p}(\Gamma_{p,N}/\epsilon_t)^{1/p})$ Trotter steps to reach error $\epsilon_t$, where $\Gamma_{p,N}$ collects Pauli commutator terms weighted by powers of the truncated position-operator norm $\|\hat{x}\|_N$. We further derive a perturbation bound for the probability of obtaining the required oscillator measurement outcome, showing that an $\epsilon$-close implementation of the ideal LCHS oracle in operator norm induces only an $O(\epsilon)$ perturbation in the postselection probability. In the heat-equation benchmarks, the Law--Eberly protocol achieves end-to-end solution fidelity of at least 99.90%. A comparison with a matrix-product-state-based DV LCHS implementation further shows that the hybrid construction uses a substantially more compact oracle description with reduced circuit cost.
quant-ph 2026-05-12 2 theorems

Quantum automata need Θ(n²) states to simulate exactly

On the Simulation Cost of Quantum Finite Automata

Exact probabilistic simulation cost for one-way models under strict cutpoints scales quadratically with quantum dimension.

abstract
This paper identifies exact probabilistic simulation cost as the natural quantitative measure of quantum advantage for finite automata under strict cutpoints. It gives sharp simulation laws for two representative models. A one-way finite automaton with $c$ classical states and a $q$-dimensional quantum register has exact probabilistic simulation cost $\Theta(cq^2)$, while an $n$-dimensional measure-once one-way quantum finite automaton has worst-case cost $\Theta(n^2)$. The proofs develop a prepare--test framework, in which prefixes generate the relevant real operator degrees of freedom and suffixes convert them into strict-cutpoint tests. The same obstruction is recast through finite sign-rank matrices, clarifying the role of Forster's spectral method. Placed beside the surrounding two-way separations, these results give a clean hierarchy of finite-automata quantum advantage.
quant-ph 2026-05-12 2 theorems

NISQ simulates real magnet spectra with constant time scaling

Quantum Simulation of Magnetic Materials: from Ab-Initio to NISQ

Ab-initio spin models for chromium tri-halides yield matching results up to 48 qubits on cloud quantum hardware, unlike exponential scaling.

abstract
Quantum computers are increasingly accessible, yet demonstrations of physically meaningful simulations for real materials remain scarce. In our work we simulate low-energy magnetic excitations, specifically spin-wave spectra, of chromium tri-halide monolayers. Starting from ab-initio electronic structure calculations for these two-dimensional magnets, we derive an effective spin model and simulate low-energy spin excitations using a real-time propagation of the spin system on the commercial quantum computing cloud platform IQM Resonance. The results for systems with up to 48 qubits are validated against classical benchmarks. While some spectral features remain challenging for today's NISQ devices, our simulation achieves good agreement at quasi-constant wall-time scaling, compared to the exponential scaling of classical methods. Our results demonstrate that, even in the absence of quantum advantage, useful quantum simulations of real materials are becoming possible for domain experts via commercial cloud access to quantum computers.
quant-ph 2026-05-12 2 theorems

Multivariate DQI beats weighted Prange on some optimization problems

Decoded Quantum Interferometry for Weighted Optimization Problems

Weight blocks and multivariate polynomial states let the quantum method exceed the natural classical benchmark for certain weighted OPI problems.

abstract
Decoded Quantum Interferometry (DQI) is a recently introduced quantum algorithm that reduces discrete optimization to decoding with potential advantages over the best known polynomial-time classical algorithms for certain Max-LINSAT problems. In its original formulation, however, DQI treats all constraints uniformly and cannot exploit the weight structure present in most optimization problems of interest. In this work, we develop a theory of DQI for weighted optimization problems, focusing on the weighted Max-LINSAT problem over a prime field. Grouping constraints into $N$ blocks by distinct weights, we introduce \emph{multivariate DQI states} built from $N$-variable polynomials of bounded total degree, and derive a closed-form asymptotic expression for both their optimal expectation value and their concentration behavior. We give an explicit preparation circuit using a single decoder call, and extend the analysis to imperfect decoding. We also show that, for certain weighted OPI problems, multivariate DQI outperforms a natural weighted analogue of Prange's algorithm, which serves as the weighted counterpart of the classical benchmark used in the unweighted setting. Finally, we extend the ideas to Hamiltonian DQI, obtaining approximate Gibbs states for commuting Pauli Hamiltonians with block structure.
quant-ph 2026-05-12 Recognition

PT chain scattering invalid above size-dependent gain threshold

Physical relevance of time-independent scattering predictions in periodic $\mathcal{PT}$-symmetric chains

The onset threshold for time-growing states scales as 1/N, placing many large-structure predictions outside the physical regime.

Figure from the paper
abstract
Time-independent scattering methods are widely used to analyze transport in periodic $\mathcal{PT}$-symmetric systems. However, their predictions become unphysical when the system supports time-growing bound states (TGBSs), which manifest as $S$-matrix poles in the first quadrant of the complex wave-number plane. Here, we analytically delineate the region of physical relevance for a $\mathcal{PT}$-symmetric chain of $N$ unit cells with gain/loss strength $\gamma$. We derive the TGBS onset threshold $\gamma_c = 2\sin[\pi/(4N)]$, which scales as $\pi/(2N)$ for large $N$ and vanishes in the thermodynamic limit. Enlarging the structure thus enriches stationary scattering phenomenology but inevitably triggers TGBSs at weaker gain/loss. Time-dependent wave-packet simulations confirm this analytical boundary quantitatively. Applying this criterion, we show that many previously reported predictions of gain-loss-induced localization, reflectionless transport, and coherent perfect absorbers and lasers in large periodic structures fall outside the physically relevant regime. $S$-matrix pole analysis is therefore an indispensable prerequisite for interpreting time-independent scattering predictions in periodic non-Hermitian systems.
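The quoted threshold and its large-$N$ behaviour are easy to check numerically. A minimal sketch of the stated formula $\gamma_c = 2\sin[\pi/(4N)]$ (the function name is illustrative):

```python
import math

def gamma_c(N):
    """TGBS onset threshold gamma_c = 2 sin(pi / (4 N)) for N unit cells."""
    return 2.0 * math.sin(math.pi / (4.0 * N))

# Large-N asymptote pi/(2N): since sin(x) ~ x for small x,
# 2 sin(pi/(4N)) ~ 2 * pi/(4N) = pi/(2N), vanishing as N -> infinity.
rel_err = abs(gamma_c(1000) - math.pi / 2000.0) / (math.pi / 2000.0)
assert rel_err < 1e-5  # asymptote already very accurate at N = 1000
```

A single unit cell tolerates $\gamma_c(1) = 2\sin(\pi/4) = \sqrt{2}$, while at $N = 1000$ the threshold has shrunk to roughly $\pi/2000 \approx 1.6 \times 10^{-3}$, illustrating why large structures leave the physical regime at weak gain/loss.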
quant-ph 2026-05-12 2 theorems

Quantum perceptron keeps 93.9 percent accuracy after 93 percent signal loss

Quantifying the Hadamard Resilience Law: Discovery of the Coherence Gap in NISQ-Era Classifiers

Hadamard Resilience Law survives noise, but phase errors create a gap at 256 features on current hardware.

Figure from the paper
abstract
We report on a fundamental disparity between stochastic noise models and algorithmic performance in NISQ-era classifiers. Utilizing the ibm_kingston processor, we characterize the "Kingston Constant" ($\kappa \approx 0.07$), representing a 93% signal magnitude collapse. Despite this decay, we demonstrate that the Hadamard Test Perceptron maintains a 93.9% MNIST accuracy, validating our proposed Hadamard Resilience Law. However, a systemic divergence -- the "Coherence Gap" ($\Delta\rho \approx 0.91$) -- emerges at high feature depths ($N=256$), where physical hardware collapses while stochastic simulations remain resilient. This gap identifies coherent phase errors, rather than depolarizing noise, as the primary barrier to scaling quantum linear layers. Furthermore, experimental results on the ibm_kingston processor reveal a "Coherence Wall" at $N=256$, where circuit depth ($D \approx 10k$) exceeds the hardware's resilient depth limit ($D_{max} \approx 3.5k$). We provide a refined hardware-aware model that accounts for this coherence-induced signal decay, establishing a predictive boundary for robust quantum linear layers on current NISQ devices.
quant-ph 2026-05-12 Recognition

Critical momenta mark perfect charging in quantum battery modes

Dynamical Criticality Behind Energy-Storage Singularities in Quantum Batteries

In free-fermion models, dynamical quantum phase transitions coincide with maximal normalized energy storage and zero instantaneous power at the critical times.

Figure from the paper
abstract
Energy-storage singularities in quantum batteries are often associated with equilibrium quantum criticality. Here we show that, in quench-driven many-body batteries, such singularities can originate from dynamical criticality in momentum space. Using the transverse-field Ising chain as a representative free-fermion quantum battery, we develop a momentum-resolved description of the charging process. The long-time stored energy forms a dephasing plateau whose dependence on the quench strength becomes nonanalytic when a real dynamical critical momentum emerges. More generally, for free-fermion two-band quantum batteries, each momentum sector acts as an independent coherent charging channel, and the condition for a dynamical quantum phase transition (DQPT) is equivalent to perfect normalized charging of the critical mode. At the critical times, this mode has a vanishing Loschmidt amplitude, maximal normalized stored energy, and zero instantaneous power at the turning point between energy absorption and backflow. We further show that the single-mode charging signal-to-noise ratio (SNR) develops sharp signatures at the same critical times, providing a direct charging-based probe of DQPT. Thus, nonequilibrium criticality does not simply enhance the total stored energy or power, which remain shaped by noncritical modes, but reorganizes energy storage by selecting optimal microscopic charging channels. Our results establish a mode-resolved connection between DQPT and quantum-battery charging, suggesting a route toward controlling many-body energy storage through dynamical criticality.
quant-ph 2026-05-12 2 theorems

Low-depth QAOA outperforms classical methods on hypergraph fairness

Quantum Hypergraph Partitioning

By optimizing distributions over partitions with quadratic objectives, quantum circuits beat SDP baselines on maximin and minimax goals.

Figure from the paper
abstract
Quantum optimization algorithms are inherently probabilistic, yet they are most often used to search for a single high-quality solution. In this paper, we instead study hypergraph partitioning problems in which the desired output is itself a probability distribution over partitions. We introduce a distributional perspective on hypergraph partitioning motivated by maximin and minimax objectives such as Fair Cut Cover, and we show how these objectives align naturally with the measurement distribution produced by QAOA. To motivate the formulation, we introduce a workforce-scheduling-inspired toy problem, the Greatest Expected Imbalance problem, in which the goal is to minimize the worst expected imbalance across hyperedges. We then develop QAOA-based quantum solvers that represent distributional solutions natively through quantum states, together with quadratic hypergraph objectives suitable for standard and multi-objective QAOA. These formulations connect balanced hypergraph partitioning, polarized community discovery, and distributional fairness under a unified quantum optimization framework. For comparison, we provide optimal polynomial-time classical approximation algorithms based on semidefinite programming and hyperplane rounding. Experiments on real-world and synthetic hypergraphs demonstrate that low-depth multi-angle QAOA can outperform these classical approximation baselines on the proposed objectives, highlighting the potential of quantum algorithms for optimization problems where the solution is a distribution rather than a single partition.