pith. machine review for the scientific record.

cs.IT

Information Theory

Covers theoretical and experimental aspects of information theory and coding. Includes material in ACM Subject Class E.4 and intersects with H.1.1.

cs.IT 2026-05-13 Recognition


An Improved Lower Bound on Support Size of Capacity-Achieving Inputs for the Binomial Channel: Extended version

The capacity-achieving input distribution for the binomial channel must have support size at least order sqrt(n log log n).

We study the binomial channel and the structure of its capacity-achieving input and output distributions. It is known that the capacity-achieving input distribution is discrete and supported on finitely many points. The best previously known bounds show that the support size of the capacity-achieving distribution is lower-bounded by a term of order $\sqrt n$ and upper-bounded by a term of order $n/2$, where $n$ is the number of trials. In this work, we derive a new lower bound on the support size of order $\sqrt{n\log\log n}$, up to explicit constants. The proof consists of three main steps. First, we derive new upper and lower bounds on the capacity with a gap that vanishes as $n\to\infty$, which yields $C(n)=\frac12\log\frac{n\pi}{2e}+o(1)$. Second, we show that the Beta-binomial output distribution induced by the reference input $X_r\sim\mathrm{Beta}(1/2,1/2)$ is asymptotically optimal: it approaches the capacity-achieving output distribution in relative entropy and, after a comparison step, in $\chi^2$ divergence. Third, we prove a quantitative $\chi^2$ approximation lower bound showing that this Beta-binomial output cannot be approximated too well by the output induced by a $K$-point input. Combining these ingredients forces the capacity-achieving input distribution to have at least order $\sqrt{n\log\log n}$ mass points.
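The asymptotic capacity expression above can be probed numerically. Below is a minimal sketch, not from the paper: a standard Blahut-Arimoto iteration over a discretized input grid for the binomial channel, compared against $C(n)=\frac12\log\frac{n\pi}{2e}$ in nats. The grid size, iteration count, and function names are illustrative choices.

```python
import math
import numpy as np

def binomial_channel(n, grid=200):
    # W[x, y] = P(Y = y | input x), with Y ~ Binomial(n, x), on an input grid in [0, 1]
    xs = np.linspace(0.0, 1.0, grid)
    ys = np.arange(n + 1)
    return np.array([[math.comb(n, y) * x**y * (1.0 - x)**(n - y) for y in ys] for x in xs])

def blahut_arimoto(W, iters=500):
    # standard Blahut-Arimoto fixed-point iteration; returns capacity in nats
    p = np.full(W.shape[0], 1.0 / W.shape[0])
    for _ in range(iters):
        q = p @ W                                   # induced output distribution
        with np.errstate(divide="ignore", invalid="ignore"):
            D = (W * np.where(W > 0, np.log(W / q), 0.0)).sum(axis=1)
        p = p * np.exp(D)
        p /= p.sum()
    q = p @ W
    with np.errstate(divide="ignore", invalid="ignore"):
        D = (W * np.where(W > 0, np.log(W / q), 0.0)).sum(axis=1)
    return float((p * D).sum())

for n in (5, 20, 80):
    C = blahut_arimoto(binomial_channel(n))
    approx = 0.5 * math.log(n * math.pi / (2.0 * math.e))
    print(f"n={n}: capacity about {C:.4f} nats, asymptotic about {approx:.4f}")
```

As the abstract's $o(1)$ term suggests, the gap to the asymptotic formula shrinks as $n$ grows.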
cs.IT 2026-05-13 2 theorems

Exact repair achieves optimal storage and bandwidth in quantum storage

Simultaneously Minimizing Storage and Bandwidth Under Exact Repair With Quantum Entanglement

The point minimizing both storage alpha and repair bandwidth d beta_q, known for functional repair, remains achievable under exact repair when d is at least 2k-2.

[Figure from the paper]
We study exact-regenerating codes for entanglement-assisted distributed storage systems. Consider an $(n,k,d,\alpha,\beta_{\mathsf{q}},B)$ distributed system that stores a file of $B$ classical symbols across $n$ nodes with each node storing $\alpha$ symbols. A data collector can recover the file by accessing any $k$ nodes. When a node fails, any $d$ surviving nodes share an entangled state, and each of them transmits a quantum system of $\beta_{\mathsf{q}}$ qudits to a newcomer. The newcomer then performs a measurement on the received quantum systems to generate its storage. Recent work [1] showed that, under functional repair where the regenerated content may differ from that of the failed node, there exists a unique optimal regenerating point that \emph{simultaneously minimizes both storage $\alpha$ and repair bandwidth $d \beta_{\mathsf{q}}$} when $d \geq 2k-2$. In this paper, we show that, under \emph{exact repair}, where the newcomer reproduces exactly the same content as the failed node, this optimal point remains achievable. Our construction builds on the classical product-matrix framework and the Calderbank-Shor-Steane (CSS)-based stabilizer formalism.
cs.IT 2026-05-13 Recognition

Finite-field angular metric yields unique projective decoding

Angle Between Two Vectors over Finite Fields and an Application to Projective Unique Decoding

angle_H(u, C) below d/2 guarantees a single closest direction in the code up to scaling, after the function descends to a true metric on the projective space.

We introduce a Hamming-type angular function $$\mathrm{angle}_H(u,v):= \min_{c \in \mathbb{F}_q \setminus \{0\}} d_H(u, cv)$$ on pairs of nonzero vectors in $\mathbb{F}_q^n$ and show that it satisfies all three metric axioms up to scalar multiplication. The function $\mathrm{angle}_H$ is invariant under nonzero scalar multiplication in either argument and therefore descends to a genuine integer-valued metric on the projective space $\mathbb{P}(\mathbb{F}_q^n)$. As a concrete application, we prove an \emph{angular} (or \emph{projective}) version of the unique-decoding theorem for linear codes: if $\mathrm{angle}_H(u, C\setminus\{0\}) < d/2$, where $d$ is the minimum distance of the linear code $C$, then the closest direction in $C$ to $u$ is unique up to nonzero scalar multiplication. We then discuss how this angular viewpoint relates to the proximity-gap programme for Reed--Solomon codes. To the best of our knowledge, this is the first attempt to define an angle notion for vectors over finite fields and interpret it from several perspectives, including geometry, coding theory, and cryptography.
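The definition is easy to experiment with. The sketch below is my own illustration, not the paper's code: it implements $\mathrm{angle}_H$ over a prime field by minimizing over nonzero scalars, then spot-checks scale invariance, symmetry, and the triangle inequality on random vectors; the field size and sample counts are arbitrary.

```python
import itertools
import random

def d_H(u, v):
    # Hamming distance between equal-length tuples
    return sum(a != b for a, b in zip(u, v))

def angle_H(u, v, q):
    # minimum over nonzero scalars c in F_q (q prime) of d_H(u, c * v)
    return min(d_H(u, tuple((c * x) % q for x in v)) for c in range(1, q))

q, n = 5, 6
rng = random.Random(0)
vecs = [tuple(rng.randrange(q) for _ in range(n)) for _ in range(30)]
vecs = [v for v in vecs if any(v)]          # keep nonzero vectors only

u, v = vecs[0], vecs[1]
for c in range(1, q):                        # invariance under nonzero scaling
    assert angle_H(tuple((c * x) % q for x in u), v, q) == angle_H(u, v, q)

for a, b, w in itertools.islice(itertools.combinations(vecs, 3), 200):
    assert angle_H(a, b, q) == angle_H(b, a, q)                      # symmetry
    assert angle_H(a, w, q) <= angle_H(a, b, q) + angle_H(b, w, q)   # triangle
print("scale invariance, symmetry, and triangle inequality hold on the sample")
```

Note that excluding $c=0$ matters: allowing it would compare against the zero vector and break symmetry.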
cs.IT 2026-05-13 2 theorems

Secure key rate equals d-ℓ in d-vertex-connected networks

Secure (Multiple) Key-Cast over Networks: Multiple Eavesdropping Nodes

This rate is achievable and optimal for key-cast when sources reach all nodes via d vertex-disjoint paths despite ℓ node observers.

[Figure from the paper]
We study the secure multiple key-cast problem over noiseless networks under node-based eavesdroppers, where one or more source nodes participate in the generation of distinct secret keys to be shared among designated terminal subsets, while an eavesdropper observing up to $\ell$ nodes, including possibly source nodes, obtains no information about the keys. For the single-source setting, we first consider networks in which every node is $d$-vertex connected from the source. We show that a secure key rate of $d-\ell$ is achievable for all such networks. We further show that this rate is optimal by exhibiting $d$-vertex-connected networks whose secure key-cast capacity is at most $d-\ell$. We next study networks in which only the terminal nodes are $d$-vertex connected from the source, while other network nodes may not satisfy this connectivity condition and may be partially-connected. We show that secure multiple key-cast remains achievable in the presence of such partially-connected nodes, and derive coding schemes whose rate depends on the minimum network vertex-connectivity from the source and certain additional network properties. Finally, we generalize these results, for both $d$-vertex-connected networks and networks containing partially-connected nodes, to the multi-source setting; showing that secure multiple key-cast remains achievable even when the eavesdropper may observe all but one of the source nodes.
cs.IT 2026-05-13 Recognition

CNN spots boundaries in overlapping wireless commands

A Deep Learning-based Receiver for Asynchronous Grant-Free Random Access in Control-to-Control Networks

Single neural network enables reliable decoding and low packet loss in uncoordinated high-traffic control networks.

[Figure from the paper]
In this paper, we study grant-free, asynchronous control-to-control (C2C) communications in an indoor scenario with a shared wireless channel. Each communication node transmits command units, each consisting of a variable-length low-density parity-check (LDPC)--coded payload preceded by a start sequence and followed by a tail sequence. Due to the asynchronous nature of the access, transmissions from different nodes are not aligned over time. As a result, each receiving controller observes the superposition of multiple command units transmitted by different nodes over a receiver-defined superframe interval. Each node transmits one or more replicas of the same command unit. We propose a receiver architecture in which the detection of command unit boundaries (start/tail sequences) is carried out by a single convolutional neural network (CNN) operating directly on the received signal. We show that, while start-sequence detection must rely only on the received waveform, tail-sequence detection can additionally exploit the soft information produced by the LDPC decoder, together with channel estimates. Finally, once command units are successfully decoded, successive interference cancellation (SIC) can be applied. Simulation results demonstrate that the receiver we propose achieves reliable packet-boundary identification and a low end-to-end packet loss rate, even under uncoordinated and high-traffic operating conditions.
cs.IT 2026-05-13 2 theorems

Link failures cap LEO capacity scalability at O(1/n)

Capacity Scalability of LEO Constellations With Dynamic Link Failures

Under a Markov model of ISL states, scalability rises to an optimum then falls toward zero as satellite count grows.

[Figure from the paper]
Dynamic link failures disrupt the connectivity and geometric symmetry of the constellation structure, thereby increasing protocol overhead and degrading the effective capacity for traffic transport. The fundamental relationship between constellation size and effective capacity under protocol overhead constraints remains unclear. To this end, we define capacity scalability as the ratio of constellation capacity under non-failure conditions to protocol overhead. Specifically, if ISL states follow a two-state discrete Markov chain and the maintenance period is $k \geq 1$, the upper bound of capacity scalability under the uniform traffic pattern is $O(1/n)$, where $n$ is the number of satellites. With perfect information about the constellation topology, the upper bound can be achieved via shortest-path routing. For any given protocol, there exists an optimal constellation deployment scale in terms of capacity scalability. When the constellation size is below this optimum scale, capacity scalability increases with constellation size, thereby improving effective capacity. Increasing the maintenance period $k$ can improve capacity scalability, but it does not change the fact that the capacity scalability converges to zero when the constellation size exceeds the optimal scale.
cs.IT 2026-05-13 Recognition

Framework extends non-GRS MDS-NMDS codes via deep holes

A framework for constructing non-GRS MDS-NMDS codes from deep holes and its application

Codes with covering radius n-k yield new length-n+1 non-GRS families while preserving MDS/NMDS distance and avoiding GRS equivalence.

Maximum distance separable (MDS) codes and near MDS (NMDS) codes are of particular interest in coding theory due to their optimal error-correcting capabilities and wide applications in communication, cryptography, and storage systems. A family of linear codes is called a family of non-GRS MDS-NMDS codes if for each $[n,k]_q$ code in the family, it is either an $[n,k,n-k+1]_q$ MDS code that is not monomially equivalent to any GRS code or extended GRS code, or an $[n,k,n-k]_q$ NMDS code. This paper develops a unified framework for constructing new families of non-GRS MDS-NMDS codes via deep holes. We show that, starting from a family of $[n,k]_q$ non-GRS MDS-NMDS codes with covering radius $n-k$, one can systematically obtain more $[n+1,k+1]_q$ non-GRS MDS-NMDS codes. The proposed framework is further reformulated in terms of the second kind of extended codes. This reformulation recovers a main result of Wu, Ding, and Chen (IEEE Trans. Inf. Theory, 71(1): 263-272, 2025), provides a provable reduction in the computational complexity compared with the approach of Ma, Kai, and Zhu (Finite Fields Appl., 114, 102844, 2026), and reveals additional structural properties of the resulting codes. As an application, we determine the covering radius and characterize two classes of deep holes of extended subcodes of GRS codes. By applying our framework, we obtain three new families of non-GRS MDS-NMDS codes and investigate the monomial equivalence between the resulting codes and Roth-Lempel codes.
cs.IT 2026-05-13 Recognition

Exact Hamming distances derived for binary polycyclic codes from trinomials

On the Hamming Distance and LCD Properties of Binary Polycyclic Codes and Their Duals

The codes are reversible and LCD for certain exponents, producing optimal binary linear codes at larger lengths.

Polycyclic codes offer a natural generalization of cyclic codes and provide a broader algebraic framework for constructing linear codes with good parameters. In this paper, we study binary polycyclic codes associated with powers of irreducible polynomials. We first determine their complete algebraic structure and then develop general results on their minimum Hamming distance, including several exact values and bounds. We also examine the Euclidean duals of these codes and derive corresponding results on the Hamming distance of the dual codes. Furthermore, we study the LCD (linear complementary dual) properties of binary polycyclic codes, establish necessary and sufficient conditions for such codes to be LCD codes, and construct several families of binary LCD codes. Our constructions also yield many optimal and LCD optimal binary linear codes, including codes of larger lengths. We then focus on binary polycyclic codes associated with powers of the self-reciprocal irreducible trinomials $x^{2\cdot3^v}+x^{3^v}+1$, where $v\geq0$. For this class, we determine the exact Hamming distance of all such codes and show that these codes are reversible. Moreover, we show that these codes are LCD codes in certain cases. In addition, we propose a conjecture asserting that all binary polycyclic codes associated with $\big(x^{2\cdot3^v}+x^{3^v}+1\big)^{2^\mathcal{T}}$, where $v\geq 0$ and $\mathcal{T}\geq1$, are LCD codes. These results demonstrate that binary polycyclic codes form a rich source of structured codes with strong distance, duality, reversibility, and LCD properties.
cs.IT 2026-05-13 2 theorems

Node failures scale wireless capacity and delay with sqrt of reliable nodes

On Capacity and Delay of Wireless Networks with Node Failures

Even with the same number of active nodes, a failing network has lower capacity than a failure-free one and needs extra redundancy to match.

[Figure from the paper]
One key challenge in designing resilient large-scale wireless ad hoc networks is to understand how random node failures affect fundamental network performance. In this work, we show that both network capacity and delay scale as $\Theta\left(\sqrt{\frac{n(1-q)}{\log n}}\right)$, where $n$ is the total number of nodes and $q$ is the node failure probability. The network capacity reduces to the classical result given by P. Gupta and P. R. Kumar when $q=0$. Based on these results, we find that even with the same number of non-faulty nodes, a network with $n$ nodes and node failure probability $q$ has lower network capacity than a failure-free network with $n(1-q)$ nodes. To compensate for the network capacity loss caused by random node failures, at least $\epsilon(n,q) nq$ redundant nodes are required, where $\epsilon(n,q)>1$. We further prove that the optimal trade-off between network capacity and delay remains $O(1)$ regardless of node failures, implying that high network capacity and low delay cannot be achieved simultaneously. These results demonstrate robustness against stochastic variations in wireless channels.
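To see the claimed gap concretely, here is a toy arithmetic sketch (mine, with the constants and hidden factors inside $\Theta(\cdot)$ dropped): the scaling term for a failure-prone network with $n$ nodes versus a failure-free network with the same number $n(1-q)$ of live nodes.

```python
import math

def capacity_order(num_nodes, q=0.0):
    # the Theta(sqrt(n(1-q)/log n)) scaling term, constants dropped
    return math.sqrt(num_nodes * (1.0 - q) / math.log(num_nodes))

n, q = 10**6, 0.3
failing = capacity_order(n, q)                      # n deployed nodes, failure prob q
failure_free = capacity_order(int(n * (1 - q)))     # same live-node count, no failures
print(failing, failure_free)
```

The $\log n$ in the denominator is taken over the deployed node count, so the failing network scores lower even though both have the same number of working nodes, matching the summary above.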
cs.IT 2026-05-13 2 theorems

Adversarial test error bounds match exponentially in S

Memory Constrained Adversarial Hypothesis Testing

Upper and lower bounds on the asymptotic minimax error share the same exponential dependence on the number of states and coincide for a class of problems.

[Figure from the paper]
We study adversarial binary hypothesis testing under memory constraints. The test is a time-invariant randomized finite state machine (FSM) with S states. Associated with each hypothesis is a set of distributions. Given the hypothesis, the distribution of each sample is chosen from the set associated with the hypothesis by an adversary who has access to past samples and the history of states of the FSM so far. We obtain upper and lower bounds on the minimax asymptotic probability of error as a function of S. The bounds have the same exponential behaviour in S and match for a class of problems.
cs.IT 2026-05-13 2 theorems

CR2 router matches full-info LLM decisions with device signals only

CR²: Cost-Aware Risk-Controlled Routing for Wireless Device-Edge LLM Inference

Margin gate and risk calibration cut wireless deployment costs by up to 16.9% at matched accuracy.

[Figure from the paper]
As large language models (LLMs) move from centralized clouds to mobile edge environments, efficient serving must balance latency, energy consumption, and accuracy under constrained device-edge resources. Query-level routing between lightweight on-device models and stronger edge models provides a flexible mechanism to navigate this trade-off. However, existing routers are designed for centralized cloud settings and optimize token-level costs, failing to capture the dynamic latency and energy overheads in wireless edge deployments. In this paper, we formulate mobile edge LLM routing as a deployment-constrained, cost-aware decision problem, and propose CR^2, a two-stage device-edge routing framework. CR^2 decouples a lightweight on-device margin gate from an edge-side utility selector for deferred queries. The margin gate operates on frozen query embeddings and a user-specified cost weight to predict whether local execution is utility-optimal relative to the best edge alternative under the target operating point. We further introduce a conformal risk control (CRC) calibration procedure that maps each operating point to an acceptance threshold, enabling explicit control of the marginal false-acceptance risk under the full-information utility reference. Experiments on the routing task show that CR^2 closely matches a full-information reference router using only device-side signals before deferral. Compared with strong query-level baselines, CR^2 consistently improves the deployable accuracy-cost Pareto frontier and reduces normalized deployment cost by up to 16.9% at matched accuracy.
cs.IT 2026-05-13 Recognition

Conditional subadditivity tightens Szász and Fischer matrix bounds

From Submodularity to Matrix Determinants: Strengthening Han's, Szász's, and Fischer's Inequalities

Strengthened inequalities recover Ky Fan's as a special case and are strictly stronger than classical versions for non-diagonal positive definite matrices.

Dembo, Cover, and Thomas (1991) developed an elegant information-theoretic framework for proving determinantal inequalities for positive definite matrices, which relies on the structural inequalities of differential entropy. Submodular functions, which subsume entropy, inherently satisfy these structural inequalities because they obey generalized forms of the fundamental properties of entropy -- a chain rule and the property that conditioning reduces the function's value (under an appropriate definition of conditioning). Applying subadditivity, Han's inequality (1978), and partition subadditivity (i.e., subadditivity over a partition) yields Hadamard's, Sz\'asz's, and Fischer's inequalities, respectively. Furthermore, this framework recovers Ky Fan's inequality (1955), a strengthening of Hadamard's inequality. This improvement fundamentally arises because conditional subadditivity yields a tighter upper bound on the joint entropy than the one obtained via unconditional subadditivity. In this paper, we establish conditional strengthenings of Han's inequality and partition subadditivity in the general setting of submodular functions. We derive equality conditions for these strengthened bounds and characterize when they strictly improve their unconditional counterparts. We specialize these results to differential entropy and apply them to establish strengthened versions of Sz\'asz's and Fischer's inequalities. The strengthening of Sz\'asz's inequality recovers Ky Fan's inequality as a special case, and is strictly stronger than the classical Sz\'asz's inequality for any non-diagonal positive definite matrix. We also derive an inequality concerning eigenvalues, which generalizes and strictly strengthens a corresponding eigenvalue inequality of Ky Fan. We provide numerical examples to explicitly illustrate the tightness of our proposed matrix determinantal bounds.
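As a quick numerical companion (my sketch, not the paper's code), the classical chain mentioned above can be checked on a random positive definite matrix: Hadamard, Fischer, and the full Szász chain, where $P_k^{1/\binom{n-1}{k-1}}$ is non-increasing in $k$, with $P_k$ the product of all $k\times k$ principal minors. Matrix size and the block partition are arbitrary choices.

```python
from itertools import combinations
from math import comb

import numpy as np

rng = np.random.default_rng(0)
n = 6
M = rng.standard_normal((n, n))
A = M @ M.T + n * np.eye(n)                 # positive definite, non-diagonal
det = np.linalg.det
minor = lambda S: det(A[np.ix_(S, S)])      # principal minor on index set S

# Hadamard: det(A) <= product of diagonal entries
assert det(A) <= np.prod(np.diag(A))

# Fischer: det(A) <= det(A_SS) * det(A_TT) for a partition S, T
S, T = [0, 1, 2], [3, 4, 5]
assert det(A) <= minor(S) * minor(T)

# Szász: P_k^(1 / C(n-1, k-1)) is non-increasing from P_1 (Hadamard side)
# down to P_n = det(A)
P = [np.prod([minor(list(s)) for s in combinations(range(n), k)])
     for k in range(1, n + 1)]
chain = [P[k - 1] ** (1.0 / comb(n - 1, k - 1)) for k in range(1, n + 1)]
assert all(chain[i] >= chain[i + 1] - 1e-9 for i in range(n - 1))
print("Hadamard, Fischer, and Szász verified:", [round(v, 3) for v in chain])
```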
cs.IT 2026-05-13 2 theorems

Explicit generators classify all constacyclic codes of length np^s

Constacyclic codes of length $np^s$ over $\frac{\mathbb{F}_{p^m}[u]}{\langle u^t\rangle}$: Torsions and Cardinalities

When the shift constant delta is a unit, every ideal in the quotient ring is generated explicitly, with full lists, torsional degrees, and cardinalities for n = 1, 2, 3 and t = 3.

The purpose of this article is to study constacyclic codes of length $np^s$ over $R^t:=\frac{\mathbb{F}_{p^m}[u]}{\langle u^t \rangle },$ where $t$ is a natural number and $\gcd(n,p)=1$. We give generators of all the ideals of $R^{t,n}_{\delta}:=\frac{R^t[x]}{\langle x^{np^s}-\delta \rangle},$ where $\delta= \delta_0+u\delta_1+\dots+u^{t-1}\delta_{t-1}$ is a unit in $R^t$. For $n=1,\ 2, \ 3$ and $t=3$, we provide all types of ideals (constacyclic codes) and also give the torsional degrees as well as cardinalities of these codes.
cs.IT 2026-05-13 Recognition

Ternary sum entropy maximized by mostly binary variables

Maximum Entropy of Sums of Independent Ternary Random Variables

The maximum occurs when n-1 variables are uniform on {0,2} and the last one's middle probability is set by the difference of two binomial-(1/2) entropies.

[Figure from the paper]
The classical problem of maximizing the Shannon entropy of a sum of independent random variables supported on a finite alphabet is considered and settled in the ternary case. Namely, the following theorem is established: if \(X_1,\ldots,X_n\) are independent random variables taking values in \(\{0,1,2\}\), then the entropy of \(S_n=X_1+\cdots+X_n\) is maximized when \(X_1,\ldots,X_{n-1}\) are uniform on \(\{0,2\}\) and the probability mass function of \(X_n\) is given by \(\Prob(X_n=0) = \Prob(X_n=2) = w/2\), \(\Prob(X_n=1) = 1-w\), where \(w = \big(1 + 2^{-H(B_n)+H(B_{n-1})}\big)^{-1}\) and \(B_m\sim \Bin(m,1/2)\). The statement can be seen as an extension to ternary alphabets of the Shepp--Olkin--Mateev theorem. The proof uses the Hermite--Biehler theorem, Newton's inequalities, and Yu's maximum-entropy theorem for ultra-log-concave distributions.
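The theorem's maximizer is easy to evaluate for small $n$. The sketch below is illustration only ($n=4$ and the competitor distributions are my choices): it computes $w$ from the binomial entropies and compares the entropy of $S_n$ under the claimed maximizer against a few alternatives.

```python
import math
import random

def entropy(p):
    # Shannon entropy in bits
    return -sum(x * math.log2(x) for x in p if x > 0)

def convolve(p, q):
    r = [0.0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            r[i + j] += a * b
    return r

def sum_entropy(dists):
    # entropy of the sum of independent variables with the given pmfs on {0,1,2}
    s = [1.0]
    for d in dists:
        s = convolve(s, d)
    return entropy(s)

def H_binom(m):
    # entropy in bits of Binomial(m, 1/2)
    return entropy([math.comb(m, k) / 2.0**m for k in range(m + 1)])

n = 4
w = 1.0 / (1.0 + 2.0 ** (-H_binom(n) + H_binom(n - 1)))
uniform02 = [0.5, 0.0, 0.5]                 # uniform on {0, 2}
last = [w / 2, 1 - w, w / 2]                # the special last variable
H_opt = sum_entropy([uniform02] * (n - 1) + [last])

# competitors: all uniform on {0,1,2}, all uniform on {0,2}, random pmfs
H_u3 = sum_entropy([[1 / 3, 1 / 3, 1 / 3]] * n)
H_u02 = sum_entropy([uniform02] * n)
rng = random.Random(0)
H_rand = 0.0
for _ in range(100):
    ds = []
    for _ in range(n):
        a = [rng.random() for _ in range(3)]
        ds.append([x / sum(a) for x in a])
    H_rand = max(H_rand, sum_entropy(ds))
print(round(H_opt, 4), round(H_u3, 4), round(H_u02, 4), round(H_rand, 4))
```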
cs.IT 2026-05-13 Recognition

Polar complexity enables lossless codes approaching source entropy

Polar Complexity: A New Descriptive Complexity with Applications to Source and Joint Source-Channel Coding

Sequences are encoded by their minimum polar-compressed length plus the compressed bits in a prefix-free scheme that needs no source model.

[Figure from the paper]
This paper first presents a new approach to evaluating the descriptive complexity of finite-length binary sequences. Specifically, we investigate the sequence-wise recovery behavior induced by polar compression and successive cancellation decoding (SCD), and define the polar complexity of a sequence as the minimum polar-compression length (PCL) required for its exact reconstruction. To compute the polar complexity efficiently, we further develop both a bisection-search algorithm and a low-complexity estimation method. We then propose a polar-based two-stage source coding scheme, in which each source sequence is represented by its polar complexity followed by the corresponding polar-compressed sequence. The proposed scheme is strictly lossless and prefix-free. In addition, for binary memoryless sources (BMSs), the normalized average compression length of the proposed scheme can asymptotically approach the source entropy under certain conditions. Simulation results further demonstrate that the scheme can operate without prior knowledge of the source statistics and remains robust across different source distributions. Finally, we integrate the proposed polar source coding with polar channel coding to develop an adaptive double-polar joint source-channel coding (JSCC) scheme, where the encoder and decoder share a predefined set of candidate PCLs to balance error performance and decoding complexity. We formulate the design of the candidate-PCL set as an optimization problem and solve it efficiently via dynamic programming. Simulation results show that the proposed adaptive double-polar JSCC scheme provides a flexible performance-complexity tradeoff and outperforms existing polar-code-based JSCC baselines.
cs.IT 2026-05-13 Recognition

Random coding gives exact finite-length bound for empirical coordination

Empirical coordination in the finite blocklength regime: an achievability result---Extended version

Method of types produces both precise and second-order rate expressions for agents to match target action distributions with short blocks.

[Figure from the paper]
Empirical coordination offers a way to understand how agents can coordinate actions under communication constraints. This paper investigates the finite blocklength regime of this problem, where the encoder and decoder aim to produce a sequence of action pairs that is jointly typical with respect to a target distribution. Adopting Shannon's random coding argument and leveraging the method of types, we analyze the average performance of a random codebook to establish an achievability result. The resulting bound on the optimal rate is presented both in exact form and as an asymptotic expansion, aligning with the prevailing characterizations in the finite blocklength literature. This work extends finite blocklength analysis to the empirical coordination setting, complementing existing results on strong coordination.
cs.IT 2026-05-13 Recognition

Constacyclic DFT extends cyclic square methods to Schur products

Schur Products of Constacyclic Codes via the Constacyclic Discrete Fourier Transform

Pattern polynomials and additive combinatorics yield dimension formulas for arbitrary constacyclic code products.

This paper investigates the Schur product of constacyclic codes via the constacyclic discrete Fourier transform (DFT). We first characterize key properties of the constacyclic DFT, highlighting its differences from the ordinary DFT. We then extend the concept of degenerate cyclic codes to constacyclic codes possessing a nontrivial pattern polynomial, thereby facilitating the analysis of their dimension sequences. Building on these tools, we generalize two established methods for computing the square of cyclic codes to compute the Schur product of arbitrary constacyclic codes. Finally, exploiting the inherent combinatorial structure, we derive properties of the Schur product dimension directly from additive combinatorics.
cs.IT 2026-05-13 2 theorems

Heterogeneous quantization matches full-precision MIMO detection with fewer bits

Performance of QUBO-Formulated MIMO Detection Under Hardware Precision Constraints

Derived distributions and sufficient-precision bounds let non-uniform bit allocation preserve error rates across antenna counts and modulation orders up to 256-QAM.

[Figure from the paper]
The evolution of multiple-input, multiple-output (MIMO) systems requires efficient detection algorithms to overcome the exponential computational complexity of optimal maximum likelihood detection. Reformulating MIMO detection as a quadratic unconstrained binary optimization (QUBO) problem enables the use of highly parallel, physics-inspired, hardware-accelerated solvers and non-von Neumann architectures. However, embedding continuous-valued QUBO coefficients into hardware introduces quantization noise due to finite precision, which can severely degrade detection accuracy. This paper presents a rigorous analysis of the performance impact of finite-precision, hardware-accelerated QUBO solvers in MIMO detection. We analytically derive the probability distribution functions of the QUBO matrix entries and introduce novel homogeneous and heterogeneous quantization schemes based on either instantaneous channel state information or its statistical features. We further derive a sufficient condition on the precision required to maintain the optimal solution after quantization. Extensive numerical experiments, across various MIMO system sizes and modulation orders (up to 256-QAM), show that heterogeneous quantization matches the full-precision baseline bit error rate using significantly fewer bits than homogeneous approaches. We provide hardware-aware guidelines for selecting the optimal quantization strategy.
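To make the QUBO reformulation concrete, here is a small self-contained sketch (mine, not the paper's code): BPSK MIMO ML detection mapped to a QUBO via $x = 2b - 1$, solved by brute force, plus a crude uniform coefficient quantizer of the kind the precision analysis concerns. The antenna count, noise level, and bit width are illustrative assumptions.

```python
import itertools
import numpy as np

rng = np.random.default_rng(1)
Nt = Nr = 4
H = rng.standard_normal((Nr, Nt))
x_true = rng.choice([-1.0, 1.0], size=Nt)           # BPSK symbols
y = H @ x_true + 0.1 * rng.standard_normal(Nr)

def ml_detect(H, y):
    # exhaustive maximum-likelihood search over BPSK vectors
    cands = [np.array(t) for t in itertools.product([-1.0, 1.0], repeat=H.shape[1])]
    return min(cands, key=lambda x: float(np.sum((y - H @ x) ** 2)))

def qubo_matrix(H, y):
    # with x = 2b - 1, ||y - Hx||^2 = b^T Q b + const; linear terms are
    # folded into the diagonal since b_i^2 = b_i for binary b
    G, c = H.T @ H, H.T @ y
    Q = 4.0 * G.copy()
    Q[np.diag_indices_from(Q)] -= 4.0 * G.sum(axis=1) + 4.0 * c
    return Q

def qubo_argmin(Q):
    bs = [np.array(t) for t in itertools.product([0.0, 1.0], repeat=len(Q))]
    return min(bs, key=lambda b: float(b @ Q @ b))

def quantize(Q, nbits):
    # uniform scalar quantization of all QUBO coefficients to nbits
    lo, step = Q.min(), (Q.max() - Q.min()) / (2 ** nbits - 1)
    return lo + np.round((Q - lo) / step) * step

Q = qubo_matrix(H, y)
x_qubo = 2 * qubo_argmin(Q) - 1
print("ML == QUBO:", np.array_equal(ml_detect(H, y), x_qubo))
print("8-bit QUBO matches:", np.array_equal(x_qubo, 2 * qubo_argmin(quantize(Q, 8)) - 1))
```

The heterogeneous schemes in the paper allocate different bit widths per coefficient; the uniform quantizer above is only the homogeneous baseline.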
cs.IT 2026-05-13 Recognition

Floating-point entropy stays nearly constant under scaling

The Entropy of Floating-Point Numbers

An approximation links the floating-point entropy to the underlying continuous distribution and shows that the entropy changes little when the variable is rescaled.

[Figure from the paper]
Here we present an analytic approximation for the entropy of floating-point numbers, along with bounds on the error of this approximation. It is well-known that the differential entropy is tightly linked to the discrete entropy of a uniformly quantized random variable. Our approximation uncovers a different quantity that provides this link for floating-point quantization. Additionally, we prove that the entropy of a floating-point quantized random variable is approximately unchanged under scaling. Closed-form expressions for the floating-point entropy of common distributions are provided and compared to exact results.
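The scaling claim can be observed empirically. A minimal sketch (my illustration; sample size, dtype, and scale factors are arbitrary choices): quantize Gaussian samples to IEEE float16 and compare the empirical entropy before and after rescaling. A power-of-two scale is an exact exponent shift away from overflow and subnormals, while a generic scale changes the entropy only slightly.

```python
import numpy as np

def empirical_entropy_bits(samples):
    # plug-in entropy estimate of the quantized sample distribution
    _, counts = np.unique(samples, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

rng = np.random.default_rng(0)
x = rng.standard_normal(200_000)

h      = empirical_entropy_bits(np.float16(x))
h_pow2 = empirical_entropy_bits(np.float16(2.0 * x))   # exponent shift: near-exact invariance
h_gen  = empirical_entropy_bits(np.float16(1.7 * x))   # generic scale: approximately unchanged
print(round(h, 4), round(h_pow2, 4), round(h_gen, 4))
```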
cs.IT 2026-05-13 2 theorems

Spatially coupled codes deliver strong performance for future systems

Recent Advances in Spatially Coupled Codes: Overview and Outlook

Review covers recent constructions, their properties, and design methods that make them attractive for communications and storage.

[Figure from the paper]
The concept of spatial coupling is among the most significant breakthroughs in coding theory over the past decade. The excellent waterfall and error floor performance of spatially coupled codes has positioned them as promising coding candidates for future communication and data storage systems. This article presents an overview of recent advances in spatially coupled codes. In particular, we first review several representative examples of recently proposed spatially coupled codes and highlight their unique features that make them appealing for different applications. Next, we discuss the useful properties of spatially coupled codes and how to design good spatially coupled codes. The article concludes with some future research directions and open problems.
cs.IT 2026-05-13 1 theorem

Decoder fixes deletions plus insertions in quantum codes

Decoding Algorithm to Composite Errors Consisting of Deletions and Insertions for Quantum Deletion-Correcting Codes Based on Quantum Reed-Solomon Codes

Hagiwara codes from quantum Reed-Solomon codes gain a practical algorithm for composite errors.

[Figure from the paper]
This paper focuses on Hagiwara codes, which are quantum deletion-correcting codes constructed from quantum Reed-Solomon codes. Although Hagiwara codes can correct composite errors consisting of deletions and insertions, an efficient decoding algorithm for such errors remains an open problem. In this paper, we provide such a decoding algorithm for Hagiwara codes.
cs.IT 2026-05-13 2 theorems

Rational functions produce better optimal LRCs than polynomials

Beyond Polynomials: Optimal Locally Recoverable Codes from Good Rational Functions

Galois counting of split places in function fields yields infinite families with improved locality and distance parameters.

Locally recoverable codes (LRCs) have emerged as fundamental objects in modern coding theory, primarily due to their pivotal role in distributed and cloud storage systems. A major breakthrough in their construction was achieved by Tamo and Barg, who introduced the notion of \emph{good polynomials} as a key structural ingredient. In this article, we propose a natural generalization of this paradigm by introducing the concept of \emph{good rational functions}. Building upon this extension, we develop a unified and flexible framework for constructing optimal LRCs. To quantify the quality of a rational function, we embed the problem into the rich context of algebraic function field theory and Galois theory. This perspective allows us to extend the Galois-theoretic framework originally developed by Micheli for good polynomials. In particular, we derive structural and quantitative results on the number of totally split rational places associated with rational functions. Furthermore, we construct explicit families of good rational functions that outperform all good polynomials of the same degree. As a consequence, we obtain infinite families of optimal LRCs with improved parameters compared to those arising from the classical Tamo-Barg construction. These results highlight the intrinsic strength of our approach.
cs.IT 2026-05-13 1 theorem

Infinite families of optimal codes with positive Griesmer defects

Optimal Codes with Positive Griesmer Defects, Related Optimal and Almost Optimal LRC Codes

New constructions yield codes not equivalent to known classes; some reach the CM bound exactly as locality-two LRCs.

Solomon and Stiffler constructed infinitely many families of linear codes meeting the Griesmer bound in 1965. It has been known since the 1990s that certain Griesmer codes (codes with zero Griesmer defect) are equivalent to Solomon-Stiffler codes or Belov codes. Griesmer codes constructed in some recent papers published in IEEE Trans. Inf. Theory are actually Solomon-Stiffler codes or the affine Solomon-Stiffler codes proposed in our previous paper. It is therefore more challenging to construct optimal codes with positive Griesmer defects. In this paper, we construct several infinite families of optimal codes with positive Griesmer defects; these codes are consequently not equivalent to Solomon-Stiffler codes or Belov codes. Weight distributions and subcode support weight distributions of these optimal codes are determined. On the other hand, some of the constructed optimal linear codes are optimal locally recoverable codes (LRCs) meeting the Cadambe-Mazumdar (CM) bound, and others are very close to the CM bound. The locality of these optimal or almost-optimal LRCs is two.
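The Griesmer defect discussed above is straightforward to compute. A minimal sketch (not from the paper): the Griesmer bound requires an $[n,k,d]_q$ code to satisfy n >= sum_{i=0}^{k-1} ceil(d/q^i), and the defect is the gap between the actual length and that bound.

```python
import math

def griesmer_bound(k, d, q):
    """Smallest length n permitted by the Griesmer bound for an [n, k, d]_q code:
    n >= sum_{i=0}^{k-1} ceil(d / q**i)."""
    return sum(math.ceil(d / q**i) for i in range(k))

def griesmer_defect(n, k, d, q):
    """Actual length minus the Griesmer bound; zero exactly for Griesmer codes."""
    return n - griesmer_bound(k, d, q)

# The binary [7, 4, 3] Hamming code is a Griesmer code (defect 0):
print(griesmer_defect(7, 4, 3, 2))  # 0
```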
cs.IT 2026-05-13 2 theorems

Polar codes gain exact leakage certification for public coordinate subsets

RankGuard-Polar: Private/Public Finite-Length Polar Codes with Rank-Certified Leakage

A linear extractor isolates leaked combinations, enabling safe publication over shared binary erasure channels.

We introduce \textbf{RankGuard-Polar}, a framework for safely publishing a subset of polar codeword coordinates over shared public resources. We assume a strong eavesdropper who has access to the channel input, i.e., the transmitted codeword coordinates published on a public resource access model. Working over \(\mathbb F_2\) and focusing on time-shared public/private BEC uses, we show that leakage from a published index set \(\mathbf{P}\) admits an exact algebraic characterization from an information-theoretic viewpoint, and we construct an explicit linear extractor ($R$) that identifies the leaked linear combinations. Building on this identity, we (i) give efficient procedures to compute and certify leakage for any \(\mathbf{P}\), and (ii) propose a practical fast algorithm with provable efficiency.
cs.IT 2026-05-13 Recognition

Outputs alone recover channel parameters and optimal inputs

Parameter Estimation of Mutual Information Maximized Channels

Two algorithms enforce mutual-information optimality to jointly estimate both the unknown channel and its capacity-achieving input from i.i.d. channel output observations.

We study the problem of estimating a parametric discrete memoryless channel \( p(y \mid x; \boldsymbol{\theta}) \) when the transmitter selects its input distribution \( \pi \) to maximize mutual information under the true parameter \( \boldsymbol{\theta}^* \). Using only i.i.d.\ observations of the channel output, we aim to jointly estimate the capacity-achieving input distribution \( \boldsymbol{\pi}^* \) and the true channel parameter \( \boldsymbol{\theta}^* \). In general, recovery of \( \boldsymbol{\pi}^* \) and \( \boldsymbol{\theta}^* \) can be challenging. To that end, we propose two efficient algorithms based on the Blahut--Arimoto (BA) optimality conditions: (i) a bilevel fixed-point method and (ii) an augmented Lagrangian method. Empirical results demonstrate that both proposed algorithms successfully recover the true \( \boldsymbol{\theta}^* \) and \( \boldsymbol{\pi}^* \), whereas a naive maximum-likelihood approach that ignores the mutual-information maximization constraint fails to do so.
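The Blahut--Arimoto optimality conditions that both proposed algorithms build on come from the classical BA iteration for channel capacity. A self-contained sketch of that standard iteration (not the paper's bilevel fixed-point or augmented-Lagrangian methods):

```python
import math

def blahut_arimoto(W, tol=1e-10, max_iter=10000):
    """Capacity (in nats) and capacity-achieving input of a DMC.

    W[x][y] = p(y | x); each row of W must sum to 1."""
    nx, ny = len(W), len(W[0])
    p = [1.0 / nx] * nx                      # start from the uniform input
    for _ in range(max_iter):
        # Backward channel: q(x | y) proportional to p(x) * W(y | x).
        q = [[0.0] * nx for _ in range(ny)]
        for y in range(ny):
            z = sum(p[x] * W[x][y] for x in range(nx))
            if z > 0:
                for x in range(nx):
                    q[y][x] = p[x] * W[x][y] / z
        # Input update: p(x) proportional to exp(sum_y W(y|x) log q(x|y)).
        logits = [sum(W[x][y] * math.log(q[y][x])
                      for y in range(ny) if W[x][y] > 0) for x in range(nx)]
        m = max(logits)
        new_p = [math.exp(l - m) for l in logits]
        z = sum(new_p)
        new_p = [v / z for v in new_p]
        done = max(abs(a - b) for a, b in zip(new_p, p)) < tol
        p = new_p
        if done:
            break
    # Mutual information achieved by the final input.
    py = [sum(p[x] * W[x][y] for x in range(nx)) for y in range(ny)]
    cap = sum(p[x] * W[x][y] * math.log(W[x][y] / py[y])
              for x in range(nx) for y in range(ny) if W[x][y] > 0 and p[x] > 0)
    return cap, p

# BSC(0.1): capacity log 2 - H_b(0.1) nats, achieved by the uniform input.
C, pi_star = blahut_arimoto([[0.9, 0.1], [0.1, 0.9]])
```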
cs.IT 2026-05-12 2 theorems

Synthesize likelihoods to meet accuracy bounds with minimal prior deviation

Sensor Design for Accuracy-Bounded Estimation via Maximum-Entropy Likelihood Synthesis

The method inverts classical sensor design by constructing the measurement model directly from an error budget and the dynamical prior.

Designing the sensing architecture for large-scale spatio-temporal systems is hard when accuracy requirements are specified but sensor models are uncertain or unavailable. Classical design treats sensor placement and estimation sequentially, requiring valid forward models for each sensing modality. This paper inverts the design flow: given an error budget, synthesize the measurement likelihood that enforces it while injecting minimal information beyond the dynamical prior. The likelihood is constructed by constrained optimization: among all posteriors satisfying a prescribed accuracy bound relative to a target, select the one minimizing Kullback-Leibler divergence from the prior. The solution is a maximum-entropy posterior in relative-entropy form, and the induced likelihood is the Radon-Nikodym derivative. The framework accommodates arbitrary discrepancies and is instantiated for Wasserstein distance, maximum mean discrepancy, $f$-divergences, moment constraints, and hybrid metrics. For each, we derive the discrete particle-level problem, analyze its convex or convex-relaxed structure, and present solvers with complexity scaling. A closed-form solution exists for the symmetric exponential-tilt case, and a distillation procedure converts nonparametric likelihood samples into parametric forms. A two-layer sensor design architecture embeds the synthesized likelihood in the recursive predict-update loop, connecting accuracy budgets to physical sensor placement, precision, and configuration. Numerical experiments comparing four metrics on unimodal and multimodal scenarios confirm the accuracy constraints are reliably enforced and reveal how metric choice determines the amount and spatial distribution of injected information.
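The maximum-entropy posterior in relative-entropy form mentioned above corresponds, in the simplest case, to a classical I-projection: among discrete posteriors meeting an expected-cost budget, the KL-minimizing one is an exponential tilt of the prior. A hedged particle-level sketch (the budget, cost values, and bisection bracket `beta_hi` are illustrative assumptions, not from the paper):

```python
import math

def tilt(prior, cost, beta):
    """Exponential tilt: p_i proportional to prior_i * exp(-beta * cost_i)."""
    w = [p * math.exp(-beta * c) for p, c in zip(prior, cost)]
    z = sum(w)
    return [x / z for x in w]

def synthesize_posterior(prior, cost, budget, beta_hi=1e4, iters=100):
    """I-projection of the prior onto {E[cost] <= budget}: bisect on the
    tilt parameter beta (beta = 0 if the prior is already feasible)."""
    exp_cost = lambda p: sum(pi * ci for pi, ci in zip(p, cost))
    if exp_cost(prior) <= budget:
        return list(prior)
    lo, hi = 0.0, beta_hi
    for _ in range(iters):
        mid = (lo + hi) / 2.0
        if exp_cost(tilt(prior, cost, mid)) > budget:
            lo = mid              # still over budget: tilt harder
        else:
            hi = mid
    return tilt(prior, cost, hi)
```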
cs.IT 2026-05-12 3 theorems

Fountain codes bound DNA random access at π/4

Random Access Expectation in DNA Storage and Fountain Codes

Fully symmetric codes give normalized expectation at least 0.7854 with 0.7869 achievable under peeling in the large-blocklength limit.

Motivated by DNA data storage, we study the expected number of coded symbols drawn from a linear code until a desired information symbol can be decoded: the random access expectation. We focus on generator matrices with a type of symmetry, conjectured in prior work to be optimal, which we call fully symmetric. We point out an equivalence between binary fully symmetric codes and LT codes. Using this observation, we analyze the random access expectation of binary fully symmetric codes under a peeling decoder, in the large blocklength limit. Under these assumptions, the random access expectation, normalized by the number of information symbols, is at least π/4 ≈ 0.7854, while a value of ≈ 0.7869 is achievable.
cs.IT 2026-05-12 2 theorems

Flexible privacy sets yield PIR capacity bounds on path and cycle graphs

Private Information Retrieval With Arbitrary Privacy Requirements for Graph-Based Storage

Neighborhood ranges let retrieval privacy vary continuously from local to full while capacity is bounded or solved exactly.

We reformulate the definition of privacy in the private information retrieval (PIR) problem to accommodate flexible privacy requirements. We focus on graph-replicated PIR with a generalized privacy requirement, instead of requiring all messages to be private from all servers during retrieval. Towards this, we define a privacy requirement set for each server, which can be an arbitrary subset of all message indices as long as it contains the indices of the messages stored at that server. Since both the storage and privacy requirement sets have many possibilities, we focus on two specific storage settings, namely the path and cyclic graphs. We consider several privacy settings for each of them, not necessarily the same, to illustrate different privacy sets. Of particular interest are the privacy sets that comprise the indices of messages stored at servers within a neighborhood range. The neighborhood range parameter allows a transition from the recently introduced local PIR [1] to the standard graph-replicated PIR. In these cases, we derive bounds on the capacity or find the exact capacity.
cs.IT 2026-05-12 2 theorems

Local privacy multiplies PIR capacity on graph unions

Local Private Information Retrieval: A New Privacy Perspective for Graph-Based Replicated Systems

Requiring privacy only from servers that store the message yields exact rates for cycles and odd paths plus better bounds for other graphs.

We rethink the definition of privacy in multi-server, graph-replicated private information retrieval (PIR) systems, and introduce a novel setting where the user's privacy is governed by the servers' storage structure. In particular, while retrieving a message from a server, the user is concerned with hiding their desired message index from the server, only if the server stores the corresponding message. We coin this privacy requirement as local user privacy and the resulting PIR problem as local PIR on the graph. Our goal is to measure the gain in communication efficiency of local PIR, compared to that of canonical PIR, by establishing its capacity, i.e., the maximum number of message symbols retrieved, per downloaded symbol. To this end, we observe a remarkable gain in the local PIR capacity of graphs, that are disjoint union of distinct graphs, which is multiplicative, compared to the PIR capacity, when the individual graphs are identical. For connected graphs, we propose schemes to establish capacity lower bounds for edge-transitive and bipartite graphs, which are greater than the best-known PIR capacity bounds. Finally, we derive the exact local PIR capacity for the cyclic graph, and the path graph with an odd number of vertices.
cs.IT 2026-05-12 2 theorems

Mamba replaces attention to scale neural decoders for long codes

Scalable Mamba-Based Message-Passing Neural Decoder for Error-Correcting Codes

The decoder gains 0.45 dB over prior neural methods on a 1056-bit LDPC code while using 1.5 times less memory, with savings increasing for longer codes.

Forward error correction is essential for reliable communication over noisy channels. Attention-based model-free neural decoders have shown strong performance for short codes, but their scalability to longer codes is limited by the quadratic memory and computational cost of attention. In this paper, we introduce the Mamba message-passing decoder (MMPD), an attention-free syndrome-based neural decoder for binary linear codes. MMPD retains the Tanner-graph structure of a message-passing decoder by performing local pairwise aggregation along variable-check edges. To enable efficient long-range information propagation, these local updates are combined with bidirectional Mamba state-space blocks. By avoiding dense attention matrices, MMPD scales more favorably for long codes in both memory and computation. Experiments on the (1056, 880) LDPC code show that MMPD achieves a 0.45 dB gain over the state-of-the-art CrossMPT decoder at a specified target bit error rate, while reducing memory consumption by a factor of 1.5. This reduction factor increases substantially for longer codes, demonstrating the applicability of MMPD to scalable neural decoding of practical long codes.
cs.IT 2026-05-12 2 theorems

Log-sum regularization outperforms l1 at low signal densities

Sparse Signal Recovery using Log-Sum Regularization and Adaptive Smoothing

Adaptive smoothing keeps the proximal operator continuous so that AMP and state evolution accurately predict reconstruction error and the exact-recovery phase transition.

We study sparse signal recovery from noisy linear observations using nonconvex log-sum regularization. The log-sum penalty reduces the shrinkage bias of $\ell_1$ regularization and more closely approximates the $\ell_0$ regularization, but its nonconvexity can make reconstruction algorithms unstable. To mitigate this instability, we use an adaptive smoothing strategy that determines the smoothing parameter so that the scalar proximal operator remains continuous. Using this proximal operator, we formulate the approximate message passing (AMP) algorithm and derive the corresponding state evolution (SE) recursion. The fixed point of the SE recursion predicts the final mean squared error (MSE) and, in the noiseless limit, the exact-recovery phase transition. To further investigate finite-dimensional reconstruction behavior, we implement an alternating direction method of multipliers (ADMM) algorithm. In the noiseless setting, we find that the empirical success boundary of ADMM closely agrees with the SE-predicted phase transition. In the noisy setting, we observe that AMP closely follows the SE prediction, whereas ADMM qualitatively reproduces the SE-predicted dependence of the final MSE on the regularization parameter. A comparison with $\ell_1$ regularization shows that log-sum regularization is beneficial in low-density or high-measurement-rate regimes, whereas $\ell_1$ regularization remains preferable at higher densities and lower measurement rates.
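The scalar proximal operator at the core of the AMP construction admits an elementary treatment: the stationarity condition of min_x 0.5*(x - y)^2 + lam*log(1 + |x|/eps) is a quadratic in x. A sketch of the plain log-sum prox (without the paper's adaptive smoothing rule), compared with the l1 soft threshold:

```python
import math

def prox_logsum(y, lam, eps):
    """Scalar prox of the log-sum penalty:
    argmin_x 0.5*(x - y)**2 + lam*log(1 + abs(x)/eps)."""
    s, a = math.copysign(1.0, y), abs(y)
    # On x >= 0 the stationarity condition (x - a) + lam/(eps + x) = 0
    # is the quadratic x**2 + (eps - a)*x + (lam - eps*a) = 0.
    candidates = [0.0]
    disc = (eps - a) ** 2 - 4.0 * (lam - eps * a)
    if disc >= 0:
        r = math.sqrt(disc)
        candidates += [x for x in ((a - eps + r) / 2.0, (a - eps - r) / 2.0) if x > 0]
    obj = lambda x: 0.5 * (x - a) ** 2 + lam * math.log(1.0 + x / eps)
    return s * min(candidates, key=obj)

def soft(y, lam):
    """l1 prox (soft threshold), for comparison."""
    return math.copysign(max(abs(y) - lam, 0.0), y)

# For a large input the log-sum prox shrinks far less than the l1 prox,
# illustrating the reduced shrinkage bias discussed in the abstract.
```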
cs.IT 2026-05-12 1 theorem

Coprimeness of q-1 and d-2 yields uniform weight-2 cosets

Weight distributions of cosets of weight 2 of the generalized doubly extended Reed-Solomon codes

Coprimality of q-1 and d-2 forces a single weight distribution across all weight-2 cosets in generalized doubly extended Reed-Solomon codes, giving the explicit distribution when the gcd condition holds.

We consider the weight distributions of the cosets of weight 2 of the generalized $[q+1,q+2-d,d]_q$ doubly extended Reed-Solomon codes (GDRS) of minimum distance $d\ge5$, over the finite field $\mathbb{F}_q$ with $q$ elements. For a GDRS code, we say that Case S occurs if the weight distribution for all cosets of weight 2 is the same or otherwise, Case NS occurs. For Case S, the weight distribution is known; however, any sufficient condition for the occurrence of Case S remained an open problem. We prove that if $q-1$ and $d-2$ are coprime then Case S holds, i.e. the problem is solved. Furthermore, we note that in Case S, the GDRS code is 2-regular. Also, we introduce two new open equivalent combinatorial problems for finite fields $\mathbb{F}_q$ (Problem $A_{q,\mu}^\times$) and for rings $\mathbb{Z}_\mathfrak{R}$ of integers modulo $\mathfrak{R}$ (Problem $A_{\mathfrak{R},\mu}^+$), where $\mu$ is a parameter. In particular, Problem $A_{\mathfrak{R},\mu}^+$ is as follows: for each element $\lambda$ of $\mathbb{Z}_\mathfrak{R}$, determine the number of all possible $\mu$-tuples $\{\lambda_1,\lambda_2,\ldots,\lambda_{\mu}\}$, each of which consists of $\mu$ distinct elements $\lambda_j$ of $\mathbb{Z}_\mathfrak{R}$ such that their sum in $\mathbb{Z}_\mathfrak{R}$ is equal to $\lambda$. Open Problems $A_{q,\mu}^\times$ and $A_{\mathfrak{R},\mu}^+$ are interesting in their own right and, moreover, we proved that their solutions allow us to obtain the weight distributions for Case NS, taking $\mu=d-2$ and $\mathfrak{R}=q-1$. To solve Problem $A_{\mathfrak{R},\mu}^+$, we found a universal method, connected with the values of $\mathfrak{R}$ and $\mu$, using orbits of elements in $\mathbb{Z}_\mathfrak{R}$ and then we solved the problem for many pairs $\mathfrak{R},\mu$, obtaining the needed weight distributions for the corresponding pairs $q=\mathfrak{R}+1,d=\mu+2$.
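Problem $A_{\mathfrak{R},\mu}^+$ as stated can be checked by brute force for small parameters. A sketch with illustrative values: for R = q-1 = 7 and mu = d-2 = 3 (coprime), the counts come out uniform, consistent with the coprimality condition for Case S.

```python
from itertools import combinations

def count_tuples(R, mu):
    """Problem A_{R,mu}^+ by brute force: for each lam in Z_R, the number of
    mu-subsets of distinct elements of Z_R summing to lam modulo R."""
    counts = [0] * R
    for subset in combinations(range(R), mu):
        counts[sum(subset) % R] += 1
    return counts

# R = 7, mu = 3: gcd(7, 3) = 1 and the distribution over Z_7 is uniform.
print(count_tuples(7, 3))  # [5, 5, 5, 5, 5, 5, 5]
```

When gcd(mu, R) > 1 (e.g. R = 6, mu = 2) the counts are no longer uniform, which is the kind of case the orbit method in the abstract addresses.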
cs.IT 2026-05-12 2 theorems

Folded quantum Hermitian codes reach quantum Singleton bound

List-Decodable Folded Quantum Hermitian Codes

They achieve list-decodability with comparable lengths but over smaller alphabets than Reed-Solomon codes.

Folded Reed-Solomon codes, introduced by Guruswami and Rudra in 2007, have been shown to achieve the information-theoretically best possible trade-off between the rate of a code and the error-correction radius. In 2024, Bergamaschi, Golowich and Gunn extended this framework by constructing folded quantum Reed-Solomon codes (CSS codes obtained by folding) demonstrating that these codes tolerate errors up to the quantum Singleton bound. In this paper, we construct folded quantum Hermitian codes using the CSS framework and show that these codes are also list-decodable, tolerating errors up to the quantum Singleton bound. Compared to Reed-Solomon codes, Hermitian codes admit comparable lengths over smaller alphabets, enabling more efficient implementations.
cs.IT 2026-05-12 2 theorems

Exact optimal repair bandwidth for (n,n-2,2) MDS codes found

Optimal Repair Bandwidth and Repair I/O of (n,n-2,2) MDS Array Codes

The minimum cost equals the maximum of two explicit bounds for all admissible lengths over any finite field.

We give a complete determination of the exact optimal worst-case repair bandwidth and repair I/O for linear exact repair of $(n,n-2,2)$ MDS array codes over every finite field $\mathbb{F}_q$ and for every admissible code length $3\le n\le q^2+1$. For repair bandwidth, we prove that the optimum is governed, up to a short explicit list of small exceptional cases, by the maximum of the sharpened $n$-only lower bound $\lceil(5n-8)/4\rceil$ and the projective counting, equivalently incidence-multiplicity, bound $2n-q-3$. For repair I/O, we obtain the analogous exact formula with $\lceil(4n-6)/3\rceil$ in place of $\lceil(5n-8)/4\rceil$, with the single special value at $n=4$. Thus, we completely resolve the first non-trivial redundancy and sub-packetization regime $(r,\ell)=(2,2)$ for both repair bandwidth and repair I/O.
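The two stated optima are simple to evaluate. A sketch that takes the abstract's formulas at face value (the short list of small exceptional cases and the special value at n = 4 mentioned there are not handled):

```python
import math

def repair_bandwidth_bound(n, q):
    """max(ceil((5n-8)/4), 2n-q-3): the stated optimal worst-case repair
    bandwidth for (n, n-2, 2) MDS array codes, outside exceptional cases."""
    return max(math.ceil((5 * n - 8) / 4), 2 * n - q - 3)

def repair_io_bound(n, q):
    """max(ceil((4n-6)/3), 2n-q-3): the analogous repair-I/O optimum,
    excluding the single special value at n = 4."""
    return max(math.ceil((4 * n - 6) / 3), 2 * n - q - 3)
```

For small n relative to q the n-only term dominates; for long codes (n close to q^2 + 1) the projective-counting term 2n - q - 3 takes over.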
cs.IT 2026-05-12 2 theorems

Adaptive min-sum decoder nears BP for quantum LDPC codes

Syndrome Adaptive Gain Control for Min-Sum Decoding of Quantum LDPC Codes

The scaling factor is adjusted from the fraction of unsatisfied stabilizers, removing the need for per-code offline tuning while keeping min-sum complexity.

Min-Sum (MS) decoding is a popular low-complexity alternative to belief propagation (BP), retaining only the minimum incoming message magnitude during check-node (CN) processing, at the cost of systematic message magnitude overestimation. The scaled MS (SMS) decoder compensates for this effect using a fixed scaling factor. We propose the syndrome adaptive gain Min-Sum (SAGMS) decoder for quantum low-density parity-check (QLDPC) codes, which adapts the message gain online based on the fraction of unsatisfied stabilizers, requiring no per-code or per-noise-level optimization. We show that the scaling factor required for SMS to match belief propagation decreases with the CN degree, so any fixed scaling optimized for one degree incurs a growing penalty as the CN degree varies. SAGMS avoids this limitation by adapting the gain during decoding. Simulations on generalized bicycle QLDPC codes demonstrate that SAGMS matches or improves on the frame error rate (FER) of an offline-optimized SMS decoder. Moreover, SAGMS approaches BP performance and, under certain conditions, outperforms it while retaining MS-level complexity.
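The check-node overestimation that motivates scaling is easy to see numerically: the exact boxplus output always has smaller magnitude than the min-sum output. A generic sketch of plain scaled min-sum (not the SAGMS gain rule):

```python
import math

def cn_exact(msgs):
    """Exact check-node (boxplus) output LLR for the incoming LLRs."""
    t = 1.0
    for m in msgs:
        t *= math.tanh(m / 2.0)
    return 2.0 * math.atanh(t)

def cn_min_sum(msgs, alpha=1.0):
    """(Scaled) min-sum approximation: product of signs times the scaled
    minimum magnitude; alpha = 1 gives plain min-sum."""
    sign = 1.0
    for m in msgs:
        sign *= 1.0 if m >= 0 else -1.0
    return alpha * sign * min(abs(m) for m in msgs)

# Plain min-sum returns 2.0 for three incoming LLRs of 2.0, while the
# exact boxplus value is about 0.95: the overestimation scaling corrects.
```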
cs.IT 2026-05-12 Recognition

Optimal learners derived for misspecified universal learning

Misspecified Universal Learning

Minimax regret analysis with log-loss extends to cases where the true data-generating process lies outside the hypothesis class, unifying online/batch arrival modes and supervised/unsupervised tasks.

This paper addresses the problem of universal learning under model misspecification with log-loss. In this setting, the learner operates with a hypothesis class of models denoted by $\Theta$, while the true data-generating process belongs to a broader class $\Phi \supset \Theta$, and may lie outside the assumed hypothesis space. Classical approaches have characterized the minimax regret and identified optimal universal learners in both the well-specified stochastic and individual deterministic frameworks. The misspecified setting has received comparatively less attention, although several important results have emerged in recent years. Extending these foundations, we analyze the minimax regret in the misspecified setting and derive the corresponding optimal universal learner. We propose this formulation as a unified framework for universal learning, applicable to any form of uncertainty in the data-generating process, across both online and batch data arrival modes, as well as supervised and unsupervised learning tasks.
cs.IT 2026-05-12 2 theorems

2-bit beamforming sustains GNSS at 70 dB jamming

Low-Cost GNSS Anti-Jamming Through 2-Bit Phase Shift Beamforming with Machine Learning

Optimized QPSK weights hold 20.8 dB-Hz C/N0 while unprotected receivers drop to 9.3 dB-Hz.

We investigate low-cost GNSS anti-jamming using beamforming with inexpensive 2-bit phase shifters, constraining each complex array weight to one of four QPSK phase states (real/imaginary = -1 or +1). This severe quantization sharply limits the beampattern solution space, making conventional real-valued beamforming and naive weight quantization highly suboptimal. We formulate a discrete optimization that trades interference suppression against satellite-direction gain, and benchmark known combinatorial optimization methods across array sizes and interference conditions. Simulations show that performance improves with array size, with oracle and greedy search achieving up to 34 dB nulling, but oracle search incurs exponential latency and greedy sampling is stochastic. To obtain deterministic low-latency performance, we propose an ML-aided method based on gradient-boosted decision trees followed by local search, which performs similarly to the oracle for larger arrays at fixed latency. We further validate the approach experimentally using a fully digital emulation of the QPSK oracle beamformer and compare against a GNSS receiver without beamforming capability. Under mild jamming (J/S approximately 44 dB) both receivers maintain adequate tracking, with QPSK yielding a 4.2 dB higher average C/N0 (37.3 vs. 33.1 dB-Hz). Under moderate and strong jamming (J/S approximately 62-70 dB) the benefit is substantial. At J/S = 70 dB the unprotected receiver degrades to near tracking limits (avg C/N0 = 9.3 dB-Hz) while the QPSK oracle sustains an average C/N0 of 20.8 dB-Hz. These results confirm that 2-bit phase-shift beamforming provides considerable anti-jamming benefit over a standard GNSS receiver, motivating further research on oracle-level practical methods.
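The oracle beamformer described above is an exhaustive search over 4^n QPSK weight vectors, feasible only for small arrays. A toy sketch: the uniform-linear-array steering model and the score trading satellite gain against jammer gain are illustrative assumptions, not the paper's exact objective.

```python
import cmath
import itertools
import math

def steering(n, theta_deg, d=0.5):
    """Steering vector of an n-element uniform linear array (spacing d wavelengths)."""
    th = math.radians(theta_deg)
    return [cmath.exp(2j * math.pi * d * k * math.sin(th)) for k in range(n)]

def best_qpsk_weights(n, theta_sat, theta_jam, rho=10.0):
    """Exhaustive 'oracle' search over 2-bit (QPSK) weights, real/imag = +/-1,
    maximizing satellite gain minus rho times the jammer gain."""
    a_s, a_j = steering(n, theta_sat), steering(n, theta_jam)
    states = (1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j)
    gain = lambda w, a: abs(sum(wi.conjugate() * ai for wi, ai in zip(w, a)))
    return max(itertools.product(states, repeat=n),
               key=lambda w: gain(w, a_s) - rho * gain(w, a_j))
```

Even for n = 4 this already enumerates 256 candidates, showing why the paper's exponential-latency concern bites for larger arrays.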
cs.IT 2026-05-12 Recognition

Splitting method decodes random hypergraphs in m^{5/3} time

A Fast Hierarchical Splitting Approach for Non-Adaptive Learning of Random Hypergraphs

The scheme uses O(m log n) queries to recover 3-uniform edges and reduces decoding from cubic in n to polynomial in m for random instances.

This work focuses on the problem of learning an unknown $3$-uniform hypergraph using edge-detecting queries. Our goal is to design a querying strategy that recovers the hyperedge set using as few queries as possible. We restrict our attention to random hypergraphs under the Erd\H{o}s--R\'enyi (ER) model, in which each potential hyperedge appears independently with probability $q = \Theta(n^{-3(1-\theta)})$ for $\theta \in (0,1)$. Prior work [Austhof-Reyzin-Tani, ISIT 2025] presents a testing-decoding scheme that uses $O(\bar{m}\log n)$ tests but requires a decoding time of $\Omega(n^3)$, where $\bar{m} = q\binom{n}{3}$ denotes the expected number of hyperedges. In this work, we extend the binary splitting framework and adapt it to the $3$-uniform hypergraph setting. We obtain a testing-decoding scheme that recovers the hyperedge set with high probability using $O(\bar{m} \log n)$ tests and achieves decoding time $O(\bar{m}^{5/3}\log n)$ for the case $\theta > \dfrac{2}{3}$ and $O(\bar{m}^{5/3}\log^2{\bar{m}}\log n)$ for the case $\theta \leq \dfrac{2}{3}$. Thus, compared with prior work, our result significantly improves the decoding complexity while maintaining optimal query complexity.
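The binary splitting framework being extended originates in classical group testing, where pooled OR queries recover defectives by recursive halving. A 1-uniform (plain group testing) sketch for intuition; the paper's contribution is adapting this idea to 3-uniform hyperedges:

```python
def find_defectives(universe, is_positive):
    """Recover all defectives with pooled OR queries by recursive halving.
    is_positive(pool) must return True iff the pool contains a defective."""
    found, stack = [], [list(universe)]
    while stack:
        group = stack.pop()
        if not is_positive(group):
            continue              # one negative query clears the whole group
        if len(group) == 1:
            found.append(group[0])
            continue
        mid = len(group) // 2     # split a positive group and recurse
        stack.append(group[:mid])
        stack.append(group[mid:])
    return sorted(found)

defects = {3, 17, 42}
queries = [0]
def oracle(pool):
    queries[0] += 1
    return any(x in defects for x in pool)

print(find_defectives(range(64), oracle))  # [3, 17, 42]
```

With d defectives among n items this uses O(d log n) queries rather than testing all n items individually, mirroring the O(m̄ log n) query budget above.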
cs.IT 2026-05-12 2 theorems

HMM aligns unlabeled RSS to build accurate radio maps

Survey-Free Radio Map Construction via HMM-Based Coarse-to-Fine Inference

In unidirectional corridors the coarse-to-fine method delivers 8.96 dB MAE and 3.33 m localization without surveys.

Traditional radio map construction methods mandate labor-intensive data collection and precise location labeling. To address these limitations, we propose a novel survey-free approach for radio map construction that relies solely on unlabeled Received Signal Strength (RSS) measurements, thereby obviating the need for manual site surveys or auxiliary Inertial Measurement Units (IMUs). The key idea involves embedding multiple unlabeled RSS sequences into a known indoor layout, specifically targeting corridor-guided environments with a dominant unidirectional pedestrian flow. However, aligning the embedded coordinates with the RSS collection locations remains challenging due to the random fluctuations inherent in RSS data. To tackle this, we introduce a Hidden Markov Model (HMM)-based Coarse-to-Fine Inference (HCFI) framework. At the coarse level, we employ an HMM-based region label inference algorithm to partition RSS sequences and align the RSS segments with specific physical regions using graph-based inference. At the fine level, we develop an HMM-based location label inference technique to estimate RSS collection coordinates by leveraging RSS propagation principles while incorporating sequential spatio-temporal mobility probability. Empirical results from an office environment demonstrate that the proposed method achieves a radio map construction Mean Absolute Error (MAE) of 8.96 dB. Furthermore, based on the estimated radio map, k-Nearest Neighbor (KNN) localization yields an average positioning error of approximately 3.33 meters, offering a highly viable, survey-free solution for radio map construction under sequential topological assumptions.
cs.IT 2026-05-12 1 theorem

CSI data alone recovers trajectories for indoor radio mapping

Annotation-Free Indoor Radio Mapping via Physics-Informed Trajectory Inference

Physics-informed Bayesian model uses PADP continuity to reach 0.88 m localization error without labels or IMUs.

Constructing indoor radio maps traditionally requires extensive site surveys with precise user-location labels, making the calibration process costly and time-consuming. Existing calibration-reduction methods either depend on partial location annotations or exploit inertial measurement units (IMUs) to provide relative motion cues; however, IMU-assisted solutions are constrained by hardware availability, device-level access restrictions, and accumulated sensor drift. In this paper, we study a location-label-free indoor radio mapping problem under known access-point deployment geometry and a known walkable spatial domain. We propose a physics-informed trajectory inference framework that uses only Channel State Information (CSI), without relying on user-location labels or IMU measurements. The key idea is to recover the latent spatial coordinates of CSI measurements by exploiting the local spatial continuity of multipath propagation. To this end, we construct a Power-Angle-Delay Profile (PADP) feature distance from MIMO-OFDM CSI and show that, within a local neighborhood and under quasi-static multipath conditions, this distance provides a physically meaningful proxy for small spatial displacements. We then incorporate the PADP-based continuity constraint into a spatially regularized Bayesian inference model for joint trajectory recovery and propagation-parameter estimation. Experiments on a real-world industrial CSI dataset demonstrate that the proposed framework achieves an average localization error of 0.88 m and a relative beam map construction error of 6.68%, improving upon representative channel-embedding and IMU-assisted baselines.
cs.IT 2026-05-12 Recognition

Conditional privacy measure removes penalty on semantic recovery

R\'enyi Rate-Distortion-Perception-Privacy Tradeoff under Indirect Observation

In Gaussian indirect observations standard metrics force a tradeoff with utility; a residual-only definition decouples them and yields exact closed-form expressions.

We introduce a R\'enyi Rate-Distortion-Perception-Privacy (R-RDPP) framework for indirect source coding. A latent source~$S$ is correlated with a private attribute~$U$, and the encoder observes only a noisy view~$X$ such that $(S,U) - X - Y$ holds at the decoder output~$Y$. The communication cost is measured by Sibson's $\alpha$-mutual information $I_\alpha$ and the privacy leakage by $I_\beta$; semantic distortion is measured between $S$ and $Y$, and realism is enforced at the semantic marginal $P_S$. We characterize the scalar Gaussian RDPP tradeoff, revealing that standard privacy metrics inherently penalize legitimate semantic recovery. To resolve this, we introduce a conditional privacy measure that quantifies only the residual leakage. In addition, we refine the achievability bounds for $\alpha > 1$ via the Poisson functional representation. By deriving the exact geometric-mixture distribution of the Poisson index, we obtain exact closed-form expressions for integer-order R\'enyi entropies and sharper computable bounds in regimes where the resulting expression improves the logarithmic-moment approach.
cs.IT 2026-05-12 2 theorems

Closed-form estimators compute multi-source PID for Gaussians

Closed-Form Gaussian Estimators for Multi-Source Partial Information Decomposition

Log-determinant formulas on covariance blocks give explicit redundancy, unique information, and synergy terms for any number of sources.

Computing multi-source partial information decomposition (PID) for continuous data is hard: existing closed-form Gaussian estimators are restricted to two source variables, while continuous arbitrary-source estimators are typically learning-based and do not provide closed-form expressions. To address this, we develop closed-form Gaussian estimators for multi-source PID. We provide two-source redundancy, multi-source unique information, the K-th order synergistic effect from source subsets of size K, and the total synergistic effect. The estimators are derived from the conditional-independence-based information measures introduced in our earlier work, under which every quantity reduces to a log-determinant expression in covariance blocks of the system. The resulting estimator is plug-in consistent, affine invariant, source-permutation symmetric, and additive over independent systems. We validate it on a controlled Gaussian benchmark, evaluate its computational efficiency against baselines, and confirm its numerical stability in finite-sample regimes. To our knowledge, this is the first covariance-based closed-form estimator that provides multi-source continuous PID measures.
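The log-determinant reductions mentioned above rest on the standard Gaussian identity I(A;B) = 0.5*log(det S_A * det S_B / det S_AB). A sketch of that ingredient only (the paper's specific PID measures are not reproduced here):

```python
import numpy as np

def gaussian_mi(cov, idx_a, idx_b):
    """I(A;B) in nats for jointly Gaussian variables, via log-determinants
    of the covariance blocks: 0.5*(logdet S_A + logdet S_B - logdet S_AB)."""
    S_a = cov[np.ix_(idx_a, idx_a)]
    S_b = cov[np.ix_(idx_b, idx_b)]
    S_ab = cov[np.ix_(idx_a + idx_b, idx_a + idx_b)]
    return 0.5 * (np.linalg.slogdet(S_a)[1] + np.linalg.slogdet(S_b)[1]
                  - np.linalg.slogdet(S_ab)[1])

# Bivariate normal with correlation rho: I = -0.5 * log(1 - rho**2).
rho = 0.5
cov = np.array([[1.0, rho], [rho, 1.0]])
mi = gaussian_mi(cov, [0], [1])
```

Because everything is a covariance-block determinant, the same routine extends to multivariate groups of sources, which is the affine-invariant, plug-in-consistent structure the abstract exploits.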
cs.IT 2026-05-12 2 theorems

Finite-field OFDM yields global QC-LDPC code for joint decoding

A Global Coding Scheme for OFDM over Finite Fields

Algebraic subcarriers via Galois transform create partial-geometry codes decoded with parallel binary algorithms at linear cost.

This paper proposes a highly efficient global coded-multiplexing scheme, conceptualized as Orthogonal Frequency Division Multiplexing over a finite field (FF-OFDM), for reliable multiuser communications. By utilizing a prime-length cyclic code and its Hadamard equivalents as algebraic subcarriers, independent data streams are globally multiplexed via a Galois Fourier Transform (GFT) without rate loss. We show that this finite-field synthesis intrinsically generates a global Quasi-Cyclic Low-Density Parity-Check (QC-LDPC) code over $\mathrm{GF}(2^s)$, whose parity-check matrix is governed by the structural rigor of partial geometries. At the receiver, supported by a binary decomposition theorem, the received nonbinary global codeword is jointly decoded using parallel binary iterative soft-decision algorithms prior to demultiplexing. This joint decoding enables seamless reliability information sharing across all user streams, achieving near-bound error performance, rapid convergence without error floors, and strictly linear amortized decoding complexity.
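The paper's synthesis operates over $\mathrm{GF}(2^s)$; as an illustrative stand-in, the Galois Fourier Transform idea can be sketched over a prime field $\mathrm{GF}(p)$, where a length-$n$ transform exists whenever $n$ divides $p-1$. The field $\mathrm{GF}(17)$ and the order-4 element below are assumptions for the demo, not parameters from the paper:

```python
def gft(a, p, g):
    """Galois (number-theoretic) Fourier transform over GF(p); g must have
    multiplicative order len(a) modulo p."""
    n = len(a)
    return [sum(a[t] * pow(g, k * t, p) for t in range(n)) % p
            for k in range(n)]

def igft(A, p, g):
    """Inverse transform: a_t = n^{-1} * sum_k A_k g^{-kt} (mod p)."""
    n = len(A)
    n_inv = pow(n, p - 2, p)        # Fermat inverse of n mod p
    g_inv = pow(g, p - 2, p)
    return [n_inv * sum(A[k] * pow(g_inv, k * t, p) for k in range(n)) % p
            for t in range(n)]

# GF(17), length 4: g = 4 has order 4 (4^2 = 16 = -1, 4^4 = 1 mod 17).
a = [3, 0, 5, 1]
A = gft(a, 17, 4)
assert igft(A, 17, 4) == a          # perfect reconstruction, no rate loss
```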
cs.IT 2026-05-12 Recognition

Common randomness simplifies cross-domain compression to direct coupling

Cross-Domain Lossy Compression via Constrained Minimum Entropy Coupling

Rate-constrained minimum entropy coupling removes intermediate representations while preserving classification performance in image tasks.

This paper studies cross-domain lossy compression through the lens of minimum entropy coupling (MEC) with rate and classification constraints. In this setting, an encoder observes samples from a degraded source domain, while the decoder is required to generate outputs following a prescribed target distribution and to preserve information relevant to a downstream classification task. Motivated by logarithmic-loss distortion, we adopt an information-based objective that maximizes the coupling strength between the source and reconstruction, rather than minimizing a sample-wise distortion. Under common randomness, we formulate a rate-constrained MEC problem (MEC-B) and show that the intermediate representation can be removed without loss of optimality, yielding an equivalent deterministic coupling formulation. For Bernoulli sources, closed-form expressions are derived with and without classification constraints. In addition, we implement a neural restoration framework using quantization, entropy modeling, distribution matching, and classification regularization. Experiments on MNIST super-resolution and SVHN denoising show that increasing the available rate improves classification accuracy and yields more informative reconstructions.
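For the Bernoulli case, the unconstrained minimum-entropy-coupling building block can be computed exactly: a $2\times2$ coupling has a single free parameter $t=P(X=1,Y=1)$, and joint entropy is concave along that line, so the minimum sits at an endpoint. A sketch of this building block only; the paper's rate-constrained MEC-B and classification constraints are not modeled here:

```python
import numpy as np

def h(probs):
    """Shannon entropy in nats, ignoring zero entries."""
    p = np.asarray(probs, float)
    p = p[p > 0]
    return float(-(p * np.log(p)).sum())

def min_entropy_coupling_bernoulli(p, q):
    """Minimum-entropy coupling of Bernoulli(p) and Bernoulli(q).
    Entropy is concave in t, so only the two endpoints need checking."""
    lo, hi = max(0.0, p + q - 1.0), min(p, q)
    best = None
    for t in (lo, hi):
        joint = np.array([[1 - p - q + t, q - t],
                          [p - t,         t]])
        cand = (h(joint.ravel()), joint)
        if best is None or cand[0] < best[0]:
            best = cand
    return best  # (entropy in nats, 2x2 joint table indexed [x][y])

ent, joint = min_entropy_coupling_bernoulli(0.3, 0.6)
```

Maximizing coupling strength (mutual information) with fixed marginals is the same as minimizing the joint entropy, which is what the endpoint search does.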
cs.IT 2026-05-11 2 theorems

Algorithm learns adversary utilities for sublinear regret in coding game

Learning from Acceptance: Cumulative Regret in the Game of Coding

Repeated observations of acceptance outcomes let the data collector approach optimal performance without knowing the trade-off in advance.

Classical coding-theoretic guarantees often rely on trust assumptions, such as requiring sufficiently many honest nodes compared with adversarial ones. These assumptions are difficult to enforce in open decentralized systems where participants are not centrally certified. At the same time, such environments often contain incentive mechanisms: participants may be rewarded only when their submitted data are accepted and the system remains functional. This changes the role of an adversary. Rather than acting as a pure saboteur, a strategic adversary may submit data that are consistent enough to be accepted while still degrading the quality of the final estimate. The game-of-coding framework models this strategic interaction between a data collector (DC) and an adversary. Existing works on the game of coding mostly consider the complete-information case, where the DC knows how the adversary trades off acceptance and estimation error. In this paper, we study an incomplete-information version of the game of coding in which the DC, acting as a Stackelberg leader, does not know the adversary's utility trade-off and must learn through repeated interaction. Prior work on the unknown-adversary setting considered an explore-then-commit objective, where only the final selected acceptance rule is evaluated. In contrast, we study the full learning trajectory: every acceptance rule used during the algorithm is executed and contributes to performance. We propose an algorithm that refines its search around promising acceptance rules, prove that it achieves sublinear cumulative regret, and evaluate its performance through numerical experiments.
cs.IT 2026-05-11 Recognition

Recovery algorithms classify linear batch codes into a hierarchy

Recovery Algorithms for Linear Batch Codes

A systematic study establishes order relations among algorithm types and extends graph-based constructions to arbitrary bipartite graphs.

Various types of recovery algorithms for batch codes have been investigated, such as asynchronous recovery or recovery as afforded by batch codes obtained from Almost Affinely Disjoint (AAD) families. In this paper, we offer the first systematic investigation of linear batch codes equipped with particular recovery algorithms. We introduce and investigate various known and new types of algorithms, and we investigate the order hierarchy of these types of batch codes. The simplest known recovery algorithms are those associated with graph-based batch codes. We investigate the resulting batch codes for arbitrary bipartite graphs, thereby generalizing some known results.
cs.IT 2026-05-11 2 theorems

Rényi entropy is subadditive on the majorization lattice for all α

Geometry of R\'enyi Entropy on the Majorization Lattice

It is also supermodular for α=0 and α≥1 after relating comonotone and independent couplings of marginal distributions.

Majorization is a stochastic ordering relation that compares the relative diversity of probability distributions with numerous applications in econometrics, spectral theory, and ecology. It is well-known that the majorization partial order forms a complete lattice on the set of ordered probability distributions. In this work, we study the properties of R\'enyi entropy on the majorization lattice. We establish a fundamental relation between the comonotone coupling and the independent coupling associated with a collection of marginal distributions. Consequently, we show that, for every order $ \alpha \in [0,\infty] $, the R\'enyi entropy is subadditive on the majorization lattice. We further characterize the supermodular regime, showing that R\'enyi entropy is supermodular on the majorization lattice for $ \alpha \in \{0\} \cup [1,\infty] $.
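Both objects in the abstract are easy to compute numerically. A minimal sketch of Rényi entropy (with its $\alpha=0,1,\infty$ limits) and the majorization test; the inequality in the demo follows from Schur-concavity of Rényi entropy, a standard fact consistent with the lattice results above:

```python
import numpy as np

def renyi(p, alpha):
    """Renyi entropy (nats) of a probability vector, with the usual limits."""
    p = np.asarray(p, float)
    p = p[p > 0]
    if alpha == 0:
        return float(np.log(p.size))          # Hartley / max-entropy
    if alpha == 1:
        return float(-(p * np.log(p)).sum())  # Shannon
    if alpha == np.inf:
        return float(-np.log(p.max()))        # min-entropy
    return float(np.log((p ** alpha).sum()) / (1 - alpha))

def majorizes(p, q):
    """True if p majorizes q (p is more concentrated than q)."""
    ps = np.sort(p)[::-1].cumsum()
    qs = np.sort(q)[::-1].cumsum()
    return bool(np.all(ps >= qs - 1e-12))

p, q = [0.7, 0.2, 0.1], [1/3, 1/3, 1/3]   # p majorizes the uniform q
assert majorizes(p, q)
```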
cs.IT 2026-05-11 Recognition

Support size sets the privacy limit for sparse local DP mechanisms

Sparse Discrete Laplace and Gaussian Mechanisms under Local Differential Privacy

Smallest viable support cardinality achieves target privacy with least distortion in Laplace and Gaussian channels.

We study sparse locally private channels of the form $M(y\mid x)\propto w(x,y) 1\{y\in S(x)\},$ where the admissible output set $S(x)$ is allowed to depend on the private input $x$ and is assumed to be small. Here, we consider the sparse discrete-Laplace family with kernel $w(x,y)=e^{-\lambda d(x,y)}$ and the sparse Gaussian family with kernel $w(x,y)=e^{-d(x,y)^2/(2\sigma^2)}$. For both families we give exact characterizations of pure and approximate local differential privacy. For pure $\varepsilon$-local differential privacy, we show that it can hold only when all the input-dependent sparse supports coincide. For $(\varepsilon,\delta)$-local differential privacy, we derive exact formulas for the privacy defect in terms of support leakage and excess privacy loss on the overlap region. We then specialize the analysis to radius-truncated sparse discrete-Laplace and radius-truncated sparse Gaussian mechanisms and obtain explicit privacy-sparsity tradeoffs in terms of the support size $s$. In particular, we show that nontrivial approximate local privacy requires a minimum support size, whereas larger supports reduce support leakage but increase distortion. For the Gaussian family, the overlap term exhibits an additional quadratic dependence on the support radius, which implies a sharper tradeoff between privacy and sparsity. These results identify the support cardinality as the intrinsic complexity parameter of the mechanism and yield an optimal design principle: choose the smallest support size that satisfies the target privacy constraint.
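The mechanism family and a pure-LDP diagnostic can be sketched directly from the definition $M(y\mid x)\propto w(x,y)\,1\{y\in S(x)\}$. The alphabets, distance, and truncation radius below are illustrative choices, not the paper's:

```python
import numpy as np

def sparse_laplace_mechanism(xs, ys, support, lam, d=lambda a, b: abs(a - b)):
    """Channel M[x][y] proportional to exp(-lam*d(x,y)) on y in support(x),
    normalized per input. Returns a dict of per-input output pmfs."""
    M = {}
    for x in xs:
        S = support(x)
        w = {y: np.exp(-lam * d(x, y)) for y in ys if y in S}
        z = sum(w.values())
        M[x] = {y: v / z for y, v in w.items()}
    return M

def pure_ldp_epsilon(M, ys):
    """Worst-case log-ratio over outputs; infinite when supports differ
    (support leakage makes pure LDP impossible)."""
    eps, xs = 0.0, list(M)
    for y in ys:
        ps = [M[x].get(y, 0.0) for x in xs]
        if any(p == 0 for p in ps) and any(p > 0 for p in ps):
            return np.inf
        if all(p > 0 for p in ps):
            eps = max(eps, np.log(max(ps) / min(ps)))
    return eps

xs = ys = list(range(5))
# Identical supports for all inputs -> finite epsilon.
full = sparse_laplace_mechanism(xs, ys, lambda x: set(ys), lam=1.0)
# Radius-1 truncated, input-dependent supports -> pure LDP breaks down.
trunc = sparse_laplace_mechanism(
    xs, ys, lambda x: {y for y in ys if abs(y - x) <= 1}, lam=1.0)
```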
cs.IT 2026-05-11 2 theorems

Feature selection tolerates noise and weak symmetry

Universal Feature Selection with Noisy Observations and Weak Symmetry Conditions

The SVD-based procedure still attains near-optimal error exponents when symmetry deviations and observation noise stay small.

This paper relaxes the restrictive symmetry conditions adopted in [4], [5] and extends their universal feature selection framework to accommodate noisy observations as well as attribute structures that may exhibit directional preferences. We introduce the notion of weak spherical symmetry, quantified by second-moment distances, which allows controlled deviations from rotational invariance. Under this relaxed condition, we develop a universal feature selection framework based on the singular value decomposition of the canonical dependence matrix computed from noisy data. Our main result shows that the selected features achieve asymptotically optimal error exponents up to a residual term that depends on the symmetry deviation $\delta$ and the noise levels $\eta_1, \eta_2$. When $\delta, \eta_1, \eta_2$ are relatively small, our result recovers that of [5], thereby demonstrating that exact spherical symmetry is unnecessary. Overall, our findings highlight the robustness of the selection framework against second-moment deviations and observation noise, thereby broadening its applicability across diverse inference tasks and providing a theoretically grounded tool for universal feature selection in practical scenarios.
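A common concrete instance of the SVD step in this line of work uses the canonical dependence matrix $B[x,y]=P(x,y)/\sqrt{P(x)P(y)}$, whose top singular value is always 1 with the square-root-marginal singular vectors; the next singular directions rank candidate features. A sketch under that assumed choice of matrix, with a toy joint pmf (the paper's noisy-data version differs):

```python
import numpy as np

def canonical_dependence_matrix(pxy):
    """B[x,y] = P(x,y) / sqrt(P(x) P(y)) for a joint pmf given as a matrix."""
    px = pxy.sum(axis=1)
    py = pxy.sum(axis=0)
    return pxy / np.sqrt(np.outer(px, py))

# Joint pmf of two correlated ternary variables (rows: x, cols: y).
pxy = np.array([[0.20, 0.05, 0.05],
                [0.05, 0.20, 0.05],
                [0.05, 0.05, 0.30]])
B = canonical_dependence_matrix(pxy)
U, s, Vt = np.linalg.svd(B)
# s[0] == 1 is the trivial pair; s[1], s[2] rank feature directions.
```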
cs.IT 2026-05-11 2 theorems

Secure subset retrieval hits rate 1-1/N for any family

Secure and Private Structured-Subset Retrieval: Fundamental Limits and Achievable Schemes

The bound holds with shared-randomness ratio exactly D over N-1 and one scheme works for every demand family.

This work introduces the \emph{Secure and Private Structured-Subset Retrieval (SPSSR)} problem. In SPSSR, a user wishes to retrieve one subset from an arbitrary family of size-$D$ subsets from $K$ messages replicated across $N$ non-colluding servers that share randomness unknown to the user. The privacy requirement ensures that no server learns which subset is requested, while the security requirement ensures that the user learns nothing about the messages outside the requested subset. This generalizes Symmetric Multi-message Private Information Retrieval (SMPIR), where the candidate demand sets consist of all size-$D$ subsets. We show that, for every candidate demand family, the maximum achievable retrieval rate is equal to ${1-1/N}$. We also show that the minimum ratio between the size of the shared randomness and the message size required to achieve this rate is ${D/(N-1)}$, and that, for balanced linear SPSSR schemes, the minimum required subpacketization level is ${(N-1)/\gcd(D,N-1)}$; both quantities are independent of the demand family. Our converse proof for the maximum achievable retrieval rate applies to arbitrary demand families, unlike the existing proof for SMPIR, which is tailored to the full demand family. For achievability, we construct a single SPSSR scheme that applies uniformly to every demand family, achieves the optimal retrieval rate with the optimal shared-randomness ratio, and requires the optimal subpacketization level among balanced linear schemes. This subpacketization level is no larger than that of known SMPIR schemes in any parameter regime and is smaller in some regimes.
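The three optimal quantities stated above are simple closed forms; a sketch that evaluates them for illustrative parameters:

```python
from fractions import Fraction
from math import gcd

def spssr_limits(N, D):
    """Optimal SPSSR quantities as stated in the abstract: retrieval rate,
    shared-randomness ratio, and balanced-linear subpacketization level.
    All three are independent of the demand family."""
    assert N >= 2 and D >= 1
    rate = Fraction(N - 1, N)                 # 1 - 1/N
    randomness_ratio = Fraction(D, N - 1)     # D / (N - 1)
    subpacketization = (N - 1) // gcd(D, N - 1)
    return rate, randomness_ratio, subpacketization

rate, rho, L = spssr_limits(N=4, D=2)   # -> (3/4, 2/3, 3)
```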
cs.IT 2026-05-11 1 theorem

Superposition coding boosts covert rates on degraded broadcast channels

Covert Capacity of Degraded Broadcast Channels

The computable capacity region shows time-sharing is suboptimal and superposition achieves higher rates while avoiding warden detection.

We derive the capacity region of the degraded broadcast channel (DBC) subject to the constraint that the communication is not detected by an adversary, the Warden. Our capacity result is in a computable form and numerical results show that time-sharing is suboptimal in general, and improved rates can be obtained through superposition coding.
cs.IT 2026-05-11 Recognition

Eulerian cycles build capacity-achieving weakly constrained codes

Error-Correcting Weakly Constrained Codes: Constructions and Achievable Rates

Expurgation then produces positive-rate codes with linear distance, and concatenation enables efficient encoding and decoding.

We investigate weakly constrained codes, in which specific patterns occur with prescribed frequencies rather than being strictly forbidden as in conventional constrained coding. We propose a capacity-achieving construction of a weakly constrained codebook based on Eulerian cycles. We then obtain, via expurgation, weakly constrained codes with linear minimum distance and positive rate, and analyze the rates achievable. Finally, we propose a practical concatenated code construction that supports polynomial-time encoding and decoding.
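The combinatorial core of such constructions is finding an Eulerian cycle, e.g. with Hierholzer's algorithm; a sketch on a small de-Bruijn-style graph (the graph and its pattern interpretation are illustrative, not the paper's codebook construction):

```python
from collections import defaultdict

def eulerian_cycle(edges):
    """Hierholzer's algorithm: a closed walk using every directed edge
    exactly once (input must be connected with balanced in/out degrees)."""
    adj = defaultdict(list)
    for u, v in edges:
        adj[u].append(v)
    stack, cycle = [edges[0][0]], []
    while stack:
        u = stack[-1]
        if adj[u]:
            stack.append(adj[u].pop())   # extend the current trail
        else:
            cycle.append(stack.pop())    # backtrack, emitting the cycle
    return cycle[::-1]

# De-Bruijn-style graph on 2-bit states; the multiset of edges controls
# how often each length-3 pattern appears along the emitted walk.
edges = [("00", "00"), ("00", "01"), ("01", "10"), ("01", "11"),
         ("10", "00"), ("10", "01"), ("11", "10"), ("11", "11")]
walk = eulerian_cycle(edges)
```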
cs.IT 2026-05-11 2 theorems

Every (n-2) subspace of R^n holds vector with ratio >= n/2-1

Tight Lower Bounds on The Single-Error Detection Threshold for Analog Error-Correcting Codes

This settles the open question on minimal height profiles for single-error detection when the redundancy equals two, and shows the general lower bound is tight when the redundancy divides the dimension.

Analog error-correcting codes (Analog ECCs) for approximate vector-matrix multiplication have been extensively studied as means to achieve fault-tolerant in-memory computation. The theoretical foundations for such coding schemes, particularly the characterization of their correction capabilities via the height profile, have been well established in recent literature. In this paper, we focus on the case of single-error detection Analog ECCs. Among several open problems related to this case proposed by Ron M. Roth in [1], Problem 1 asks: "Identify the values of $k$ and $n$ for which every linear $[n, k]$ code $\mathcal{C}$ over $\mathbb{R}$ satisfies: $$\mathsf{h}_1(\mathcal{C}):=\max_{\boldsymbol{c}\in \mathcal{C}\setminus{\{\boldsymbol{0}\}}}\mathsf{h}_1(\boldsymbol{c})\geq \Big\lceil \frac{k}{n-k} \Big\rceil.\text{"}$$ Here, for any $\boldsymbol{x}\in\mathbb{R}^n$, $\mathsf{h}_1(\boldsymbol{x})$ represents the ratio between the largest and second largest absolute values of $\boldsymbol{x}$'s entries. As the simplest special case of Problem 1 (with $n-k=2$), the following problem was posed as Problem 2 in [1]: "Must every $(n-2)$-dimensional subspace of $\mathbb{R}^n$, $n$ even, contain a nonzero vector in which the ratio between the largest and second largest absolute values of its entries is at least $(n/2)-1$?" These problems directly pertain to the lower bounds on the single-error detection threshold for Analog ECCs: Problem 1 corresponds to arbitrary $n-k$ and Problem 2 corresponds to $n-k=2$. In this paper, we provide an affirmative answer to Problem 2 and a rigorous proof using theories related to convex optimization. Furthermore, we extend our analytical method to show that the lower bound in Problem 1 is tight for the case where $n-k$ divides $k$. Our results fill the gap in the lower bound theory of thresholds for single-error detection in Analog ECCs.
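The height quantity $\mathsf{h}_1$ from Problem 1 is straightforward to compute, and random search gives a weak numerical lower bound on its maximum over a subspace; a sketch (the Monte Carlo search is only illustrative and cannot certify the paper's bound, which needs the convex-optimization analysis):

```python
import numpy as np

def h1(x):
    """Ratio of the largest to second-largest absolute entry of x."""
    a = np.sort(np.abs(np.asarray(x, float)))[::-1]
    return np.inf if a[1] == 0 else a[0] / a[1]

def h1_of_subspace(basis, trials=20000, seed=0):
    """Monte Carlo lower bound on max h1 over the row span of `basis`."""
    rng = np.random.default_rng(seed)
    best = 0.0
    for _ in range(trials):
        v = rng.standard_normal(basis.shape[0]) @ basis
        best = max(best, h1(v))
    return best

rng = np.random.default_rng(1)
basis = rng.standard_normal((4, 6))   # an (n-2)-dim subspace of R^6
found = h1_of_subspace(basis)         # Problem 2: some vector has h1 >= n/2 - 1 = 2
```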
cs.IT 2026-05-11 2 theorems

Bounds set optimal trade-offs for multi-bit watermarking

Fundamental Trade-Offs in Multi-Bit Watermarking of Stochastic Processes

Information-theoretic analysis gives scheme-agnostic limits on false alarms, errors, distortion and rate for stochastic data.

We study multi-bit watermarking for data generated by stochastic processes, where a hidden message is embedded during sampling and must be decodable by an authorized detector that possesses side information unavailable to unauthorized observers. In high-stakes deployments, a practical watermark must simultaneously control false alarms, preserve generation quality without distorting the output distribution, and support reliable multi-bit decoding. Satisfying all three goals at once inevitably creates fundamental trade-offs. We formulate watermark embedding as a distributional information-embedding problem and watermark detection as a multiple-hypothesis testing problem under distortion and rate constraints, leading to four fundamental metrics: false-alarm probability, detection error probability, distortion, and information rate. Within this information-theoretic framework, we derive matched converse and achievability bounds that characterize the optimal trade-offs and provide scheme-agnostic benchmarks for any watermarking method. For stationary ergodic stochastic processes, we further obtain matched asymptotic limits and connect them to the finite-sample regime. Finally, we present a reference watermarking construction satisfying our assumptions and empirically illustrating the predicted trade-offs.
cs.IT 2026-05-11 Recognition

Two-level rotations raise secrecy rates in multicast ISAC

Sensing-Aided Secure Multicast in Two-Level Rotatable Antenna-Enabled ISAC Systems: Modeling and Optimization

Array and element rotations plus CRB uncertainty modeling outperform fixed antennas by directing gains to users and suppressing leakage to eavesdroppers.

In physical layer security, the channel state information (CSI) of passive eavesdroppers is usually difficult to obtain, which has motivated sensing-aided secure communication (SASC). However, in secure multicast scenarios, conventional fixed-position antennas (FPAs) provide limited spatial flexibility for simultaneously serving multiple legitimate users and suppressing leakage toward possible eavesdropper directions. Motivated by this, a novel two-level rotatable antenna (RA)-enabled sensing-aided secure multicast scheme is proposed in this paper. In the proposed architecture, array-level and element-wise rotations are jointly exploited with analog beamforming for user enhancement and leakage suppression. To characterize imperfect eavesdropper sensing, the maximum likelihood estimator and the corresponding Cram\'er-Rao bound (CRB) are derived to quantify the angular estimation accuracy. Based on the derived CRB, a probabilistic angular uncertainty region is constructed. A CRB-aware max-min secrecy-rate problem is then formulated by evaluating the eavesdropper leakage over sampled high-probability directions within this region. The non-convex problem is handled through a tractable lower-bound reformulation based on Jensen's inequality and smooth approximation, followed by an alternating optimization algorithm combining manifold optimization and projected-gradient updates. Simulation results show the effectiveness and robustness of the proposed scheme compared with various benchmarks. Beam patterns further reveal that array-level and element-wise rotations play complementary roles in maintaining strong gains toward legitimate users and forming a low-gain region over the eavesdropper angular uncertainty interval.
cs.IT 2026-05-11 2 theorems

Formula sets optimal distance under parity-check support masks

On Codes with Support-Constrained Parity Checks

The distance is achieved over large fields, but generalized Reed-Solomon subcodes fail for some masks like the K6,6 incidence structure.

We study linear codes that maximize minimum distance subject to arbitrary support constraints on the parity-check matrix. Such constraints arise naturally in the design of LDPC codes, locally repairable codes, and hardware-constrained systems where each parity check must involve only a limited number of code symbols. They are also essential in quantum error correction, where sparse stabilizers reduce measurement noise and respect the connectivity constraints of physical qubit architectures. We derive the optimal minimum distance possible given support constraints on the parity-check matrix and show it is achievable over sufficiently large fields. When this maximum distance coincides with the Singleton bound for unconstrained parity check matrices, the dual GM-MDS construction yields generalized Reed--Solomon codes obeying the mask. In the generator-matrix setting, the GM-MDS theorem guarantees that the optimal distance can always be achieved by a subcode of a generalized Reed--Solomon code while satisfying arbitrary support constraints. We show that this is not true for the parity-check setting. We exhibit a set of support constraints, derived from the vertex-edge incidence of $K_{6,6}$, for which the optimal minimum distance cannot be realized by any subcode of a generalized Reed--Solomon code over any field. We also analyze structured constraint families -- regular, balanced, and cyclic masks -- through numerical optimization, providing design guidance for practical code constructions.
cs.IT 2026-05-11 Recognition

Fluid antennas raise RIS-NOMA sum rates at high SNR

Fluid Antennas Assisted RIS-NOMA Communication Networks

Simulations show gains versus fixed antennas and orthogonal access, with further improvement from more RIS elements or larger fluid antennas.

This paper introduces a fluid antenna system (FAS) into reconfigurable intelligent surface (RIS) assisted non-orthogonal multiple access (NOMA) communication networks, where the non-orthogonal users are equipped with planar fluid antennas. Specifically, we formulate a sum rate maximization problem for FAS-RIS-NOMA networks, which jointly optimizes the fluid ports, the RIS deployment, and the phase shift matrix. To solve the resulting non-convex optimization problem involving highly coupled variables, an iterative algorithm based on alternating optimization is employed to decompose the original problem into three subproblems. Exhaustive search is employed for optimizing the fluid ports, particle swarm optimization is used for the RIS deployment, and semidefinite relaxation with successive convex approximation is adopted for optimizing the phase shift matrix. Finally, the simulation results show that: 1) compared with traditional antenna systems and orthogonal multiple access, the FAS-RIS-NOMA networks achieve higher system throughput under high signal-to-noise ratio conditions; and 2) by increasing the number of RIS elements and enlarging the FAS size, the sum rate of FAS-RIS-NOMA networks can be significantly enhanced.
cs.IT 2026-05-11 2 theorems

Non-binary LDPC codes attain ultimate distance bound

Non-binary LDPC codes for Data Storage

Lifted constructions from binary matrices reach the maximum possible minimum distance for erasure correction.

In modern data storage systems, non-binary LDPC codes for recovering from disk failures are increasingly considered strong competitors to MDS codes such as Reed-Solomon codes. Since disk failures can be modeled as erasures, we analyze non-binary LDPC codes over a $q$-ary field in the $q$-ary erasure channel, relative to MDS codes. Our focus is on non-binary LDPC codes whose parity-check matrix is obtained by replacing the non-zero entries of a binary base matrix by elements of a $q$-ary finite field. For such LDPC codes, we introduce the notion of ultimate distance, which upper-bounds their minimum distance. We derive a random-coding bound on the number of non-correctable erasure patterns for the Gallager ensemble of regular non-binary LDPC codes under maximum-likelihood decoding. An algorithm for finding the ultimate distance is presented. A low-complexity algorithm for searching for the minimum distance of the non-binary LDPC code is proposed. Finally, we construct examples of non-binary LDPC codes achieving the ultimate distance.
cs.IT 2026-05-11 1 theorem

Partitioned polar codes cut SCLF complexity by 77%

On Reducing Decoding Complexity of Successive-Cancellation List Flip Decoding of Polar Codes

PSCLF adds 0.1 dB gain over SCLF and matches plain SCL latency at low FER through early termination and restarts.

The recently proposed SCLF decoding algorithm for polar codes improves the error-correcting performance of state-of-the-art SCL decoding. However, it comes at the cost of a higher complexity. In this paper, partitioned polar codes tailored to the proposed PSCLF decoding algorithm are used to reduce the complexity of SCLF. Indeed, compared to SCLF, PSCLF allows early termination and is able to restart by skipping part of the decoding tree that is traversed sequentially. In order to maximize the coding gain, a design of partitions tailored to PSCLF is proposed. In this extended paper, a dynamic flip metric is used, as well as the possibility to flip multiple times during SCL. An analysis of the impact of this strategy on the early termination and the CRC collisions encountered in PSCLF is carried out. Error-correction performance is shown for multiple code rates and multiple partition strategies. With the baseline algorithm SCL with $L=2$, a degradation of only $0.05$ dB is shown with respect to SCL-64, using $\omega=3$ flips per trial with $T_{max}=300$ trials. Numerical results show that the proposed PSCLF algorithm has an error-correction performance gain of up to 0.1 dB with respect to SCLF with the same decoding parameters. This work is also compared with existing techniques for reducing the complexity of the SCLF decoding algorithm. The proposed algorithm reduces the complexity by up to 77% at a frame-error rate of $0.01$ with respect to SCLF, and further reduces the decoding complexity of SCLF variants that also embed a restart mechanism. The average execution time of PSCLF matches the latency of SCL at $\text{FER}=4\cdot10^{-3}$ and below.
cs.IT 2026-05-11 Recognition

New test pattern sets improve Chase decoding of BCH codes by 0.2 dB

Chase-like Decoding: Test Pattern Design and Performance Analysis

A covering algorithm selects patterns that capture likely errors, outperforming Chase-II sets at the same complexity for high-rate codes.

Chase-like decoding algorithms are a popular choice for soft-input decoding of algebraic codes. In this paper, we evaluate the performance of different test pattern sets using three methods. For test pattern sets with a certain structure such as Chase-II test patterns and patterns up to a maximum logistic weight, we use a method that relies on order statistics. The performance of arbitrary sets of test patterns is evaluated by calculating covered space probabilities and via direct Monte Carlo simulation. Based on the idea of covering as many likely error patterns as possible, we propose an algorithm for the design of test pattern sets which perform up to 0.2$\,$dB better for high-rate BCH codes than commonly used test pattern sets.
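As a reference point for the structured sets discussed above, the classical Chase-II test set flips every subset of the $t$ least-reliable positions, giving $2^t$ test patterns; a minimal sketch (the LLR vector is illustrative):

```python
from itertools import combinations
import numpy as np

def chase2_patterns(reliabilities, t):
    """Chase-II test set: flip every subset of the t least-reliable
    positions, producing 2^t binary error patterns."""
    n = len(reliabilities)
    least = np.argsort(reliabilities)[:t]     # indices of smallest |LLR|
    patterns = []
    for r in range(t + 1):
        for subset in combinations(least, r):
            e = np.zeros(n, dtype=int)
            e[list(subset)] = 1
            patterns.append(e)
    return patterns

llr = np.array([2.1, -0.2, 1.7, 0.4, -3.0, 0.1, 0.9])
tps = chase2_patterns(np.abs(llr), t=3)   # 8 patterns over positions {5, 1, 3}
```

Each pattern is XORed onto the hard decision before algebraic decoding; the designed sets in the paper replace this subset structure with patterns chosen to cover more likely error events at the same budget.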
cs.IT 2026-05-11 2 theorems

Embeddings yield O(min{Δ, d/n}) KL-risk smoothing for next-word distributions

Semantic Smoothing for Language Models via Distribution Estimation and Embeddings

Interpolation blends empirical counts with embedding-derived neighbors, cutting worst-case risk and test perplexity on standard estimators.

We propose semantic smoothing, a smoothing method for language models that uses embeddings to share statistical observations across semantically similar contexts. The starting point is a decomposition of log-perplexity that motivates smoothing as a collection of distribution-estimation problems under Kullback-Leibler (KL) loss. We then show that, under a Lipschitz-logit model for embedding-based language generation, proximity of context embeddings implies proximity of the corresponding next-word distributions in KL divergence. Combining these observations, we formulate semantic smoothing as distribution estimation in KL loss with KL-proximity side information. For $n$ samples on a $d$-symbol alphabet with a side-information distribution at KL distance $\Delta$, we give an interpolation estimator with worst-case KL risk $O(\min\{\Delta,d/n\})$, and prove a matching-order lower bound for uniform side information. We extend the estimator to multiple and empirically estimated synonymous distributions. Experiments on synthetic Markov data and WikiText-103 bigram models using Word2Vec, GloVe, and GPT-2 embeddings show that semantic smoothing consistently reduces test perplexity when applied to add-constant and Kneser-Ney estimates.
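The interpolation estimator can be sketched as a convex blend of a smoothed empirical distribution with the side-information distribution; the fixed blend weight and add-one smoothing below are illustrative simplifications, not the paper's tuned choices:

```python
import numpy as np

def kl(p, q):
    """KL divergence in nats; conventionally 0*log(0/q) = 0."""
    mask = p > 0
    return float((p[mask] * np.log(p[mask] / q[mask])).sum())

def interpolate_estimate(counts, q, lam):
    """Blend an add-one-smoothed empirical distribution with a
    side-information distribution q; lam weights the empirical part."""
    counts = np.asarray(counts, float)
    emp = (counts + 1.0) / (counts.sum() + counts.size)
    return lam * emp + (1.0 - lam) * q

rng = np.random.default_rng(1)
d, n = 50, 200
p = rng.dirichlet(np.ones(d))            # true next-word distribution
q = 0.9 * p + 0.1 / d                    # nearby side-info distribution (small Delta)
counts = np.bincount(rng.choice(d, size=n, p=p), minlength=d)
risk_emp = kl(p, interpolate_estimate(counts, q, lam=1.0))
risk_mix = kl(p, interpolate_estimate(counts, q, lam=0.3))
# With close side information, the blend typically lowers the KL risk,
# mirroring the O(min{Delta, d/n}) behavior.
```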
0
0
cs.IT 2026-05-11 Recognition

Physics-based computers achieve optimal detection for 2048-antenna MIMO

Physics-Inspired Probabilistic Computing for Extremely Large-Scale MIMO Detection in Future 6G Wireless Systems

Probabilistic Ising machines reach maximum-likelihood accuracy with 100 iterations and outperform standard MMSE methods for future 6G arrays

Extremely large-scale multiple-input multiple-output (XL-MIMO) architectures are a key enabler of forthcoming 6G wireless communication networks, allowing high data rates through massive spatial multiplexing. Here, we approach the XL-MIMO detection problem with physics-inspired unconventional computing based on Ising machines (IMs). For binary modulation, probabilistic IMs (PIMs) and oscillator-based IMs achieve optimal maximum-likelihood (ML) detection for systems with up to 2048x2048 antennas in only 100 iterations, matching optimal sphere-decoder performance for computationally tractable sizes and outperforming the industry-standard minimum mean-square error (MMSE) detector. For M-QAM up to 256, a generalized PIM-inspired framework based on d-dimensional probabilistic variables (p-dits) that directly encode QAM symbols achieves low bit-error rates for sizes up to 256x256 antennas, outperforming or matching MMSE with reduced algorithmic complexity. Unlike the binary mapping, the p-dit interaction matrix is independent of the QAM order, enabling adaptive MIMO modulation. These results point to a scalable paradigm for XL-MIMO detection in future 6G networks.
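A toy version of the Ising-machine idea for binary modulation: ML detection minimizes $\|y - Hx\|^2$ over $x \in \{-1,+1\}^N$, which maps onto an Ising energy that stochastic spin flips can anneal. The sketch below is a plain Metropolis sampler under my own parameter choices, not the paper's PIM hardware dynamics:

```python
import numpy as np

def pim_detect(H, y, n_iter=100, beta=3.0, seed=0):
    """Toy probabilistic-Ising-machine sketch for BPSK MIMO detection:
    stochastically flip spins x in {-1,+1}^N to reduce ||y - H x||^2."""
    rng = np.random.default_rng(seed)
    N = H.shape[1]
    x = rng.choice([-1.0, 1.0], size=N)
    r = y - H @ x                              # current residual
    for _ in range(n_iter):
        for i in range(N):
            r_flip = r + 2.0 * x[i] * H[:, i]  # residual if spin i flips
            dE = r_flip @ r_flip - r @ r       # energy change of the flip
            if dE < 0 or rng.random() < np.exp(-beta * dE):
                x[i], r = -x[i], r_flip
    return x

rng = np.random.default_rng(1)
H = rng.normal(size=(8, 4))
x_true = np.array([1.0, -1.0, -1.0, 1.0])
x_hat = pim_detect(H, H @ x_true)   # noiseless toy instance; the sampler
                                    # typically settles on x_true here
```

The appeal of such machines is that each spin update uses only local fields, so the sweep parallelizes naturally in hardware.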
0
0
cs.IT 2026-05-11 2 theorems

Parametric model reconstructs radio maps for unknown satellite beams

Beam-Aware Radio Map Estimation With Physics-Consistent Parametric Modeling for Unknown Multiple Satellites

It identifies active satellites and builds accurate continuous RSS fields, aiding interference handling in dense LEO networks.

Satellite networks with dense low Earth orbit (LEO) constellations rely on aggressive spectrum reuse, making co-channel interference a dominant and rapidly varying factor that limits link availability and complicates spectrum sharing and compliance. Satellite radio map (RM) construction is therefore essential for interference cognition, yet it is challenging because the active satellite set is unknown, beam footprints and pointing are not directly observable, and received signal strength (RSS) measurements are difficult to calibrate under coupled link budget variations and noise. These latent uncertainties yield a severely underdetermined inverse problem with strong signature coherence, where existing methods often trade detection recall for precision and still fail to recover a faithful continuous RSS field. This paper proposes a beam-aware RM estimation framework that unifies active satellite identification and RSS field reconstruction through physics-consistent parametric modeling. An interpretable structural prior links geometry and beam shaping to spatial RSS formation, and an adaptive model order selection strategy infers the number of active satellites from measurements by balancing fit and complexity. Extensive experiments across varying signal-to-noise ratio (SNR), total satellite count, and active satellite count demonstrate consistently higher RSS spatial correlation, lower root mean squared error (RMSE), and improved F1 score, validating the proposed approach for interference-aware RM construction in dense LEO networks.
0
0
cs.IT 2026-05-11 2 theorems

Kolmogorov-Nagumo means capture more conditional entropies

Kolmogorov--Nagumo Mean Frameworks for Conditional Entropy

The new averaging framework represents Augustin-Csiszár conditional entropy in cases where standard η-averaging cannot.

This study focuses on conditional entropy frameworks based on the Kolmogorov--Nagumo (KN) mean. First, $(\eta, \psi)$-KN averaging (\texttt{EPKNAVG}), a KN-mean extension of the $\eta$-averaging (\texttt{EAVG}) framework for $(\eta, F)$-entropies, is introduced and proven to be equivalent to \texttt{EAVG} under suitable concavification conditions. Second, motivated by generalized $g$-vulnerability, a new framework is proposed for generalized $g$-conditional entropies. This framework captures conditional entropies beyond the scope of \texttt{EAVG}-type representations. In particular, it is shown that there exists an $\alpha$ and a joint probability distribution $p_{X, Y}$ such that the Augustin--Csisz{\' a}r conditional entropy $H_{\alpha}^{\mathrm{C}}(X|Y)$ cannot be represented by any $(\eta,F)$-entropy satisfying \texttt{EAVG}. In contrast, it is represented within the proposed framework. Furthermore, sufficient conditions are derived under which the proposed generalized $g$-conditional entropies satisfy the conditioning reduces entropy property and the data-processing inequality.
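For reference, the Kolmogorov--Nagumo (quasi-arithmetic) mean underlying these frameworks is $M = \psi^{-1}\!\big(\sum_i w_i\,\psi(x_i)\big)$; different choices of $\psi$ recover the familiar means. A generic illustration (not the paper's \texttt{EPKNAVG} construction):

```python
import numpy as np

def kn_mean(values, weights, psi, psi_inv):
    """Kolmogorov--Nagumo mean: psi^{-1} of the weighted average of psi(x)."""
    v = np.asarray(values, dtype=float)
    w = np.asarray(weights, dtype=float)
    return float(psi_inv(np.sum(w * psi(v))))

vals, w = [1.0, 4.0], [0.5, 0.5]
arith = kn_mean(vals, w, lambda x: x, lambda s: s)  # identity psi -> 2.5
geom = kn_mean(vals, w, np.log, np.exp)             # log/exp -> sqrt(1*4) = 2.0
```

Exponential choices of $\psi$ give the averaging behind Rényi-type quantities, which is where the gap between $\eta$-averaging and the proposed generalized $g$-conditional entropies becomes visible.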
0
0
cs.IT 2026-05-11 Recognition

Syndrome reformulation proves proximity gaps for random codes near capacity

A Syndrome-Space Approach to Proximity Gaps and Correlated Agreement for Random Linear Codes

Direct proofs reach radius 1-R-ε for large alphabets and near-capacity bounds for constant alphabets without list decoding.

Proximity gaps and correlated agreement have become central tools in the analysis of interactive oracle proofs of proximity (IOPPs) and code-based SNARKs. Informally, a proximity-gap statement says that for a structured set of words -- such as a line, an affine space, or a curve -- either all points are close to the code, or most are far from it. Such statements are essential in sampling-based proof systems, where a verifier queries only a few random locations on a structured object but must still obtain a global soundness guarantee. In Reed--Solomon-based proof systems, one would ideally like the proximity parameter to approach the information-theoretic limit $1-R$, since this is the largest possible radius for a rate-$R$ code and directly affects protocol efficiency. While recent work has substantially strengthened the picture for algebraic codes and linked proximity gaps to decoding-related structural properties, it remains unclear whether analogous results for random linear codes can be proved directly, rather than through decoding-theoretic surrogates. In this work, we establish a direct approach to proximity gaps and correlated agreement for random linear codes in the random parity-check-matrix model, without relying on list decoding as the main engine of the proof. Our approach is based on a syndrome-space reformulation together with a witness-based reduction mechanism, and it yields strong results for affine lines, affine spaces, and polynomial curves. It is conceptually different from the existing decoding-driven route for random linear codes, and it also leads to sharper parameters, including the optimal-up-to-$\varepsilon$ large-alphabet radius bound $\rho<1-R-\varepsilon$ for $q=\Theta(n)$, as well as near-capacity bounds over constant alphabets with improved alphabet-size requirements.
0
0
cs.IT 2026-05-11 Recognition

Log-domain SOCS rule nears full performance in turbo product codes

A Log-Domain Approximation of SOCS Decoding for Turbo Product Codes

A piecewise-linear function of reliability gaps replaces probability arithmetic while staying compatible with standard iterative decoding.

This paper studies low-complexity soft-output decoding of turbo product codes with extended Bose--Chaudhuri--Hocquenghem component codes. Recent soft-output from covered space (SOCS) decoding substantially improves the quality of extrinsic information compared with the conventional Chase--Pyndiah decoder, but its probabilistic-domain implementation is less attractive for hardware-oriented realizations. We therefore propose a log-domain approximation of SOCS based on the max-log approach. The proposed soft-input soft-output rule replaces probability-domain operations with a piecewise-linear function of reliability gaps between competing hypotheses inside and outside the Chase-II decoding list, which preserves compatibility with the standard iterative TPC decoding loop. Numerical results for a TPC built from (256,239) eBCH component codes show that the proposed decoder clearly outperforms the baseline Chase--Pyndiah decoder with the same list size and approaches the performance of the SOCS decoder.
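The max-log device behind such log-domain rules replaces log-sum-exp with a maximum, with error at most log 2 per pairwise combination. This illustrates the general technique, not the paper's exact SOCS rule:

```python
import numpy as np

def log_sum_exp(a, b):
    """Exact Jacobian logarithm: log(e^a + e^b)."""
    return np.logaddexp(a, b)

def max_log(a, b):
    """Max-log approximation: log(e^a + e^b) ~ max(a, b)."""
    return max(a, b)

a, b = 3.0, -1.0
err = log_sum_exp(a, b) - max_log(a, b)
# err = log(1 + e^{-|a-b|}) <= log 2, and it vanishes as |a - b| grows,
# which is why max-log works well when one hypothesis dominates
```

Replacing the exponential arithmetic this way is what makes the resulting rule a piecewise-linear function of reliability gaps.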
0
0
cs.IT 2026-05-11 2 theorems

Bregman losses yield dual UMVUEs via Rao-Blackwell

UMVUE-Type Estimators under Bregman Losses

One direction recovers classical unbiased estimation; the other supports new minimum-variance estimators through dual-space conditioning.

We study unbiased estimation under Bregman losses and develop an extension of the classical theory of uniformly minimum variance unbiased estimators (UMVUEs). Exploiting bias--variance-type decompositions for Bregman divergences, we consider two natural loss functions, $D_{\varphi}(\theta,\hat{\theta})$ and $D_{\varphi}(\hat{\theta},\theta)$, and their corresponding notions of unbiasedness. We show that the latter formulation reduces to the classical setting, whereas the former yields a different framework in which unbiasedness is characterized in the dual space induced by $\nabla\varphi$. For the nontrivial case, we establish analogs of the Rao--Blackwell and Lehmann--Scheff{\'e} theorems, providing a systematic construction of type-I Bregman UMVUEs.
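The divergence in question is $D_{\varphi}(x,y) = \varphi(x) - \varphi(y) - \langle \nabla\varphi(y), x - y\rangle$. A scalar sketch showing the two classic instances (squared error and generalized KL); type-I unbiasedness then lives in the dual coordinates, $\mathbb{E}[\nabla\varphi(\hat\theta)] = \nabla\varphi(\theta)$, which for $\varphi(x) = x^2$ collapses back to ordinary unbiasedness:

```python
import numpy as np

def bregman(phi, grad_phi, x, y):
    """Scalar Bregman divergence D_phi(x, y) = phi(x) - phi(y) - phi'(y)(x - y)."""
    return phi(x) - phi(y) - grad_phi(y) * (x - y)

# phi(x) = x^2 recovers squared error: D(3, 1) = (3 - 1)^2 = 4
sq = bregman(lambda x: x**2, lambda x: 2*x, 3.0, 1.0)

# phi(x) = x log x recovers generalized KL: D(2, 1) = 2 log 2 - 2 + 1
gkl = bregman(lambda x: x * np.log(x), lambda x: np.log(x) + 1.0, 2.0, 1.0)
```

The asymmetry of $D_{\varphi}$ is exactly why the two loss orderings, $D_{\varphi}(\theta,\hat\theta)$ and $D_{\varphi}(\hat\theta,\theta)$, lead to different notions of unbiasedness.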
0
0
cs.IT 2026-05-11 2 theorems

Environment carries data by distinguishable physical states sensed via RF

Embodied Communication: Sensing-Induced Reliability Fields and Capacity Bounds

Multi-snapshot sensing produces a reliability field that turns state distinction into a geometric codebook problem with a finite optimal sensing time.

This paper introduces embodied communication, a new wireless communication modality in which information is imprinted onto environmental states and recovered by the receiver through sensing. No dedicated communication transmitter is activated, and no additional communication spectrum is occupied; instead, the sensed environment itself becomes the carrier of information. The key insight is that sensing must be reinterpreted for communication. Rather than asking how accurately an unknown physical state can be estimated, embodied communication asks how reliably two states can be distinguished. We formalize this idea through a multi-snapshot radio frequency (RF) sensing model and derive a sensing-induced reliability field that quantifies the distinguishability between physical states. This field turns embodied symbol design into a geometric packing problem shaped by the sensing resolution of the infrastructure. For this embodied channel, we characterize the finite-snapshot $\epsilon$-capacity through achievable designs and converses. We develop lattice-based codebooks, obtain a closed-form hexagonal design under a main-lobe approximation, and establish information-theoretic and geometric upper bounds. We further reveal an intrinsic sensing-duration tradeoff: more sensing snapshots improve reliability, but also lengthen each embodied symbol, leading to a finite optimal sensing time. These results expose a latent communication pathway in sensing-enabled infrastructure and show how the environment can be transformed from a passive backdrop into an active information carrier.
0
0
cs.IT 2026-05-11 2 theorems

Channel dimension caps wireless AI models at ~30 million parameters

How Big Should a Wireless Foundation Model Be?

Physical constraints on propagation degrees of freedom make further scaling ineffective once data is sufficient.

Wireless foundation models are rapidly emerging as a key enabler of AI-native communication systems, yet a fundamental question remains unanswered: how large should these models be? We present a principled, physics-grounded answer, showing that the intrinsic dimensionality (dNL, the nonlinear manifold dimension of the channel) acts as the fundamental bottleneck, defining the scaling ceiling once a data-sufficient regime is reached. This dimensionality is not a design choice but a physical constraint: Maxwell's equations, finite scatterers, and antenna aperture inherently constrain wireless propagation environments to a limited number of degrees of freedom -- spanning 5-35 across both real-world OTA measurements and 3GPP-standardized channel models we evaluate -- orders of magnitude below the ~1,000-dimensional semantic space of language. As a consequence, we propose a scaling framework for wireless AI: taking NTN satellite channels as a representative case (dNL ~= 14), scaling gains diminish rapidly beyond ~30 million parameters, entering a stochastic asymptote above 70M where a further 1.6x increase (96M->150M) yields only 0.52 dB. Beyond this ceiling, inference-time adaptation via pilot-aided test-time training (TTT) is far more effective: a compact 12M-parameter model surpasses a static 96M model by 9.9 dB (NMSE, SNR = 20 dB) / 7.6 dB (MCM, SNR = 10 dB) at one-eighth the parameters. With dNL distributions validated across real-world indoor massive MIMO measurements, our scaling laws and TTT gains are demonstrated through NTN satellite simulations, reframing wireless AI design: channel geometry -- not model size -- fundamentally governs the scaling laws of physical-layer wireless AI.
0
0
cs.IT 2026-05-11 2 theorems

Movable subarrays boost near-field multiuser rates

Movable Subarray-Aided Hybrid Beamforming for Near-Field Multiuser Communications

Coupling subarray movement with hybrid beamforming adds position-dependent freedom for sharper focusing and less interference.

Movable antenna (MA)-enabled near-field (NF) communications offer significant potential for 6G, yet existing designs often neglect the practical constraints of hybrid beamforming (HBF) for extremely large-scale MIMO (XL-MIMO). Conversely, MA-aided HBF frequently overlooks the rich NF degrees of freedom (DoFs). This paper proposes a movable subarray (MSA)-aided HBF architecture for NF multiuser systems, which strikes a strategic balance between hardware practicality and spatial flexibility. By coupling MSA movement with HBF, the proposed design simultaneously exploits NF distance-dependent and MSA position-dependent DoFs, enabling highly precise beamfocusing and superior interference mitigation. To alleviate the computational burden, a hybrid planar-spherical wave model is introduced for efficient channel approximation. Furthermore, an alternating optimization (AO) algorithm is developed by integrating fractional programming, the alternating direction method of multipliers (ADMM), and projected gradient ascent. Simulation results validate substantial sum-rate gains over fixed-position antenna (FPA) benchmarks.
0
0
cs.IT 2026-05-11 2 theorems

MLE attains sub-Gaussian tails and entropic normality

Sub-Gaussian Concentration and Entropic Normality of the Maximum Likelihood Estimator

Score assumptions produce tail bounds, moment convergence, and vanishing relative entropy to the Gaussian, with smoothing removable when the Fisher information is bounded.

It is well known that, under standard regularity conditions, the maximum likelihood estimator (MLE) satisfies a central limit theorem and converges in distribution to a Gaussian random variable as the sample size grows. This paper strengthens this classical result by developing several stronger forms of asymptotic normality for the normalized MLE. With additional assumptions on the score, we first establish sub-Gaussian tail bounds and convergence of all moments for the normalized estimation error. We then prove an entropic central limit theorem for a smoothed version of the estimator, showing convergence in relative entropy to the limiting Gaussian law. When the Fisher information of the normalized estimate is bounded, or its density has bounded first derivative, we further show that the smoothing can be removed, yielding entropic normality of the MLE itself. The proofs develop auxiliary tools that may be of independent interest, including exponential consistency bounds, high-moment estimates, and entropy-control arguments for the estimator.
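In the Gaussian location model the claim is easy to see numerically: the MLE is the sample mean, the normalized error $\sqrt{n}(\hat\theta - \theta)$ is exactly $N(0,\sigma^2)$, and its tails sit below the sub-Gaussian bound $2e^{-t^2/2\sigma^2}$. A quick toy verification of this textbook case (my own check, not the paper's proof technique):

```python
import numpy as np

rng = np.random.default_rng(0)
n, trials, sigma = 50, 20_000, 1.0

# MLE of the mean is the sample average; normalize the error by sqrt(n)
samples = rng.normal(0.0, sigma, size=(trials, n))
errs = np.sqrt(n) * samples.mean(axis=1)

t = 2.0
tail = np.mean(np.abs(errs) > t)                 # empirical P(|error| > t)
bound = 2.0 * np.exp(-t**2 / (2.0 * sigma**2))   # sub-Gaussian tail bound
# tail ~= 0.0455 (the exact N(0,1) tail) sits well below bound ~= 0.27
```

The paper's contribution is that comparable tail and entropy control holds far beyond this exactly-Gaussian special case.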
0
0
cs.IT 2026-05-08 2 theorems

Two RF chains match full digital fluid antenna performance

Hybrid Multiport Receivers for Slow Fluid Antenna Multiple Access

The hybrid multiport receiver cuts computation by over 60 percent in slow multiuser FAMA while keeping the benefits of port selection.

We propose a novel receiver architecture that preserves the performance benefits of multiport selection in fluid-antenna systems while requiring only a very small number of radio-frequency (RF) chains. The resulting fluid-antenna hybrid multiport (FAHM) receiver effectively decouples port selection from signal combining by integrating a low-complexity analog combining network similar to those used in conventional hybrid multiantenna designs. We develop a stopping criterion to determine the number of selected ports, which limits the performance loss associated with port selection, and then design the hybrid combiner for a given RF-chain budget. The FAHM architecture is evaluated in a multiuser setup operating under slow fluid-antenna multiple access (FAMA). In this scenario, a FAHM implementation with only 2 RF chains achieves performance comparable to that of a fully digital conventional multiport scheme with a much larger number of RF chains. Additionally, the proposed receiver architecture attains over 60% reduction in computational burden when integrated with a novel efficient implementation of the state-of-the-art generalized-eigenvector port-selection method.
0
0
cs.IT 2026-05-08 1 theorem

Source encryption rate bound is independent of error and leakage levels

A Framework of Variable-Length Source Encryption using Mutual Information Security Criterion: Universal Coding, Strong Converse Theorem

The strong converse shows that the threshold for reliable and secure communication remains fixed for any chosen tolerances, and universal coding schemes attain it.

In this paper, we propose a framework of source encryption in which cryptographic processing is applied to a prescribed fixed-length source code. The proposed framework is based on the secure communication framework of the Shannon cipher system and uses mutual information as the measure of information leakage to an adversary. For this framework, we explicitly establish the necessary and sufficient condition for reliable and secure communication under the requirement that the error probability and the information leakage are upper bounded by prescribed constants $\varepsilon\in (0,1)$ and $\delta \in (0,\infty)$, respectively. We also show that this necessary and sufficient condition does not depend on the constants $\varepsilon$ and $\delta$, i.e., the strong converse theorem holds for the proposed framework of source encryption. We further prove the existence of encryption/decryption schemes that are universal in the sense that they work effectively for any distribution of the plaintext and of the key used for encryption.
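The mutual-information leakage criterion has a familiar extreme case: in the one-time-pad instance of the Shannon cipher system, a uniform key makes the ciphertext independent of the plaintext, so $I(M;C)=0$ for every plaintext distribution. A small numerical check (function name and values are mine):

```python
import numpy as np
from itertools import product

def leakage_bits(p_m, key_bits):
    """I(M; C) in bits for the one-time pad C = M xor K,
    with K uniform on {0,1}^key_bits and M on the same alphabet."""
    n = 2 ** key_bits
    p_mc = np.zeros((n, n))
    for m, k in product(range(n), repeat=2):
        p_mc[m, m ^ k] += p_m[m] / n        # joint law of (M, C)
    p_c = p_mc.sum(axis=0)
    mi = 0.0
    for m, c in product(range(n), repeat=2):
        if p_mc[m, c] > 0:
            mi += p_mc[m, c] * np.log2(p_mc[m, c] / (p_m[m] * p_c[c]))
    return mi

p_m = np.array([0.7, 0.1, 0.1, 0.1])        # non-uniform 2-bit plaintext
leak = leakage_bits(p_m, key_bits=2)        # -> 0 up to floating point
```

The interesting regime in the paper is, of course, when the key is shorter than the message, so the leakage $\delta$ can be positive and the question becomes what rates remain achievable.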
1 0
0
cs.IT 2026-05-08

Skew Goppa codes are subfield subcodes of skew Reed-Solomon codes

Generalized Skew Multivariate Goppa Codes

A new parity check matrix establishes the link and provides bounds on dimension and minimum distance for the multivariate case.

We introduce Generalized Skew Multivariate Goppa codes relying on the theory of multivariate Ore polynomials. These codes contain, as a particular case, the Generalized Skew Goppa codes. By providing a new parity check matrix for the latter, we show that, under some hypotheses, they are subfield subcodes of Generalized Skew Reed--Solomon codes. This result turns out to be helpful to study the parameters of Skew Multivariate Goppa codes, for which we provide bounds on their dimension and minimum distance.
0
0
