In this note, we derive an elementary version of the coarea formula by considering the mass of a solid body with density $g(x)$. Then we present a rigorous proof using the change of variables formula. To this end we construct the diffeomorphism $\Phi$ via the gradient flow and compute its Jacobian determinant by a geometric method.
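The mass heuristic behind this derivation can be checked numerically: for a radial density, the shell (level-set) decomposition of the mass must agree with a direct planar integral. A minimal sketch, assuming the hypothetical radial density $g(r)=1+r^2$ on the unit disc (both the density and the grid resolution are our choices, not the note's):

```python
import math

# Mass of the unit disc with radial density g(r) = 1 + r^2 (an assumed example),
# computed (a) directly on a 2D midpoint grid and (b) by the shell decomposition
#   mass = integral_0^1 g(r) * 2*pi*r dr
# that underlies the coarea formula for u(x) = |x|.

def mass_grid(g, R=1.0, n=800):
    h = 2.0 * R / n
    total = 0.0
    for i in range(n):
        for j in range(n):
            x = -R + (i + 0.5) * h
            y = -R + (j + 0.5) * h
            r = math.hypot(x, y)
            if r <= R:
                total += g(r) * h * h
    return total

def mass_shells(g, R=1.0, n=10000):
    h = R / n
    return sum(g((i + 0.5) * h) * 2.0 * math.pi * ((i + 0.5) * h) * h
               for i in range(n))
```

Both evaluations approximate $3\pi/2$ for this density, illustrating the equality that the coarea formula makes rigorous.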
The resulting counts satisfy combinatorial identities and admit asymptotic approximations for large sets.
When one inserts a number of identical bars between the blocks of an ordered set partition, one obtains a barred preferential arrangement. In this study we define a new generalization of barred preferential arrangements by considering those with no fixed blocks, and those in which the first $r$ elements of the set are singletons. We derive several combinatorial identities. Combinatorially, these numbers count a kind of generalized barred preferential arrangement. We also provide some asymptotic results for these numbers.
Automated crack inspection is increasingly recognized as a critical component of infrastructure monitoring; however, many current vision-based systems still report cracks only as binary segmentation masks. Such masks facilitate localization but provide limited structural information for robust engineering interpretation. Practical crack assessment requires measurable morphological features, including centerline geometry, branching behavior, junction locations, topology, and severity-related indicators. In this work we present \textit{CrackMorph-XAI-Net}, an explainable morphology-aware framework for image-based crack analysis. It converts crack image and region-mask data into a sequence of interpretable structural outputs through four distinct stages: topology-preserving skeleton extraction, junction detection via Gaussian heatmap regression, morphology descriptor computation, and severity-oriented screening. To support rigorous stage-wise evaluation, we extend the standard \textit{CRACK500} benchmark with aligned skeleton maps, junction heatmaps, and topology labels. Experimental validation shows that the learned skeleton extraction stage achieves a mean Dice coefficient of 0.991, with topology preserved in 98.5\% of test images. The junction detection stage obtains a recall of 0.964 and an F1-score of 0.887, highlighting the efficacy of heatmap regression for sparse structural targets. Descriptor-level evaluation reveals strong agreement between predicted and reference morphology values, with correlations exceeding 0.95 for length, width, orientation, junction count, and tortuosity.
We propose a new integral based on Taylor measures, study its properties extensively, and illustrate that it includes many concepts from mathematics as special cases. In particular, the new integral emerges as a generalization of the discrete Fourier transform, and we identify general conditions for it to be invertible when applied to any real or complex sequence. Applications to the mathematical sciences are also presented.
Cartan method identifies which equations with 4D or 5D symmetry algebras transform to linear form via contact maps.
Using the Cartan equivalence method, we construct invariant coframes for two branches, of rank one and rank zero, which characterize linearizable third-order ODEs under contact transformations with four- and five-dimensional Lie symmetry algebras, respectively. A procedure for deriving the corresponding contact transformations is also presented, along with illustrative examples.
Sine-cosine ratio plus truncated series gives exact order 2P+1 for any natural number P
In this paper, we present a fixed point method for the arctangent based on sine and cosine. Let $t\in \mathbb{R}^{+}$ and $P\in \mathbb{N}$. We define: \[T\left(x\right)=x-\sum_{k=1}^{P}\,\frac{\left(-1\right)^{k-1}}{2\,k-1} \left(\frac {\sin\!\left(x\right)-t\cos\!\left(x\right)} {\cos\!\left(x\right)+t\sin\!\left(x\right)} \right)^{2\,k-1}.\] For every initial value $x_0$ sufficiently close to $\arctan\left(t\right)$, the sequence \[x_{n+1}=T\left(x_{n}\right),\quad n=0,1,\ldots\] converges to $\arctan\left(t\right)$ with order of convergence exactly $2\,P+1$. A practical computation of $\frac{\pi}{4}$ demonstrates the efficiency of the method. (A German version of the paper begins on page 17.)
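The iteration is easy to sketch in code. Note that the ratio equals $\tan(x-\arctan t)$, so each step subtracts a partial arctangent series of the residual; the function name, defaults, and iteration count below are our choices, not the paper's:

```python
import math

def arctan_fixed_point(t, P=2, x0=1.0, iters=3):
    """Iterate T(x) = x - sum_{k=1}^{P} (-1)^(k-1)/(2k-1) * u^(2k-1), where
    u = (sin x - t cos x) / (cos x + t sin x) = tan(x - arctan t),
    so each step removes the residual up to order 2P+1."""
    x = x0
    for _ in range(iters):
        u = (math.sin(x) - t * math.cos(x)) / (math.cos(x) + t * math.sin(x))
        x -= sum((-1) ** (k - 1) / (2 * k - 1) * u ** (2 * k - 1)
                 for k in range(1, P + 1))
    return x
```

With $t=1$ and a nearby start this reproduces $\pi/4$ to machine precision in a few steps, the error dropping roughly like the $(2P+1)$-th power per iteration.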
The fuzzy Sombor index applies the classical Sombor index to fuzzy graphs, incorporating both edge membership values and fuzzy vertex degrees. For $\alpha>1$, the general fuzzy Sombor index is defined as \[ \mathrm{SO}^{\mu}_{\alpha}(\Gamma)=\sum_{uv\in E(\Gamma)} \left( \mu(u,v)\, \sqrt{\mu_u^2+\mu_v^2} \right)^{\alpha}. \] This paper analyses extremal features of $\mathrm{SO}^{\mu}$ across different types of fuzzy graphs. We determine and characterise the maximum (resp. minimum) value of $\mathrm{SO}^{\mu}$ for regular fuzzy graphs. We also establish significant inequalities between the fuzzy Sombor index and other well-known fuzzy topological indices.
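For concreteness, a minimal sketch of the index computation, assuming (as is common, though not stated here) that the fuzzy degree $\mu_u$ is the sum of memberships of edges incident to $u$; the edge-list encoding is ours:

```python
import math

def fuzzy_sombor(edges, alpha=2.0):
    """General fuzzy Sombor index: sum over edges uv of
    (mu(u,v) * sqrt(d(u)^2 + d(v)^2))**alpha, where the fuzzy degree
    d(u) is taken (assumed convention) as the sum of incident edge memberships.
    `edges` is a list of (u, v, membership) triples."""
    deg = {}
    for u, v, mu in edges:
        deg[u] = deg.get(u, 0.0) + mu
        deg[v] = deg.get(v, 0.0) + mu
    return sum((mu * math.hypot(deg[u], deg[v])) ** alpha
               for u, v, mu in edges)
```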
In complete suprametric spaces these mappings have fixed points even if discontinuous, with k-continuity as a sufficient alternative.
The aim of this paper is to generalize some fixed point theorems for the class of convex contractions of order $m$ on a complete suprametric space. We prove that a convex contraction of order $m$ on a complete suprametric space has a fixed point, yet this does not force the mapping to be continuous at the fixed point; continuity can be replaced by the relatively weaker conditions of $k$-continuity or $T$-orbital lower semi-continuity. In this way a new and distinct solution to the open problem of Rhoades (Contemp Math 72:233--245, 1988) is found. In the sequel, we prove some fixed point results in the setting of suprametric spaces which generalize the results of Sehgal, \'Ciri\'c and Fisher on quasi-contractions. Some examples and an application support our results.
Codimension-three Riesz reduction induces the kernel whose expectation under a specific covariance matches the heat-regularized trace.
Under a prescribed heat-regularized Gaussian source covariance, we give a quadratic-form representation of the scalar Casimir trace associated with a codimension-three Riesz reduction. For a product operator $L_M=L_B-\Delta_\perp$, with $L_B$ positive self-adjoint and bounded below, transverse reduction of the ambient Riesz operator $L_M^{-s}$ produces the brane multiplier $L_B^{m/2-s}$, up to an explicit Gamma-function constant. The exponent $s=1+m/2$ is therefore the critical Riesz exponent for obtaining the ordinary brane Green operator $L_B^{-1}$; in codimension three this gives $s=5/2$.
Using this induced Green kernel, we prescribe a Gaussian generalized scalar source with covariance proportional to $L_B^{3/2}e^{-\tau L_B}$. The expectation of its quadratic Green-kernel energy is then exactly the heat-regularized scalar Casimir trace \[
\frac{\hbar c}{2}
\operatorname{Tr}\!\left(L_B^{1/2}e^{-\tau L_B}\right). \] With the same finite-part prescription, the identity specializes in the Dirichlet parallel-plate geometry to the standard scalar finite part.
We also record a deterministic flat Green-energy calibration at the plate scale. Within the plate-compatible rectangular aspect-ratio family, the cubical cell is selected by spectral, heat-trace, and Green-energy extremal criteria, and the associated comparison coefficient is the corresponding extremal calibration value. The construction is a scalar spectral representation theorem; no electromagnetic, gravitational, brane-dynamical, or fundamental-constant identification is asserted.
This paper introduces a class of extended central factorial numbers generated by a parity-dependent recurrence relation, termed the "flickering operator". We demonstrate that the resulting triangular structure, now indexed as OEIS A395021, provides a unified recursive framework for alternating bit sequences (A000975) and normalized tangent-secant coefficients (A036969). This study provides an alternative integer-based expansion for power sums. While similar to the central factorial methods explored by Knuth (1993), our flickering basis offers an integrated computational scheme that avoids fractional Bernoulli numbers by construction. We provide explicit closed-form expressions, discuss its geometric derivation from finite difference tables, and present a full Python implementation.

Structural Synthesis. A key contribution of this work is the unification of previously disparate combinatorial sequences into a single coherent framework. While certain columns of the flickering triangle $T(n,k)$ (such as A008957) could be partially retrieved from the diagonals of existing central factorial arrays, our structure provides a complete representation including previously unindexed even-positioned terms. Furthermore, the row-wise analysis reveals that the flickering operator generates full integer sequences where previously only the odd-indexed elements (e.g., A002451) were identified. This synthesis bridges the gap between these sequences, positioning A395021 as the underlying master structure.
The resummation of superfactorially divergent series represents a significant computational challenge in mathematical physics. In the present paper the resummation of a specific class of Stieltjes series characterized by a moment sequence growing as $(2n)!$ will be addressed. Despite the fact that Carleman's condition is satisfied for these series, the convergence rate of Pad\'e approximants is severely hindered by the logarithmic divergence of the associated Carleman series. Weniger's $\delta$ transformation is proposed as a highly efficient alternative resummation tool. By employing recently established results on the converging factors of superfactorially divergent Stieltjes series, an exact integral representation for the truncation error is obtained. This representation enables the rigorous derivation of the leading-order asymptotic behavior of the transformation error, as well as the estimation of the related convergence rate, for real positive arguments. Numerical experiments strongly support the theoretical findings, suggesting that the $\delta$ transformation offers a robust and computationally efficient framework for decoding this class of wildly divergent expansions.
Two new hierarchical versions of Raha's similarity-based approximate reasoning use restricted equivalence functions to limit rule growth.
Given that restricted equivalence functions (REFs) can serve to measure the similarity of two fuzzy sets, integrating REFs with similarity-based approximate reasoning systems promises to enhance inference capabilities. This work therefore constructs hierarchical similarity-based approximate reasoning (SBAR) using REFs. Specifically, we first characterize REFs with a given aggregation function, then discuss the approximation equality of the SBAR method proposed by Raha et al. with REFs. Finally, we suggest two REF-based hierarchical versions of Raha's SBAR method which efficiently restrain the explosion of fuzzy rules.
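As an illustrative sketch, the pointwise-REF-then-aggregate construction below is one standard way such similarity measures are built; it is our assumption, not necessarily the paper's exact scheme:

```python
def ref(x, y):
    # A standard example of a restricted equivalence function: REF(x, y) = 1 - |x - y|.
    return 1.0 - abs(x - y)

def fuzzy_similarity(A, B, agg=min):
    """Similarity of fuzzy sets A and B (lists of membership degrees),
    obtained by aggregating pointwise REF values with the function `agg`."""
    return agg(ref(a, b) for a, b in zip(A, B))
```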
Revised theorem shows increasing and left-continuous φ suffices for extending fuzzy Lipschitz maps to the reals
In the paper [E. Jim\'enez-Fern\'andez, J. Rodr\'{\i}guez-L\'opez, E. A. S\'anchez-P\'erez, Fuzzy Sets and Systems 406 (2021), 66--81], a McShane-Whitney extension theorem is presented for real-valued fuzzy Lipschitz maps between fuzzy metric spaces. Specifically, the codomain is the so-called Euclidean fuzzy metric space $(\mathbb{R},M_{\phi,g},\ast)$. However, while the function $\phi$ is only required to be increasing, some results of the paper implicitly assume that $\phi$ is invertible, even though this is not explicitly stated. We propose here an alternative that only requires $\phi$ to be, in addition, left-continuous.
If generated by C¹ functions with nonvanishing derivatives and bounded by one member, the best lower or upper bound stays inside the family.
We show that every family of quasi-arithmetic means generated by (a subset of) the $\mathcal{C}^1$ functions with nonvanishing derivative which is bounded (from below or from above) by a quasi-arithmetic mean possesses a best (lower or upper) bound, and this bound is a quasi-arithmetic mean generated by a function belonging to the same family.
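Recall that a quasi-arithmetic mean is $M_f(x_1,\dots,x_n)=f^{-1}\big(\tfrac1n\sum_i f(x_i)\big)$ for a continuous strictly monotone generator $f$. A minimal sketch (helper names ours):

```python
import math

def quasi_arithmetic_mean(f, f_inv, xs):
    """M_f(x_1, ..., x_n) = f^{-1}((f(x_1) + ... + f(x_n)) / n)."""
    return f_inv(sum(f(x) for x in xs) / len(xs))
```

Taking f = log (inverse exp) gives the geometric mean, f = identity the arithmetic mean, and f(x) = x**p the power means; the theorem concerns best bounds within such families of generators.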
ε-T-transitive relations let aggregation and inference proceed without computing exact transitive closures, under controlled error.
The transitivity of fuzzy relations plays an important role in fuzzy set theory, artificial intelligence, clustering and decision-making. However, in many practical applications it is difficult for fuzzy relations to satisfy the transitivity property. This has motivated researchers to investigate the degree to which a fuzzy relation is transitive. This work first investigates two different measures of $T$-transitivity for fuzzy relations using some well-known fuzzy implications, and then investigates the relationship between the two degrees of transitivity. Further, the concept of an $\epsilon$-$T$-transitive fuzzy relation is introduced, and the aggregation functions that preserve $\epsilon$-$T$-transitivity are characterized. Finally, the $\epsilon$-$T$-transitive fuzzy relation is used to make inferences and to cluster objects. Compared to finding the $T$-transitive closure, clustering with an $\epsilon$-$T$-transitive fuzzy relation is reasonable under the permissible error.
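A minimal sketch of the quantity involved, assuming the min t-norm and the (assumed) reading that $R$ is $\epsilon$-$T$-transitive when every transitivity violation is at most $\epsilon$:

```python
def transitivity_defect(R, tnorm=min):
    """Smallest eps such that R[x][z] >= tnorm(R[x][y], R[y][z]) - eps for all
    x, y, z; a return value of 0 means R is fully T-transitive."""
    n = len(R)
    return max(
        max(0.0, tnorm(R[x][y], R[y][z]) - R[x][z])
        for x in range(n) for y in range(n) for z in range(n)
    )
```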
Derived theorems support encryption schemes and clutter-suppressing detection for 3D data.
Point clouds can be regarded as discrete samples of smooth manifolds and are typically analyzed via the eigenfunctions of the Laplace-Beltrami operator. This paper extends manifold spectral analysis to the fractional domain, enabling continuous interpolation between the spatial and spectral domains for point cloud data. First, a point cloud manifold fractional harmonic transform (PMFHT) is proposed, with its fundamental properties rigorously derived, along with the associated convolution, correlation, and sampling theorems. These theoretical results establish a solid foundation for stable fractional-order spectral representation on manifolds. Second, within the PMFHT framework, two representative algorithms are developed. On the one hand, by integrating multi-order PMFHT with chaotic phase modulation, a point cloud encryption scheme is constructed, characterized by a large key space and high sensitivity to key perturbations. On the other hand, an optimal filter is designed in the fractional manifold spectral domain, leading to a maritime target detection method specifically tailored for point cloud data, which effectively suppresses sea clutter while preserving weak target energy under low signal-to-clutter ratio conditions. Finally, experiments on measured data validate the effectiveness of the proposed algorithms.
The condition is necessary for pseudoprimality and sufficient to prove primality, with a direct tie to Pépin's test for base-3 cases.
We establish a necessary condition for pseudoprimality and a sufficient condition for primality of Fermat numbers, based on a congruence involving the exponent $(F_n-1)/4$. Moreover, in connection with P\'epin's primality test, we obtain a characterization of pseudoprimality to the base $3$ (and, more generally, to other P\'epin-admissible bases).
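P\'epin's test itself is short enough to state in code (base $3$; the classical congruence uses exponent $(F_n-1)/2$, while the paper's condition involves $(F_n-1)/4$):

```python
def fermat_number(n):
    """F_n = 2^(2^n) + 1."""
    return 2 ** (2 ** n) + 1

def pepin(n):
    """Pepin's test: for n >= 1, F_n is prime iff 3^((F_n - 1)/2) = -1 (mod F_n)."""
    F = fermat_number(n)
    return pow(3, (F - 1) // 2, F) == F - 1
```

For example, $F_1,\dots,F_4$ pass the test (they are prime), while $F_5$ fails (it is composite).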
Rescaled even polynomials share the same zero distribution, but their largest zeros approach 1 on distinct exponential scales.
We consider the two families of even polynomials $\Xi_n$ and $\Lambda_n$ studied in~\cite{TallaWaffo2026arxiv2602.16761}, together with the rescaled polynomials $\widetilde{\Xi}_n(x):=\Xi_n(\sqrt{x})$ and $\widetilde{\Lambda}_n(x):=\Lambda_n(\sqrt{x})$, $n\ge2$. Their zeros are real, simple, and contained in $(0,1)$. Writing them as $0<x^{(\Xi)}_{1,n}<\cdots<x^{(\Xi)}_{n-1,n}<1$ and $0<x^{(\Lambda)}_{1,n}<\cdots<x^{(\Lambda)}_{n-1,n}<1$, we study the asymptotic behaviour of the largest zeros $x^{(\Xi)}_{n-1,n}$ and $x^{(\Lambda)}_{n-1,n}$. We prove that the two families have different exponential rates at the right endpoint: \[
\frac{1}{n-1}\log\bigl(1-x^{(\Lambda)}_{n-1,n}\bigr)\to-\log4,
\qquad
\frac{1}{n-1}\log\bigl(1-x^{(\Xi)}_{n-1,n}\bigr)\to-\log9. \] Thus, although the two families share the same global limiting zero distribution, their extreme right zeros approach $1$ on different exponential scales. The proof is based on the representation of $\Xi_n$ and $\Lambda_n$ in terms of Eulerian polynomials of type~B and type~A, respectively, and on an elementary estimate for the smallest negative zero in terms of the first non-constant coefficient.
Extending set and logic concepts lets masses exceed 1, fall below 0, or become indeterminate when combining satellite, IoT, and social data.
In this paper, for the first time, we extend the Over/Under/Off Set/Logic/Probability used in uncertainty theories (such as fuzzy, neutrosophic, and their extensions) to the Over/Under/Off Mass, which could be used in Information Fusion. The approach is exemplified in three scenarios: (1) wildfire evacuation and resource allocation with satellite, IoT, and social media data; (2) coverage gaps where indeterminacy must be managed; and (3) security monitoring where contradictory or erroneous reports are discounted.
We show that for an integer $\ell$, there exists an acute integer lattice triangle of lattice perimeter $\ell$ such that its orthocenter is an integer lattice point, if and only if $\ell=6 $ or $\ell\ge 8$. Analogous results are obtained for the circumcenter and the centroid, and the results are contrasted with those for obtuse and right triangles.
This paper presents a computational study of P-positions (losing positions for the player to move) in 4 x n Chomp, the combinatorial game played on a 4 x n rectangular grid. An optimized C++ solver with bitpacked state representation tabulates all 4,316,097 P-positions for boards with n <= 500, constituting the most extensive computational study of 4 x n Chomp to date. The analysis reveals four structural conjectures: (1) a Unique Extension property, stating that for any triple (a,b,c) of row lengths, there is at most one valid fourth-row length d completing a P-position; (2) convergence of row-length ratios to fixed asymptotic constants; (3) a period-112 modular structure governing the set of extendable triples; and (4) a linear cone geometry for the P-position set in (a,b,c)-space. These findings suggest a richer deterministic structure in 4 x n Chomp than previously suspected and raise new questions about the relationship between k-row games for consecutive k.
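The brute-force tabulation can be sketched with a memoized recursion over staircase states (rows as a non-increasing tuple of lengths); this naive Python version is far slower than the paper's bit-packed C++ solver, but shows the P-position definition directly:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def is_p_position(rows):
    """rows: non-increasing tuple of row lengths; square (0, 0) is poisoned.
    Returns True iff the player to move loses with optimal play."""
    if rows == (1,):
        return True  # only the poisoned square remains: forced loss
    for i in range(len(rows)):
        for j in range(rows[i]):
            if i == 0 and j == 0:
                continue  # eating the poison loses at once, never a winning move
            # bite at (i, j): every row i' >= i is truncated to length <= j
            new = tuple(min(r, j) if k >= i else r for k, r in enumerate(rows))
            new = tuple(r for r in new if r > 0)
            if is_p_position(new):
                return False  # a move to a P-position exists: N-position
    return True
```

This confirms, e.g., that the L-shape (2, 1) is a P-position while every rectangle larger than 1 x 1 is an N-position (as strategy stealing predicts).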
We propose a solution to the tenth of Professor Clark Kimberling's unsolved problems found at https://faculty.evansville.edu/ck6/integer/unsolved.html. The problem asks for the parametric equations of a simple closed curve $C$ on the unit sphere $S$ with arc-length $4\pi$ that minimizes the mean arc-distance from $S$ to $C$. We give explicit definitions of the mean arc-distance from $C$ to $S$, denoted $M$, and the mean arc-distance from $S$ to $C$, denoted $\tilde{M}$, and show that these two quantities are not the same. We show that for all simple closed curves $C$ of arc-length $4\pi$ on $S$, $M$ is constant and equal to $2\pi^{2}$; therefore all such curves minimize $M$. In contrast, $\tilde{M}$ varies over such curves, and we find a curve that minimizes $\tilde{M}$.
\textbf{Background} Measles has resurged globally in the post-pandemic period as routine immunisation recovery remains below the two-dose threshold required to interrupt transmission. Bangladesh, previously nearing measles--rubella elimination, entered 2026 with widening coverage gaps, depleted vaccine stocks, and increasing numbers of missed children. We conducted a situation analysis to assess the scale, concentration, and programmatic implications of the outbreak.
\textbf{Methods} We performed a rapid mixed-evidence review from 1--15 April 2026 using data from WHO, UNICEF, DGHS bulletins, PubMed/MEDLINE, ReliefWeb, SEARO updates, and Bangla-language media. Of 46 records screened, 19 were included. Analysis was based on aggregated, publicly available surveillance and programme data.
\textbf{Findings} By 15 April 2026, Bangladesh reported 19{,}161 suspected cases, 2{,}973 confirmed cases, 166 suspected deaths, and 32 confirmed deaths across 58 districts since 15 March 2026. The outbreak was spatially concentrated: the top two divisions accounted for 56.5\% of cases (HHI = 0.217). Children under five comprised 81\% of cases, including 34\% infants under nine months. Vaccination status showed 72\% zero-dose and 16\% partially vaccinated cases. Coverage declined from 88.6\% to 86\% for MR1 and from 89\% to 80.7\% for MR2 (2019--2024), leaving about 20 million children vulnerable.
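The concentration statistic quoted above is the Herfindahl-Hirschman index over division case shares; a minimal sketch (the inputs in the usage note are hypothetical, not the outbreak's actual division counts):

```python
def hhi(counts):
    """Herfindahl-Hirschman index: sum of squared proportional shares."""
    total = sum(counts)
    return sum((c / total) ** 2 for c in counts)
```

Four equal divisions give HHI = 0.25; values nearer 1 indicate concentration of cases in few divisions.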
\textbf{Interpretation} The resurgence reflects accumulated immunity gaps rather than vaccine failure, driven by subnational inequities and programme disruption. Urgent priorities include targeted vaccination campaigns, restoration of vitamin A supplementation, strengthening paediatric care capacity, and integrating real-time surveillance into outbreak response.
Formal symbols and function elements compactly extend the results to summation starting at any complex point.
This is the third and last of three papers introducing generalised Cesaro convergence and is split into two parts. In part 1 we introduce the notion of a "Cesaro-adapted scale" and use it to prove the key generalised Cesaro summation/convergence theorems developed in the first paper in this series. We also use it to trivially extend these results to the case of remainder Cesaro summation/convergence relative to arbitrary $z_{0}\in\mathbb{C}$ (not just $z_{0}=0$). In the course of the working we introduce the concepts of "formal symbols" and "formal function elements", which allow us to express many results in extremely compact form and simplify our arguments considerably.
Part 2 is self-contained and devoted to further exploring this "formal" world. We express a number of additional results in surprisingly compact form using formal symbols and function elements, and use them to give simple proofs of several non-trivial results. We also investigate their fascinating properties. These include the need to avoid evaluating too early; the consequent need to retain stand-alone zeros (both "to the left" and "to the right") lest they be brought back to life before evaluation; and the need to use continuous limits to resolve singular ratios in final evaluation when required. Finally, we consider in detail the formal extension we have introduced of our Cesaro-adapted scale to a 1-parameter continuum of period-1 functions $\overset{\lor}{q}_{\rho}(\alpha)$, $\rho\in\mathbb{C}$. We analyse their distributional aspects when $\rho\in\mathbb{Z}_{<0}$ and derive their Fourier-series coefficients in general. We conclude with a miscellany of further observations, including a formal re-casting of the general Euler-McLaurin sum formula in very compact form, and a number of additional analytical and combinatorial characteristics of the $\overset{\lor}{q}_{\rho}(\alpha)$ and associated operators.
Key properties emerge directly from the geometric location of terms and invariance under dilation and scaling.
In this second of three introductory papers, we extend the notion of generalised Cesaro summation/convergence to the more natural setting of what we call remainder Cesaro summation/convergence. This greatly expands the range of problems susceptible to Cesaro methods and introduces the geometric location of summands as a critical consideration. We also show that geometric generalised Cesaro convergence is invariant under dilation and scaling. We present a number of calculations illustrating the utility of these developments. In particular we introduce a new, more natural definition of the classical Gamma function using remainder Cesaro summation/products, and show that many of its key properties - both basic and advanced - fall out directly and intuitively from this Cesaro definition and its geometric and dilation-invariance properties. We also consider other examples and show how Cesaro methodology explains the common structure of many well-known functional equations.
The auxiliary E(x) transfers the sqrt(x/log x) control on S(x) directly to π(x) through a short inequality chain.
We introduce the weighted prime sum $S(x) = \sum_{p \le x} \sqrt{(\log p)/p}$ and the derived quantity $E(x) = S(x)^2 - M(x)$, where $M(x) = \sum_{p \le x} (\log p)/p$. We prove that the order-of-magnitude estimate $S(x) \asymp \sqrt{x / \log x}$ implies the Chebyshev bounds $\pi(x) \asymp x / \log x$ through a short and transparent chain of inequalities. The mechanism passes through $E(x)$, which we show satisfies $E(x) \asymp \pi(x)$ whenever the size estimate for $S(x)$ holds. We also establish that $S(x) \asymp \sqrt{x / \log x}$ follows from the classical estimate $\sum_{p \le x} (\log p)/p = \log x + O(1)$ (Mertens' theorem), so the entire argument is self-contained. The result itself (the Chebyshev bounds) is classical, but the proof route through the $S$-$E$ mechanism appears to be new.
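The quantities are easy to tabulate; a minimal numeric sketch (small $x$ only, for illustration; the heuristic $S(x)\approx 2\sqrt{x/\log x}$ suggests $E(x)/\pi(x)$ stays of order one, near 4):

```python
import math

def primes_upto(n):
    """Sieve of Eratosthenes."""
    sieve = bytearray([1]) * (n + 1)
    sieve[0:2] = b"\x00\x00"
    for i in range(2, int(n ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i::i] = bytearray(len(range(i * i, n + 1, i)))
    return [p for p in range(2, n + 1) if sieve[p]]

def s_m_e_pi(x):
    """Return S(x), M(x), E(x) = S(x)^2 - M(x), and pi(x)."""
    ps = primes_upto(x)
    S = sum(math.sqrt(math.log(p) / p) for p in ps)
    M = sum(math.log(p) / p for p in ps)
    return S, M, S * S - M, len(ps)
```

At moderate $x$ one can check Mertens' estimate $M(x) = \log x + O(1)$ and the comparability $E(x) \asymp \pi(x)$ numerically.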
Monotone paths from leaves to a center exist for the graph if and only if they exist for such a tree; convex sequences embed as weights making spiders star-convex.
The primary objective of this paper is to investigate the notions of geometric and sequential convexity within a graph-theoretic framework, with the aim of examining various structural properties and exploring the connection between these two branches of mathematics.
A simple connected vertex-weighted graph $G(V,E)$ with a non-empty set of leaf vertices is said to be star-convex if there exists at least one node $u\in V(G)$ such that, for every chosen leaf vertex $v$, there is a monotone path (either increasing or decreasing) connecting $v$ to $u$. One of the main results states that a graph $G$ is star-convex if and only if there exists a tree $T\subseteq G$ that contains all leaf vertices and is itself star-convex.
On the other hand, a sequence $\big(u_n\big)_{n=0}^{\infty}$ is said to be convex if it satisfies the inequality $$ 2u_{i}\leq u_{i-1}+u_{i+1}\qquad \mbox{for all}\quad i\geq 1. $$ We demonstrate that, under minimal assumptions, a class of convex sequences can be embedded into a spider graph so as to make it star-convex.
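The discrete convexity condition is directly checkable; a minimal helper (ours, for illustration):

```python
def is_convex_sequence(u):
    """Check 2*u[i] <= u[i-1] + u[i+1] for all interior indices i."""
    return all(2 * u[i] <= u[i - 1] + u[i + 1] for i in range(1, len(u) - 1))
```

For example, the squares form a convex sequence, while any sequence with a strictly concave bend does not; the embedding result assigns such values as vertex weights along the legs of a spider.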
They extend averaging of partial sums to broader classes, giving a direct way to continue complex functions analytically with examples.
This is the first in a set of three papers providing an introduction to generalised Cesaro convergence. We start with traditional Cesaro methods for extending classical convergence and further generalise these to allow the calculation of limits/sums for a much broader class of divergent sequences/series. These provide a constructive means of analytic continuation of functions of a complex variable and we give many examples. Future sets of papers will use these methods to derive new results (and re-derive many existing results) in areas including analytic number theory; the theory of the Riemann zeta function; reversal of order of summation; exponential sums; classical integration; Taylor series and Mellin transforms; asymptotic analysis; and a number of others.
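The traditional starting point can be sketched as iterated (C,1) (H\"older) averaging of partial sums, which already sums Grandi's series $1-1+1-\cdots$ to $1/2$; the generalised methods of these papers extend far beyond this:

```python
def cesaro_average(terms, rounds=1):
    """Form partial sums, then apply `rounds` passes of running averages
    (iterated (C,1), i.e. Hoelder means); return the final value."""
    s, sums = 0.0, []
    for t in terms:
        s += t
        sums.append(s)
    for _ in range(rounds):
        acc, out = 0.0, []
        for i, v in enumerate(sums, 1):
            acc += v
            out.append(acc / i)
        sums = out
    return sums[-1]
```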
Bangladesh exhibits marked year-to-year variability in dengue, partly driven by meteorological fluctuations that shape \textit{Aedes} breeding-site persistence, mosquito development, and transmission. We exploit a contrast between Dhaka (consistently high burden) and Barishal (recently rising burden despite lower population density) and frame feature-set design and predictor structure as the main methodological contributions. Using monthly dengue data from DGHS \cite{DGHS} and meteorological data from World Weather Online \cite{Weather} for January 2022--October 2025, we compare four climate feature sets that vary wetness (rainy days vs.\ rainfall) and sunshine (sun days vs.\ sun hours), while temperature and humidity appear in all sets. We evaluate two predictor configurations: lagged climate covariates only, and lagged climate covariates plus 1-month lagged dengue incidence ($Y_{t-1}$). Climate lags (0--4 months) are applied in correlation and forecasting. Both divisions show similar delayed associations: rainfall metrics peak positively near a 2-month lag, humidity near a 1-month lag, sunshine metrics are most negative around a 2-month lag, and temperature is weakly positive at longer lags. We then benchmark MPR, ANN, XGBoost, and SARIMAX across all sets. Best performance differs: Dhaka favors ANN-1 with SET-1 (RMSE=2176.70, MAE=1282.00, MAPE=31.54\%), whereas Barishal favors SARIMAX(0,1,1)(1,0,0,12) with SET-2 (RMSE=817.56, MAE=717.78, MAPE=39.96\%). Analyses use consistent monthly aggregation and division-specific tuning.
A Galileo sequence \((a_n)\) is a sequence of positive integers whose partial sums $S_n$ satisfy $S_{2n}=kS_n$ for some $k>1$. In this paper we prove that every polynomial Galileo sequence is given by first differences of the form \(a_n= C\left(n^d-(n-1)^d\right)\). We then show that every positive Galileo sequence has a binary-tree representation. Finally, for positive monotone integer-valued Galileo sequences, we prove power-law growth bounds, and give a continuous analog together with a characterization of all continuous solutions.
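The first-differences characterization is easy to sanity-check numerically: for $a_n=C\left(n^d-(n-1)^d\right)$ the partial sums telescope to $S_n=Cn^d$, so $S_{2n}=2^dS_n$. A small sketch (with illustrative parameters $C=3$, $d=2$):

```python
def galileo_terms(C, d, N):
    """First differences a_n = C * (n**d - (n-1)**d), n = 1..N."""
    return [C * (n ** d - (n - 1) ** d) for n in range(1, N + 1)]

C, d = 3, 2
a = galileo_terms(C, d, 40)
S = [0]
for t in a:
    S.append(S[-1] + t)        # telescoping: S[n] = C * n**d
k = 2 ** d
print(all(S[2 * n] == k * S[n] for n in range(1, 21)))  # True
```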
The axioms are mutually independent and the coupling law is the unique fixed point of a contraction with explicit closed form.
We introduce a minimal ZFC-internal axiom system for pre-structural data (X, A, mu, mu^{otimes 2}, R, I, Pi_R, G, E_0, eta), where Pi_R : X -> R is a designated map and G subset X x X is a measurable relation; admissible structural models are those pre-structural data satisfying Axioms I-III, which couple a finitely additive measure, an idempotent retraction, and an idempotent symmetric relation through a single coupling law (Axiom III). The axiom system is satisfiable in ZFC via explicit finite and countable models, including finite families with eta neq 0. The three axioms, and the three subclauses of Axiom III, are mutually independent, witnessed by explicit separating models. The coupling law admits a fixed-point reformulation: it is the unique bounded finitely additive solution of a Banach-contraction equation f = T_eta f determined by (mu, Pi_R, eta), with closed form f_*(B) = mu(B) + (eta/(1-eta)) mu(Pi_R^{-1}(B)) and a Neumann-series expansion. Admissible structural models with a common eta form a category Struct_eta in which Pi_R and G appear as idempotents analogous to the two sides of a monad-comonad pair. Under fiber measurability together with either a finiteness hypothesis (R-fin) or countability plus sigma-additivity (R-ctbl), a quotient-factorization theorem reduces the general admissibility problem to the identity-retraction case Pi_R = id_X; in that case, under the hypotheses of Theorem 5.6 (pi-id-classification), each G-equivalence class C_k satisfies mu(C_k) in {0, (1-eta)^{-1}}.
It recovers known formulas like the Hardy-Hille formula as special cases and evaluates a diagonal case with the Le Roy function.
In this paper, we study generating functions of Erd\'{e}lyi's multivariate Laguerre polynomials $L_{n_1,\cdots,n_k}^{(\alpha)}(x_1,\cdots,x_k)$ with a varying complex parameter. Our main result is a multiple generating function from which several useful consequences can be derived. We also present an interesting evaluation for a generating function of the main diagonal sequence $L_{n,\cdots,n}^{(-\beta-kn)}(x_1,\cdots,x_k)$ which involves in a natural way the well-known Le Roy function ([Darboux Bull. 24 (2) (1899), 245--268]; [Toulouse Ann. 2 (2) (1900), 317--430]). The significance of the multivariate Laguerre polynomials $L_{n_1,\cdots,n_k}^{(\alpha)}(x_1,\cdots,x_k)$ is demonstrated by observing that this class not only includes the generalized Hardy-Hille formula and the product formula but also contains the multiple Laguerre polynomials of the second kind as its important special cases. The paper gives in detail various consequences of the results presented in this paper and also mentions possible lines for future work.
The usual limit formula then follows from linear decomposition of the function.
This paper presents an algebraic-geometric construction of the derivative developed initially within the class of polynomial functions without introducing limits at the initial stage. Tangency is characterized by an algebraic condition: the difference between a function and a linear approximation has a double root at a given point. On this basis, the derivative is defined as a functional correspondence assigning to each point the slope of the tangent. Within the class of polynomials, the existence, uniqueness, and fundamental rules of differentiation are established purely algebraically. The constructed model is then extended conceptually to elementary functions and connected to the linear decomposition of functions, from which the classical limit representation of the derivative naturally emerges. Thus, the limit appears not as a starting point but as an analytic expression of an already constructed concept.
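The double-root characterization has a direct computational reading for polynomials: writing $f(x)-f(a)=(x-a)q(x)$ by synthetic division, the requirement that $f(x)-f(a)-m(x-a)$ have a double root at $a$ forces $m=q(a)$, which is the tangent slope. A minimal sketch (illustrative notation, not the paper's):

```python
def tangent_slope(coeffs, a):
    """coeffs: polynomial coefficients, highest degree first.
    Returns q(a), where f(x) - f(a) = (x - a) * q(x); the double-root
    condition on f(x) - f(a) - m*(x - a) forces m = q(a)."""
    # First Horner pass: quotient of f by (x - a) via synthetic division.
    quotient = []
    acc = 0
    for c in coeffs[:-1]:
        acc = acc * a + c
        quotient.append(acc)
    # Second Horner pass: evaluate the quotient at a.
    m = 0
    for b in quotient:
        m = m * a + b
    return m

# f(x) = x**3 - 2*x + 1; the algebraic slope at a = 2 is 10, matching f'(2).
print(tangent_slope([1, 0, -2, 1], 2))
```

No limit is taken anywhere: the slope is extracted purely by polynomial division, mirroring the algebraic construction described above.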
The paper gives the complete roster of positive integers up to 10000 expressible as x^4 - y^4 for nonzero rationals x and y.
In Section 6.6 of the book {\it Number Theory, Volume I: Tools and Diophantine Equations, Graduate Texts in Mathematics, Volume 239, Springer (2007)}, Cohen investigated the solubility of the equation $n=x^4+y^4$ in the rational numbers $x,y$ for all positive integers $n \leq 10000$. Motivated by this, we investigate the equation $n=x^4-y^4$ and obtain the complete list of positive integers $n\leq 10000$ that can be represented in this form for some nonzero rational numbers $x$ and $y$.
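A brute-force search over small integers (a strict subset of the rational representations studied above, since rational $x,y$ capture many more $n$) gives a first feel for the equation; a minimal sketch with an illustrative bound:

```python
# Integer-only search: n = x**4 - y**4 with 1 <= y < x and n <= bound.
# Rational representations are strictly more general than this.
bound = 100
reps = sorted({x ** 4 - y ** 4
               for x in range(2, 10) for y in range(1, x)
               if x ** 4 - y ** 4 <= bound})
print(reps)  # e.g. 15 = 2**4 - 1**4
```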
We investigate a combinatorial puzzle in which $N$ apples and $N$ pears are distributed among baskets subject to two constraints: every basket must contain the same number of apples, and every basket must contain a distinct number of pears. We prove that the maximum number of baskets is the largest divisor of $N$ not exceeding $(1 + \sqrt{1+8N})/2$. For the original puzzle with $N = 60$, this yields 10 baskets. The solution reveals a rich interplay between divisibility and combinatorics, leading to a natural classification of integers into perfect values, primes, and highly composite numbers according to their basket-packing efficiency. Computational results for $N$ up to one million confirm the asymptotic growth rate of $\sqrt{2N}$, and a complete tabulation for $N = 1$ to 100 is included.
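The stated formula is easy to check computationally; a minimal sketch that recovers 10 baskets for the original puzzle with $N=60$:

```python
def max_baskets(N):
    """Largest divisor of N not exceeding (1 + sqrt(1 + 8N)) / 2."""
    bound = (1 + (1 + 8 * N) ** 0.5) / 2
    return max(d for d in range(1, N + 1) if N % d == 0 and d <= bound)

print(max_baskets(60))  # 10, as in the original puzzle
```

The bound arises because b baskets with distinct pear counts need at least $0+1+\cdots+(b-1)=b(b-1)/2\le N$ pears, while equal apple counts force $b\mid N$.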
One of the essential questions of the theory of multidimensional integrals concerns the evaluation of integrals taken in given domains. In the simplest case, when integrating over parallelepipeds, evaluation can easily be performed by repeated integration. In the case of the Lebesgue integral, the question is easily solvable by Fubini's theorem. In the case of the Riemann integral, the situation is complicated by the difference between Jordan and Lebesgue measures. In this paper, we show that in certain important applications of Riemann integrals, one can establish a modification of the theorem on repeated integration in which Fubini's theorem is as powerful as in the case of the Lebesgue integral.
We introduce a deficiency-based representation and approximation framework for values of the Riemann zeta function. The method is based on comparing two nonlinear accumulation mechanisms: global transformation of a base partial sum and local transformation of each term. Their gap defines a cumulative deficiency functional that yields the exact identity \[ \zeta(q)=\zeta(p)^{q/p}-D_{\infty}^{(p,q)}, \qquad q>p>1. \] This converts zeta approximation into estimation of a nonlinear deficit. We derive corrected estimators that remove first-order bias and prove the convergence law \[ B_n^{(p,q)}-\zeta(q)=O\!\left(n^{-\min(2p-2,q-1)}\right). \] For odd targets, suitable choices of the base exponent recover the natural truncation rate while preserving the structural identity. Numerical experiments for $\zeta(3),\zeta(5),\zeta(7)$ confirm theory, demonstrate strong finite-sample behavior, and illustrate extension to spectral zeta functions. The contribution is structural rather than replacing classical Euler--Maclaurin methods: we provide a unified nonlinear viewpoint on zeta approximation, convexity-induced correction terms, and tunable approximation families.
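The defining identity can be illustrated with truncated sums; a rough sketch for $p=2$, $q=3$ using the naive, uncorrected deficiency (so accuracy is limited by the $O(1/n)$ truncation bias, not the $O(n^{-2})$ rate of the corrected estimators):

```python
import math

def partial_sum(p, n):
    return sum(k ** -p for k in range(1, n + 1))

p, q, n = 2, 3, 10000
# Gap between global and local nonlinear accumulation, truncated at n.
D_n = partial_sum(p, n) ** (q / p) - partial_sum(q, n)
zeta2 = math.pi ** 2 / 6                   # exact value of zeta(2)
zeta3_est = zeta2 ** (q / p) - D_n         # zeta(q) = zeta(p)**(q/p) - D_inf
print(zeta3_est)  # close to zeta(3) = 1.2020569...
```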
Necessary and sufficient criteria decide when each of n circles can hit one vertex from each polygon.
Given a regular $n$-gon on the plane, it is evident that from any point on the plane, taken as a center, one can draw $n$ concentric circles such that each circle passes through one of the vertices of the polygon. Naturally, this raises the problem of whether such a construction is possible for any two given regular $n$-gons on the plane. In this paper, we establish the necessary and sufficient conditions for the existence of $n$ concentric circles such that each circle passes through one vertex of each of the two regular $n$-gons.
Keywords and phrases: Polygonal distances, cyclic averages, concentric circles, two regular polygons, two equilateral triangles, two squares
Fiber contraction plus Faà di Bruno's formula lift existence from continuous to n-times differentiable solutions with all derivatives of order 1 to n kept bounded.
In this paper, we investigate the existence of $C^n$, $n\in \mathbb{N}^+$, solutions for a class of second-order iterative functional equations involving iterates of the unknown function and a nonlinear term. Applying the Fiber Contraction Theorem and Fa\`a di Bruno's Formula, we establish the existence of bounded $C^n$ solutions with bounded derivatives of order from $1$ to $n$.
Inductive Cartan method builds four branches of invariant 1-forms to characterize classes and recover transformations.
Four coframes of invariant 1-forms are explicitly constructed using the Inductive Cartan equivalence method with rank zero corresponding to four distinct branches. These coframes are employed to characterize non-linearizable fourth-order ODEs under point transformation with a five-point symmetry Lie subalgebra. Moreover, we propose a procedure for obtaining the point transformation by using the derived invariant coframes, demonstrated through examples.
Zero counting measures converge to the same auxiliary limit under interlacing conditions on d/c
We study the second-order differential operators \(\mathcal D_{\Xi}\) and \(\mathcal D_{\Lambda}\) associated with the rescaled polynomial families \((\widetilde{\Xi}_n)\) and \((\widetilde{\Lambda}_n)\), and more generally the polynomial sequences generated by iterating these operators from an arbitrary linear initial datum \(cx-d\).
We establish structural properties of \(\mathcal D_{\Xi}\) and \(\mathcal D_{\Lambda}\), including factorizations into first-order operators, weighted divergence forms, formal self-adjointness, and hypergeometric descriptions of the corresponding formal eigenvalue equations. We also show that both operators preserve hyperbolicity, preserve zeros in \((0,b)\) for \(b\ge 1\), and preserve proper position.
For the iterated polynomial sequences, we derive explicit closed formulae in terms of the auxiliary families \((\widetilde{\Xi}_n)\) and \((\widetilde{\Lambda}_n)\), prove strict interlacing of consecutive zeros under explicit conditions on \(d/c\), and obtain asymptotic formulae for the normalized logarithmic derivatives. As a consequence, the associated zero counting measures converge weakly to the same limiting probability measure as in the auxiliary case.
The sigma index of a graph, defined as the population variance of its degree sequence, is a fundamental measure of structural irregularity. In this paper, we introduce and systematically investigate its natural extension to fuzzy graphs, termed the fuzzy sigma index $$ \sigma^*(\Gamma) = \frac{1}{n} \sum_{v \in V(\Gamma)} \left( d_\Gamma(v) - \frac{2\,\mathrm{ew}}{n}\right)^2, $$ where $d_\Gamma(v)$ denotes the fuzzy degree of a vertex $v$, and $\mathrm{ew}$ represents the fuzzy size of the fuzzy graph $\Gamma=(V,\nu, \mu)$. We establish several fundamental properties of this topological index. In particular, we derive sharp lower and upper bounds and analyze the behavior of $\sigma^*(\Gamma)$ under standard fuzzy graph operations. This work provides a foundation for further study of variance-based topological indices in fuzzy graph theory.
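On a toy fuzzy graph with hypothetical edge membership values, the index is simply the population variance of the fuzzy degrees $d_\Gamma(v)=\sum_{uv\in E}\mu(uv)$ about the mean degree $2\,\mathrm{ew}/n$; a minimal sketch:

```python
# Hypothetical fuzzy graph: edge membership values mu(uv) in (0, 1].
edges = {("a", "b"): 0.5, ("b", "c"): 0.8, ("a", "c"): 0.3}
vertices = {"a", "b", "c"}

# Fuzzy degree of v: sum of memberships of edges incident to v.
deg = {v: sum(w for e, w in edges.items() if v in e) for v in vertices}
ew = sum(edges.values())          # fuzzy size: total edge membership
n = len(vertices)
sigma_star = sum((deg[v] - 2 * ew / n) ** 2 for v in vertices) / n
print(sigma_star)
```

Note that $\sum_v d_\Gamma(v)=2\,\mathrm{ew}$ (a fuzzy handshake lemma), so $2\,\mathrm{ew}/n$ is exactly the mean fuzzy degree.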
Using the log-convexity of the Gamma function and Euler's reflection formula, we give a new proof of a classical weighted sine product inequality. Two different parameter choices yield two competing upper bounds for the same product. We determine precisely, via algebraic criteria, when one bound is sharper than the other. Explicit results are given for the general $n$-angle case, the $2n$-angle case, and for two and three angles. Several sharp corollaries are derived, including $\sin(\pi x)\leq \sin(2\pi x(1-x))$.
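The final corollary can be spot-checked numerically on $(0,1)$ (for $x\le 1/2$ both arguments lie in $(0,\pi/2]$ and $2x(1-x)\ge x$, so the monotonicity of sine already suggests the inequality); a minimal sketch:

```python
import math

# Spot-check the corollary sin(pi*x) <= sin(2*pi*x*(1 - x)) on (0, 1).
xs = [i / 1000 for i in range(1, 1000)]
ok = all(math.sin(math.pi * x) <= math.sin(2 * math.pi * x * (1 - x)) + 1e-12
         for x in xs)
print(ok)  # equality occurs at x = 1/2
```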
A completed proof shows the magic-square equations can always be solved with nine different primes once the target prime is at least 5.
We present an integrated version of the global program proving that every prescribed prime \(q_0\ge 5\) occurs in some \(3\times 3\) magic square whose nine entries are distinct positive primes. The manuscript explicitly corrects the four points that had prevented the previous version from being regarded as closed: (i) the notation for the fixed prime \(q_0\) is now kept uniformly distinct from the notation for the sieve moduli \(d\); (ii) the weight convention is unified by working with the function \(\vartheta(n)=\log n\) on the primes and \(0\) off the primes, while \(\Lambda\) is used only inside the analytic estimates where it is the natural variable; (iii) the full residual notation \((W,a_W,b_W,S_1,A_d,g(d))\) has been incorporated throughout the manuscript; and (iv) the final closure is replaced by a residual-completion theorem on the \emph{common support of the core}, thereby eliminating the logical gap produced by intersecting two independent theorems.
We propose and analyse a class of analytically solvable models of quantum reinforcement learning (QRL), formulated as finite-horizon Markov decision processes in finite-dimensional Hilbert spaces. The models are built around a `unitary-control-then-measure' protocol, in which a learning agent applies unitary transformations to a quantum state and interleaves each control step with a projective measurement onto a prescribed reference basis. Exact closed-form expressions for trajectory probabilities, rewards, and the expected return are derived for four concrete realisations: a closed-chain and an anti-periodic qubit implementation, a qutrit model with ladder coupling, and a four-level two-qubit system. Two structural features of these QRL protocols are rigorously analysed. First, we identify and quantify a two-level reduction in the computational complexity of the expected return, from the nominally exponential $O(e^N)$ scaling in the trajectory length~$N$ to an explicit power-law $O(N^{\mathcal{I}})$: a trajectory-based level, arising from equivalence classes of paths sharing the same unordered state counts and transition frequencies, and a policy-based level, arising from the sparsity of the transition graph enforced by constrained unitary actions. Second, we characterise the degeneracy of optimal policies. The low-dimensional models exhibit unique optima whose asymptotic behaviour with~$N$ is governed by the quantum Zeno effect, while the four-level system displays both plateau-type quasi-degeneracy at large horizons and genuine discrete degeneracy at critical energy parameters -- phenomena with no counterpart in the measurement-free quantum optimal control landscape.
Given a reference metric and its pair, P yields g and the Levi-Civita connection plus a symmetric deformation-rate correction.
This paper develops a deformation-field geometry for spaces whose local frames may undergo internal stretching, compression, and shear. Ordinary Riemannian geometry takes an intrinsic metric geometry \((M,g)\) as the given datum and uses its Levi-Civita comparison. The present framework retains additional data: a fixed reference metric geometry and a deformation field \(P\) representing \(g\) by \(g=P^T\bar gP\). This makes the dilation-shear structure relative to the fixed reference visible. The deformation field yields a dilation-shear compensation \(\Lambda=P^{-1}\bar\nabla P\), and the natural total comparison connection is \(\Gamma=\mathring\Gamma+\Lambda\), where \(\mathring\Gamma\) is the Levi-Civita connection of the represented metric. Curvature, torsion, and nonmetricity of \(\Gamma\) are then determined by \(\mathring\Gamma\) and \(\Lambda\), rather than postulated as independent affine data. Examples involving one-dimensional stretching, conformal deformation, anisotropic dilation, shear, and spherical geometries distinguish metric curvature, embedded realization, and internal deformation non-uniformity.
The incomplete numbers perspective allows uniform derivation for q-analogues of special numbers and is expected to apply more broadly.
In this paper, we clarified the relationship between continued fractions, determinants, and identities, making it easier to apply these methods systematically in other settings. In particular, we studied finite continued fractions from the perspective of incomplete numbers (restricted or associated numbers) and also explored their relationships with determinant representations and identities. Most of the new results in this paper concern $q$-analogues of special numbers, whereas the classical cases mainly serve to illustrate and unify the general framework. The framework developed here is flexible and allows one to derive continued fractions, determinant formulas, and coefficient identities in a uniform way for several new $q$-families, and it is expected to be applicable to other families of special numbers, as well.
The partial quotients weight the approximation errors so their sums equal the target value and the target value plus one.
Many classical identities arise from nothing more mysterious than looking at the same object in two different ways. A number, a function, or a combinatorial object may admit several natural decompositions, and by disassembling it in one way and reassembling it in another, we often obtain unexpected corollaries. Telescoping sums provide a particularly vivid incarnation of this principle: by arranging terms so that successive contributions cancel, one performs a conceptual ``cut-and-paste'' that often admits a clean geometric interpretation. Generating functions offer a complementary perspective. Encoding a problem into a formal power series and then evaluating that series at a prescribed point naturally expresses the same quantity as an infinite (or finite) expansion, and equating these representations yields a wealth of identities.
For example, for a real number \(\alpha\) given by its continued fraction expansion $\alpha = [a_0, a_1,a_2,\dots]$, with convergents \(p_n/q_n\) and error terms $E_n := p_n - \alpha q_n$, one can obtain ``additive'' decompositions of the form $\sum_{n\ge-1} a_{n+1}\,\lvert E_n\rvert \;=\; \alpha + 1$, $\sum_{n\ge-1} a_{n+1}\,E_n^{2} \;=\; \alpha$. Thus $\alpha$ and $\alpha+1$ themselves appear as weighted sums of the local approximation errors of their convergents. In this note we explore what such decompositions yield in two explicit cases: the continued fraction \[ e^{1/s} = [1;\,{\overline{(2k-1)s-1,1,1}}]_{k=1}^{\infty} \] and the continued fraction \[ \frac{s}{u}\tanh\!\Bigl(\frac{1}{s}\Bigr) = [\,0;\,\overline{(4k-3)u,\,(4k-1)\tfrac{s^{2}}{u}}\,]_{k=1}^{\infty}. \]
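The two weighted-error identities are easy to verify numerically for a simple expansion. A sketch using the golden ratio $\varphi=[1;1,1,1,\dots]$ (a different expansion than the two treated in the note), with the conventions $p_{-1}=1$, $q_{-1}=0$, so $E_{-1}=1$:

```python
# Check: sum a_{n+1}|E_n| = phi + 1 and sum a_{n+1} E_n^2 = phi,
# where E_n = p_n - phi*q_n, for phi = [1; 1, 1, 1, ...].
phi = (1 + 5 ** 0.5) / 2
a = [1] * 35                       # partial quotients a_0, a_1, ...
p_prev, q_prev = 1, 0              # p_{-1}, q_{-1}
p_cur, q_cur = a[0], 1             # p_0, q_0

abs_sum = a[0] * abs(p_prev - phi * q_prev)   # n = -1 term: a_0 * |E_{-1}|
sq_sum = a[0] * (p_prev - phi * q_prev) ** 2
for n in range(1, len(a)):
    E = p_cur - phi * q_cur                   # E_{n-1}
    abs_sum += a[n] * abs(E)
    sq_sum += a[n] * E * E
    p_prev, p_cur = p_cur, a[n] * p_cur + p_prev
    q_prev, q_cur = q_cur, a[n] * q_cur + q_prev

print(abs_sum - (phi + 1), sq_sum - phi)      # both tiny
```

For $\varphi$ one has $|E_n|=\varphi^{-(n+1)}$, so both sums are geometric series converging to $\varphi^2=\varphi+1$ and $\varphi$ respectively.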
The mathematical representation of uncertainty has led to a proliferation of preference structures, such as interval-valued fuzzy sets, intuitionistic fuzzy sets, and various granular models. While these extensions are often studied independently, they share profound geometric and topological foundations. This paper provides a unifying framework by identifying these disparate structures with the simplicial geometry of $n$-dimensional fuzzy sets. We first conduct an extensive revision of both classical and modern preference structures, demonstrating that they are distinct semantic interpretations of the same underlying topological objects within the lattice $L_n$. Building on this unification, we introduce a new, highly interpretable preference structure based on Deck-of-Cards membership functions. This approach generalizes the revised models by providing a flexible mechanism to represent complex membership degrees through monotonic sequences. Furthermore, we establish a formal simplicial structure for the set of multidimensional fuzzy sets $L_\infty$. By employing face and degeneracy maps, we demonstrate how this framework unifies existing models into a single simplicial set, allowing for the consistent transformation of information across different levels of granularity. The examples provided illustrate the utility of this simplicial connection in several contexts, offering a robust topological foundation for future developments in fuzzy set theory.
Many generalized set models have the same basic form: they assign a value to each object, and the main difference lies in the kind of values that are allowed. This paper studies that common form through scale-valued sets (SV-sets), defined as maps $U\times E\to\Sigma$, where $U$ is a universe, $E$ is a parameter set, and $\Sigma$ is a bounded De Morgan lattice. With a suitable choice of scale, SV-sets include ordinary sets, fuzzy sets, soft sets, bounded multisets, intuitionistic fuzzy sets, $L$-fuzzy sets, and Type-2 fuzzy sets. We study the basic structure of SV-sets. The relation between SV-sets and lattice-valued interval soft sets is also discussed. For complete chains, the SV setting gives a natural topological construction, and for groups, it gives an algebraic structure through SV-subgroups. The applications show how graded suitability and supporting evidence can be kept together in a single model, whereas one-coordinate reductions lose information.
Rearrangement of the Wilson relation between left factorials and Bell numbers produces G_p inside the poor man's adele ring A modulo p
Wilson's theorem is notably related to left factorials, expressed as $K_p \equiv \mathbf{Bell}_{p-1} - 1 \pmod p$, for prime $p\geq3$. This study examines a Kurepa-Bell-Wilson congruence (\textbf{KBW}), $\frac{K_p + 1}{p}\equiv \frac{ \mathbf{Bell}_{p-1}}{p}+ W_p \pmod{p}$, and demonstrates that it naturally generates the non-zero ``Gertsch quotient'' ($\mathbb{G}_p$), which, for large primes $p$, resides in the poor man's adele ring $\mathcal{A}$.
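The base congruence $K_p \equiv \mathbf{Bell}_{p-1} - 1 \pmod p$ can be checked directly for small primes, computing left factorials $K_n=0!+1!+\cdots+(n-1)!$ and Bell numbers via the Bell triangle; a minimal sketch:

```python
def left_factorial(n):
    """K_n = 0! + 1! + ... + (n-1)!."""
    total, f = 0, 1
    for k in range(n):
        total += f
        f *= k + 1
    return total

def bell(n):
    """Bell number B_n via the Bell triangle."""
    row = [1]
    for _ in range(n):
        new = [row[-1]]
        for v in row:
            new.append(new[-1] + v)
        row = new
    return row[0]

print(all((left_factorial(p) - (bell(p - 1) - 1)) % p == 0
          for p in [3, 5, 7, 11, 13]))  # True
```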
For large primes p, the first qualifying pair u, u+1 both generate the unit group, and u stays below an explicit polylogarithmic size.
Let $p$ be a large prime number and let $x=O((\log p)^2(\log\log p)^5)$ be a real number. It is proved that the least pair of consecutive primitive roots $u\ne\pm1, v^2$ and $u+1$ satisfies the upper bound $u\ll x$ in the prime field $\mathbb{F}_p$.
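The object bounded here, the least u with u and u+1 both primitive roots mod p, can be computed by brute force for small primes; a minimal sketch (for p = 23 the least such pair is 10, 11):

```python
def prime_factors(n):
    fs, d = set(), 2
    while d * d <= n:
        while n % d == 0:
            fs.add(d)
            n //= d
        d += 1
    if n > 1:
        fs.add(n)
    return fs

def is_primitive_root(u, p):
    """u generates F_p^* iff u^((p-1)/q) != 1 for every prime q | p-1."""
    return all(pow(u, (p - 1) // q, p) != 1 for q in prime_factors(p - 1))

def least_consecutive_pair(p):
    """Smallest u with u and u+1 both primitive roots mod p."""
    for u in range(2, p - 1):
        if is_primitive_root(u, p) and is_primitive_root(u + 1, p):
            return u
    return None

print(least_consecutive_pair(23))  # 10
```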
Convergence of binomial weighted averages remains unchanged after applying any absolutely summable linear transformation that sums to one.
We study binomially weighted summation methods given by \[ (x_n)_{n\in \mathbb{N}} \mapsto \left(\sum_{k=0}^n\binom{n}{k}r^k(1-r)^{n-k}x_k\right)_{n\in \mathbb{N}} \] for $r\in (0,1)$, and their behavior under composition with summation methods of the form \[ (x_n)_{n\in \mathbb{N}} \mapsto \left(\sum_{k=0}^n\lambda_k x_{n-k}\right)_{n\in \mathbb{N}}. \] Our main result shows that if the binomially weighted averages of a sequence $(x_n)_{n\in \mathbb{N}}$ converge to a limit then the binomially weighted averages of the sequence $\left(\sum_{k=0}^n\lambda_kx_{n-k}\right)_{n\in \mathbb{N}}$ converge to the same limit whenever $(\lambda_n)_{n\in\mathbb{N}}$ is an absolutely summable sequence with $\sum_{k=0}^{\infty}\lambda_k = 1$. This result disproves a theorem appearing in the literature. Additionally, we discuss applications and extensions of our main result to compositions with weighted Ces\`aro averages.
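A numerical demonstration of the main result (illustrative, not from the paper): take the divergent sequence $x_n=(-1)^n$, whose binomial means with $r=1/2$ vanish, and the weights $\lambda_k=2^{-(k+1)}$ (truncated, so they sum to $1-2^{-40}$ rather than exactly 1); the transformed sequence is again divergent, yet its binomial means converge to the same limit 0:

```python
from math import comb

def binomial_means(x, r):
    """y_n = sum_k C(n,k) r**k (1-r)**(n-k) x_k."""
    return [sum(comb(n, k) * r ** k * (1 - r) ** (n - k) * x[k]
                for k in range(n + 1)) for n in range(len(x))]

N = 40
x = [(-1) ** n for n in range(N)]             # divergent sequence
lam = [2.0 ** -(k + 1) for k in range(N)]     # absolutely summable weights
y = [sum(lam[k] * x[n - k] for k in range(n + 1)) for n in range(N)]

bx = binomial_means(x, 0.5)
by = binomial_means(y, 0.5)
print(abs(bx[-1]), abs(by[-1]))   # both near 0
```

Here $y_n\to$ no limit (it oscillates near $\pm 1/3$), but its binomial means decay like $(3/4)^n/6$, matching the theorem's conclusion.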
We construct a family of self-adjoint operators on the prime numbers whose entries depend on pairwise arithmetic divergences, replacing geometric distance with number-theoretic dissimilarity. The resulting spectra encode how coherence propagates through the prime sequence and define an emergent arithmetic geometry. From these spectra we extract observables such as the heat trace, entropy, and eigenvalue growth, which reveal persistent spectral compression: eigenvalues grow sublinearly, entropy scales slowly, and the inferred dimension remains strictly below one. This rigidity appears across logarithmic, entropic, and fractal-type kernels, reflecting intrinsic arithmetic constraints. Analytically, we show that for the unnormalized Laplacian, the continuum limit of its squared Hamiltonian corresponds to the one-dimensional bi-Laplacian, whose heat trace follows a short-time scaling proportional to $t^{-1/4}$. Under the spectral dimension convention $d_s=-2\,d\log\Theta/d\log t$, this result produces $d_s = 1/2$ directly from first principles, without fitting or external hypotheses. This value signifies maximal spectral compression and the absence of classical diffusion, indicating that arithmetic sparsity enforces a coherence-limited, non-Euclidean geometry linking spectral and number-theoretic structure.
Sub-modulus fields with nonzero 2 yield explicit bounds on norm differences via sums and max differences.
Let $\mathbb{F}$ be a sub-modulus field such that $2 \neq 0$. Let $\mathcal{X}$ be a sub-normed linear space over $\mathbb{F}$. Then we show that \begin{align*} \bigg|\|x\|-\|y\|\bigg|\leq \frac{2}{|2|}\|x+y\|+\frac{2}{|2|}\max\{\|x-y\|, \|y-x\|\}-(\|x\|+\|y\|) \end{align*} and \begin{align*} \bigg|\|x\|-\|y\|\bigg|\leq \|x\|+\|y\|-\frac{2}{|2|}\|x+y\|+\frac{2}{|2|}\max\{\|y-x\|, \|x-y\|\}. \end{align*} The above inequalities are finite-field versions of the important Tarski-Maligranda inequalities obtained by Maligranda [\textit{Banach J. Math. Anal., 2008}].
Fix a prime $p \ge 5$ and define $g(2n,p)=\#\{(h,k)\in\mathbb{Z}_{>0}^2 : h+k=2n,\; h\le k,\; \gcd(h,6p)=\gcd(k,6p)=1\}$. We derive explicit closed-form expressions for $g(2n,p)$ in terms of the canonical remainder operator $\delta_k(x)=x-k\lfloor x/k\rfloor$, elementary step functions, and the minimal solutions of the congruences $6x \equiv -1 \pmod{p}$ and $6x \equiv -5 \pmod{p}$. A key ingredient is an explicit formula for the minimal solution of $\delta_k(a_0 x)=b_0$ obtained via the Euclidean algorithm, which determines the excluded residue classes directly. The resulting formulas show that $g(2n,p)$ is piecewise affine along arithmetic progressions of $n$, governed by residue classes modulo $3$ and $p$. For fixed $p$, after precomputing two residue parameters in $O(\log p)$ time, each evaluation of $g(2n,p)$ requires only $O(1)$ operations, compared to $O(n)$ for direct enumeration. The formulas are validated computationally for all $2n \le 10^5$ and primes $p \in \{5,7,11,13,17,19,23\}$, with perfect agreement with brute-force enumeration.
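As a baseline, the definition of $g(2n,p)$ can be enumerated directly; this is the $O(n)$ method the closed forms are validated against. A minimal sketch:

```python
from math import gcd

def g(two_n, p):
    """Count pairs h + k = 2n with h <= k and gcd(h, 6p) = gcd(k, 6p) = 1."""
    return sum(1 for h in range(1, two_n // 2 + 1)
               if gcd(h, 6 * p) == 1 and gcd(two_n - h, 6 * p) == 1)

print(g(100, 5), g(100, 7))
```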
Special prime families (twin, Sophie Germain, safe, cousin, sexy, Chen, and isolated primes) are central objects of analytic number theory, yet no efficiently computable probabilistic filter exists for identifying likely members among known primes at large scale. Classical sieves assign no probability weights to surviving candidates, and prior machine learning approaches are limited by the algorithmic randomness of the prime indicator sequence, yielding near-zero true positive rates.
We present PrimeFamilyNet, a multi-head residual network conditioned on the backward prime gap and modular primorial residues of a known prime $p$, learning probabilistic filters for all seven families simultaneously and generalising across nine orders of magnitude from training ($10^7$--$10^9$) to evaluation at $10^{16}$.
Isolated prime recall increased monotonically from $0.809$ at $5\times10^8$ to $0.984$ at $10^{16}$, a gain of $17.5$ percentage points and the only family among seven to improve with scale. Because recall is invariant to class prevalence, this reflects genuine decision boundary sharpening, not the rising isolated-prime fraction at extreme scales. A model trained only to $10^9$ reproduced the correct asymptotic direction without density supervision, corroborating Hardy--Littlewood $k$-tuple predictions.
The causal model retained over $95\%$ recall for five families near $10^{10}$ while reducing the search space by $62$--$88\%$. For Chen primes, causal recall exceeded non-causal recall at every scale (margin $+0.245$ at $10^{16}$) because $g^+=2$ encodes only the prime case of the Chen condition. Focal Loss collapsed sparse algebraic family recall to $0.000$. Asymmetric Loss outperformed weighted BCE in-distribution but degraded more steeply out-of-distribution, showing that in-distribution recall alone is a misleading criterion for scale-generalisation tasks.
This paper introduces a biharmonic interpolatory subdivision framework on Riemannian manifolds. In the Euclidean setting, the six-point Deslauriers-Dubuc stencil is characterised as the unique minimiser of a discrete curvature-variation energy under symmetric six-point support and degree-five polynomial reproduction conditions, linking a classical interpolatory rule to a first-principles fairness criterion. Exact symbol analysis establishes fourth-order smoothness. The construction extends to the two-sphere and the hyperbolic plane via a second-order reduced governing ODE derived from the biharmonic Euler-Lagrange equation on constant-curvature surfaces. This reduced model yields closed-form insertion rules, and proximity analysis confirms that the manifold scheme satisfies the Wallner-Dyn second-order condition, preserving fourth-order smoothness. A hierarchy of biharmonic stencils achieving higher smoothness orders is also described. Numerical experiments demonstrate that the six-point scheme delivers lower fairness energy and smoother curvature profiles than the classical four-point Dyn-Gregory-Levin scheme, while remaining more local and exhibiting less ringing on non-uniform data than the eight-point variant.
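The six-point Deslauriers-Dubuc insertion rule named above has the well-known midpoint weights $(3, -25, 150, 150, -25, 3)/256$, i.e. the degree-five Lagrange interpolant of six consecutive samples evaluated at the midpoint; a minimal sketch of one refinement step on uniform data (interior points only, boundary handling omitted):

```python
import numpy as np

def dd6_refine(f: np.ndarray) -> np.ndarray:
    """One step of the six-point Deslauriers-Dubuc interpolatory scheme
    on uniformly sampled data; boundary samples are dropped."""
    # Midpoint weights (3, -25, 150, 150, -25, 3)/256: the degree-5
    # Lagrange interpolant through f[i-2..i+3] evaluated at i + 1/2,
    # so the rule reproduces polynomials up to degree five exactly.
    w = np.array([3.0, -25.0, 150.0, 150.0, -25.0, 3.0]) / 256.0
    mids = np.array([w @ f[i - 2 : i + 4] for i in range(2, len(f) - 3)])
    out = np.empty(2 * len(mids) + 1)
    out[0::2] = f[2 : len(f) - 2]  # old samples are kept (interpolatory)
    out[1::2] = mids
    return out
```

Applying this to samples of $x^5$ at the integers returns exactly $(i + 1/2)^5$ at the inserted points (e.g. $2.5^5 = 97.65625$ between $i=2$ and $i=3$), confirming the degree-five reproduction property the abstract invokes.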
Five axioms, an injectivity theorem, and a Lean 4 formalization recast observable space as an equivalence class of outcomes
We introduce Recognition Geometry (RG), an axiomatic framework in which geometric structure is not assumed a priori but derived. The starting point of the theory is a configuration space together with recognizers that map configurations to observable events. Observational indistinguishability induces an equivalence relation, and the observable space is obtained as a recognition quotient. Locality is introduced through a neighborhood system, without assuming any metric or topological structure. A finite local resolution axiom formalizes the fact that any observer can distinguish only finitely many outcomes within a local region. We prove that the induced observable map $\bar{R} : C_R \to E$ is injective, establishing that observable states are uniquely determined by measurement outcomes with no hidden structure. The framework connects deeply with existing approaches: $C^*$-algebraic quantum theory, information geometry, categorical physics, causal set theory, noncommutative geometry, and topos-theoretic foundations all share the measurement-first philosophy, yet RG provides a unified axiomatic foundation synthesizing these perspectives. Comparative recognizers allow us to define order-type relations based on operational comparison. Under additional assumptions, quantitative notions of distinguishability can be introduced in the form of recognition distances, defined as pseudometrics. Several examples are provided, including threshold recognizers on $\mathbb{R}^n$, discrete lattice models, quantum spin measurements, and an example motivated by Recognition Science. In the last part, we develop the composition of recognizers, proving that composite recognizers refine quotient structures and increase distinguishing power. We introduce symmetries and gauge equivalence, showing that gauge-equivalent configurations are necessarily observationally indistinguishable, though the converse does not hold in general.
A significant part of the axiomatic framework and the main constructions are formalized in the Lean 4 proof assistant, providing an independent verification of logical consistency.
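The core injectivity statement is a quotient-by-kernel argument, which can be sketched in a few lines of core Lean 4 (the names below are illustrative and independent of the paper's actual formalization):

```lean
-- Minimal sketch: a recognizer `R : C → E`, observational
-- indistinguishability as a setoid, the recognition quotient, and
-- injectivity of the induced observable map (core Lean 4, no Mathlib).
variable {C E : Type}

/-- Configurations are indistinguishable when `R` cannot separate them. -/
def recSetoid (R : C → E) : Setoid C :=
  ⟨fun x y => R x = R y,
   ⟨fun _ => rfl, fun h => h.symm, fun h₁ h₂ => h₁.trans h₂⟩⟩

/-- The induced observable map on the recognition quotient. -/
def inducedMap (R : C → E) : Quotient (recSetoid R) → E :=
  Quotient.lift R fun _ _ h => h

/-- Observable states are uniquely determined by measurement outcomes. -/
theorem inducedMap_injective (R : C → E) :
    ∀ x y, inducedMap R x = inducedMap R y → x = y :=
  Quotient.ind₂ fun _ _ h => Quotient.sound h
```

This captures only the bare injectivity theorem; the paper's Lean development additionally formalizes the axioms, locality structure, and composition results described above.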