pith. machine review for the scientific record.

arxiv: 2604.20214 · v1 · submitted 2026-04-22 · 📡 eess.SP · cs.IT · math.IT

Recognition: unknown

Computationally Efficient Sparse Signal Recovery via Linear Sketching and Deep Unfolding


Pith reviewed 2026-05-10 00:25 UTC · model grok-4.3

classification 📡 eess.SP · cs.IT · math.IT

keywords sparse signal recovery · iterative shrinkage-thresholding · deep unfolding · sketching · convergence analysis · computational efficiency · hybrid algorithm

The pith

DU-PSISTA recovers sparse signals with linear-type contraction by combining periodic sketching with deep unfolding.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

The paper introduces DU-PSISTA, which reduces the cost of iterative shrinkage-thresholding by projecting gradients through random sketches and periodically reverting to the full iteration. A tunable period parameter controls how often the accurate full step is taken versus the cheaper sketched one. Deep unfolding trains the step sizes, thresholds, and other parameters from data so the hybrid sequence adapts. The authors prove that under suitable choices the error contracts linearly toward a small neighborhood of the true sparse vector. Experiments confirm the method matches the accuracy of standard deep-unfolded ISTA at noticeably lower per-iteration cost when the sketch dimension and period are chosen well.

Core claim

The proposed DU-PSISTA algorithm, which periodically switches between standard ISTA and its sketched variant while unfolding the iterations for parameter learning, achieves a linear-type contraction to a neighborhood of the true sparse signal under sufficient conditions on the step sizes, thresholding factors, sketch matrix, and period parameter. This provides an interpretation for why the hybrid structure improves recovery accuracy.
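The hybrid recursion described in this claim can be sketched minimally in NumPy. This is an illustrative reading under assumed notation, not the authors' implementation: the step size eta and threshold lam are held fixed here (the paper learns per-iteration values by deep unfolding), and a fresh Gaussian sketch is drawn for each cheap step.

```python
import numpy as np

def soft_threshold(v, lam):
    # Elementwise soft-thresholding: the proximal operator of the l1 norm.
    return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)

def psista(A, y, T=40, P=3, l=None, eta=0.02, lam=0.01, rng=None):
    """One plausible reading of the periodic sketched ISTA recursion.

    Every P-th step uses the full gradient A^T(Ax - y); the remaining
    steps project the residual through a fresh l x m Gaussian sketch S,
    so the gradient costs O(l*n) instead of O(m*n).
    """
    rng = np.random.default_rng(rng)
    m, n = A.shape
    l = l or m // 2
    x = np.zeros(n)
    for t in range(T):
        if t % P == 0:
            grad = A.T @ (A @ x - y)          # full (accurate) step
        else:
            S = rng.standard_normal((l, m)) / np.sqrt(l)
            SA = S @ A                        # sketched operator
            grad = SA.T @ (SA @ x - S @ y)    # cheap (sketched) step
        x = soft_threshold(x - eta * grad, eta * lam)
    return x
```

With P = 1 this degenerates to pure ISTA; the paper's contribution is that intermediate P, with learned per-iteration parameters, keeps the contraction while cutting the average gradient cost.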

What carries the argument

The periodic alternation between full ISTA and dimension-reduced sketched ISTA, with all parameters (steps, thresholds, sketch size) learned end-to-end by deep unfolding.

If this is right

  • Recovery error decreases linearly until it enters a small ball around the true signal whose radius depends on the sketch size and period.
  • Average per-iteration cost scales with the sketch dimension rather than the original ambient dimension during the sketched phases.
  • The single period parameter directly trades computation for accuracy without retraining the entire network.
  • Data-driven unfolding yields better fixed points than manually tuned hybrids on the same sketch schedule.
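The per-iteration-cost point can be made concrete with a back-of-envelope flop count. This is our own accounting, not the paper's: it assumes the sketched operator is precomputed and counts only the two matrix-vector products per gradient.

```python
def avg_gradient_flops(m, n, l, P):
    # Two matrix-vector products per gradient: a full step costs ~2mn flops,
    # a sketched step ~2ln (assuming the l x n sketched operator is reused).
    # One period = 1 full step + (P - 1) sketched steps.
    full, sketched = 2 * m * n, 2 * l * n
    return (full + (P - 1) * sketched) / P

# Large-system setting from the paper's figures: n = 1024, m = 512, l = 256.
# Speedups over pure full ISTA come out to 4/3, 3/2, 5/3, 16/9 for P = 2, 3, 5, 8.
base = avg_gradient_flops(512, 1024, 256, P=1)
for P in (2, 3, 5, 8):
    print(P, round(base / avg_gradient_flops(512, 1024, 256, P), 2))
```

The general form is a speedup of P / (1 + (P - 1) l/m), which saturates at m/l as P grows; this is why the period parameter trades computation against accuracy rather than giving unbounded savings.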

Where Pith is reading between the lines

These are editorial extensions of the paper, not claims the author makes directly.

  • The same periodic-sketch pattern could be inserted into other unfolded first-order methods whenever gradient cost dominates.
  • If learned parameters can be shown to drive the neighborhood radius to zero as training data grows, the scheme would approach exact recovery.
  • High-dimensional applications such as MRI or wireless channel estimation would be natural places to test whether the observed complexity saving persists on structured rather than random matrices.

Load-bearing premise

The step sizes, thresholds, sketch matrix, and period can be selected or learned so that the contraction neighborhood stays small enough for the hybrid to outperform pure sketched or pure full iterations.

What would settle it

An experiment on random sparse vectors in which the reconstruction error fails to contract linearly for any choice of period and sketch dimension satisfying the theorem's sufficient conditions would disprove the main convergence guarantee.
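That test could be operationalized by fitting a line to the log of the per-iteration error sequence; a minimal sketch, assuming the error at each iteration has been recorded:

```python
import numpy as np

def empirical_contraction_rate(errors):
    """Fit log(err_t) ~ log(err_0) + t * log(rho) and return the estimated
    per-iteration contraction factor rho. A value clearly below 1 is
    consistent with linear-type contraction; a value near 1 for every
    (period, sketch size) choice would contradict the theorem."""
    errors = np.asarray(errors, dtype=float)
    t = np.arange(len(errors))
    slope, _ = np.polyfit(t, np.log(errors), 1)
    return float(np.exp(slope))
```

In practice the fit should be restricted to the early iterations, before the error plateaus at the neighborhood radius the theorem allows.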

Figures

Figures reproduced from arXiv: 2604.20214 by Ayano Nakai-Kasai, Tadashi Wadayama, Tatsuki Tokumura.

Figure 1. Algorithm diagram of Periodic Sketched ISTA. [PITH_FULL_IMAGE:figures/full_fig_p004_1.png]
Figure 2. Comparison of MSE performance for different period P. [PITH_FULL_IMAGE:figures/full_fig_p008_2.png]
Figure 5. Comparison of MSE performance for different sketch size. [PITH_FULL_IMAGE:figures/full_fig_p008_5.png]
Figure 6. Comparison of execution time for large system. [PITH_FULL_IMAGE:figures/full_fig_p009_6.png]
Figure 7. Comparison of MSE performance for Gaussian sketch and CountSketch. [PITH_FULL_IMAGE:figures/full_fig_p009_7.png]
Figure 8. Learned step sizes ηt for different period P in DU-PSISTA for large system (n = 1024, m = 512, l = 256). [PITH_FULL_IMAGE:figures/full_fig_p010_8.png]
Figure 9. Learned thresholding parameters λt for different period P in DU-PSISTA for large system (n = 1024, m = 512, l = 256). [PITH_FULL_IMAGE:figures/full_fig_p010_9.png]
Figures 8 and 9 show examples of the learned step sizes ηt and thresholding parameters λt for the large system with l = 256 and different periods P.
read the original abstract

This paper provides a sparse signal recovery algorithm, DU-PSISTA (Deep Unfolded-Periodic Sketched Iterative Shrinkage-Thresholding Algorithm), which aims to balance computational efficiency and accuracy for recovering high-dimensional sparse signals, and a convergence analysis under sufficient conditions. DU-PSISTA introduces a random matrix projection known as sketching to reduce the dimensionality of gradient computations and periodically alternates between the standard ISTA and the sketched variant. This hybrid structure enables flexible control over the trade-off between accuracy and computational complexity through a pre-configurable period parameter. The algorithm includes many parameters to be tuned such as step sizes and thresholding factors so that we incorporate deep unfolding that optimizes the parameters through data-driven training, enabling the algorithm to adaptively improve convergence speed and performance. We show that the proposed method achieves a linear-type contraction to a neighborhood of the true sparse signal with properly selected parameters. The analysis provides an interpretation for the effectiveness of the hybrid structure to improve recovery accuracy. Numerical experiments confirm that our method achieves comparable recovery performance to conventional deep unfolded ISTA while reducing computational complexity, especially when the period parameter and sketch size are properly selected. The results are also consistent with the theoretical insights.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, this is the friction.

Referee Report

3 major / 2 minor

Summary. The paper proposes DU-PSISTA, a hybrid algorithm that periodically alternates between standard ISTA and a sketched variant (using random linear projections to reduce gradient computation cost) for sparse signal recovery. Parameters including step sizes and thresholding factors are tuned via deep unfolding on training data, and a convergence theorem is stated showing linear-type contraction of the iterates to a neighborhood of the true sparse vector under sufficient conditions on the step sizes, thresholding factors, sketch matrix, and period parameter. Experiments claim comparable recovery accuracy to standard deep-unfolded ISTA at lower complexity when the period and sketch size are chosen appropriately.

Significance. If the convergence result can be shown to apply to the learned parameters and the neighborhood radius can be bounded explicitly in terms of sketch dimension and period, the hybrid structure would offer a theoretically motivated way to trade computation for accuracy in high-dimensional sparse recovery. The combination of sketching, periodic hybridization, and deep unfolding is a reasonable direction, but the current gap between the 'sufficient conditions' and the data-driven training limits the strength of the contribution.

major comments (3)
  1. [Convergence analysis / Theorem statement] The convergence theorem (stated in the abstract and presumably proved in the analysis section) establishes linear contraction only under external 'sufficient conditions' on step sizes, thresholding factors, sketch matrix, and period parameter. The manuscript does not verify that the parameters obtained after deep unfolding satisfy these conditions, nor does it show that the resulting neighborhood radius remains small enough for the claimed accuracy-efficiency trade-off to hold in practice.
  2. [Convergence analysis] No explicit scaling bound is derived for the neighborhood radius as a function of sketch size or period parameter. Without such a bound, it is difficult to confirm that the hybrid structure improves recovery accuracy in a controlled manner as asserted in the abstract.
  3. [Abstract and analysis section] The interpretation that the hybrid structure 'provides an interpretation for the effectiveness ... to improve recovery accuracy' is asserted but not derived from the contraction result; the theorem treats the periodic alternation as given rather than quantifying the benefit of the sketched versus standard steps.
minor comments (2)
  1. [Numerical experiments] The experimental section should report the specific values of the learned step sizes and thresholding factors after training, together with a check (even empirical) that they lie inside the region where the sufficient conditions hold.
  2. [Notation and preliminaries] Notation for the sketch matrix, period parameter, and neighborhood radius should be introduced once and used consistently; several symbols appear to be redefined across sections.

Simulated Author's Rebuttal

3 responses · 0 unresolved

We thank the referee for the constructive feedback on the convergence analysis. The comments correctly identify gaps between the sufficient conditions in the theorem and the data-driven parameters, as well as the need for clearer links to the hybrid structure's benefits. We address each point below and will revise the manuscript accordingly to strengthen the theoretical claims and their connection to practice.

read point-by-point responses
  1. Referee: [Convergence analysis / Theorem statement] The convergence theorem (stated in the abstract and presumably proved in the analysis section) establishes linear contraction only under external 'sufficient conditions' on step sizes, thresholding factors, sketch matrix, and period parameter. The manuscript does not verify that the parameters obtained after deep unfolding satisfy these conditions, nor does it show that the resulting neighborhood radius remains small enough for the claimed accuracy-efficiency trade-off to hold in practice.

    Authors: We agree this is a limitation: the theorem gives sufficient conditions, but we do not (and cannot a priori) verify that the parameters learned via deep unfolding satisfy them exactly. Deep unfolding minimizes empirical loss on training data, which empirically yields parameters that produce the observed accuracy-efficiency trade-off in experiments. In revision we will add a dedicated discussion paragraph clarifying the distinction between sufficient conditions and learned parameters, and we will include supplementary plots showing the observed neighborhood size (via residual error after convergence) for the learned parameters across different sketch sizes and periods. This will not constitute a formal verification but will better bridge theory and practice. revision: partial

  2. Referee: [Convergence analysis] No explicit scaling bound is derived for the neighborhood radius as a function of sketch size or period parameter. Without such a bound, it is difficult to confirm that the hybrid structure improves recovery accuracy in a controlled manner as asserted in the abstract.

    Authors: The proof of the contraction result already expresses the neighborhood radius in terms of the sketch matrix's restricted isometry constant, the period P, the step sizes, and the thresholding parameters. We acknowledge that a simplified closed-form scaling (e.g., O(1/sqrt(m)) or similar) is not extracted. In the revised analysis section we will rewrite the neighborhood bound to isolate the explicit dependence on sketch dimension m and period P, making the trade-off controllable by these design parameters more transparent, even without a single-term asymptotic scaling. revision: yes

  3. Referee: [Abstract and analysis section] The interpretation that the hybrid structure 'provides an interpretation for the effectiveness ... to improve recovery accuracy' is asserted but not derived from the contraction result; the theorem treats the periodic alternation as given rather than quantifying the benefit of the sketched versus standard steps.

    Authors: The contraction mapping is derived for the periodic hybrid iteration; the radius and contraction factor depend on how frequently the full (more accurate) ISTA step is taken versus the sketched step. By varying the period, the average per-iteration cost and the steady-state error can be traded off. We will revise both the abstract and the analysis section to explicitly derive this interpretation from the bound: we will state that the hybrid contraction factor is a convex combination (weighted by the period) of the full and sketched factors, thereby quantifying the benefit of inserting occasional full steps. revision: yes
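One hedged way to write down the per-period recursion this response appeals to, in our own notation rather than the paper's (ρ_full and ρ_sk denote the contraction factors of a full and a sketched step, and ε(P, l) collects the sketch-induced error):

```latex
% Sketch only: symbols are ours, not the paper's.
% One period = 1 full step followed by P-1 sketched steps.
\|x_{t+P} - x^\star\|
  \;\le\; \rho_{\mathrm{full}}\,\rho_{\mathrm{sk}}^{\,P-1}\,\|x_t - x^\star\|
  \;+\; \varepsilon(P, l),
\qquad \rho_{\mathrm{full}}\,\rho_{\mathrm{sk}}^{\,P-1} < 1.
```

Whether the paper's bound takes this product form or the convex-combination form the rebuttal mentions is exactly what the promised revision of the analysis section would need to pin down.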

Circularity Check

0 steps flagged

No circularity; convergence theorem is independent of deep-unfolding training

full rationale

The paper states a linear-type contraction result for the hybrid DU-PSISTA iterates to a neighborhood of the true signal, derived under explicit sufficient conditions on step sizes, thresholding factors, sketch matrix, and period parameter. This is a standard contraction-mapping argument applied to the periodic sketched/ISTA recursion and does not depend on or reduce to the values obtained by subsequent data-driven training. Deep unfolding is introduced only as a separate mechanism for selecting those parameters; the theorem itself makes no reference to training data, loss minimization, or fitted quantities. No self-citations are used to justify uniqueness or an ansatz, no fitted input is relabeled as a prediction, and no renaming of known results occurs. The derivation chain is therefore self-contained against its stated assumptions.

Axiom & Free-Parameter Ledger

3 free parameters · 1 axiom · 0 invented entities

The central claim rests on standard sparse-recovery assumptions plus several tunable quantities whose values are either chosen by hand or learned from data; no new physical entities are postulated.

free parameters (3)
  • period parameter
    Controls how often the algorithm switches between full and sketched ISTA iterations
  • step sizes and thresholding factors
    Multiple per-iteration parameters optimized via deep unfolding training
  • sketch size
    Determines the reduced dimension of the random projection and the resulting complexity-accuracy trade-off
axioms (1)
  • domain assumption: linear-type contraction holds under sufficient conditions on the parameters and the random sketching matrix
    Invoked to guarantee convergence to a neighborhood of the true signal

pith-pipeline@v0.9.0 · 5517 in / 1414 out tokens · 93904 ms · 2026-05-10T00:25:37.533225+00:00 · methodology

discussion (0)


Reference graph

Works this paper leans on

37 extracted references · 5 canonical work pages

  1. [1]

    Computationally efficient sparse signal recovery by deep unfolded-periodic sketched ista,

T. Tokumura, A. Nakai-Kasai, and T. Wadayama, “Computationally efficient sparse signal recovery by deep unfolded-periodic sketched ISTA,” in Proc. 2025 Asia Pacific Signal and Information Processing Association Annual Summit and Conference (APSIPA ASC), IEEE, 2025, pp. 1–6

  2. [2]

    Sketching as a tool for numerical linear algebra,

D. P. Woodruff, “Sketching as a tool for numerical linear algebra,” Foundations and Trends® in Theoretical Computer Science, vol. 10, no. 1–2, pp. 1–157, 2014

  3. [3]

    Compressed sensing,

D. L. Donoho, “Compressed sensing,” IEEE Trans. Inf. Theory, vol. 52, no. 4, pp. 1289–1306, Apr. 2006

  4. [4]

Y. C. Eldar and G. Kutyniok, Compressed Sensing: Theory and Applications. Cambridge University Press, 2012

  5. [5]

A user’s guide to compressed sensing for communications systems,

K. Hayashi, M. Nagahara, and T. Tanaka, “A user’s guide to compressed sensing for communications systems,” IEICE Trans. Commun., vol. 96, no. 3, pp. 685–712, 2013

  6. [6]

    Toward 6G: An overview of the next generation of intelligent network connectivity,

S. Prasad Tera, R. Chinthaginjala, G. Pau, and T. Hoon Kim, “Toward 6G: An overview of the next generation of intelligent network connectivity,” IEEE Access, vol. 13, pp. 925–961, 2025

  7. [7]

    Sparse signal processing for grant-free massive connectivity: A future paradigm for random access protocols in the internet of things,

L. Liu, E. G. Larsson, W. Yu, P. Popovski, C. Stefanovic, and E. De Carvalho, “Sparse signal processing for grant-free massive connectivity: A future paradigm for random access protocols in the internet of things,” IEEE Signal Process. Mag., vol. 35, no. 5, pp. 88–99, 2018

  8. [8]

Compressive-sensing-based grant-free massive access for 6G massive communication,

Z. Gao et al., “Compressive-sensing-based grant-free massive access for 6G massive communication,” IEEE Internet of Things Journal, vol. 11, no. 5, pp. 7411–7435, 2024. DOI: 10.1109/JIOT.2023.3334878

  9. [9]

Nonlinear wavelet image processing: Variational problems, compression, and noise removal through wavelet shrinkage,

A. Chambolle, R. A. DeVore, N.-Y. Lee, and B. J. Lucier, “Nonlinear wavelet image processing: Variational problems, compression, and noise removal through wavelet shrinkage,” IEEE Trans. Image Process., vol. 7, no. 3, pp. 319–335, 1998

  10. [10]

    An iterative thresholding algorithm for linear inverse problems with a sparsity constraint,

I. Daubechies, M. Defrise, and C. De Mol, “An iterative thresholding algorithm for linear inverse problems with a sparsity constraint,” Commun. Pure Appl. Math., vol. 57, no. 11, pp. 1413–1457, Nov. 2004

  11. [11]

A fast iterative shrinkage-thresholding algorithm with application to wavelet-based image deblurring,

A. Beck and M. Teboulle, “A fast iterative shrinkage-thresholding algorithm with application to wavelet-based image deblurring,” in Proc. 2009 IEEE International Conference on Acoustics, Speech and Signal Processing, IEEE, Apr. 2009, pp. 693–696

  12. [12]

    Learning fast approximations of sparse coding,

K. Gregor and Y. LeCun, “Learning fast approximations of sparse coding,” in Proc. 27th International Conference on Machine Learning, 2010, pp. 399–406

  13. [13]

Deep unfolding: Model-based inspiration of novel deep architectures,

J. R. Hershey, J. L. Roux, and F. Weninger, “Deep unfolding: Model-based inspiration of novel deep architectures,” arXiv preprint, arXiv:1409.2574, 2014

  14. [14]

Deep unfolding for communications systems: A survey and some new directions,

A. Balatsoukas-Stimming and C. Studer, “Deep unfolding for communications systems: A survey and some new directions,” in 2019 IEEE International Workshop on Signal Processing Systems (SiPS), IEEE, Oct. 2019, pp. 266–271

  15. [15]

    Soft-output joint channel estimation and data detection using deep unfolding,

H. Song, X. You, C. Zhang, and C. Studer, “Soft-output joint channel estimation and data detection using deep unfolding,” in Proc. 2021 IEEE Information Theory Workshop (ITW), IEEE, 2021, pp. 1–5

  16. [16]

ISTA-Net++: Flexible deep unfolding network for compressive sensing,

D. You, J. Xie, and J. Zhang, “ISTA-Net++: Flexible deep unfolding network for compressive sensing,” in Proc. 2021 IEEE International Conference on Multimedia and Expo (ICME), IEEE, 2021, pp. 1–6

  17. [17]

    Trainable ISTA for sparse signal recovery,

D. Ito, S. Takabe, and T. Wadayama, “Trainable ISTA for sparse signal recovery,” IEEE Trans. Signal Process., vol. 67, no. 12, pp. 3113–3125, 2019

  18. [18]

Trainable projected gradient detector for massive overloaded MIMO channels: Data-driven tuning approach,

S. Takabe, M. Imanishi, T. Wadayama, R. Hayakawa, and K. Hayashi, “Trainable projected gradient detector for massive overloaded MIMO channels: Data-driven tuning approach,” IEEE Access, vol. 7, pp. 93326–93338, 2019

  19. [19]

    A tutorial survey of architectures, algorithms, and applications for deep learning,

L. Deng, “A tutorial survey of architectures, algorithms, and applications for deep learning,” APSIPA Trans. Signal Inf. Process., vol. 3, no. e2, 2014

  20. [20]

Deep ADMM-Net for compressive sensing MRI,

Y. Yang, J. Sun, H. Li, and Z. Xu, “Deep ADMM-Net for compressive sensing MRI,” in Advances in Neural Information Processing Systems, D. Lee, M. Sugiyama, U. Luxburg, I. Guyon, and R. Garnett, Eds., vol. 29, Curran Associates, Inc., 2016. [Online]. Available: https://proceedings.neurips.cc/paperfiles/paper/2016/file/1679091c5a880faf6fb...

  21. [21]

    Deep learning-based average consensus,

M. Kishida, M. Ogura, Y. Yoshida, and T. Wadayama, “Deep learning-based average consensus,” IEEE Access, vol. 8, pp. 142404–142412, 2020. DOI: 10.1109/ACCESS.2020.3014148

  22. [22]

    Model-based deep learning,

N. Shlezinger and Y. C. Eldar, “Model-based deep learning,” Found. Trends® Signal Process., vol. 17, no. 4, pp. 291–416, Aug. 2023

  23. [23]

    Exploiting the structure via sketched gradient algorithms,

J. Tang, M. Golbabaee, and M. Davies, “Exploiting the structure via sketched gradient algorithms,” in Proc. 2017 IEEE Global Conference on Signal and Information Processing (GlobalSIP), IEEE, 2017, pp. 1305–1309

  24. [24]

    Randomized structure-adaptive optimization,

J. Tang, “Randomized structure-adaptive optimization,” Ph.D. dissertation, University of Edinburgh, 2019

  25. [25]

Large-scale beamforming for massive MIMO via randomized sketching,

H. Choi, T. Jiang, Y. Shi, X. Liu, Y. Zhou, and K. B. Letaief, “Large-scale beamforming for massive MIMO via randomized sketching,” IEEE Transactions on Vehicular Technology, vol. 70, no. 5, pp. 4669–4681, 2021. DOI: 10.1109/TVT.2021.3071543

  26. [26]

    Fast randomized-MUSIC for mm-wave massive MIMO radars,

B. Li, S. Wang, J. Zhang, X. Cao, and C. Zhao, “Fast randomized-MUSIC for mm-wave massive MIMO radars,” IEEE Transactions on Vehicular Technology, vol. 70, no. 2, pp. 1952–1956, 2021. DOI: 10.1109/TVT.2021.3051266

  27. [27]

    Theoretical linear convergence of unfolded ISTA and its practical weights and thresholds,

X. Chen, Z. Wang, J. Liu, and W. Yin, “Theoretical linear convergence of unfolded ISTA and its practical weights and thresholds,” Adv. Neural Inf. Process. Syst., vol. 31, pp. 9061–9071, 2018

  28. [28]

ALISTA: Analytic weights are as good as learned weights in LISTA,

J. Liu, X. Chen, Z. Wang, and W. Yin, “ALISTA: Analytic weights are as good as learned weights in LISTA,” in International Conference on Learning Representations, 2019. [Online]. Available: https://openreview.net/forum?id=B1lnzn0ctQ

  29. [29]

    Algorithm unrolling: Interpretable, efficient deep learning for signal and image processing,

V. Monga, Y. Li, and Y. C. Eldar, “Algorithm unrolling: Interpretable, efficient deep learning for signal and image processing,” IEEE Signal Process. Mag., vol. 38, no. 2, pp. 18–44, Mar. 2021

  30. [30]

    Regression shrinkage and selection via the lasso,

R. Tibshirani, “Regression shrinkage and selection via the lasso,” J. R. Stat. Soc. Ser. B Methodol., vol. 58, no. 1, pp. 267–288, 1996

  31. [31]

    Proximal algorithms,

N. Parikh and S. Boyd, “Proximal algorithms,” Found. Trends® Optim., vol. 1, no. 3, pp. 127–239, Jan. 2014

  32. [32]

    Improved approximation algorithms for large matrices via random projections,

T. Sarlos, “Improved approximation algorithms for large matrices via random projections,” in 2006 47th Annual IEEE Symposium on Foundations of Computer Science (FOCS’06), IEEE, Oct. 2006, pp. 143–152

  33. [33]

    Iterative hessian sketch: Fast and accurate solution approximation for constrained least-squares,

M. Pilanci and M. J. Wainwright, “Iterative Hessian sketch: Fast and accurate solution approximation for constrained least-squares,” Journal of Machine Learning Research, vol. 17, no. 53, pp. 1–38, 2016

  34. [34]

Decoding by linear programming,

E. J. Candes and T. Tao, “Decoding by linear programming,” IEEE Transactions on Information Theory, vol. 51, no. 12, pp. 4203–4215, 2005

  35. [35]

    Extensions of lipschitz mappings into a hilbert space,

W. B. Johnson and J. Lindenstrauss, “Extensions of Lipschitz mappings into a Hilbert space,” Contemporary Mathematics, vol. 26, pp. 189–206, 1984

  36. [36]

    A simple proof of the restricted isometry property for random matrices,

R. Baraniuk, M. Davenport, R. DeVore, and M. Wakin, “A simple proof of the restricted isometry property for random matrices,” Constructive Approximation, vol. 28, no. 3, pp. 253–263, 2008

  37. [37]

New and improved Johnson–Lindenstrauss embeddings via the restricted isometry property,

F. Krahmer and R. Ward, “New and improved Johnson–Lindenstrauss embeddings via the restricted isometry property,” SIAM Journal on Mathematical Analysis, vol. 43, no. 3, pp. 1269–1281, 2011