pith. machine review for the scientific record.

arxiv: 2605.08111 · v1 · submitted 2026-04-27 · 💻 cs.LG · cs.AI · stat.ME

Recognition: 2 theorem links

· Lean Theorem

TTCD: Transformer Integrated Temporal Causal Discovery from Non-Stationary Time Series Data

Authors on Pith: no claims yet

Pith reviewed 2026-05-12 01:34 UTC · model grok-4.3

classification 💻 cs.LG · cs.AI · stat.ME
keywords causal discovery · time series · non-stationary data · transformer models · causal structure learning · reconstruction learning · temporal relations

The pith

A transformer framework discovers both contemporaneous and lagged causal relations in non-stationary time series by distilling signals through decoder reconstruction.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

The paper seeks to identify causal links that act either contemporaneously or with a time delay in noisy time series whose distribution shifts over time, a common challenge in areas such as environmental monitoring and economics. Existing approaches either depend on independence tests that falter with limited samples or impose strict statistical assumptions that rarely hold in practice. TTCD addresses this by pairing attention over temporal and frequency-domain features with a reconstruction process that cleans the causal signal before the graph structure is inferred. This end-to-end design sidesteps many prior restrictions and shows stronger performance across the tested datasets.

Core claim

TTCD is an end-to-end framework that learns contemporaneous and lagged causal relations from non-stationary time series. A Non-Stationary Feature Learner combines temporal and frequency-domain attention with dynamic profiling; a Causal Structure Learner then infers the graph from signals distilled via the transformer decoder's reconstruction process. This distillation is claimed to mitigate noise and spurious correlations while preserving meaningful dependencies, without assumptions on noise distributions or the data-generating process.

What carries the argument

Reconstruction-guided causal signal distillation, which isolates essential causal signals by leveraging the transformer decoder's reconstruction of the input data to filter noise and spurious correlations before causal graph inference.

If this is right

  • The approach can recover causal structures in settings with nonlinear relations and distribution changes without relying on conditional independence tests.
  • It produces more accurate and consistent results than prior methods on synthetic, benchmark, and real-world datasets.
  • The method identifies both immediate and time-delayed relations while reducing the impact of noise.
  • It offers a unified solution for causal discovery that avoids strong statistical assumptions on the data.

Where Pith is reading between the lines

These are editorial extensions of the paper, not claims the author makes directly.

  • If the distillation works as described, the method could extend to other domains with shifting time series such as climate or financial data where ground-truth causal graphs are partially known.
  • The reconstruction step might reduce the need for separate preprocessing stages in causal pipelines.
  • Testing the framework on data with controlled change points could reveal how well the dynamic profiling captures shifts.
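A controlled change-point test like the one suggested above is straightforward to set up. The sketch below (my construction, not from the paper) simulates a lag-1 linear VAR whose ground-truth coefficient matrix flips the sign of one edge halfway through, the kind of regime shift that dynamic non-stationarity profiling would need to track.

```python
import numpy as np

rng = np.random.default_rng(1)

# Ground-truth lag-1 coefficient matrix; entry [i, j] means x_j -> x_i.
A_before = np.array([[0.5, 0.0, 0.0],
                     [0.8, 0.5, 0.0],
                     [0.0, 0.7, 0.5]])
A_after = A_before.copy()
A_after[2, 1] = -0.7     # the x2 -> x3 mechanism flips sign at the change point

T, change = 1000, 500
X = np.zeros((T, 3))
for t in range(1, T):
    A = A_before if t < change else A_after
    X[t] = A @ X[t - 1] + 0.1 * rng.standard_normal(3)

def lag1_corr(a, b):
    """corr(a_t, b_{t+1}): signed lagged-dependence score."""
    return np.corrcoef(a[:-1], b[1:])[0, 1]

c_pre = lag1_corr(X[:change, 1], X[:change, 2])    # positive regime
c_post = lag1_corr(X[change:, 1], X[change:, 2])   # negative after the shift
```

A method that pools both regimes into one static graph would average these opposite-signed dependencies toward zero; a profiling component that works should recover both regimes.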

Load-bearing premise

The transformer decoder reconstruction process can reliably separate true causal dependencies from noise and spurious correlations without introducing new biases or needing assumptions about noise or how the data was generated.

What would settle it

Apply TTCD to synthetic non-stationary time series generated from a known causal graph with added high noise levels and distribution shifts, then measure whether the recovered graph matches the known structure in accuracy metrics.

Figures

Figures reproduced from arXiv: 2605.08111 by Jianwu Wang, Omar Faruque, Sahara Ali, Xue Zheng.

Figure 1. Proposed TTCD framework to learn the full temporal causal graph. The Non-Stationary… [PITH_FULL_IMAGE:figures/full_fig_p004_1.png]
Figure 2. Example of the proposed custom causal layers with four variables and a time lag of 4. [PITH_FULL_IMAGE:figures/full_fig_p006_2.png]
Figure 3. Causal graph of (a) our synthetic dataset-1 and (b) the real-world Turbulence Kinetic… [PITH_FULL_IMAGE:figures/full_fig_p016_3.png]
Figure 4. The structure of the Causal Conv2D model without the transformer. Instead of using a… [PITH_FULL_IMAGE:figures/full_fig_p018_4.png]
read the original abstract

The widespread availability of complex time series data in various domains such as environmental science, epidemiology, and economics demands robust causal discovery methods that can identify intricate contemporaneous and lagged relationships in non-stationary, nonlinear, and noisy settings. Existing constraint-based methods often rely heavily on conditional independence tests that degrade for limited data samples and complex distributions, while score-based methods impose strong statistical assumptions. Recent methods address special cases such as change point detection or distribution shifts, but struggle to provide a unified solution. We propose the Transformer Integrated Temporal Causal Discovery (TTCD) Framework, a novel end-to-end approach that learns contemporaneous and lagged causal relations from non-stationary time series. TTCD introduces a Non-Stationary Feature Learner integrating temporal and frequency-domain attention with dynamic non-stationarity profiling, and a custom Causal Structure Learner. A key innovation is reconstruction-guided causal signal distillation, to distill essential causal signals through the reconstruction process of the transformer decoder, which mitigates noise and spurious correlations while preserving meaningful dependencies. The Causal Structure Learner operates on distilled reconstructed signals to infer the underlying causal graph without restrictive assumptions on noise distributions or data generation processes. Experiments on synthetic, benchmark, and real world datasets show that TTCD consistently outperforms state-of-the-art baselines in both accuracy and consistency with domain knowledge, demonstrating the approach's effectiveness for causal discovery in challenging real world contexts.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, and this is the friction.

Referee Report

3 major / 2 minor

Summary. The paper proposes the TTCD framework, an end-to-end transformer-based method for learning contemporaneous and lagged causal relations from non-stationary time series. It introduces a Non-Stationary Feature Learner combining temporal/frequency attention with dynamic profiling, a Causal Structure Learner, and a key component called reconstruction-guided causal signal distillation that uses the transformer decoder's reconstruction process to extract causal signals while suppressing noise and spurious correlations. The method claims to operate without restrictive assumptions on noise or data generation, with experiments showing consistent outperformance over baselines on synthetic, benchmark, and real-world data.

Significance. If the reconstruction step can be shown to isolate causal structure rather than merely statistical dependencies, the approach would address a longstanding gap in causal discovery for non-stationary, nonlinear, and noisy time series by providing a unified, assumption-light alternative to constraint- and score-based methods. The integration of frequency-domain attention and end-to-end learning is a potentially useful direction, though the current manuscript provides no derivations, ablations, or identifiability results to support the central distillation claim.

major comments (3)
  1. [Abstract] Abstract: The claim that 'reconstruction-guided causal signal distillation' 'mitigates noise and spurious correlations while preserving meaningful dependencies' is presented as a key innovation without any equation, loss formulation, or proof showing that the decoder reconstruction objective (typically MSE or similar) distinguishes causal edges from non-causal associations such as common-cause confounders or non-stationary artifacts. This leaves the separation as an unverified modeling assumption.
  2. [Method] Method description (Causal Structure Learner): The statement that the learner 'operates on distilled reconstructed signals to infer the underlying causal graph without restrictive assumptions on noise distributions or data generation processes' is circular with respect to the training objective; no explicit causal regularization, intervention constraint, or identifiability argument is supplied to force the reconstruction to prefer causal over correlational explanations.
  3. [Experiments] Experiments: The abstract asserts 'consistent outperformance' on synthetic, benchmark, and real-world datasets, yet supplies no referenced ablation studies, error bars, or quantitative results demonstrating the isolated contribution of the distillation step versus a standard transformer reconstruction baseline. This makes the empirical support for the central claim difficult to evaluate.
minor comments (2)
  1. [Abstract] The abstract introduces several new terms ('Non-Stationary Feature Learner', 'reconstruction-guided causal signal distillation') without immediate reference to prior related transformer causal discovery work; a short related-work paragraph would improve context.
  2. [Method] Notation for lagged versus contemporaneous relations and the precise form of the reconstruction loss should be defined explicitly in the method section to allow reproducibility.
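One plausible written-out form of the objective the referee asks for, assuming a standard MSE reconstruction term plus an $\ell_1$ sparsity penalty on the learned adjacency matrices (an illustrative guess at the loss family, not the paper's stated formulation):

```latex
\mathcal{L} \;=\; \frac{1}{T}\sum_{t=1}^{T} \bigl\lVert x_t - \hat{x}_t \bigr\rVert_2^2
\;+\; \lambda \sum_{\tau=0}^{L} \bigl\lVert A^{(\tau)} \bigr\rVert_1
```

where $\hat{x}_t$ is the decoder reconstruction, $A^{(0)}$ the contemporaneous adjacency, and $A^{(\tau)}$ for $\tau \ge 1$ the lagged adjacencies; $T$, $L$, and $\lambda$ are my notation. Making the actual loss explicit in this style would address both minor comments at once.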

Simulated Author's Rebuttal

3 responses · 1 unresolved

We thank the referee for the constructive feedback on TTCD. The comments highlight important areas where the presentation of the distillation mechanism, its theoretical grounding, and empirical validation can be strengthened. We address each major comment point by point below, indicating the specific revisions we will make.

read point-by-point responses
  1. Referee: [Abstract] Abstract: The claim that 'reconstruction-guided causal signal distillation' 'mitigates noise and spurious correlations while preserving meaningful dependencies' is presented as a key innovation without any equation, loss formulation, or proof showing that the decoder reconstruction objective (typically MSE or similar) distinguishes causal edges from non-causal associations such as common-cause confounders or non-stationary artifacts. This leaves the separation as an unverified modeling assumption.

    Authors: We agree that the abstract states the claim at a high level without supporting equations or proofs. The current manuscript explains the distillation conceptually in the method section via the decoder's reconstruction process but does not supply a formal loss formulation or proof that it separates causal edges from confounders or artifacts. We will revise the abstract to provide a concise reference to the reconstruction objective and add a detailed description of the distillation loss (including any regularization terms) in the method section. We will also include a limitations discussion noting the absence of theoretical identifiability guarantees. revision: yes

  2. Referee: [Method] Method description (Causal Structure Learner): The statement that the learner 'operates on distilled reconstructed signals to infer the underlying causal graph without restrictive assumptions on noise distributions or data generation processes' is circular with respect to the training objective; no explicit causal regularization, intervention constraint, or identifiability argument is supplied to force the reconstruction to prefer causal over correlational explanations.

    Authors: We acknowledge that the current phrasing can appear circular without additional detail on the objective. The manuscript does not include explicit causal regularization terms, intervention-based constraints, or an identifiability argument beyond the end-to-end reconstruction. We will revise the method section to explicitly describe the training objective, clarify how the Non-Stationary Feature Learner and decoder interact during distillation, and note that the approach relies on empirical separation rather than formal guarantees. The claim of operating without restrictive assumptions will be qualified accordingly. revision: yes

  3. Referee: [Experiments] Experiments: The abstract asserts 'consistent outperformance' on synthetic, benchmark, and real-world datasets, yet supplies no referenced ablation studies, error bars, or quantitative results demonstrating the isolated contribution of the distillation step versus a standard transformer reconstruction baseline. This makes the empirical support for the central claim difficult to evaluate.

    Authors: We agree that the experiments section does not currently reference ablation studies isolating the distillation component or provide direct comparisons to a standard transformer reconstruction baseline with error bars. We will add these elements in the revision, including new ablation results on synthetic data with error bars from multiple runs and a comparison against a vanilla transformer autoencoder to quantify the distillation's contribution. The abstract will be updated to reference these supporting results. revision: yes

standing simulated objections not resolved
  • Lack of formal derivations or identifiability results demonstrating that the reconstruction-guided distillation isolates causal structure rather than statistical dependencies or non-stationary artifacts.

Circularity Check

0 steps flagged

No significant circularity in the derivation chain

full rationale

The paper proposes TTCD as a novel end-to-end framework combining a Non-Stationary Feature Learner and Causal Structure Learner, with reconstruction-guided causal signal distillation presented as an architectural innovation rather than a first-principles mathematical derivation. The abstract describes the reconstruction process as distilling causal signals but does not provide equations or a derivation chain that reduces this claim to a self-definition, fitted input, or self-citation by construction. No load-bearing steps reduce the central claims to tautological inputs; the method is instead validated externally via experiments on synthetic, benchmark, and real-world datasets, grounding it against external benchmarks rather than its own constructions.

Axiom & Free-Parameter Ledger

0 free parameters · 2 axioms · 3 invented entities

The abstract introduces several new named modules and a distillation process without specifying numerical free parameters or providing independent evidence for the invented mechanisms; the listed axioms are the implicit background assumptions required for the claimed behavior.

axioms (2)
  • domain assumption: Transformer attention mechanisms can jointly capture temporal and frequency-domain features while profiling non-stationarity
    Invoked by the Non-Stationary Feature Learner description
  • ad hoc to paper: The transformer decoder reconstruction process preserves meaningful causal dependencies while removing noise and spurious correlations
    Central to the reconstruction-guided causal signal distillation claim
invented entities (3)
  • Non-Stationary Feature Learner (no independent evidence)
    purpose: Integrates temporal and frequency-domain attention with dynamic non-stationarity profiling
    New module introduced to handle non-stationary time series
  • reconstruction-guided causal signal distillation (no independent evidence)
    purpose: Distills essential causal signals from the transformer decoder reconstruction
    Key innovation claimed to mitigate noise and spurious correlations
  • Causal Structure Learner (no independent evidence)
    purpose: Infers the underlying causal graph from the distilled signals
    Custom component that operates without restrictive noise assumptions

pith-pipeline@v0.9.0 · 5548 in / 1713 out tokens · 53206 ms · 2026-05-12T01:34:31.308400+00:00 · methodology

discussion (0)


Lean theorems connected to this paper

Citations machine-checked in the Pith Canon. Every link opens the source theorem in the public Lean library.

What do these tags mean?
matches
The paper's claim is directly supported by a theorem in the formal canon.
supports
The theorem supports part of the paper's argument, but the paper may add assumptions or extra steps.
extends
The paper goes beyond the formal theorem; the theorem is a base layer rather than the whole result.
uses
The paper appears to rely on the theorem as machinery.
contradicts
The paper's claim conflicts with a theorem or certificate in the canon.
unclear
Pith found a possible connection, but the passage is too broad, indirect, or ambiguous to say the theorem truly supports the claim.

Reference graph

Works this paper leans on

208 extracted references · 208 canonical work pages
