pith. machine review for the scientific record.

arxiv: 2605.02675 · v1 · submitted 2026-05-04 · 📊 stat.ML · cs.LG · q-bio.NC

Recognition: 4 Lean theorem links

Online Generalised Predictive Coding

Adeel Razi, Karl Friston, Mehran H. Z. Bazargani, Szymon Urbas, Thomas Brendan Murphy

Authors on Pith: no claims yet

Pith reviewed 2026-05-08 18:16 UTC · model grok-4.3

classification 📊 stat.ML · cs.LG · q-bio.NC

keywords online inference · generalised filtering · predictive coding · variational inference · data assimilation · dynamic expectation maximisation · state estimation · triple estimation

The pith

Online generalised predictive coding enables real-time tracking of hidden states by separating fast belief updates from slow parameter learning.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

The paper extends generalised filtering to online data assimilation by using a separation of temporal scales. This lets the scheme perform fast Bayesian updating of dynamic hidden states while slowly adjusting model parameters and uncertainty estimates. Numerical experiments with nonlinear and potentially chaotic generative models show that the online DEM approach can track latent states even when the internal model differs substantially from the true dynamics. A sympathetic reader would care because the method unifies inference, learning, and uncertainty estimation in a single online framework inspired by predictive coding.

Core claim

By separating temporal scales in the variational scheme, online DEM jointly infers dynamic latent states, learns unknown parameters, and estimates precisions, allowing it to track states of a nonlinear generative process in real time even when the functional form of the model differs from the true dynamics.

What carries the argument

Separation of temporal scales that supports fast variational belief updating about hidden states alongside slow updates to parameters and precisions.
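The load-bearing idea can be illustrated with a deliberately minimal two-timescale sketch: a fast gradient step on the state belief at every observation, and a much slower step on a single parameter. This is a toy AR(1) process with hand-picked learning rates (`eta_fast`, `eta_slow`) — an assumption for illustration, not the paper's ODEM derivation or its GLV/Lorenz models.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy generative process (not the paper's models):
# x_t = a_true * x_{t-1} + w_t,   y_t = x_t + v_t
a_true, T = 0.9, 5000
x = np.zeros(T)
y = np.zeros(T)
for t in range(1, T):
    x[t] = a_true * x[t - 1] + 0.5 * rng.standard_normal()
    y[t] = x[t] + 0.2 * rng.standard_normal()

# Two-timescale online scheme: fast state updates, slow parameter updates.
a_hat, mu_prev = 0.5, 0.0            # initial parameter and state beliefs
eta_fast, eta_slow = 0.5, 5e-3       # the separation of temporal scales
for t in range(1, T):
    mu = a_hat * mu_prev             # predict the next state belief forward
    mu = mu + eta_fast * (y[t] - mu) # fast: correct toward the observation
    # slow: nudge the parameter to shrink the dynamic prediction error
    a_hat += eta_slow * (mu - a_hat * mu_prev) * mu_prev
    mu_prev = mu

print(f"a_hat after streaming: {a_hat:.2f}")  # drifts from 0.5 toward a_true
```

Each step costs O(1) regardless of how much data has streamed past, which is the practical payoff of contextualising fast belief updates with slowly varying parameters rather than re-optimising over the whole history.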

If this is right

  • Streaming data can be assimilated without requiring full re-optimization over all variables at every time step.
  • Triple estimation of states, parameters, and uncertainty becomes feasible for online applications.
  • The scheme remains functional under model mismatch between the internal dynamics and the true generative process.
  • It supplies a unified variational procedure for inference and learning in time-varying environments.

Where Pith is reading between the lines

These are editorial extensions of the paper, not claims the author makes directly.

  • The approach could support real-time applications such as neural signal decoding where both states and model parameters evolve.
  • If scale separation proves robust, it may reduce computational cost relative to joint optimization over all timescales.
  • Extensions to partially observed or high-dimensional systems would test whether the same separation continues to hold.

Load-bearing premise

A clean separation of temporal scales is valid and sufficient to enable slow parameter and precision updates while supporting fast state inference without destabilizing the variational scheme.
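Schematically, the premise amounts to a coupled fast–slow gradient flow on a variational free energy F. The notation below follows the generic DEM convention (generalised coordinates with shift operator D, parameters θ, log-precisions λ) but is our paraphrase, not the paper's exact equations:

```latex
\begin{align}
  \dot{\tilde{\mu}} &= D\tilde{\mu} \;-\; \partial_{\tilde{\mu}} F(\tilde{\mu},\theta,\lambda)
    && \text{(fast: beliefs about hidden states, per sample)} \\
  \dot{\theta} &= -\,\epsilon\,\partial_{\theta} F(\tilde{\mu},\theta,\lambda)
    && \text{(slow: parameters, } \epsilon \ll 1\text{)} \\
  \dot{\lambda} &= -\,\epsilon\,\partial_{\lambda} F(\tilde{\mu},\theta,\lambda)
    && \text{(slow: precisions)}
\end{align}
```

The premise is that for sufficiently small ε the fast flow effectively sees θ and λ as constants, while the slow flows average over the fast fluctuations — the adiabatic assumption the referee asks to see stress-tested.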

What would settle it

A simulation in which the online DEM scheme loses track of latent states in a chaotic generative model when the assumed separation of temporal scales is enforced would falsify the central claim.

Figures

Figures reproduced from arXiv: 2605.02675 by Adeel Razi, Karl Friston, Mehran H. Z. Bazargani, Szymon Urbas, Thomas Brendan Murphy.

Figure 1. A realisation of GLV-GP state trajectories with smooth state noise (top), and …
Figure 2. The evolution of a) the observation noise posterior estimate …
Figure 3. The evolution of the posterior expectation over …
Figure 4. The mean squared error (MSE) between the true states …
Figure 5. The blue curves show FA values, while the orange and green curves corre…
Figure 6. The inferred states for kx = 2 orders of motion. The left panel is for scenario-different: Lorenz-GM vs. GLV-GP, and the right panel is for scenario-same: GLV-GM vs. GLV-GP. Each row corresponds to a different precision prior ratio.
Figure 7. The inferred states for kx = 3 orders of motion. The left panel is for scenario-different: Lorenz-GM vs. GLV-GP, and the right panel is for scenario-same: GLV-GM vs. GLV-GP. Each row corresponds to a different precision prior ratio.
Figure 8. The predicted sensations, ŷ, for kx = 2 orders of motion. The left panel is for scenario-different: Lorenz-GM vs. GLV-GP, and the right panel is for scenario-same: GLV-GM vs. GLV-GP. Each row corresponds to a different precision prior ratio.
Figure 9. The predicted sensations, ŷ, for kx = 3 orders of motion. The left panel is for scenario-different: Lorenz-GM vs. GLV-GP, and the right panel is for scenario-same: GLV-GM vs. GLV-GP. Each row corresponds to a different precision prior ratio.
Figure 10. The evolution of the µλy posterior estimate in a) scenario-different and b) scenario-same, with kx = 2 orders of motion and along 7 precision prior ratios. The solid lines represent the posterior means at a given time, and the shaded bands represent credible regions within two standard deviations of the mean.
Figure 11. The evolution of the µλy posterior estimate in a) scenario-different and b) scenario-same, with kx = 3 orders of motion and along 7 precision prior ratios. The solid lines represent the posterior means at a given time, and the shaded bands represent credible regions within two standard deviations of the mean.
Figure 12. The evolution of the µλx posterior estimate in a) scenario-different and b) scenario-same, with kx = 2 orders of motion and along 7 precision prior ratios. The solid lines represent the posterior means at a given time, and the shaded bands represent credible regions within two standard deviations of the mean.
Figure 13. The evolution of the µλx posterior estimate in a) scenario-different and b) scenario-same, with kx = 3 orders of motion and along 7 precision prior ratios. The solid lines represent the posterior means at a given time, and the shaded bands represent credible regions within two standard deviations of the mean.
Figure 14. The evolution of the posterior expectation over …
Figure 15. The evolution of the parameters a12, a13, and a23 of the matrix A in GLV-GM for the scenario-same case with kx = 2 orders of motion, shown in subfigures (a), (b), and (c), respectively, across seven precision prior ratios. The solid lines represent the posterior means at a given time, and the shaded bands represent credible regions within two standard deviations of the mean.
Figure 16. The evolution of the posterior expectation over …
Figure 17. The evolution of the parameters a12, a13, and a23 of the matrix A in GLV-GM for the scenario-same case with kx = 3 orders of motion, shown in subfigures (a), (b), and (c), respectively, across seven precision prior ratios. The solid lines represent the posterior means at a given time, and the shaded bands represent credible regions within two standard deviations of the mean.
read the original abstract

This paper introduces an extension of generalised filtering for online applications. Generalised filtering refers to data assimilation schemes that jointly infer latent states, learn unknown model parameters, and estimate uncertainty in an integrated framework -- e.g., estimate state and observation noise -- at the same time (i.e., triple estimation). This framework appears across disciplines under different names, including variational Kalman-Bucy filtering in engineering, generalised predictive coding in neuroscience, and Dynamic Expectation Maximisation (DEM) in time-series analysis. Here, we specialise DEM for ``online'' data assimilation, through a separation of temporal scales. We describe the variational principles and procedures that allow one to assimilate data in a way that allows for a slow updating of parameters and precisions, which contextualise fast Bayesian belief updating about the dynamic hidden states. Using numerical studies, we demonstrate the validity of online DEM (ODEM) using a non-linear -- and potentially chaotic -- generative model, to show that the ODEM scheme can track the latent states of the generative process, even when its functional form differs fundamentally from the dynamics of the generative model. Framed from a neuro-mimetic predictive coding perspective, ODEM offers a biologically inspired solution to online inference, learning, and uncertainty estimation in dynamic environments.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, this is the friction.

Referee Report

3 major / 3 minor

Summary. The paper introduces Online Dynamic Expectation Maximisation (ODEM), an extension of Dynamic Expectation Maximisation (DEM) / generalised filtering for online data assimilation. It achieves this via explicit separation of temporal scales, enabling slow updates to parameters and precisions while performing fast variational Bayesian inference over dynamic latent states. Numerical experiments on a nonlinear, potentially chaotic generative model are used to claim that ODEM can track latent states even when the functional form of the inference model differs fundamentally from the true generative dynamics. The work is framed as a neuro-mimetic predictive coding solution for online inference, learning, and uncertainty estimation.

Significance. If the central claim holds, ODEM would offer a principled, variational online triple-estimation scheme (states, parameters, precisions) that extends established DEM literature with a clean timescale separation. This could be valuable for real-time applications in time-series analysis and for biologically plausible models in neuroscience. The manuscript correctly builds on prior variational principles rather than introducing circularity, and the numerical demonstrations address a practically relevant mismatch scenario. However, the significance is currently tempered by the absence of analytical stability guarantees or quantified robustness metrics.

major comments (3)
  1. [Numerical studies / abstract] Numerical studies (abstract and results): the reported state-tracking performance under model mismatch lacks implementation details (e.g., numerical integration scheme, step size, initial conditions), error bars or variability measures across runs, baseline comparisons (standard DEM, online Kalman-Bucy, or particle filters), and any sensitivity analysis to the temporal scale separation rate. Without these, it is impossible to determine whether successful tracking is robust or an artifact of the chosen mismatch, integration parameters, or initial conditions.
  2. [Methods / scale separation] Scale-separation construction (methods): the claim that slow parameter/precision updates remain stable while fast state inference proceeds under fundamental model mismatch and potential chaos rests entirely on the numerical illustrations. No Lyapunov analysis, convergence bounds, or perturbation analysis is provided to show that chaotic divergence in the fast dynamics cannot leak into the slow updates or destabilise the variational free-energy gradients.
  3. [Numerical studies] Model-mismatch experiment: the generative model is described as nonlinear and potentially chaotic, yet no quantitative characterisation (e.g., Lyapunov exponents, attractor dimension) or explicit statement of how the inference model differs functionally is given. This makes it difficult to assess how “fundamental” the mismatch truly is and whether the observed tracking generalises beyond the specific example.
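As a concrete illustration of the quantitative characterisation requested in comment 3, the largest Lyapunov exponent of a chaotic generative model can be estimated with the classic two-trajectory (Benettin) renormalisation method. The sketch below uses the standard Lorenz system with textbook parameters; the integrator, step sizes, and perturbation size are our assumptions, not values from the paper:

```python
import numpy as np

def lorenz(s, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = s
    return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

def step(s, dt):
    # one classical RK4 step
    k1 = lorenz(s)
    k2 = lorenz(s + 0.5 * dt * k1)
    k3 = lorenz(s + 0.5 * dt * k2)
    k4 = lorenz(s + dt * k3)
    return s + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

dt, n_transient, n_steps, d0 = 0.01, 2000, 20000, 1e-8
a = np.array([1.0, 1.0, 1.0])
for _ in range(n_transient):           # settle onto the attractor
    a = step(a, dt)
b = a + np.array([d0, 0.0, 0.0])       # perturbed companion trajectory

log_growth = 0.0
for _ in range(n_steps):               # Benettin renormalisation loop
    a, b = step(a, dt), step(b, dt)
    d = np.linalg.norm(b - a)
    log_growth += np.log(d / d0)       # accumulate local expansion rate
    b = a + (b - a) * (d0 / d)         # rescale separation back to d0

lam = log_growth / (n_steps * dt)
print(f"largest Lyapunov exponent ≈ {lam:.2f}")  # ≈ 0.9 for these parameters
```

A positive estimate of this kind, reported alongside the mismatch experiments, would make "potentially chaotic" a measured property rather than a description.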
minor comments (3)
  1. [Methods] Notation for the timescale separation parameter and the resulting slow/fast equations could be clarified with an explicit summary table or diagram showing which quantities are updated at which rate.
  2. [Introduction] The manuscript would benefit from citing recent online variational filtering literature (e.g., online variational Bayes or streaming DEM variants) to better situate the novelty of the scale-separation approach.
  3. [Figures] Figure captions should explicitly state the mismatch condition and the performance metric used for state tracking.

Simulated Author's Rebuttal

3 responses · 0 unresolved

We thank the referee for their constructive and detailed comments, which have helped us identify areas where the manuscript can be strengthened. We address each major comment point by point below, indicating the revisions we intend to make.

read point-by-point responses
  1. Referee: [Numerical studies / abstract] Numerical studies (abstract and results): the reported state-tracking performance under model mismatch lacks implementation details (e.g., numerical integration scheme, step size, initial conditions), error bars or variability measures across runs, baseline comparisons (standard DEM, online Kalman-Bucy, or particle filters), and any sensitivity analysis to the temporal scale separation rate. Without these, it is impossible to determine whether successful tracking is robust or an artifact of the chosen mismatch, integration parameters, or initial conditions.

    Authors: We agree that the numerical studies would benefit from greater transparency and additional controls. In the revised manuscript we will expand the methods and results sections to include: the specific numerical integration scheme and step sizes used; initial conditions for all simulations; error bars or standard deviations computed across multiple independent runs; direct comparisons against standard DEM and an online Kalman-Bucy filter; and a sensitivity analysis with respect to the timescale-separation parameter. These additions will allow readers to evaluate the robustness of the reported tracking performance. revision: yes

  2. Referee: [Methods / scale separation] Scale-separation construction (methods): the claim that slow parameter/precision updates remain stable while fast state inference proceeds under fundamental model mismatch and potential chaos rests entirely on the numerical illustrations. No Lyapunov analysis, convergence bounds, or perturbation analysis is provided to show that chaotic divergence in the fast dynamics cannot leak into the slow updates or destabilise the variational free-energy gradients.

    Authors: The referee is correct that the manuscript provides no analytical stability guarantees and relies on numerical demonstration. The work is framed as an algorithmic extension supported by illustrative experiments rather than a theoretical proof of stability under arbitrary mismatch and chaos. We will revise the discussion to explicitly state this scope limitation and to suggest that a full Lyapunov or perturbation analysis of the coupled fast-slow system constitutes an important direction for future research. We do not claim that the numerical results constitute a general proof. revision: partial

  3. Referee: [Numerical studies] Model-mismatch experiment: the generative model is described as nonlinear and potentially chaotic, yet no quantitative characterisation (e.g., Lyapunov exponents, attractor dimension) or explicit statement of how the inference model differs functionally is given. This makes it difficult to assess how “fundamental” the mismatch truly is and whether the observed tracking generalises beyond the specific example.

    Authors: We will augment the model-mismatch experiment section with a quantitative characterisation of the generative dynamics, including computed Lyapunov exponents (where positive) and, if appropriate, an estimate of attractor dimension. We will also provide an explicit, side-by-side description of the functional forms of the generative and inference models, highlighting the precise points of structural difference. These clarifications will make the degree of mismatch concrete and help readers judge the scope of the reported tracking results. revision: yes

Circularity Check

0 steps flagged

The online specialisation via explicit timescale separation is independent of the target result; it relies on prior variational DEM but does not reduce to it by construction.

full rationale

The derivation starts from established variational free-energy minimization in DEM (cited from prior literature including Friston et al.), then introduces an explicit separation of temporal scales to enable slow parameter/precision updates alongside fast state inference. This scale-separation ansatz is stated directly in the paper rather than derived from the target tracking result, and the central claim is supported by numerical simulations on a mismatched chaotic generative model that function as external validation. No step equates a fitted quantity to a 'prediction' by construction, nor does any load-bearing uniqueness theorem collapse to self-citation alone. Self-citations exist but are not the sole justification for the online extension.

Axiom & Free-Parameter Ledger

1 free parameter · 2 axioms · 0 invented entities

The central claim rests on standard variational Bayesian assumptions plus the domain-specific premise that temporal scale separation can be applied without loss of stability or accuracy in online settings.

free parameters (1)
  • temporal scale separation rate
    Controls the relative speed of slow parameter/precision updates versus fast state belief updates; its value is chosen to enable the online regime.
axioms (2)
  • standard math Variational free-energy minimization supports joint state-parameter-uncertainty estimation in dynamic systems
    Invoked as the foundation for DEM and its online extension.
  • domain assumption Separation of temporal scales is valid and sufficient for stable online assimilation
    Required to justify slow parameter updates alongside fast state inference.

pith-pipeline@v0.9.0 · 5536 in / 1308 out tokens · 52899 ms · 2026-05-08T18:16:35.125486+00:00 · methodology

discussion (0)


Lean theorems connected to this paper

Citations machine-checked in the Pith Canon. Every link opens the source theorem in the public Lean library.

Reference graph

Works this paper leans on

71 extracted references · 10 canonical work pages
