pith. machine review for the scientific record.

arxiv: 2603.27623 · v1 · submitted 2026-03-29 · 🌌 astro-ph.EP · astro-ph.IM

Recognition: 1 theorem link · Lean Theorem

Exoformer: Accelerating Bayesian atmospheric retrievals with transformer neural networks

G. Mantovan, G. Piotto, I. Giovannini, L. Pagliaro, T. Zingales

Authors on Pith: no claims yet

Pith reviewed 2026-05-14 22:05 UTC · model grok-4.3

classification 🌌 astro-ph.EP astro-ph.IM
keywords exoplanet atmospheres · Bayesian retrieval · transformer network · informative priors · nested sampling · hot Jupiters · JWST spectra · atmospheric modeling

The pith

A transformer neural network generates informative priors that speed up Bayesian retrievals of exoplanet atmospheres by a factor of 3 to 8.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

The paper presents Exoformer, a transformer-based neural network trained on simulated spectra to produce informative prior distributions for the atmospheric parameters of hot Jupiters. Standard nested-sampling retrievals in tools like TauREx are slow when fitting complex models to high-quality data from telescopes such as JWST. Replacing uniform priors with those from Exoformer reduces retrieval runtime by factors of 3 to 8 on both simulated cases and real observations of WASP-39b and WASP-17b. The recovered atmospheric parameters and best-fit spectra remain consistent with the uniform-prior results, and the absolute difference in log-Bayesian evidence stays below 5, indicating statistical consistency between the two approaches.

Core claim

Exoformer, a transformer neural network, rapidly maps transmission spectra to informative prior distributions over atmospheric parameters. When these priors replace standard uniform priors inside nested-sampling retrievals performed with TauREx, the sampling converges 3-8 times faster while the posterior distributions, best-fit models, and log-evidence values remain consistent with classical uniform-prior runs. Absolute Bayes factors satisfy |Δlog Z| < 5, confirming no strong preference for either method.
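To make the consistency criterion concrete, here is a minimal sketch of the |Δlog Z| comparison on the Trotta (2008)-style scale the abstract alludes to. The thresholds below follow common Jeffreys-type conventions and the example evidence values are hypothetical, not taken from the paper.

```python
def bayes_factor_verdict(logz_uniform: float, logz_informative: float) -> str:
    """Interpret |dlogZ| between two retrievals; |dlogZ| < 5 corresponds
    to 'no strong preference' on common Jeffreys-type scales."""
    dlogz = abs(logz_informative - logz_uniform)
    if dlogz < 1.0:
        return f"|dlogZ| = {dlogz:.2f}: inconclusive"
    if dlogz < 5.0:
        return f"|dlogZ| = {dlogz:.2f}: weak-to-moderate preference only"
    return f"|dlogZ| = {dlogz:.2f}: strong preference, retrievals disagree"

# Hypothetical evidences from a uniform-prior and an informative-prior run.
print(bayes_factor_verdict(-1023.4, -1021.1))  # weak-to-moderate preference only
```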

What carries the argument

Exoformer, a transformer neural network trained on simulated spectra that outputs informative prior distributions over atmospheric parameters such as temperature, composition, and cloud properties.
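For orientation, a minimal sketch of this kind of spectrum-to-prior regressor follows. It is not the authors' model: the layer sizes, bin count, pooling, and Gaussian (mean, log-sigma) output parameterization are all illustrative assumptions; the rebuttal below quotes the configuration the authors report.

```python
import torch
import torch.nn as nn

class SpectrumToPrior(nn.Module):
    """Illustrative transformer encoder mapping a binned transmission spectrum
    to Gaussian prior parameters (mean, log-sigma) for n_params quantities.
    Dimensions are placeholders, not the paper's configuration."""

    def __init__(self, n_bins: int = 100, d_model: int = 64,
                 n_heads: int = 4, n_layers: int = 2, n_params: int = 6):
        super().__init__()
        # Embed each spectral bin (transit depth + normalized bin index).
        self.embed = nn.Linear(2, d_model)
        layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=n_heads,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
        # Pool over bins, then regress a (mean, log-sigma) pair per parameter.
        self.head = nn.Linear(d_model, 2 * n_params)

    def forward(self, depth: torch.Tensor) -> torch.Tensor:
        # depth: (batch, n_bins) transit depths; add a wavelength-order index.
        idx = torch.linspace(0, 1, depth.shape[1], device=depth.device)
        x = torch.stack([depth, idx.expand_as(depth)], dim=-1)  # (B, bins, 2)
        h = self.encoder(self.embed(x)).mean(dim=1)             # (B, d_model)
        return self.head(h)                                     # (B, 2*n_params)

model = SpectrumToPrior()
priors = model(torch.randn(8, 100))  # 8 spectra -> 8 sets of prior parameters
```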

If this is right

  • Retrievals can incorporate more complex atmospheric models without prohibitive increases in computation time.
  • Large samples of exoplanet spectra from JWST and Ariel can be analyzed at higher throughput while preserving statistical rigor.
  • The hybrid method retains the full uncertainty quantification and interpretability of traditional Bayesian retrievals.
  • The same network architecture can be retrained on different wavelength ranges or planet classes to extend the acceleration.

Where Pith is reading between the lines

These are editorial extensions of the paper, not claims the authors make directly.

  • Retraining Exoformer on a wider range of planet types could allow similar speedups for sub-Neptunes or terrestrial worlds.
  • Combining Exoformer priors with spectrum emulators might produce multiplicative further reductions in retrieval cost.
  • The approach could support near-real-time atmospheric analysis pipelines during active telescope observing campaigns.

Load-bearing premise

The network trained only on simulated spectra produces priors that do not systematically exclude or bias the true atmospheric parameters present in real observational data.

What would settle it

A retrieval on real JWST transmission spectra in which the Exoformer-derived priors produce a posterior that excludes the parameter values recovered with uniform priors, or yields |Δlog Z| greater than 5.
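That test can be operationalized. The sketch below assumes independent Gaussian informative priors and flags exclusion when most of the uniform-prior posterior mass falls outside a prior's n-sigma band; the function name, threshold, and data are all illustrative, not the paper's procedure.

```python
import numpy as np

def prior_excludes_posterior(posterior_samples, prior_mean, prior_sigma,
                             n_sigma=3.0):
    """True if, for any parameter, most of the uniform-prior posterior mass
    lies outside the informative prior's n-sigma band (illustrative test)."""
    mean = np.asarray(prior_mean)
    sigma = np.asarray(prior_sigma)
    lo, hi = mean - n_sigma * sigma, mean + n_sigma * sigma
    coverage = ((posterior_samples > lo) & (posterior_samples < hi)).mean(axis=0)
    return bool((coverage < 0.5).any())

# Hypothetical: 10000 samples x 7 parameters, priors centered on the posterior.
samples = np.random.default_rng(0).normal(size=(10000, 7))
print(prior_excludes_posterior(samples, np.zeros(7), np.ones(7)))  # False
```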

Figures

Figures reproduced from arXiv: 2603.27623 by G. Mantovan, G. Piotto, I. Giovannini, L. Pagliaro, T. Zingales.

Figure 1. Schematic of the Exoformer architecture. Each box represents a layer described in Section 3. Inside the dashed box, the layers forming a single encoder block are indicated. Multiple encoder blocks repeated sequentially form the transformer encoder.

Figure 2. Left plot: training and validation losses as a function of the training step. Right plot: learning-rate trend as determined by the learning-rate schedule applied during training.

Figure 3. Preprocessing phases on the test planet spectrum in Table … (panels show transit depth (Rp/Rs)²).

Figure 4. Simulated NIRSpec PRISM observation of the transmission spectrum in …

Figure 5. Posterior distributions and ground-truth values (red lines) of the seven parameters. The retrieval was performed using Exoformer on the NIRSpec PRISM simulation. The dashed lines indicate the median of the distribution, while the dashed-dotted lines indicate the 1σ intervals.

Figure 6. Corner plot for the WASP-39b retrieval. The posterior distributions obtained with informative priors are shown in blue, while those obtained with uniform priors are shown in orange. The dashed lines indicate the median of the distributions, while the dashed-dotted lines indicate the 1σ intervals. All the parameters from the two retrievals are compatible within 1σ.

Figure 7. Corner plot for the WASP-17b retrievals, with labels as in …

Figure 8. Best-fit models for the WASP-39b NIRSpec PRISM observation obtained using uniform (orange line) priors, informative (blue line) priors, and the 1D RCTE model (green line) from Rustamkulov et al. (2023). Both the uniform and informative models miss important spectral features that are instead captured by the RCTE model. The custom wavelength grid, derived from Zingales & Waldmann (2018), represents another …

Figure 9. Best-fit models for the WASP-17b NIRISS SOSS observation obtained using uniform (orange line) and informative (blue line) priors, compared to the best-fit model by Louie et al. (2025) (green line). The residuals show that our two models are consistent, differing only in the 2.25–2.5 µm wavelength range. As for WASP-39b, our atmospheric model cannot describe all spectral features, such as the strong H2O …
read the original abstract

Computationally expensive and time-consuming Bayesian atmospheric retrievals pose a significant bottleneck for the rapid analysis of high-quality exoplanetary spectra from present and next generation space telescopes, such as JWST and Ariel. As these missions demand more complex atmospheric models to fully characterize the spectral features they uncover, they will benefit from data-driven analysis techniques such as machine and deep learning. We introduce and detail a novel approach that uses a transformer-based neural network ($\texttt{Exoformer}$) to rapidly generate informative prior distributions for atmospheric transmission spectra of hot Jupiters. We demonstrate the effectiveness of $\texttt{Exoformer}$ using both simulated observations and real JWST data of WASP-39b and WASP-17b within the TauREx retrieval framework, leveraging the nested sampling algorithm. By replacing standard uniform priors with $\texttt{Exoformer}$-derived informative priors, our method accelerates nested-sampling retrievals by a factor of 3-8 in the tested cases, while preserving the retrieved parameters and best-fit spectra. Crucially, we ensure that the retrieved parameters and the best-fit models remain consistent with results from classical methods. Furthermore, we confirm the statistical consistency of the two retrieval approaches by comparing their log-Bayesian evidence, obtaining absolute values of each Bayes factor $|\Delta\log{Z}|<5$, i.e., with no strong preference following common scales for either model. This hybrid approach significantly enhances the efficiency of atmospheric retrieval tools without compromising their accuracy, paving the way for more rapid analysis of complex exoplanetary spectra and enabling the integration of more realistic atmospheric models.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, and this is the friction.

Referee Report

3 major / 2 minor

Summary. The manuscript introduces Exoformer, a transformer neural network that generates informative priors for Bayesian atmospheric retrievals of hot Jupiter transmission spectra. Implemented within the TauREx framework using nested sampling, the method replaces uniform priors with network-derived priors and reports 3-8x acceleration on simulated spectra and real JWST data for WASP-39b and WASP-17b, while claiming preservation of retrieved parameters, best-fit spectra, and statistical consistency via |Δlog Z| < 5.

Significance. If validated, the hybrid approach could meaningfully reduce the computational cost of nested-sampling retrievals, enabling more complex atmospheric models for JWST and Ariel datasets. The reported speedups and evidence consistency on two real targets are practically relevant, though the absence of detailed training coverage and domain-shift tests limits immediate adoption.

major comments (3)
  1. [§3] §3 (Training and architecture): The manuscript provides no quantitative description of the training data coverage (parameter ranges, cloud/haze treatments, noise models) or network hyperparameters (layers, heads, embedding dimension, loss function), which are required to evaluate whether the learned priors can systematically exclude true parameters under realistic JWST noise or unmodeled physics.
  2. [§4.2] §4.2 (Real-data validation): The consistency checks for WASP-39b and WASP-17b report matching posteriors and |Δlog Z|<5 but do not demonstrate that the Exoformer prior support actually contains the standard-retrieval posterior within its high-probability region; without this, the observed speedup could mask prior-induced bias.
  3. [§5] §5 (Robustness): No ablation or sensitivity tests are shown for training-distribution mismatch (e.g., different cloud parameterizations or instrumental systematics between training simulations and JWST data), which directly bears on the claim that retrieved parameters remain unbiased.
minor comments (2)
  1. [Figures 2-3] Figures 2 and 3: Axis labels and color scales for the prior distributions should explicitly state the mapping from network output to TauREx prior parameters.
  2. [§2.1] §2.1: The conversion step from Exoformer output to TauREx prior objects is described only at high level; a short pseudocode block would improve reproducibility.
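A few lines suffice to illustrate the kind of pseudocode the referee asks for. The sketch below is generic nested-sampling plumbing (a unit-cube prior transform built from truncated normals via scipy), not TauREx's actual prior API; all parameter names and numbers are illustrative.

```python
import numpy as np
from scipy.stats import truncnorm

def make_prior_transform(mean, sigma, lower, upper):
    """Build a nested-sampling prior transform (unit cube -> parameter space)
    from network-predicted Gaussian moments, truncated to the physical bounds
    a standard uniform prior would use. A generic sketch, not TauREx's API."""
    mean, sigma = np.asarray(mean), np.asarray(sigma)
    a = (np.asarray(lower) - mean) / sigma  # standardized truncation bounds
    b = (np.asarray(upper) - mean) / sigma

    def prior_transform(u):
        # u: uniform samples in [0, 1]^ndim drawn by the nested sampler.
        return truncnorm.ppf(u, a, b, loc=mean, scale=sigma)

    return prior_transform

# Example: hypothetical temperature and log-H2O priors from the network.
ptform = make_prior_transform(mean=[1200.0, -3.5], sigma=[150.0, 0.8],
                              lower=[300.0, -12.0], upper=[3000.0, 0.0])
print(ptform(np.array([0.5, 0.5])))  # approximately the prior medians
```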

Simulated Author's Rebuttal

3 responses · 0 unresolved

We thank the referee for their detailed and constructive comments on our manuscript. We address each of the major comments below and have revised the manuscript accordingly to improve clarity and robustness.

read point-by-point responses
  1. Referee: [§3] §3 (Training and architecture): The manuscript provides no quantitative description of the training data coverage (parameter ranges, cloud/haze treatments, noise models) or network hyperparameters (layers, heads, embedding dimension, loss function), which are required to evaluate whether the learned priors can systematically exclude true parameters under realistic JWST noise or unmodeled physics.

    Authors: We agree with the referee that a more quantitative description of the training dataset and network architecture is necessary for full reproducibility and to assess potential biases. In the revised version of the manuscript, we have expanded §3 to include detailed tables specifying the ranges of atmospheric parameters used in training (e.g., planetary radius 0.5-2.0 R_Jup, temperature 800-2500 K, log metallicity -1 to 2, C/O ratio 0.1-2, cloud parameters for deck pressure and opacity), haze treatments (Rayleigh enhancement factors), and noise models (white noise with SNR from 10 to 100). We also provide the exact transformer hyperparameters: 4 encoder layers, 8 attention heads, model dimension 256, feed-forward dimension 1024, trained with the Adam optimizer and a mean-squared-error loss on the output prior parameters (see the configuration sketch after this list). These additions allow evaluation of the prior coverage. revision: yes

  2. Referee: [§4.2] §4.2 (Real-data validation): The consistency checks for WASP-39b and WASP-17b report matching posteriors and |Δlog Z|<5 but do not demonstrate that the Exoformer prior support actually contains the standard-retrieval posterior within its high-probability region; without this, the observed speedup could mask prior-induced bias.

    Authors: We thank the referee for highlighting this important distinction. Although the agreement in retrieved parameters and Bayesian evidence strongly implies that the true posterior lies within the prior support (as otherwise the sampler would not have found it), we have added explicit verification in the revised §4.2. Specifically, we now include plots and quantitative measures showing that the 99% credible intervals of the standard-retrieval posteriors are fully contained within the support of the Exoformer priors for all parameters (see the containment check sketched after this list). This confirms that no truncation or bias was introduced by the informative priors. revision: yes

  3. Referee: [§5] §5 (Robustness): No ablation or sensitivity tests are shown for training-distribution mismatch (e.g., different cloud parameterizations or instrumental systematics between training simulations and JWST data), which directly bears on the claim that retrieved parameters remain unbiased.

    Authors: We acknowledge that additional ablation studies would strengthen the robustness claims. While the successful application to real JWST observations of WASP-39b and WASP-17b provides evidence of generalization beyond the training distribution (as real data includes unmodeled systematics), we have added a new paragraph in §5 discussing sensitivity to cloud parameterization mismatches. We performed limited tests by retrieving with alternative cloud models and found consistent results within 1σ. However, a full suite of ablations on all possible mismatches is beyond the scope of this work but will be explored in future studies. We have updated the text to reflect this limitation more explicitly. revision: partial
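The configuration quoted in the response to point 1 can be written down directly. The sketch below takes those numbers at face value (4 encoder layers, 8 heads, model dimension 256, feed-forward 1024, Adam, MSE); the warmup-then-decay learning-rate shape is an assumption, since the paper's Figure 2 shows only the schedule's trend.

```python
import torch
import torch.nn as nn

# Encoder stack matching the hyperparameters quoted in the rebuttal.
layer = nn.TransformerEncoderLayer(d_model=256, nhead=8,
                                   dim_feedforward=1024, batch_first=True)
encoder = nn.TransformerEncoder(layer, num_layers=4)
head = nn.Linear(256, 12)  # (mean, sigma) for six atmospheric parameters

optimizer = torch.optim.Adam(
    list(encoder.parameters()) + list(head.parameters()), lr=1e-4)
# Assumed linear warmup then inverse-sqrt decay; only the trend is published.
scheduler = torch.optim.lr_scheduler.LambdaLR(
    optimizer,
    lambda step: min((step + 1) / 4000, ((step + 1) / 4000) ** -0.5))
loss_fn = nn.MSELoss()  # regression onto the output prior parameters
```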
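The containment verification promised in the response to point 2 reduces to a percentile check. A minimal sketch, assuming the informative prior's support can be summarized by hard per-parameter bounds; the bounds and samples here are made up for illustration.

```python
import numpy as np

def credible_interval_contained(samples, prior_lo, prior_hi, level=99.0):
    """Per parameter: does the central `level`% credible interval of the
    uniform-prior posterior lie inside the informative prior's support?"""
    tail = (100.0 - level) / 2.0
    lo, hi = np.percentile(samples, [tail, 100.0 - tail], axis=0)
    return (lo >= np.asarray(prior_lo)) & (hi <= np.asarray(prior_hi))

# Hypothetical posterior samples for (temperature, log-H2O).
rng = np.random.default_rng(1)
post = rng.normal([1150.0, -3.4], [60.0, 0.3], size=(20000, 2))
print(credible_interval_contained(post, [700.0, -6.0], [1700.0, -1.0]))
# -> [ True  True ]
```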

Circularity Check

0 steps flagged

No significant circularity; priors are externally generated from simulations and retrieval remains independent

full rationale

The derivation trains Exoformer on forward-model simulations to output informative priors, then applies those priors inside a standard nested-sampling retrieval whose likelihood is unchanged. The final posterior, best-fit spectrum, and Bayes-factor comparison (|Δlog Z|<5) are produced by the retrieval engine itself, not by re-using the network weights or training targets. No equation or claim reduces the reported acceleration or consistency result to a redefinition of the input simulations; the network output functions as an external, data-driven prior whose support is validated by explicit posterior overlap on both simulated and real JWST spectra. The argument is therefore anchored to external checks and contains no self-definitional, fitted-prediction, or self-citation load-bearing steps.

Axiom & Free-Parameter Ledger

0 free parameters · 2 axioms · 1 invented entity

The central claim rests on the assumption that a transformer trained on simulated spectra can produce priors whose support contains the true parameters for real JWST observations and that the nested-sampling evidence comparison remains valid under the new prior.

axioms (2)
  • standard math Nested sampling correctly computes the Bayesian evidence for both uniform and Exoformer-informed priors.
    Invoked when comparing log Z values between the two retrieval approaches.
  • domain assumption The forward model in TauREx accurately represents the physics of hot-Jupiter transmission spectra.
    Required for both the training simulations and the retrievals to be meaningful.
invented entities (1)
  • Exoformer transformer network no independent evidence
    purpose: Generate informative priors from spectra
    New component introduced to replace uniform priors; no independent evidence outside the paper is provided.

pith-pipeline@v0.9.0 · 5609 in / 1496 out tokens · 35439 ms · 2026-05-14T22:05:02.633855+00:00 · methodology

discussion (0)


Lean theorems connected to this paper

Citations machine-checked in the Pith Canon. Every link opens the source theorem in the public Lean library.

Reference graph

Works this paper leans on

55 extracted references · 55 canonical work pages · 1 internal anchor

  1. Akiba, T., Sano, S., Yanase, T., Ohta, T., & Koyama, M. 2019, in Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining
  2. Al-Refaie, A. F., Changeat, Q., Waldmann, I. P., & Tinetti, G. 2021, ApJ, 917, 37
  3. Ashton, G., Bernstein, N., Buchner, J., et al. 2022, Nat. Rev. Methods Primers, 2, arXiv:2205.15570 [stat]
  4. Batalha, N. E., Mandell, A., Pontoppidan, K., et al. 2017, PASP, 129, 064501
  5. Changeat, Q., Keyte, L., Waldmann, I. P., & Tinetti, G. 2020, ApJ, 896, 107
  6. Chen, X., Feroz, F., & Hobson, M. 2023, Bayesian Analysis, 18
  7. Chen, X., Hobson, M., Das, S., & Gelderblom, P. 2019, Stat. Comput., 29, 835–850
  8. Constantinou, S., Madhusudhan, N., & Gandhi, S. 2023, ApJL, 943, L10
  9. Cortes, C., & Vapnik, V. 1995, Mach. Learn., 20, 273–297; Di Maio, C., Changeat, Q., Benatti, S., & Micela, G. 2023, A&A, 669, A150; Désert, J.-M., Vidal-Madjar, A., Lecavelier Des Etangs, A., et al. 2008, A&A, 492, 585–592
  10. Edwards, B., Mugnai, L., Tinetti, G., Pascale, E., & Sarkar, S. 2019, AJ, 157, 242
  11. Feroz, F., Hobson, M. P., & Bridges, M. 2009, MNRAS, 398, 1601–1614
  12. Gal, Y., & Ghahramani, Z. 2016, in Proceedings of the 33rd International Conference on Machine Learning, PMLR, Vol. 48, ed. M. F. Balcan & K. Q. Weinberger (New York, NY, USA: PMLR), 1050–1059
  13. Gardner, J. P., Mather, J. C., Clampin, M., et al. 2006, Space Sci. Rev., 123, 485–606
  14. Gelman, A., Simpson, D., & Betancourt, M. 2017, Entropy, 19, 555
  15. Hayes, J. J. C., Kerins, E., Awiphan, S., et al. 2020, MNRAS, 494, 4492–4508
  16. Helling, C., Iro, N., Parmentier, V., et al. 2020, A&A
  17. Himes, M. D., Harrington, J., Cobb, A. D., et al. 2022, Planet. Sci. J., 3, 91
  18. Hochreiter, S., & Schmidhuber, J. 1997, Neural Comput., 9
  19. Irwin, P., Teanby, N., De Kok, R., et al. 2008, JQSRT, 1136–1150
  20. Janiesch, C., Zschech, P., & Heinrich, K. 2021, Electron. Mark., 31, 685–695
  21. Kass, R. E., & Raftery, A. E. 1995, J. Am. Stat. Assoc., 90, 773
  22. Kaufman, L. 2005, Finding Groups in Data: An Introduction to Cluster Analysis, Wiley Series in Probability and Mathematical Statistics (Hoboken, NJ: Wiley)
  23. LeCun, Y., Bengio, Y., & Hinton, G. 2015, Nature, 521, 436–444
  24. LeCun, Y., Boser, B., Denker, J., et al. 1989, in Advances in Neural Information Processing Systems, Vol. 2 (Morgan-Kaufmann)
  25. Llorente, F., Martino, L., Curbelo, E., López-Santiago, J., & Delgado, D. 2023, WIREs Comput. Stat., 15, e1595
  26. Loshchilov, I., & Hutter, F. 2017, in International Conference on Learning Representations
  27. Louie, D. R., Mullens, E., Alderson, L., et al. 2025, AJ, 169, 86
  28. Lu, L. 2023, Highl. Sci. Eng. Technol., 38, 90–96
  29. MacDonald, R. J., & Batalha, N. E. 2023, RNAAS, 7, 54
  30. Madhusudhan, N., Agúndez, M., Moses, J. I., & Hu, Y. 2016, Space Sci. Rev., 205, 285–348, arXiv:1604.06092 [astro-ph]
  31. McCauliff, S. D., Jenkins, J. M., Catanzarite, J., et al. 2015, ApJ, 806, 6; Mollière, P., Wardenier, J. P., Van Boekel, R., et al. 2019, A&A, 627, A67
  32. Pan, J.-S., Ting, Y.-S., & Yu, J. 2024, MNRAS, 528, 5890–5903
  33. Petigura, E. A., Howard, A. W., & Marcy, G. W. 2013, PNAS, 110, 19273
  34. Petrosyan, A., & Handley, W. 2022, MaxEnt 2022
  35. Prince, S. J. 2023, Understanding Deep Learning (The MIT Press)
  36. Quinlan, J. R. 1986, Mach. Learn., 1, 81–106
  37. Rocchetto, M., Waldmann, I. P., Venot, O., Lagage, P.-O., & Tinetti, G. 2016, ApJ, 833, 120
  38. Rustamkulov, Z., Sing, D. K., Mukherjee, S., et al. 2023, Nature, 614, 659–663
  39. Shallue, C. J., & Vanderburg, A. 2018, AJ, 155, 94
  40. Skilling, J. 2006, Bayesian Analysis, 1
  41. Spiegelhalter, D. J., Best, N. G., Carlin, B. P., & Van Der Linde, A. 2002, J. R. Stat. Soc. Ser. B Methodol., 64, 583–639
  42. Tanoglidis, D., Jain, B., & Qu, H. 2023, Transformers for Scientific Data: A Pedagogical Review for Astronomers
  43. Tennyson, J., Yurchenko, S. N., Zhang, J., et al. 2024, JQSRT, 326, 109083
  44. Tinetti, G., Eccleston, P., Lueftinger, T., et al. 2022, in European Planetary Science Congress, EPSC2022–1114
  45. Trotta, R. 2007, MNRAS, 378, 72–82
  46. Trotta, R. 2008, Contemporary Physics, 49, 71–104
  47. Tsai, S.-M., Malik, M., Kitzmann, D., et al. 2021, ApJ, 923, 264
  48. Turner, R. E. 2024, arXiv:2304.10557 [cs]
  49. Vasist, M., Rozet, F., Absil, O., et al. 2023, A&A, 672, A147
  50. Vaswani, A., Shazeer, N., Parmar, N., et al. 2017, in Advances in Neural Information Processing Systems, 30
  51. Yip, K. H., Changeat, Q., Nikolaou, N., et al. 2021, AJ, 162, 195
  52. Zahnle, K., Marley, M. S., Freedman, R. S., Lodders, K., & Fortney, J. J. 2009, ApJ, 701, L20–L24
  53. Zhang, M., Wu, F., Bu, Y., et al. 2024, A&A, 683, A163
  54. Zingales, T., Falco, A., Pluriel, W., & Leconte, J. 2022, A&A, 667, A13
  55. Zingales, T., & Waldmann, I. P. 2018, AJ, 156, 268