ACT: Anti-Crosstalk Learning for Cross-Sectional Stock Ranking via Temporal Disentanglement and Structural Purification
Pith reviewed 2026-05-10 01:32 UTC · model grok-4.3
The pith
The ACT framework improves cross-sectional stock ranking by disentangling temporal components and purifying structural relations to reduce crosstalk.
A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.
Core claim
The authors claim that their Anti-Crosstalk (ACT) framework achieves superior cross-sectional ranking by preventing unintended information interference across predictive factors. ACT disentangles each stock sequence into trend, fluctuation, and shock components via dedicated branches, applies a Progressive Structural Purification Encoder to sequentially remove structural crosstalk from the trend component, and adaptively fuses the branch representations for ranking.
What carries the argument
Temporal disentanglement via three dedicated branches for trend, fluctuation, and shock, combined with the Progressive Structural Purification Encoder that sequentially purifies structural information on the trend component.
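The abstract does not say how ACT extracts its three components, so as one plausible reading, a nested moving-average split yields three additive parts; all names and window sizes below are illustrative assumptions, not the paper's method.

```python
import numpy as np

def decompose(prices, trend_win=20, fluct_win=5):
    """Split a series into additive trend, fluctuation, and shock parts.

    Hypothetical decomposition: the abstract does not specify ACT's
    extraction step, so nested moving averages stand in here.
    """
    def moving_avg(x, w):
        # same-length moving average, padding the left edge with x[0]
        padded = np.concatenate([np.full(w - 1, x[0]), x])
        return np.convolve(padded, np.ones(w) / w, mode="valid")

    trend = moving_avg(prices, trend_win)       # slow, transferable component
    smooth = moving_avg(prices, fluct_win)      # medium-scale smoothing
    fluctuation = smooth - trend                # mid-frequency swings
    shock = prices - smooth                     # sharp local residuals
    return trend, fluctuation, shock

prices = 100.0 + np.cumsum(np.random.default_rng(0).normal(0, 1, 250))
trend, fluctuation, shock = decompose(prices)
# the three parts reconstruct the input exactly: trend + fluctuation + shock
```

Because the three parts are defined by successive subtractions, they sum back to the original series by construction, which is one way to keep the branches from silently discarding information.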
If this is right
- State-of-the-art ranking accuracy is reached on the CSI300 and CSI500 datasets.
- Portfolio performance improves over prior graph-based ranking methods.
- Reported improvements reach as high as 74.25 percent on the CSI300 dataset, though the abstract does not name the metric.
- Both temporal-scale and structural forms of crosstalk are handled without losing usable signals.
Where Pith is reading between the lines
- The same branch-based decomposition could transfer to other multi-entity time-series ranking tasks such as sector or asset-class prediction.
- Running the method on non-Chinese markets would test whether the crosstalk reduction holds under different market microstructures.
- Pairing the purification encoder with attention layers might further isolate relation-specific signals in larger graphs.
Load-bearing premise
Decomposing stock sequences into trend, fluctuation, and shock components via dedicated branches effectively decouples non-transferable local patterns, while sequential structural purification removes heterogeneous-relation crosstalk without discarding predictive signals.
What would settle it
An ablation study on CSI300 and CSI500 showing that models without the temporal disentanglement branches or the purification encoder reach equal or higher ranking accuracy would falsify the claim that these steps are required to cut crosstalk.
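A minimal sketch of how such an ablation could be scored, using the daily Spearman rank IC as the ranking-accuracy metric; the models and data here are synthetic stand-ins, not the paper's.

```python
import numpy as np
from scipy.stats import spearmanr

def daily_rank_ic(pred, realized):
    """Per-day Spearman rank IC for cross-sectional predictions.

    pred, realized: (days, stocks) arrays. An ablation would compare the
    mean IC of the full model against variants with the disentanglement
    branches or the purification encoder removed.
    """
    return np.array([spearmanr(p, r)[0] for p, r in zip(pred, realized)])

rng = np.random.default_rng(1)
signal = rng.normal(size=(60, 100))                   # stand-in realized returns
full = signal + 0.5 * rng.normal(size=(60, 100))      # low-noise "full model"
ablated = signal + 2.0 * rng.normal(size=(60, 100))   # high-noise "ablation"
ic_full = daily_rank_ic(full, signal)
ic_ablated = daily_rank_ic(ablated, signal)
# the less noisy predictor earns the higher mean rank IC
```

If an ablated variant matched or beat the full model on this metric, the claim that both mechanisms are required would be falsified.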
Original abstract
Cross-sectional stock ranking is a fundamental task in quantitative investment, relying on both temporal modeling of individual stocks and the capture of inter-stock dependencies. While existing deep learning models leverage graph-based approaches to enhance ranking accuracy by propagating information over relational graphs, they suffer from a key challenge: crosstalk, namely unintended information interference across predictive factors. We identify two forms of crosstalk: temporal-scale crosstalk, where trends, fluctuations, and shocks are entangled in a shared representation and non-transferable local patterns contaminate cross-stock learning; and structural crosstalk, where heterogeneous relations are indiscriminately fused and relation-specific predictive signals are obscured. To address both issues, we propose the Anti-CrossTalk (ACT) framework for cross-sectional stock ranking via temporal disentanglement and structural purification. Specifically, ACT first decomposes each stock sequence into trend, fluctuation, and shock components, then extracts component-specific information through dedicated branches, which effectively decouples non-transferable local patterns. ACT further introduces a Progressive Structural Purification Encoder to sequentially purify structural crosstalk on the trend component after mitigating temporal-scale crosstalk. An adaptive fusion module finally integrates all branch representations for ranking. Experiments on CSI300 and CSI500 demonstrate that ACT achieves state-of-the-art ranking accuracy and superior portfolio performance, with improvements of up to 74.25% on the CSI300 dataset.
Editorial analysis
A structured set of objections, weighed in public.
Referee Report
Summary. The paper proposes the Anti-Crosstalk (ACT) framework to improve cross-sectional stock ranking by addressing temporal-scale crosstalk (entangled trends, fluctuations, and shocks in shared representations) and structural crosstalk (indiscriminate fusion of heterogeneous relations). ACT decomposes each stock's time series into trend, fluctuation, and shock components processed by dedicated branches, applies a Progressive Structural Purification Encoder to the trend component, and uses an adaptive fusion module to integrate representations for final ranking. Experiments on CSI300 and CSI500 are reported to achieve state-of-the-art ranking accuracy and portfolio performance, with gains up to 74.25% on CSI300.
Significance. If the central empirical claims hold after rigorous validation, the work could advance financial time-series modeling by providing a structured approach to disentangle multi-scale temporal patterns and purify relational signals, potentially yielding more robust cross-stock predictors than standard graph-based methods. The explicit separation of non-transferable local patterns from transferable signals is a conceptually promising direction for quantitative finance applications.
major comments (2)
- [Experiments] Experiments section (and associated tables/figures): The headline claim of SOTA ranking accuracy and up to 74.25% improvement on CSI300 is presented without reported details on the full set of baselines, statistical significance tests (e.g., Diebold-Mariano or bootstrap), ablation studies isolating the contribution of each disentanglement branch and the purification encoder, or explicit controls for look-ahead bias and market-microstructure effects. These omissions make it impossible to attribute performance gains specifically to the anti-crosstalk mechanisms rather than added capacity or generic regularization.
- [Method] Method description (temporal disentanglement and Progressive Structural Purification Encoder): No quantitative diagnostics are referenced to verify that the three branches remain sufficiently uncorrelated (e.g., mutual information or cosine similarity between branch outputs) or that purification retains predictive cross-stock signals on the trend component. Without such checks or component-wise predictive-power ablations, the core assumption that the architecture isolates non-transferable patterns while preserving transferable ones remains untested and load-bearing for the claimed mechanism.
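The decorrelation diagnostic requested above could be sketched as follows, assuming each branch emits a per-stock embedding matrix; shapes and names are hypothetical, not from the paper.

```python
import numpy as np

def branch_cosine_matrix(trend_emb, fluct_emb, shock_emb):
    """Mean pairwise cosine similarity between branch embeddings.

    Inputs: (n_stocks, d) matrices, one per branch. Off-diagonal values
    near zero would support the decorrelation claim; this is the kind of
    diagnostic the report asks for, not code from the paper.
    """
    branches = [trend_emb, fluct_emb, shock_emb]
    normed = [b / np.linalg.norm(b, axis=1, keepdims=True) for b in branches]
    sim = np.empty((3, 3))
    for i in range(3):
        for j in range(3):
            sim[i, j] = np.mean(np.sum(normed[i] * normed[j], axis=1))
    return sim

rng = np.random.default_rng(2)
embeddings = [rng.normal(size=(300, 64)) for _ in range(3)]  # independent toys
sim = branch_cosine_matrix(*embeddings)
# diagonal is exactly 1; off-diagonals stay near 0 for independent branches
```

Reporting this 3x3 matrix (or a mutual-information analogue) per training epoch would make the disentanglement claim directly checkable.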
minor comments (1)
- [Method] Notation for the three components (trend, fluctuation, shock) and the purification steps should be introduced with explicit equations early in the method section to improve readability and allow direct comparison with prior multi-scale temporal models.
Simulated Author's Rebuttal
We thank the referee for the constructive and detailed feedback. The comments highlight important areas for strengthening empirical validation and methodological verification. We address each major comment below and will revise the manuscript accordingly to improve rigor and clarity.
Point-by-point responses
Referee: [Experiments] Experiments section (and associated tables/figures): The headline claim of SOTA ranking accuracy and up to 74.25% improvement on CSI300 is presented without reported details on the full set of baselines, statistical significance tests (e.g., Diebold-Mariano or bootstrap), ablation studies isolating the contribution of each disentanglement branch and the purification encoder, or explicit controls for look-ahead bias and market-microstructure effects. These omissions make it impossible to attribute performance gains specifically to the anti-crosstalk mechanisms rather than added capacity or generic regularization.
Authors: We acknowledge that the current manuscript would benefit from expanded empirical details to strengthen attribution of results. In the revision, we will: (i) list all baselines with references and implementation details; (ii) report Diebold-Mariano tests and bootstrap confidence intervals for the reported improvements; (iii) add comprehensive ablation studies isolating each temporal branch (trend, fluctuation, shock) and the Progressive Structural Purification Encoder; and (iv) include explicit discussion of look-ahead bias controls (confirming strictly causal feature computation) and market-microstructure robustness (e.g., via transaction-cost-adjusted portfolio metrics). These additions will directly address the concern that gains may stem from capacity rather than the anti-crosstalk design. revision: yes
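A sketch of the promised Diebold-Mariano test under squared-error loss; this is an illustrative implementation on synthetic errors, not the paper's code.

```python
import numpy as np
from scipy.stats import norm

def diebold_mariano(err_a, err_b, h=1):
    """Diebold-Mariano test under squared-error loss.

    err_a, err_b: per-period forecast errors of two competing models.
    Adds Newey-West style autocovariance terms for h-step forecasts.
    Illustrative sketch of the test the rebuttal promises.
    """
    d = err_a ** 2 - err_b ** 2          # loss differential
    n = len(d)
    d_bar = d.mean()
    var = np.mean((d - d_bar) ** 2)      # lag-0 variance
    for k in range(1, h):                # HAC correction for h > 1
        var += 2 * np.mean((d[k:] - d_bar) * (d[:-k] - d_bar))
    stat = d_bar / np.sqrt(var / n)
    pval = 2 * norm.sf(abs(stat))        # two-sided normal approximation
    return stat, pval

rng = np.random.default_rng(3)
base_err = rng.normal(0, 1.0, 500)       # baseline model errors
model_err = rng.normal(0, 0.7, 500)      # improved model errors
stat, pval = diebold_mariano(base_err, model_err)
# large positive stat with a tiny p-value: the improvement is significant
```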
Referee: [Method] Method description (temporal disentanglement and Progressive Structural Purification Encoder): No quantitative diagnostics are referenced to verify that the three branches remain sufficiently uncorrelated (e.g., mutual information or cosine similarity between branch outputs) or that purification retains predictive cross-stock signals on the trend component. Without such checks or component-wise predictive-power ablations, the core assumption that the architecture isolates non-transferable patterns while preserving transferable ones remains untested and load-bearing for the claimed mechanism.
Authors: We agree that explicit verification of the disentanglement assumptions is essential. In the revised version, we will add quantitative diagnostics including mutual information and average cosine similarity between the three branch outputs to demonstrate sufficient decorrelation. We will also include component-wise predictive-power ablations (e.g., ranking performance using only individual branches or the purified trend component) to show that non-transferable local patterns are isolated while transferable cross-stock signals are retained. These analyses will provide direct empirical support for the mechanism. revision: yes
Circularity Check
No circularity in the ACT framework's derivation or claims
Full rationale
The paper introduces a new neural architecture (temporal disentanglement into trend/fluctuation/shock branches plus progressive structural purification) whose design choices are presented as motivated engineering rather than derived from prior equations or self-referential definitions. No mathematical derivations, fitted parameters renamed as predictions, or load-bearing self-citations appear in the abstract or described components. Experimental SOTA results on CSI300/CSI500 are reported as empirical outcomes, not forced by construction from the model itself. The central claims therefore remain independent of the inputs they purport to explain.