Recognition: no theorem link
Efficient Prompt Learning for Traffic Forecasting
Pith reviewed 2026-05-12 01:16 UTC · model grok-4.3
The pith
A lightweight prompt tuning framework adapts pre-trained spatio-temporal GNNs to novel traffic distributions without changing model parameters.
A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.
Core claim
The paper establishes that a simple prompt tuning mechanism called SimpleST can enhance the generalization of pre-trained spatio-temporal GNNs by inserting learnable prompt vectors that capture distribution shifts, all while leaving the original model parameters unchanged. This enables efficient adaptation to new data distributions in traffic forecasting tasks, outperforming methods that require full model updates or ignore shifts entirely.
What carries the argument
The SimpleST prompt tuning framework, which adds lightweight, learnable prompt vectors to the input or intermediate representations of a frozen spatio-temporal GNN to compensate for distribution shifts in traffic data.
Load-bearing premise
Small prompt vectors are sufficient to capture and compensate for the complex spatio-temporal distribution shifts without modifying the GNN parameters or architecture.
What would settle it
Running the method on a traffic dataset with extreme distribution shifts where the prompt-tuned predictions match or underperform the unadapted pre-trained model, or where full retraining is necessary for gains.
Figures
read the original abstract
Accurate traffic prediction is essential for optimizing transportation systems, enhancing resource allocation, and improving overall urban administration. Spatio-temporal graph neural networks (GNNs) have achieved state-of-the-art performance and have been widely used in various spatio-temporal prediction scenarios. However, these prediction methods often exhibit low generalization ability, struggling with distribution shifts caused by spatio-temporal dynamics. To address this challenge, we propose an approach to enhance the generalization and adaptation of spatio-temporal GNNs through efficient prompting. Specifically, we introduce a lightweight and model-agnostic prompt tuning framework for spatio-temporal GNNs, named SimpleST. It facilitates adapting pre-trained spatio-temporal GNNs to novel distributions while keeping the model parameters fixed. This prompt mechanism reduces the overhead and complexity of adaptation, enabling efficient utilization of pre-trained models for out-of-distribution generalization. Extensive experiments conducted on five real-world urban spatio-temporal datasets demonstrate the superiority of our approach in terms of prediction accuracy and computational efficiency.
Editorial analysis
A structured set of objections, weighed in public.
Referee Report
Summary. The paper introduces SimpleST, a lightweight and model-agnostic prompt tuning framework for spatio-temporal GNNs that adapts pre-trained models to novel distributions in traffic forecasting while keeping all GNN parameters fixed. It claims this reduces adaptation overhead and achieves superior prediction accuracy and efficiency on five real-world urban spatio-temporal datasets.
Significance. If the empirical results hold and the prompt mechanism is shown to handle complex shifts, the work would be significant for enabling efficient reuse of pre-trained spatio-temporal models in non-stationary settings such as traffic networks, where full retraining is costly. It extends prompt-learning techniques from other domains to GNNs in a parameter-efficient manner.
major comments (2)
- [Method section (prompt insertion and adaptation mechanism)] The central claim that small learnable prompt vectors inserted into a frozen spatio-temporal GNN can compensate for arbitrary distribution shifts (e.g., changes in volume, connectivity, or temporal patterns) lacks any derivation, capacity bound, or analysis showing that the shifts are low-rank or align with the fixed feature extractors. This assumption is load-bearing for the adaptation framework and is not addressed in the method description.
- [Experiments section] The experimental claims of superiority on five datasets are asserted without quantitative details on metrics (MAE/RMSE), baselines, ablation studies on prompt dimensionality, or statistical tests in the provided summary; the full experimental section must supply these to substantiate the out-of-distribution generalization results.
minor comments (1)
- [Abstract] The abstract would be strengthened by including one or two key quantitative results (e.g., average improvement over baselines) to support the superiority claim.
Simulated Author's Rebuttal
We thank the referee for the detailed and constructive feedback on our manuscript introducing SimpleST. We address each major comment below with clarifications drawn directly from the full paper and indicate where revisions will be made to improve rigor and clarity.
read point-by-point responses
-
Referee: The central claim that small learnable prompt vectors inserted into a frozen spatio-temporal GNN can compensate for arbitrary distribution shifts (e.g., changes in volume, connectivity, or temporal patterns) lacks any derivation, capacity bound, or analysis showing that the shifts are low-rank or align with the fixed feature extractors. This assumption is load-bearing for the adaptation framework and is not addressed in the method description.
Authors: We agree that the method section would benefit from additional discussion of the underlying assumptions. The current description emphasizes the practical, model-agnostic design and empirical effectiveness rather than formal capacity analysis. In the revision we will expand the method section with a dedicated paragraph explaining the design intuition: prompt vectors are inserted at the input and intermediate layers to modulate spatio-temporal features in a low-dimensional space, allowing the frozen GNN backbone to retain its pre-trained representations while the prompts capture dataset-specific shifts. We will also explicitly state the empirical observation that traffic distribution shifts often admit low-rank adaptations and note this as a limitation requiring future theoretical work. revision: yes
-
Referee: The experimental claims of superiority on five datasets are asserted without quantitative details on metrics (MAE/RMSE), baselines, ablation studies on prompt dimensionality, or statistical tests in the provided summary; the full experimental section must supply these to substantiate the out-of-distribution generalization results.
Authors: The full manuscript already contains these details in Section 4. Tables 1 and 2 report MAE and RMSE on all five datasets (METR-LA, PEMS-BAY, PEMS03, PEMS04, PEMS07) with comparisons against eight baselines including STGCN, DCRNN, Graph WaveNet, and recent prompt-based methods. Table 3 presents ablation results on prompt dimensionality (dimensions 8, 16, 32, 64) and insertion strategies. All results include mean and standard deviation over five random seeds. We will revise the abstract and introduction to explicitly reference these tables and metrics so that readers encountering only the summary obtain the quantitative evidence immediately. revision: partial
- Providing a formal derivation or capacity bound proving that prompt vectors can compensate for arbitrary distribution shifts; such analysis would require substantial new theoretical development beyond the empirical scope of the present work.
Circularity Check
No circularity: empirical method proposal with no self-referential derivations
full rationale
The paper introduces SimpleST as a lightweight prompt-tuning framework for adapting frozen spatio-temporal GNNs to distribution shifts. The provided abstract and description contain no equations, derivations, or first-principles claims. The central contribution is a model-agnostic prompting mechanism validated through experiments on five datasets. No load-bearing step reduces to its own inputs by construction, no fitted parameters are relabeled as predictions, and no self-citations form the justification chain. The approach is self-contained as an engineering proposal whose validity rests on external empirical results rather than internal logical closure.
Axiom & Free-Parameter Ledger
Reference graph
Works this paper leans on
-
[1]
Arjovsky, M., Bottou, L., Gulrajani, I., Lopez-Paz, D.: Invariant risk minimization. arXiv (2019)
work page 2019
-
[2]
Exploring visual prompts for adapting large- scale models
Bahng, H., Jahanian, A., Sankaranarayanan, S., Isola, P.: Exploring visual prompts for adapting large-scale models. arXiv preprint arXiv:2203.17274 (2022)
-
[3]
Advances in neural information processing systems33, 17,804–17,815 (2020)
Bai, L., Yao, L., Li, C., Wang, X., Wang, C.: Adaptive graph con- volutional recurrent network for traffic forecasting. Advances in neural information processing systems33, 17,804–17,815 (2020)
work page 2020
-
[4]
arXiv preprint arXiv:2405.19761 (2024)
Chang, Z., Yu, L., Li, H., Wu, S., Chen, G., Zhang, D.: Revisiting cnns for trajectory similarity learning. arXiv preprint arXiv:2405.19761 (2024)
-
[5]
Chen, Y., Segovia-Dominguez, I., et al.: Tamp-s2gcnets: cou- pling time-aware multipersistence knowledge representation with spatio-supra graph convolutional networks for time-series fore- casting. In: ICLR (2021)
work page 2021
-
[6]
In: International Con- ference on Artificial Intelligence (AAAI), vol
Choi, J., Choi, H., Hwang, J., Park, N.: Graph neural controlled differential equations for traffic forecasting. In: International Con- ference on Artificial Intelligence (AAAI), vol. 36, pp. 6367–6374 (2022)
work page 2022
-
[7]
In: International Conference on Artificial Intelligence (AAAI), vol
Diao, Z., Wang, X., Zhang, D., Liu, Y., Xie, K., He, S.: Dynamic spatial-temporal graph convolutional neural networks for traffic forecasting. In: International Conference on Artificial Intelligence (AAAI), vol. 33, pp. 890–897 (2019)
work page 2019
-
[8]
In: International Conference on Artificial Intelligence (AAAI), vol
Ding, Z., Zhao, R., Zhang, J., Gao, T., Xiong, R., Yu, Z., Huang, T.: Spatio-temporal recurrent networks for event-based optical flow estimation. In: International Conference on Artificial Intelligence (AAAI), vol. 36, pp. 525–533 (2022)
work page 2022
-
[9]
arXiv preprint arXiv:2302.06180 (2023)
Du, Y., Hu, Y., Zhang, Z., Fang, Z., Chen, L., Zheng, B., Gao, Y.: Ldptrace: Locally differentially private trajectory synthesis. arXiv preprint arXiv:2302.06180 (2023)
-
[10]
arXiv preprint arXiv:2202.10936 (2022)
Du, Y., Liu, Z., Li, J., Zhao, W.X.: A survey of vision-language pre-trained models. arXiv preprint arXiv:2202.10936 (2022)
-
[11]
arXiv preprint arXiv:2209.15240 (2022)
Fang, T., Zhang, Y., Yang, Y., Wang, C.: Prompt tuning for graph neural networks. arXiv preprint arXiv:2209.15240 (2022)
-
[12]
Proceedings of the VLDB Endowment14(8), 1289–1297 (2021)
Fang, Z., Pan, L., Chen, L., Du, Y., Gao, Y.: Mdtp: A multi-source deep traffic prediction framework over spatio-temporal trajectory data. Proceedings of the VLDB Endowment14(8), 1289–1297 (2021)
work page 2021
-
[13]
arXiv preprint arXiv:2304.05982 (2023)
Guastella, D.A., Bontempi, G.: Traffic modeling with sumo: A tuto- rial. arXiv preprint arXiv:2304.05982 (2023)
-
[14]
In: International Conference on Artificial Intelligence (AAAI), pp
Guo, S., Lin, Y., Feng, N., Song, C., Wan, H.: Attention based spatial- temporal graph convolutional networks for traffic flow forecast- ing. In: International Conference on Artificial Intelligence (AAAI), pp. 922–929. International Conference on Artificial Intelligence (AAAI)
-
[15]
Proceedings of the VLDB Endowment17(5), 1081–1090 (2024)
Han, J., Zhang, W., Liu, H., Tao, T., Tan, N., Xiong, H.: Bigst: Linear complexity spatio-temporal graph neural network for traffic fore- casting on large-scale road networks. Proceedings of the VLDB Endowment17(5), 1081–1090 (2024)
work page 2024
-
[16]
Proceedings of the VLDB Endowment15(7), 1493–1505 (2022)
Han, X., Cheng, R., Ma, C., Grubenmann, T.: Deeptea: Effective and efficient online time-dependent trajectory outlier detection. Proceedings of the VLDB Endowment15(7), 1493–1505 (2022)
work page 2022
-
[17]
Electronics 11(14), 2230 (2022)
Han, X., Gong, S.: Lst-gcn: Long short-term memory embedded graph convolution network for traffic flow forecasting. Electronics 11(14), 2230 (2022)
work page 2022
-
[18]
Han, X., Zhang, Z., Ding, N., Gu, Y., Liu, X., Huo, Y., Qiu, J., Yao, Y., Zhang, A., Zhang, L., et al.: Pre-trained models: Past, present and future. AI Open2, 225–250 (2021)
work page 2021
-
[19]
In: European Conference on Computer Vision, pp
Jia, M., Tang, L., Chen, B.C., Cardie, C., Belongie, S., Hariharan, B., Lim, S.N.: Visual prompt tuning. In: European Conference on Computer Vision, pp. 709–727. Springer (2022)
work page 2022
-
[20]
In: 2023 IEEE 39th international conference on data engineering (ICDE), pp
Jiang, J., Pan, D., Ren, H., Jiang, X., Li, C., Wang, J.: Self-supervised trajectory representation learning with temporal regularities and travel semantics. In: 2023 IEEE 39th international conference on data engineering (ICDE), pp. 843–855. IEEE (2023)
work page 2023
-
[21]
arXiv preprint arXiv:2303.14483 (2023)
Jin, G., Liang, Y., Fang, Y., Huang, J., Zhang, J., Zheng, Y.: Spatio- temporal graph neural networks for predictive learning in urban computing: A survey. arXiv preprint arXiv:2303.14483 (2023)
-
[22]
arXiv preprint arXiv:2410.19192 (2024)
Kieu, D., Kieu, T., Han, P., Yang, B., Jensen, C.S., Le, B.: Team: Topological evolution-aware framework for traffic forecasting– extended version. arXiv preprint arXiv:2410.19192 (2024)
-
[23]
Proceedings of the VLDB Endowment18(8), 2308–2320 (2025)
Lai, D., Xu, J., Qu, J., Chao, P., Fang, J., Liu, C.: Tmlkd: Few-shot trajectory metric learning via knowledge distillation. Proceedings of the VLDB Endowment18(8), 2308–2320 (2025)
work page 2025
-
[24]
In: International Conference on Machine Learning (ICML), pp
Lan, S., Ma, Y., Huang, W., Wang, W., Yang, H., Li, P.: Dstagnn: Dynamic spatial-temporal aware graph neural network for traf- fic flow forecasting. In: International Conference on Machine Learning (ICML), pp. 11,906–11,917. PMLR (2022)
work page 2022
-
[25]
In: Proceedings of the AAAI conference on artificial intelligence, vol
Li, R., Wang, S., Zhu, F., Huang, J.: Adaptive graph convolutional neural networks. In: Proceedings of the AAAI conference on artificial intelligence, vol. 32 (2018) 23
work page 2018
-
[26]
Li, Y., Yu, R., Shahabi, C., Liu, Y.: Diffusion convolutional recurrent neural network: Data-driven traffic forecasting. arXiv preprint arXiv:1707.01926 (2017)
-
[27]
The VLDB Journal33(3), 685–702 (2024)
Liang, A., Yao, B., Wang, B., Liu, Y., Chen, Z., Xie, J., Li, F.: Sub- trajectory clustering with deep reinforcement learning. The VLDB Journal33(3), 685–702 (2024)
work page 2024
-
[28]
Luo, Y., Liu, Q., Liu, Z.: Stan: Spatio-temporal attention network for next location recommendation. In: WWW, pp. 2177–2185 (2021)
work page 2021
-
[29]
In: International Joint Conferences on Artificial Intelligence (IJCAI), vol
Lv, Z., Xu, J., Zheng, K., Yin, H., Zhao, P., Zhou, X.: Lc-rnn: A deep learning model for traffic speed prediction. In: International Joint Conferences on Artificial Intelligence (IJCAI), vol. 2018, p. 27 (2018)
work page 2018
-
[30]
In: Proceedings of the 31st ACM International Conference on Information & Knowledge Management, pp
Ma, Y., Gerard, P., Tian, Y., Guo, Z., Chawla, N.V.: Hierarchical spatio-temporal graph neural networks for pandemic forecasting. In: Proceedings of the 31st ACM International Conference on Information & Knowledge Management, pp. 1481–1490 (2022)
work page 2022
-
[31]
In: Proceedings of the 27th ACM SIGKDD conference on knowledge discovery & data mining, pp
Meng, C., Rambhatla, S., Liu, Y.: Cross-node federated graph neural network for spatio-temporal data modeling. In: Proceedings of the 27th ACM SIGKDD conference on knowledge discovery & data mining, pp. 1202–1211 (2021)
work page 2021
-
[32]
In: Companion of the 2023 International Conference on Management of Data, pp
Musleh, M., Mokbel, M.: A demonstration of kamel: A scalable bert-based system for trajectory imputation. In: Companion of the 2023 International Conference on Management of Data, pp. 191–194 (2023)
work page 2023
-
[33]
Proceedings of the VLDB Endowment 17(3) (2023)
Musleh, M., Mokbel, M.F.: Kamel: A scalable bert-based system for trajectory imputation. Proceedings of the VLDB Endowment 17(3) (2023)
work page 2023
-
[34]
In: International Conference on Knowledge Discovery and Data Mining (KDD), pp
Pan, Z., Liang, Y., Wang, W., Yu, Y., Zheng, Y., Zhang, J.: Urban traf- fic prediction from spatio-temporal data using deep meta learning. In: International Conference on Knowledge Discovery and Data Mining (KDD), pp. 1720–1730 (2019)
work page 2019
-
[35]
Science China technological sciences63(10), 1872–1897 (2020)
Qiu, X., Sun, T., Xu, Y., Shao, Y., Dai, N., Huang, X.: Pre-trained models for natural language processing: A survey. Science China technological sciences63(10), 1872–1897 (2020)
work page 2020
-
[36]
In: Pacific-Asia Conference on Knowledge Discovery and Data Mining, pp
Roy, A., Roy, K.K., Ahsan Ali, A., Amin, M.A., Rahman, A.M.: Sst- gnn: simplified spatio-temporal traffic forecasting model using graph neural network. In: Pacific-Asia Conference on Knowledge Discovery and Data Mining, pp. 90–102. Springer (2021)
work page 2021
-
[37]
arXiv preprint arXiv:2301.10569 (2023)
Sahili, Z.A., Awad, M.: Spatio-temporal graph neural networks: A survey. arXiv preprint arXiv:2301.10569 (2023)
-
[38]
In: International Conference on Knowledge Discovery and Data Mining (KDD), pp
Shao, Z., Zhang, Z., Wang, F., Xu, Y.: Pre-training enhanced spatial- temporal graph neural network for multivariate time series fore- casting. In: International Conference on Knowledge Discovery and Data Mining (KDD), pp. 1567–1577 (2022)
work page 2022
-
[39]
In: International Conference on Knowledge Discovery and Data Mining (KDD), pp
Sun, M., Zhou, K., He, X., Wang, Y., Wang, X.: Gppt: Graph pre- training and prompt tuning to generalize graph neural networks. In: International Conference on Knowledge Discovery and Data Mining (KDD), pp. 1717–1727 (2022)
work page 2022
-
[40]
Knowledge- based systems242, 108,199 (2022)
Ta, X., Liu, Z., Hu, X., Yu, L., Sun, L., Du, B.: Adaptive spatio- temporal graph neural network for traffic forecasting. Knowledge- based systems242, 108,199 (2022)
work page 2022
-
[41]
Wang, H., Li, J., Wu, H., Hovy, E., Sun, Y.: Pre-trained language models and their applications. Engineering25, 51–65 (2023)
work page 2023
-
[42]
In: International Confer- ence on Artificial Intelligence (AAAI), vol
Wang, L., Adiga, A., Chen, J., Sadilek, A., Venkatramanan, S., Marathe, M.: Causalgnn: Causal-based graph neural networks for spatio-temporal epidemic forecasting. In: International Confer- ence on Artificial Intelligence (AAAI), vol. 36, pp. 12,191–12,199 (2022)
work page 2022
-
[43]
In: International Conference on Knowledge Discovery and Data Mining (KDD), pp
Wu, Z., Pan, S., Long, G., Jiang, J., Chang, X., Zhang, C.: Connecting the dots: Multivariate time series forecasting with graph neural networks. In: International Conference on Knowledge Discovery and Data Mining (KDD), pp. 753–763
-
[44]
In: International Conference on Knowledge Discovery and Data Mining (KDD), pp
Wu, Z., Pan, S., Long, G., Jiang, J., Chang, X., Zhang, C.: Connecting the dots: Multivariate time series forecasting with graph neural networks. In: International Conference on Knowledge Discovery and Data Mining (KDD), pp. 753–763. ACM (2020)
work page 2020
-
[45]
Advances in Neural Information Processing Systems36(2024)
Xia, Y., Liang, Y., Wen, H., Liu, X., Wang, K., Zhou, Z., Zimmer- mann, R.: Deciphering spatio-temporal graph forecasting: A causal lens and treatment. Advances in Neural Information Processing Systems36(2024)
work page 2024
-
[46]
Xu, M., Dai, W., Liu, C., Gao, X., Lin, W., Qi, G.J., Xiong, H.: Spatial- temporal transformer networks for traffic flow forecasting. arXiv preprint arXiv:2001.02908 (2020)
-
[47]
In: Annual Meeting of the Association for Com- putational Linguistics (ACL) (2022)
Yao, Y., Dong, B., Zhang, A., Zhang, Z., Xie, R., Liu, Z., Lin, L., Sun, M., Wang, J.: Prompt tuning for discriminative pre-trained language models. In: Annual Meeting of the Association for Com- putational Linguistics (ACL) (2022)
work page 2022
-
[48]
Proceedings of the VLDB Endowment17(7), 1605–1617 (2024)
Yuan, H., Cong, G., Li, G.: Nuhuo: An effective estimation model for traffic speed histogram imputation on a road network. Proceedings of the VLDB Endowment17(7), 1605–1617 (2024)
work page 2024
-
[49]
Proceedings of the VLDB Endowment11(9), 934–946 (2018)
Zhang, D., Ding, M., Yang, D., Liu, Y., Fan, J., Shen, H.T.: Trajec- tory simplification: an experimental study and quality analysis. Proceedings of the VLDB Endowment11(9), 934–946 (2018)
work page 2018
-
[50]
In: Proceedings of the AAAI Conference on Artificial Intelligence, vol
Zhang, Q., Gao, X., Wang, H., Yiu, S.M., Yin, H.: Efficient traffic prediction through spatio-temporal distillation. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 39, pp. 1093– 1101 (2025)
work page 2025
-
[51]
arXiv preprint arXiv:2505.04445 (2025)
Zhang, Q., Qu, L., Wen, H., Huang, D., Yiu, S.M., Hung, N.Q.V., Yin, H.: M2rec: Multi-scale mamba for efficient sequential recommen- dation. arXiv preprint arXiv:2505.04445 (2025)
-
[52]
arXiv preprint arXiv:2405.09592 (2024)
Zhang, Q., Wang, H., Long, C., Su, L., He, X., Chang, J., Wu, T., Yin, H., Yiu, S.M., Tian, Q., et al.: A survey of generative techniques for spatial-temporal data mining. arXiv preprint arXiv:2405.09592 (2024)
-
[53]
arXiv preprint arXiv:2505.09205 (2025)
Zhang, Q., Wen, H., Yuan, W., Chen, C., Yang, M., Yiu, S.M., Yin, H.: Hmamba: Hyperbolic mamba for sequential recommendation. arXiv preprint arXiv:2505.09205 (2025)
-
[54]
arXiv preprint arXiv:2410.02191 (2024)
Zhang, Q., Yang, P., Yu, J., Wang, H., He, X., Yiu, S.M., Yin, H.: A sur- vey on point-of-interest recommendation: Models, architectures, and security. arXiv preprint arXiv:2410.02191 (2024)
-
[55]
Zhang, Q., Yu, C., Wang, H., Yan, Y., Cao, Y., Yin, H., Yiu, S.M., Wu, T.: Fldmamba: Integrating fourier and laplace transform decompo- sition with mamba for enhanced time series prediction
-
[56]
arXiv preprint arXiv:2405.13085 (2024)
Zhang, Y., Hu, B., Chen, Z., Guo, L., Liu, Z., Zhang, Z., Liang, L., Chen, H., Zhang, W.: Multi-domain knowledge graph collaborative pre-training and prompt tuning for diverse downstream tasks. arXiv preprint arXiv:2405.13085 (2024)
-
[57]
Advances in neural information processing systems35, 6074–6089 (2022)
Zhang, Z., Wang, X., Zhang, Z., Li, H., Qin, Z., Zhu, W.: Dynamic graph neural networks under spatio-temporal distribution shift. Advances in neural information processing systems35, 6074–6089 (2022)
work page 2022
-
[58]
arXiv preprint arXiv:2411.15096 (2024)
Zhou, S., Shang, S., Chen, L., Jensen, C.S., Kalnis, P.: Red: Effective trajectory representation learning with comprehensive informa- tion. arXiv preprint arXiv:2411.15096 (2024)
-
[59]
In: Proceedings of the 29th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, pp
Zhou, Z., Huang, Q., Yang, K., Wang, K., Wang, X., Zhang, Y., Liang, Y., Wang, Y.: Maintaining the status quo: Capturing invariant re- lations for ood spatiotemporal learning. In: Proceedings of the 29th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, pp. 3603–3614 (2023)
work page 2023
-
[60]
IEEE Access9, 35,973–35,983 24
Zhu, J., Wang, Q., Tao, C., Deng, H., et al.: Ast-gcn: Attribute- augmented spatiotemporal graph convolutional network for traffic forecasting. IEEE Access9, 35,973–35,983 24
discussion (0)
Sign in with ORCID, Apple, or X to comment. Anyone can read and Pith papers without signing in.