Looking for the Bottleneck in Fine-grained Temporal Relation Classification
Pith reviewed 2026-05-08 03:29 UTC · model grok-4.3
The pith
Classifying temporal interval relations by first determining point relations at endpoints and decoding them yields better results than direct classification.
A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.
Core claim
The paper reports that the Interval from Point approach, which classifies the point relations between the four endpoints of two temporal entities and then decodes them into an interval relation, achieves a temporal awareness score of 70.1 percent on the TempEval-3 dataset, a new state of the art for classifying the full set of thirteen interval relations.
What carries the argument
The Interval from Point method that decomposes the classification of interval relations into the classification of point relations between endpoints followed by decoding.
If this is right
- Direct thirteen-class classification over interval relations is harder than three-way classification of each of the six endpoint point relations.
- The mapping from point relations to interval relations is deterministic and preserves accuracy gains.
- Focusing on the complete set of interval relations is viable with this decomposition.
- Higher performance is obtained compared to prior methods on the benchmark dataset.
Where Pith is reading between the lines
- The success of this method implies that the primary difficulty lies in distinguishing among many similar interval classes rather than in identifying basic ordering at points.
- This technique could be extended to other domains involving interval-based reasoning, such as spatial relations in text.
- Improved temporal graphs from this method might enhance performance in related tasks like coreference resolution for events.
- Researchers could investigate the error patterns in point classification to further refine the approach.
Load-bearing premise
That the classification of point relations between endpoints is substantially more accurate than direct classification of interval relations, with the decoding step not introducing offsetting errors.
What would settle it
Controlled experiments comparing the two formulations directly: the claim would fail if predicting the six point relations is no more accurate than predicting the thirteen interval relations outright, or if the decoded system underperforms direct classification methods.
Original abstract
Temporal relation classification is the task of determining the temporal relation between pairs of temporal entities in a text. Despite recent advancements in natural language processing, temporal relation classification remains a considerable challenge. Early attempts framed this task using a comprehensive set of temporal relations between events and temporal expressions. However, due to the task complexity, datasets have been progressively simplified, leading recent approaches to focus on the relations between event pairs and to use only a subset of relations. In this work, we revisit the broader goal of classifying interval relations between temporal entities by considering the full set of relations that can hold between two time intervals. The proposed approach, Interval from Point, involves first classifying the point relations between the endpoints of the temporal entities and then decoding these point relations into an interval relation. Evaluation on the TempEval-3 dataset shows that this approach can yield effective results, achieving a temporal awareness score of 70.1 percent, a new state-of-the-art on this benchmark.
Editorial analysis
A structured set of objections, weighed in public.
Referee Report
Summary. The manuscript proposes 'Interval from Point', a decomposition for fine-grained temporal relation classification: first predict the six point relations among the four endpoints of two temporal entities, then deterministically decode the resulting tuple into one of Allen's 13 interval relations. On the TempEval-3 benchmark the method is reported to achieve a temporal awareness score of 70.1 %, presented as a new state-of-the-art while using the full set of interval relations rather than a reduced subset.
Significance. If the reported score is reproducible and the decoding step is shown to preserve performance, the work offers a concrete demonstration that breaking the 13-class interval problem into six simpler point classifications can be effective on a public benchmark. The approach directly addresses the historical simplification of temporal datasets and supplies an empirical data point (70.1 % temporal awareness) that future systems can compare against. No machine-checked proofs or parameter-free derivations are present, but the use of an external benchmark with a concrete numeric claim is a verifiable strength.
major comments (2)
- [Abstract] The central claim that the Interval-from-Point decomposition yields a new SOTA of 70.1 % temporal awareness rests on the decoder's ability to map arbitrary 6-tuples of point relations to valid interval relations. Because the six point classifiers are independent, 3^6 = 729 possible tuples arise, many of which violate intra-interval ordering or inter-interval transitivity and therefore have no legal decoding to any of Allen's 13 relations. The abstract supplies neither the empirical frequency of such invalid tuples nor the decoder's resolution rule, leaving open the possibility that decoding errors offset any advantage of the point-based formulation.
- [Abstract] No information is given on model architecture, training procedure, hyper-parameter selection, or statistical significance of the 70.1 % score. Without these details the support for the claim that the decomposition itself is responsible for the improvement remains only partially verifiable.
minor comments (1)
- [Abstract] The term 'temporal awareness score' is introduced without a definition or citation; a one-sentence gloss or reference to the TempEval-3 evaluation protocol would improve immediate readability.
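The tuple count in the first major comment can be checked by brute force. The sketch below is illustrative, not the paper's code: the endpoint-pair ordering and the grid of candidate positions are assumptions, chosen only so that every configuration of two proper intervals is covered.

```python
from itertools import product

def cmp(x, y):
    return '<' if x < y else ('=' if x == y else '>')

def point_tuple(a0, a1, b0, b1):
    # Six pairwise point relations among the four endpoints.
    pairs = [(a0, a1), (a0, b0), (a0, b1), (a1, b0), (a1, b1), (b0, b1)]
    return tuple(cmp(x, y) for x, y in pairs)

# Enumerate every placement of two proper intervals on a small grid and
# collect the distinct endpoint signatures that actually occur.  Four
# endpoints need at most four distinct positions, so range(5) suffices.
realizable = {point_tuple(a0, a1, b0, b1)
              for a0, a1, b0, b1 in product(range(5), repeat=4)
              if a0 < a1 and b0 < b1}

print(len(realizable))         # 13: exactly Allen's interval relations
print(3**6 - len(realizable))  # 716 tuples with no legal decoding
```

Of the 729 possible 6-tuples, only 13 are geometrically realizable, which is what makes the decoder's handling of the remaining 716 a substantive question.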
Simulated Author's Rebuttal
We thank the referee for their detailed and constructive comments on our manuscript. We address each major comment below and indicate the revisions that will be made to strengthen the presentation of our Interval-from-Point approach.
Point-by-point responses
- Referee: [Abstract] The central claim that the Interval-from-Point decomposition yields a new SOTA of 70.1 % temporal awareness rests on the decoder's ability to map arbitrary 6-tuples of point relations to valid interval relations. Because the six point classifiers are independent, 3^6 = 729 possible tuples arise, many of which violate intra-interval ordering or inter-interval transitivity and therefore have no legal decoding to any of Allen's 13 relations. The abstract supplies neither the empirical frequency of such invalid tuples nor the decoder's resolution rule, leaving open the possibility that decoding errors offset any advantage of the point-based formulation.
Authors: We agree that the abstract does not include these specifics. The manuscript body (Section 3.2) describes the deterministic decoder, which resolves any 6-tuple by selecting the Allen interval relation whose endpoint relations agree with the predicted tuple on the largest number of positions. We did not report the frequency of invalid tuples in the submitted version. We will revise the abstract to briefly note the decoder and add a short analysis of invalid-tuple frequency (computed on the TempEval-3 test set) in the results section so that readers can assess its impact. revision: yes
- Referee: [Abstract] No information is given on model architecture, training procedure, hyper-parameter selection, or statistical significance of the 70.1 % score. Without these details the support for the claim that the decomposition itself is responsible for the improvement remains only partially verifiable.
Authors: The abstract is kept concise to emphasize the core contribution. Full details of the model (independent fine-tuned transformer classifiers for each of the six point relations), training procedure, hyper-parameter tuning, and significance testing appear in Sections 4 and 5. To address the concern directly, we will add a single sentence to the abstract summarizing the experimental setup and the statistical comparison against prior work. revision: partial
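The max-agreement decoder described in the first response can be sketched in Python. This is a hypothetical reconstruction, not the paper's implementation: the endpoint-pair ordering, the canonical numeric examples, and the tie-breaking behavior of `max` are all assumptions.

```python
# Hypothetical sketch of the Interval-from-Point decoding step.
# Point relations are drawn from {'<', '=', '>'}.

def cmp(x, y):
    return '<' if x < y else ('=' if x == y else '>')

def point_tuple(a0, a1, b0, b1):
    """Six point relations among the four endpoints, in a fixed order."""
    pairs = [(a0, a1), (a0, b0), (a0, b1), (a1, b0), (a1, b1), (b0, b1)]
    return tuple(cmp(x, y) for x, y in pairs)

# One canonical numeric placement per Allen relation; each yields a
# distinct endpoint signature.
ALLEN_EXAMPLES = {
    'before': (0, 1, 2, 3),     'meets': (0, 1, 1, 2),
    'overlaps': (0, 2, 1, 3),   'starts': (0, 1, 0, 2),
    'during': (1, 2, 0, 3),     'finishes': (1, 2, 0, 2),
    'equals': (0, 1, 0, 1),     'after': (2, 3, 0, 1),
    'met-by': (1, 2, 0, 1),     'overlapped-by': (1, 3, 0, 2),
    'started-by': (0, 2, 0, 1), 'contains': (0, 3, 1, 2),
    'finished-by': (0, 2, 1, 2),
}
VALID = {point_tuple(*pts): name for name, pts in ALLEN_EXAMPLES.items()}
assert len(VALID) == 13  # the 13 signatures are pairwise distinct

def decode(pred):
    """Pick the Allen relation whose endpoint signature agrees with the
    predicted 6-tuple on the most positions (ties broken arbitrarily)."""
    return max(VALID.items(),
               key=lambda kv: sum(p == q for p, q in zip(kv[0], pred)))[1]

print(decode(point_tuple(0, 1, 2, 3)))  # before
```

Under this rule every one of the 729 possible tuples decodes to some relation; whether the paper resolves ties among equally close signatures the same way is not stated in the abstract.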
Circularity Check
No circularity: empirical decomposition evaluated on external benchmark
full rationale
The paper describes a standard supervised classification pipeline: train a model to predict the six point relations among four endpoints, then apply a deterministic decoder to map the resulting tuple to one of Allen's 13 interval relations. The only quantitative claim is an F1-style temporal awareness score of 70.1 % obtained by running the trained system on the held-out TempEval-3 test set. No equation, parameter fit, or self-citation is shown to be definitionally equivalent to the reported score; the benchmark data and evaluation protocol are external to the method itself. Consequently the derivation chain contains no self-definitional, fitted-input, or load-bearing self-citation steps.
Axiom & Free-Parameter Ledger
axioms (1)
- domain assumption: the six point relations between endpoints can be classified independently and then combined via deterministic rules into one of Allen's 13 interval relations without loss of coverage.