
Augmenting Trigger Semantics to Improve Event Coreference Resolution

  • Regular Paper
  • Published in: Journal of Computer Science and Technology

Abstract

Due to the small size of annotated corpora and the sparsity of event trigger words, event coreference resolvers cannot capture enough event semantics, especially trigger semantics, to identify coreferential event mentions. To address these issues, this paper proposes a trigger semantics augmentation mechanism to boost event coreference resolution. First, the mechanism applies a trigger-oriented masking strategy to pre-train a BERT (Bidirectional Encoder Representations from Transformers)-based encoder, Trigger-BERT, which is fine-tuned on the large-scale unlabeled corpus Gigaword. Second, it combines the event semantic relations from the Trigger-BERT encoder with the event interactions from a soft-attention mechanism to resolve event coreference. Experimental results on both the KBP2016 and KBP2017 datasets show that our proposed model outperforms several state-of-the-art baselines.
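
The trigger-oriented masking idea in the abstract can be illustrated with a short sketch. The code below is a minimal illustration, not the authors' released implementation: it assumes a HuggingFace Transformers setup with bert-base-uncased, and the example sentence and its trigger annotation ("fired") are hypothetical. Instead of BERT's random token masking, only the trigger's word pieces are masked and used as masked-language-model targets, which is the core of the trigger-oriented masking strategy described above.

    # Minimal sketch of trigger-oriented masking (assumptions: HuggingFace
    # Transformers, bert-base-uncased, and a hypothetical trigger annotation).
    import torch
    from transformers import BertTokenizerFast, BertForMaskedLM

    tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")
    model = BertForMaskedLM.from_pretrained("bert-base-uncased")

    sentence = "The company fired its CEO after the scandal."
    trigger = "fired"                       # hypothetical event trigger

    enc = tokenizer(sentence, return_tensors="pt")
    input_ids = enc["input_ids"].clone()

    # Labels: predict only the masked trigger tokens; -100 is ignored by the loss.
    labels = torch.full_like(input_ids, -100)

    # Locate the trigger's word-piece positions and replace them with [MASK].
    trigger_ids = tokenizer(trigger, add_special_tokens=False)["input_ids"]
    ids = input_ids[0].tolist()
    for i in range(len(ids) - len(trigger_ids) + 1):
        if ids[i:i + len(trigger_ids)] == trigger_ids:
            labels[0, i:i + len(trigger_ids)] = input_ids[0, i:i + len(trigger_ids)]
            input_ids[0, i:i + len(trigger_ids)] = tokenizer.mask_token_id
            break

    outputs = model(input_ids=input_ids,
                    attention_mask=enc["attention_mask"],
                    labels=labels)
    outputs.loss.backward()                 # one MLM update on the trigger only

In the paper this masking is applied at scale over the unlabeled Gigaword corpus to obtain the Trigger-BERT encoder used for coreference resolution; the snippet only shows a single update step on one sentence.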

Author information

Corresponding author

Correspondence to Pei-Feng Li.

Supplementary Information

ESM 1 (PDF 260 kb)

About this article

Cite this article

Huan, M., Xu, S. & Li, PF. Augmenting Trigger Semantics to Improve Event Coreference Resolution. J. Comput. Sci. Technol. 38, 600–611 (2023). https://doi.org/10.1007/s11390-022-1143-8
