Text Model for the Automatic Scoring of Business Letter Writing

Automatic Control and Computer Sciences

Abstract

This article describes a text model designed to automatically score a cohesive text written as a letter on a given theme. With the help of expert English language teachers, the scoring parameters are formulated and formalized as 14 criteria. The criteria cover the analysis of vocabulary, including features of the data domain, text subject, writing style and format, and the logical connection between sentences. The authors have developed algorithms that compute the corresponding numerical characteristics using methods and tools for automatic text analysis. The algorithms analyze the composition and structure of sentences using data from specialized dictionaries. The characteristics are geared toward checking business e-mails but can be adapted to other written texts, for example, by replacing the dictionaries. A system for automatic text scoring is built on the developed algorithms. In an experiment, the system's output is analyzed on a corpus of 20 texts previously marked up by English teachers. Automatic and expert scores are compared using heat maps and a two-dimensional UMAP representation of the characteristic text vectors. In most cases, there are no significant differences between the scores; moreover, the automatic scoring turns out to be more objective. Thus, the developed model successfully copes with the task and can be used to evaluate texts written by humans. The results will be used for automatic student language profiling. The advantages of the model are the good interpretability of its results, its credibility, and its prospects for further development.
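
A minimal sketch of how such dictionary-based criteria might be computed. The dictionaries and criterion choices below are hypothetical illustrations, not the authors' actual implementation:

```python
# Hypothetical sketch: two dictionary-based criteria computed as coverage ratios.
# BUSINESS_TERMS and LINKING_WORDS are illustrative stand-ins for the paper's
# specialized dictionaries.
BUSINESS_TERMS = {"invoice", "shipment", "quotation", "regards", "sincerely"}
LINKING_WORDS = {"however", "therefore", "moreover", "furthermore", "consequently"}

def criterion_vector(text: str) -> list[float]:
    """Return per-criterion ratios for one letter (2 of the 14 criteria, as a sketch)."""
    tokens = [t.strip(".,;:!?").lower() for t in text.split()]
    if not tokens:
        return [0.0, 0.0]
    return [
        sum(t in BUSINESS_TERMS for t in tokens) / len(tokens),  # business-vocabulary share
        sum(t in LINKING_WORDS for t in tokens) / len(tokens),   # linking-word share
    ]

print(criterion_vector("Please find the invoice attached. However, the shipment is delayed."))
```

The heat-map and UMAP comparison could be reproduced along these lines, assuming each of the 20 texts yields a 14-dimensional score vector from both the system and the experts; the arrays below are random placeholders, not the paper's data:

```python
# Hypothetical sketch of the comparison step using umap-learn, seaborn, and matplotlib.
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
from umap import UMAP

rng = np.random.default_rng(0)
auto_scores = rng.random((20, 14))                           # placeholder automatic scores
expert_scores = auto_scores + rng.normal(0, 0.05, (20, 14))  # placeholder expert scores

# Heat map of per-text, per-criterion differences between the two score sources.
sns.heatmap(auto_scores - expert_scores, cmap="coolwarm", center=0)
plt.title("Automatic minus expert scores")
plt.show()

# Joint 2-D UMAP embedding: nearby points indicate agreeing score vectors.
embedding = UMAP(n_components=2, random_state=0).fit_transform(
    np.vstack([auto_scores, expert_scores])
)
plt.figure()
plt.scatter(*embedding[:20].T, label="automatic")
plt.scatter(*embedding[20:].T, label="expert")
plt.legend()
plt.show()
```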



Funding

This study was supported by Demidov Yaroslavl State University through project no. P2-GM5-2021.

Author information

Corresponding author

Correspondence to N. S. Lagutina.

Ethics declarations

The authors of this work declare that they have no conflicts of interest.

Additional information

Translated by A. Kolemesin

Publisher’s Note.

Allerton Press remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

About this article


Cite this article

Zafievsky, D.D., Lagutina, N.S., Melnikova, O.A. et al. Text Model for the Automatic Scoring of Business Letter Writing. Aut. Control Comp. Sci. 57, 828–840 (2023). https://doi.org/10.3103/S0146411623070167
