Research Article

Math Word Problem Generation via Disentangled Memory Retrieval

Published: 26 March 2024

Abstract

The task of math word problem (MWP) generation, which produces an MWP from a given equation and relevant topic words, has attracted increasing attention from researchers. In this work, we introduce a simple memory retrieval module that searches for related training MWPs, which are then used to augment generation. To retrieve more relevant training data, we further propose a disentangled memory retrieval module built on the simple one. Specifically, we first disentangle each training MWP into a logical description and a scenario description and record them in separate memory modules. At generation time, the given equation and topic words serve as queries to retrieve relevant logical and scenario descriptions from the corresponding memory modules, respectively. The retrieved results then complement the MWP generation process. Extensive experiments and ablation studies verify the superior performance of our method and the effectiveness of each proposed module. The code is available at https://github.com/mwp-g/MWPG-DMR.
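The retrieval scheme described above can be illustrated with a minimal sketch. The class names, the bag-of-words cosine scorer, and the whitespace tokenization are all illustrative assumptions, not the paper's implementation (which learns neural representations); the sketch only shows the structural idea of two separate memories, one queried by the equation and one by the topic words:

```python
from collections import Counter
from math import sqrt

def bow_cosine(a, b):
    """Cosine similarity between two token lists via bag-of-words counts."""
    ca, cb = Counter(a), Counter(b)
    dot = sum(ca[t] * cb[t] for t in ca)
    na = sqrt(sum(v * v for v in ca.values()))
    nb = sqrt(sum(v * v for v in cb.values()))
    return dot / (na * nb) if na and nb else 0.0

class DisentangledMemory:
    """Two memories: logical descriptions keyed by equation tokens,
    scenario descriptions keyed by topic words (hypothetical sketch)."""
    def __init__(self):
        self.logic_mem = []   # list of (equation tokens, logical description)
        self.scene_mem = []   # list of (topic words, scenario description)

    def add(self, equation, topics, logical_desc, scenario_desc):
        # Each training MWP is disentangled before storage: its logical
        # description goes to one memory, its scenario description to the other.
        self.logic_mem.append((equation.split(), logical_desc))
        self.scene_mem.append((list(topics), scenario_desc))

    def retrieve(self, equation, topics, k=1):
        # Query each memory independently: the equation retrieves logical
        # descriptions, the topic words retrieve scenario descriptions.
        q_eq = equation.split()
        logic = sorted(self.logic_mem, key=lambda e: -bow_cosine(q_eq, e[0]))[:k]
        scene = sorted(self.scene_mem, key=lambda e: -bow_cosine(topics, e[0]))[:k]
        return [d for _, d in logic], [d for _, d in scene]
```

In use, the retrieved logical and scenario descriptions would be fed, alongside the equation and topic words, as extra conditioning input to the generator; the separation lets an MWP contribute its equation structure and its story context independently.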


Published in: ACM Transactions on Knowledge Discovery from Data, Volume 18, Issue 5 (June 2024), 699 pages. ISSN: 1556-4681. EISSN: 1556-472X. DOI: 10.1145/3613659


Publisher: Association for Computing Machinery, New York, NY, United States

Publication History

• Published: 26 March 2024
• Online AM: 26 January 2024
• Accepted: 15 December 2023
• Revised: 13 February 2023
• Received: 14 September 2022
