
NIM: Generative Neural Networks for Automated Modeling and Generation of Simulation Inputs

Published: 10 August 2023

Abstract

Fitting stochastic input-process models to data and then sampling from them are key steps in a simulation study but highly challenging to non-experts. We present Neural Input Modeling (NIM), a Generative Neural Network (GNN) framework that exploits modern data-rich environments to automatically capture simulation input processes and then generate samples from them. The basic GNN that we develop, called NIM-VL, comprises (i) a variational autoencoder architecture that learns the probability distribution of the input data while avoiding overfitting and (ii) long short-term memory components that concisely capture statistical dependencies across time. We show how the basic GNN architecture can be modified to exploit known distributional properties—such as independent and identically distributed structure, nonnegativity, and multimodality—to increase accuracy and speed, as well as to handle multivariate processes, categorical-valued processes, and extrapolation beyond the training data for certain nonstationary processes. We also introduce an extension to NIM called Conditional Neural Input Modeling (CNIM), which can learn from training data obtained under various realizations of a (possibly time-series-valued) stochastic “condition,” such as temperature or inflation rate, and then generate sample paths given a value of the condition not seen in the training data. This enables users to simulate a system under a specific working condition by customizing a pre-trained model; CNIM also facilitates what-if analysis. Extensive experiments show the efficacy of our approach. NIM can thus help overcome one of the key barriers to simulation for non-experts.
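The abstract describes the NIM-VL architecture only at a high level. As a concrete illustration, here is a minimal PyTorch sketch of a sequence VAE with an LSTM encoder and decoder, trained by minimizing the negative evidence lower bound (ELBO). This is not the authors' implementation: the class name LSTMVAE, the hyperparameters (hidden_dim, latent_dim), and the zero-input decoding scheme are all illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LSTMVAE(nn.Module):
    """Sequence VAE: LSTM encoder -> Gaussian latent z -> LSTM decoder (illustrative sketch)."""

    def __init__(self, input_dim: int = 1, hidden_dim: int = 64, latent_dim: int = 16):
        super().__init__()
        self.encoder = nn.LSTM(input_dim, hidden_dim, batch_first=True)
        self.fc_mu = nn.Linear(hidden_dim, latent_dim)      # posterior mean
        self.fc_logvar = nn.Linear(hidden_dim, latent_dim)  # posterior log-variance
        self.fc_init = nn.Linear(latent_dim, hidden_dim)    # maps z to decoder initial state
        self.decoder = nn.LSTM(input_dim, hidden_dim, batch_first=True)
        self.fc_out = nn.Linear(hidden_dim, input_dim)
        self.input_dim = input_dim

    def encode(self, x):
        # x: (batch, seq_len, input_dim); use the final hidden state as a sequence summary
        _, (h, _) = self.encoder(x)
        h = h.squeeze(0)
        return self.fc_mu(h), self.fc_logvar(h)

    def reparameterize(self, mu, logvar):
        # z = mu + sigma * eps with eps ~ N(0, I): the reparameterization trick
        return mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)

    def decode(self, z, seq_len):
        # Initialize the decoder hidden state from z and feed zero inputs at each step
        # (a simple non-autoregressive scheme chosen here for brevity).
        h0 = torch.tanh(self.fc_init(z)).unsqueeze(0)
        c0 = torch.zeros_like(h0)
        inputs = z.new_zeros(z.size(0), seq_len, self.input_dim)
        out, _ = self.decoder(inputs, (h0, c0))
        return self.fc_out(out)

    def forward(self, x):
        mu, logvar = self.encode(x)
        z = self.reparameterize(mu, logvar)
        return self.decode(z, x.size(1)), mu, logvar

def elbo_loss(x_hat, x, mu, logvar):
    # Negative ELBO: reconstruction error plus KL(q(z|x) || N(0, I))
    recon = F.mse_loss(x_hat, x, reduction="sum")
    kl = -0.5 * torch.sum(1.0 + logvar - mu.pow(2) - logvar.exp())
    return recon + kl
```

Once trained, new sample paths would be generated by drawing latent vectors from the N(0, I) prior and decoding them:

```python
model = LSTMVAE()
with torch.no_grad():
    z = torch.randn(8, 16)               # 8 draws from the N(0, I) prior (latent_dim = 16)
    paths = model.decode(z, seq_len=50)  # 8 synthetic sample paths of length 50
```

The conditional variant (CNIM) would additionally feed an encoding of the observed condition into both the encoder and the decoder; that extension is omitted here for brevity.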




Published in

ACM Transactions on Modeling and Computer Simulation, Volume 33, Issue 3 (July 2023), 79 pages
ISSN: 1049-3301
EISSN: 1558-1195
DOI: 10.1145/3597020
Editor: Wentong Cai


Publisher

Association for Computing Machinery, New York, NY, United States

Publication History

• Published: 10 August 2023
• Online AM: 19 April 2023
• Accepted: 3 April 2023
• Revised: 24 October 2022
• Received: 12 January 2022

