
FFEINR: flow feature-enhanced implicit neural representation for spatiotemporal super-resolution

  • Regular Paper
  • Published in: Journal of Visualization

Abstract

Large-scale numerical simulations can generate data at terabyte or even petabyte scale. As a promising approach to data reduction, super-resolution (SR) has been widely studied in the scientific visualization community. However, most existing methods are based on deep convolutional neural networks or generative adversarial networks, whose scale factor must be fixed before the network is constructed. As a result, a single training session supports only one factor and generalizes poorly. To address these problems, this paper proposes a flow feature-enhanced implicit neural representation (FFEINR) for spatiotemporal super-resolution of flow field data. It takes full advantage of implicit neural representations in terms of both model structure and sampling resolution. The representation is a fully connected network with periodic activation functions, which yields lightweight models. The learned continuous representation can decode the low-resolution input flow field to arbitrary spatial and temporal resolutions, allowing flexible upsampling. Training of FFEINR is facilitated by feature enhancements at the input layer, which supplement the contextual information of the flow field. To demonstrate the effectiveness of the proposed method, a series of experiments are conducted on different datasets with different hyperparameter settings. The results show that FFEINR achieves significantly better results than trilinear interpolation.
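The coordinate network described above (a fully connected network with periodic activation functions, in the style of Sitzmann et al.'s SIREN) can be sketched in a few lines of numpy. This is a minimal forward-pass illustration, not the paper's implementation: the layer sizes, the frequency factor `omega0=30`, and the (x, y, t) → (u, v) input/output mapping are assumptions for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

def siren_layer(fan_in, fan_out, omega0=30.0, first=False):
    """One sine-activated layer, initialized with the SIREN scheme."""
    bound = 1.0 / fan_in if first else np.sqrt(6.0 / fan_in) / omega0
    W = rng.uniform(-bound, bound, size=(fan_in, fan_out))
    b = np.zeros(fan_out)
    return W, b, omega0

def siren_forward(layers, coords):
    """Map normalized spatiotemporal coordinates to field values."""
    h = coords
    for i, (W, b, omega0) in enumerate(layers):
        z = h @ W + b
        # Sine activation on hidden layers; the last layer stays linear.
        h = np.sin(omega0 * z) if i < len(layers) - 1 else z
    return h

# Hypothetical tiny network: (x, y, t) coordinate -> (u, v) velocity.
sizes = [3, 64, 64, 2]
layers = [siren_layer(sizes[i], sizes[i + 1], first=(i == 0))
          for i in range(len(sizes) - 1)]

# Query an 8x8 spatial grid at 4 time instants: because the representation is
# continuous, this grid could be made arbitrarily fine without retraining.
axes = [np.linspace(-1, 1, n) for n in (8, 8, 4)]
coords = np.stack(np.meshgrid(*axes, indexing="ij"), axis=-1).reshape(-1, 3)
vel = siren_forward(layers, coords)  # one (u, v) sample per query coordinate
```

The key property this sketch illustrates is that the sampling grid is chosen at query time, which is what allows decoding to arbitrary spatial and temporal resolutions from a single trained model.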


Notes

  1. https://www.top500.org/lists/top500/2023/11/.

  2. https://cgl.ethz.ch/research/visualization/data.php.


Acknowledgements

This work was partially supported by the National Key R&D Program of China under Grant No. 2021YFE0108400 and partially supported by the National Natural Science Foundation of China under Grant No. 62172294.

Author information

Corresponding author: Correspondence to Lu Yang.


A. Supplementary experiments


A.1 Performance on different time steps

As shown in Fig. 7, we compare visualization results on the PipeCylinder dataset at different time steps. Across time steps, FFEINR achieves better results than the baseline method in regions of the flow field around obstacles and in the interior of vortices.

Fig. 7: Qualitative comparison at different time steps. Top to bottom: \(t=1070, 1366, 1430\). FFEINR outperforms Trilinear in predicting data at different time steps. Specifically, FFEINR achieves more accurate results than interpolation methods in regions of the flow field around obstacles and in the interior of vortices across time steps.
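The Trilinear baseline against which FFEINR is compared can be sketched in numpy as separable linear interpolation: linear in time plus bilinear in space for a 2D unsteady field. The `(T, H, W)` array layout and the `(N - 1) * factor + 1` output-grid convention are assumptions for illustration, not necessarily the paper's exact setup.

```python
import numpy as np

def linear_upsample_1d(a, factor, axis):
    """Linearly interpolate along one axis by an integer factor.

    Keeps the original samples as grid points, so an input of length N
    becomes (N - 1) * factor + 1 samples.
    """
    n = a.shape[axis]
    old = np.arange(n)
    new = np.linspace(0, n - 1, (n - 1) * factor + 1)
    a = np.moveaxis(a, axis, -1)
    out = np.apply_along_axis(lambda v: np.interp(new, old, v), -1, a)
    return np.moveaxis(out, -1, axis)

def trilinear_upsample(field, s, t):
    """field: (T, H, W). Upsample time by factor t and both spatial axes by s."""
    out = linear_upsample_1d(field, t, axis=0)   # temporal linear
    out = linear_upsample_1d(out, s, axis=1)     # spatial bilinear, rows
    return linear_upsample_1d(out, s, axis=2)    # spatial bilinear, columns

# Toy low-resolution field, upsampled with the (S x 4, T x 2) factors
# discussed in the appendix.
lr = np.random.default_rng(1).random((4, 8, 8))
hr = trilinear_upsample(lr, s=4, t=2)
```

Because the interpolant passes exactly through the low-resolution samples, it cannot recover fine structures between them, which is where a learned continuous representation such as FFEINR gains its advantage around obstacles and inside vortices.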

A.2 Performance with different fixed scale factors

To illustrate the impact of fixed scale factors on network performance, we conduct supplementary experiments on the Cylinder dataset. As shown in Table 6, the scale factor affects both training time and model performance. The larger the temporal scale factor during training, the longer the training time. With factors of \((S\times 4, T\times 4)\) and \((S\times 4, T\times 8)\), model performance decreases significantly compared to \((S\times 4, T\times 2)\), but the extended-resolution inference results are roughly the same, indicating that the out-of-distribution performance of the model is relatively stable. However, in our experiments, if the spatial scale factor during training is too small \((S\times 2)\) or too large \((S\times 8)\), the model performs poorly on extended-resolution tasks. A likely reason is that a factor of \(S\times 4\) balances the information loss of the low-resolution data against the interpolation capability of the model.

Table 6 Quantitative comparison of extended resolution with different training scale factors. Except for \((S\times 2, T\times 2)\), FFEINR outperforms Trilinear on most indicators. Considering the trade-off between training time and effectiveness, a training scale factor of \((S\times 4, T\times 2)\) achieves the best results
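The indicators behind Table 6 are not enumerated in this excerpt; peak signal-to-noise ratio (PSNR) is a common choice for such super-resolution comparisons, so the following sketch assumes it. Taking the reference field's value range as the peak is one convention among several.

```python
import numpy as np

def psnr(pred, ref):
    """Peak signal-to-noise ratio in dB; peak taken as the reference's value range."""
    mse = np.mean((pred - ref) ** 2)
    peak = ref.max() - ref.min()
    return 10.0 * np.log10(peak ** 2 / mse)

# A uniform offset of 0.1 on unit-range data gives MSE 0.01, i.e. 20 dB.
ref = np.linspace(0.0, 1.0, 101)
pred = ref + 0.1
score = psnr(pred, ref)
```

Higher values indicate a reconstruction closer to the ground-truth field; comparing such a score for FFEINR's output against the trilinear baseline's output on the same held-out frames is the kind of quantitative comparison the table reports.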


Cite this article

Jiao, C., Bi, C. & Yang, L. FFEINR: flow feature-enhanced implicit neural representation for spatiotemporal super-resolution. J Vis 27, 273–289 (2024). https://doi.org/10.1007/s12650-024-00959-1
