Automated Grain Boundary (GB) Segmentation and Microstructural Analysis in 347H Stainless Steel Using Deep Learning and Multimodal Microscopy
Integrating Materials and Manufacturing Innovation (IF 3.3) | Pub Date: 2024-01-08 | DOI: 10.1007/s40192-023-00305-7
Shoieb Ahmed Chowdhury, M. F. N. Taufique, Jing Wang, Marissa Masden, Madison Wenzlick, Ram Devanathan, Alan L. Schemer-Kohrn, Keerti S. Kappagantula

Austenitic 347H stainless steel offers the superior mechanical properties and corrosion resistance required for extreme operating conditions such as high temperature. Changes in microstructure due to composition and process variations are expected to impact material properties. Identifying microstructural features such as grain boundaries thus becomes an important task in the process-microstructure-properties loop. Convolutional neural network (CNN)-based deep learning models are a powerful technique for detecting features in material micrographs in an automated manner. In contrast to microstructural classification, supervised CNN models for segmentation tasks require pixel-wise annotation labels. Manual labeling of images for segmentation, however, poses a major bottleneck to generating training data and labels reliably and reproducibly within a reasonable timeframe. Microstructural characterization especially needs to be expedited to accelerate materials discovery through changes in alloy composition. In this study, we attempt to overcome these limitations by using multimodal microscopy to generate labels directly instead of labeling manually. We use scanning electron microscopy images of 347H stainless steel as training data and electron backscatter diffraction micrographs as pixel-wise labels, treating grain boundary detection as a semantic segmentation task. The viability of the method is evaluated across a set of deep CNN architectures. We demonstrate that, despite instrumentation drift during data collection between the two microscopy modes, this method performs comparably to similar segmentation tasks that used manual labeling. Additionally, we find that naïve pixel-wise segmentation leaves small gaps and missing boundaries in the predicted grain boundary map. By incorporating topological information during model training, the connectivity of the grain boundary network and the segmentation performance are improved. Finally, our approach is validated by accurate computation of the downstream quantities of interest, the underlying grain morphology distributions, which are the ultimate targets of microstructural characterization.
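The abstract does not specify which topological term the authors add to the training objective, so the following is only a minimal, hedged sketch of one common choice: a clDice-style connectivity-aware loss combined with pixel-wise binary cross-entropy for a boundary/background segmentation map. The function names, the choice of clDice, and the weighting parameter alpha are illustrative assumptions, not the authors' implementation.

    import torch
    import torch.nn.functional as F

    def soft_erode(img):
        # Grayscale erosion via min-pooling (negated max-pooling), 3x3 window.
        return -F.max_pool2d(-img, kernel_size=3, stride=1, padding=1)

    def soft_dilate(img):
        # Grayscale dilation via max-pooling, 3x3 window.
        return F.max_pool2d(img, kernel_size=3, stride=1, padding=1)

    def soft_open(img):
        return soft_dilate(soft_erode(img))

    def soft_skeleton(img, num_iter=10):
        # Differentiable approximation of morphological skeletonization.
        skel = F.relu(img - soft_open(img))
        for _ in range(num_iter):
            img = soft_erode(img)
            delta = F.relu(img - soft_open(img))
            skel = skel + F.relu(delta - skel * delta)
        return skel

    def cldice_loss(pred, target, num_iter=10, eps=1e-6):
        # Connectivity-aware term: penalizes breaks in the boundary network
        # by comparing soft skeletons of the prediction and the label.
        skel_pred = soft_skeleton(pred, num_iter)
        skel_true = soft_skeleton(target, num_iter)
        tprec = (skel_pred * target).sum() / (skel_pred.sum() + eps)
        tsens = (skel_true * pred).sum() / (skel_true.sum() + eps)
        return 1.0 - 2.0 * tprec * tsens / (tprec + tsens + eps)

    def boundary_loss(logits, target, alpha=0.5):
        # Pixel-wise BCE plus the topological term; alpha is a hypothetical weight.
        # logits, target: tensors of shape (N, 1, H, W), target in {0, 1}.
        pred = torch.sigmoid(logits)
        return (1.0 - alpha) * F.binary_cross_entropy(pred, target) + alpha * cldice_loss(pred, target)

The downstream grain morphology statistics mentioned in the abstract (for example, grain size distributions) can in principle be recovered from a predicted binary boundary map by labeling the enclosed grain interiors. The sketch below uses scikit-image and a hypothetical pixel size; it is not the authors' analysis code.

    import numpy as np
    from skimage import measure

    def grain_size_distribution(boundary_map, pixel_size_um=1.0):
        # boundary_map: 2-D array, 1 = grain boundary pixel, 0 = grain interior.
        interiors = boundary_map == 0
        labels = measure.label(interiors, connectivity=1)  # 4-connectivity keeps thin boundaries separating grains
        props = measure.regionprops(labels)
        return np.array([p.equivalent_diameter for p in props]) * pixel_size_um

    # Toy example: an 8 x 8 map split into two grains by a vertical boundary.
    toy = np.zeros((8, 8), dtype=np.uint8)
    toy[:, 4] = 1
    print(grain_size_distribution(toy, pixel_size_um=0.5))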




Updated: 2024-01-10