Fabric defect image generation method based on the dual-stage W-net generative adversarial network
Textile Research Journal (IF 2.3), Pub Date: 2024-02-29, DOI: 10.1177/00405175241233942
Xuejuan Hu, Yifei Liang, Hengliang Wang, Yadan Tan, Shiqian Liu, Fudong Pan, Qingyang Wu, Zhengdi He
Due to the intricate and diverse nature of textile defects, detecting them is an exceptionally challenging task. Compared with conventional defect detection methods, deep learning-based methods generally achieve superior precision. However, deep learning-based defect detection requires a substantial volume of training data, which is particularly difficult to accumulate for textile flaws. To augment the fabric defect dataset and improve fabric defect detection accuracy, we propose a fabric defect image generation method based on the Pix2Pix generative adversarial network. The approach introduces a novel dual-stage W-net generative adversarial network. By increasing the network depth, the model can effectively extract intricate textile image features, thereby expanding its information-sharing capacity. The dual-stage W-net generative adversarial network can generate desired defects on defect-free textile images. Quality assessment of the generated fabric defect images yields peak signal-to-noise ratio and structural similarity values exceeding 30 dB and 0.930, respectively, and a learned perceptual image patch similarity value no greater than 0.085, demonstrating the effectiveness of the fabric defect data augmentation. The effectiveness of the dual-stage W-net generative adversarial network is further established through multiple comparative experiments evaluating the generated images. By comparing detection performance before and after data augmentation, we find that mean average precision improves by 6.13% and 14.57% on the YOLO V5 and Faster R-CNN (faster region-based convolutional neural network) detection models, respectively.
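The reported image-quality thresholds (PSNR above 30 dB, SSIM above 0.930) follow standard definitions. The sketch below shows how such scores can be computed in NumPy; it is illustrative, not the authors' evaluation code, and the global SSIM here omits the sliding Gaussian window of the full SSIM formulation.

```python
import numpy as np

def psnr(ref, test, max_val=255.0):
    """Peak signal-to-noise ratio in dB between two same-shaped images."""
    mse = np.mean((ref.astype(np.float64) - test.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)

def ssim_global(ref, test, max_val=255.0):
    """Simplified single-window SSIM (means, variances, and covariance
    computed over the whole image instead of local windows)."""
    x = ref.astype(np.float64)
    y = test.astype(np.float64)
    c1 = (0.01 * max_val) ** 2  # stabilizing constants from the SSIM paper
    c2 = (0.03 * max_val) ** 2
    mu_x, mu_y = x.mean(), y.mean()
    var_x, var_y = x.var(), y.var()
    cov_xy = ((x - mu_x) * (y - mu_y)).mean()
    return ((2 * mu_x * mu_y + c1) * (2 * cov_xy + c2)) / (
        (mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2))

# Example: score a lightly perturbed image against its reference.
rng = np.random.default_rng(0)
clean = rng.integers(0, 256, size=(64, 64)).astype(np.float64)
noisy = np.clip(clean + rng.normal(0.0, 2.0, size=clean.shape), 0, 255)
print(f"PSNR = {psnr(clean, noisy):.2f} dB, SSIM = {ssim_global(clean, noisy):.4f}")
```

In practice, a generated defect image would be compared against its ground-truth defect image in the same way; LPIPS additionally requires a pretrained perceptual network (e.g. via the `lpips` Python package) and is not reproduced here.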
