A Multimodal Transfer Learning Method for Classifying Images of Celestial Point Sources
Publications of the Astronomical Society of the Pacific (IF 3.5), Pub Date: 2023-10-26, DOI: 10.1088/1538-3873/acfbb9
Bingjun Wang, Shuxin Hong, Zhiyang Yuan, A-Li Luo, Xiao Kong, Zhiqiang Zou

A large fraction of celestial objects, such as stars and QSOs, appear as point sources in CCD images; because they span only a few pixels, they carry little information. Point-source classification based solely on image data can therefore suffer from low accuracy. To address this challenge, this paper proposes a multimodal transfer-learning-based classification method for celestial objects with point-source images. Since spectral data possess rich features and are correlated with image data, the proposed approach exploits the knowledge gained from celestial spectra and transfers it to the original image-based classification, improving the accuracy of classifying stars and QSOs. First, a one-dimensional residual network extracts a 128-dimensional spectral feature vector from the original 3700-dimensional spectrum; this feature vector captures the important characteristics of the celestial object. A Generative Adversarial Network is then used to generate a simulated 128-dimensional spectral vector corresponding to each celestial object image. With these simulated spectral vectors, data from two modalities (spectral and image) are available for the same object, enriching the model's input features. In the subsequent multimodal classification model, only the images of celestial objects and their corresponding simulated spectral vectors are required; real spectral data are no longer needed. With the assistance of spectral knowledge, the proposed method alleviates the shortcomings of purely image-based classification. Notably, the method improves the F1-score from 0.93 to 0.9777 while reducing the classification error rate by 40%. These gains significantly increase the classification accuracy of stars and QSOs, providing strong support for the classification of celestial point sources.
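The three-stage pipeline described in the abstract (spectral encoder → GAN generator producing a simulated spectral vector → multimodal classifier that needs only the image at inference) can be sketched as follows. This is a minimal shape-level illustration, not the paper's implementation: the dense layers below are hypothetical stand-ins for the trained 1D residual network, the GAN generator, and the classifier, and the 64×64 image size is an assumption.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

class Linear:
    """Toy dense layer standing in for the paper's trained networks."""
    def __init__(self, n_in, n_out):
        self.w = rng.standard_normal((n_in, n_out)) * 0.01
        self.b = np.zeros(n_out)
    def __call__(self, x):
        return x @ self.w + self.b

# Stage 1 (training only): spectral encoder, standing in for the paper's
# 1D residual network that compresses a 3700-dim spectrum to 128 dims.
spec_encoder = Linear(3700, 128)

# Stage 2: GAN generator mapping image features to a *simulated*
# 128-dim spectral vector (image feature extractor is hypothetical).
img_encoder = Linear(64 * 64, 256)
generator = Linear(256, 128)

# Stage 3: multimodal classifier over image features + simulated spectrum.
classifier = Linear(256 + 128, 2)  # two classes: star vs. QSO

def classify(image):
    """Inference path: only the image is needed, no real spectrum."""
    img_feat = relu(img_encoder(image.reshape(-1)))
    sim_spec = generator(img_feat)  # simulated 128-dim spectral vector
    logits = classifier(np.concatenate([img_feat, sim_spec]))
    return int(np.argmax(logits))

# During training, the real spectral feature supervises the generator.
spectrum = rng.standard_normal(3700)
spec_feat = spec_encoder(spectrum)  # 128-dim spectral feature

image = rng.standard_normal((64, 64))
label = classify(image)
```

The key design point the abstract highlights is visible in `classify`: once the generator is trained against real spectral features, inference consumes images alone, so the classifier gains spectral-like information without requiring a spectrum for every object.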
