Deep learning based joint fusion approach to exploit anatomical and functional brain information in autism spectrum disorders
Brain Informatics Pub Date: 2024-01-09, DOI: 10.1186/s40708-023-00217-4
Sara Saponaro , Francesca Lizzi , Giacomo Serra , Francesca Mainas , Piernicola Oliva , Alessia Giuliano , Sara Calderoni , Alessandra Retico

The integration of the information encoded in multiparametric MRI images can enhance the performance of machine-learning classifiers. In this study, we investigate whether combining structural and functional MRI improves the performance of a deep learning (DL) model trained to discriminate subjects with Autism Spectrum Disorders (ASD) from typically developing controls (TD). We analyzed both structural and functional MRI brain scans publicly available within the ABIDE I and II data collections. We considered 1383 male subjects aged between 5 and 40 years, including 680 subjects with ASD and 703 TD from 35 different acquisition sites. We extracted morphometric and functional brain features from the MRI scans with the FreeSurfer and CPAC analysis packages, respectively. Then, given the multisite nature of the dataset, we implemented a data harmonization protocol. The ASD vs. TD classification was carried out with a multiple-input DL model consisting of a neural network that generates a fixed-length feature representation of the data of each modality (FR-NN) and a Dense Neural Network for classification (C-NN). Specifically, we implemented a joint fusion approach to multiple-source data integration. The main advantage of this approach is that the loss is propagated back to the FR-NNs during training, thus creating informative feature representations for each data modality. The C-NN, whose number of layers and neurons per layer is optimized during model training, then performs the ASD-TD discrimination. The performance was evaluated by computing the Area under the Receiver Operating Characteristic curve (AUC) within a nested 10-fold cross-validation. The brain features that drive the DL classification were identified with the SHAP explainability framework. AUC values of 0.66±0.05 and 0.76±0.04 were obtained in the ASD vs. TD discrimination when only structural or only functional features were considered, respectively. The joint fusion approach led to an AUC of 0.78±0.04. The set of structural and functional connectivity features identified as the most important for the two-class discrimination supports the idea that brain changes in individuals with ASD tend to occur in regions belonging to the Default Mode Network and the Social Brain. Our results demonstrate that the multimodal joint fusion approach outperforms classification based on data acquired with a single MRI modality, as it efficiently exploits the complementarity of structural and functional brain information.
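
To make the joint fusion idea concrete, the following is a minimal PyTorch sketch of a multiple-input model of this kind: one feature-representation subnetwork (FR-NN) per modality feeding a dense classifier (C-NN), trained end to end so that the classification loss back-propagates into both FR-NNs. Layer sizes, the representation dimension, and the optimizer settings are illustrative assumptions (the paper optimizes the number of layers and neurons during training), and random tensors stand in for the harmonized structural and functional feature vectors.

```python
import torch
import torch.nn as nn

class FeatureRepresentationNN(nn.Module):
    """FR-NN: maps one modality's features to a fixed-length representation."""
    def __init__(self, in_dim: int, rep_dim: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, 128), nn.ReLU(),
            nn.Linear(128, rep_dim), nn.ReLU(),
        )

    def forward(self, x):
        return self.net(x)

class JointFusionClassifier(nn.Module):
    """Joint fusion: concatenate per-modality representations, then apply a
    dense classifier (C-NN). Training the whole model with a single loss lets
    gradients reach both FR-NNs, shaping modality-specific representations."""
    def __init__(self, struct_dim: int, func_dim: int, rep_dim: int = 64):
        super().__init__()
        self.fr_struct = FeatureRepresentationNN(struct_dim, rep_dim)
        self.fr_func = FeatureRepresentationNN(func_dim, rep_dim)
        self.cnn = nn.Sequential(
            nn.Linear(2 * rep_dim, 32), nn.ReLU(),
            nn.Linear(32, 1),  # logit for ASD vs. TD
        )

    def forward(self, x_struct, x_func):
        z = torch.cat([self.fr_struct(x_struct), self.fr_func(x_func)], dim=1)
        return self.cnn(z)

# Toy training step with random data in place of harmonized MRI features
# (feature dimensions here are hypothetical).
model = JointFusionClassifier(struct_dim=200, func_dim=1000)
loss_fn = nn.BCEWithLogitsLoss()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

x_s, x_f = torch.randn(8, 200), torch.randn(8, 1000)
y = torch.randint(0, 2, (8, 1)).float()

opt.zero_grad()
loss = loss_fn(model(x_s, x_f), y)  # one loss; gradients reach both FR-NNs
loss.backward()
opt.step()
```

In this setup the fusion happens at the representation level rather than on raw features or on separately trained model outputs, which is what distinguishes joint fusion from early or late fusion.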

Updated: 2024-01-09