Title

BrainSegFounder: Towards 3D foundation models for neuroimage segmentation

01 Introduction

The fusion of artificial intelligence (AI) with neuroimage analysis, particularly multimodal magnetic resonance imaging (MRI), is driving important advances in brain health (Chen et al., 2022; Segato et al., 2020; Rao, 2023; Owolabi et al., 2023; Moreno-Blanco et al., 2019; Rajpurkar et al., 2022; Khachaturian et al., 2023). The complexity of the human brain, with its intricate anatomical structure and sophisticated functions, poses significant challenges for neuroimage analysis (Moor et al., 2023; Azad et al., 2023; Zhang and Metaxas, 2024; Segato et al., 2020; Rajpurkar et al., 2022). AI's capacity to interpret complex neural data promises to improve diagnostic precision and deepen our understanding of brain pathology. Many studies are devoted to developing AI models for specific brain-health analyses, and together they continue to enrich the field of neuroimaging research.

Traditionally, AI models for neuroimaging have required extensive fine-tuning through supervised learning to solve specific downstream tasks. Architectures such as nnU-Net (Isensee et al., 2021), DeepScan (McKinley et al., 2019), and DeepMedic (Kamnitsas et al., 2017) have excelled in many medical computer-vision challenges, for example the Brain Tumor Segmentation challenge (BraTS) (Baid et al., 2021), the Medical Segmentation Decathlon (MSD) (Antonelli et al., 2022), and the Automatic Tumor and Liver Segmentation challenge (ATLAS) (Quinton et al., 2023). Many of these advances stem from self-supervised pretraining on large unlabeled datasets, transferring the weights of model encoders and decoders to the smaller challenge datasets (Zhou et al., 2021; Tang et al., 2022). Alongside these pretraining improvements, there has been a recent push to develop large-scale medical datasets (Mei et al., 2022; Clark et al., 2013; Bycroft et al., 2018) to assist in the creation of such models. However, medical image analysis has yet to benefit from recent advances in natural-image analysis and language processing, such as the Segment Anything Model (SAM) (Kirillov et al., 2023) and LLaMA (Touvron et al., 2023).

In medical language processing, models such as MI-Zero (Lu et al., 2023) and BioViL-T (Bannur et al., 2023) have leveraged contrastive learning to make notable progress in representation analysis and zero-shot transfer learning for medical image recognition. By exploiting distinct learning objectives, similar image-text pairs are pulled closer together in the latent space while dissimilar pairs are pushed apart. Such models have pushed the boundaries of histopathology research by combining text-based analysis with computer vision. However, these models depend on text prompts accompanying the training images (Tiu et al., 2022).

Abstract

The burgeoning field of brain health research increasingly leverages artificial intelligence (AI) to analyze and interpret neuroimaging data. Medical foundation models have shown promise of superior performance with better sample efficiency. This work introduces a novel approach towards creating 3-dimensional (3D) medical foundation models for multimodal neuroimage segmentation through self-supervised training. Our approach involves a novel two-stage pretraining approach using vision transformers. The first stage encodes anatomical structures in generally healthy brains from the large-scale unlabeled neuroimage dataset of multimodal brain magnetic resonance imaging (MRI) images from 41,400 participants.
This stage of pretraining focuses on identifying key features such as shapes and sizes of different brain structures. The second pretraining stage identifies disease-specific attributes, such as geometric shapes of tumors and lesions and spatial placements within the brain. This dual-phase methodology significantly reduces the extensive data requirements usually necessary for AI model training in neuroimage segmentation, with the flexibility to adapt to various imaging modalities. We rigorously evaluate our model, BrainSegFounder, using the Brain Tumor Segmentation (BraTS) challenge and Anatomical Tracings of Lesions After Stroke v2.0 (ATLAS v2.0) datasets. BrainSegFounder demonstrates a significant performance gain, surpassing the achievements of the previous winning solutions using fully supervised learning. Our findings underscore the impact of scaling up both the model complexity and the volume of unlabeled training data derived from generally healthy brains. Both of these factors enhance the accuracy and predictive capabilities of the model in neuroimage segmentation tasks.

Background

Method

2.1. Model architecture and pipeline

The BrainSegFounder framework introduces a deep learning training scheme tailored for diverse applications by showcasing a distinct approach to self-supervised pretraining followed by precise fine-tuning. This section offers a detailed examination of the framework's architecture and its procedural pipeline.
It highlights the multi-stage self-supervised pretraining, termed Stage 1 and Stage 2, before proceeding to fine-tuning for downstream tasks. Fig. 1 illustrates BrainSegFounder's architecture. Central to BrainSegFounder is a vision-transformer-based encoder that employs a series of self-attention mechanisms. This encoder is linked with an up-sampling decoder tailored for segmentation tasks. The architecture is adapted from the SwinUNETR architecture (Hatamizadeh et al., 2022) with modified input channels and input hyperparameters. BrainSegFounder pioneers a novel dual-phase self-supervised pretraining method, integrating self-supervised learning components within its structure. Stage 1 pretraining exposes the framework to a wide-ranging dataset of brain MRIs from the UK Biobank dataset, predominantly consisting of healthy individuals. This initial stage equips the model with a thorough comprehension of standard brain anatomy, utilizing self-supervised learning to enhance prediction capabilities. Stage 2 of pretraining advances the model's proficiency by introducing it to a specialized MRI dataset geared towards the downstream task. This phase leverages the architecture's refined anomaly detection skills, focusing on distinguishing deviations in brain structure.

Results

3.1. Pretraining

The pretraining of our BrainSegFounder models, which varied in size based on the number of parameters, took between 3 and 6 days.
This process utilized a computational setup ranging from 8 to 64 NVIDIA A100 GPUs, each with 80 GB capacity. Fig. 4 illustrates the validation loss during the pretraining phase across different BrainSegFounder model sizes.

Figure

Fig. 1. Overall study design. (a) The two-stage pretraining process using Swin Transformer decoders and encoder. Initially, the model is pretrained on the UKB dataset (Stage 1), followed by the downstream task dataset (Stage 2). (b) This is succeeded by fine-tuning on each downstream dataset, with transfer learning applied between each stage.

Fig. 2. Visual representation of demographic data from subjects in the UK Biobank in the study.

Fig. 3. CONSORT diagram of UKB data used in Stage 1 pretraining.

Fig. 4. Training (left) and validation (right) loss of Stage 1 pretraining for three different scales of BrainSegFounder models on UKB.

Fig. 5. Dice coefficients for baseline (SwinUNETR) and our model across different levels of training data availability. All models were trained 5 times to account for variability in the randomly selected input data. Error bars represent ± one standard deviation.

Table

Table 1. A summary of the data used in this study.

Table 2. Pretraining encoder settings.

Table 3. Hardware and training parameters.

Table 4. A comparison of BrainSegFounder (BSF) models' performance in terms of average Dice coefficient on the BraTS challenge. BSF-S indicates our best-performing BrainSegFounder model (small, 64M parameters).
BrainSegFounder models were pretrained with SSL on T1- and T2-weighted MRI 3D volumes and finetuned with supervised learning using all four modalities present in BraTS. BSF-1-S indicates this model with only the Stage 1 (SSL) pretraining on UKB and without the Stage 2 pretraining step. SwinU models are models using the SwinUNETR architecture trained on BraTS via supervised learning. SwinU-MRI is the model trained directly using supervised learning on BraTS published on GitHub (https://github.com/Project-MONAI/research-contributions/tree/main/SwinUNETR/BRATS21), SwinU-Res is pretrained with SSL on only T1w and T2w and finetuned on BraTS, and SwinU-CT is pretrained using CT data and finetuned with supervised learning on BraTS. nnU-Net and SegResNet are former BraTS challenge winners trained using supervised learning on our folds. TransBTS is a vision-transformer-based segmentation algorithm optimized for brain segmentation. Model-Zoo is a bundle of models published by MONAI that can perform BraTS segmentation out of the box using their ''Brats mri segmentation'' [sic] model found at https://monai.io/model-zoo.html.

Table 5. Performance comparison of modality-restricted models on the BraTS dataset. SwinUNETR is fully supervised learning on T1-weighted MRI without pretraining, while BrainSegFounder uses our multi-stage pretraining on UKB and BraTS T1-weighted MRI and is then finetuned on BraTS T1-weighted MRI.
Table 6. Performance comparison of segmentation models on the ATLAS v2.0 dataset. All metrics from the challenge (Dice coefficient, Lesion-wise F1 Score, Simple Lesion Count, and Volume Difference) are included for each model.

Table A.7. UKB data demographic information.

Table A.8. Comparison of BrainSegFounder models through 5-fold cross-validation with the Dice coefficient metric on BraTS. SwinUNETR is the winning solution of the BraTS 2021 challenge, performed with fully supervised learning without UKB pretraining. BrainSegFounder is the proposed method, conducted with the two-stage pretraining followed by finetuning on the target dataset. "One-stage" means that pretraining is performed on UKB but not on BraTS.

Table A.9. Comparison of BrainSegFounder (BSF) and SwinUNETR (SwinU) baseline models trained on 5 repeats of varying percentages of the input data. Data was randomly sampled from the BraTS training dataset, and models were evaluated on the testing dataset.
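Since every comparison above (Tables 4-6, A.8, A.9, and Fig. 5) is reported as a Dice coefficient, a minimal sketch of how Dice is typically computed between a predicted and a ground-truth 3D segmentation mask may help readers interpret the numbers. This is a generic NumPy illustration under common conventions, not the authors' evaluation code; the function name and toy volumes are hypothetical.

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
    """Dice coefficient between two binary masks: 2*|A ∩ B| / (|A| + |B|).

    `eps` guards against division by zero when both masks are empty.
    """
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

# Toy 4x4x4 volumes standing in for a predicted and a true lesion mask.
pred = np.zeros((4, 4, 4), dtype=np.uint8)
target = np.zeros((4, 4, 4), dtype=np.uint8)
pred[1:3, 1:3, 1:3] = 1    # 8 predicted voxels
target[1:3, 1:3, 2:4] = 1  # 8 true voxels, 4 of them overlapping

print(round(dice_coefficient(pred, target), 3))  # 2*4 / (8+8) = 0.5
```

In multi-class settings such as BraTS (enhancing tumor, tumor core, whole tumor), this computation is run once per class and the per-class scores are averaged, which is consistent with the "average Dice coefficient" reported in Table 4.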