
MInDI-3D: 3D Iterative Deep Learning for Sparse-View Cone Beam Computed Tomography

Authors: Daniel Barco (1), Mark Stadelmann (1), Martin Oswald (1), Ivo Herzig (2), Lukas Lichtensteiger (2), Pascal Paysan (3), Igor Peterlik (3), Michal Valczak (3), Björn Menze (4), Frank-Peter Schilling (1) ((1) Center for Artificial Intelligence (CAI), Zurich University of Applied Sciences (ZHAW), Winterthur, Switzerland, (2) Institute of Applied Mathematics and Physics (IAMP), Zurich University of Applied Sciences (ZHAW), Winterthur, Switzerland, (3) Varian Medical Systems Imaging Laboratory, Baden, Switzerland, (4) Biomedical Image Analysis and Machine Learning, University of Zurich, Zurich, Switzerland)

View a PDF of the paper titled MInDI-3D: 3D Iterative Deep Learning for Sparse-View Cone Beam Computed Tomography, by Daniel Barco (1) and 9 other authors


Abstract: We present MInDI-3D (Medical Inversion by Direct Iteration in 3D), the first 3D diffusion-based model for removing real-world cone beam computed tomography (CBCT) artifacts, with the aim of reducing imaging radiation exposure. The main contribution is extending the 2D "InDI" concept to a full 3D volumetric approach for medical images, implementing an iterative denoising process that refines the CBCT volume directly from sparse-view input. A further contribution is the creation of a large pseudo-CBCT dataset (16,182 volumes) from chest CT scans in the public CT-RATE dataset to robustly train MInDI-3D. We conducted a comprehensive evaluation, including quantitative metrics, scalability analysis, generalizability tests, and a clinical assessment by 11 clinicians. Our results demonstrate the effectiveness of MInDI-3D: it achieves a PSNR gain of 12.96 (6.10) dB over uncorrected scans with only 50 projections on the pseudo-CT-RATE (real-world, independent) test set, enabling an 8-fold reduction in imaging radiation exposure. We demonstrate scalability by showing that performance improves with more training data. Importantly, MInDI-3D matches the performance of a 3D U-Net on real scans of 16 cancer patients across both distortion and task-based metrics, and it generalizes to a new CBCT scanner geometry. Clinicians rated our model as adequate for patient positioning at all anatomical sites and found that it preserved lung tumor boundaries well.
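The iterative refinement described above follows the InDI ("Inversion by Direct Iteration") scheme: starting from the degraded input at t = 1, the estimate is repeatedly blended with a network prediction as t shrinks toward 0. The sketch below illustrates that sampling rule only; `f_theta` is a hypothetical stand-in for the trained 3D network, and the exact MInDI-3D implementation details are not given in this abstract.

```python
import numpy as np

def indi_restore(y, f_theta, num_steps=10):
    """InDI-style iterative refinement (sketch): starting from the
    degraded (sparse-view) volume y at t = 1, blend the current
    estimate with the network prediction as t shrinks toward 0."""
    x = y.copy()                  # current estimate, starts at t = 1
    delta = 1.0 / num_steps       # step size in t
    for i in range(num_steps):
        t = 1.0 - i * delta       # current degradation level
        # InDI update: move a delta/t fraction toward the prediction
        x = (1.0 - delta / t) * x + (delta / t) * f_theta(x, t)
    return x

# Toy check: with an idealized predictor that always returns the clean
# volume, the iteration recovers it exactly on the final step.
clean = np.ones((4, 4, 4))
noisy = clean + 0.5
restored = indi_restore(noisy, lambda x, t: clean, num_steps=8)
```

In the last step t equals delta, so the blend weight on the prediction reaches 1 and the output coincides with the predictor's target; a real network would only approximate the clean volume at each step.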

Submission history

From: Daniel Barco [view email]
[v1] Wed, 13 Aug 2025 08:49:18 UTC (2,383 KB)
[v2] Thu, 9 Oct 2025 07:53:47 UTC (2,382 KB)


