Tanja Eichner, Eric Mörth, Kari Strøno Wagner-Larsen, Njål Lura, Ingfrid S. Haldorsen, Eduard Gröller, Stefan Bruckner, Noeska N. Smit
MuSIC: Multi-sequential interactive co-registration for cancer imaging data based on segmentation masks
In VCBM 2022: Eurographics Workshop on Visual Computing for Biology and Medicine, pages 81–91, 2022.
[paper]

Information

  • Publication Type: Conference Paper
  • Workgroup(s)/Project(s): not specified
  • Date: 2022
  • ISBN: 978-3-03868-177-9
  • Publisher: The Eurographics Association
  • Open Access: yes
  • Location: Wien
  • Lecturer: Eduard Gröller
  • Event: Eurographics Workshop on Visual Computing for Biology and Medicine (VCBM 2022)
  • DOI: 10.2312/vcbm.20221190
  • Booktitle: VCBM 2022: Eurographics Workshop on Visual Computing for Biology and Medicine
  • Number of pages: 11
  • Conference date: 22 September 2022 – 23 September 2022
  • Pages: 81 – 91
  • Keywords: Visual Computing, Multi-Modal Segmentation, Registration

Abstract

In gynecologic cancer imaging, multiple magnetic resonance imaging (MRI) sequences are acquired per patient to reveal different tissue characteristics. However, after image acquisition, the anatomical structures can be misaligned in the various sequences due to changing patient location in the scanner and organ movements. The co-registration process aims to align the sequences to allow for multi-sequential tumor imaging analysis. However, automatic co-registration often leads to unsatisfying results. To address this problem, we propose the web-based application MuSIC (Multi-Sequential Interactive Co-registration). The approach allows medical experts to co-register multiple sequences simultaneously based on a pre-defined segmentation mask generated for one of the sequences. Our contributions lie in our proposed workflow. First, a shape matching algorithm based on dual annealing searches for the tumor position in each sequence. The user can then interactively adapt the proposed segmentation positions if needed. During this procedure, we include a multi-modal magic lens visualization for visual quality assessment. Then, we register the volumes based on the segmentation mask positions. We allow for both rigid and deformable registration. Finally, we conducted a usability analysis with seven medical and machine learning experts to verify the utility of our approach. Our participants highly appreciate the multi-sequential setup and see themselves using MuSIC in the future.
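The paper itself does not include reference code, but the shape-matching step it describes can be illustrated with a small, self-contained sketch. The example below uses SciPy's `dual_annealing` to recover the translation of a binary "tumor" mask between two toy volumes by maximizing mask/target overlap; the volumes, the planted offset, and the overlap objective are invented for the demo and are only an assumption about how such a search could look, not the authors' implementation.

```python
import numpy as np
from scipy.ndimage import shift as nd_shift
from scipy.optimize import dual_annealing

# Toy stand-in for one MRI sequence: a binary "tumor" mask, and a target
# sequence in which the same structure appears at an unknown translation.
vol = np.zeros((24, 24, 24))
vol[6:13, 8:14, 5:10] = 1.0
mask = vol
true_offset = np.array([2.0, -3.0, 3.0])  # planted displacement for the demo
target = nd_shift(vol, true_offset, order=0)

def neg_overlap(t):
    # Translate the mask by candidate offset t (trilinear interpolation)
    # and return the negative overlap with the target; lower is better.
    return -float(np.sum(nd_shift(mask, t, order=1) * target))

# Dual annealing samples the translation space globally and refines locally.
bounds = [(-6.0, 6.0)] * 3
res = dual_annealing(neg_overlap, bounds, maxiter=100, seed=0)
print(np.round(res.x, 1))  # should land close to the planted offset
```

In a real multi-sequential setting the search space would also include rotation (and the objective would compare against image intensities rather than a synthetic copy), but the structure of the optimization stays the same: a scalar alignment score minimized over a bounded transform space.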

BibTeX

@inproceedings{eichner-2022-music,
  title =      "{MuSIC}: Multi-sequential interactive co-registration for
               cancer imaging data based on segmentation masks",
  author =     "Tanja Eichner and Eric M\"{o}rth and Kari Str{\o}no
               Wagner-Larsen and Nj{\aa}l Lura and Ingfrid S. Haldorsen and
               Eduard Gr\"{o}ller and Stefan Bruckner and Noeska N. Smit",
  year =       "2022",
  abstract =   "In gynecologic cancer imaging, multiple magnetic resonance
               imaging (MRI) sequences are acquired per patient to reveal
               different tissue characteristics. However, after image
               acquisition, the anatomical structures can be misaligned in
               the various sequences due to changing patient location in
               the scanner and organ movements. The co-registration process
               aims to align the sequences to allow for multi-sequential
               tumor imaging analysis. However, automatic co-registration
               often leads to unsatisfying results. To address this
               problem, we propose the web-based application MuSIC
               (Multi-Sequential Interactive Co-registration). The approach
               allows medical experts to co-register multiple sequences
               simultaneously based on a pre-defined segmentation mask
               generated for one of the sequences. Our contributions lie in
               our proposed workflow. First, a shape matching algorithm
               based on dual annealing searches for the tumor position in
               each sequence. The user can then interactively adapt the
               proposed segmentation positions if needed. During this
               procedure, we include a multi-modal magic lens visualization
               for visual quality assessment. Then, we register the volumes
               based on the segmentation mask positions. We allow for both
               rigid and deformable registration. Finally, we conducted a
               usability analysis with seven medical and machine learning
               experts to verify the utility of our approach. Our
               participants highly appreciate the multi-sequential setup
               and see themselves using MuSIC in the future.",
  isbn =       "978-3-03868-177-9",
  publisher =  "The Eurographics Association",
  location =   "Wien",
  event =      "Eurographics Workshop on Visual Computing for Biology and
               Medicine (VCBM 2022)",
  doi =        "10.2312/vcbm.20221190",
  booktitle =  "VCBM 2022: Eurographics Workshop on Visual Computing for
               Biology and Medicine",
  pages =      "81--91",
  keywords =   "Visual Computing, Multi-Modal Segmentation, Registration",
  URL =        "https://www.cg.tuwien.ac.at/research/publications/2022/eichner-2022-music/",
}