
Deep learning for Dixon MRI-based attenuation correction in PET/MRI of head and neck cancer patients

Abstract

Background

Quantitative whole-body PET/MRI relies on accurate patient-specific MRI-based attenuation correction (AC) of PET, which is a non-trivial challenge, especially for the anatomically complex head and neck region. We used a deep learning model developed for dose planning in radiation oncology to derive MRI-based attenuation maps of head and neck cancer patients and evaluated its performance on PET AC.

Methods

Eleven head and neck cancer patients, referred for radiotherapy, underwent CT followed by PET/MRI with acquisition of Dixon MRI. Both scans were performed in radiotherapy position. PET AC was performed with three different patient-specific attenuation maps derived from: (1) Dixon MRI using a deep learning network (PETDeep). (2) Dixon MRI using the vendor-provided atlas-based method (PETAtlas). (3) CT, serving as reference (PETCT). We analyzed the effect of the MRI-based AC methods on PET quantification by assessing the average voxelwise error within the entire body, and the error as a function of distance to bone/air. The error in mean uptake within anatomical regions of interest and the tumor was also assessed.

Results

The average (± standard deviation) PET voxel error was 0.0 ± 11.4% for PETDeep and −1.3 ± 21.8% for PETAtlas. The error in mean PET uptake in bone/air was much lower for PETDeep (−4%/12%) than for PETAtlas (−15%/84%) and PETDeep also demonstrated a more rapidly decreasing error with distance to bone/air affecting only the immediate surroundings (less than 1 cm). The regions with the largest error in mean uptake were those containing bone (mandible) and air (larynx) for both methods, and the error in tumor mean uptake was −0.6 ± 2.0% for PETDeep and −3.5 ± 4.6% for PETAtlas.

Conclusion

The deep learning network for deriving MRI-based attenuation maps of head and neck cancer patients demonstrated accurate AC and exceeded the performance of the vendor-provided atlas-based method overall, on a lesion level, and in the vicinity of challenging regions such as bone and air.

Background

Simultaneous PET/MRI offers great opportunities for cancer research and new possibilities for radiotherapy planning, as it provides high anatomical sensitivity from MRI and functional information from both MRI and PET [1, 2]. However, accurate attenuation correction (AC) of PET remains a major challenge for PET/MRI; it is critical not only for quantitative analysis but also qualitatively, as it affects the visual representation of the PET tracer distribution. Traditionally, PET AC is handled by an attenuation map derived from either a transmission scan using an external rotating rod source or a CT scan (for PET/CT scanners) converted from Hounsfield units (HU) into linear attenuation coefficients (LAC) at 511 keV. These methods are not available for PET/MRI, which instead infers an attenuation map (synthetic CT) from the patient MRI to perform so-called MRI-based AC (MR-AC). MR-AC is inherently challenging because the MRI signal is not linked to photon attenuation, and it is further complicated by the fact that while air and bone both lack signal in traditional MRI, they have completely different attenuation properties.

Initial commercially available MR-AC methods have traditionally relied on the quickly acquired Dixon MRI sequence, which provides images for segmentation into air, lung, fat, and soft tissue, each with a predefined LAC value [3]. However, attenuation coefficients of bone are not obtained; bone is instead replaced by soft tissue, which has been shown to underestimate PET values by more than 20% in the regions closest to bone [4, 5].
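
For illustration, the core of this segmentation-based approach is a simple class-to-LAC lookup, sketched below (the class labels are assumed to come from an upstream Dixon segmentation, and the LAC values approximate those commonly used with this method [3]):

```python
import numpy as np

# Illustrative 511 keV linear attenuation coefficients (cm^-1) for the
# four tissue classes; values approximate those reported for [3].
CLASS_LAC = {
    0: 0.0,     # air
    1: 0.0224,  # lung
    2: 0.0854,  # fat
    3: 0.1000,  # soft tissue
}

def labels_to_attenuation_map(labels: np.ndarray) -> np.ndarray:
    """Map a voxelwise tissue-class volume to LACs at 511 keV."""
    mu = np.zeros(labels.shape, dtype=np.float32)
    for cls, lac in CLASS_LAC.items():
        mu[labels == cls] = lac
    return mu
```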

Consequently, efforts to improve PET quantification have led to the development of many bone-including MR-AC methods, primarily for brain imaging [6,7,8,9,10,11]. In particular, segmentation-based methods using specialized MRI sequences (ultrashort echo time (UTE) or zero echo time (ZTE)) that obtain some signal from bone, and atlas-based strategies in which the patient MRI is matched to a CT atlas, have demonstrated excellent results and solved the MR-AC issue for the adult head with normal anatomy [12]. However, the translation of these methods to other body regions is challenged by several factors. First, the scanned object is larger and more prone to motion (e.g., swallowing, respiration and pulsation), which challenges current UTE/ZTE MRI sequences with their small field-of-view and long acquisition times. Second, body regions outside the brain exhibit large inter-patient variations and many non-rigid structures, which are especially problematic for atlas-based methods as the registration requires a high degree of deformability. Third, some regions (e.g., head and neck) have a particularly complex anatomy with many different bony structures and air cavities in close vicinity of each other. For such complex areas, atlas-based methods may struggle to find a suitable match, and UTE/ZTE segmentation-based methods are most likely influenced by susceptibility effects, as has been seen for the sinuses and skull base in brain imaging [13].

Recently, deep learning has emerged as an alternative MR-AC strategy [14] for the brain [15,16,17] and the pelvic region [18, 19] and has shown accurate and robust performance. Although several studies using deep convolutional neural networks for converting MRI to CT also exist for the head/neck region and have been applied in radiotherapy [20,21,22,23,24], only limited effort has been put into detailed evaluation of the effect on PET AC, where the lower photon energy (511 keV compared to the MeV range typically used in radiotherapy) increases the sensitivity to incorrect tissue attenuation coefficients. The lack of studies in the field is most likely due to difficulties in obtaining accurately aligned PET/MRI and CT data for method development and evaluation.

In this study, we use a unique dataset in which PET/MRI and CT imaging of head and neck cancer patients were acquired in matching position, ensured by the radiotherapy immobilization devices. We apply a deep learning model, developed for radiotherapy purposes [23, 24], to derive patient-specific MRI-based attenuation maps using only the standardized and quick Dixon MRI as input, and perform a comprehensive evaluation of its performance on PET AC, including the effect of bone structures and air cavities. We use CT-based AC as the reference and compare our results to those of the vendor-provided MRI-based method.

Materials and methods

Data acquisition

This study analyzed the MR-AC performance on 11 patients (10 males/1 female; 58 ± 9 years old) referred for radiotherapy of head and neck cancer (6 oropharyngeal/3 hypopharyngeal/2 laryngeal cancers; 7 patients with lymph node involvement). Ten of these were part of a previous study concerning MRI-based radiotherapy dose calculations [24], and one patient has been added since.

Each patient underwent a routine planning [18F]FDG-PET/CT (Siemens TruePoint 64; Siemens Healthcare GmbH, Erlangen, Germany) and subsequently a PET/MRI (Siemens Biograph mMR with VE11P software; Siemens Healthcare GmbH, Erlangen, Germany) using the same [18F]FDG injection. Both PET/CT and PET/MRI were acquired in radiotherapy treatment position using a flat table overlay (Qfix, Avondale, PA), a chest board (XRT-Series 6000; Candor, Gislev, Denmark) and individualized thermoplastic masks (EasyFrame; Candor) as described in [23]. All patients gave written informed consent, and the local ethics committee approved the study (H-7023133).

Patients fasted for a minimum of 4 h prior to injection of [18F]FDG (4 MBq/kg), which was given approximately 60 min prior to the PET/CT scan and an average of 142 min prior to the PET/MRI.

CT was acquired with a tube voltage of 100/120 kVp and i.v. contrast and reconstructed using the iterative metal artifact reduction (iMAR) algorithm on 512 × 512 matrices with a pixel spacing of 1.52 × 1.52 mm² and a slice thickness of 2 mm.

PET/MRI scanning of the patients in radiotherapy position prevented the use of the standard head/neck coil, so an alternative coil setup was used, consisting of two flexible coils in coil holders hovering above the patient's head and another flexible coil over the thorax, as described in another study [23]. For all patients, the MRI protocol included acquisition of the standard Dixon MRI sequence (flip angle of 10°; repetition time of 3.85 ms; first/second echo time of 1.23/2.46 ms; matrices of 384 × 312 × 88 with a voxel size of 1.3 × 1.3 × 3.0 mm³) over one bed position covering the tumor. Simultaneously, PET emission data were acquired for an average of 26 ± 7 min in list mode.

Attenuation maps

For each patient, two different approaches were used for deriving Dixon MRI-based attenuation maps. The first was the vendor-provided atlas-based method, which segments the Dixon MRI into soft tissue, fat, lung tissue, and air [3] and then superimposes major bones (hip, spine and skull) from a bone atlas [25]. The resulting attenuation map was created in 240 × 157 × 130 matrices with a voxel size of 2.1 × 2.6 × 2.1 mm³. The second approach employed the deep learning network presented in our previous work [24]. We did not re-train the model for the purpose of this study and refer to the original publication for details regarding network architecture and parameters [24]. In short, it is a convolutional neural network developed in TensorFlow 2.10 [26], with an architecture inspired by the 3D U-net [27, 28], that takes Dixon MRI in-phase and opposed-phase images as inputs and infers an attenuation map in LAC values. The network relies on transfer learning from a similar network trained with > 800 brain patients [16] and was subsequently fine-tuned on a total of 17 head and neck cancer patients (in a leave-one-out process), of which 10 are included in this study; the rest were excluded due to lack of PET data acquisition. Each patient's attenuation map was inferred after being left out of the training process [24]. For the eleventh patient, who was recruited and scanned after the training of the network, the attenuation map was inferred using a (leave-one-out) model selected at random. Inference was performed on a standard desktop PC with a Titan V GPU (NVIDIA Corporation, Santa Clara, CA) and took less than 10 s per attenuation map, which was subsequently resampled to match the vendor-provided atlas-based attenuation map.
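
For illustration, inference with such a network amounts to stacking the two Dixon contrasts as input channels and running a forward pass. A minimal sketch, assuming a trained Keras model saved to disk; the model path and the simple intensity normalization are hypothetical and not taken from [24]:

```python
import numpy as np
import tensorflow as tf

def infer_attenuation_map(in_phase: np.ndarray,
                          opposed_phase: np.ndarray,
                          model_path: str = "dixon_to_lac_unet") -> np.ndarray:
    """Run a trained Dixon-to-LAC network on one patient volume.

    in_phase / opposed_phase: 3D Dixon volumes of identical shape.
    Returns a 3D attenuation map in cm^-1. The model path and the crude
    per-volume intensity normalization are illustrative assumptions.
    """
    model = tf.keras.models.load_model(model_path)
    # Stack the two Dixon contrasts as channels and add a batch axis.
    x = np.stack([in_phase, opposed_phase], axis=-1).astype(np.float32)
    x /= np.percentile(x, 99)  # simple intensity normalization
    mu = model.predict(x[np.newaxis, ...])[0, ..., 0]
    return np.clip(mu, 0.0, None)  # LACs are non-negative
```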

For each patient, a CT-based reference attenuation map was also derived. Non-patient objects (e.g., the scanner bed) were removed from the CT images before they were converted from HU to LAC using a bilinear scaling [29]. The CT-based attenuation maps were registered to their corresponding Dixon MRI by an initial rigid registration (reg_aladin, NiftyReg) followed by a deformable registration (reg_f3d, NiftyReg), and alignments were validated by visual inspection. The CT-based attenuation maps were likewise resampled to match the vendor-provided atlas-based attenuation map.
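
A sketch of the bilinear HU-to-LAC conversion mentioned above is given below. The break point, slope and offset are chosen to approximate the 120 kVp case in [29] and should be treated as illustrative and verified against the reference:

```python
import numpy as np

def hu_to_lac_511kev(hu: np.ndarray, break_point: float = 47.0,
                     a: float = 5.10e-5, b: float = 4.71e-2) -> np.ndarray:
    """Bilinear HU-to-LAC (cm^-1) conversion at 511 keV in the spirit of [29].

    Below the break point, voxels are treated as water/air mixtures;
    above it, as water/bone mixtures. The defaults approximate the
    120 kVp coefficients from the reference.
    """
    hu = np.asarray(hu, dtype=np.float32)
    low = 9.6e-5 * (hu + 1000.0)   # water/air mixture region
    high = a * (hu + 1000.0) + b   # water/bone mixture region
    return np.where(hu <= break_point, low, high)
```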

PET reconstructions

PET images were reconstructed with 3D-OP-OSEM (ordinary Poisson ordered subsets expectation maximization; 4 iterations, 21 subsets, 3 mm full width at half maximum Gaussian post-filtering) on 344 × 344 matrices with a pixel size of 2.1 × 2.1 mm² and a slice thickness of 2 mm using the E7Tools software (Siemens Healthcare GmbH, Erlangen, Germany). For each patient, three PET images were reconstructed using the different attenuation maps: the vendor-provided atlas-based attenuation map (PETAtlas), the deep learning derived attenuation map (PETDeep), and the CT-based attenuation map (PETCT). All PET reconstructions used the hardware-specific attenuation correction as described in [23], and voxels were converted to standardized uptake values (SUV) normalized by body weight. Reconstructed PET images were resampled to match the voxel size of the attenuation maps for data analysis purposes.
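
The SUV conversion used here follows the standard body-weight normalization, sketched below (assuming the reconstruction already yields decay-corrected activity concentrations):

```python
import numpy as np

def to_suv_bw(activity_bq_per_ml: np.ndarray,
              injected_dose_bq: float,
              body_weight_kg: float) -> np.ndarray:
    """Convert a decay-corrected activity-concentration image (Bq/mL)
    to body-weight-normalized SUV, assuming a tissue density of 1 g/mL:
    SUV = C / (injected dose / body weight in grams)."""
    return activity_bq_per_ml * (body_weight_kg * 1000.0) / injected_dose_bq
```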

Data analyses

The overall effect of the MRI-based attenuation maps on the reconstructed PET images was studied by calculating joint histograms between PETAtlas and PETCT and between PETDeep and PETCT for all voxels within the patient volumes, for which the coefficients of determination (R²) were assessed. Keeping PETCT as the reference, we calculated the voxelwise relative differences to PETAtlas and PETDeep to estimate the error caused by MR-AC. The distributions of errors were represented in histograms, and the absolute errors were shown in cumulative histograms from which we assessed the fractions of voxels with errors less than ± 5%, ± 10% and ± 20%.
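
A minimal sketch of these voxelwise comparisons (the epsilon guard against near-zero reference voxels and computing R² as agreement with the identity line are implementation choices, not specified in the text):

```python
import numpy as np

def voxelwise_error_stats(pet_mrac: np.ndarray, pet_ct: np.ndarray,
                          body_mask: np.ndarray):
    """Voxelwise relative error (%) of an MR-AC PET against the CT-AC
    reference within the body mask, R^2 against the identity line, and
    the fractions of voxels with absolute errors below 5/10/20%."""
    m, r = pet_mrac[body_mask], pet_ct[body_mask]
    rel_err = 100.0 * (m - r) / np.maximum(r, 1e-6)
    r2 = 1.0 - np.sum((m - r) ** 2) / np.sum((r - r.mean()) ** 2)
    fractions = {t: float(np.mean(np.abs(rel_err) <= t)) for t in (5, 10, 20)}
    return rel_err, r2, fractions
```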

To examine the MR-AC methods' ability to infer bone and air compartments, we calculated the Dice coefficients for bone (voxels above 0.11 cm⁻¹) and for air within the body contour of the patient (voxels below 0.007 cm⁻¹, not including lung tissue). Next, to examine the impact on PET quantification, we calculated the error in mean SUV (SUVmean) within bone and air separately and extended the analysis by assessing the error as a function of spatial distance to the particular compartment. To this end, distance-to-bone and distance-to-air maps for each patient were created by first identifying CT voxels classified as bone or air. Then, for each CT voxel, the transaxial (2D) Euclidean distance to the nearest voxel classified as bone or air was calculated. Voxels outside the patient volume, within the lungs, or closer to air than to bone were excluded from the distance-to-bone maps, and vice versa for the distance-to-air maps. The voxels in a distance map were then binned into groups at 3 mm intervals, and the SUVmean error within each group was calculated; the lower quartile, median, and upper quartile across all patients were reported.
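
A sketch of the Dice and slice-wise distance-map computations described above (assuming boolean masks with the slice direction as the first array axis; the in-plane voxel sizes are placeholders):

```python
import numpy as np
from scipy import ndimage

def dice(a: np.ndarray, b: np.ndarray) -> float:
    """Dice coefficient between two boolean masks."""
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

def transaxial_distance_to(mask: np.ndarray,
                           in_plane_mm=(2.1, 2.6)) -> np.ndarray:
    """2D Euclidean distance (mm) to the nearest mask voxel, computed
    slice by slice in the transaxial plane."""
    out = np.empty(mask.shape, dtype=np.float32)
    for k in range(mask.shape[0]):
        # distance_transform_edt measures distance to the nearest zero,
        # so the mask is inverted to obtain distance-to-mask.
        out[k] = ndimage.distance_transform_edt(~mask[k], sampling=in_plane_mm)
    return out

# Binning into 3 mm groups for the error-vs-distance analysis could then be:
#   group_index = np.floor(distance_map / 3.0).astype(int)
```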

To relate potential shortcomings of MR-AC to anatomical regions, the SUVmean errors in the spinal cord, brain stem, parotid glands, submandibular glands, larynx and esophagus were calculated for each patient. The same calculations were performed for the tumors. All regions of interest were delineated for radiotherapy purposes using the images of the PET/CT examination and projected into PET/MRI space using the same transformation as used for the CT (described above). Delineation of all anatomical regions followed the national DAHANCA guidelines [30], and the tumors (primary tumor and involved lymph nodes) were delineated on the PET/CT by a nuclear medicine specialist using Mirada XD software (Mirada Medical, Oxford, United Kingdom) and a visual adaptation of an initial isocontour starting at 40% of the maximum uptake.
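
A hypothetical sketch of that initial 40%-of-maximum isocontour (the subsequent visual adaptation by the specialist is, of course, not captured in code):

```python
import numpy as np
from scipy import ndimage

def isocontour_40pct(pet: np.ndarray, seed_mask: np.ndarray) -> np.ndarray:
    """Initial tumor mask from a 40%-of-maximum isocontour, restricted to
    the connected component containing the hottest voxel of a seed region."""
    above = pet >= 0.4 * pet[seed_mask].max()
    labels, _ = ndimage.label(above)
    # Keep only the component containing the hottest voxel inside the seed.
    hottest = np.unravel_index(
        np.argmax(np.where(seed_mask, pet, -np.inf)), pet.shape)
    return labels == labels[hottest]
```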

Results

Figure 1 shows the attenuation maps, the reconstructed PET images, and the PET voxelwise relative difference maps for patient 11.

Fig. 1

The eleventh patient of the cohort, a 52-year-old male with right base-of-tongue cancer and lymph node involvement (T2N1M0). Each row from top to bottom shows an axial, coronal and sagittal slice of: the reference CT; the vendor-provided atlas-based attenuation map (Atlas); the deep learning derived attenuation map (Deep); PETCT; PETAtlas; PETDeep; the relative difference map between PETCT and PETAtlas (ΔPETAtlas); and the relative difference map between PETCT and PETDeep (ΔPETDeep). The involved lymph node is delineated in green in the axial images. Note that the atlas-based attenuation map does not classify the trachea as air, and that the overall PET error is reduced for the deep learning method, as apparent from the difference maps

The joint histograms are shown in Fig. 2, in which the values of PETDeep versus PETCT (R² = 0.997) are distributed closer to the identity line than those of PETAtlas versus PETCT (R² = 0.975). This can also be seen from the distributions of PET errors within the patient volumes shown in Fig. 3A. For these distributions, the mean (± standard deviation) errors are 0.0 ± 11.4% for PETDeep compared to −1.3 ± 21.8% for PETAtlas. The cumulative distributions are shown in Fig. 3B, where the fractions of voxels below the thresholds ± 20%, ± 10%, and ± 5% are 95%, 84%, and 65% for PETDeep and 84%, 64%, and 42% for PETAtlas.

Fig. 2

Joint histograms of PET voxels within the patient volumes for (A) PETDeep and PETCT (R² = 0.997), and (B) PETAtlas and PETCT (R² = 0.975). Note that the axes are clamped at an SUV of 5 even though higher values occur in the PET images

Fig. 3

(A) Histograms of the PET error distributions for PETAtlas and PETDeep. (B) Cumulative histograms of the absolute PET errors for PETAtlas and PETDeep. The vertical dashed lines are located at 5%, 10%, and 20%, and the corresponding fractions of voxels with errors below these thresholds are 65%, 84%, and 95% for PETDeep and 42%, 64%, and 84% for PETAtlas

For the atlas-based MR-AC method, the average Dice coefficients for bone and air across all patients are 0.30 ± 0.11 and 0.42 ± 0.16, respectively. For the deep learning MR-AC method, the corresponding Dice coefficients for bone and air are 0.69 ± 0.08 and 0.74 ± 0.08. The PET SUVmean error as a function of distance to bone is shown in Fig. 4A. Within the bone compartment, SUVmean is underestimated by a median of −4% for PETDeep and −15% for PETAtlas. In both cases the PET error decreases with distance to bone, but the interquartile range (error bars) includes zero at 3–6 mm from bone for PETDeep and at 18–21 mm from bone for PETAtlas. The results of the same analysis for air are seen in Fig. 4B. The median error within air is much lower for PETDeep (12%) than for PETAtlas (84%), and the effect decreases with distance similarly to bone.

Fig. 4

Analysis showing the median error in SUVmean (colored bars) and interquartile range (black error bars) as a function of distance to either (A) bone or (B) air

The effect of MR-AC on PET quantification in different anatomical regions and within tumors is shown in Fig. 5. The errors for PETDeep are generally lower than those for PETAtlas in all of the analyzed regions. For both MR-AC methods, the PET uptake in the mandible (jaw bone) is underestimated, whereas the uptake in the larynx (which contains air within the trachea) is generally overestimated. For PETDeep, the regions with the largest variation are the oral cavity and esophagus. The average (± standard deviation) tumor and involved lymph node volume defined by PETCT is 11.5 ± 10.8 cm³, and the differences in SUVmean are −0.6 ± 2.0% (range: −4.1% to 2.6%) for PETDeep and −3.5 ± 4.6% (range: −14.4% to 2.3%) for PETAtlas.

Fig. 5

Regional analysis showing box-whisker plots (boxes show the quartiles of the data; whiskers show 1.5 times the interquartile range) of the errors in SUVmean within the different anatomical regions and the tumors. The individual errors are shown as colored dots on top of the box-whisker plots

Figure 6A and B shows the patients with the largest errors in tumor SUVmean for the deep learning and atlas-based MR-AC methods, respectively.

Fig. 6

The worst performing cases, based on tumor (green delineation) SUVmean error, for (A) the deep learning method (error of −4.1% for PETDeep and −6.7% for PETAtlas) and (B) the atlas-based method (error of −0.8% for PETDeep and −14.4% for PETAtlas). (A) Axial and sagittal slices of a 68-year-old female with cancer of the right tonsil and bilateral lymph node involvement (T3N2M0). The large signal void in the MRI affects both MR-AC methods. Note that the deep learning method partially adds tissue within the MRI signal void, and that the atlas-based method adds the jawbone despite the missing signal. (B) Axial and coronal slices of a 42-year-old male with cancer of the left tonsil (T1N1M0). The vendor-provided atlas-based method is affected by a fat–water swap (fat becomes soft tissue and vice versa), and some of the air in the trachea is segmented as soft tissue (axial slice)

Discussion

Accurate MR-AC in the head and neck region is challenging due to the complex anatomy, with many different bony structures and air cavities often in close vicinity of each other, as well as large inter-patient variation. In this study, we evaluated a deep learning network for deriving patient-specific Dixon MRI-based attenuation maps for head and neck cancer patients, which demonstrated small PET errors when using CT-based AC as reference and a performance exceeding that of the most recent vendor-provided atlas-based MR-AC method.

Quantitatively, our deep learning method provided PETDeep values that were in line with the reference PETCT values and largely corrected for the underestimation seen for PETAtlas. On a lesion level, PETDeep showed improved quantification with an average difference (± standard deviation) in SUVmean of −0.6 ± 2.0% compared to −3.5 ± 4.6% for PETAtlas. These findings are in accordance with the results reported in our previous study (SUVmean error of −0.4 ± 1.2%) [23].

Accurate MR-AC is especially limited by a method's ability to correctly segment air and bone. The Dice coefficients revealed that the deep learning method exceeded the vendor-provided atlas-based method in this regard. However, because the effects of bone/air on PET quantification are not limited to voxels within bone/air but introduce a spatially varying error, we performed an analysis assessing the PET error as a function of distance to bone/air. We observed that the PET errors within the compartments were greatly reduced when using the deep learning method and that, for both methods, the error rapidly decreases with distance. For PETDeep the bias was only present in the immediate surroundings (3–6 mm), while the bias for PETAtlas had a larger spatial extent (about 2 cm). These results suggest that the deep learning method will greatly improve the PET accuracy for tumors located in proximity to bone or air and could also have implications for defining the tumor outline. It should be noted that the absolute uptake in the analyzed voxels was mostly very low (especially in air), so a small absolute difference can appear as a large relative difference.

The results of the regional analysis confirmed that the largest biases (median errors) were seen for bone (mandible) and regions including air (larynx). Keeping the error-distance relationship in mind, it is also worth emphasizing the low errors of PETDeep compared to PETAtlas in the spinal cord, which is surrounded by bone; the oral cavity, where air cavities may be present next to the teeth and jaw; and the esophagus, which is located posterior to the trachea (air) and anterior to the spine (bone). Although PETDeep showed no clear bias in regions like the oral cavity and esophagus, a larger variation was present there. Upon visual inspection, this was typically attributed to misclassified air cavities caused by, e.g., the tongue not being in the same position in the two scans (see the sagittal view of the attenuation maps in Fig. 1) and small air cavities in the esophagus not captured on the MRI.

While both MR-AC methods have the strength of relying only on the fast and standardized Dixon MRI sequence, inference of a new patient's attenuation map by the deep learning approach has several advantages over the vendor-provided atlas-based method. First, it does not rely on any registration, making the method less sensitive to inter-patient variation and abnormal anatomy. Second, it infers all bones in the body and not just the major bones (hip, spine and skull). Third, the deep learning method proved reliable and robust compared to the vendor-provided method, for which we observed frequent artifacts including fat–water swaps, incorrect tissue segmentation (especially of air cavities), misplacement of bone, and even failure to add bone (the spine was missing for one patient). While these frequent artifacts are in line with other studies [31,32,33,34], the lower MRI quality caused by the altered coil arrangement in this setup [35] might increase the risk. The deep learning method is, however, more sensitive to artifacts in the underlying Dixon MR images. In our previous work [24], we reported that approximately half of the patients had metallic dental implants leading to signal voids clearly visible on Dixon MRI, and this still holds for this study. The artifacts were generally small (< 2 cm), and the model was to a great extent capable of handling them (see the figure in our previous work [24]), but for one patient the model did not fully correct for the presence of a large artifact (Fig. 6A). Improving model robustness toward artifacts could be possible by using a larger and more diverse training cohort [16] and should therefore be a focus of future work.

Although the deep learning method exceeded the performance of the vendor-provided method, some PET errors remain, especially surrounding bone; these can most likely be attributed to the underestimation of bone attenuation coefficients/HU (−199 ± 60 HU) that we reported in our previous study [24]. In a recent deep learning study for AC of the pelvic region, the error in bone was slightly lower (−1%) than ours (−4%), which could be due to improved prediction of bone attenuation coefficients, but a direct comparison should be made with caution due to differences between the two regions. First, the head and neck region (extending from the top of the skull down to the mid thorax in this study) includes many different and usually smaller bone structures (e.g., skull bones, vertebrae, the hyoid bone, shoulder bones and ribs) than those seen in the pelvic region. Second, unlike the pelvis, the head and neck region is further challenged by the presence of many air/tissue and air/bone interfaces, complicating the segmentation task.

Another strategy for AC in PET/MRI, which has become more compelling with the introduction of deep learning, is to generate attenuation-corrected PET images directly from uncorrected PET images. Such methods have been applied for whole-body AC with promising results [36, 37].

The primary limitation of this study is that the evaluation was performed on only eleven patients, and mainly through a leave-one-out validation process as opposed to a separate test cohort. Furthermore, it would be desirable to have a more diverse cohort, e.g., more female patients, to assess whether the model performs equally well for both genders. Another limiting factor is that the CT served as reference despite not directly reflecting the monoenergetic (511 keV) LACs required for PET and despite being acquired with contrast enhancement, which artificially increases the CT values in some soft-tissue regions. Finally, despite the acquisition of both PET/CT and PET/MRI in radiotherapy position using fixation masks, non-rigid registration was still needed, and its inaccuracies will affect PET quantification.

Conclusion

In this study, a deep learning method for deriving patient-specific Dixon MRI-based attenuation maps in the anatomically challenging head and neck region was evaluated for PET AC. Using CT-based AC as reference, the method demonstrated small PET errors in tumors (−0.6 ± 2.0%) and a performance exceeding that of the most recent vendor-provided MR-AC method. The method could have clinical impact, especially on tumor delineations and tumor uptake values close to bone or air compartments.

Availability of data and materials

The datasets used and/or analyzed during the current study are available from the corresponding author upon reasonable request and approval from the relevant regulatory authorities.

Abbreviations

PET: Positron emission tomography
MRI: Magnetic resonance imaging
CT: Computed tomography
FDG: Fluorodeoxyglucose
OP-OSEM: Ordinary Poisson ordered subsets expectation maximization
AC: Attenuation correction
MR-AC: Magnetic resonance-based attenuation correction
LAC: Linear attenuation coefficient
HU: Hounsfield unit

References

  1. Thorwarth D, Leibfarth S, Mönnich D. Potential role of PET/MRI in radiotherapy treatment planning. Clin Transl Imaging. 2013;1(1):45–51.

  2. Gillies RJ, Beyer T. PET and MRI: is the whole greater than the sum of its parts? Cancer Res. 2016;76(21):6163–6.

  3. Martinez-Moller A, Souvatzoglou M, Delso G, Bundschuh RA, Chefd’hotel C, Ziegler SI, et al. Tissue classification as a potential approach for attenuation correction in whole-body PET/MRI: evaluation with PET/CT data. J Nucl Med. 2009;50(4):520–6. https://doi.org/10.2967/jnumed.108.054726

  4. Samarin A, Burger C, Wollenweber SD, Crook DW, Burger IA, Schmid DT, et al. PET/MR imaging of bone lesions–implications for PET quantification from imperfect attenuation correction. Eur J Nucl Med Mol Imaging. 2012;39(7):1154–60.

  5. Andersen FL, Ladefoged CN, Beyer T, Keller SH, Hansen AE, Højgaard L, et al. Combined PET/MR imaging in neurology: MR-based attenuation correction implies a strong spatial bias when ignoring bone. Neuroimage. 2014;84:206–16. https://doi.org/10.1016/j.neuroimage.2013.08.042

  6. Izquierdo-Garcia D, Catana C. MR imaging-guided attenuation correction of PET data in PET/MR imaging. PET Clin. 2016;11(2):129–49.

  7. Ladefoged CN, Benoit D, Law I, Holm S, Kjær A, Højgaard L, et al. Region specific optimization of continuous linear attenuation coefficients based on UTE (RESOLUTE): application to PET/MR brain imaging. Phys Med Biol. 2015;60(20):8047–65.

  8. Juttukonda MR, Mersereau BG, Chen Y, Su Y, Rubin BG, Benzinger TLS, et al. MR-based attenuation correction for PET/MRI neurological studies with continuous-valued attenuation coefficients for bone through a conversion from R2* to CT-Hounsfield units. Neuroimage. 2015;112:160–8.

  9. Marshall HR, Patrick J, Laidley D, Prato FS, Butler J, Théberge J, et al. Description and assessment of a registration-based approach to include bones for attenuation correction of whole-body PET/MRI. Med Phys. 2013;40(8):82509.

  10. Burgos N, Cardoso MJ, Thielemans K, Modat M, Pedemonte S, Dickson J, et al. Attenuation correction synthesis for hybrid PET-MR scanners: application to brain studies. IEEE Trans Med Imaging. 2014;33(12):2332–41.

  11. Arabi H, Koutsouvelis N, Rouzaud M, Miralbell R, Zaidi H. Atlas-guided generation of pseudo-CT images for MRI-only and hybrid PET-MRI-guided radiotherapy treatment planning. Phys Med Biol. 2016;61(17):6531–52.

  12. Ladefoged CN, Law I, Anazodo U, Lawrence KS, Izquierdo-Garcia D, Catana C, et al. A multi-centre evaluation of eleven clinically feasible brain PET/MRI attenuation correction techniques using a large cohort of patients. Neuroimage. 2017;147:346–59. https://doi.org/10.1016/j.neuroimage.2016.12.010

  13. Dickson JC, O’Meara C, Barnes A. A comparison of CT- and MR-based attenuation correction in neurological PET. Eur J Nucl Med Mol Imaging. 2014;41(6):1176–89.

  14. Spadea MF, Maspero M, Zaffino P, Seco J. Deep learning-based synthetic-CT generation in radiotherapy and PET: a review. arXiv preprint arXiv:2102.02734; 2021.

  15. Ladefoged CN, Marner L, Hindsholm A, Law I, Højgaard L, Andersen FL. Deep learning based attenuation correction of PET/MRI in pediatric brain tumor patients: evaluation in a clinical setting. Front Neurosci. 2018;12(January):1005.

  16. Ladefoged CN, Hansen AE, Henriksen OM, Bruun FJ, Eikenes L, Øen SK, et al. AI-driven attenuation correction for brain PET/MRI: Clinical evaluation of a dementia cohort and importance of the training group size. Neuroimage. 2020;222:117221.

  17. Blanc-Durand P, Khalife M, Sgard B, Kaushik S, Soret M, Tiss A, et al. Attenuation correction using 3D deep convolutional neural network for brain 18F-FDG PET/MR: comparison with Atlas, ZTE and CT based attenuation correction. PLoS One. 2019;14(10):e0223141.

  18. Torrado-Carvajal A, Vera-Olmos J, Izquierdo-Garcia D, Catalano OA, Morales MA, Margolin J, et al. Dixon-VIBE deep learning (DIVIDE) pseudo-CT synthesis for pelvis PET/MR attenuation correction. J Nucl Med. 2019;60(3):429–35.

  19. Leynes AP, Yang J, Wiesinger F, Kaushik SS, Shanbhag DD, Seo Y, et al. Direct pseudoCT generation for pelvis PET/MRI attenuation correction using deep convolutional neural networks with multi-parametric MRI: zero echo-time and dixon deep pseudoCT (ZeDD-CT). J Nucl Med. 2018;59(5):852–8.

  20. Dinkla AM, Florkow MC, Maspero M, Savenije MHF, Zijlstra F, Doornaert PAH, et al. Dosimetric evaluation of synthetic CT for head and neck radiotherapy generated by a patch-based three-dimensional convolutional neural network. Med Phys. 2019;46(9):4095–104.

  21. Klages P, Benslimane I, Riyahi S, Jiang J, Hunt M, Deasy JO, et al. Patch-based generative adversarial neural network models for head and neck MR-only planning. Med Phys. 2020;47(2):626–42.

  22. Qi J, Thakrar PD, Browning MB, Vo N, Kumbhar SS. Clinical utilization of whole-body PET/MRI in childhood sarcoma. Pediatr Radiol. 2020;51(3):471–9.

  23. Olin AB, Hansen AE, Rasmussen JH, Ladefoged CN, Berthelsen AK, Håkansson K, et al. Feasibility of multiparametric positron emission tomography/magnetic resonance imaging as a one-stop shop for radiation therapy planning for patients with head and neck cancer. Int J Radiat Oncol Biol Phys. 2020;108:1329–38.

  24. Olin AB, Thomas C, Hansen AE, Rasmussen JH, Krokos G, Urbano TG, et al. Robustness and generalizability of deep learning synthetic computed tomography for positron emission tomography/magnetic resonance imaging–based radiation therapy planning of patients with head and neck cancer. Adv Radiat Oncol. 2021;6(6):100762.

  25. Paulus DH, Quick HH, Geppert C, Fenchel M, Zhan Y, Hermosillo G, et al. Whole-body PET/MR imaging: quantitative evaluation of a novel model-based MR attenuation correction method including bone. J Nucl Med. 2015;56(7):1061–6. https://doi.org/10.2967/jnumed.115.156000

  26. Abadi M, Barham P, Chen J, Chen Z, Davis A, Dean J, et al. Tensorflow: A system for large-scale machine learning. In: 12th {USENIX} Symposium on Operating Systems Design and Implementation ({OSDI} 16). 2016. p. 265–83.

  27. Çiçek Ö, Abdulkadir A, Lienkamp SS, Brox T, Ronneberger O. 3D U-Net: learning dense volumetric segmentation from sparse annotation. In: International conference on medical image computing and computer-assisted intervention. Springer; 2016. p. 424–32.

  28. Ronneberger O, Fischer P, Brox T. U-net: Convolutional networks for biomedical image segmentation. In: International Conference on Medical image computing and computer-assisted intervention. Springer; 2015. p. 234–41.

  29. Carney JPJ, Townsend DW, Rappoport V, Bendriem B. Method for transforming CT images for attenuation correction in PET/CT imaging. Med Phys. 2006;33(4):976–83. https://doi.org/10.1118/1.2174132

  30. DAHANCA Radiotherapy Guidelines 2018. Danish Head and Neck Cancer Group. 2018 [cited 2019 Oct 25]. Available from: https://www.dahanca.oncology.dk/assets/files/GUID_DAHANCA_Radiotherapyguidelines 2018.pdf

  31. Øen SK, Keil TM, Berntsen EM, Aanerud JF, Schwarzlmüller T, Ladefoged CN, et al. Quantitative and clinical impact of MRI-based attenuation correction methods in [18F] FDG evaluation of dementia. EJNMMI Res. 2019;9(1):83.

  32. Olin A, Ladefoged CN, Langer NH, Keller SH, Löfgren J, Hansen AE, et al. Reproducibility of MR-based attenuation maps in PET/MRI and the impact on PET quantification in lung cancer. J Nucl Med. 2018;59(6). https://doi.org/10.2967/jnumed.117.198853

  33. Rausch I, Rust P, Difranco MD, Lassen M, Stadlbauer A, Mayerhoefer ME, et al. Reproducibility of MRI Dixon-based attenuation correction in combined PET/MR with applications for lean body mass estimation. J Nucl Med. 2016;57(7):1096–102. https://doi.org/10.2967/jnumed.115.168294

  34. Ladefoged CN, Hansen AE, Keller SH, Holm S, Law I, Beyer T, et al. Impact of incorrect tissue classification in Dixon-based MR-AC: fat-water tissue inversion. EJNMMI Phys. 2014;1(1):101.

  35. Winter RM, Leibfarth S, Schmidt H, Zwirner K, Mönnich D, Welz S, et al. Assessment of image quality of a radiotherapy-specific hardware solution for PET/MRI in head and neck cancer patients. Radiother Oncol. 2018;128(3):485–91.

  36. Armanious K, Hepp T, Küstner T, Dittmann H, Nikolaou K, La Fougère C, et al. Independent attenuation correction of whole-body [18F]FDG-PET using a deep learning approach with Generative Adversarial Networks. EJNMMI Res. 2020;10(1):1–9.

  37. Shiri I, Arabi H, Geramifar P, Hajianfar G, Ghafarian P, Rahmim A, et al. Deep-JASC: joint attenuation and scatter correction in whole-body 18F-FDG PET using a deep residual network. Eur J Nucl Med Mol Imaging. 2020;47(11):2533–48.

Acknowledgements

We thank the patients for participating in the study, and we thank John and Birthe Meyer Foundation, Denmark, for donating the PET/MRI system to Rigshospitalet.

Funding

The conduct of this study was supported by the Danish Cancer Society and Siemens Healthcare.

Author information

Authors and Affiliations

Authors

Contributions

AO, BMF, FLA and AK planned the study and obtained the ethical approval. ABO, AKB and JHR were responsible for patient recruitment. ABO and CNL developed the main algorithm for image processing. ABO applied and evaluated the algorithm. All authors took part in interpreting and discussing the results. ABO wrote the first draft and was responsible for finalizing the manuscript. All authors read and approved the final manuscript.

Corresponding author

Correspondence to Anders B. Olin.

Ethics declarations

Ethics approval and consent to participate

This study was approved by the local ethics committee (H-7023133). All patients gave written informed consent to participate.

Consent for publication

Not applicable.

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

About this article

Cite this article

Olin, A.B., Hansen, A.E., Rasmussen, J.H. et al. Deep learning for Dixon MRI-based attenuation correction in PET/MRI of head and neck cancer patients. EJNMMI Phys 9, 20 (2022). https://doi.org/10.1186/s40658-022-00449-z
