
Biomedical Image Analysis Final Project: Brain Tumor Segmentation

Motivation

Quantitative assessment of brain tumors is an essential part of the diagnostic procedure. In this project, we aim to use segmentation methods to distinguish the tumor region from healthy tissue in brain magnetic resonance (MR) images.

Our goal

We demonstrate the effectiveness of a 3D-UNet in the context of the BraTS 2019 Challenge and aim to improve segmentation performance with a weighted mean squared error (MSE) loss function.
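As a concrete illustration, such a loss can scale each voxel by the weight of its ground-truth class, so that rare tumor classes contribute more to the gradient. The sketch below is a minimal version, assuming PyTorch, one-hot targets, and user-chosen class weights (e.g. inverse class frequency); the exact weighting scheme used in this project may differ.

```python
import torch

def weighted_mse_loss(pred, target, class_weights):
    """MSE where each voxel is scaled by the weight of its class,
    so that rare tumor classes are not drowned out by background."""
    # pred, target: (batch, classes, D, H, W); target is one-hot
    # class_weights: (classes,) tensor, e.g. inverse class frequency
    w = class_weights.view(1, -1, 1, 1, 1)  # broadcast over voxels
    return torch.mean(w * (pred - target) ** 2)
```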

Dataset

The BraTS dataset comes from the Multimodal Brain Tumor Segmentation Challenge 2019. All BraTS multimodal scans are available as NIfTI files (.nii.gz). For each patient, a T1-weighted (T1w), a post-contrast T1-weighted (T1CE), a T2-weighted (T2w), and a Fluid-Attenuated Inversion Recovery (FLAIR) MRI scan are provided. The data comprise 210 glioblastoma (GBM/HGG) and 75 lower-grade glioma (LGG) pre-operative multimodal MRI scans. The scans originate from 19 institutions and were acquired with different protocols, magnetic field strengths, and MRI scanners. Annotations comprise the GD-enhancing tumor (ET), the peritumoral edema (ED), and the necrotic and non-enhancing tumor core (NCR/NET).
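For reference, the four co-registered modalities of one patient can be loaded and stacked with nibabel as sketched below. The path layout and patient ID are placeholders; the 240×240×155 volume shape and the segmentation labels (0 background, 1 NCR/NET, 2 ED, 4 ET) follow the standard BraTS convention.

```python
import nibabel as nib
import numpy as np

patient = "BraTS19_XXXX"  # placeholder patient ID
modalities = ["flair", "t1", "t1ce", "t2"]

# Stack the four co-registered modalities into one (4, 240, 240, 155) array
volume = np.stack(
    [nib.load(f"{patient}/{patient}_{m}.nii.gz").get_fdata() for m in modalities]
)
seg = nib.load(f"{patient}/{patient}_seg.nii.gz").get_fdata()  # labels 0, 1, 2, 4
```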

Sample MRI slice (left: original MRI, right: ground truth)

The Architecture of 3D-UNet

As the following figure shows, our network architecture is a 3D-UNet. It consists of an encoder part (left) and a decoder part (right). The encoder part follows the typical architecture of a convolutional neural network: repeated application of two 3x3 convolutions (unpadded convolutions), each followed by a leaky rectified linear unit (LeakyReLU), and a 2x2 max pooling operation with stride 2 for downsampling. Every step in the decoder part consists of an upsampling of the feature map followed by a 3x3 convolution ("up-convolution"). A minimal sketch of one such encoder/decoder level is given below.
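To make the structure concrete, here is a minimal PyTorch sketch of a 3D-UNet with a single encoder/decoder level. It is an illustrative skeleton, not the project's exact implementation: padded convolutions are used so the skip connection can be concatenated without cropping (the description above uses unpadded ones), and the channel counts are arbitrary.

```python
import torch
import torch.nn as nn

class TinyUNet3D(nn.Module):
    """One-level 3D-UNet: encoder block, bottleneck, decoder block."""

    def __init__(self, in_ch=1, base=16, out_ch=4):
        super().__init__()
        self.enc = self._block(in_ch, base)                 # encoder convs
        self.pool = nn.MaxPool3d(kernel_size=2, stride=2)   # downsampling
        self.bottom = self._block(base, base * 2)           # bottleneck
        self.up = nn.ConvTranspose3d(base * 2, base, 2, stride=2)  # "up-convolution"
        self.dec = self._block(base * 2, base)              # after skip concat
        self.head = nn.Conv3d(base, out_ch, kernel_size=1)  # per-voxel scores

    @staticmethod
    def _block(in_ch, out_ch):
        # two 3x3(x3) convolutions, each followed by a LeakyReLU
        return nn.Sequential(
            nn.Conv3d(in_ch, out_ch, 3, padding=1), nn.LeakyReLU(inplace=True),
            nn.Conv3d(out_ch, out_ch, 3, padding=1), nn.LeakyReLU(inplace=True),
        )

    def forward(self, x):
        e = self.enc(x)                          # full-resolution features
        b = self.bottom(self.pool(e))            # half-resolution bottleneck
        d = self.up(b)                           # upsample back
        d = self.dec(torch.cat([d, e], dim=1))   # skip connection
        return self.head(d)
```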

Evaluation

We evaluate our model on two dataset configurations, both extracted from the HGG test set. The first model is trained on FLAIR MRI only; the second is trained on FLAIR combined with T1CE MRI.

Comparing the test results of the two models (see table), we empirically find that the model trained on FLAIR and T1CE data outperforms the one trained on FLAIR alone: its Dice score is higher and its Hausdorff distance is smaller.
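The Dice score measures the overlap between the predicted and ground-truth masks; a minimal NumPy version for a single binary mask is sketched below (an assumed implementation, with the convention that two empty masks score 1.0). The Hausdorff distance can be computed, for example, with scipy.spatial.distance.directed_hausdorff on the two boundary point sets.

```python
import numpy as np

def dice_score(pred, truth):
    """Dice coefficient between two binary masks: 2|A∩B| / (|A| + |B|)."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    denom = pred.sum() + truth.sum()
    if denom == 0:
        return 1.0  # both masks empty: count as perfect agreement
    return 2.0 * np.logical_and(pred, truth).sum() / denom
```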

Test-slice results from one patient

Conclusion

In this project, we demonstrated the effectiveness of a 3D-UNet on the BraTS 2019 Challenge. We trained our model on two different data configurations, FLAIR alone and FLAIR with T1CE, and compared the resulting performance. During training, we used a weighted MSE loss function to address the label imbalance problem, which indeed improved accuracy. Semantic segmentation of brain tumors is undoubtedly challenging and may require substantial additional pre-processing or post-processing. In future work, we may combine multiple data modalities for ensemble training to exploit their complementary information and further improve tumor segmentation performance.