Deepfake Image Generation for Improved Brain Tumor Segmentation

Authors: Roa'a Al-Emaryeen, Sara Al-Nahhas, Fatima Himour, Waleed Mahafza, Omar Al-Kadi

Published: 2023-07-26 16:11:51+00:00

AI Summary

This research explores deepfake image generation as a way to improve brain tumor segmentation. A Generative Adversarial Network (GAN) enlarges the training set via image-to-image translation, and a U-Net-based network is then trained for segmentation on the augmented dataset. Results show improved segmentation quality metrics compared with training on real images alone.
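
The augmentation step relies on unpaired image-to-image translation with a cycle-consistency constraint. The following is a minimal PyTorch sketch of that constraint, assuming hypothetical generator networks G_AB and G_BA; it illustrates the idea only and is not the authors' implementation.

```python
# Minimal sketch of the cycle-consistency loss used in CycleGAN-style
# image-to-image translation (hypothetical generators G_AB, G_BA).
import torch.nn as nn

l1 = nn.L1Loss()

def cycle_consistency_loss(G_AB, G_BA, real_A, real_B, lam=10.0):
    """Translate A->B->A and B->A->B and penalise the reconstruction error."""
    fake_B = G_AB(real_A)   # source-domain image mapped to the target domain
    rec_A = G_BA(fake_B)    # translated back to the source domain
    fake_A = G_BA(real_B)
    rec_B = G_AB(fake_A)
    return lam * (l1(rec_A, real_A) + l1(rec_B, real_B))
```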

Abstract

As technology and healthcare progress, awareness of disease improves through the detection of asymptomatic signs. It is important to detect and treat tumors at an early stage, as they can be life-threatening. Computer-aided technologies are used to overcome lingering limitations facing disease diagnosis, yet brain tumor segmentation remains a difficult process, especially when multi-modality data is involved. This is mainly attributed to ineffective training due to a lack of data and corresponding labelling. This work investigates the feasibility of employing deepfake image generation for effective brain tumor segmentation. To this end, a Generative Adversarial Network was used for image-to-image translation to increase dataset size, followed by image segmentation using a U-Net-based convolutional neural network trained with deepfake images. Performance of the proposed approach is compared against the ground truth of four publicly available datasets. Results show improved performance in terms of image segmentation quality metrics, and the approach could potentially assist when training with limited data.


Key findings
Training with deepfake images improved brain tumor segmentation across the reported quality metrics: Dice similarity coefficient (DSC), Jaccard similarity coefficient (JSC), mean absolute distance (MAD), and Hausdorff distance (HD). The results suggest that augmenting datasets with deepfake images is a promising approach, particularly when real data is limited. However, limitations remain regarding the robustness of deepfake data and its performance on small lesions or noisy images.
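
For reference, the overlap and boundary metrics listed above can be computed from binary masks as in the sketch below (DSC, JSC, and HD shown; the paper's exact evaluation protocol is not reproduced here).

```python
# Illustrative computation of segmentation quality metrics on binary masks.
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def dice_jaccard(pred, gt):
    """Dice similarity coefficient (DSC) and Jaccard similarity coefficient (JSC)."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    dsc = 2.0 * inter / (pred.sum() + gt.sum() + 1e-8)
    jsc = inter / (np.logical_or(pred, gt).sum() + 1e-8)
    return dsc, jsc

def hausdorff(pred, gt):
    """Symmetric Hausdorff distance (HD) between the foreground point sets
    of the two masks (boundary extraction is often applied first in practice)."""
    p = np.argwhere(pred.astype(bool))
    g = np.argwhere(gt.astype(bool))
    return max(directed_hausdorff(p, g)[0], directed_hausdorff(g, p)[0])
```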
Approach
The researchers used a CycleGAN for image-to-image translation to generate deepfake MRI images, augmenting existing datasets. A U-Net model, using a pre-trained DenseNet-169 encoder, was then trained on this augmented dataset to perform brain tumor segmentation.
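
A segmentation model of this kind can be assembled, for example, with the segmentation_models_pytorch library, which provides a U-Net decoder on top of a pre-trained DenseNet-169 encoder. The sketch below is only an illustration under that assumption (single-channel slices, binary masks, Dice loss) and is not the authors' exact configuration.

```python
# Sketch: U-Net with a pre-trained DenseNet-169 encoder via
# segmentation_models_pytorch (assumed stack; not the authors' code).
import torch
import segmentation_models_pytorch as smp

model = smp.Unet(
    encoder_name="densenet169",   # DenseNet-169 backbone as the encoder
    encoder_weights="imagenet",   # initialise with ImageNet pre-training
    in_channels=1,                # single-channel MRI slice (assumption)
    classes=1,                    # binary tumor mask
)

loss_fn = smp.losses.DiceLoss(mode="binary")
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

def train_step(images, masks):
    """One optimisation step on a batch of real and deepfake slices."""
    optimizer.zero_grad()
    logits = model(images)
    loss = loss_fn(logits, masks)
    loss.backward()
    optimizer.step()
    return loss.item()
```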
Datasets
BraTS 2020, Unpaired MR-CT Brain Dataset, Brain T1-weighted CE-MRI images, IXI dataset
Model(s)
CycleGAN (for deepfake generation), U-Net (with DenseNet-169 encoder) for segmentation
Author countries
Jordan