
Using GAN to Generate Chest X-Ray Images

  • The following study presents a model for generating chest X-ray images of normal subjects (without lung disease) and pneumonia patients.
  • The proposed model tries to avoid the most common problems GAN models suffer from: the difficulty of jointly training the generator and the discriminator, mode collapse, and poor perceptual quality. The goal is to keep training stable (ensuring the cost function remains differentiable) while the discriminator discovers the most discriminative features of each class in the dataset, which in turn leads the generator to focus on those features during training.
  • A conditional GAN was used: the discriminator is forced to decide whether a medical image is real or fake, and additionally to identify the pathological condition of the generated images.
  • Images of size (64, 64, 3) were used because of limited computational resources.
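The conditional setup described above resembles an auxiliary-classifier GAN: the discriminator has both a real/fake head and a class head, and its loss combines the two terms. A minimal NumPy sketch of such a combined loss (the specific loss functions and the `aux_weight` parameter are illustrative assumptions, not taken from this repository):

```python
import numpy as np

def bce(y_true, y_pred, eps=1e-7):
    """Binary cross-entropy for the real/fake head."""
    y_pred = np.clip(y_pred, eps, 1 - eps)
    return -np.mean(y_true * np.log(y_pred) + (1 - y_true) * np.log(1 - y_pred))

def cce(y_true_onehot, y_pred_probs, eps=1e-7):
    """Categorical cross-entropy for the class head (normal vs. pneumonia)."""
    y_pred_probs = np.clip(y_pred_probs, eps, 1 - eps)
    return -np.mean(np.sum(y_true_onehot * np.log(y_pred_probs), axis=1))

def discriminator_loss(rf_true, rf_pred, cls_true, cls_pred, aux_weight=1.0):
    """Adversarial (real/fake) loss plus weighted auxiliary classification loss."""
    return bce(rf_true, rf_pred) + aux_weight * cce(cls_true, cls_pred)
```

The auxiliary class term is what pushes the discriminator to find features specific to each pathological condition, which the generator then has to reproduce.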

Dataset:

Dataset link: https://www.kaggle.com/datasets/paultimothymooney/chest-xray-pneumonia

GAN Architecture:

Generator Architecture and Discriminator Architecture diagrams (images).

Results:

  • Samples generated by the generator for each case (healthy person, person with pneumonia) are shown below.
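In a conditional generator, sampling a specific case amounts to pairing a random latent vector with the desired class label. A minimal sketch of how such an input could be assembled (the latent size and the concatenation scheme are assumptions; the study's actual conditioning mechanism may differ):

```python
import numpy as np

LATENT_DIM = 100   # assumed latent size
NUM_CLASSES = 2    # normal, pneumonia

def make_generator_input(class_index, batch_size=1, rng=None):
    """Concatenate random noise with a one-hot class label."""
    rng = rng if rng is not None else np.random.default_rng()
    noise = rng.standard_normal((batch_size, LATENT_DIM))
    labels = np.zeros((batch_size, NUM_CLASSES))
    labels[:, class_index] = 1.0
    return np.concatenate([noise, labels], axis=1)

# Request a batch of "pneumonia" (class 1) inputs for the generator:
z = make_generator_input(class_index=1, batch_size=4)
print(z.shape)  # (4, 102)
```

Feeding class 0 versus class 1 inputs through the trained generator is what yields the healthy and pneumonia sample grids.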

Metrics For Image Generation:

  • To evaluate the images produced by the generator, we can train a separate classification network on the generated images only, and then test it on the real images from the dataset. If the features learned from the generated images give high results on the real images, then the generator has captured the characteristics that matter.
  • VGG16 was used for this classification.
Classification NN Result (images).
  • After training on the images generated by the generator, we test the classification network on the real images from the dataset.

  • Several metrics are used in the evaluation to study how well the generative adversarial network captured the basic features that characterize each class, and whether the classification network extracted the features contained in the generated images.

  • The key question: can the features learned from the generator's images be applied successfully to the original images in the dataset?

  • This helps us study what was actually generated, and whether the model really focused on the features of the X-ray images that indicate pneumonia.
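The evaluation protocol above — fit on generated data, score on real data — can be sketched with any classifier; below, a simple nearest-centroid classifier in NumPy stands in for the VGG16 model (the feature shapes, class separations, and classifier choice are all illustrative assumptions):

```python
import numpy as np

def fit_centroids(X_train, y_train):
    """Compute one centroid per class from the (generated) training features."""
    classes = np.unique(y_train)
    return classes, np.stack([X_train[y_train == c].mean(axis=0) for c in classes])

def predict(classes, centroids, X):
    """Assign each sample to its nearest class centroid."""
    dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
    return classes[np.argmin(dists, axis=1)]

# Toy stand-ins: "generated" features for training, "real" features for testing.
rng = np.random.default_rng(0)
X_gen = np.concatenate([rng.normal(0, 1, (50, 8)), rng.normal(3, 1, (50, 8))])
y_gen = np.array([0] * 50 + [1] * 50)          # 0 = normal, 1 = pneumonia
X_real = np.concatenate([rng.normal(0, 1, (20, 8)), rng.normal(3, 1, (20, 8))])
y_real = np.array([0] * 20 + [1] * 20)

classes, centroids = fit_centroids(X_gen, y_gen)              # train on generated
acc = (predict(classes, centroids, X_real) == y_real).mean()  # test on real
```

A high `acc` here means the decision boundary learned from synthetic samples transfers to real ones, which is exactly the property being measured for the GAN.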

  • Accuracy: 93.90%.

  • F1 score: 95.76%, recall: 99.12%, precision: 92.62%.
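These metrics follow directly from the confusion-matrix counts. A small pure-Python sketch of the formulas (the counts in the example are illustrative only, not the study's actual confusion matrix):

```python
def metrics_from_counts(tp, fp, fn, tn):
    """Standard binary-classification metrics, returned as percentages."""
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return {name: round(100 * value, 2)
            for name, value in {"accuracy": accuracy, "precision": precision,
                                "recall": recall, "f1": f1}.items()}

# Illustrative counts only:
print(metrics_from_counts(tp=90, fp=8, fn=2, tn=60))
```

Note the pattern visible in the reported results: recall is much higher than precision, meaning the classifier rarely misses a pneumonia case but produces some false positives.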

  • Classification Report:

    image

  • Confusion Matrix:

    image

Summary:

In the end, we have reached a neural network that is able to generate accurate images. The slight variation in classification accuracy between the classes is due to the generative adversarial network needing more training time to focus on the characteristics of each class, and to the fact that the number of samples in the base dataset differs between the classes (healthy, pneumonia).