
BNN_for_hyperspectral_datasets_analysis

This repository contains Python code to train Bayesian neural networks for some of the most widely used open hyperspectral imaging datasets and to analyse the results.

We will refer to the repository as bnn4hi. When cloning it, it is recommended to change the destination folder to that name, especially if you intend to use the repository as a module with the import clause:

git clone https://github.com/universidad-zaragoza/BNN_for_hyperspectral_datasets_analysis.git bnn4hi
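
Once cloned this way, and assuming the parent directory of bnn4hi is on your Python path, an import along these lines should be possible (the module paths are inferred from the files described below and are not a documented API, so treat this as an illustration):

# Hypothetical use of the repository as a module.
# lib/config.py is the configuration file described later in this README.
from bnn4hi.lib import config

print(config.DATASETS_LIST)  # the datasets handled by the scripts, in the order expected by test.py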

The repository also contains the results of the entire execution. If you only need the code and documentation, whether to test it or to use it as a library, you can download and launch the download.sh script. It uses the wget command to retrieve the necessary files without the results of previous executions.
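
For example (replace <RAW_URL> with the raw URL of download.sh in this repository, which depends on the default branch):

wget <RAW_URL> -O download.sh
sh download.sh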

Citations

This is the code of the paper Bayesian Neural Networks to Analyze Hyperspectral Datasets Using Uncertainty Metrics. If it is useful to you, please cite:

A. Alcolea and J. Resano, "Bayesian Neural Networks to Analyze Hyperspectral Datasets Using Uncertainty Metrics", in IEEE Transactions on Geoscience and Remote Sensing, vol. 60, pp. 1-10, 2022, doi: 10.1109/TGRS.2022.3205119.

@article{alcolea2022bayesian,
author = {Alcolea, Adrián and Resano, Javier},
journal = {IEEE Transactions on Geoscience and Remote Sensing},
title = {Bayesian Neural Networks to Analyze Hyperspectral Datasets Using Uncertainty Metrics},
year = {2022},
volume = {60},
pages = {1-10},
doi = {10.1109/TGRS.2022.3205119}
}

How to run

Note that a full execution of the launch.sh script may take more than 24 hours. That includes training the models for 100 epochs (which should be fast), launching all the tests (the map and noise tests take the longest), and then training again for 100 epochs with mixed classes and launching the mixed test. Once everything works as expected, training for thousands of epochs (as in the paper) will take even longer, and all the tests will have to be launched again.

With the launch.sh script

./launch.sh

Will run every step needed to reproduce the experiments of the paper Bayesian Neural Networks to Analyze Hyperspectral Datasets Using Uncertainty Metrics. By default it only trains the models for 100 epochs, so that you can easily check that everything works: the boolean variable TEST_EXECUTION in launch.sh is set to true. To train for the same number of epochs as in the paper, set it to false.
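
The exact assignment syntax inside launch.sh may differ slightly, but the change amounts to editing one line, along these lines:

# In launch.sh: disable the quick test mode to train for the epochs used in the paper
TEST_EXECUTION=false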

You must have Python 3 installed with the following libraries: numpy, matplotlib, spectral, scipy, scikit-learn, tensorflow and tensorflow_probability.
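
A typical way to install them with pip would be the following (package names as published on PyPI; no specific versions are given in this README, so adjust them to your environment if needed):

pip3 install numpy matplotlib spectral scipy scikit-learn tensorflow tensorflow-probability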

With Docker

docker build -t bnn4hi .

Will build the bnn4hi Docker image.

docker run -v ${PWD}:/workdir bnn4hi

Will run ./launch.py inside Docker, using the repository directory as the input and output location, so the result will be the same as running ./launch.py locally but without having to install the dependencies.

Usage and file descriptions

Train

./train.py NAME EPOCHS PERIOD

Will train a model on the NAME dataset for EPOCHS epochs, saving a checkpoint and writing information to stdout every PERIOD epochs.

The resulting checkpoints will be saved in Models/{NAME}_{LAYER1_NEURONS}-{LAYER2_NEURONS}model_{P_TRAIN}train_{LEARNING_RATE}lr/, organised into an epoch_{EPOCH} directory for each checkpoint.

Here LAYER1_NEURONS and LAYER2_NEURONS are the number of neurons of each layer of the model, P_TRAIN is the percentage of pixels used for training and LEARNING_RATE is the initial learning rate; all of them are defined in lib/config.py. EPOCH is the epoch of each saved checkpoint, according to PERIOD.
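
For example, the following call should train the IP model for 1000 epochs, saving a checkpoint every 100 epochs (the dataset name and the epoch values are illustrative; the accepted names are those in DATASETS_LIST in lib/config.py):

./train.py IP 1000 100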

Test

./test.py BO_EPOCH IP_EPOCH KSC_EPOCH PU_EPOCH SV_EPOCH

Will perform the necessary tests to generate the reliability diagram and the accuracy vs. uncertainty plots, along with the class uncertainty plot of each image. It requires the models of the five datasets to be already trained. The plots will be saved in Test/.

The five mandatory epoch parameters of ./test.py correspond to the epoch number of the checkpoint selected for testing each model, in this order: BO, IP, KSC, PU and SV. If you want to remove or add datasets, take into account that this order must match the DATASETS_LIST variable in lib/config.py.
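
For example, to test the five models using, for each of them, the checkpoint saved at epoch 1000 (an illustrative value):

./test.py 1000 1000 1000 1000 1000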

Test map

Here we call the entire hyperspectral image a map, that is, every pixel in its original position forming the image; not only the labelled pixels, but also the unlabelled ones.

./test_map.py NAME EPOCH

Will perform inference on every pixel of the NAME dataset and generate a PDF image called H_{NAME}.pdf containing the RGB image, the ground truth, the prediction (with a different colour for each class) and the uncertainty map (with a different colour for each range of uncertainty). The images will be saved in Test/.

As in test.py, EPOCH refers to the epoch number of the selected checkpoint.
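
For example, the following call should generate Test/H_IP.pdf from the checkpoint of the IP model saved at epoch 1000 (again, illustrative values):

./test_map.py IP 1000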

Test with noisy data

./test_noise.py BO_EPOCHS IP_EPOCHS KSC_EPOCHS PU_EPOCHS SV_EPOCHS

Will perform the necessary tests to generate the combined noise plot. It requires the models of the five datasets to be already trained. The plots will be saved in Test/.

The parameters are the same as in test.py and behave the same way.

Train with mixed classes

Running train.py in exactly the same way, but with the -m flag activated, will train the model with mixed classes.

The resulting checkpoints will be saved in Models/{NAME}_{LAYER1_NEURONS}-{LAYER2_NEURONS}model_{P_TRAIN}train_{LEARNING_RATE}lr_{CLASS_A}-{CLASS_B}mixed/, with an epoch_{EPOCH} directory for each checkpoint.

Here CLASS_A and CLASS_B are the numbers of the mixed classes, which are defined for each dataset in lib/config.py.
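
For example, a call along these lines should train the IP model with its two mixed classes (the values are illustrative, and the exact position of the -m flag depends on the argument parsing of train.py):

./train.py IP 1000 100 -m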

Test models with mixed classes

./test_mixed.py BO_EPOCHS IP_EPOCHS KSC_EPOCHS PU_EPOCHS SV_EPOCHS

Will perform the necessary tests to generate and print a table with the aleatoric uncertainty of the mixed classes, and to generate the mixed classes plot of each model. It requires the mixed models of the five datasets to be already trained. The results will be saved in Test/.

The parameters are the same as in test.py and behave the same way.
