
Feature Attribution Explanation to Detect Harmful Dataset Shift

This is the code for the paper "Feature Attribution Explanation to Detect Harmful Dataset Shift", published at the International Joint Conference on Neural Networks (IJCNN) 2023, in which we propose a method that combines feature attribution explanations with two-sample tests to detect harmful dataset shift.

Personal Use Only. No Commercial Use.

The code is based on "Failing Loudly: An Empirical Study of Methods for Detecting Dataset Shift" (https://github.com/steverab/failing-loudly) and "Detecting Covariate Drift with Explanations" (https://github.com/DFKI-NLP/xai-shift-detection).
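The core idea can be sketched as follows: compute feature attributions for samples from the source and target distributions, then apply a two-sample test to the attribution distributions rather than to the raw inputs. The snippet below is a minimal illustration only, not the code in this repository: it assumes a differentiable PyTorch classifier model, uses plain input gradients as a stand-in attribution method, and uses per-feature Kolmogorov-Smirnov tests with a Bonferroni correction as the two-sample test (the test family used in the Failing Loudly baseline).

import numpy as np
import torch
from scipy.stats import ks_2samp

def input_gradient_attributions(model, x):
    # Attribute each input feature by the gradient of the top logit
    # w.r.t. the input (a simple stand-in for richer attribution methods).
    x = x.clone().requires_grad_(True)
    top_logit = model(x).max(dim=1).values.sum()
    top_logit.backward()
    return x.grad.detach().cpu().numpy().reshape(len(x), -1)

def attribution_shift_test(attr_src, attr_tgt, alpha=0.05):
    # Per-feature two-sample KS tests on the attribution distributions,
    # with a Bonferroni correction over the number of features.
    n_features = attr_src.shape[1]
    p_values = np.array([ks_2samp(attr_src[:, j], attr_tgt[:, j]).pvalue
                         for j in range(n_features)])
    shift_detected = p_values.min() < alpha / n_features
    return shift_detected, p_values

# Hypothetical usage with tensors x_source, x_target and a trained model:
# attr_src = input_gradient_attributions(model, x_source)
# attr_tgt = input_gradient_attributions(model, x_target)
# shifted, p_values = attribution_shift_test(attr_src, attr_tgt)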

Running experiments

Run experiments using:

python pipeline.py Dataset Shift_Type multiv Model_Name

where Dataset is the dataset to test on (e.g. mnist), Shift_Type is the shift to apply (e.g. adversarial_shift), and Model_Name is the model architecture (e.g. resnet50).

Example: python pipeline.py mnist adversarial_shift multiv resnet50

Dependencies

We require the following dependencies:
