
PUB-FeatureSelectionSuppl

Supplemental Material

| File Name | File Description |
| --- | --- |
| `fig_DRS_Spectra.tif` | Diffuse reflectance spectra measured from bone cement, bone marrow, cartilage, cortical bone, muscle, and trabecular bone, showing (a, b) the original spectra after background calibration and (c, d) their standard normal variate (SNV) transformed spectra. A sketch of the SNV transform follows this table. |
| `fig_OVR_Balanced_Accuracy_Compare.tif` | Balanced accuracy computed by logistic regression (LogReg), linear discriminant analysis (LDA), random forest (RF), k-nearest neighbors (KNN), Gaussian Naïve Bayes (GNB), and support vector machine (SVM) classifiers via the one-vs-rest approach for (a) bone cement vs. the rest (bone marrow, cartilage, cortical bone, and trabecular bone) and (b) cortical bone vs. the rest (bone marrow, cartilage, muscle, and trabecular bone). A scoring sketch follows this table. |
| `fig_SHAP_Feature_Importance.tif` | Global feature importance calculated by SHAP, illustrating each feature's summed contribution to the class prediction for (a) bone cement vs. the rest (bone marrow, cartilage, cortical bone, and trabecular bone) and (b) cortical bone vs. the rest (bone marrow, cartilage, muscle, and trabecular bone). A SHAP sketch follows the documentation link below. |
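The SNV transform referenced in `fig_DRS_Spectra.tif` normalizes each spectrum by its own mean and standard deviation, a common scatter correction in spectroscopy. A minimal sketch; the `(n_spectra, n_wavelengths)` array layout is an assumption, not taken from the repository:

```python
import numpy as np

def snv(spectra):
    """Standard normal variate: centre each spectrum (row) on its own
    mean and scale by its own standard deviation."""
    spectra = np.asarray(spectra, dtype=float)
    mean = spectra.mean(axis=1, keepdims=True)
    std = spectra.std(axis=1, keepdims=True)
    return (spectra - mean) / std

# Example: 3 spectra sampled at 5 wavelengths
raw = np.array([[1.0, 2.0, 3.0, 2.0, 1.0],
                [2.0, 4.0, 6.0, 4.0, 2.0],
                [0.5, 1.0, 1.5, 1.0, 0.5]])
print(snv(raw))  # each row now has zero mean and unit variance
```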
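The one-vs-rest comparison in `fig_OVR_Balanced_Accuracy_Compare.tif` relabels one tissue class against all others and scores each classifier with balanced accuracy. A minimal scikit-learn sketch under assumed defaults; the hyperparameters and the hold-out split are illustrative, not the study's pipeline:

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import balanced_accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

# The six classifier families compared in the figure, with default settings
CLASSIFIERS = {
    "LogReg": LogisticRegression(max_iter=1000),
    "LDA": LinearDiscriminantAnalysis(),
    "RF": RandomForestClassifier(random_state=0),
    "KNN": KNeighborsClassifier(),
    "GNB": GaussianNB(),
    "SVM": SVC(),
}

def ovr_balanced_accuracy(X, y, target_class):
    """Score each classifier on 'target_class vs. the rest' labels."""
    y_bin = (np.asarray(y) == target_class).astype(int)
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, y_bin, test_size=0.3, stratify=y_bin, random_state=0)
    return {name: balanced_accuracy_score(y_te, clf.fit(X_tr, y_tr).predict(X_te))
            for name, clf in CLASSIFIERS.items()}

# Usage, e.g.: ovr_balanced_accuracy(snv_spectra, tissue_labels, "bone cement")
```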

The SHAP Documentation

https://shap.readthedocs.io/en/latest/index.html
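A minimal sketch of the global importance computation behind `fig_SHAP_Feature_Importance.tif`, using the library linked above; the random-forest model and the synthetic data are placeholders, not the study's model or data:

```python
import numpy as np
import shap
from sklearn.ensemble import RandomForestClassifier

# Placeholder data: 100 SNV-transformed spectra, 50 wavelength features,
# binary one-vs-rest labels (1 = target tissue, 0 = the rest)
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 50))
y = rng.integers(0, 2, size=100)

model = RandomForestClassifier(random_state=0).fit(X, y)

# TreeExplainer computes SHAP values efficiently for tree ensembles
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)

# Depending on the shap version, classifiers yield one array per class
# (a list) or a 3-D array; keep the values for the target class
if isinstance(shap_values, list):
    shap_values = shap_values[1]
elif shap_values.ndim == 3:
    shap_values = shap_values[..., 1]

# Global importance = mean |SHAP value| per feature, drawn as a bar chart
shap.summary_plot(shap_values, X, plot_type="bar")
```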

The SHAP Citations

  1. S. M. Lundberg et al., “From local explanations to global understanding with explainable AI for trees,” Nat. Mach. Intell. 2(1), 56–67, Nature Publishing Group (2020) [doi:10.1038/s42256-019-0138-9].
  2. S. M. Lundberg and S.-I. Lee, “A Unified Approach to Interpreting Model Predictions,” in Advances in Neural Information Processing Systems 30 (NIPS 2017), pp. 4766–4775 (2017).
  3. M. T. Ribeiro, S. Singh, and C. Guestrin, “‘Why Should I Trust You?’ Explaining the Predictions of Any Classifier,” in Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 1135–1144 (2016) [doi:10.1145/2939672.2939778].
  4. E. Štrumbelj and I. Kononenko, “Explaining prediction models and individual predictions with feature contributions,” Knowl. Inf. Syst. 41(3), 647–665 (2014) [doi:10.1007/s10115-013-0679-x].
  5. A. Shrikumar, P. Greenside, and A. Kundaje, “Learning important features through propagating activation differences,” in 34th International Conference on Machine Learning (ICML 2017), vol. 7, pp. 4844–4866 (2017).
  6. A. Datta, S. Sen, and Y. Zick, “Algorithmic Transparency via Quantitative Input Influence: Theory and Experiments with Learning Systems,” in Proceedings - 2016 IEEE Symposium on Security and Privacy, SP 2016, pp. 598–617, IEEE (2016) [doi:10.1109/SP.2016.42].
  7. S. Bach et al., “On pixel-wise explanations for non-linear classifier decisions by layer-wise relevance propagation,” PLoS One 10(7), e0130140 (2015) [doi:10.1371/journal.pone.0130140].
  8. S. Lipovetsky and M. Conklin, “Analysis of regression in game theory approach,” Appl. Stoch. Model. Bus. Ind. 17(4), 319–330 (2001) [doi:10.1002/asmb.446].

Citation

[JBO]

License

This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License (https://creativecommons.org/licenses/by-nc-sa/4.0/).
