A Robust Approach Towards Distinguishing Natural and Computer Generated Images using Multi-Colorspace fused and Enriched Vision Transformer

Manjary P Gangan (a), Anoop K (b), and Lajish V L (a)
(a) Department of Computer Science, University of Calicut, India
(b) School of Psychology, Queen's University Belfast, UK

📝 Paper : https://arxiv.org/abs/2308.07279
🌏 Project page: https://dcs.uoc.ac.in/cida/projects/dif/mcevit.html (will be available along with the publication)

Abstract: Existing works that classify natural and computer generated images are mostly designed as binary tasks, considering either natural images versus computer graphics images or natural images versus GAN generated images, but not natural images versus both classes of generated images. Moreover, although this forensic classification task benefits from modern convolutional neural network and transformer based architectures that achieve remarkable accuracies, these models are seen to fail on images that have undergone post-processing operations commonly used to deceive forensic algorithms, such as JPEG compression, Gaussian noise, etc. This work proposes a robust approach to distinguishing natural and computer generated images, covering both computer graphics and GAN generated images, using a fusion of two vision transformers, where each transformer network operates in a different color space: one in RGB and the other in YCbCr. The proposed approach achieves a high performance gain over a set of baselines and also attains higher robustness and generalizability than the baselines. When visualized, the features of the proposed model show higher class separability than the input image features and the baseline features. This work also studies the attention map visualizations of the networks in the fused model and observes that the proposed methodology captures more image information relevant to the forensic task of classifying natural and generated images.
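The sketch below illustrates the two-branch, multi-colorspace fusion idea described in the abstract: one ViT branch receives the RGB image, the other receives its YCbCr conversion, and their features are concatenated for binary (natural vs. generated) classification. This is not the authors' released code; the backbone name, head size, and use of the timm library are illustrative assumptions.

```python
# Minimal sketch (assumed implementation, not the paper's code) of fusing two
# ViT branches that operate in RGB and YCbCr color spaces.
# Requires: torch, timm.

import torch
import torch.nn as nn
import timm


def rgb_to_ycbcr(x: torch.Tensor) -> torch.Tensor:
    """Convert a batch of RGB images in [0, 1] to YCbCr (ITU-R BT.601, full range)."""
    r, g, b = x[:, 0:1], x[:, 1:2], x[:, 2:3]
    y  =  0.299 * r + 0.587 * g + 0.114 * b
    cb = -0.168736 * r - 0.331264 * g + 0.5 * b + 0.5
    cr =  0.5 * r - 0.418688 * g - 0.081312 * b + 0.5
    return torch.cat([y, cb, cr], dim=1)


class TwoColorspaceViT(nn.Module):
    """Two ViT feature extractors (RGB and YCbCr) fused by feature concatenation."""

    def __init__(self, backbone: str = "vit_base_patch16_224", num_classes: int = 2):
        super().__init__()
        # num_classes=0 strips the classifier head so each ViT returns pooled features.
        self.vit_rgb = timm.create_model(backbone, pretrained=False, num_classes=0)
        self.vit_ycbcr = timm.create_model(backbone, pretrained=False, num_classes=0)
        feat_dim = self.vit_rgb.num_features + self.vit_ycbcr.num_features
        self.head = nn.Linear(feat_dim, num_classes)

    def forward(self, x_rgb: torch.Tensor) -> torch.Tensor:
        f_rgb = self.vit_rgb(x_rgb)                    # features from the RGB branch
        f_ycbcr = self.vit_ycbcr(rgb_to_ycbcr(x_rgb))  # features from the YCbCr branch
        return self.head(torch.cat([f_rgb, f_ycbcr], dim=1))


if __name__ == "__main__":
    model = TwoColorspaceViT()
    logits = model(torch.rand(2, 3, 224, 224))  # natural vs. generated logits
    print(logits.shape)  # torch.Size([2, 2])
```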

For other inquiries, please contact:
Manjary P Gangan 📧 [email protected] 🌏 website
Anoop K 📧 [email protected] 🌏 website
Lajish V L 📧 [email protected] 🌏 website

Citation

@article{gangan2023robust,
      title={A Robust Approach Towards Distinguishing Natural and Computer Generated Images using Multi-Colorspace fused and Enriched Vision Transformer}, 
      author={Manjary, P Gangan and Anoop, Kadan and Lajish, V L},
      year={2023},
      eprint={2308.07279},
      archivePrefix={arXiv},
      doi={10.48550/arXiv.2308.07279}
}

Acknowledgement

This work was supported by the Women Scientist Scheme-A (WOS-A) for Research in Basic/Applied Science from the Department of Science and Technology (DST) of the Government of India.
