Image Captioning System

This repository presents a PyTorch implementation of the Show, Attend and Tell paper (https://arxiv.org/pdf/1502.03044.pdf) and applies two extensions to it: (1) using pre-trained GloVe embeddings and (2) integrating BERT context vectors into training. Both extensions substantially improve the model's performance.
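As a rough illustration of extension (1), the decoder's word-embedding layer can be seeded with pre-trained GloVe vectors. The sketch below is illustrative only: the toy vocabulary, the 300-dimensional size, and the assumed contents of glove_words.pkl (generated in the GloVe setup steps below) are assumptions, not this repository's exact interface.

```python
import pickle

import numpy as np
import torch
import torch.nn as nn

# Toy word -> index vocabulary; stands in for the vocab.pkl produced by
# processData.py, whose exact format may differ.
vocab = {'<pad>': 0, '<start>': 1, '<end>': 2, 'a': 3, 'dog': 4}

# glove_words.pkl (produced by glove_embeds.py) is assumed here to map
# word -> NumPy vector of length 300.
with open('glove.6B/glove_words.pkl', 'rb') as f:
    glove_words = pickle.load(f)

embed_dim = 300
weights = np.random.uniform(-0.1, 0.1, (len(vocab), embed_dim))
for word, idx in vocab.items():
    vec = glove_words.get(word)
    if vec is not None:
        weights[idx] = vec

# Seed the decoder's embedding layer with the GloVe-initialized matrix.
embedding = nn.Embedding(len(vocab), embed_dim)
embedding.weight = nn.Parameter(torch.tensor(weights, dtype=torch.float))
```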

Parts of this PyTorch implementation are adapted from the following GitHub repositories:

  1. https://github.com/parksunwoo/show_attend_and_tell_pytorch/blob/master/prepro.py
  2. https://github.com/sgrvinod/a-PyTorch-Tutorial-to-Image-Captioning

The main additions of this implementation are:

  1. Integrating GloVe
  2. Integrating BERT (see the sketch after this list)
  3. Integrating recent advancements into the model implementation
  4. Simplifying and cleaning the older implementations
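For the BERT extension, one common way to obtain context vectors for the reference captions is shown below. This is a minimal sketch using the Hugging Face transformers package (an assumed dependency); the repository's actual BERT wrapper and integration point may differ.

```python
import torch
from transformers import BertModel, BertTokenizer  # assumed dependency

# Encode a reference caption with BERT and keep the per-token context
# vectors; the repository's actual BERT usage may differ.
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
bert = BertModel.from_pretrained('bert-base-uncased')
bert.eval()

caption = 'a dog plays with a ball'
inputs = tokenizer(caption, return_tensors='pt')
with torch.no_grad():
    outputs = bert(**inputs)

context_vectors = outputs.last_hidden_state  # shape (1, seq_len, 768)
# These contextual vectors can replace or supplement the word embeddings
# fed to the attention decoder at each decoding step during training.
```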

Instructions to run the code

Download and clean data

  1. Create four folders: data, data/annotations, checkpoints, and glove.6B (see the snippet after these steps)
  2. Download the train2014 and val2014 MS COCO images and place them in the data folder (http://cocodataset.org/#download)
  3. Download the COCO 2014 train/val caption annotations and place them in the data/annotations folder (http://cocodataset.org/#download)
  4. Uncomment the last line of processData.py and run it - this generates train2014_resized, val2014_resized, and vocab.pkl
  5. Comment the last line of processData.py back out
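If you prefer to script step 1, the folder layout can be created with a few lines of Python (paths taken from the steps above):

```python
from pathlib import Path

# Create the folder layout described in step 1 above.
for folder in ('data/annotations', 'checkpoints', 'glove.6B'):
    Path(folder).mkdir(parents=True, exist_ok=True)
```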

Setup GloVe embeddings

  1. Download the glove.6B vectors and place them in the glove.6B folder (https://nlp.stanford.edu/projects/glove/)
  2. Run the glove_embeds.py file - this generates glove_words.pkl in the glove.6B folder (a rough sketch of this conversion follows)
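For reference, the conversion that glove_embeds.py performs amounts to turning a raw glove.6B text file into a word-to-vector lookup. The sketch below assumes the 300-dimensional file and a plain dict pickle; the actual script may differ.

```python
import pickle

import numpy as np

# Parse a raw GloVe text file into a word -> vector dict and pickle it.
# The input file name and output format are assumptions.
glove_words = {}
with open('glove.6B/glove.6B.300d.txt', encoding='utf-8') as f:
    for line in f:
        parts = line.rstrip().split(' ')
        glove_words[parts[0]] = np.asarray(parts[1:], dtype=np.float32)

with open('glove.6B/glove_words.pkl', 'wb') as f:
    pickle.dump(glove_words, f)
```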

Train/Validate the models

  1. Open main.py and scroll to 'START Parameters' (Pre-Trained Models: Baseline, GloVe, BERT)
  2. Edit the parameters to select the model you want to train or test
  3. Run main.py with python3

Pre-Trained Models

  1. BERT Soft Attention Model
  2. GloVe Soft Attention Model
  3. Baseline Soft Attention Model (Xu et al., 2015)

If you only want to validate the pre-trained models, it is simpler to use the Jupyter notebook in this repository: open the notebook, find the Load model section, and load the model you wish to validate. To compare all the models against each other, run the notebook's compare_all function.
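Loading a saved checkpoint for validation follows the usual PyTorch pattern. The file name and checkpoint keys below are illustrative assumptions, not the exact format saved by main.py.

```python
import torch

# Illustrative only: the checkpoint file name and the keys ('encoder',
# 'decoder') are assumptions about what the training loop saves.
checkpoint = torch.load('checkpoints/bert_soft_attention.pth',
                        map_location='cpu')
encoder = checkpoint['encoder']
decoder = checkpoint['decoder']
encoder.eval()
decoder.eval()
```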

Due to GitHub file size limits, the trained models could not be uploaded to this repository. If you want access to them, email me at [email protected]

For more details, see the accompanying report: https://www.overleaf.com/read/jsghphtqpcgc
