
[EMNLP'18 (Long)] Speed Reading: Learning to Read ForBackward via Shuttle

A PyTorch implementation of LSTM-Shuttle

Paper | Poster

Overview

LSTM-Shuttle is an implementation of
"Speed Reading: Learning to Read ForBackward via Shuttle"
by Tsu-Jui Fu and Wei-Yun Ma,
in Conference on Empirical Methods in Natural Language Processing (EMNLP) 2018 (Long).

LSTM-Shuttle not only shuttles forward while reading but can also go backward. Shuttling forward enables high efficiency, and going backward gives the model a chance to recover lost information, ensuring better prediction. It first reads a fixed number of words sequentially and outputs the hidden state. Then, based on that hidden state, LSTM-Shuttle computes a softmax distribution over the candidate forward and backward shuttle steps.
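
A minimal PyTorch sketch of one read-then-shuttle step is below. It assumes a fixed read size K and a small set of candidate offsets; the names (ShuttleCell, K, STEPS) and shapes are illustrative assumptions, not the repository's actual API. Because the sampled offset is discrete, the paper trains this decision with a policy-gradient method rather than plain backpropagation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical constants, not the repo's API:
K = 4                          # words read sequentially before each shuttle decision
STEPS = list(range(-2, 6))     # candidate shuttle offsets; negative = go backward

class ShuttleCell(nn.Module):
    def __init__(self, embed_dim, hidden_dim):
        super().__init__()
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.shuttle = nn.Linear(hidden_dim, len(STEPS))  # one score per offset

    def forward(self, embeds, pos, state=None):
        # Read K consecutive word embeddings starting at position `pos`.
        chunk = embeds[:, pos:pos + K, :]
        out, state = self.lstm(chunk, state)
        h = out[:, -1, :]                                 # hidden state after the read
        probs = F.softmax(self.shuttle(h), dim=-1)        # shuttle softmax distribution
        step = STEPS[int(torch.multinomial(probs, 1))]    # sample an offset (batch = 1)
        new_pos = max(pos + K + step, 0)                  # shuttle forward or backward
        return new_pos, state, probs
```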

Requirements

This code is implemented in Python 3 and PyTorch.
The following libraries are also required:

Usage

We use spaCy for the pre-trained word embeddings.
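
As a rough illustration of that lookup (the model name en_core_web_md and the embed helper are assumptions, not the notebooks' exact code):

```python
import numpy as np
import spacy
import torch

# Assumed model name; any spaCy pipeline that ships word vectors works here.
nlp = spacy.load("en_core_web_md")

def embed(tokens):
    # Look up each token's pre-trained vector and stack into a [len(tokens), dim] tensor.
    vecs = np.stack([nlp.vocab[tok].vector for tok in tokens])
    return torch.from_numpy(vecs)
```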

  • LSTM-Vanilla: model_lstm-vanilla.ipynb
  • LSTM-Jump: model_lstm-jump.ipynb
  • LSTM-Shuttle: model_lstm-shuttle.ipynb

Resources

Citation

@inproceedings{fu2018lstm-shuttle,
  author = {Tsu-Jui Fu and Wei-Yun Ma},
  title = {{Speed Reading: Learning to Read ForBackward via Shuttle}},
  booktitle = {Conference on Empirical Methods in Natural Language Processing (EMNLP)},
  year = {2018}
}

Acknowledgement
