LITE: Light Inception with boosTing tEchniques for Time Series Classification


This is the source code of our paper "LITE: Light Inception with boosTing tEchniques for Time Series Classification" accepted at the 10th IEEE International Conference on Data Science and Advanced Analytics (DSAA 2023) in the Learning from Temporal Data (LearnTeD) special session track.
This work was done by Ali Ismail-Fawaz, Maxime Devanne, Stefano Berretti, Jonathan Weber and Germain Forestier.

The LITE architecture

The same LITE architecture is then used to form an ensemble of five LITE models, named LITETime.

(figure: the LITE architecture)
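As a sketch of the ensembling step (assuming, as is common for deep ensembles such as InceptionTime, that LITETime averages the class-probability outputs of its five members; the function and array names below are illustrative, not the repository's actual API):

```python
import numpy as np

def ensemble_predict(all_probas):
    """Average per-model class probabilities, then take the argmax.

    all_probas: array of shape (n_models, n_samples, n_classes),
    e.g. the softmax outputs of five trained LITE models.
    """
    mean_probas = np.mean(all_probas, axis=0)  # average over the ensemble
    return np.argmax(mean_probas, axis=1)      # one predicted class per sample

# Toy example: 5 "models", 2 samples, 3 classes (random valid probabilities).
rng = np.random.default_rng(0)
probas = rng.dirichlet(np.ones(3), size=(5, 2))
preds = ensemble_predict(probas)
print(preds.shape)  # one prediction per sample
```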

Usage of the code

To use this code, run the main.py file with the following options passed as arguments:

--dataset : to choose the dataset from the UCR Archive (default=Coffee)
--classifier : to choose the classifier, in this case only LITE can be chosen (default=LITE)
--runs : to choose the number of runs (default=5)
--output-directory : to choose the output directory name (default=results/)
--track-emissions : to choose whether or not to track the training/testing time and the CO2/power consumption.
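For example, a typical run might look like the following (the argument values shown are simply the defaults listed above):

```shell
python main.py --dataset Coffee --classifier LITE --runs 5 --output-directory results/
```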

Adaptation of code

The only change needed in the code is the folder_path variable in the utils/utils.py file, this line. This path should point to the parent directory of the UCR Archive datasets.
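As a sketch, the edit in utils/utils.py would look like the following (the path shown is only a placeholder for the location of the archive on your machine):

```python
# utils/utils.py -- placeholder path; point this to the parent directory
# that contains the UCR Archive dataset folders on your machine.
folder_path = "/path/to/UCRArchive_2018/"
```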

Using aeon-toolkit to Train a LITETimeClassifier on your DATA

When using aeon, simply load your data, create an instance of the LITETimeClassifier, and train it:

```python
from aeon.datasets import load_classification
from aeon.classification.deep_learning import LITETimeClassifier

# Load the train/test splits of a UCR dataset.
xtrain, ytrain, _ = load_classification(name="Coffee", split="train")
xtest, ytest, _ = load_classification(name="Coffee", split="test")

# Train an ensemble of five LITE models (LITETime) and evaluate it.
clf = LITETimeClassifier(n_classifiers=5)
clf.fit(xtrain, ytrain)
print(clf.score(xtest, ytest))
```

Results

Results for FCN, ResNet, Inception, InceptionTime, ROCKET, MultiROCKET, LITE and LITETime can be found in the results.csv file. For non-ensemble methods, results are averaged over five runs; for ensemble methods, we ensemble the five runs.
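The per-run averaging described above can be sketched with pandas as follows (the column names and accuracy values here are purely illustrative and do not reflect the actual results.csv schema):

```python
import pandas as pd

# Hypothetical per-run results: two datasets, five runs each.
df = pd.DataFrame({
    "dataset": ["Coffee"] * 5 + ["Beef"] * 5,
    "run": list(range(5)) * 2,
    "accuracy": [0.96, 0.97, 0.95, 0.96, 0.96,
                 0.70, 0.72, 0.71, 0.69, 0.73],
})

# For non-ensemble methods: mean accuracy per dataset over the five runs.
mean_acc = df.groupby("dataset")["accuracy"].mean()
print(mean_acc)
```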

Average performance and FLOPS comparison

The following figure compares LITE with state-of-the-art complex deep learning models. The comparison consists of the average performance and the number of FLOPs.

(figure: average performance vs. FLOPs comparison)

LITE 1v1 with FCN, ResNet and Inception

The following compares LITE with FCN, ResNet and Inception using the accuracy on the test sets of the 128 datasets of the UCR archive.

(figure: LITE 1v1 accuracy scatter plots)

LITETime 1v1 with ROCKET and InceptionTime

The following compares LITETime with ROCKET and InceptionTime using the accuracy on the test sets of the 128 datasets of the UCR archive.

(figure: LITETime 1v1 accuracy scatter plots)

LITETime MCM with SOTA

The following multi-comparison matrix (MCM) shows the performance of LITETime with respect to the SOTA models for Time Series Classification.


(figure: LITETime multi-comparison matrix)

CD Diagram

The following Critical Difference Diagram (CDD) compares the classifiers by their average rank.

(figure: critical difference diagram)

Requirements

numpy
pandas
scikit-learn
tensorflow
matplotlib
codecarbon
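The requirements above can be installed with pip, assuming the standard PyPI package names (note that sklearn is distributed as scikit-learn):

```shell
pip install numpy pandas scikit-learn tensorflow matplotlib codecarbon
```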

Citation

@inproceedings{Ismail-Fawaz2023LITELightInception,
  author = {Ismail-Fawaz, A. and Devanne, M. and Berretti, S. and Weber, J. and Forestier, G.},
  title = {LITE: Light Inception with boosTing tEchniques for Time Series Classification},
  booktitle = {International Conference on Data Science and Advanced Analytics (DSAA)},
  year = {2023}
}

Acknowledgments

This work was supported by the ANR DELEGATION project (grant ANR-21-CE23-0014) of the French Agence Nationale de la Recherche. The authors would like to acknowledge the High Performance Computing Center of the University of Strasbourg for supporting this work by providing scientific support and access to computing resources. Part of the computing resources were funded by the Equipex Equip@Meso project (Programme Investissements d’Avenir) and the CPER Alsacalcul/Big Data. The authors would also like to thank the creators and providers of the UCR Archive.