Feature detection and tracking

Feature detection and tracking is a collection of methods concerning feature detection and tracking in videos, developed for the Computer Vision course of the master's degree program in Computer Science at the University of Trento.

Author

Samuele Bortolotti (MAT 229326)

Requirements

The code as-is runs in Python 3.9 with the following dependencies

And the following development dependencies

Getting Started

Follow these instructions to set up the project on your PC.

Moreover, to facilitate the use of the application, a Makefile is provided; to see its targets, simply call the appropriate help command with GNU Make:

make help

1. Clone the repository

git clone https://github.com/samuelebortolotti/feature-detection-and-tracking.git
cd feature-detection-and-tracking

2. Install the requirements

pip install --upgrade pip
pip install -r requirements.txt

Note: it might be convenient to create a virtual environment to handle the dependencies.

The Makefile provides a simple and convenient way to manage Python virtual environments (see venv). To create the virtual environment and install the requirements, make sure you have Python 3.9 (it should work with more recent versions as well, but it has only been tested with 3.9):

make env
source ./venv/fdt/bin/activate
make install

Remember to deactivate the virtual environment once you have finished working on the project:

deactivate

3. Generate the code documentation

The automatic code documentation is provided by Sphinx v4.5.0.

To make the code documentation available, you need to install the development requirements:

pip install --upgrade pip
pip install -r requirements.dev.txt

Since the Sphinx commands are quite verbose, I suggest using the following Makefile targets instead:

make doc-layout
make layout

The generated documentation will be accessible by opening docs/build/html/index.html in your browser, or equivalently by running

make open-doc

However, for the sake of completeness, here are the full Sphinx commands:

sphinx-quickstart docs --sep --no-batchfile --project feature-detection-and-tracking --author "Samuele Bortolotti"  -r 0.1  --language en --extensions sphinx.ext.autodoc --extensions sphinx.ext.napoleon --extensions sphinx.ext.viewcode --extensions myst_parser
sphinx-apidoc -P -o docs/source .
cd docs; make html

Note: executing the second list of commands will lead to slightly different documentation with respect to the one generated by the Makefile, because the commands above do not customise the Sphinx index file.

4. Run the SIFT feature detection

To run the SIFT feature detector on an image you can type:

python -m fdt sift path_to_image [--n-features 100]

where path_to_image is the path to the image you want to process with the SIFT algorithm and --n-features refers to the number of features you want to obtain from the detection phase.

As output, the algorithm will plot the original image with the SIFT keypoints drawn on top of it.

Alternatively, you can obtain the same result in a less verbose manner by tuning the flags in the Makefile and then run:

make sift

5. Run the ORB feature detection

To run the ORB feature detector on an image you can type:

python -m fdt orb path_to_image [--n-features 100]

where path_to_image is the path to the image you want to process with the ORB algorithm and --n-features refers to the number of features you want to obtain from the detection phase.

As output, the algorithm will plot the original image with the ORB keypoints drawn on top of it.

Alternatively, you can obtain the same result in a less verbose manner by tuning the flags in the Makefile and then run:

make orb

6. Run the Harris corner detector

To run the Harris corner detector on an image you can type:

python -m fdt harris path_to_image [--config-file]

where path_to_image is the path to the image you want to process with the Harris corner detector and --config-file loads the configuration defined in fdt/config/harris_conf.py.

As output, the algorithm will plot the original image with the Harris corners drawn on top of it.

Alternatively, you can obtain the same result in a less verbose manner by tuning the flags in the Makefile and then run:

make harris

7. Run the Simple Blob detector

To run the Simple blob detector on an image you can type:

python -m fdt blob path_to_image [--config-file]

where path_to_image is the path to the image you want to process with the Simple Blob detector and --config-file loads the configuration defined in fdt/config/blob_conf.py.

As output, the algorithm will plot the original image with the blob center keypoints drawn on top of it.

Alternatively, you can obtain the same result in a less verbose manner by tuning the flags in the Makefile and then run:

make blob

8. Run the feature matching

python -m fdt matcher matcher_method [--n-features 100 --flann --matching-distance 60 --video material/Contesto_industriale1.mp4 --frame-update 30]

Alternatively, you can obtain the same result in a less verbose manner by tuning the flags in the Makefile and then run:

make matcher

9a. Run the feature detection with the Kalman filter as tracking algorithm

python -m fdt kalman matcher_method [--n-features 100 --flann --matching-distance 60 --video material/Contesto_industriale1.mp4 --frame-update 30 --output-video-name videoname]

If --output-video-name is passed, the program saves the video in AVI format in the output folder, using the XVID codec. You may therefore need to install that codec if it is not already available; otherwise, feel free to change it in the code or suggest a better alternative.

Alternatively, you can obtain the same result in a less verbose manner by tuning the flags in the Makefile and then run:

make kalman

You can customise the Kalman filter matrices by modifying the current_conf Python dictionary in the fdt/config/kalman_config.py file.

The current configuration is depicted here:

import numpy as np

"""Legend:
  A (np.ndarray): state transition matrix
  w (np.ndarray): process noise
  H (np.ndarray): measurement matrix
  v (np.ndarray): measurement noise
  B (np.ndarray): additional and optional control input
"""

# Configuration which is running at the moment
current_conf = {
    "dynamic_params": 6,
    "measure_params": 2,
    "control_params": 0,
    "A": np.array(
        [
            [1, 0, 1, 0, 1 / 33, 0],
            [0, 1, 0, 1, 0, 1 / 33],
            [0, 0, 1, 0, 1, 0],
            [0, 0, 0, 1, 0, 1],
            [0, 0, 0, 0, 1, 0],
            [0, 0, 0, 0, 0, 1],
        ],
        np.float32,
    ),
    "w": np.eye(6, dtype=np.float32) * 50,
    "H": np.array(
        [
            [1, 1 / 33, 0, 0, 0, 0],
            [0, 1, 0, 0, 0, 0],
        ],
        dtype=np.float32,
    ),
    "v": np.eye(2, dtype=np.float32) * 50,
    "B": None,
}

Note: if the program raises an error when the name of the output video is passed, it may be a codec issue; consider changing the `cv2.VideoWriter_fourcc(...)` line in the code (tracking/kalman.py).

9b. Run the feature detection with the Lucas-Kanade optical flow as tracking algorithm

python -m fdt lukas-kanade matcher_method [--n-features 100 --video material/Contesto_industriale1.mp4 --frame-update 30 --output-video-name videoname]

If --output-video-name is passed, the program saves the video in AVI format in the output folder, using the XVID codec. You may therefore need to install that codec if it is not already available; otherwise, feel free to change it in the code or suggest a better alternative.

Alternatively, you can obtain the same result in a less verbose manner by tuning the flags in the Makefile and then run:

make lukas-kanade

Note: if the program raises an error when the name of the output video is passed, it may be a codec issue; consider changing the `cv2.VideoWriter_fourcc(...)` line in the code (tracking/lucas_kanade.py).

10. Report

The report describing the implementation details and my considerations about the feature detectors can be found in the report/paper folder.

Moreover, a simple LaTeX Beamer presentation is available in the report/presentation folder.
