Landmark Detection & Robot Tracking (SLAM)

Overview

An implementation of SLAM (Simultaneous Localization and Mapping) for a 2-dimensional world! It combines knowledge of a robot's sensor measurements and movement to create a map of an environment from only the sensor and motion data gathered by that robot over time. SLAM gives you a way to track the location of a robot in the world in real-time and to identify the locations of landmarks such as buildings, trees, rocks, and other world features. This is an active area of research in the fields of robotics and autonomous systems.

Below is an example of a 2D robot world with landmarks (purple x's) and the robot (a red 'o'), localized using only the sensor and motion data collected by that robot. This is just one example for a 50x50 grid world; in your work you will likely generate a variety of these maps.
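
To make the sense/move cycle concrete, below is a minimal sketch of the kind of robot class explored in Notebook 1. The class name, the uniform noise model, and the measurement_range parameter are illustrative assumptions here, not the exact code shipped with this repository.

    import random

    class Robot:
        """A toy robot living in a square 2D grid world (illustrative sketch)."""

        def __init__(self, world_size=50.0, measurement_range=5.0,
                     motion_noise=0.2, measurement_noise=0.2):
            self.world_size = world_size
            self.measurement_range = measurement_range
            self.motion_noise = motion_noise
            self.measurement_noise = measurement_noise
            self.x = world_size / 2.0      # start in the middle of the world
            self.y = world_size / 2.0
            self.landmarks = []            # list of (x, y) landmark positions

        def move(self, dx, dy):
            """Attempt a move; the result is corrupted by uniform motion noise.
            Returns False (and stays put) if the move would leave the world."""
            x = self.x + dx + (random.random() * 2.0 - 1.0) * self.motion_noise
            y = self.y + dy + (random.random() * 2.0 - 1.0) * self.motion_noise
            if not (0.0 <= x <= self.world_size and 0.0 <= y <= self.world_size):
                return False
            self.x, self.y = x, y
            return True

        def sense(self):
            """Return noisy (index, dx, dy) measurements for every landmark
            that falls within measurement_range of the robot."""
            measurements = []
            for i, (lx, ly) in enumerate(self.landmarks):
                dx = lx - self.x + (random.random() * 2.0 - 1.0) * self.measurement_noise
                dy = ly - self.y + (random.random() * 2.0 - 1.0) * self.measurement_noise
                if abs(dx) <= self.measurement_range and abs(dy) <= self.measurement_range:
                    measurements.append((i, dx, dy))
            return measurements

A sequence of such (move, sense) pairs is exactly the data that SLAM consumes in Notebook 3.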

The project is broken up into three Python notebooks; the first two are for exploration of provided code and a review of SLAM architectures:

Notebook 1 : Robot Moving and Sensing

Notebook 2 : Omega and Xi, Constraints (see the sketch after this list)

Notebook 3 : Landmark Detection and Tracking
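
Notebook 2 centers on Graph SLAM's omega matrix and xi vector: every motion and every landmark measurement becomes a linear constraint folded into omega and xi, and the best estimate mu of all robot poses and landmark positions comes from solving mu = omega^-1 * xi. Below is a 1-D sketch of that bookkeeping; the variable ordering (poses first, then landmarks) and the equal weighting of all constraints are assumptions made for illustration, not this project's solution code.

    import numpy as np

    # 1-D Graph SLAM sketch: 3 poses (x0, x1, x2) and 1 landmark (L).
    # Variable order in omega/xi is assumed to be [x0, x1, x2, L].
    n = 4
    omega = np.zeros((n, n))
    xi = np.zeros(n)

    # Initial-position constraint x0 = 0 anchors the whole map.
    omega[0, 0] += 1.0
    xi[0] += 0.0

    def add_constraint(i, j, d):
        """Fold the linear constraint (var_j - var_i = d) into omega and xi.
        Motions and landmark measurements share this same form."""
        omega[i, i] += 1.0
        omega[j, j] += 1.0
        omega[i, j] -= 1.0
        omega[j, i] -= 1.0
        xi[i] -= d
        xi[j] += d

    add_constraint(0, 1, 5.0)   # motion: robot moved +5 from x0 to x1
    add_constraint(1, 2, 3.0)   # motion: robot moved +3 from x1 to x2
    add_constraint(0, 3, 9.0)   # measurement: landmark seen 9.0 ahead from x0
    add_constraint(2, 3, 1.2)   # measurement: landmark seen 1.2 ahead from x2

    mu = np.linalg.inv(omega) @ xi   # best estimate of [x0, x1, x2, L]
    print(mu)                        # approximately [0.0, 4.95, 7.9, 9.05]

The two slightly conflicting landmark sightings (9.0 directly, versus 5.0 + 3.0 + 1.2 = 9.2 via the motions) are reconciled in a least-squares sense, spreading the 0.2 discrepancy across the constraints in the loop.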

Project Instructions

NOTE: Make sure that you have all the libraries and dependencies required to support this project. If you have already created a cv-nd environment for exercise code, then you can use that environment! If not, instructions for creation and activation are below.

Local Environment Instructions

  1. Clone the repository, and navigate to the downloaded folder.

     git clone https://github.com/udacity/P3_Implement_SLAM.git
     cd P3_Implement_SLAM

  2. Create (and activate) a new environment, named cv-nd, with Python 3.6. If prompted to proceed with the install (Proceed [y]/n), type y.

    • Linux or Mac:
    conda create -n cv-nd python=3.6
    source activate cv-nd
    
    • Windows:
    conda create --name cv-nd python=3.6
    activate cv-nd
    

    At this point your command line should look something like: (cv-nd) <User>:P3_Implement_SLAM <user>$. The (cv-nd) indicates that your environment has been activated, and you can proceed with further package installations.

  3. Install a few required pip packages, which are specified in the requirements text file (including OpenCV).

pip install -r requirements.txt

Notebooks

  1. Navigate back to the repo. (Also, your source environment should still be activated at this point.)

     cd
     cd P3_Implement_SLAM

  2. Open the directory of notebooks, using the below command. You'll see all of the project files appear in your local environment; open the first notebook and follow the instructions.

     jupyter notebook

  3. Once you open any of the project notebooks, make sure you are in the correct cv-nd environment by clicking Kernel > Change Kernel > cv-nd.

LICENSE: This project is licensed under the terms of the MIT license.
