- Download the KITTI dataset images into `KITTI/image/`:
  ```shell
  cd KITTI/
  bash downloader.sh
  ```
- Download the KITTI ground-truth pose labels into `KITTI/pose_GT/`:
  ```shell
  cd KITTI/
  # download: http://www.cvlibs.net/download.php?file=data_odometry_poses.zip
  # rename the extracted folder as KITTI/pose_GT
  ```
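If you prefer to script the unzip-and-rename step, here is a minimal sketch. It assumes the archive has already been downloaded, and that it extracts to a `dataset/poses/` subfolder; adjust `extracted_subdir` if your copy differs.

```python
import os
import shutil
import zipfile

def install_pose_gt(zip_path, extracted_subdir="dataset/poses", target="KITTI/pose_GT"):
    """Unzip the KITTI pose archive and rename the extracted folder to `target`."""
    tmp = "tmp_poses"
    with zipfile.ZipFile(zip_path) as zf:
        zf.extractall(tmp)                       # unpack into a scratch directory
    os.makedirs(os.path.dirname(target), exist_ok=True)
    shutil.move(os.path.join(tmp, extracted_subdir), target)  # rename to KITTI/pose_GT
    shutil.rmtree(tmp, ignore_errors=True)       # clean up the scratch directory
```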
- Convert the ground-truth poses from [R|t] matrices to rpy-xyz format, saved as .npy files into `KITTI/pose_GT/` (used for training), and convert the .npy ground truth to rpy-xyz text files in `GT_pose_rpyxyz` (used for visualization):
  ```shell
  python3 preprocess.py
  python3 myGTtxt_generator.py  # need to specify your path
  ```
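For intuition, the [R|t] → rpy-xyz conversion performed by `preprocess.py` can be sketched as below. This is a hypothetical pure-Python version using the ZYX Euler-angle convention; the actual script's function names and angle convention may differ.

```python
import math

def matrix_to_rpyxyz(pose):
    """Convert one KITTI pose row (12 floats, the 3x4 [R|t] matrix in
    row-major order) to (roll, pitch, yaw, x, y, z), ZYX convention."""
    r = [pose[0:4], pose[4:8], pose[8:12]]
    # translation is the last column of the 3x4 matrix
    x, y, z = r[0][3], r[1][3], r[2][3]
    # ZYX Euler angles recovered from the rotation part
    pitch = math.asin(max(-1.0, min(1.0, -r[2][0])))
    roll = math.atan2(r[2][1], r[2][2])
    yaw = math.atan2(r[1][0], r[0][0])
    return roll, pitch, yaw, x, y, z
```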
- Download our dataset, unzip the archives, and put them into `KITTI/image/`:
  ```shell
  cd KITTI/
  # download: https://drive.google.com/open?id=1DVB0K2cufUY0mSzXrByesJdHrs4bZqDf
  ```
  - `images/ntu_15fstep` unzips as `KITTI/image/ntu`
  - `images/room_1fstep` unzips as `KITTI/image/room`
  - `images/campus1_2fstep` unzips as `KITTI/image/campus1`
  - `images/campus2_2fstep` unzips as `KITTI/image/campus2`
  - `images/ntu3_15tstep` unzips as `KITTI/image/ntu3`
  - `images/ntu4_15fstep` unzips as `KITTI/image/ntu4`

  Move everything in `pose_GT` to `KITTI/pose_GT`.
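The per-archive unzip targets above can be scripted. Here is a sketch that assumes the downloaded zips keep the listed base names with a `.zip` extension and sit in the current directory; archives you did not download are skipped.

```python
import os
import zipfile

# archive name -> target directory under KITTI/image/ (mapping from the list above;
# the ".zip" extensions are an assumption)
ARCHIVES = {
    "ntu_15fstep.zip": "KITTI/image/ntu",
    "room_1fstep.zip": "KITTI/image/room",
    "campus1_2fstep.zip": "KITTI/image/campus1",
    "campus2_2fstep.zip": "KITTI/image/campus2",
    "ntu3_15tstep.zip": "KITTI/image/ntu3",
    "ntu4_15fstep.zip": "KITTI/image/ntu4",
}

def unzip_all(src_dir="."):
    """Extract each downloaded archive into its target directory."""
    for name, target in ARCHIVES.items():
        path = os.path.join(src_dir, name)
        if not os.path.exists(path):  # skip archives you have not downloaded
            continue
        os.makedirs(target, exist_ok=True)
        with zipfile.ZipFile(path) as zf:
            zf.extractall(target)
```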
- Download our pretrained model `DeepVo_Epoch_Last.pth` and put it into `model_para/`:
  ```shell
  mkdir model_para
  cd model_para
  wget https://www.dropbox.com/s/0or826j6clrbh3h/DeepVo_Epoch_Last.pth?dl=1
  ```
- Specify your paths in `myMain.py`, `myTest.py`, `myTestNoGT.py`, `myVisualize.py`, and `myVisualizeNoGT.py`:
  - `gt_root`: `KITTI/pose_GT`
  - `img_root`: `KITTI/images`
  - `pose_GT_dir`: `KITTI/pose_GT`
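As a sketch, the three variables could be set near the top of each script like this (exact placement in the repo's files may differ; make `img_root` match however you laid out the image directories above):

```python
# paths relative to the repository root; adjust to your own layout
gt_root = "KITTI/pose_GT"      # rpy-xyz .npy ground-truth poses
img_root = "KITTI/images"      # image sequences
pose_GT_dir = "KITTI/pose_GT"  # ground-truth poses used at evaluation time
```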
- (Optional) Train your own model (you may need the FlowNet pretrained model):
  ```shell
  python3 myMain.py
  ```
- Predict poses on the KITTI dataset and on our dataset:
  ```shell
  python3 myTest.py
  python3 myTestNoGT.py
  ```
- Visualize the predictions on KITTI and on our dataset:
  ```shell
  python3 myVisualize.py
  python3 myVisualizeNoGT.py
  ```
- Visualize poses dynamically with Rviz (requires ROS Kinetic pre-installed):
  ```shell
  mv ros_odometry_visualizer catkin_ws/src/ros_odometry_visualizer  # move the folder into your catkin workspace
  vim ros_odometry_visualizer/launch/odometry_kitti_visualizer.launch  # edit your own paths
  roscd
  cd ..
  catkin_make
  rospack profile
  roslaunch ros_odometry_visualizer odometry_kitti_visualizer.launch       # visualize the KITTI result
  roslaunch ros_odometry_visualizer odometry_kitti_visualizer_noGT.launch  # visualize our dataset result
  ```
- Train on KITTI sequences: 00, 01, 02, 05, 08, 09
- Validate on KITTI sequences: 04, 06, 07, 10
- Test on KITTI sequences: 04, 05, 07, 09, 10
- Test on self-made sequences: ntu, room, campus1, campus2
- `ntu`, `campus1`, and `campus2` were recorded with an iPhone 8 while riding a bicycle.
- `room` was recorded with an iPhone 8 while walking.
- All videos were processed with Blender into 1241x376 PNG sequences.
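The splits above, written out as data (variable names are illustrative, not from the repo's code). Note that the KITTI test list overlaps both the train and validation splits, so results on the overlapping sequences are not held-out performance:

```python
# KITTI odometry sequence splits used in this project
train_seqs = ["00", "01", "02", "05", "08", "09"]
valid_seqs = ["04", "06", "07", "10"]
test_seqs = ["04", "05", "07", "09", "10"]
self_made_seqs = ["ntu", "room", "campus1", "campus2"]

# sequences that appear in both the train and test splits
seen_in_training = sorted(set(test_seqs) & set(train_seqs))
```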
| paper result | pre-trained model from alexart13 | our model |
|---|---|---|
| ntu | ntu-ref | room | room-ref |
|---|---|---|---|
| campus1 | campus1-ref | campus2 | campus2-ref |
| ntu3 | ntu3-ref | ntu4 | ntu4-ref |
All results: https://drive.google.com/drive/folders/16Mqq-QOYdFPORCvmaqvTxSjoXwKzzQ8O?usp=sharing
(If you fail to open the screen-recording videos, we suggest playing them with the VLC player.)
- doc/VFXfinal_report.pdf
- doc/VFXfinal_presentation.pdf
[1] S. Wang, R. Clark, H. Wen and N. Trigoni, "DeepVO: Towards end-to-end visual odometry with deep Recurrent Convolutional Neural Networks," 2017 IEEE International Conference on Robotics and Automation (ICRA), Singapore, 2017, pp. 2043-2050.
[2] DeepVO-pytorch reference implementation: https://github.com/ChiWeiHsiao/DeepVO-pytorch
[3] Odometry visualizer: http://wiki.ros.org/navigation/Tutorials/RobotSetup/Odom
[4] Image visualizer: http://wiki.ros.org/image_transport/Tutorials/PublishingImages
[5] Marker visualizer: http://wiki.ros.org/rviz/Tutorials/Markers%3A%20Basic%20Shapes
| Author | 陳健倫 | 李尚倫 | 李佳蓮 |
|---|---|---|---|