PoseCNN: A Convolutional Neural Network for 6D Object Pose Estimation in Cluttered Scenes

Created by Yu Xiang at the RSE-Lab, University of Washington.

Introduction

We introduce PoseCNN, a new Convolutional Neural Network for 6D object pose estimation. PoseCNN estimates the 3D translation of an object by localizing its center in the image and predicting its distance from the camera. The 3D rotation of the object is estimated by regressing to a quaternion representation. arXiv, Project
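The translation and rotation estimates described above combine into a full 6D pose in a standard way: the image-plane center (u, v) plus the predicted depth are back-projected through the camera intrinsics, and the regressed quaternion is converted to a rotation matrix. A minimal sketch, assuming a (w, x, y, z) quaternion order and intrinsics close to the YCB-Video camera (both assumptions, not taken from this README):

```python
import numpy as np

# Illustrative intrinsics (fx, fy, cx, cy) roughly matching the YCB-Video
# camera -- treat these values as an assumption, not ground truth.
K = np.array([[1066.8, 0.0, 312.99],
              [0.0, 1067.5, 241.31],
              [0.0, 0.0, 1.0]])

def backproject_center(u, v, depth, K):
    """Recover the 3D translation T from the object's image center (u, v)
    and its predicted distance `depth` from the camera."""
    Tz = depth
    Tx = (u - K[0, 2]) * Tz / K[0, 0]
    Ty = (v - K[1, 2]) * Tz / K[1, 1]
    return np.array([Tx, Ty, Tz])

def quat_to_rotmat(q):
    """Convert a unit quaternion (w, x, y, z) to a 3x3 rotation matrix."""
    w, x, y, z = q / np.linalg.norm(q)
    return np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(y*z + w*x),     2*(y*z - w*x) * 0 + 2*(x*z - w*y) * 0 + 2*(y*z + w*x) * 0 + 2*(x*z - w*y), 1 - 2*(x*x + y*y)],
    ])

# Example: object centered at pixel (320, 240), 0.8 m from the camera,
# identity rotation (quaternion (1, 0, 0, 0)).
T = backproject_center(320.0, 240.0, 0.8, K)
R = quat_to_rotmat(np.array([1.0, 0.0, 0.0, 0.0]))
```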

(Figure: PoseCNN network overview)

License

PoseCNN is released under the MIT License (refer to the LICENSE file for details).

Citation

If you find PoseCNN useful in your research, please consider citing:

@article{xiang2017posecnn,
    Author  = {Xiang, Yu and Schmidt, Tanner and Narayanan, Venkatraman and Fox, Dieter},
    Title   = {PoseCNN: A Convolutional Neural Network for 6D Object Pose Estimation in Cluttered Scenes},
    Journal = {arXiv preprint arXiv:1711.00199},
    Year    = {2017}
}

Installation

  1. Install TensorFlow. We recommend building TensorFlow from source.

  2. Download the VGG16 weights from here (528M). Place the weight file vgg16.npy in $ROOT/data/imagenet_models.
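     Files like vgg16.npy are commonly a Python-2 pickle of a dict mapping layer names to [weights, biases] pairs; that layout is an assumption here, not verified against this exact file. The loading pattern can be sketched with a tiny stand-in file:

     ```python
     import os
     import tempfile
     import numpy as np

     # Stand-in mimicking the assumed layout of vgg16.npy:
     # {layer name: [weights, biases]}.
     stub = {"conv1_1": [np.zeros((3, 3, 3, 64), np.float32),
                         np.zeros(64, np.float32)]}

     path = os.path.join(tempfile.gettempdir(), "vgg16_stub.npy")
     np.save(path, stub, allow_pickle=True)

     # encoding="latin1" lets NumPy unpickle arrays written under Python 2.
     data = np.load(path, allow_pickle=True, encoding="latin1").item()
     weights, biases = data["conv1_1"]
     ```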

  3. Compile lib/synthesize with cmake. This package provides several useful tools, such as synthetic image generation and ICP.

    Install dependencies:

    cd $ROOT/lib/synthesize
    mkdir build
    cd build
    cmake ..
    make

    Compile the Cython interface for lib/synthesize and the custom layers:

    cd $ROOT/lib
    python setup.py build_ext --inplace

    Add the library path:

    export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$ROOT/lib/synthesize/build
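A quick way to confirm the build is discoverable at runtime is to ask the dynamic linker to resolve the shared library. Note that "libsynthesizer.so" below is a guess at the name cmake produces; check $ROOT/lib/synthesize/build for the actual .so file.

```python
import ctypes

def lib_loadable(name):
    """Return True if the dynamic linker can resolve `name` -- i.e. the
    build directory is on LD_LIBRARY_PATH (or another search path)."""
    try:
        ctypes.CDLL(name)
        return True
    except OSError:
        return False

# Hypothetical library name -- replace with the file cmake actually built.
print(lib_loadable("libsynthesizer.so"))
```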

Tested environment

  • Ubuntu 16.04
  • TensorFlow >= 1.2.0
  • CUDA >= 8.0

Running the demo

  1. Download our model trained on the YCB-Video dataset from here, and save it to $ROOT/data/demo_models.

  2. Run the demo script:

    ./experiments/scripts/demo.sh $GPU_ID

Running on the YCB-Video dataset

  1. Download the YCB-Video dataset from here.

  2. Create a symlink for the YCB-Video dataset (the name LOV is legacy, short for Learning Objects from Videos):

    cd $ROOT/data/LOV
    ln -s $ycb_data data
  3. Training and testing on the YCB-Video dataset

    cd $ROOT
    
    # training
    ./experiments/scripts/lov_color_2d_train.sh $GPU_ID
    
    # testing
    ./experiments/scripts/lov_color_2d_test.sh $GPU_ID
    
