[Paper] [Project Website] [Data]
Authors: Irmak Guzey, Ben Evans, Soumith Chintala and Lerrel Pinto, New York University and Meta AI
This repository contains the official implementation of T-Dex, including the training pipeline for the tactile encoders and the real-world deployment of non-parametric imitation learning policies for dexterous manipulation tasks, using an Allegro hand with integrated XELA tactile sensors and a Kinova arm.
Datasets for the play data and the demonstrations are uploaded to this Google Drive link. Instructions on how to use these datasets are given below.
The following assumes our current working directory is the root folder of this project repository; tested on Ubuntu 20.04 LTS (amd64).
- Install the project environment:
  ```
  conda env create --file=conda_env.yml
  ```
  This will create a conda environment with the name `tactile_dexterity`.
- Activate the environment:
  ```
  conda activate tactile_dexterity
  ```
- Install the `tactile_dexterity` package using `setup.py`:
  ```
  pip install -e .
  ```
  This command should be run inside the conda environment. You can test whether the package has been installed correctly by running `import tactile_dexterity` from a Python shell.
- To enable logging, log in with a `wandb` account:
  ```
  wandb login
  ```
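For convenience, the whole sequence above can be run as one shell session. This is only a sketch of the steps already listed and assumes `conda` is installed:

```bash
# Complete setup, run from the repository root (assumes conda is installed).
conda env create --file=conda_env.yml
conda activate tactile_dexterity
pip install -e .                      # installs the tactile_dexterity package
python -c "import tactile_dexterity"  # sanity check: should exit without errors
wandb login                           # enables wandb experiment logging
```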
This work uses the Holo-Dex pipeline for demonstration collection, along with a few of the interfaces implemented there, and uses the Holo-Dex package and API to connect to the robots. You can install `holodex` as a separate package and follow its instructions to set up the pipeline and collect demonstrations. The same procedure is required for deployment as well.
Datasets used for training and evaluation are available at this Google Drive link.
- There are two separate folders:
  - `play_data`: all of the tactile play data. The commanded and current states of the Kinova and Allegro are saved along with the tactile and visual observations.
  - `evaluation`: successful demonstrations used during the robot runs. Each directory contains demonstrations for a different task.
- Download and unzip the datasets.
- Update the dataset paths in the configuration files accordingly.
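For reference, after downloading and unzipping, the dataset layout should look roughly like the sketch below; the task folder names are placeholders and the exact contents may differ:

```
<dataset_location>/
├── play_data/            # tactile play data: tactile + visual observations, robot states
└── evaluation/
    ├── <task_name_1>/    # successful demonstrations for one task
    ├── <task_name_2>/
    └── ...
```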
The following assumes our current working directory is the root folder of this project repository and the data provided above is being used. For each of these parts you should first activate the conda environment by running:
conda activate tactile_dexterity
Both the play data used for training and the evaluation dataset need to be preprocessed. The steps are as follows:
- Preprocess by running:
  ```
  python preprocess.py data_path=<data to preprocess>
  ```
  Here you should set the `data_path` variable to either the root task directory (which will be `<dataset_location>/evaluation/<task_name>`) or the play data directory (which will be `<dataset_location>/play_data`). You can set the necessary parameters in the `tactile_dexterity/configs/preprocess.yaml` file.
- Preprocessing should be done separately, with a different procedure for each use case (a concrete example is sketched after this list):
  - If the preprocessing is done for tactile SSL training, set the following parameters:
    ```
    vision_byol: false
    tactile_byol: true
    dump_images: false
    threshold_step_size: 0.1
    ```
  - If the preprocessing is done for image SSL training, set the following parameters:
    ```
    vision_byol: true
    tactile_byol: false
    dump_images: true
    threshold_step_size: 0.1
    view_num: <camera-id>
    ```
    The `view_num` parameter should be 0 if there is only one camera; otherwise, set it to whichever camera you would like to use.
  - If the preprocessing is done for robot deployments, set the following parameters:
    ```
    vision_byol: true
    tactile_byol: false
    dump_images: true
    threshold_step_size: 0.2
    view_num: <camera-id>
    ```
    `threshold_step_size` can be changed according to the task, but this is the default value. It is the threshold on the difference in end-effector position used during subsampling. Please refer to the paper for more detailed information.
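As a concrete example, preprocessing the play data for tactile SSL training could look like the sketch below. The parameter values mirror the list above, and the dataset path is an illustrative placeholder:

```bash
# Example: preprocess the play data for tactile SSL training.
# First set the following in tactile_dexterity/configs/preprocess.yaml:
#   vision_byol: false
#   tactile_byol: true
#   dump_images: false
#   threshold_step_size: 0.1
# Then run, pointing data_path at your unzipped dataset:
python preprocess.py data_path=/path/to/dataset/play_data
```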
You can train encoders with self-supervised learning (SSL) methods such as BYOL and VICReg on tactile and visual images, as well as behavior cloning (BC) models that use both image and tactile inputs.
- Use the following command to train a ResNet encoder with the SSL methods mentioned above:
  ```
  python3 train.py data_dir=<path-to-desired-root> experiment=<experiment-name>
  ```
  The experiment name and the training data directory can be changed accordingly; the experiment name is mainly used for `wandb` logging. The data roots used for preprocessing are the same ones used for training. A concrete example is sketched after this list.
- Currently, this command trains BYOL on tactile images. To change the training type and the training modality (tactile|image), modify the `tactile_dexterity/configs/train.yaml` file; the `learner`, `learner_type`, and `dataset` variables should be changed accordingly. The naming follows the paper.
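For example, training a tactile BYOL encoder on the preprocessed play data could look like the following. The data path and experiment name are illustrative placeholders:

```bash
# Example: train a tactile BYOL encoder (the current default configuration).
# To train an image encoder or a BC model instead, change learner,
# learner_type and dataset in tactile_dexterity/configs/train.yaml first.
python3 train.py data_dir=/path/to/dataset/play_data experiment=tactile_byol_play
```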
Training saves a snapshot of the models used under `tactile-dexterity/out`, organized by time and experiment name. For each deployer model used, the desired model paths should be retrieved from there.
You can deploy models by running:
```
python3 deploy.py data_path:<path-to-evaluation-task> deployer:<vinn|bc|openloop>
```
For each deployer module, you should set the model directories to the saved snapshots (an example is sketched after this list):
- For VINN, set `tactile_out_dir` and `image_out_dir` to the paths of the encoders you want to use.
- For BC, set `out_dir` to the root directory of all the BC encoders saved during training.
- For OpenLoop, no encoder is needed.
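Putting this together, a VINN deployment could look like the sketch below. The paths are illustrative placeholders, and it is assumed here that the encoder directories are set in the deployer's configuration rather than on the command line:

```bash
# Example: deploy a VINN policy on an evaluation task.
# Before running, set tactile_out_dir and image_out_dir in the VINN deployer
# configuration to the encoder snapshots saved under tactile-dexterity/out/.
# (The data_path:... / deployer:... syntax mirrors the command shown above.)
python3 deploy.py data_path:/path/to/dataset/evaluation/<task_name> deployer:vinn
```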
NOTE: These instructions assume that you are running the Holo-Dex deployment API in a separate shell. Without communication with the robot, these deployments cannot be run.
If you use this repo in your research, please consider citing the paper as follows:
```
@misc{guzey2023dexterity,
      title={Dexterity from Touch: Self-Supervised Pre-Training of Tactile Representations with Robotic Play},
      author={Irmak Guzey and Ben Evans and Soumith Chintala and Lerrel Pinto},
      year={2023},
      eprint={2303.12076},
      archivePrefix={arXiv},
      primaryClass={cs.RO}
}
```