Results will be displayed on the Leaderboard
This repository is for you if you want to take part in the THÖR-MAGNI challenge: develop, train, and test your own methods on the dataset.
To get you started, we provide a comprehensive companion repository that contains a sample dataloader and describes everything you need to know about handling the THÖR-MAGNI data. For a first impression of what the data looks like, you can use our visualization tool.
You develop and train your method locally and generate prediction files that can be packaged and submitted to our challenge.
Submissions to our challenge must be made in .npy format.
For information on how to format your predictions, please check out the BENCHMARK REPO once again before proceeding with the next steps.
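As a minimal, hypothetical sketch of producing such a file (the array shape, ordering, and contents shown here are placeholders; the required format is defined in the BENCHMARK REPO), predictions can be collected into a NumPy array and saved as `.npy`:

```python
import numpy as np

# Hypothetical example: predictions for 10 trajectories, each with
# 12 future time steps of (x, y) coordinates. The actual shape and
# ordering required by the challenge are defined in the BENCHMARK REPO.
predictions = np.zeros((10, 12, 2), dtype=np.float32)

np.save("my_predictions.npy", predictions)

# Sanity check: the file round-trips with the same shape and dtype.
loaded = np.load("my_predictions.npy")
assert loaded.shape == (10, 12, 2)
assert loaded.dtype == np.float32
```

A round-trip check like this is a cheap way to catch a malformed file before packaging it.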
The repository's main directory contains a config.ini file. Here you can set your team name and specify your method, as well as the name of the prediction file that you want to upload to the challenge in the next step.
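For orientation, a config file of this kind might look roughly like the following. The section and key names here are purely illustrative; use the actual keys present in the config file shipped with the repository.

```ini
; Illustrative layout only — edit the real config file in the repo,
; keeping its actual section and key names.
[submission]
team_name = my-team
method = my-method
prediction_file = my_predictions.npy
```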
To participate in this challenge, follow these steps:
- Fork this repository to your own GitHub account.
- Clone the forked repository to your local machine.
- Create a conda environment using the following command:
conda env create -f environment.yaml && conda activate thor-magni-challenge
- Copy your submission file (.npy) into the repo base folder and package it: (NOTE: This will use the metadata you specified in config.yml and create a submission.npy file inside the submissions folder.)
python package_submission.py
- To test your challenge results, you can run the processing script locally. This will print the leaderboard entry for the previously packaged submission.
python challenge_processing_script.py
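As a rough illustration of the kind of score such a local check might report (the actual metrics are implemented in challenge_processing_script.py and may differ), trajectory-prediction leaderboards commonly use the Average Displacement Error (ADE). The function below is a generic sketch, not the challenge's official metric:

```python
import numpy as np

def average_displacement_error(pred: np.ndarray, gt: np.ndarray) -> float:
    """Mean Euclidean distance between predicted and ground-truth
    positions, averaged over all trajectories and time steps.

    Both arrays are assumed to have shape (num_trajectories,
    num_timesteps, 2) — an illustrative convention, not necessarily
    the challenge's.
    """
    return float(np.linalg.norm(pred - gt, axis=-1).mean())

# Toy data: every predicted point is offset by (1, 1) from the
# ground truth, so the distance is sqrt(2) at every point and the
# ADE is sqrt(2).
pred = np.zeros((5, 12, 2))
gt = np.ones((5, 12, 2))
print(average_displacement_error(pred, gt))
```

Running a metric like this on a held-out split of your own training data is a useful sanity check before packaging a submission.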
Please proceed with the following steps only if you want to submit your final results!
- Commit and push ONLY the submission.npy file inside the submissions folder to your forked repository.
- Create a pull request to submit your submission.npy file to the challenge branch. Your pull request will be inspected by one of our admins and approved if there are no outstanding issues.
Note that the ground truth test annotations are provided in the BENCHMARK REPO. This is because they match the ground truth of the original THÖR-MAGNI data, which is readily available. We trust participants not to utilize these unethically, especially as we will be inviting the top participants to present their work at our 2024 ICRA workshop and will review submissions accordingly. For participation in the workshop, only submissions provided before 01.05. will be considered. Top performers will then be contacted to validate their approaches and will receive instructions for submitting their writeup to the 6th Workshop on Long-Term Human Motion Prediction (LHMP) on 13.05.2024.
If you have questions or remarks regarding this challenge, please contact one of our team members:
The THÖR-MAGNI dataset is a large-scale collection of human and robot navigation and interaction data. THÖR-MAGNI offers diverse navigation styles of both mobile robots and humans engaged in shared environments with robotic agents, featuring multi-modal data for a comprehensive representation. THÖR-MAGNI serves as a valuable resource for training activity-conditioned motion prediction models and investigating visual attention during human-robot interaction.
To further support researchers, THÖR-MAGNI comes with a dedicated set of user-friendly tools, including a dashboard and the specialized Python package thor-magni-tools. These tools streamline the visualization, filtering, and preprocessing of raw trajectory data, enhancing the accessibility and usability of the dataset. By providing these resources, we aim to equip researchers with versatile and efficient tools to navigate, analyze, and extract valuable insights from the dataset.