
SpatialTrackerV2: 3D Point Tracking Made Easy

CAD&CG, Zhejiang University; University of Oxford; Ant Research; Pixelwise AI; Bytedance Seed

Yuxi Xiao, Jianyuan Wang, Nan Xue, Nikita Karaev, Iurii Makarov, Bingyi Kang, Xin Zhu, Hujun Bao, Yujun Shen, Xiaowei Zhou

Project Page | BibTeX | Google Drive


📰 Latest Updates & News

  • [June 27, 2025]: SpatialTrackerV2 was accepted by ICCV 2025!
  • [June 23, 2025]: Huggingface Space Demo launched! Try it out: 🤗 Huggingface Space

TODO List

  • Release quick start of SpaTrack2-offline
  • Final version of the paper at PAPER.md
  • Release SpaTrack2-online
  • Training & evaluation code
  • Support for more depth models, e.g., DepthAnything, StereoFoundation, UniDepth, Metric3D
  • Ceres Python bindings designed for SpatialTracker and dynamic reconstruction

Set up the environment

To set up the environment for running the SpaTrack model, follow these steps:

  1. Clone the Repository:

    git clone git@github.com:henry123-boy/SpaTrackerV2.git
    cd SpaTrackerV2
    
  2. Create a Virtual Environment: It's recommended to use a virtual environment to manage dependencies.

    conda create -n SpaTrack2 python=3.11
    conda activate SpaTrack2
    
  3. Install Dependencies:

    Install the PyTorch dependencies with pip (tested with torch 2.4):

    python -m pip install torch==2.4.1 torchvision==0.19.1 torchaudio==2.4.1 --index-url https://download.pytorch.org/whl/cu124
    

    Install the required Python packages using pip.

    python -m pip install -r requirements.txt
    

By following these steps, you should have a working environment ready to run the SpaTrack model.
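
To sanity-check the installation, a quick import test helps (a minimal sketch; the exact version suffix depends on your CUDA setup):

    import torch

    # Installed above: torch 2.4.1 built against CUDA 12.4.
    print(torch.__version__)          # expected: 2.4.1+cu124
    print(torch.cuda.is_available())  # True on a machine with a visible CUDA GPU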

Quick Start

We provide two examples to illustrate the usage of SpaTrack2.

Type 1: Monocular video as input (Example0)

python inference.py --data_type="RGB" --data_dir="examples" --video_name="protein" --fps=3
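
To run the monocular pipeline over several clips in a row, a small driver script along these lines can help. This is a sketch rather than part of the repo; it only reuses the CLI flags shown above, and every clip name except protein is a placeholder:

    import subprocess

    # Hypothetical clip names; replace with the videos in your examples/ directory.
    videos = ["protein", "kitchen", "robot"]

    for name in videos:
        # Same documented flags as the single-clip command above.
        subprocess.run(
            ["python", "inference.py", "--data_type=RGB",
             "--data_dir=examples", f"--video_name={name}", "--fps=3"],
            check=True,
        )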

Type 2: Customized posed RGBD video as input (Example1)

We provide an example with posed RGBD input generated by MegaSAM. First, download the example data:

sh scripts/download.sh

Then run:

python inference.py --data_type="RGBD" --data_dir="assets/example0" --video_name="snowboard" --fps=1

Visualize your results

Visualization guidance is printed in the terminal after inference.py completes.

Please follow the instructions in the app_3rd README to configure the dependencies, then install the demo requirements:

python -m pip install gradio==5.31.0 pako

Our Gradio demo lets you easily track points on a target object. Just run:

python app.py
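
If you would rather inspect the results programmatically than through the demo, a loader along these lines is a starting point. Both the output path and the file format below are assumptions; check the terminal guidance printed by inference.py for the actual location and layout of the saved tracks:

    import numpy as np

    # Hypothetical output file -- the real path and format are printed by inference.py.
    result = np.load("results/protein/tracks.npz")

    # List the stored arrays and their shapes (e.g. per-frame 3D track coordinates).
    for key in result.files:
        print(key, result[key].shape)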