Commit `535912c17b` (`README.md`)
# 4D Gaussian Splatting for Real-Time Dynamic Scene Rendering
## CVPR 2024
### [Project Page](https://guanjunwu.github.io/4dgs/index.html) | [arXiv Paper](https://arxiv.org/abs/2310.08528)
[Guanjun Wu](https://guanjunwu.github.io/)<sup>1*</sup>, [Taoran Yi](https://github.com/taoranyi)<sup>2*</sup>,
[Jiemin Fang](https://jaminfong.cn/)<sup>3‡</sup>, [Lingxi Xie](http://lingxixie.com/)<sup>3</sup>, </br>[Xiaopeng Zhang](https://scholar.google.com/citations?user=Ud6aBAcAAAAJ&hl=zh-CN)<sup>3</sup>, [Wei Wei](https://www.eric-weiwei.com/)<sup>1</sup>, [Wenyu Liu](http://eic.hust.edu.cn/professor/liuwenyu/)<sup>2</sup>, [Qi Tian](https://www.qitian1987.com/)<sup>3</sup>, [Xinggang Wang](https://xwcv.github.io)<sup>2‡✉</sup>

<sup>1</sup>School of CS, HUST&nbsp;&nbsp;&nbsp;<sup>2</sup>School of EIC, HUST&nbsp;&nbsp;&nbsp;<sup>3</sup>Huawei Inc.
<sup>\*</sup> Equal Contributions. <sup>$\ddagger$</sup> Project Lead. <sup>✉</sup> Corresponding Author.
---
Our method converges very quickly and achieves real-time rendering speed.
New Colab demo: [Open in Colab](https://colab.research.google.com/drive/1wz0D5Y9egAlcxXy8YO9UmpQ9oH51R7OW?usp=sharing) (Thanks [Tasmay-Tibrewal](https://github.com/Tasmay-Tibrewal).)

Old Colab demo: [Open in Colab](https://colab.research.google.com/github/hustvl/4DGaussians/blob/master/4DGaussians.ipynb) (Thanks [camenduru](https://github.com/camenduru/4DGaussians-colab).)

Light Gaussian implementation: [This link](https://github.com/pablodawson/4DGaussians) (Thanks [pablodawson](https://github.com/pablodawson).)
## News

2024.02: Accepted by CVPR 2024. We removed some logging settings that were used for debugging; the corrected training time is only **8 mins** (previously 20 mins) on the D-NeRF datasets and **30 mins** (previously 1 hour) on the HyperNeRF datasets. Rendering quality is not affected.
## Environment Setup

Please follow [3D-GS](https://github.com/graphdeco-inria/gaussian-splatting) to install the relevant packages.
```bash
git clone https://github.com/hustvl/4DGaussians
cd 4DGaussians
pip install -r requirements.txt
pip install -e submodules/depth-diff-gaussian-rasterization
pip install -e submodules/simple-knn
```

In our environment, we use pytorch=1.13.1+cu116.
## Data Preparation

**For synthetic scenes:**
The dataset provided in [D-NeRF](https://github.com/albertpumarola/D-NeRF) is used. You can download the dataset from [Dropbox](https://www.dropbox.com/s/0bf6fl0ye2vz3vr/data.zip?dl=0).
**For real dynamic scenes:**
The dataset provided in [HyperNeRF](https://github.com/google/hypernerf) is used. You can download scenes from the [HyperNeRF dataset](https://github.com/google/hypernerf/releases/tag/v0.1) and organize them as in [Nerfies](https://github.com/google/nerfies#datasets). The [Plenoptic Dataset](https://github.com/facebookresearch/Neural_3D_Video) can be downloaded from its official website. To save memory, extract the frames of each video and then organize your dataset as follows.
```
├── data
│   ├── dnerf
│   ├── ...
```
## Training

For training synthetic scenes such as `bouncingballs`, run:
```bash
python train.py -s data/dnerf/bouncingballs --port 6017 --expname "dnerf/bouncingballs" --configs arguments/dnerf/bouncingballs.py
```
You can customize your training setup through the config files in `arguments/`.
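A config file such as `arguments/dnerf/bouncingballs.py` is a plain Python module that defines parameter dictionaries. A minimal sketch is below; the keys mirror ones that appear in this repository's configs, but the values here are illustrative and the per-key comments are assumptions, not documentation:

```python
# Minimal sketch of an arguments/*.py config file. Keys mirror ones seen in
# this repo's configs; values are illustrative and comments are assumptions.
ModelHiddenParams = dict(
    plane_tv_weight=0.0002,        # regularization weight on the HexPlane grids (assumed meaning)
    time_smoothness_weight=0.001,  # smoothness along the time axis (assumed meaning)
    l1_time_planes=0.0001,         # L1 penalty on the time planes (assumed meaning)
    render_process=False,          # whether to render previews during training (assumed meaning)
)

OptimizationParams = dict(
    iterations=20000,       # total optimization iterations
    pruning_interval=8000,  # how often Gaussians are pruned (assumed meaning)
    percent_dense=0.01,     # densification threshold (assumed meaning)
    render_process=False,
)
```

`train.py --configs <file>` loads such a module, so overriding a value is just editing the dict.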
## Checkpoint
You can also resume training from a checkpoint.
```bash
python train.py -s data/dnerf/bouncingballs --port 6017 --expname "dnerf/bouncingballs" --configs arguments/dnerf/bouncingballs.py --checkpoint_iterations 200 # adjust the iteration as needed
```
Then load the checkpoint with:
```bash
python train.py -s data/dnerf/bouncingballs --port 6017 --expname "dnerf/bouncingballs" --configs arguments/dnerf/bouncingballs.py --start_checkpoint "output/dnerf/bouncingballs/chkpnt_coarse_200.pth"
# fine stage: --start_checkpoint "output/dnerf/bouncingballs/chkpnt_fine_200.pth"
```
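Judging from the commands above, checkpoints appear to follow the pattern `output/<expname>/chkpnt_<stage>_<iteration>.pth`. A small helper to build such paths, sketched from that assumed pattern (this is not an API the repository provides):

```python
def checkpoint_path(expname: str, stage: str, iteration: int) -> str:
    """Build a checkpoint path following the pattern seen above
    (assumed: output/<expname>/chkpnt_<stage>_<iteration>.pth)."""
    return f"output/{expname}/chkpnt_{stage}_{iteration}.pth"

# These match the --start_checkpoint values used in the commands above.
coarse = checkpoint_path("dnerf/bouncingballs", "coarse", 200)
fine = checkpoint_path("dnerf/bouncingballs", "fine", 200)
```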
## Rendering

Run the following script to render the images.
```bash
python render.py --model_path "output/dnerf/bouncingballs/" --skip_train --configs arguments/dnerf/bouncingballs.py &
```
## Evaluation

Run the following script to evaluate the model.
```bash
python metrics.py --model_path "output/dnerf/bouncingballs/"
```
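`metrics.py` reports standard image-quality metrics. For reference, one of the usual metrics, PSNR, is derived from the mean squared error; the sketch below shows the generic formula, not necessarily the repository's exact implementation:

```python
import math

def psnr(mse: float, max_val: float = 1.0) -> float:
    """Peak signal-to-noise ratio in dB for a given mean squared error,
    with pixel values normalized to [0, max_val]."""
    return 10.0 * math.log10(max_val ** 2 / mse)

# An MSE of 0.01 on [0, 1] images corresponds to 20 dB.
```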
## Custom Datasets

Install nerfstudio and follow its COLMAP pipeline.
```bash
pip install nerfstudio
ns-process-data images --data data/your-data --output-dir data/your-ns-data
cp -r data/your-ns-data/images data/your-ns-data/colmap/images
python train.py -s data/your-ns-data/colmap --port 6017 --expname "custom" --configs arguments/hypernerf/default.py
```
There are some helpful scripts included; please feel free to use them.

`vis_point.py`: get all point clouds at each timestamp.

Usage:
```bash
export exp_name="hypernerf"
python vis_point.py --model_path output/$exp_name/interp/aleks-teapot --configs arguments/$exp_name/default.py
```
There is also a script to visualize the weights of the multi-resolution HexPlane module.

`merge_many_4dgs.py`: merge your trained 4DGS models.

Usage:
```bash
export exp_name="dynerf"
python merge_many_4dgs.py --model_path output/$exp_name/sear_steak
```
`colmap.sh`: generate point clouds from input data.
```bash
bash colmap.sh data/hypernerf/virg/vrig-chicken hypernerf
bash colmap.sh data/dynerf/sear_steak llff
```
The **Blender** format doesn't seem to work; you are welcome to raise a pull request to fix it.

`downsample_point.py`: downsample point clouds generated by SfM.
```bash
python scripts/downsample_point.py data/dynerf/sear_steak/colmap/dense/workspace/fused.ply data/dynerf/sear_steak/points3D_downsample2.ply
```
In our paper, we always use `colmap.sh` to generate dense point clouds and downsample them to fewer than 40,000 points.
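The idea behind the downsampling step can be sketched with a plain random subsample. This is a simplified stand-in for `scripts/downsample_point.py` (which reads and writes `.ply` files), not its actual implementation:

```python
import random

def downsample_points(points, max_points=40000, seed=0):
    """Randomly subsample a point cloud (a sequence of (x, y, z) tuples)
    so that at most max_points remain. Simplified stand-in for the
    repo's downsample script, which operates on .ply files."""
    points = list(points)
    if len(points) <= max_points:
        return points
    rng = random.Random(seed)  # fixed seed for reproducibility
    return rng.sample(points, max_points)
```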
Here are some scripts that may be useful but were never adopted in our paper; you can also try them.
## Further Works

We sincerely thank the authors of the following fantastic works built on our code:

[MD-Splatting: Learning Metric Deformation from 4D Gaussians in Highly Deformable Scenes](https://md-splatting.github.io/)

[4DGen: Grounded 4D Content Generation with Spatial-temporal Consistency](https://vita-group.github.io/4DGen/)

[DreamGaussian4D: Generative 4D Gaussian Splatting](https://github.com/jiawei-ren/dreamgaussian4d)

[EndoGaussian: Real-time Gaussian Splatting for Dynamic Endoscopic Scene Reconstruction](https://github.com/yifliu3/EndoGaussian)

[EndoGS: Deformable Endoscopic Tissues Reconstruction with Gaussian Splatting](https://github.com/HKU-MedAI/EndoGS)

[Endo-4DGS: Endoscopic Monocular Scene Reconstruction with 4D Gaussian Splatting](https://arxiv.org/abs/2401.16416)
---
## Contributions

**This project is still under development. Please feel free to raise issues or submit pull requests to contribute to our codebase.**
---
Some of our source code is borrowed from [3DGS](https://github.com/graphdeco-inria/gaussian-splatting), [k-planes](https://github.com/Giodiro/kplanes_nerfstudio), [HexPlane](https://github.com/Caoang327/HexPlane), and [TiNeuVox](https://github.com/hustvl/TiNeuVox). We sincerely appreciate the excellent work of these authors.
## Acknowledgement

We would like to express our sincere gratitude to [@zhouzhenghong-gt](https://github.com/zhouzhenghong-gt/) for his revisions to our code and discussions on the content of our paper.
## Citation

Some insights about neural voxel grids and dynamic scene reconstruction originate from [TiNeuVox](https://github.com/hustvl/TiNeuVox). If you find this repository/work helpful in your research, please consider citing these papers and giving a ⭐.
```
@article{wu20234dgaussians,
  title={4D Gaussian Splatting for Real-Time Dynamic Scene Rendering},
  author={Wu, Guanjun and Yi, Taoran and Fang, Jiemin and Xie, Lingxi and Zhang, Xiaopeng and Wei, Wei and Liu, Wenyu and Tian, Qi and Wang, Xinggang},
  journal={arXiv preprint arXiv:2310.08528},
  year={2023}
}
```
---

Other files changed in this commit:

A training configuration (an `arguments/*.py` file) switches `render_process` off by default:

```diff
@@ -11,7 +11,7 @@ OptimizationParams = dict(
 iterations = 20000,
 pruning_interval = 8000,
 percent_dense = 0.01,
-render_process=True,
+render_process=False,
 # no_do=False,
 # no_dshs=False
```
Another configuration file makes the same `render_process` change:

```diff
@@ -11,7 +11,7 @@ ModelHiddenParams = dict(
 plane_tv_weight = 0.0002,
 time_smoothness_weight = 0.001,
 l1_time_planes = 0.0001,
-render_process=True
+render_process=False
 )
 OptimizationParams = dict(
 # dataloader=True,
```
A submodule pointer is updated:

```diff
@@ -1 +1 @@
-Subproject commit e49506654e8e11ed8a62d22bcb693e943fdecacf
+Subproject commit f2d8fa9921ea9a6cb9ac1c33a34ebd1b11510657
```

`train.py`:
```diff
@@ -403,8 +403,8 @@ if __name__ == "__main__":
 parser.add_argument('--port', type=int, default=6009)
 parser.add_argument('--debug_from', type=int, default=-1)
 parser.add_argument('--detect_anomaly', action='store_true', default=False)
-parser.add_argument("--test_iterations", nargs="+", type=int, default=[500*i for i in range(100)])
+parser.add_argument("--test_iterations", nargs="+", type=int, default=[3000,14000,20000])
-parser.add_argument("--save_iterations", nargs="+", type=int, default=[1000, 3000, 4000, 5000, 6000, 7_000, 9000, 10000, 12000, 14000, 20000, 30_000, 45000, 60000])
+parser.add_argument("--save_iterations", nargs="+", type=int, default=[3000,14000,20000, 30_000, 45000, 60000])
 parser.add_argument("--quiet", action="store_true")
 parser.add_argument("--checkpoint_iterations", nargs="+", type=int, default=[])
 parser.add_argument("--start_checkpoint", type=str, default = None)
```
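For context on the `--test_iterations` change: the old default, `[500*i for i in range(100)]`, scheduled an evaluation every 500 iterations from 0 up to 49,500 (100 evaluations in total), while the new default keeps only three. A quick check of that arithmetic:

```python
old_default = [500 * i for i in range(100)]  # previous --test_iterations default
new_default = [3000, 14000, 20000]           # default after this commit

# The old schedule ran 100 evaluations, every 500 iterations up to 49,500.
print(len(old_default), old_default[0], old_default[-1])  # 100 0 49500
```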