diff --git a/README.md b/README.md
index f937246..45ef9e2 100644
--- a/README.md
+++ b/README.md
@@ -1,29 +1,49 @@
 # 4D Gaussian Splatting for Real-Time Dynamic Scene Rendering
-## ArXiv Preprint
+## CVPR 2024
 ### [Project Page](https://guanjunwu.github.io/4dgs/index.html) | [arXiv Paper](https://arxiv.org/abs/2310.08528)
+[Guanjun Wu](https://guanjunwu.github.io/) ``1*``, [Taoran Yi](https://github.com/taoranyi) ``2*``,
+[Jiemin Fang](https://jaminfong.cn/) ``3‡``, [Lingxi Xie](http://lingxixie.com/) ``3``,
+[Xiaopeng Zhang](https://scholar.google.com/citations?user=Ud6aBAcAAAAJ&hl=zh-CN) ``3``, [Wei Wei](https://www.eric-weiwei.com/) ``1``, [Wenyu Liu](http://eic.hust.edu.cn/professor/liuwenyu/) ``2``, [Qi Tian](https://www.qitian1987.com/) ``3``, [Xinggang Wang](https://xwcv.github.io) ``2‡✉``
-[Guanjun Wu](https://guanjunwu.github.io/)1*, [Taoran Yi](https://github.com/taoranyi)2*,
-[Jiemin Fang](https://jaminfong.cn/)3‡, [Lingxi Xie](http://lingxixie.com/)3,
-[Xiaopeng Zhang](https://scholar.google.com/citations?user=Ud6aBAcAAAAJ&hl=zh-CN)3, [Wei Wei](https://www.eric-weiwei.com/)1, [Wenyu Liu](http://eic.hust.edu.cn/professor/liuwenyu/)2, [Qi Tian](https://www.qitian1987.com/)3, [Xinggang Wang](https://xwcv.github.io)2‡✉
+``1``School of CS, HUST &emsp; ``2``School of EIC, HUST &emsp; ``3``Huawei Inc. &emsp;
-1School of CS, HUST &emsp; 2School of EIC, HUST &emsp; 3Huawei Inc. &emsp;
+``\*`` Equal Contributions. ``$\ddagger$`` Project Lead. ``✉`` Corresponding Author.
-\* Equal Contributions. $\ddagger$ Project Lead. Corresponding Author.
+---
----------------------------------------------------
-
-![block](assets/teaserfig.jpg)
+![block](assets/teaserfig.jpg)
 Our method converges very quickly and achieves real-time rendering speed.
 Colab demo: [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/hustvl/4DGaussians/blob/master/4DGaussians.ipynb) (Thanks [camenduru](https://github.com/camenduru/4DGaussians-colab).)
 Light Gaussian implementation: [This link](https://github.com/pablodawson/4DGaussians) (Thanks [pablodawson](https://github.com/pablodawson))
+## Further works
+
+We sincerely thank the authors of the fantastic works below, which build further applications on our code.
+
+[MD-Splatting: Learning Metric Deformation from 4D Gaussians in Highly Deformable Scenes](https://md-splatting.github.io/)
+
+[4DGen: Grounded 4D Content Generation with Spatial-temporal Consistency](https://vita-group.github.io/4DGen/)
+
+[DreamGaussian4D: Generative 4D Gaussian Splatting](https://github.com/jiawei-ren/dreamgaussian4d)
+
+[EndoGaussian: Real-time Gaussian Splatting for Dynamic Endoscopic Scene Reconstruction](https://github.com/yifliu3/EndoGaussian)
+
+[EndoGS: Deformable Endoscopic Tissues Reconstruction with Gaussian Splatting](https://github.com/HKU-MedAI/EndoGS)
+
+[Endo-4DGS: Endoscopic Monocular Scene Reconstruction with 4D Gaussian Splatting](https://arxiv.org/abs/2401.16416)
+
+## News
+
+2024.02: We removed some logging settings that were used for debugging; the corrected training time is only **8 mins** (previously 20 mins) on the D-NeRF datasets and **30 mins** (previously 1 hour) on the HyperNeRF datasets. The rendering quality is not affected.
 ## Environmental Setups
+
 Please follow the [3D-GS](https://github.com/graphdeco-inria/gaussian-splatting) instructions to install the relevant packages.
+
 ```bash
 git clone https://github.com/hustvl/4DGaussians
 cd 4DGaussians
@@ -35,13 +55,17 @@ pip install -r requirements.txt
 pip install -e submodules/depth-diff-gaussian-rasterization
 pip install -e submodules/simple-knn
 ```
+
 In our environment, we use pytorch=1.13.1+cu116.
+
 ## Data Preparation
-**For synthetic scenes:**
+
+**For synthetic scenes:**
 The dataset provided in [D-NeRF](https://github.com/albertpumarola/D-NeRF) is used. You can download the dataset from [dropbox](https://www.dropbox.com/s/0bf6fl0ye2vz3vr/data.zip?dl=0).
-**For real dynamic scenes:**
+**For real dynamic scenes:**
 The dataset provided in [HyperNeRF](https://github.com/google/hypernerf) is used. You can download scenes from [Hypernerf Dataset](https://github.com/google/hypernerf/releases/tag/v0.1) and organize them as [Nerfies](https://github.com/google/nerfies#datasets).
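After organizing a HyperNeRF scene in the Nerfies layout, a quick sanity check can catch missing files before training. The entry names below are an assumption based on the Nerfies dataset description (they are not part of this repository's code), so adjust them to match your actual download:

```python
import os

# Entries expected in a Nerfies-style scene directory; this list is an
# assumption based on the Nerfies dataset description -- adjust it to
# match the actual files in your download.
EXPECTED_ENTRIES = ["rgb", "camera", "dataset.json", "scene.json", "metadata.json"]

def missing_entries(scene_root):
    """Return the expected entries that are absent under scene_root."""
    return [name for name in EXPECTED_ENTRIES
            if not os.path.exists(os.path.join(scene_root, name))]

# Example: missing_entries("data/hypernerf/vrig-chicken") returns []
# when the scene folder is complete.
```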
 Meanwhile, [Plenoptic Dataset](https://github.com/facebookresearch/Neural_3D_Video) could be downloaded from their official websites. To save memory, you should extract the frames of each video and then organize your dataset as follows.
+
 ```
 ├── data
 │ | dnerf
@@ -69,46 +93,56 @@ The dataset provided in [HyperNeRF](https://github.com/google/hypernerf) is used
 | ├── ...
 ```
-
 ## Training
-For training synthetic scenes such as `bouncingballs`, run
+
+For training synthetic scenes such as `bouncingballs`, run
+
 ```
 python train.py -s data/dnerf/bouncingballs --port 6017 --expname "dnerf/bouncingballs" --configs arguments/dnerf/bouncingballs.py
-```
+```
+
 You can customize your training config through the config files.
-# Checkpoint
+Checkpoint
+
 Also, you can train your model from a checkpoint.
+
 ```python
 python train.py -s data/dnerf/bouncingballs --port 6017 --expname "dnerf/bouncingballs" --configs arguments/dnerf/bouncingballs.py --checkpoint_iterations 200 # change it.
 ```
+
 Then load the checkpoint with:
+
 ```python
 python train.py -s data/dnerf/bouncingballs --port 6017 --expname "dnerf/bouncingballs" --configs arguments/dnerf/bouncingballs.py --start_checkpoint "output/dnerf/bouncingballs/chkpnt_coarse_200.pth" # finestage: --start_checkpoint "output/dnerf/bouncingballs/chkpnt_fine_200.pth"
 ```
 ## Rendering
-
+Run the following script to render the images.
 ```
 python render.py --model_path "output/dnerf/bouncingballs/" --skip_train --configs arguments/dnerf/bouncingballs.py &
 ```
-
 ## Evaluation
-
+You can just run the following script to evaluate the model.
 ```
 python metrics.py --model_path "output/dnerf/bouncingballs/"
 ```
+
 ## Custom Datasets
+
 Install nerfstudio and follow their colmap pipeline.
 ```
 pip install nerfstudio
 ns-process-data images --data data/your-data --output-dir data/your-ns-data
-python train.py -s data/your-ns-data --port 6017 --expname "custom" --configs arguments/hypernerf/default.py
+cp -r data/your-ns-data/images data/your-ns-data/colmap/images
+python train.py -s data/your-ns-data/colmap --port 6017 --expname "custom" --configs arguments/hypernerf/default.py
 ```
@@ -120,6 +154,7 @@ There are some helpful scripts in , please feel free to use them.
 get all point clouds at each timestamp.
 usage:
+
 ```python
 export exp_name="hypernerf"
 python vis_point.py --model_path output/$exp_name/interp/aleks-teapot --configs arguments/$exp_name/default.py
@@ -132,6 +167,7 @@ visualize the weight of Multi-resolution HexPlane module.
 `merge_many_4dgs.py`: merge your trained 4dgs.
 usage:
+
 ```python
 export exp_name="dynerf"
 python merge_many_4dgs.py --model_path output/$exp_name/sear_steak
@@ -139,6 +175,7 @@ python merge_many_4dgs.py --model_path output/$exp_name/sear_steak
 `colmap.sh`: generate point clouds from input data
+
 ```bash
 bash colmap.sh data/hypernerf/virg/vrig-chicken hypernerf
 bash colmap.sh data/dynerf/sear_steak llff
@@ -147,26 +184,33 @@ bash colmap.sh data/dynerf/sear_steak llff
 **Blender** format does not seem to work. You are welcome to raise a pull request to fix it.
 `downsample_point.py`: downsample the point clouds generated by SfM.
+
 ```python
 python scripts/downsample_point.py data/dynerf/sear_steak/colmap/dense/workspace/fused.ply data/dynerf/sear_steak/points3D_downsample2.ply
 ```
+
 In my paper, I always use `colmap.sh` to generate dense point clouds and downsample them to fewer than 40000 points.
 Here are some scripts that may be useful but were never adopted in the paper; you can also try them.
 ---
+
 ## Contributions
 **This project is still under development.
 Please feel free to raise issues or submit pull requests to contribute to our codebase.**
 ---
+
 Some source code of ours is borrowed from [3DGS](https://github.com/graphdeco-inria/gaussian-splatting), [k-planes](https://github.com/Giodiro/kplanes_nerfstudio), [HexPlane](https://github.com/Caoang327/HexPlane), [TiNeuVox](https://github.com/hustvl/TiNeuVox). We sincerely appreciate the excellent works of these authors.
 ## Acknowledgement
 We would like to express our sincere gratitude to [@zhouzhenghong-gt](https://github.com/zhouzhenghong-gt/) for his revisions to our code and discussions on the content of our paper.
+
 ## Citation
-Some insights about neural voxel grids and dynamic scenes reconstruction originate from [TiNeuVox](https://github.com/hustvl/TiNeuVox). If you find this repository/work helpful in your research, welcome to cite these papers and give a ⭐.
+
+Some insights about neural voxel grids and dynamic scene reconstruction originate from [TiNeuVox](https://github.com/hustvl/TiNeuVox). If you find this repository/work helpful in your research, you are welcome to cite these papers and give a ⭐.
+
 ```
 @article{wu20234dgaussians,
   title={4D Gaussian Splatting for Real-Time Dynamic Scene Rendering},
diff --git a/arguments/dnerf/dnerf_default.py b/arguments/dnerf/dnerf_default.py
index ec7fa8e..e1dad90 100644
--- a/arguments/dnerf/dnerf_default.py
+++ b/arguments/dnerf/dnerf_default.py
@@ -11,7 +11,7 @@ OptimizationParams = dict(
     iterations = 20000,
     pruning_interval = 8000,
     percent_dense = 0.01,
-    render_process=True,
+    render_process=False,
     # no_do=False,
     # no_dshs=False
diff --git a/arguments/hypernerf/default.py b/arguments/hypernerf/default.py
index 39035e4..4412b74 100644
--- a/arguments/hypernerf/default.py
+++ b/arguments/hypernerf/default.py
@@ -11,7 +11,7 @@ ModelHiddenParams = dict(
     plane_tv_weight = 0.0002,
     time_smoothness_weight = 0.001,
     l1_time_planes = 0.0001,
-    render_process=True
+    render_process=False
 )
 OptimizationParams = dict(
     # dataloader=True,
diff --git a/submodules/depth-diff-gaussian-rasterization b/submodules/depth-diff-gaussian-rasterization
index e495066..f2d8fa9 160000
--- a/submodules/depth-diff-gaussian-rasterization
+++ b/submodules/depth-diff-gaussian-rasterization
@@ -1 +1 @@
-Subproject commit e49506654e8e11ed8a62d22bcb693e943fdecacf
+Subproject commit f2d8fa9921ea9a6cb9ac1c33a34ebd1b11510657
diff --git a/train.py b/train.py
index a2479fd..4080482 100644
--- a/train.py
+++ b/train.py
@@ -403,8 +403,8 @@ if __name__ == "__main__":
     parser.add_argument('--port', type=int, default=6009)
     parser.add_argument('--debug_from', type=int, default=-1)
     parser.add_argument('--detect_anomaly', action='store_true', default=False)
-    parser.add_argument("--test_iterations", nargs="+", type=int, default=[500*i for i in range(100)])
-    parser.add_argument("--save_iterations", nargs="+", type=int, default=[1000, 3000, 4000, 5000, 6000, 7_000, 9000, 10000, 12000, 14000, 20000, 30_000, 45000, 60000])
+    parser.add_argument("--test_iterations", nargs="+", type=int, default=[3000, 14000, 20000])
+    parser.add_argument("--save_iterations", nargs="+", type=int, default=[3000, 14000, 20000, 30_000, 45000, 60000])
     parser.add_argument("--quiet", action="store_true")
     parser.add_argument("--checkpoint_iterations", nargs="+", type=int, default=[])
     parser.add_argument("--start_checkpoint", type=str, default = None)
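The train.py change above only trims the *default* `--test_iterations`/`--save_iterations` lists; since they are plain argparse `nargs="+"` arguments, a denser schedule can still be requested per run on the command line. A minimal standalone sketch of that behavior (a toy mirror of just these two arguments, not the project's full CLI):

```python
import argparse

# Toy mirror of the two train.py arguments changed above; the real
# script defines many more options.
parser = argparse.ArgumentParser()
parser.add_argument("--test_iterations", nargs="+", type=int,
                    default=[3000, 14000, 20000])
parser.add_argument("--save_iterations", nargs="+", type=int,
                    default=[3000, 14000, 20000, 30_000, 45000, 60000])

# With no flags, the new (sparser) defaults apply.
args = parser.parse_args([])
print(args.test_iterations)        # [3000, 14000, 20000]

# The old dense test schedule is still one flag away.
args = parser.parse_args(["--test_iterations"] + [str(500 * i) for i in range(100)])
print(len(args.test_iterations))   # 100
```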