From e4039a08e8f55f522564419ac1920ad7def39375 Mon Sep 17 00:00:00 2001
From: guanjunwu <985091524@qq.com>
Date: Tue, 27 Feb 2024 17:21:42 +0800
Subject: [PATCH 1/3] modified readme, delete log for debugging
---
README.md | 84 +++++++++++++++-----
arguments/dnerf/dnerf_default.py | 2 +-
arguments/hypernerf/default.py | 2 +-
submodules/depth-diff-gaussian-rasterization | 2 +-
train.py | 4 +-
5 files changed, 69 insertions(+), 25 deletions(-)
diff --git a/README.md b/README.md
index f937246..45ef9e2 100644
--- a/README.md
+++ b/README.md
@@ -1,29 +1,49 @@
# 4D Gaussian Splatting for Real-Time Dynamic Scene Rendering
-## ArXiv Preprint
+## CVPR 2024
### [Project Page](https://guanjunwu.github.io/4dgs/index.html)| [arXiv Paper](https://arxiv.org/abs/2310.08528)
+[Guanjun Wu](https://guanjunwu.github.io/) ``1*``, [Taoran Yi](https://github.com/taoranyi) ``2*``,
+[Jiemin Fang](https://jaminfong.cn/) ``3‡``, [Lingxi Xie](http://lingxixie.com/) ``3 ``, ``[Xiaopeng Zhang](https://scholar.google.com/citations?user=Ud6aBAcAAAAJ&hl=zh-CN) ``3 ``, [Wei Wei](https://www.eric-weiwei.com/) ``1 ``,[Wenyu Liu](http://eic.hust.edu.cn/professor/liuwenyu/) ``2 ``, [Qi Tian](https://www.qitian1987.com/) ``3 `` , [Xinggang Wang](https://xwcv.github.io) ``2‡✉``
-[Guanjun Wu](https://guanjunwu.github.io/)1*, [Taoran Yi](https://github.com/taoranyi)2*,
-[Jiemin Fang](https://jaminfong.cn/)3‡, [Lingxi Xie](http://lingxixie.com/)3, [Xiaopeng Zhang](https://scholar.google.com/citations?user=Ud6aBAcAAAAJ&hl=zh-CN)3, [Wei Wei](https://www.eric-weiwei.com/)1,[Wenyu Liu](http://eic.hust.edu.cn/professor/liuwenyu/)2, [Qi Tian](https://www.qitian1987.com/)3 , [Xinggang Wang](https://xwcv.github.io)2‡✉
+``1 ``School of CS, HUST ``2 ``School of EIC, HUST ``3 ``Huawei Inc.
-1School of CS, HUST 2School of EIC, HUST 3Huawei Inc.
+``\*`` Equal Contributions. ``$\ddagger$`` Project Lead. ``✉`` Corresponding Author.
-\* Equal Contributions. $\ddagger$ Project Lead. ✉ Corresponding Author.
+---
----------------------------------------------------
-
-
+
Our method converges very quickly and achieves real-time rendering speed.
Colab demo:[](https://colab.research.google.com/github/hustvl/4DGaussians/blob/master/4DGaussians.ipynb) (Thanks [camenduru](https://github.com/camenduru/4DGaussians-colab).)
Light Gaussian implementation: [This link](https://github.com/pablodawson/4DGaussians) (Thanks [pablodawson](https://github.com/pablodawson))
+## Further works
+
+We sincerely thank the authors of the following fantastic works that build further applications on our code.
+
+[MD-Splatting: Learning Metric Deformation from 4D Gaussians in Highly Deformable Scenes](https://md-splatting.github.io/)
+
+[4DGen: Grounded 4D Content Generation with Spatial-temporal Consistency](https://vita-group.github.io/4DGen/)
+
+[DreamGaussian4D: Generative 4D Gaussian Splatting](https://github.com/jiawei-ren/dreamgaussian4d)
+
+[EndoGaussian: Real-time Gaussian Splatting for Dynamic Endoscopic Scene Reconstruction](https://github.com/yifliu3/EndoGaussian)
+
+[EndoGS: Deformable Endoscopic Tissues Reconstruction with Gaussian Splatting](https://github.com/HKU-MedAI/EndoGS)
+
+[Endo-4DGS: Endoscopic Monocular Scene Reconstruction with 4D Gaussian Splatting](https://arxiv.org/abs/2401.16416)
+
+## News
+
+2024.02: We removed some logging settings used for debugging; the corrected training time is only **8 mins** (previously 20 mins) on the D-NeRF datasets and **30 mins** (previously 1 hour) on the HyperNeRF datasets. The rendering quality is not affected.
## Environmental Setups
+
Please follow [3D-GS](https://github.com/graphdeco-inria/gaussian-splatting) to install the relevant packages.
+
```bash
git clone https://github.com/hustvl/4DGaussians
cd 4DGaussians
@@ -35,13 +55,17 @@ pip install -r requirements.txt
pip install -e submodules/depth-diff-gaussian-rasterization
pip install -e submodules/simple-knn
```
+
In our environment, we use pytorch=1.13.1+cu116.
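If you need to rebuild this environment from scratch, a minimal sketch is given below; the environment name is arbitrary and the exact install commands are assumptions, so adjust them to your own CUDA toolkit.

```bash
# Hypothetical environment setup; versions are assumptions matching the note above.
conda create -n 4dgaussians python=3.9 -y
conda activate 4dgaussians
# PyTorch 1.13.1 built against CUDA 11.6.
pip install torch==1.13.1+cu116 torchvision==0.14.1+cu116 --extra-index-url https://download.pytorch.org/whl/cu116
```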
+
## Data Preparation
-**For synthetic scenes:**
+
+**For synthetic scenes:**
The dataset provided in [D-NeRF](https://github.com/albertpumarola/D-NeRF) is used. You can download the dataset from [dropbox](https://www.dropbox.com/s/0bf6fl0ye2vz3vr/data.zip?dl=0).
-**For real dynamic scenes:**
+**For real dynamic scenes:**
The dataset provided in [HyperNeRF](https://github.com/google/hypernerf) is used. You can download scenes from [Hypernerf Dataset](https://github.com/google/hypernerf/releases/tag/v0.1) and organize them as [Nerfies](https://github.com/google/nerfies#datasets). Meanwhile, the [Plenoptic Dataset](https://github.com/facebookresearch/Neural_3D_Video) can be downloaded from its official website. To save memory, extract the frames of each video and then organize your dataset as follows (a frame-extraction sketch is given after the layout below).
+
```
├── data
│ | dnerf
@@ -69,46 +93,56 @@ The dataset provided in [HyperNeRF](https://github.com/google/hypernerf) is used
| ├── ...
```
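For the video-based scenes, a minimal frame-extraction sketch is shown below. The per-camera file names (`cam*.mp4`), the output pattern, and the target folders are assumptions; match them to the layout expected by the data loader.

```bash
# Hypothetical example: extract every frame of each per-camera video (requires ffmpeg).
cd data/dynerf/sear_steak
for v in cam*.mp4; do
  out="${v%.mp4}/images"        # e.g. cam00/images
  mkdir -p "$out"
  ffmpeg -i "$v" -start_number 0 "$out/%04d.png"
done
```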
-
## Training
-For training synthetic scenes such as `bouncingballs`, run
-```
+
+For training synthetic scenes such as `bouncingballs`, run
+
+```
python train.py -s data/dnerf/bouncingballs --port 6017 --expname "dnerf/bouncingballs" --configs arguments/dnerf/bouncingballs.py
-```
+```
+
You can customize your training config through the config files.
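As a concrete sketch of such customization (the file name `my_scene.py` is only illustrative), you can copy an existing per-scene config, edit it, and point `--configs` at your copy:

```bash
# Derive a new config from an existing one; the new file name is illustrative.
cp arguments/dnerf/bouncingballs.py arguments/dnerf/my_scene.py
# Edit arguments/dnerf/my_scene.py (e.g. values in OptimizationParams), then train with it:
python train.py -s data/dnerf/bouncingballs --port 6017 --expname "dnerf/my_scene" --configs arguments/dnerf/my_scene.py
```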
-# Checkpoint
+### Checkpoint
+
Also, you can train your model with checkpoints.
+
```python
python train.py -s data/dnerf/bouncingballs --port 6017 --expname "dnerf/bouncingballs" --configs arguments/dnerf/bouncingballs.py --checkpoint_iterations 200 # change it.
```
+
Then load checkpoint with:
+
```python
python train.py -s data/dnerf/bouncingballs --port 6017 --expname "dnerf/bouncingballs" --configs arguments/dnerf/bouncingballs.py --start_checkpoint "output/dnerf/bouncingballs/chkpnt_coarse_200.pth"
# finestage: --start_checkpoint "output/dnerf/bouncingballs/chkpnt_fine_200.pth"
```
## Rendering
-Run the following script to render the images.
+
+Run the following script to render the images.
```
python render.py --model_path "output/dnerf/bouncingballs/" --skip_train --configs arguments/dnerf/bouncingballs.py &
```
-
## Evaluation
-You can just run the following script to evaluate the model.
+
+You can just run the following script to evaluate the model.
```
python metrics.py --model_path "output/dnerf/bouncingballs/"
```
+
## Custom Datasets
+
Install nerfstudio and follow their colmap pipeline.
```
pip install nerfstudio
ns-process-data images --data data/your-data --output-dir data/your-ns-data
-python train.py -s data/your-ns-data --port 6017 --expname "custom" --configs arguments/hypernerf/default.py
+cp -r data/your-ns-data/images data/your-ns-data/colmap/images
+python train.py -s data/your-ns-data/colmap --port 6017 --expname "custom" --configs arguments/hypernerf/default.py
```
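After training on custom data, rendering and evaluation follow the same pattern as for the other scenes; the paths below assume the default output layout `output/<expname>`.

```bash
# Render and evaluate the run trained with --expname "custom" (paths are assumptions).
python render.py --model_path "output/custom/" --skip_train --configs arguments/hypernerf/default.py
python metrics.py --model_path "output/custom/"
```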
@@ -120,6 +154,7 @@ There are some helpful scripts in , please feel free to use them.
get all point clouds at each timestamp.
usage:
+
```python
export exp_name="hypernerf"
python vis_point.py --model_path output/$exp_name/interp/aleks-teapot --configs arguments/$exp_name/default.py
@@ -132,6 +167,7 @@ visualize the weight of Multi-resolution HexPlane module.
`merge_many_4dgs.py`:
merge your trained 4dgs.
usage:
+
```python
export exp_name="dynerf"
python merge_many_4dgs.py --model_path output/$exp_name/sear_steak
@@ -139,6 +175,7 @@ python merge_many_4dgs.py --model_path output/$exp_name/sear_steak
`colmap.sh`:
generate point clouds from input data
+
```bash
bash colmap.sh data/hypernerf/virg/vrig-chicken hypernerf
bash colmap.sh data/dynerf/sear_steak llff
@@ -147,26 +184,33 @@ bash colmap.sh data/dynerf/sear_steak llff
**Blender** format doesn't seem to work. You are welcome to raise a pull request to fix it.
`downsample_point.py`: downsample the point clouds generated by SfM.
+
```python
python scripts/downsample_point.py data/dynerf/sear_steak/colmap/dense/workspace/fused.ply data/dynerf/sear_steak/points3D_downsample2.ply
```
+
In my paper, I always use `colmap.sh` to generate dense point clouds and downsample them to fewer than 40,000 points.
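A quick sanity check of the downsampled cloud's size is to count its points; the snippet below assumes `open3d` is installed and reuses the path from the command above.

```bash
# Hypothetical check: print the number of points in the downsampled .ply (requires open3d).
python -c "import open3d as o3d; print(len(o3d.io.read_point_cloud('data/dynerf/sear_steak/points3D_downsample2.ply').points))"
```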
Here are some scripts that may be useful but were never adopted in my paper; you can also try them.
---
+
## Contributions
**This project is still under development. Please feel free to raise issues or submit pull requests to contribute to our codebase.**
---
+
Some of our source code is borrowed from [3DGS](https://github.com/graphdeco-inria/gaussian-splatting), [k-planes](https://github.com/Giodiro/kplanes_nerfstudio), [HexPlane](https://github.com/Caoang327/HexPlane), and [TiNeuVox](https://github.com/hustvl/TiNeuVox). We sincerely appreciate the excellent work of these authors.
## Acknowledgement
We would like to express our sincere gratitude to [@zhouzhenghong-gt](https://github.com/zhouzhenghong-gt/) for his revisions to our code and discussions on the content of our paper.
+
## Citation
-Some insights about neural voxel grids and dynamic scenes reconstruction originate from [TiNeuVox](https://github.com/hustvl/TiNeuVox). If you find this repository/work helpful in your research, welcome to cite these papers and give a ⭐.
+
+Some insights about neural voxel grids and dynamic scene reconstruction originate from [TiNeuVox](https://github.com/hustvl/TiNeuVox). If you find this repository/work helpful in your research, you are welcome to cite these papers and give a ⭐.
+
```
@article{wu20234dgaussians,
title={4D Gaussian Splatting for Real-Time Dynamic Scene Rendering},
diff --git a/arguments/dnerf/dnerf_default.py b/arguments/dnerf/dnerf_default.py
index ec7fa8e..e1dad90 100644
--- a/arguments/dnerf/dnerf_default.py
+++ b/arguments/dnerf/dnerf_default.py
@@ -11,7 +11,7 @@ OptimizationParams = dict(
iterations = 20000,
pruning_interval = 8000,
percent_dense = 0.01,
- render_process=True,
+ render_process=False,
# no_do=False,
# no_dshs=False
diff --git a/arguments/hypernerf/default.py b/arguments/hypernerf/default.py
index 39035e4..4412b74 100644
--- a/arguments/hypernerf/default.py
+++ b/arguments/hypernerf/default.py
@@ -11,7 +11,7 @@ ModelHiddenParams = dict(
plane_tv_weight = 0.0002,
time_smoothness_weight = 0.001,
l1_time_planes = 0.0001,
- render_process=True
+ render_process=False
)
OptimizationParams = dict(
# dataloader=True,
diff --git a/submodules/depth-diff-gaussian-rasterization b/submodules/depth-diff-gaussian-rasterization
index e495066..f2d8fa9 160000
--- a/submodules/depth-diff-gaussian-rasterization
+++ b/submodules/depth-diff-gaussian-rasterization
@@ -1 +1 @@
-Subproject commit e49506654e8e11ed8a62d22bcb693e943fdecacf
+Subproject commit f2d8fa9921ea9a6cb9ac1c33a34ebd1b11510657
diff --git a/train.py b/train.py
index a2479fd..4080482 100644
--- a/train.py
+++ b/train.py
@@ -403,8 +403,8 @@ if __name__ == "__main__":
parser.add_argument('--port', type=int, default=6009)
parser.add_argument('--debug_from', type=int, default=-1)
parser.add_argument('--detect_anomaly', action='store_true', default=False)
- parser.add_argument("--test_iterations", nargs="+", type=int, default=[500*i for i in range(100)])
- parser.add_argument("--save_iterations", nargs="+", type=int, default=[1000, 3000, 4000, 5000, 6000, 7_000, 9000, 10000, 12000, 14000, 20000, 30_000, 45000, 60000])
+ parser.add_argument("--test_iterations", nargs="+", type=int, default=[3000,14000,20000])
+ parser.add_argument("--save_iterations", nargs="+", type=int, default=[3000,14000,20000, 30_000, 45000, 60000])
parser.add_argument("--quiet", action="store_true")
parser.add_argument("--checkpoint_iterations", nargs="+", type=int, default=[])
parser.add_argument("--start_checkpoint", type=str, default = None)
From d1ba0c3e2f53c9baf372385de618e6bd55be69b9 Mon Sep 17 00:00:00 2001
From: Geralt_of_Rivia <87054407+guanjunwu@users.noreply.github.com>
Date: Tue, 27 Feb 2024 17:30:31 +0800
Subject: [PATCH 2/3] Update README.md
---
README.md | 45 +++++++++++++++++++++++++--------------------
1 file changed, 25 insertions(+), 20 deletions(-)
diff --git a/README.md b/README.md
index 45ef9e2..19c1fa8 100644
--- a/README.md
+++ b/README.md
@@ -4,37 +4,25 @@
### [Project Page](https://guanjunwu.github.io/4dgs/index.html)| [arXiv Paper](https://arxiv.org/abs/2310.08528)
-[Guanjun Wu](https://guanjunwu.github.io/) ``1*``, [Taoran Yi](https://github.com/taoranyi) ``2*``,
-[Jiemin Fang](https://jaminfong.cn/) ``3‡``, [Lingxi Xie](http://lingxixie.com/) ``3 ``, ``[Xiaopeng Zhang](https://scholar.google.com/citations?user=Ud6aBAcAAAAJ&hl=zh-CN) ``3 ``, [Wei Wei](https://www.eric-weiwei.com/) ``1 ``,[Wenyu Liu](http://eic.hust.edu.cn/professor/liuwenyu/) ``2 ``, [Qi Tian](https://www.qitian1987.com/) ``3 `` , [Xinggang Wang](https://xwcv.github.io) ``2‡✉``
+[Guanjun Wu](https://guanjunwu.github.io/) 1*, [Taoran Yi](https://github.com/taoranyi) 2*,
+[Jiemin Fang](https://jaminfong.cn/) 3‡, [Lingxi Xie](http://lingxixie.com/) 3, [Xiaopeng Zhang](https://scholar.google.com/citations?user=Ud6aBAcAAAAJ&hl=zh-CN) 3, [Wei Wei](https://www.eric-weiwei.com/) 1, [Wenyu Liu](http://eic.hust.edu.cn/professor/liuwenyu/) 2, [Qi Tian](https://www.qitian1987.com/) 3, [Xinggang Wang](https://xwcv.github.io) 2‡✉
-``1 ``School of CS, HUST ``2 ``School of EIC, HUST ``3 ``Huawei Inc.
+1 School of CS, HUST 2 School of EIC, HUST 3 Huawei Inc.
-``\*`` Equal Contributions. ``$\ddagger$`` Project Lead. ``✉`` Corresponding Author.
+\* Equal Contributions. $\ddagger$ Project Lead. ✉ Corresponding Author.
---

Our method converges very quickly and achieves real-time rendering speed.
-Colab demo:[](https://colab.research.google.com/github/hustvl/4DGaussians/blob/master/4DGaussians.ipynb) (Thanks [camenduru](https://github.com/camenduru/4DGaussians-colab).)
+New Colab demo:[](https://colab.research.google.com/drive/1wz0D5Y9egAlcxXy8YO9UmpQ9oH51R7OW?usp=sharing) (Thanks [Tasmay-Tibrewal](https://github.com/Tasmay-Tibrewal))
+
+Old Colab demo:[](https://colab.research.google.com/github/hustvl/4DGaussians/blob/master/4DGaussians.ipynb) (Thanks [camenduru](https://github.com/camenduru/4DGaussians-colab).)
Light Gaussian implementation: [This link](https://github.com/pablodawson/4DGaussians) (Thanks [pablodawson](https://github.com/pablodawson))
-## Further works
-
-We sincerely thank the authors of the following fantastic works that build further applications on our code.
-
-[MD-Splatting: Learning Metric Deformation from 4D Gaussians in Highly Deformable Scenes](https://md-splatting.github.io/)
-
-[4DGen: Grounded 4D Content Generation with Spatial-temporal Consistency](https://vita-group.github.io/4DGen/)
-
-[DreamGaussian4D: Generative 4D Gaussian Splatting](https://github.com/jiawei-ren/dreamgaussian4d)
-
-[EndoGaussian: Real-time Gaussian Splatting for Dynamic Endoscopic Scene Reconstruction](https://github.com/yifliu3/EndoGaussian)
-
-[EndoGS: Deformable Endoscopic Tissues Reconstruction with Gaussian Splatting](https://github.com/HKU-MedAI/EndoGS)
-
-[Endo-4DGS: Endoscopic Monocular Scene Reconstruction with 4D Gaussian Splatting](https://arxiv.org/abs/2401.16416)
## News
@@ -193,6 +181,23 @@ In my paper, I always use `colmap.sh` to generate dense point clouds and downsam
Here are some scripts that may be useful but were never adopted in my paper; you can also try them.
+## Further works
+
+We sincerely thank the authors of the following fantastic works that build further applications on our code.
+
+[MD-Splatting: Learning Metric Deformation from 4D Gaussians in Highly Deformable Scenes](https://md-splatting.github.io/)
+
+[4DGen: Grounded 4D Content Generation with Spatial-temporal Consistency](https://vita-group.github.io/4DGen/)
+
+[DreamGaussian4D: Generative 4D Gaussian Splatting](https://github.com/jiawei-ren/dreamgaussian4d)
+
+[EndoGaussian: Real-time Gaussian Splatting for Dynamic Endoscopic Scene Reconstruction](https://github.com/yifliu3/EndoGaussian)
+
+[EndoGS: Deformable Endoscopic Tissues Reconstruction with Gaussian Splatting](https://github.com/HKU-MedAI/EndoGS)
+
+[Endo-4DGS: Endoscopic Monocular Scene Reconstruction with 4D Gaussian Splatting](https://arxiv.org/abs/2401.16416)
+
+
---
## Contributions
From 84426e538a4eb31b47c72fde5d53509a34dc4f6d Mon Sep 17 00:00:00 2001
From: Geralt_of_Rivia <87054407+guanjunwu@users.noreply.github.com>
Date: Tue, 27 Feb 2024 17:34:58 +0800
Subject: [PATCH 3/3] Update README.md
---
README.md | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/README.md b/README.md
index 19c1fa8..1b9f4b9 100644
--- a/README.md
+++ b/README.md
@@ -26,7 +26,7 @@ Light Gaussian implementation: [This link](https://github.com/pablodawson/4DGaus
## News
-2024.02: We removed some logging settings used for debugging; the corrected training time is only **8 mins** (previously 20 mins) on the D-NeRF datasets and **30 mins** (previously 1 hour) on the HyperNeRF datasets. The rendering quality is not affected.
+2024.02: Accepted by CVPR 2024. We removed some logging settings used for debugging; the corrected training time is only **8 mins** (previously 20 mins) on the D-NeRF datasets and **30 mins** (previously 1 hour) on the HyperNeRF datasets. The rendering quality is not affected.
## Environmental Setups