Update README.md

This commit is contained in:
Xinggang Wang 2023-10-17 11:09:58 -05:00 committed by GitHub
parent ec0f0ccdb6
commit 40dd2f3b49

@@ -1,8 +1,8 @@
 # 4D Gaussian Splatting for Real-Time Dynamic Scene Rendering
-## Arxiv Preprint
+## arXiv Preprint
-### [Project Page](https://guanjunwu.github.io/4dgs/index.html)| [Arxiv Paper](https://arxiv.org/abs/2310.08528)
+### [Project Page](https://guanjunwu.github.io/4dgs/index.html)| [arXiv Paper](https://arxiv.org/abs/2310.08528)
 [Guanjun Wu](https://guanjunwu.github.io/)<sup>1*</sup>, [Taoran Yi](https://github.com/taoranyi)<sup>2*</sup>,
@@ -13,7 +13,7 @@
 ---------------------------------------------------
 ![block](assets/teaserfig.png)
-Our method converges very quickly. And achieves real-time rendering speed.
+Our method converges very quickly and achieves real-time rendering speed.
 Colab demo:[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/camenduru/4DGaussians-colab/blob/main/4DGaussians_colab.ipynb) (Thanks [camenduru](https://github.com/camenduru/4DGaussians-colab).)
@@ -45,7 +45,7 @@ In our environment, we use pytorch=1.13.1+cu116.
 The dataset provided in [D-NeRF](https://github.com/albertpumarola/D-NeRF) is used. You can download the dataset from [dropbox](https://www.dropbox.com/s/0bf6fl0ye2vz3vr/data.zip?dl=0).
 **For real dynamic scenes:**
-The dataset provided in [HyperNeRF](https://github.com/google/hypernerf) is used. You can download scenes from [Hypernerf Dataset](https://github.com/google/hypernerf/releases/tag/v0.1) and organize them as [Nerfies](https://github.com/google/nerfies#datasets). Meanwhile, [Plenoptic Dataset](https://github.com/facebookresearch/Neural_3D_Video) could be downloaded from their offical websites, to save the memory, you should extract the frames of each video, them organize your dataset as follows.
+The dataset provided in [HyperNeRF](https://github.com/google/hypernerf) is used. You can download scenes from [Hypernerf Dataset](https://github.com/google/hypernerf/releases/tag/v0.1) and organize them as [Nerfies](https://github.com/google/nerfies#datasets). Meanwhile, [Plenoptic Dataset](https://github.com/facebookresearch/Neural_3D_Video) can be downloaded from their official websites. To save memory, you should extract the frames of each video and then organize your dataset as follows.
 ```
 ├── data
 │ | dnerf
@@ -79,7 +79,7 @@ For training synthetic scenes such as `lego`, run
 ```
 python train.py -s data/dnerf/bouncingballs --port 6017 --expname "dnerf/bouncingballs" --configs arguments/dnerf/bouncingballs.py
 ```
-You can custom your training config through the config files.
+You can customize your training config through the config files.
 ## Rendering
 Run the following script to render the images.
@@ -89,7 +89,7 @@ python render.py --model_path "output/dnerf/bouncingballs/" --skip_train --conf
 ## Evaluation
-Run the following script to evaluate the model.
+You can just run the following script to evaluate the model.
 ```
 python metrics.py --model_path "output/dnerf/bouncingballs/"