update README
commit 8ad7f08a9a (parent 150ff0e1f0) · README.md
In our environment, we use `pytorch=1.13.1+cu116`.

**For synthetic scenes:**
The dataset provided in [D-NeRF](https://github.com/albertpumarola/D-NeRF) is used. You can download it from [Dropbox](https://www.dropbox.com/s/0bf6fl0ye2vz3vr/data.zip?dl=0).
**For real dynamic scenes:**
The dataset provided in [HyperNeRF](https://github.com/google/hypernerf) is used. You can download scenes from the [HyperNeRF dataset release](https://github.com/google/hypernerf/releases/tag/v0.1) and organize them as in [Nerfies](https://github.com/google/nerfies#datasets). The [Plenoptic Dataset](https://github.com/facebookresearch/Neural_3D_Video) can be downloaded from its official website. To save memory, extract the frames of each video first (e.g. with `scripts/preprocess_dynerf.py`, as shown below) and then organize your dataset as follows.
```
├── data
│   ...
```
For training synthetic scenes such as `bouncingballs`, run
```bash
python train.py -s data/dnerf/bouncingballs --port 6017 --expname "dnerf/bouncingballs" --configs arguments/dnerf/bouncingballs.py
```
For training DyNeRF scenes such as `cut_roasted_beef`, run
```bash
# First, extract the frames of each video.
python scripts/preprocess_dynerf.py --datadir data/dynerf/cut_roasted_beef
# Second, generate point clouds from the input data.
bash colmap.sh data/dynerf/cut_roasted_beef llff
# Third, downsample the point clouds generated in the second step.
python scripts/downsample_point.py data/dynerf/cut_roasted_beef/colmap/dense/workspace/fused.ply data/dynerf/cut_roasted_beef/points3D_downsample2.ply
# Finally, train.
python train.py -s data/dynerf/cut_roasted_beef --port 6017 --expname "dynerf/cut_roasted_beef" --configs arguments/dynerf/cut_roasted_beef.py
```
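For intuition, the third step only thins the dense fused point cloud into a lighter initialization for training. A minimal sketch of that idea, assuming Open3D is available; the repo's `scripts/downsample_point.py` is the authoritative implementation, and the voxel size below is an arbitrary assumption:

```python
# Illustrative point-cloud downsampling, not the repo's actual
# scripts/downsample_point.py. Assumes Open3D is installed; the voxel
# size is a guess and controls how sparse the output becomes.
import sys

import open3d as o3d

def downsample(src_path: str, dst_path: str, voxel_size: float = 0.02) -> None:
    pcd = o3d.io.read_point_cloud(src_path)              # e.g. .../fused.ply
    down = pcd.voxel_down_sample(voxel_size=voxel_size)  # one point per voxel
    o3d.io.write_point_cloud(dst_path, down)
    print(f"{len(pcd.points)} -> {len(down.points)} points")

if __name__ == "__main__":
    downsample(sys.argv[1], sys.argv[2])
```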
You can customize your training configuration through the config files under `arguments/`.
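As the `--configs` flags above suggest, each scene is driven by a Python config file (e.g. `arguments/dnerf/bouncingballs.py`). The exact parameter names are defined by the repo; purely to illustrate the pattern, such a file overrides defaults with plain dicts (the dict and key names below are hypothetical):

```python
# Hypothetical scene config; the dict and key names are illustrative,
# not the repo's actual parameters. The training script merges
# overrides like these over its built-in defaults.
OptimizationParams = dict(
    iterations=20000,   # total optimization steps (assumed key)
    batch_size=2,       # views sampled per step (assumed key)
)
ModelHiddenParams = dict(
    net_width=128,      # deformation-network width (assumed key)
    defor_depth=2,      # deformation-network depth (assumed key)
)
```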
**Checkpoint**
Usage of `merge_many_4dgs.py`:

```bash
export exp_name="dynerf"
python merge_many_4dgs.py --model_path output/$exp_name/sear_steak
```
`preprocess_dynerf.py`: extracts the frames of each video. Usage:

```bash
python scripts/preprocess_dynerf.py --datadir data/dynerf/sear_steak
```
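Conceptually, frame extraction just walks every camera video under `--datadir` and dumps numbered images. A rough sketch of the idea, assuming OpenCV; the `.mp4` naming and the output layout are assumptions, and the actual `preprocess_dynerf.py` may differ:

```python
# Illustrative frame extraction, not the repo's actual preprocess_dynerf.py.
# Assumes each camera is an .mp4 under --datadir and OpenCV is installed.
import glob
import os

import cv2

def extract_frames(datadir: str) -> None:
    for video_path in sorted(glob.glob(os.path.join(datadir, "*.mp4"))):
        name = os.path.splitext(os.path.basename(video_path))[0]
        out_dir = os.path.join(datadir, name)   # assumed output layout
        os.makedirs(out_dir, exist_ok=True)
        cap = cv2.VideoCapture(video_path)
        idx = 0
        while True:
            ok, frame = cap.read()               # grab frames until EOF
            if not ok:
                break
            cv2.imwrite(os.path.join(out_dir, f"{idx:04d}.png"), frame)
            idx += 1
        cap.release()

extract_frames("data/dynerf/sear_steak")
```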
`colmap.sh`: generates point clouds from the input data.
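`colmap.sh` wraps a standard COLMAP reconstruction ending in the dense `fused.ply` consumed by the downsampling step above. A rough sketch of that kind of pipeline, driven from Python for illustration; the paths and flags here are assumptions, so treat the repo's `colmap.sh` as authoritative:

```python
# Illustrative COLMAP pipeline, not the repo's actual colmap.sh.
# Assumes the `colmap` CLI is on PATH and images live under <workdir>/images;
# the dense-stereo stages additionally require a CUDA build of COLMAP.
import os
import subprocess

def run(*args: str) -> None:
    print("+", " ".join(args))
    subprocess.run(args, check=True)

def reconstruct(workdir: str) -> None:
    db = os.path.join(workdir, "colmap", "database.db")
    images = os.path.join(workdir, "images")              # assumed layout
    sparse = os.path.join(workdir, "colmap", "sparse")
    dense = os.path.join(workdir, "colmap", "dense", "workspace")
    os.makedirs(sparse, exist_ok=True)
    os.makedirs(dense, exist_ok=True)
    run("colmap", "feature_extractor", "--database_path", db,
        "--image_path", images)
    run("colmap", "exhaustive_matcher", "--database_path", db)
    run("colmap", "mapper", "--database_path", db, "--image_path", images,
        "--output_path", sparse)
    run("colmap", "image_undistorter", "--image_path", images,
        "--input_path", os.path.join(sparse, "0"), "--output_path", dense)
    run("colmap", "patch_match_stereo", "--workspace_path", dense)
    run("colmap", "stereo_fusion", "--workspace_path", dense,
        "--output_path", os.path.join(dense, "fused.ply"))

reconstruct("data/dynerf/cut_roasted_beef")
```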