diff --git a/README.md b/README.md
index e0f73be..88b8cec 100644
--- a/README.md
+++ b/README.md
@@ -30,18 +30,23 @@ Our method converges very quickly. And achieves real-time rendering speed.
 ## Environmental Setups
 Please follow the [3D-GS](https://github.com/graphdeco-inria/gaussian-splatting) to install the relative packages.
 ```bash
-git clone https://github.com/hustvl/4DGaussians --recursive
+git clone https://github.com/hustvl/4DGaussians
 cd 4DGaussians
 conda create -n Gaussians4D python=3.7
+conda activate Gaussians4D
+
 pip install -r requirements.txt
+cd submodules
+git clone https://github.com/ingra14m/depth-diff-gaussian-rasterization
+pip install -e depth-diff-gaussian-rasterization
 ```
-In our environment, we use pytorch=1.13.1+cu116
+In our environment, we use pytorch=1.13.1+cu116.
 ## Data Preparation
 **For synthetic scenes:**
 The dataset provided in [D-NeRF](https://github.com/albertpumarola/D-NeRF) is used. You can download the dataset from [dropbox](https://www.dropbox.com/s/0bf6fl0ye2vz3vr/data.zip?dl=0).
 **For real dynamic scenes:**
-The dataset provided in [HyperNeRF](https://github.com/google/hypernerf) is used. You can download scenes from [Hypernerf Dataset](https://github.com/google/hypernerf/releases/tag/v0.1) and organize them as [Nerfies](https://github.com/google/nerfies#datasets). Meanwhile, [Plenoptic Dataset](https://github.com/facebookresearch/Neural_3D_Video) could be downloaded from their offical websites, to save the memory, you should extract the frames of each video, twhen organize your dataset as follows.
+The dataset provided in [HyperNeRF](https://github.com/google/hypernerf) is used. You can download scenes from [Hypernerf Dataset](https://github.com/google/hypernerf/releases/tag/v0.1) and organize them as [Nerfies](https://github.com/google/nerfies#datasets).
+Meanwhile, the [Plenoptic Dataset](https://github.com/facebookresearch/Neural_3D_Video) can be downloaded from its official website. To save memory, you should extract the frames of each video, then organize your dataset as follows.
 ```
 ├── data
 │ | dnerf
diff --git a/requirements.txt b/requirements.txt
index 3c6bd34..2820ac9 100644
--- a/requirements.txt
+++ b/requirements.txt
@@ -2,8 +2,7 @@ torch==1.13.1
 torchvision==0.14.1
 torchaudio==0.13.1
 mmcv==1.6.0
-matploblib
+matplotlib
 argparse
 lpips
 plyfile
-submodules/depth-diff-gaussian-rasterization
diff --git a/scene/deformation.py b/scene/deformation.py
index 89e1546..9419cfe 100644
--- a/scene/deformation.py
+++ b/scene/deformation.py
@@ -146,10 +146,6 @@ class deform_network(nn.Module):
         return self.deformation_net.get_mlp_parameters() + list(self.timenet.parameters())
     def get_grid_parameters(self):
         return self.deformation_net.get_grid_parameters()
-class Tineuvox(nn.Module):
-    def __init__(self) -> None:
-        super(Tineuvox).__init__()
-        pass
 
 def initialize_weights(m):
     if isinstance(m, nn.Linear):
diff --git a/scene/hexplane.py b/scene/hexplane.py
index 6835bb2..82d44f4 100644
--- a/scene/hexplane.py
+++ b/scene/hexplane.py
@@ -180,11 +180,3 @@ class HexPlaneField(nn.Module):
         features = self.get_density(pts, timestamps)
         return features
 
-if __name__ == "__main__":
-    aabb = torch.tensor([[-3,-3,-3],
-                         [3,3,3]])
-    planes = KPlaneField(aabb)
-    pts = torch.randn(10000,3)
-    time = torch.ones(10000,1)
-    features = planes.forward(pts,time)
-    print(features.shape)
diff --git a/scripts/train_dnerf_all.sh b/scripts/train_dnerf_all.sh
index cecd918..9d1cfa6 100644
--- a/scripts/train_dnerf_all.sh
+++ b/scripts/train_dnerf_all.sh
@@ -1,4 +1,4 @@
-bash scripts/train_ablation.sh dnerf_noboth
+bash scripts/process_dnerf.sh dnerf_tv_test
 wait
 # bash scripts/train_ablation.sh dnerf_3dgs
 # wait
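Beyond being dead code, the `Tineuvox` stub deleted from `scene/deformation.py` contained a subtle bug worth noting: `super(Tineuvox).__init__()` creates an *unbound* super object, so the parent `nn.Module.__init__` is silently never called. A minimal sketch of the pitfall using plain Python classes (no torch dependency; `Base`, `Broken`, and `Fixed` are illustrative names, not from the repo):

```python
class Base:
    def __init__(self):
        self.initialized = True

class Broken(Base):
    def __init__(self):
        # Unbound super: this re-initializes the super object itself,
        # and Base.__init__ never runs -- same bug as the removed stub.
        super(Broken).__init__()

class Fixed(Base):
    def __init__(self):
        # Bound super: Base.__init__ runs as intended.
        super().__init__()

print(hasattr(Broken(), "initialized"))  # False
print(hasattr(Fixed(), "initialized"))   # True
```

For an `nn.Module` subclass this failure mode raises an error only later, when torch attributes are first touched, which is why deleting the unused class (as this diff does) is the cleanest resolution.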