fix-some-typo

guanjunwu 2023-10-14 18:11:38 +08:00
parent 793cfb880c
commit d3fdb452e1
5 changed files with 10 additions and 18 deletions

View File

@@ -30,18 +30,23 @@ Our method converges very quickly and achieves real-time rendering speed.
## Environmental Setups
Please follow [3D-GS](https://github.com/graphdeco-inria/gaussian-splatting) to install the relevant packages.
```bash
-git clone https://github.com/hustvl/4DGaussians --recursive
+git clone https://github.com/hustvl/4DGaussians
cd 4DGaussians
conda create -n Gaussians4D python=3.7
conda activate Gaussians4D
pip install -r requirements.txt
+cd submodules
+git clone https://github.com/ingra14m/depth-diff-gaussian-rasterization
+pip install -e depth-diff-gaussian-rasterization
```
-In our environment, we use pytorch=1.13.1+cu116
+In our environment, we use pytorch=1.13.1+cu116.
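To verify the setup before moving on, a quick check can help (a minimal sketch; assumes the `Gaussians4D` conda environment is active):

```python
# Minimal environment sanity check; run inside the activated
# Gaussians4D conda environment.
import torch

print(torch.__version__)           # expected: 1.13.1+cu116
print(torch.cuda.is_available())   # expected: True on a CUDA-capable machine
```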
## Data Preparation
**For synthetic scenes:**
The dataset provided in [D-NeRF](https://github.com/albertpumarola/D-NeRF) is used. You can download the dataset from [dropbox](https://www.dropbox.com/s/0bf6fl0ye2vz3vr/data.zip?dl=0).
**For real dynamic scenes:**
-The dataset provided in [HyperNeRF](https://github.com/google/hypernerf) is used. You can download scenes from [Hypernerf Dataset](https://github.com/google/hypernerf/releases/tag/v0.1) and organize them as [Nerfies](https://github.com/google/nerfies#datasets). Meanwhile, [Plenoptic Dataset](https://github.com/facebookresearch/Neural_3D_Video) could be downloaded from their offical websites, to save the memory, you should extract the frames of each video, twhen organize your dataset as follows.
+The dataset provided in [HyperNeRF](https://github.com/google/hypernerf) is used. You can download scenes from the [HyperNeRF Dataset](https://github.com/google/hypernerf/releases/tag/v0.1) and organize them as in [Nerfies](https://github.com/google/nerfies#datasets). Meanwhile, the [Plenoptic Dataset](https://github.com/facebookresearch/Neural_3D_Video) can be downloaded from its official website. To save memory, extract the frames of each video first (a frame-extraction sketch follows), then organize your dataset as shown below.
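One way to extract frames is with OpenCV (a minimal sketch, not the repo's own preprocessing; the paths are hypothetical placeholders, and `opencv-python` is assumed to be installed):

```python
# Frame-extraction sketch using OpenCV; the paths are hypothetical
# placeholders, and opencv-python is assumed to be installed.
import os
import cv2

def extract_frames(video_path: str, out_dir: str) -> None:
    os.makedirs(out_dir, exist_ok=True)
    cap = cv2.VideoCapture(video_path)
    idx = 0
    while True:
        ok, frame = cap.read()   # ok is False once the video is exhausted
        if not ok:
            break
        cv2.imwrite(os.path.join(out_dir, f"{idx:05d}.png"), frame)
        idx += 1
    cap.release()

extract_frames("cam00.mp4", "frames/cam00")
```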
```
├── data
│   ├── dnerf

View File

@@ -2,8 +2,7 @@ torch==1.13.1
torchvision==0.14.1
torchaudio==0.13.1
mmcv==1.6.0
-matploblib
+matplotlib
argparse
lpips
plyfile
-submodules/depth-diff-gaussian-rasterization

View File

@@ -146,10 +146,6 @@ class deform_network(nn.Module):
        return self.deformation_net.get_mlp_parameters() + list(self.timenet.parameters())
    def get_grid_parameters(self):
        return self.deformation_net.get_grid_parameters()
-class Tineuvox(nn.Module):
-    def __init__(self) -> None:
-        super(Tineuvox).__init__()
-        pass
def initialize_weights(m):
    if isinstance(m, nn.Linear):
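The split accessors above (`get_mlp_parameters` / `get_grid_parameters`) typically exist so the MLP and grid branches can receive different learning rates. A hypothetical sketch with a stand-in module (`TinyStub` and the learning rates are illustrative, not this repo's values):

```python
# Hypothetical parameter-group setup; TinyStub stands in for the real
# deform_network, and the learning rates are illustrative only.
import torch
import torch.nn as nn

class TinyStub(nn.Module):
    def __init__(self):
        super().__init__()
        self.mlp = nn.Linear(4, 4)                   # stand-in MLP branch
        self.grid = nn.Parameter(torch.zeros(2, 8))  # stand-in grid branch

    def get_mlp_parameters(self):
        return list(self.mlp.parameters())

    def get_grid_parameters(self):
        return [self.grid]

model = TinyStub()
optimizer = torch.optim.Adam([
    {"params": model.get_mlp_parameters(), "lr": 1e-4},
    {"params": model.get_grid_parameters(), "lr": 1e-3},
])
```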

View File

@@ -180,11 +180,3 @@ class HexPlaneField(nn.Module):
        features = self.get_density(pts, timestamps)
        return features
-if __name__ == "__main__":
-    aabb = torch.tensor([[-3, -3, -3],
-                         [3, 3, 3]])
-    planes = KPlaneField(aabb)
-    pts = torch.randn(10000, 3)
-    time = torch.ones(10000, 1)
-    features = planes.forward(pts, time)
-    print(features.shape)

View File

@@ -1,4 +1,4 @@
-bash scripts/train_ablation.sh dnerf_noboth
+bash scripts/process_dnerf.sh dnerf_tv_test
wait
# bash scripts/train_ablation.sh dnerf_3dgs
# wait