
taichi-dev/taichi-nerfs

Submitted by
Style Pass
2023-03-15 11:30:07

Download the 360 v2 dataset and unzip it; keep the folder name unchanged. The default batch_size=8192 uses up to 18 GB of VRAM on an RTX 3090, so adjust batch_size to fit your hardware.

Place your video in the data folder and pass the video path to the script. A few key parameters matter for producing a good dataset for NeRF training. For a real scene, setting scale to 16 is recommended. video_fps determines how many images are extracted from the video; 150~200 images are typically sufficient, so video_fps=2 is a suitable value for a one-minute video. Running this script preprocesses your video and then starts training a NeRF on it:
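The video_fps guidance above can be sketched as a small helper. This is an illustrative snippet, not code from the repo: the function name is hypothetical, and it assumes video_fps frames are sampled per second of video, so the image count is roughly duration × video_fps.

```python
def pick_video_fps(duration_s: float, target_images: int = 150) -> int:
    """Hypothetical helper: choose a frame-extraction rate.

    Assumes the number of generated images is roughly
    duration_s * video_fps, aiming for the 150~200 images
    the text recommends.
    """
    # At least 1 fps so very long videos still yield frames.
    return max(1, round(target_images / duration_s))

# A one-minute video targeting ~150 images:
# 150 / 60 = 2.5, which rounds to 2 -- matching the
# value suggested in the text for a one-minute video.
print(pick_video_fps(60.0))  # → 2
```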

A: For the most efficient interop with the PyTorch CUDA backend, training is mostly tested with the Taichi CUDA backend. However, it's straightforward to switch to the Taichi Vulkan backend if interop is removed; check out this awesome taichi-ngp inference demo!

A: Reduce the batch_size passed to train.py. The default of 8192 fits an RTX 3090; reduce it according to your GPU's memory. For instance, batch_size=2048 is recommended on an RTX 3060 Ti.
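The memory guidance above can be turned into a rough heuristic. This is an illustrative sketch, not code from the repo: it assumes training memory scales roughly linearly with batch_size, anchored at the ~18 GB observed for batch_size=8192, and halves the batch until the estimate fits.

```python
def suggest_batch_size(vram_gb: float,
                       max_batch: int = 8192,
                       gb_at_max: float = 18.0,
                       min_batch: int = 256) -> int:
    """Hypothetical helper: halve batch_size until it fits in VRAM.

    Assumes memory use scales linearly with batch size, anchored
    at the ~18 GB reported for batch_size=8192 in the text.
    """
    batch = max_batch
    while batch > min_batch and gb_at_max * batch / max_batch > vram_gb:
        batch //= 2
    return batch

print(suggest_batch_size(24))  # RTX 3090 (24 GB)    → 8192
print(suggest_batch_size(8))   # RTX 3060 Ti (8 GB)  → 2048
```

For the two GPUs named in the text, this reproduces the recommended values: 8192 on a 24 GB RTX 3090 and 2048 on an 8 GB RTX 3060 Ti.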
