
NeRF'd

Damn, this thing is fun:




The process is similar to my older photogrammetry tests, in that it uses SfM to create a point cloud, which you then feed into a neural network that is "trained" to remember it. Once you have the trained model, which holds the 'memory' of the space, it's rendered as a raymarched volume that encodes the appearance of the object from each viewing location. This means that reflections, refractions, neat specular lobes and all kinds of shading are captured, not just the position and a single version of a surface's colour.
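To make the "view-dependent" part concrete, here's a minimal, hypothetical sketch of the core NeRF idea in PyTorch (illustrative only, not the tutorial's code; layer sizes and sampling bounds are made up): a small MLP maps a position and a view direction to density and colour, and a pixel is rendered by raymarching a ray and alpha-compositing the samples. Because the view direction is an input, speculars and reflections can be stored, not just one fixed surface colour.

```python
# Minimal NeRF-style field + raymarcher (illustrative sketch, not the
# tutorial's code; layer sizes and sampling bounds are arbitrary).
import torch
import torch.nn as nn

def positional_encoding(x, n_freqs=6):
    """Lift coordinates to sin/cos features so the MLP can fit fine detail."""
    feats = [x]
    for i in range(n_freqs):
        feats.append(torch.sin((2.0 ** i) * x))
        feats.append(torch.cos((2.0 ** i) * x))
    return torch.cat(feats, dim=-1)

class TinyNeRF(nn.Module):
    def __init__(self, n_freqs=6, hidden=128):
        super().__init__()
        enc_dim = 3 * (1 + 2 * n_freqs)        # size of encoded xyz / view dir
        self.trunk = nn.Sequential(
            nn.Linear(enc_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.sigma = nn.Linear(hidden, 1)      # density: view-independent
        self.rgb = nn.Sequential(              # colour: view-DEPENDENT
            nn.Linear(hidden + enc_dim, hidden // 2), nn.ReLU(),
            nn.Linear(hidden // 2, 3), nn.Sigmoid(),
        )

    def forward(self, xyz, view_dir):
        h = self.trunk(positional_encoding(xyz))
        sigma = torch.relu(self.sigma(h))      # keep density non-negative
        rgb = self.rgb(torch.cat([h, positional_encoding(view_dir)], dim=-1))
        return sigma, rgb

def render_ray(model, origin, direction, near=2.0, far=6.0, n_samples=64):
    """Raymarch one ray: sample the field, alpha-composite front to back."""
    t = torch.linspace(near, far, n_samples)
    pts = origin + t[:, None] * direction      # (n_samples, 3) sample points
    sigma, rgb = model(pts, direction.expand_as(pts))
    delta = (far - near) / n_samples
    alpha = 1.0 - torch.exp(-sigma.squeeze(-1) * delta)
    # transmittance: fraction of light surviving to reach each sample
    trans = torch.cumprod(torch.cat([torch.ones(1), 1.0 - alpha + 1e-10]), dim=0)[:-1]
    weights = trans * alpha
    return (weights[:, None] * rgb).sum(dim=0)  # composited pixel colour

model = TinyNeRF()
pixel = render_ray(model, torch.zeros(3), torch.tensor([0.0, 0.0, 1.0]))
# training = minimising MSE between rendered pixels and the captured photos
```

The rendered pixel is differentiable with respect to the network weights, which is what makes the whole thing trainable from photos.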


Amazing stuff, and surprisingly smooth to get going via this tutorial:


Viewing material:


A more technical breakdown:

You can export the volume and filter it a bit in Houdini to get kind-of-ok results:
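For illustration, here's the kind of cleanup I mean, as a hypothetical Python sketch (not my actual Houdini network; the resolution, blur width, and threshold are invented): sample the trained field's density onto a regular grid, smooth it, and threshold away the low-density floaters before loading the grid as a volume.

```python
# Hypothetical volume cleanup before Houdini (made-up values throughout).
import numpy as np
from scipy.ndimage import gaussian_filter

def clean_density_grid(density, sigma_vox=1.5, threshold=0.01):
    """Blur a sampled density grid slightly, then zero out faint floaters."""
    smoothed = gaussian_filter(density, sigma=sigma_vox)
    smoothed[smoothed < threshold] = 0.0
    return smoothed

# stand-in for density sampled from the trained model on a 256^3 grid
density = np.random.rand(256, 256, 256).astype(np.float32)
np.save("nerf_density.npy", clean_density_grid(density))
# e.g. read the array back in a Houdini Python SOP and convert to VDB there
```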


Either way, I'm surprised that the turnkey SfM solution did its job so flawlessly, providing the right point cloud for training. FYI, the trained model is only 60 MB when saved. Nice.
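As a back-of-envelope check on that number (illustrative arithmetic only; the real file likely stores a hash-grid encoding plus metadata, and the parameter count here is invented):

```python
# model size ≈ parameter count × bytes per parameter
n_params = 15_000_000        # hypothetical ~15M parameters
bytes_per_param = 4          # float32
print(f"{n_params * bytes_per_param / 1e6:.0f} MB")  # -> 60 MB
```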


I'm curious to see how we can start using these representations in scenes, XR, and digital twins. Given the way the data is stored and recalled, and the volumetric representation/render, it's certainly a different approach.





