This week I captured a test 3D scan of a street in Elephant and Castle. It was not a high-quality capture; it was done on a phone using the Luma3D app. The main reason for doing this was to test the workflow of Capturing images > Point cloud > Gaussian splat > Houdini > Render.
I took hundreds of photos of the area I wanted to scan on my phone, using techniques similar to those that yield good results when photo scanning, such as moving in a dome shape around the object(s) and capturing from multiple elevations and angles. Once the app had processed all the images, I had a 3D scene I could look around. This scene could then be exported as a point cloud, with each point containing all the data required to render a Gaussian splat.
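As a rough illustration of what that export contains, the sketch below reads a splat point cloud saved in the common 3D Gaussian splatting `.ply` layout and pulls out the per-point attributes. The filename and the exact attribute names are assumptions based on that convention, not on Luma's specific export, which may differ.

```python
# Minimal sketch: inspect a Gaussian splat point cloud in the usual 3DGS .ply layout.
# Requires `pip install plyfile numpy`. Field names are assumed, not taken from Luma.
import numpy as np
from plyfile import PlyData

ply = PlyData.read("elephant_and_castle.ply")  # hypothetical export path
verts = ply["vertex"].data                     # structured numpy array, one row per point

# Every point carries the data needed to draw one splat.
print(verts.dtype.names)  # e.g. x, y, z, opacity, scale_0..2, rot_0..3, f_dc_0..2, ...

positions = np.stack([verts["x"], verts["y"], verts["z"]], axis=1)      # splat centres
scales    = np.stack([verts[f"scale_{i}"] for i in range(3)], axis=1)   # per-axis size
rotations = np.stack([verts[f"rot_{i}"] for i in range(4)], axis=1)     # orientation quaternion
opacity   = verts["opacity"]                                            # per-splat opacity
colour_dc = np.stack([verts[f"f_dc_{i}"] for i in range(3)], axis=1)    # 0th-order SH colour

print(f"{len(positions)} splats loaded")
```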

To view this correctly in Houdini, the imported file needed to be baked into a format Houdini can understand,