Categories
Final Major Project MA Visual Effects

FMP Week: 10

This week I developed the final abstract particles, composited and edited the final video.

When researching different particle styles and techniques I finally found a tutorial that would serve as a good base for what I wanted.

This system works by creating a pyro simulation, then transferring the velocity field from the pyro onto the particles, allowing the particles to behave in the same way as the smoke.
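The velocity transfer can be sketched in plain Python. This is an illustrative toy, not Houdini's actual addvecfield internals: each particle looks up the nearest voxel of a cached velocity grid and is pushed along it.

```python
# Toy pyro-to-particle velocity transfer: a dense voxel grid of
# (vx, vy, vz) tuples stands in for the cached smoke velocity field.

def sample_velocity(vel_grid, origin, voxel_size, pos):
    """Return the velocity of the voxel nearest to pos, clamped to the grid."""
    dims = (len(vel_grid), len(vel_grid[0]), len(vel_grid[0][0]))
    i, j, k = (
        max(0, min(d - 1, round((p - o) / voxel_size)))
        for p, o, d in zip(pos, origin, dims)
    )
    return vel_grid[i][j][k]

def advect(particles, vel_grid, origin, voxel_size, dt):
    """Move every particle along the smoke's velocity field for one step."""
    return [
        tuple(p + v * dt
              for p, v in zip(pos, sample_velocity(vel_grid, origin, voxel_size, pos)))
        for pos in particles
    ]
```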

Abstract Particle Setup
Pyro Sim

To have the abstract particles be the same colour as the environment I imported the pointcloud into the network, then copied the colour values onto the nearest points of the sphere.
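The nearest-point colour copy works like an Attribute Transfer: each target point takes the Cd of the closest source point. A brute-force sketch of that idea:

```python
# Nearest-point colour transfer sketch (brute force, illustrative only).

def transfer_colours(src_pts, src_cd, dst_pts):
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    out = []
    for p in dst_pts:
        nearest = min(range(len(src_pts)), key=lambda i: dist2(src_pts[i], p))
        out.append(src_cd[nearest])
    return out
```

In production this would use a spatial structure rather than an O(n²) scan, but the lookup logic is the same.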

Transferred Colours
Abstract Particle Sim

During the final week I collected all my rendered clips and began comping and editing in Nuke and Premiere.

Nuke Node Graph
Input, Grading And Glow

To make a glitchy VHS effect I created a noise pattern stretched on one axis, producing lines of varying brightness. I then blurred it and graded the green channel to vary how far each colour gets shifted. This was copied into the forward u and v channels and used to distort the image. It not only helps cover up some visual artifacts in the final render, but also reinforces the feeling of the data points being distorted.
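The scanline distortion can be sketched as one random brightness value per row, blurred vertically, with the green channel graded separately; the parameters here are illustrative, not the actual Nuke values.

```python
import random

# Per-scanline glitch offsets: noise "stretched" across x becomes one
# value per row; a 3-tap vertical blur softens line-to-line jumps, and
# the green channel is graded further so colours separate.

def scanline_offsets(height, max_shift, green_gain=1.5, seed=0):
    rng = random.Random(seed)
    raw = [rng.uniform(-1.0, 1.0) for _ in range(height)]
    blurred = [
        (raw[max(y - 1, 0)] + raw[y] + raw[min(y + 1, height - 1)]) / 3.0
        for y in range(height)
    ]
    # (red/blue shift, green shift) per scanline
    return [(b * max_shift, b * max_shift * green_gain) for b in blurred]
```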

Glitch Effect, Camera Effects And Output

As I had to deviate from my storyboard when editing, I tried to use the clips I had as efficiently as possible, fitting them to the initial music cues and cutting the second half of the music. When exporting the video I tested H.264 and ProRes 422 HQ. As the video consists of many moving points, exporting in H.264 causes a visible loss in quality where the bitrate is not enough to accurately represent all the moving parts. 422 HQ solves this issue, but at the cost of almost 10x the file size, and it is not as widely supported as H.264.
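The file-size gap follows directly from the bitrates; the numbers below are illustrative assumptions, not measured values from my exports.

```python
# Rough codec file-size arithmetic: size = bitrate * duration / 8.
# Bitrates are assumed, round-number examples.

def file_size_mb(bitrate_mbps, duration_s):
    return bitrate_mbps * duration_s / 8  # megabits -> megabytes

h264 = file_size_mb(22, 60)     # assumed H.264 target bitrate, 60 s clip
prores = file_size_mb(220, 60)  # assumed ProRes 422 HQ bitrate
ratio = prores / h264           # ~10x with these assumptions
```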

Premiere Timeline

Overall


FMP Week: 9

As I planned to render the particles and Gaussian Splat separately, I wanted to add the effect of the particles lighting up the environment. I did this by taking the transition particle band that was being used to spawn the particles and converting it to a VDB volume. This volume was then cleaned up and converted into a mesh, which keeps the setup procedural so it changes shape with the particles. The mesh is then used as an Octane mesh light, with the light's colour set to the average colour of the particles.
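Setting the light's colour to the average particle colour is a simple per-channel mean; a minimal sketch over (r, g, b) tuples:

```python
# Average-particle-colour sketch: the mesh light's colour is the
# per-channel mean of all particle Cd values.

def average_colour(colours):
    n = len(colours)
    return tuple(sum(c[i] for c in colours) / n for i in range(3))
```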

Mesh Light VDB

To add to this effect on the particle side, I used the age and life attributes. These are created by a POP sim and describe how long each particle has been alive and its total lifespan. This data was then passed into the material, where I could use it to multiply or set colour values.
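The ratio of `age` to `life` gives a 0-1 value the material can use; a hypothetical fade based on that ratio looks like this (the actual material network differs):

```python
# Age-driven colour sketch: age / life gives normalised age in [0, 1],
# used here to fade the colour as the particle approaches death.

def age_tint(base_cd, age, life):
    t = min(max(age / life, 0.0), 1.0)  # normalised age, clamped
    fade = 1.0 - t                      # bright when young, dark when old
    return tuple(c * fade for c in base_cd)
```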

Particle Colour Based Off Age

My initial particle simulation used a POP wind and custom velocity. This velocity came from a cube made to be the same shape as the hallway; the normals of this cube were converted to velocity attributes and used to push the particles away from the center of the hallway, stopping them from covering the center of the screen. I spent a long time with this setup and went through many iterations and research attempts trying to find my desired look.

After focusing on other aspects of the project I came back to this and found a technique of creating a simple pyro smoke simulation from an object the same shape as the desired effect, caching it, then using an addvecfield node within the POP network. This node imports the smoke cloud data into the particle simulation and adds the velocities to their nearest particles, making them behave in the same way as the smoke. Finally I added attribute blurs after the particle simulation; these blur and group the particle positions, making the trails appear more condensed and feel like forces are clumping them together instead of leaving them widely dispersed.
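The clumping effect of the position blur can be sketched as blending each point toward the average of its neighbours within a radius. This is a brute-force illustration, not the attribute blur node's actual implementation:

```python
# Position blur sketch: each point moves toward the mean of its
# neighbours within `radius`, by a 0-1 `strength` blend.

def blur_positions(pts, radius, strength):
    r2 = radius * radius
    out = []
    for p in pts:
        nbrs = [q for q in pts
                if sum((a - b) ** 2 for a, b in zip(p, q)) <= r2]
        mean = tuple(sum(q[i] for q in nbrs) / len(nbrs) for i in range(len(p)))
        out.append(tuple(a + (m - a) * strength for a, m in zip(p, mean)))
    return out
```

Nearby points drift together while isolated points stay put, which is exactly the condensed-trail look described above.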

With this new setup I had the look I was going for, but realised that the camera angle I had initially planned didn't showcase it to its fullest. I decided to flip the camera and have it pulling back down the hallway; this lets the viewer see more of the environment dissolving while also removing the issue of the particles obscuring the camera's view.

Pyro Based Particle Sim
Skylight Setup

I was having some issues with floating points in the Gaussian Splat, so I went back into Postshot, cleaned up the splat and reimported it into Houdini. As all of the processing on the points in Houdini is procedural, I didn't need to change anything and it accepted the updated point cloud.

Before Cleanup
After Cleanup


FMP Week: 8

In my initial concepts I planned to have the Houdini Gaussian Splat transition into particles, then have those transition into a deconstructed version of the landscape within Unreal Engine.

Unreal Engine Environment Setup

I created a simple previs of the environment so I could use it as reference for what my last shot needs to transition into. But as the main portion of this project is more challenging than I anticipated, this will only be developed further if there is enough time to allow for it.

Test Details Added

Moving back to Houdini I started to tackle the challenge of rendering the Gaussian Splat. Gaussian Splats can be rendered in a variety of ways, some of these can even be fast enough to view entire scenes on a web page. All the lighting, colour and other required data has been packed into the points, so all of the heavy rendering is done when training the cloud. However I wanted to relight my scene and add effects.

The first method I tried was the default plug-and-play rendering in Houdini 21. This didn't look very visually pleasing though, so I decided to downgrade to Houdini 20.5, use the GSOPs plugin and build custom renders.

To render with raytracing in Houdini’s Karma I got a sphere primitive and instanced it on every point.

Sphere Instance

With this sphere I could then apply a material. This material uses the same mathematical functions that make up the core rendering of Gaussian Splats in other applications.

USD Raytraced Gaussian Splat Material

This looked better than the default Houdini 21 Render but still left areas for improvement and didn’t give me the effect I wanted.

USD Karma Raytraced Render Test

I also did a quick test within Blender using custom plugins, but it yielded the same results.

Blender Gaussian Splat Addon Test

For my next render test I found a custom plugin for Redshift in Houdini. I had seen some promising results from it, but it didn't work well for my application. After some more research on this renderer I found that people say it can achieve good results, but only under very specific conditions and lighting.

Houdini Redshift Gaussian Splat Plugin Test

Finally I did some research in the GSOPs Discord server. There are lots of people there developing and experimenting with this technology every day. I found one person who has been doing a lot of renders with Octane. Their work looked amazing and was exactly the style I was going for, so after talking with them for a while, asking questions and getting help setting the renderer up, I could finally lock in my renderer choice and start progressing the LookDev aspect of the project.

Houdini Octane Test

The latest version of Octane has a dedicated Gaussian Splat object setting, just by enabling this I already had a render I liked the aesthetics of more than any other I explored. The next step was to get the particle rendering side developed.

Octane Particle Render

As my base for the particles I just wanted them to inherit the base colour from their respective points. I did this by stripping all of the Gaussian Splat related data and passing only the particles' Cd colour attribute into the material. Octane does this by packing all the data into a UV map, then reading that inside the material.
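Packing per-point colours into a UV-readable map can be sketched as laying the colours out on a square grid and giving each point a UV at its texel centre; this is a conceptual illustration, not Octane's exact mechanism.

```python
import math

# Pack per-point colours into a square image-like grid, returning the
# grid and a UV per point so a material could read the colour back.

def pack_colours(colours):
    side = math.ceil(math.sqrt(len(colours)))
    grid = [[(0.0, 0.0, 0.0)] * side for _ in range(side)]
    uvs = []
    for idx, cd in enumerate(colours):
        x, y = idx % side, idx // side
        grid[y][x] = cd
        # UV at the texel centre so the lookup lands on the right pixel
        uvs.append(((x + 0.5) / side, (y + 0.5) / side))
    return grid, uvs
```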

Octane Particle Sim Test

With the colours working I could then focus on developing the look of the particle size, lighting and PBR elements.

Octane Particle Size and Lighting Test

Next I needed to test if the renderer could handle the animation of the Gaussian Splat and particles transitioning. When testing this I encountered an issue where it wouldn't render more than one frame. After spending a while trying to fix this, I went back to the person who showed me Octane, and they said this was because Gaussian Splat and particle rendering within Octane does not work well when motion blur is enabled. I turned this off and will add it back in post using a motion vector AOV layer.

Octane Gaussian Splat Animation Test

With this done I could then render out a motion test.

Octane Transferred Particle Colour Material

The next stage was relighting the scene. One of the many benefits of Gaussian Splatting is its ability to be relit with relative ease, so all I needed to do was add a light source and edit the scene as I would with any other 3D asset.

Octane Relighting Test

With the lighting, materials and animations working, I could then tweak my camera settings.

Octane Relighting Test

FMP: Week 7

This week I began to apply the transition to my point cloud. I created the initial point group and passed it into the DOP network.

Point Activation Setup

I then used the attribute transfer technique; however, this moved very fast and also had a problem of missing points. As it picks points within a certain distance, if a point is slightly too far away it will be missed and left out of the simulation, so I needed to find a fix.

Transition DOP Network

I combined my previous development of a moving mask object with the transition calculation. Now, as the mask moved, any points that are active on the current frame and not active on the previous one would be added to the transition.

Updated Object Mask Transition

FMP: Week 6

This week was spent finding a solution to the particle transition. I tried multiple solutions I thought of in Houdini, but none of them worked. After this I looked for resources online. Finding anything specifically related to Gaussian Splatting is very difficult as the technology is still very new, and as this was a very specific use case in just one of many DCCs, there were no resources at all. I then turned to generic Houdini particle tutorials; I explored the entire SideFX Learn database and couldn't find anything similar to the effect I wanted. I tried multiple search terms on YouTube and Google and found one or two tutorials that were similar, but they had flaws that meant they wouldn't work with my project. I turned to GPT and Gemini for solutions, but they provided no help at all. Finally I went back to searching and found one source on creating a particle dissolve effect.

The main concept behind this effect is to have particles spawn on the points every frame, but with a mask that ensures a new, non-repeating set of points becomes active each frame. This is achieved by initially setting a group of points as active by giving them an attribute. This is then put into a DOP network that expands the selection by a desired amount each frame. The DOP network can then take the data from the current frame and compare it to the last frame; any points that are active on the current frame but weren't on the previous one get added to a transition group. This group is then used as the mask for the points.
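The expand-and-diff logic above can be sketched with point-index sets: each frame the active set grows by one ring of neighbours, and only the newly activated points join the transition group.

```python
# Expand-and-diff transition mask sketch. `neighbours` maps each point
# index to the indices adjacent to it (an assumed precomputed lookup).

def step(active, neighbours):
    """Expand the active set by one ring; return (new set, new arrivals)."""
    grown = set(active)
    for p in active:
        grown.update(neighbours.get(p, ()))
    newly = grown - active  # active this frame, not on the previous frame
    return grown, newly
```

Run per frame, `newly` never repeats a point, which is exactly the non-repeating spawn mask described above.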


FMP Week: 5

When choosing a location to capture I looked at Alexandria Road. I have previously been to this location, so I knew that there wouldn't be many people walking around, and the location fit the style of my project well. To capture the location I decided to use multiple different methods; as this technology is very new there isn't a "right" way to do it, but there are lots of them.

I chose 4 methods of capture.

  • iPhone Polycam
  • iPhone Luma 3D
  • 360 Camera Video
  • DSLR RAW Images

First I used Polycam. I have used this app before, so I knew the basics of how it works in regards to scanning and creating 3D assets. For this capture I walked around in a sort of spiral pattern, trying to capture as many different angles and positions, and as much raw data, as I could. I also had a conversation with a VFX supervisor who had just finished giving a talk on the future of Gaussian Splatting within virtual production and asked for advice on how to capture my scene. They said that Polycam can export all the photos it captures while you walk around the scene, and this data can then be passed into COLMAP and Postshot later on.

Then I captured using Luma 3D. This is one of the most popular apps for Gaussian Splatting and has lots of community-uploaded datasets. I used it for my initial test and it didn't have the best quality, but I still wanted to try it for this location. The outcome was very messy and the worst of them all. There is also no option to export the photos to try and reprocess the capture later.

Then I captured using the 360 camera. For this capture I started recording while holding the camera above my head to reduce my footprint within the shot. I then moved in a zig-zag pattern down the tunnel, as I have heard this pattern gives good results when using 360 cameras.

Finally I captured using my DSLR camera. For this I walked down the tunnel standing at one side, moving along while taking pictures of the opposite side. I tried to move slowly and make sure every new image overlapped some part of the last image, so that when I try to reconstruct the camera positions later the software has an easier time placing them.

After capturing this data I imported it into COLMAP and Postshot. Postshot is a software that can handle nearly all aspects of creating a 3D Gaussian Splat: you can import raw camera image data and it will work out the positions of the cameras, create the point cloud and train the Gaussian Splat. However, the camera position tracking in Postshot is very slow and isn't as advanced as some other open-source options. This is where I use COLMAP, an open-source software that takes an input of images, plots the position of each image in 3D space and creates a rudimentary point cloud. COLMAP is a lot faster, and I have heard many good things about it from various forums and from people working on and developing this technology.

The data I captured using Luma 3D was locked into that app and was of such low quality I didn't pursue it any further. The image data from Polycam was sent to my computer and processed through COLMAP and Postshot; it resembled the tunnel but still left a lot to be desired, and as this dataset consisted of almost 3000 images it took a long time to process.

I then processed the 360 camera video data. I did this by putting the video file into another open-source software, which split the video into frames and then split each frame into flat images. I used 6 images per 360 frame, splitting it into a "cube" and giving me a flat image from each side of this "cube". I could then pass these images into COLMAP to get the camera positions. Once I had the camera position data from COLMAP I could combine it with the images and drag them into Postshot; Postshot recognises that there is camera position data, skips that stage and goes straight to training the Gaussian Splat.

Finally I tried processing the images from the DSLR camera. I tried tracking the camera in COLMAP and Postshot but kept getting the same problem: because I took the photos directly facing the wall, and that wall is almost identical on both sides, the software thought it was just one single wall. I tried tweaking various settings but could not get it to distinguish the two sides. After my tests I saw that the DSLR gave very high detail in small areas, while the 360 video was good at rebuilding the whole location (with less quality). So I planned to go back to the location and shoot again with the DSLR, but using the movement pattern I had used with the 360 camera.
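The 360-to-"cube" split mentioned above boils down to two mappings: a pixel on a cube face gives a viewing direction, and that direction gives a pixel in the equirectangular 360 frame. A sketch of the maths (the face layout here is an assumption, not the tool's actual convention):

```python
import math

# Cube-face pixel -> direction -> equirectangular UV, as used when
# splitting a 360 frame into 6 flat images.

def face_dir(face, u, v):
    """u, v in [-1, 1] on the face plane -> 3D view direction."""
    return {
        "front": (u, v, 1.0), "back": (-u, v, -1.0),
        "right": (1.0, v, -u), "left": (-1.0, v, u),
        "up": (u, 1.0, -v), "down": (u, -1.0, v),
    }[face]

def equirect_uv(d):
    """Direction -> (u, v) in [0, 1] on the equirectangular image."""
    x, y, z = d
    lon = math.atan2(x, z)                                # -pi .. pi
    lat = math.asin(y / math.sqrt(x * x + y * y + z * z))  # -pi/2 .. pi/2
    return (lon / (2 * math.pi) + 0.5, lat / math.pi + 0.5)
```

Running this per output pixel and sampling the 360 frame at the resulting UV produces one flat image per face.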

Left: Original DSLR Capture pattern. Right: New DSLR Capture pattern.

With this new data captured, I imported the photos into Photoshop and batch edited them. I tried to make the photos as flat as possible, so there were no over- or under-exposed areas; this allows for better tracking. I also used the AI denoiser to give COLMAP and Postshot as clean a dataset as I could.

After the photos were processed I started reconstructing in COLMAP.

This took a very long time to process even with a high end GPU, but once it was done I finally had accurate camera position data and points. I exported this camera data into a file, this file can then be imported alongside the images into Postshot, skipping the reconstruction and using the already created data instead.

With this new approach I had a much better splat. There were still areas for improvement, as the floors and walls had a lot of holes and visual artifacts. I think this happened because I picked a difficult scene to reconstruct: it features lots of repeating points of interest and is a dark, somewhat confined area.

Postshot also has the ability to import Gaussian Splats into Unreal Engine via its own plugin. I will not be using this in my project, but it was interesting to see how well it was implemented.

After the splat finished training I was able to export it as a .PLY into Houdini. For a basic test I used the new Gaussian Splat nodes in Houdini and could visualise my point cloud. As this now functions as a collection of points with attributes within Houdini I wanted to also test if I could move these points and see if the Gaussian Splat would still be rendered. I created an attribute wrangle node with some simple VEX code that moves these points based on a sine wave. As this worked well I moved onto more complex edits.
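The sine-wave wrangle test translates directly into a few lines; this is a Python paraphrase of the idea, not the actual VEX I used (amplitude and frequency values are assumed).

```python
import math

# Sine-wave displacement test: offset each point's height by a sine of
# its x position and the current frame, as in the VEX wrangle.

def sine_offset(pts, frame, amp=0.1, freq=2.0):
    return [
        (x, y + amp * math.sin(freq * x + frame * 0.1), z)
        for x, y, z in pts
    ]
```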

The next progression was POP particle simulations. I created a POP network and added a POP wind to move the particles around. By default the POP network continuously spawns particles at the points based on a birth rate. As I wanted each individual splat point to move, I had to change this so the network spawns a single particle at each point.

This worked, but my desired effect is to have the points transition from Gaussians to particles, so I began experimenting to find a way to achieve this. My initial idea was to create a box that moves across the point cloud and add any points inside that box to a group; this would be my mask. I could then set the POP network to only spawn particles on points in that group. However, this did not work: as the POP network was set up to only spawn a single particle per point, it did this on the first frame, so when the box moved and added new points to the mask, the initial particles had already been spawned and no more were created.
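The moving-box mask idea can be sketched directly; the box geometry and speed here are assumed placeholder values.

```python
# Moving-box mask sketch: points inside a box sliding along x on the
# current frame are returned as the active group (by point index).

def box_mask(pts, frame, size=1.0, speed=0.5):
    x_min = frame * speed      # box front edge moves along x each frame
    x_max = x_min + size
    return {i for i, (x, _, _) in enumerate(pts) if x_min <= x <= x_max}
```

This shows why the approach failed with single-spawn particles: the group changes per frame, but points that entered the box after frame one never got a particle.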


FMP Week: 4

This week I started practically looking into the Gaussian Splatting implementation added in Houdini 21.

I used the sample project I got from a video tutorial earlier and went through it, breaking down how it all works. This sample project looks at creating a 3D mesh based on the points, running a vellum simulation on this mesh, then sending that animation data back onto the point cloud. While this is very interesting, it will not be useful in my project.

Instead I want to use some kind of particle simulation or large-scale animation on the point cloud. I tested a very basic popnet setup to see how the Gaussian Splat would react to it, and everything seemed to work as expected.

Then I looked into the rendering setup. While the vast majority of processing, such as camera tracking and reconstruction, is done before the point cloud is made, the points still need to be set up in order to be viewed correctly. As each point is essentially a coloured circle, lighting and materials are not needed and the splat can be viewed as-is. However, it is still possible to relight a Gaussian Splat and change elements of the render; I might look into this further down the project.

After experimenting in Houdini and seeing what tools were available to me, I was able to go from ideas for a storyboard to getting a first draft down. As this project is focused more on research and development than storytelling, my storyboard is very basic and essentially consists of the "real world" transforming into data and becoming a digital world that isn't quite perfect yet. This project is very valuable to me as a chance to look into this new technology, which is very quickly integrating itself into upcoming professional workflows, especially in the music video industry where my passion lies.


FMP Week: 3

I plan to refine my storyboard by focusing on ~3 key shots I want in my final video. As this is more of a technical proof of concept and research project, the final video will most likely be around the 30-60 second mark, so instead of trying to create a deep cohesive story, I plan to focus on creating short high quality shots that highlight the techniques that were used to create them.

I will also capture a small-scale Gaussian Splat of a room. This will be done using a DSLR and manually adding the photos to the software. I will then explore the possibilities of manipulating the Gaussian Splat and point cloud within Houdini.

During the capture process I tried to cover as many areas of the room as possible. Since the capture technique is very similar to that of photogrammetry, I used the same strategy while taking the photos. When capturing a single object you want to orbit around the object at multiple heights; as this is a room instead of a single object, I reversed this and orbited the room facing from the inside out.

Once I had all the photos it was time to process them. During my research I discovered the software Postshot, a dedicated Gaussian Splat training tool. It takes all the raw images and reconstructs each one's position and orientation in 3D space.


FMP: Week 2

This week I captured a test 3D scan of a street in Elephant and Castle. This was not a high-quality capture and was done on a phone using the Luma3D app. The main reason for doing this was to test the workflow of capturing images > point cloud > Gaussian Splat > Houdini > render.

I took hundreds of photos of the area I wanted to scan on my phone, trying to use similar techniques to ones that yield good results when photo scanning, such as moving in a dome shape around the object(s) capturing multiple different elevations and angles. Once the app had processed all the images taken I had a 3D scene I could look around. This scene could then be exported as a point cloud, with each point containing all the data required to render a Gaussian Splat.

To be able to view this correctly in Houdini the imported file needed to be baked into a format Houdini can understand. Gaussian Splats are most commonly stored as .PLY files, containing all the points and their respective attributes. This can be exported from Luma3D and into any software that supports .PLY.


FMP: Week 1

My original plan was to create a full-CG 3D music video with Metahumans inside Unreal Engine; however, I have done similar projects before. At this moment I want to pursue a career in the music video VFX industry. This industry is very big in South Korea, and a lot of the small studios I follow within it have a very similar style and often utilize techniques such as 3D scanning in their projects. I have also planned to write my thesis on the topic of Gaussian Splatting, a technique gaining more popularity with its first use in a production movie (Superman, 2025) earlier this year and its recent implementation in DCCs such as Houdini and Maya. So I have decided to take on the challenge of incorporating these techniques into my final project. This will be a challenge, but it will be a significantly more useful learning experience and portfolio piece than just doing something I'm comfortable with and have done before.

For the next two weeks I will be researching how Gaussian Splatting and 3D scanning can be used within DCCs and implemented within my project. I will then be able to start planning my timeline and create an animatic based on my findings.

To start, I began researching the recent developments in Gaussian Splatting and the recent addition of dedicated features in Houdini 21.

These features are already being used in combination with existing Houdini nodes and workflows, allowing the data to be edited just like any other 3D asset instead of being locked into one software.

I also experimented with Reality Capture to get a base understanding of photoscanning and camera tracking software.