
MA Perfecting The Look: Camera Projection, Lens Distortion, and 3D Camera Tracking

Now that we've covered how to work in Nuke within the 2D space, we're branching out into working with 3D setups, which will become more important for the final composition. The main things being tackled are some basic camera projection, creating an ST map for lens distortion, and 3D tracking with the CameraTracker!

 

Camera Projection with a Matte Painting


So this will be in less detail than the following topics mainly because I did the work and then realised I hadn't taken screenshots as I was progressing...oops. The gist of this exercise was to take a 2D image and convert it into a seemingly 3D render within Nuke.


The first thing to do was make some edits to a selection of images and then stitch them together into a new image. Different pieces were taken to create foregrounds, midgrounds, backgrounds, and other features like the sky and fog. This was done in Photoshop and the images were saved as TIFF files. The export consists of the final comp plus all the layers separated out as individual images with transparent backgrounds.

The final composition was then imported into Maya so that some fairly simple geometry could be created that would later act as 3D models of parts of the closer scenery. The final comp is mainly to act as a guide for creating the CG elements and only the closest terrain is modelled, since the camera will be zooming into the image later on.

Once the modelling is finished, the main things to export are the camera and the modelled terrain. These need to be exported as .fbx files for importing into Nuke.


Inside Nuke, a Camera and two ReadGeo nodes need to be created and set to read from the correct files. The Camera node reads the Maya camera, whilst the ReadGeo nodes need to be linked to the geometry assets that were created previously.

A Scene and a ScanlineRender node then need to be added, along with the image files for the land elements that don't have 3D models, plus the sky. The land images need to be hooked up to Project3D nodes and then connected to Card nodes, which are linked to the Scene node. A Card node acts like a 2D layer that can be moved around within 3D space; it's similar to setting up parallax scrolling effects in 2D games.
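
For my own reference, here's a rough Python sketch of that hookup (node class names like Camera2, ReadGeo2, Card2 and Project3D can differ between Nuke versions, the input indices are assumptions to check against the pipe labels in the DAG, and all the file paths are placeholders):

    import nuke

    # Camera and terrain exported from Maya as .fbx (paths are placeholders)
    cam = nuke.nodes.Camera2()
    cam['read_from_file'].setValue(True)
    cam['file'].setValue('projection/camera.fbx')
    terrain = nuke.nodes.ReadGeo2(file='projection/terrain.fbx')

    # One chain per matte-painting layer: image -> Project3D -> Card -> Scene
    scene = nuke.nodes.Scene()
    midground = nuke.nodes.Read(file='projection/midground.tif')
    proj = nuke.nodes.Project3D()
    proj.setInput(0, midground)   # image to project (index assumed)
    proj.setInput(1, cam)         # projection camera (index assumed)
    card = nuke.nodes.Card2()
    card.setInput(0, proj)
    scene.setInput(0, card)
    scene.setInput(1, terrain)

    # Render the 3D scene back into 2D
    render = nuke.nodes.ScanlineRender()
    render.setInput(1, scene)     # assumed obj/scn input
    render.setInput(2, cam)       # assumed cam input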


The sky needs a Sphere node so that it can be turned into a skybox. The sphere needs to be larger than the entire scene to sit correctly in the comp, so it just gets scaled up inside its settings.
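
A quick sketch of the skybox part, under the same assumptions (the uniform_scale knob and the scale value are guesses to taste, and the node name and path are placeholders):

    import nuke

    scene = nuke.toNode('Scene1')              # the Scene node from the setup above (name assumed)
    sky = nuke.nodes.Read(file='projection/sky.tif')
    skydome = nuke.nodes.Sphere()
    skydome.setInput(0, sky)                   # texture the sphere with the sky image
    skydome['uniform_scale'].setValue(100)     # big enough to wrap around the whole scene
    scene.setInput(2, skydome)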


For the fog, we can create a few Card nodes and position them in between different layers within the 3D space so that the fog appears throughout most of the landscape. Since they will all read from the same image, only one copy of the image needs to be in the node graph - the trick is to add a Merge node between the image file and the Project3D node. The mix slider can then be played with until the combination looks good!


All Project3D nodes need to be linked to the Camera node that is reading from the Maya camera. The last thing to do is add motion to the camera: I needed to duplicate the Camera node, plug it into the ScanlineRender and Scene nodes, and turn off read from file so it no longer reads the Maya data. Then it's a matter of setting keys! Keys need to be set for the position and rotation of the camera at the starting and ending frames. A Reformat node needs to be added between the ScanlineRender and Viewer nodes so that the output format is Full HD 1080. Now all that's left is to add a Write node at the end, set the export format (this example is a .tga), and click render!
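
Scripted out, those last steps look roughly like this (the knob names, the HD_1080 format name, the frame range and the node name are assumptions, and the keyed values are just placeholders):

    import nuke

    render = nuke.toNode('ScanlineRender1')     # the renderer from earlier (name assumed)

    # Duplicate camera for the move: stop reading from the .fbx, then set keys
    anim_cam = nuke.nodes.Camera2()
    anim_cam['read_from_file'].setValue(False)
    for name in ('translate', 'rotate'):
        knob = anim_cam[name]
        knob.setAnimated(2)              # animate the z component as an example
        knob.setValueAt(0.0, 1, 2)       # start pose placeholder: value, frame, index
        knob.setValueAt(-5.0, 100, 2)    # end pose placeholder
    render.setInput(2, anim_cam)         # swap the cam input over to the animated camera (index assumed)

    # Output at Full HD 1080 and write a .tga sequence
    fmt = nuke.nodes.Reformat(format='HD_1080')
    fmt.setInput(0, render)
    out = nuke.nodes.Write(file='renders/projection.####.tga')
    out.setInput(0, fmt)
    nuke.execute(out, 1, 100)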


The final Nuke project looks like this:

 

Creating an ST Map for Lens Distortion


A couple of things can be done to set up the footage before creating the ST Map. If I only want to use certain frames in the project instead of the whole video, I can use a Retime node to change the input and output ranges. In the footage being used, the input range was 1 - 730, covering all the frames in the video. Following the example, I set the input to start at frame 50 and end at frame 250. The output range needed to be adjusted to match these changes, starting at frame 0 and ending at frame 200.
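
The same trim as a Python sketch (the Retime knob names like input.first are assumptions based on the node's input/output range fields, and the path is a placeholder):

    import nuke

    plate = nuke.nodes.Read(file='footage/plate.####.jpg', first=1, last=730)
    trim = nuke.nodes.Retime()
    trim.setInput(0, plate)
    trim['input.first'].setValue(50)     # use frames 50-250 of the source...
    trim['input.last'].setValue(250)
    trim['output.first'].setValue(0)     # ...and map them to output frames 0-200
    trim['output.last'].setValue(200)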

I'll also want to check the focal length of the camera used to record the footage. This will be helpful for making a more accurate lens distortion later with the help of a checkerboard that matches the focal length. Since we are using provided assets, we know the focal length is 35mm, so I can use the 35mm checkerboard to match the camera lens. The checkerboard image needs a Reformat node to match the video size, which is 1920 x 1080, before it's ready to use.


Now that the checkerboard is ready for use, I can add a LensDistortion node and go to the Analysis tab to access the "Detect" button. This then detects points on the grid to place features (basically big X's) and makes something like this (note: this is in preview mode since I forgot to get a SS, oops!):

Although it does a good job automatically, it still needs some manual touches to improve it. The first thing to do is add a Sharpen node to make the image a little sharper and clearer, followed by removing unwanted elements - such as the connected lines along the right edge of the image. I can then add custom lines with the Add Lines tool by selecting the existing features along a given edge and connecting them. I can also manually move these features around so that they fit more accurately to the corners of the black and white squares. If a feature is missing (like in the corners), I just have to make a rough guess at where it should go. When an edge is done, I hit the Enter key and repeat this until the perimeter is finished. All that's left is to go back into the LensDistortion node and hit the "Solve" button to create the final grid:

The LensDistortion node actually adds a bit of space to the bounding box, making it slightly larger than 1920x1080. The default output mode is Undistort, which pushes the image outwards to undistort it and enlarges the bounding box. To have more memorable numbers for this size increase, we can enable BBox and manually pad it by 10 pixels on each side along the x- and y-axes, taking the bounding box from 1920x1080 up to 1940x1100. We also need to change the output mode from Undistort to STMap.

Now, a Write node is needed to save this data as a new image. The important things to check here are that the channels are set to "all" instead of "rgb" and the datatype is set to 32 bit float.

To ensure all the data from the different channels is correctly saved, the file needs to be written out as an .exr. We can also change the render to send out just 1 frame instead of a sequence of images, then bring this .exr image back into the project.
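
As a sketch, that Write setup would look something like this (the channels/datatype values follow the EXR Write controls, while the upstream node name and the path are assumptions):

    import nuke

    st_write = nuke.nodes.Write(file='lens/st_map.exr')
    st_write.setInput(0, nuke.toNode('LensDistortion1'))   # the node set to STMap output (name assumed)
    st_write['channels'].setValue('all')                   # keep every channel, not just rgb
    st_write['file_type'].setValue('exr')
    st_write['datatype'].setValue('32 bit float')
    nuke.execute(st_write, 1, 1)                           # a single frame is all that's needed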


Now, an STMap node can be connected, linking the source input to the footage and the STMap input to the .exr file. In rgba it looks like the same checkerboard, but changing the viewed channels to motion, forward, or backward displays the ST Map data that was created previously:

From the STMap node, we can change the UV channels to forward to apply this data and undistort the image. Adding a Reformat node brings back all the information that was pushed outside the boundary and ensures that, when the lens distortion is applied to the renders of the CG elements, they will line up with the footage correctly. To make this work, we need to go to output format and create a new format. This will be the lens undistort format, set to the increased frame size from earlier (1940 x 1100). It is also saved as a preset now, so it can be accessed easily at any time. We don't want the Reformat node to actually scale the image to these dimensions, so resize type is set to none, and black outside is checked to fill the extra space with a black border.
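
Here's that hookup sketched in Python (the addFormat string, the knob names and the node names are all assumptions worth double-checking):

    import nuke

    plate = nuke.toNode('Retime1')       # the trimmed footage (name assumed)
    st_image = nuke.toNode('Read3')      # the st_map.exr brought back in (name assumed)

    stmap = nuke.nodes.STMap()
    stmap.setInput(0, plate)             # assumed src input
    stmap.setInput(1, st_image)          # assumed stmap input
    stmap['uv'].setValue('forward')      # use the forward channels to undistort

    # Register the padded format once so it can be reused as a preset
    nuke.addFormat('1940 1100 lens_undistort')
    pad = nuke.nodes.Reformat(format='lens_undistort')
    pad.setInput(0, stmap)
    pad['resize'].setValue('none')          # don't scale the pixels
    pad['black_outside'].setValue(True)     # fill the extra border with black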

Trying to write out the work so far won't work, since Nuke will think there is only one frame to render because of the single-frame ST Map image. So, the last thing to do is add one more Retime node between the ST Map image and the STMap node, change the output range back to 0 - 200, and set before and after to hold frame so that single frame is held across the whole range.
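
And the extra Retime, again with assumed knob names and node names:

    import nuke

    hold = nuke.nodes.Retime()
    hold.setInput(0, nuke.toNode('Read3'))   # the single-frame st_map.exr (name assumed)
    hold['output.first'].setValue(0)
    hold['output.last'].setValue(200)
    hold['before'].setValue('hold')          # hold the one frame across the whole range
    hold['after'].setValue('hold')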

Now, we can add in the final write node and create a new folder to save the undistorted footage that is about to be rendered out, set the quality to 1, and everything is done!

The final look for the workspace and the node graph trees is below:

With this prepped, I can move onto doing 3D tracking with the camera tracker!

 

3D Tracking with the Camera Tracker


After bringing the newly made footage into a new comp, the full size format in the project settings needs to be set to full HD 1080, and then I need to add a CameraTracker node. If something was moving across the footage or image sequence, like a person or a car, I could use the Mask setting in this node. Anything that might be moving which isn't a result of the camera moving will need to be masked out. To do that, I can add a Roto node by hitting the O key [tip for future me: you can use Ctrl + Shift + X to pop nodes out of the tree] and connect it to the Mask input of the CameraTracker node. Now, with the Roto node opened up, I can mask anything in the image sequence and add keys on different frames to control how an object moves across the image!
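
A minimal sketch of that masking setup (CameraTracker is a NukeX node, and its class name, the mask input index and the file path are assumptions):

    import nuke

    footage = nuke.nodes.Read(file='footage/undistorted.####.exr', first=0, last=200)
    tracker = nuke.nodes.CameraTracker()
    tracker.setInput(0, footage)

    # Roto shape around anything moving independently of the camera
    roto = nuke.nodes.Roto()
    roto.setInput(0, footage)
    tracker.setInput(1, roto)    # assumed Mask input - check the pipe label in the DAG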

If there is something stationary in the shot that I might want to mask out, I can connect the Roto node to a tracker node and the selected area should stay masked out as the camera moves around.


Anyway, to help create the 3D environment I need to use a large number of features, or Xs. Going back into the CameraTracker, there's a way of previewing these by going into Settings and checking Preview Features.

In here, I can increase the number of features to something between 500 and 1000 for better results since more features will be latching onto areas of high contrast that will help later when entering the 3D space. Changing the Mask setting to Mask Alpha displaces all the features within the squared off area that I made previously, positioning them outside of the square. The total count stays the same, and the CameraTracker node now ignores the area inside the mask.

The Roto node can be deleted now since it's no longer required for the next task. The CameraTracker node has several components which, for my benefit, I am going to go through (i.e., write out my notes in case I lose them 😅):

  • Range

The default value is set to Input for the frame range, but a Custom value can be created here if preferred

  • Camera Motion

  1. Free Camera is for anything handheld

  2. Rotation Only is for a camera on a tripod that was only panning

  3. Linear Motion is for cameras on a dolly or track moving in one direction

  4. Planar Motion is for a camera moving along two axes, e.g. left and right or up and down

  • Lens Distortion

I know the footage doesn't have lens distortion because I removed it in the previous segment. If I didn't know the lens distortion or hadn't used the LensDistortion node (maybe from not knowing the focal length), I could select Unknown Lens and the CameraTracker node would try to work out the lens distortion itself. I'd then just need to enable "Undistort Input" to finish, but the method used previously is better and more accurate.

  • Focal Length

Unknown Constant will use one focal length for the entirety of the shot, but since I do know the focal length I can change it to Known and set the length to 35mm.

  • Film Back Preset

This is for setting the type of camera. I know it's a Canon 5D Mark III (video) thanks to the information provided, so I can open the list and go to Canon, then 5D MKII+III Video.

Feeding this data into the CameraTracker node will make the final Solve much better.

If there's a particularly difficult camera track, I can create 2D tracking points by going to UserTracks -> Add Track to add a point or create a Tracker Node, like in the WallE exercise.


The Settings tab also has a number of, well, settings to play with:

  • Number of Features

Controls how many features will be made. A good number is anything within 500 - 1000.

  • Detection Threshold

This is for controlling the distribution of points over the entire image. Lower values = more evenly spaced features.

  • Feature Separation

Determines how much space / relative distance there is from one point to another.

  • Refine Feature Locations

This will snap each feature to the closest high-contrast corner point near where it was originally placed.

  • Set Reference Frame

This is helpful for frames that have a lot of motion blur in them. I'll need to find a frame that doesn't have a lot of motion blur to use as a reference. In this case, I'll be using frame 100.

Back in the CameraTracker tab, clicking the Track button will go through the footage and bounce back to frame 0 to refine the track and populate it with features.

Hitting the Solve button will go through these features and change their colour depending on whether or not they could be factored into the final Solve. There are 3 different colours:

  • Red means the feature has been rejected because it has a very high error rate. This can be influenced by the initial tracking thresholds set under the Settings tab.

  • Orange means the feature has not been Solved and so hasn't been factored into the final Solve. These require some fixes before they become Solved.

  • Green means everything is fine and dandy!

Going into the AutoTracks tab brings up the options that can help sort out the unwanted or incomplete features. Selecting track len - min, track len - avg and Max Track Error, then changing the Max Track Error slider, will change the error threshold for features. Selecting Min Length and error - min, then increasing the Min Length slider, will start rejecting tracks that last for fewer frames, while decreasing it will include more of the rejected tracks. The Max Error represents the maximum error per frame, while Max Track Error is the maximum error for the whole track.

Hitting the Delete Rejected button will improve the Solve Error, which ideally should be below 1. Hitting the Delete Unsolved button will bring the Solve Error down a bit more.

The result can be further improved by enabling Position and Rotation and pressing Refine Solve then Delete Rejected again. When the Solve Error is in a good spot, we can return to the CameraTracker tab and hit Update Solve. If any new tracks are rejected, we can return to the AutoTracks tab and select Min Length and error - min and press Delete Rejected again.


Now that this is set, it's ready for exporting!


To export it, I need to be in the CameraTracker tab and go to Export -> Scene and hit Create. By default, this is set to Link Output which creates a live link between the CameraTracker and what is being exported.

A ScanlineRender node is needed to take the 3D elements and render them back into the 2D image. The bg input connects to the image, obj/scn to the Scene node, and cam to the Camera node:

The next thing to do is determine the ground plane. One way to do this is by looking for the best tracks and using those - they need to have long track lengths and low error values. Once enough are selected, I can right click the footage -> Ground Plane -> Set to Selected. Pressing Tab in the viewer will switch it into 3D space, showing the environment generated from those points as well as the recreated camera animation:

For establishing real world scale, I can select 2 points on an object in the scene, then right click -> Scene -> Add Scale Distance. Then, inside the CameraTracker node, I go into the Scene tab and click on a dist entry to manually input the distance. The best unit to use is cm because Maya uses it by default. Now the distance scales up from 1 to ~447cm.
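
In other words, setting a known distance just rescales the whole solve by the real distance divided by the solved distance; a tiny illustration with those numbers:

    # If two tracked points are 1.0 scene units apart but measured at ~447 cm in reality,
    # the point cloud and camera path are effectively multiplied by:
    solved_distance = 1.0      # distance between the two points in the raw solve
    real_distance = 447.0      # measured distance in cm
    scale_factor = real_distance / solved_distance   # = 447.0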

Back in the 3D view, it shows the increased size to all the points, which makes it easier to try working with real world scales!

The last thing to do is scale up these points so that they will be much easier to see when working with the CG elements in Maya. To do that, I need to connect the ScanlineRender node to the Viewer, go into the CameraTrackerPointCloud node, and find the Point Size setting. Upscaling this to something like 800 should work for later.

And that's it, ready to go! Going back into the 2D view will cause one of these points to fill up the whole frame, so the point size will need to be reset back to 1 to see the entire image again.

 

Closing Thoughts


So that was a lot of stuff, but I think the next exercises will look at using these point clouds inside of Maya and doing some technical wizardry in there before bringing it back into Nuke for...something. I shall see! I also don't know if I will stick to this lengthy format for everything I've been working on, since it's really time-consuming and feels more like a copy of my notes as opposed to the normal type of blog I would write.
