Usage of the node

The NNFlowVector node can produce normal motion vectors that are compatible with how Nuke handles motion vectors, i.e. they become available in the “motion” layer and also as a subset in the “forward” and “backward” layers. These can be used in native Nuke nodes such as VectorBlur, as well as in third party tools from Nukepedia (www.nukepedia.com). The motion vector output is available, without limits, in the free version of NNFlowVector.

Worth mentioning is that you want to set the “interleave” option in the Write node to “channels, layers and views” when pre-comping out motion vectors (this is Nuke’s default setting). This way the “forward” and “backward” layers are automatically combined by Nuke and represented in the “motion” layer as well. If you write out the files with “interleave” set to, for example, “channels” only, the “forward” and “backward” layers are written as separate parts in the resulting multi-part EXR sequence. This causes Nuke to fail to recombine them correctly into the “motion” layer when the files are read in again. You can easily see if this has happened by checking whether a “motion_extra” layer has been created instead of the standard “motion”.
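As a minimal sketch, this is how such a pre-comp Write node could be configured from Nuke’s Python API. The node and file names are placeholders, and the knob values should be verified against your Nuke version:

    import nuke

    # Assumes an existing NNFlowVector node named "NNFlowVector1" upstream.
    flow = nuke.toNode("NNFlowVector1")
    write = nuke.nodes.Write(inputs=[flow])
    write["file"].setValue("/path/to/precomp/vectors.####.exr")
    write["file_type"].setValue("exr")
    write["channels"].setValue("all")
    # Keep Nuke's default so "forward"/"backward" recombine into "motion" on read:
    write["interleave"].setValue("channels, layers and views")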

NNFlowVector can also, in the licensed/paid version, output smart vector compatible vectors. These are more complex motion vectors that can be used with native NukeX tools like VectorDistort, VectorCornerPin and GridWarpTracker.

Knob reference

There are some knobs that change the output of the neural network, i.e. the quality of the output vectors: iterations, exposure, gamma and colorspace. We recommend experimenting with these to find the settings that work best with your particular material.

The other knobs on the plugin that the artist can tweak are mostly related to generating motion vectors for large resolution images/sequences using the limited VRAM on the graphics card. The neural network requires large amounts of memory even for small resolution input images/sequences. To be able to generate vectors for a full HD sequence or larger, the images most likely need to be split up into several passes/patches to fit into memory. This is all handled transparently “under the hood” so the artist can focus on more important things. You might still need to tweak the settings, though, depending on the use case and the available hardware on your workstation.

mode

You can switch between generating normal “motion vectors” and the more Nuke specific “smart vectors”, depending on your needs and which nodes you are going to use the motion vectors with.

iterations

Defines how many refinement iterations the solve algorithm will run. Most often the default setting of 15 is good, but you can choose values from 5 to 25 (in increments of 5). A lower number is faster but coarser, while a higher number is slower and more refined. It’s easy to over-refine the result though, which introduces artefacts, so a higher number doesn’t necessarily mean a better result. Experiment with your particular material to find the optimal iterations setting.
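If you want to compare the allowed values systematically, a small render loop like the following hypothetical Python sketch can help. The node names and frame range are placeholders:

    import nuke

    flow = nuke.toNode("NNFlowVector1")      # placeholder node name
    write = nuke.toNode("Write1")            # a Write node rendering the vectors
    for it in range(5, 30, 5):               # the allowed values: 5, 10, 15, 20, 25
        flow["iterations"].setValue(it)
        write["file"].setValue("/tmp/flow_iter%02d.####.exr" % it)
        nuke.execute(write, 1001, 1010)      # render a short test range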

exposure

This is a normal exposure grade correction that is applied before the neural network processing. It is here for artist convenience since it’s worth playing a bit with the exposure balance to optimise your output.

gamma

This is a normal gamma grade correction that is applied before the neural network processing. It is here for artist convenience since it’s worth playing a bit with the gamma balance to optimise your output.
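For reference, a conventional exposure/gamma pre-grade behaves like the sketch below. This is the standard grading convention, shown only to illustrate what the two knobs above do; the plugin’s exact internal math may differ:

    def pre_grade(value, exposure=0.0, gamma=1.0):
        """Apply exposure (in stops) and gamma to a linear pixel value."""
        graded = value * (2.0 ** exposure)   # exposure: multiply by 2^stops
        return graded ** (1.0 / gamma)       # gamma: Nuke-style power of 1/gamma

    print(pre_grade(0.18, exposure=1.0, gamma=2.2))  # 0.18 -> ~0.63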

colorspace

Different options for the processing colorspace of the neural network. We recommend trying out both “logarithmic” and “sRGB” to see which gives the best result on your specific material. A rule of thumb is to process high dynamic range material using “logarithmic”, and low dynamic range material using “sRGB”. The option “raw” is available if you want to experiment with your own colorspace conversions before you input your material to the node.
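If you go with “raw”, a sketch of doing the conversion yourself upstream could look like this. The colorspace names are placeholders for whatever your pipeline uses, and the NNFlowVector class name is assumed from the plugin install:

    import nuke

    plate = nuke.toNode("Read1")                        # placeholder source
    convert = nuke.nodes.OCIOColorSpace(inputs=[plate])
    convert["in_colorspace"].setValue("scene_linear")   # placeholder colorspaces
    convert["out_colorspace"].setValue("Cineon")
    flow = nuke.nodes.NNFlowVector(inputs=[convert])
    flow["colorspace"].setValue("raw")                  # plugin passes input through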

max size

Max size sets the maximum dimension, in one direction, that an image patch can have, and is one of the most important knobs on the plugin. The default is 1100, which means the maximum dimensions an input patch is allowed to have are 1100×1100 pixels. In our experience that will use up to around 8 GB of VRAM on your graphics card. If you haven’t got that much available and free, the processing will error out with a CUDA memory error and the node will error in the DAG. To remedy this, and also to be able to input resolutions much higher than a single patch size, you can tweak the “max size” knob to adapt to your situation. You can lower it to adapt to having much less VRAM available. The plugin will split the input image into lots of smaller patches and stitch them together in the background. This will of course be slower, but it makes it possible to still run and produce much larger results. There is a text status knob above the “max size” knob (in between the two dividers) that will let you know how many image patches the plugin will run to create the final motion vector output.
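As a back-of-envelope illustration (not the plugin’s actual code), the patch count grows roughly like this with resolution, “max size” and overlap:

    import math

    def patch_count(width, height, max_size=1100, overlap=128):
        step = max_size - overlap                         # effective stride per patch
        cols = max(1, math.ceil((width - overlap) / step))
        rows = max(1, math.ceil((height - overlap) / step))
        return cols * rows

    print(patch_count(1920, 1080))                   # HD with defaults -> 2 patches
    print(patch_count(4096, 2160, max_size=768))     # 4K, low VRAM -> 28 patches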

Since motion vectors describe local and global movements in plates, they are pretty sensitive to which image regions are part of the algorithm’s solve. What this means is that the more the network sees, the better the result it will be able to produce. Keeping the max size as high as you can will produce better motion vectors. It’s worth mentioning that this is much more sensitive in a plugin like this compared to, for example, NNSuperResolution. It’s also worth mentioning that you might want to resize the input material down to produce the motion vectors, and then size the vectors up again, instead of trying to process too-large plates. An example would be to resize a 4K plate to 2K before processing, and then scale the vectors up to 4K again after processing. Don’t forget to scale the vectors accordingly, i.e. both resize the image up with a Reformat node and compensate for the larger format by multiplying the vectors by 2.0 using a normal Multiply (Math) node, as shown in the sketch below.
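A sketch of that 4K workflow in Nuke’s Python API. The format names are Nuke’s built-ins, the Read node is a placeholder, and the NNFlowVector class name is assumed from the plugin install:

    import nuke

    plate = nuke.toNode("Read1")                             # 4K source plate
    down = nuke.nodes.Reformat(inputs=[plate], format="2K_DCP")
    flow = nuke.nodes.NNFlowVector(inputs=[down])
    up = nuke.nodes.Reformat(inputs=[flow], format="4K_DCP")
    # Compensate the vector magnitudes for the 2x upscale:
    comp = nuke.nodes.Multiply(inputs=[up], channels="motion", value=2.0)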

overlap

Since the plugin runs multiple patches through the neural network, there will be lots of image patch edges present. Results near a patch edge are not as good as those a bit further in from the edge. Because of this, all the patches are processed with some spatial overlap. The overlap knob value sets the number of pixels the patches overlap. The default value is 128, which is usually a good start.

padding

The padding is closely connected to the overlap above. While the overlap sets the total number of pixels the patches overlap, the padding reduces the actual cross-fading area where the patches are blended, so that the very edge pixels aren’t used at all. The padding is also specified in pixels, and the default value is 16. This way of blending all the patches together has proven pretty successful in our own testing.
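To make the mechanics concrete, here is an illustrative sketch (not the plugin’s actual code) of how a horizontal cross-fade over the overlap band could behave with the default values:

    def blend_weight(x, overlap=128, padding=16):
        """Weight of the left patch at pixel x within the overlap band [0, overlap)."""
        fade_start = padding                  # outer `padding` pixels are not blended
        fade_end = overlap - padding
        if x < fade_start:
            return 1.0                        # fully the left patch
        if x >= fade_end:
            return 0.0                        # fully the right patch
        return 1.0 - (x - fade_start) / float(fade_end - fade_start)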

motion vector mode: forward and backward

Motion vectors describe the local movement of different parts of the plate from the current frame to the next frame (forward), and from the current frame to the frame before (backward). Different tools and algorithms have different needs when it comes to motion vectors: some work with just forward vectors and some need both forward and backward vectors. You can choose which ones to calculate, but we recommend always calculating and saving both.

Worth noting is that all the NNFlowVector Util nodes that take motion vectors as input (please see below in this document) require both forward and backward vectors.
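A quick hypothetical sanity check in Python before wiring up the Util nodes, using the layer names described above:

    import nuke

    node = nuke.selectedNode()
    layers = {c.split(".")[0] for c in node.channels()}
    missing = {"forward", "backward"} - layers
    if missing:
        nuke.message("Missing vector layer(s): " + ", ".join(sorted(missing)))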

smart vector mode: frame distance x

SmartVector, and the tools that come bundled with it in NukeX, is truly a brilliant invention that opens up a lot of powerful compositing techniques. The VectorDistort node is the most obvious one; it makes it possible to track images, or image sequences, onto moving and deforming objects in plates. This is made possible by the more complex set of motion vectors called smart vectors. While NukeX is able to produce these smart vectors, they are quite often pretty rough, which means they don’t hold up over longer sequences. Basically, the material you are tracking onto the plates deteriorates over time. This is where NNFlowVector comes in, by being able to produce cleaner and more stable motion vectors in the smart vector format. Hence the output of NNFlowVector is usable directly in the VectorDistort node. This is also true for other smart vector compatible nodes like VectorCornerPin and GridWarpTracker.
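As a sketch, wiring this up in Python could look like the following. The NNFlowVector class and knob/value names are assumptions based on the plugin install, and VectorDistort’s input order should be checked in your Nuke version:

    import nuke

    plate = nuke.toNode("Read1")                  # placeholder source plate
    flow = nuke.nodes.NNFlowVector(inputs=[plate])
    flow["mode"].setValue("smart vectors")        # assumed knob/value naming
    distort = nuke.nodes.VectorDistort()
    distort.setInput(0, plate)                    # source to be distorted
    distort.setInput(1, flow)                     # smart vector input (check order)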

You have to use the licensed/paid version of NNFlowVector to be able to produce smart vectors. If you want to try this feature out first, please request a free trial license.

Use GPU if available

There is another knob at the top of the plugin called “Use GPU if available”, which is “on” by default (recommended). This knob is only present if you’ve installed a GPU version of the plugin. It does not change the motion vector output, but rather how the result is calculated: if it is “on” the algorithm runs on the GPU hardware of the workstation, and if it’s “off” the algorithm runs on the normal CPU of the workstation. If the plugin can’t detect a CUDA compatible GPU, this knob is automatically disabled/greyed out. It is similar to the knob you’ll find in Foundry’s own GPU accelerated plugins, for example the ZDefocus node.
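If you want to make sure every NNFlowVector node in a script has it enabled, a sketch like this works. The internal knob name follows Foundry’s convention for their own GPU nodes and is an assumption for this plugin:

    import nuke

    for node in nuke.allNodes("NNFlowVector"):
        if "useGPUIfAvailable" in node.knobs():   # knob name is an assumption
            node["useGPUIfAvailable"].setValue(True)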

We highly recommend always having this knob “on”, since the algorithm will run A LOT faster on the GPU than on the CPU. To give you an idea of the difference, we’ve seen calculation times for the same input image of around 2.5 seconds on the GPU and about 2 minutes and 20 seconds on the CPU (a factor of 56x slower on the CPU). These numbers are just a simple example to show the vastly different processing times you will get using the GPU vs. the CPU. For processing speed references, please download and test run the plugin on your own hardware/system.

Knob demo video

We haven’t got a video introduction to the knobs of NNFlowVector yet, but if you are interested in the “max size”, “padding” and “overlap” knobs, please watch the following demo of NNSuperResolution, since the same information applies to those knobs: