This is the version history, or change log, for all the released versions of NNFlowVector (latest official release at the top).

v2.2.1 – November 2023

  • Corrected a bug that made the node crash when a GPU build was used on a machine without a GPU installed. The node now automatically falls back to using the CPU instead, as intended.

v2.2.0 – November 2023

  • More efficient release of used GPU memory while the node is used in a live Nuke GUI session, i.e. you no longer have to restart Nuke to get some of the GPU memory back. The node also performs a more thorough GPU memory cleanup when it is fully deleted (usually when you run “File/Clear” in the Nuke GUI).
  • Added native support for the NVIDIA RTX40xx series of GPUs, i.e. compute capability 8.9 (also supporting 9.0), with a version built against CUDA 11.8 and cuDNN 8.4.1.
  • Added support for Nuke14.1 (which is also natively built against CUDA 11.8 and cuDNN 8.4.1).
  • Added a check for GPU compatibility: the GPU processing options are now disabled if the current GPU is not supported (instead of crashing).
  • Improved the error logging. It is now much clearer when the node errors out because you are running out of GPU memory.
  • Added text information knobs to the “About” tab that print out which CUDA/cuDNN versions the current build is compiled against, which compute capability is natively supported, which compute capability your current GPU supports, and whether or not you will be using JIT-compiled kernels. We have also added info about compatibility with Nuke’s own AIR tools in relation to the current build configuration.
  • Added a warning to the terminal during node creation if you are relying on JIT-compiled kernels, so you get a heads-up before the JIT compilation kicks in for the first time (which usually takes just over half an hour). There are also warnings if the CUDA_CACHE_MAXSIZE environment variable is not set correctly. These warnings can be suppressed with the new environment variable PIXELMANIA_SUPPRESS_KERNEL_WARNINGS if you want (see the sketch after this list).
  • Updated the NNFlowVector Utils gizmos “MotionVector_FrameBlend” and “MotionVector_FrameDistort” with a workaround that makes them work correctly again in Nuke13 and above.
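
  If you are relying on JIT-compiled kernels, both environment variables mentioned above can be set before Nuke starts, for example from a wrapper script or early in an init.py. The following is a minimal sketch only: the cache size used here is purely illustrative (see the main documentation for the recommended value), and the value expected by PIXELMANIA_SUPPRESS_KERNEL_WARNINGS is assumed to be "1".

      # Sketch only: set these before the NNFlowVector node is first created.
      import os

      # CUDA_CACHE_MAXSIZE controls the size (in bytes) of NVIDIA's JIT kernel cache.
      # 4 GiB is an illustrative value; use the size recommended in the documentation.
      os.environ.setdefault("CUDA_CACHE_MAXSIZE", str(4 * 1024 * 1024 * 1024))

      # Assumed to silence the JIT/cache warnings printed at node creation ("1" is a guess).
      os.environ.setdefault("PIXELMANIA_SUPPRESS_KERNEL_WARNINGS", "1")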

v2.1.0 – June 2023

  • Added support for dedicated render licenses.
  • Added support for Nuke Indie.
  • Added a new and modern model variant, available as the “AA” model in the “variant” drop-down menu. This is a transformer-based model which handles trickier situations better. It uses more VRAM on your GPU, so you might need to lower the “max size” knob somewhat to get it to run. You probably also want to turn on the new “separate inferences” knob (see below).
  • Added a new knob called “separate inferences”, which makes the node calculate the forward and backward vectors in two separate inference passes instead of at the same time. This makes the node use less VRAM on your GPU, at the cost of a slight render speed degradation (a setup sketch follows below).
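
  Setting up the new model from Python might look like the sketch below. Note that the node class name and the knob script names (“variant”, “separate_inferences”, “max_size”) are assumptions based on the UI labels, not confirmed identifiers, and the max size value is only illustrative; check the actual names in Nuke before using this.

      import nuke

      # Assumed node class name; adjust if the plugin registers a different one.
      node = nuke.createNode("NNFlowVector")

      node["variant"].setValue("AA")              # the new transformer-based variant
      node["separate_inferences"].setValue(True)  # two inference passes -> less VRAM
      node["max_size"].setValue(1024)             # illustrative value; lower it if you still run out of VRAM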

v2.0.0 (official release) – March 2023

  • Fixed a bug relating to when the bounding box of the input material isn’t the same as the image format. This was a regression in the v2.0.0b5 release, and now it’s working as it should again, exactly the same as in v1.5.1.

v2.0.0b5 (beta release) – January 2023

  • Implemented matte input support, to be able to ignore a selected region when generating motion vectors. In that region the output is instead a filled-in, machine-learning-enhanced area of vectors representing what would have been there if the matted-out object had not been present when the material was filmed.
  • Lots of restructuring of underlying code to make it possible to implement the matte support.
  • Lots of optimisations and enhancements to make the plugin more stable, more memory efficient and a bit faster.

v1.5.1 – October 2022

  • Patch release fixing a streaking error that occurred in the last processing patch (furthest towards the bottom-right of the processed image) at some resolutions (resolutions that needed padding to become divisible by 8; see the sketch after this list).
  • Improved the blending of the seams between processing patches. The problem was not always visible, but became apparent with some specific combinations of max size, overlap and padding values.
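
  As a general illustration (not the plugin’s internal code) of which resolutions needed padding: a dimension is padded up to the nearest multiple of 8 when it is not already divisible by 8.

      def padded_size(width, height, multiple=8):
          # Round each dimension up to the nearest multiple of `multiple`.
          return (width + (-width) % multiple,
                  height + (-height) % multiple)

      print(padded_size(1920, 1080))  # (1920, 1080) -- already divisible by 8, no padding
      print(padded_size(1998, 1080))  # (2000, 1080) -- 2K DCI flat needs padding, so it was affected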

v1.5.0 – June 2022

  • Fully re-trained the optical flow neural networks with optimized settings and pipeline. This results in even higher quality of the generated vectors, especially for object edges/silhouettes.
  • To better handle high dynamic range material, all training has now internally been done in a logarithmic colorspace. Hence the “colorspace” knob became unnecessary and has been removed/deprecated.
  • Implemented a “process scale” knob that controls the resolution in which the vector calculations happen. A value of 0.5 will for example process the vectors in half res, and then scale them back to the original res automatically (see the sketch after this list).
  • Improved the user control of how many iterations the algorithm performs while calculating the vectors. The “iterations” knob is now an integer knob instead of a fixed drop-down menu.
  • Added a knob called “variant”, to enable the user to choose between several differently trained variations of the optical flow network. All network variants produce fairly similar results, but some might perform better on a certain type of material, so we encourage you to experiment. If you are unsure, go with the default variant “A”.
  • Speed optimizations in general. According to our own internal testing, the plugin is now about 15% faster to render overall.
  • Added an option for processing in mixed precision. This uses a bit less VRAM, and is quite a lot faster on GPU architectures that support it (RTX).
  • Added an option for choosing which CUDA device ID to process on. This means you can pick which GPU to use if you have a workstation with multiple GPUs installed.
  • Optimized the build of the neural network processing backend library. The plugin binary (shared library) is now a bit smaller and faster to load.
  • Compiled the neural network processing backend with MKLDNN support, resulting in a vast improvement in rendering speed when using CPU only. According to our own testing it sometimes uses less than 25% of the render time of v1.0.1, i.e. 4x the speed!
  • Updated the NVIDIA cuDNN library to v8.0.5 for the CUDA10.1 build. This fully matches what Nuke13.x is built against, which means our plugin can co-exist with CopyCat nodes as well as other AIR nodes by Foundry.
  • Compiled the neural network processing backend with PTX support, which means that GPUs with compute capability 8.0 and 8.6, i.e. Ampere cards, can now use the CUDA10.1 build if needed (see above). The only downside is that they have to JIT compile the CUDA kernels the first time they run the plugin. Please see the documentation earlier in this document for more information about setting the CUDA_CACHE_MAXSIZE environment variable.
  • Added an internal check that the bounding box doesn’t change between frames (animated bboxes are not supported). The node now throws an error instead of crashing.
  • Better error reporting to the terminal.
  • Added support for Nuke13.2.
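
  As a rough illustration of the “process scale” arithmetic mentioned above (not the plugin’s internal implementation): the vectors are computed at the scaled-down resolution and then resized back up, and the usual optical flow convention is that the vector values are also multiplied by 1/scale so they stay correct in full-resolution pixels.

      def process_resolution(width, height, process_scale):
          # Resolution the vector calculations would run at for a given "process scale".
          return int(round(width * process_scale)), int(round(height * process_scale))

      print(process_resolution(3840, 2160, 0.5))  # (1920, 1080), i.e. half res for UHD material
      vector_gain = 1.0 / 0.5                     # vector values scaled by 2 when resized back to full res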

v1.0.1 – February 2022

  • Initial release