Render license support, Nuke Indie support and NNFlowVector 2.0

We are pleased to announce that we have released both NNSuperResolution (v3.3.0) and NNCleanup (v1.3.0) with dedicated render license support. This gives you more flexibility in how you license our plugins. You can, for example, buy a couple of GUI licenses for your artists to use interactively, and then a bunch of render licenses for the farm to batch process with (without them interfering with the GUI licenses). If you are a bit technically creative, you can also have your farm use the GUI licenses during off hours to increase the license count for overnight batch rendering. Adding render license support was a direct response to customer requests, and we fully agree it’s a good thing! If you already have a node locked license, or if you’re a studio with a site license, there is no need to worry. You don’t have to change anything, and everything will keep working as it does currently (i.e. the nodes will continue rendering using the GUI licenses in those cases).
NNFlowVector will also gain render license support in its next point release, which should be available a few weeks from now.

The render licenses are available for purchase in the Shop already, and are priced at $59 USD/year.

NNCleanup (v1.3.0) has also gained Nuke Indie support with this release!
NNFlowVector will follow with Nuke Indie support in its next point release as well.

Since our last blog post in February, we have also released NNFlowVector v2.0 as a stable public release (i.e. no longer a beta version). This means you can now go ahead and download and install it, and use the matte input support on both Linux and Windows.

Until next time!
Cheers, David

NNCleanup v1.1.0 released

This new version features some new and important controls for choosing which area to process when doing the cleanup/inpainting. You now have the option to process either “Full frame”, “Specified region” or “Matte input’s bbox” (the old behaviour of v1.0.0 was always “Full frame”). Because the area that needs processing is usually just a subsection of the image, these options make it possible to work on really large images, for example 4K plates and even 16K HDRIs. We’ve also added a “process_scale” knob that makes it possible to work on really large areas by internally downscaling them before processing (and upscaling them again afterwards). All this makes the memory footprint on the GPU much smaller, and hence makes it possible to keep working GPU accelerated.
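To get an intuition for why this helps, here is a rough back-of-envelope sketch in Python (the single four-channel float32 buffer is our simplifying assumption for illustration, not the plugin’s actual memory layout):

```python
def region_footprint_mb(width, height, process_scale=1.0):
    """Rough size of one float32 RGBA buffer for a processed region.

    The real plugin allocates more intermediates than this, so treat
    the number as a lower bound for intuition only.
    """
    w = int(width * process_scale)
    h = int(height * process_scale)
    return w * h * 4 * 4 / (1024 * 1024)  # 4 channels x 4 bytes each

# A full 16K x 8K HDRI versus processing it at process_scale 0.5:
print(region_footprint_mb(16384, 8192))       # 2048.0 MB
print(region_footprint_mb(16384, 8192, 0.5))  # 512.0 MB
```

Halving the scale cuts the pixel count to a quarter, which is why combining a region option with “process_scale” lets even 16K material fit on the GPU.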

An example cleanup on a 16K HDRI

We have added a couple of HDRI examples on the product page, including downloadable EXRs of before and after.

Cheers,
David

First release of our new product NNCleanup

Example of the cleanup / inpaint result from NNCleanup

We’re excited to announce our third product NNCleanup, a Nuke plugin for quick cleanup/inpainting tasks. Read more about the plugin, and have a look at a few more examples, on the product page. If you are keen to try it out yourself in Nuke, please request a free trial license. We already have ideas for improvements, but in the meantime, let us know what you think using this form; we are keen to hear your input!

Cheers,
David

NNFlowVector v2.0.0b5 released!

We have just released a public beta of v2.0.0 of NNFlowVector. This version features a matte input, so you can make the plugin ignore a selected area. To be more precise, it’s not really ignoring the selected area, but rather treating it with machine-learning-based inpainting to approximate what it would have looked like if the selected objects hadn’t been there during filming. This makes it possible, for example, to create motion vectors of the background wall even if a character is passing by in front of it. You can then use the vectors to track footage/image patches onto the wall. You still have to roto the character back in, of course, but the tracking is solved.
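For those curious about what “use the vectors to track patches” means mechanically, here is a minimal numpy sketch of vector-based warping (the nearest-neighbour sampling and function name are our own simplifications for illustration; inside Nuke you would feed the vectors to an IDistort instead):

```python
import numpy as np

def warp_with_vectors(src, vectors):
    """Pull pixels from src (H, W) along per-pixel motion vectors
    (H, W, 2), roughly what IDistort does with a motion channel."""
    h, w = src.shape
    ys, xs = np.mgrid[0:h, 0:w]
    sx = np.clip(np.round(xs + vectors[..., 0]).astype(int), 0, w - 1)
    sy = np.clip(np.round(ys + vectors[..., 1]).astype(int), 0, h - 1)
    return src[sy, sx]

# A uniform 1-pixel shift to the right pulls each pixel's value
# from its right-hand neighbour:
src = np.arange(16, dtype=float).reshape(4, 4)
vec = np.zeros((4, 4, 2))
vec[..., 0] = 1.0
warped = warp_with_vectors(src, vec)
```

With a clean background flow from the matte input, this kind of warp follows the wall rather than the character crossing in front of it.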

An example of how the matte input feature will treat the produced motion vectors

We have decided to make the v2.0.0 release available as a beta version because we are really keen for you to get your hands on the matte input feature. This has been the single most requested feature, so we are very excited to deliver on it! Releasing it as a beta gets it into your hands earlier, instead of postponing the release a couple of months for extra testing. We believe the plugin is already stable, but if you experience bugs or crashes, please drop us a mail at [email protected] explaining what is going on. Thanks for the help!

Cheers,
David

Nuke 14 versions released, plus some exciting news!

We have just released builds of both NNSuperResolution and NNFlowVector for Nuke 14.0. What’s good to know is that Foundry has updated the bundled CUDA version in Nuke to v11.1.1, and the cuDNN version to v8.4.1. We have matched our plugin builds to these versions so there are no compatibility problems. You basically get native compatibility with all supported GPUs up to compute capability 8.6, i.e. Ampere cards (for example the RTX3080 and RTX3090). If you are lucky enough to own a brand new RTX4080 card or similar, you will have to rely on JIT compilation of the kernels. The plugins will work, but you will need to wait for the kernels to compile the first time around (there is more info about this in our documentation PDF). Enjoy!

A little teaser of the upcoming feature of NNFlowVector v2.0

We are also pleased to announce that we are very close to releasing NNFlowVector with matte support! This will be released as v2.0 in the beginning of the new year. We are very excited about this since this is the most common feature request we get. If you are very keen to test this out and can’t wait, we are interested in having beta testers. Please use the normal contact form, and please let us know what build you are using (platform, Nuke version, CUDA version). We will then send you an email with a special download link so you can get up and running.

Hope you like our Christmas presents. 🙂
Merry Christmas and Happy New Year!
Cheers, David

All downloads should now be much faster

We have successfully migrated all of our files available for downloading to Amazon’s AWS S3 cloud system. We received your feedback about our downloads being painfully slow (for some of you it even took days to download the latest builds of our plugins). We agree that this was not acceptable, and have now solved it by upgrading to a much more solid solution. We hope that this will provide nice download speeds going forward, no matter where in the world you are located.

Cheers,
David

Bug fix release of NNFlowVector and some other updates

We have just released a new version of NNFlowVector, v1.5.1. It’s a patch/bug fix release with the following release notes:

  • Patch release fixing a streaking error that occurred in the last processing patch (furthest towards the bottom-right of the processed image) at some resolutions (those that needed padding to become divisible by 8).
  • Improved the blending of the seams between processing patches. The problem was not always visible, but became apparent in some specific combinations of maxsize, overlap and padding values.
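As an aside, the “divisible by 8” padding mentioned above is simply rounding a dimension up to the next multiple of 8; a tiny sketch (the helper name is ours, not taken from the plugin):

```python
def pad_to_multiple(size, multiple=8):
    """Round an image dimension up to the next multiple of `multiple`."""
    return -(-size // multiple) * multiple  # ceiling division

# A 1998 x 1080 DCI 2K "flat" plate: the width needs padding,
# while the height is already a multiple of 8:
print(pad_to_multiple(1998))  # 2000
print(pad_to_multiple(1080))  # 1080
```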

We have noticed that the NNFlowVector Utility nodes “MotionVector_DistortFrame” and “MotionVector_FrameBlend” don’t work in Nuke13.1 and Nuke13.2. They do, however, work in Nuke13.0 and earlier versions. We investigated this and found the cause to be a bug in how Nuke handles motion vector channels in the IDistort node when they are time shifted using a TimeOffset node. If you are interested, here is the bug ticket (ID 518631) on Foundry’s site: https://support.foundry.com/hc/en-us/articles/7496106165010-ID-518631-The-Viewer-outputs-a-grey-image-when-there-is-an-IDistort-downstream-of-a-time-node

We have recently relocated our website to a new hosting service. It’s now much faster and more reliable. We are currently looking into moving our file hosting service as well, to get all downloads working better (we know that they have been painfully slow at times, thanks for letting us know!).

Due to client requests, we have added a new license type for our plugins called a “Global license”. This type of license is tailored for large companies that operate worldwide, with operations in many different geographical locations around the globe. You will find the option in the Shop next to the other types (“node locked”, “floating” and “site license”).

Have a nice weekend!
Cheers,
David

Bug fix update to NNSR and some future plans

We released an important bug fix update to NNSuperResolution yesterday (12th of September). It’s called v3.2.1, and fixes a regression where overscan support didn’t work as intended in sequence mode (v3.2.0 actually crashed if you tried to calculate a sequence that had overscan). The plugin is now patched and fully working again, so please go ahead and install the new version (there are no other changes in this version update).

We wanted to take the opportunity to communicate some of what we are aiming to work on going forward. The single most requested feature for NNFlowVector is mask support, i.e. the ability to exclude some local object movements from the solve of the resulting optical flow. We are currently in the process of implementing this, but it’s a rather complex undertaking which involves gathering a lot of example data, training another neural network, and rewriting parts of the plugin. Hence we are not setting any hard time frames at this point. If everything goes according to plan, it will be released sometime next year.

We are also planning to investigate if we can squeeze some extra quality out of NNSuperResolution’s upscaling in sequence mode. The idea is to replace its rather simple internal optical flow engine with the much more competent optical flow engine that’s implemented in NNFlowVector. That way the solution would be able to lean even more onto the temporal features than today, which will hopefully result in increased end quality.

We are also slowly, and secretly, working on our third Nuke plugin powered by AI/ML! It’s too early to tell what it is at this point, so you’ll just have to check back on our website from time to time to stay in the loop.

All the best,
David

New version release of NNFlowVector, v1.5.0

We are proud to finally release the first major update to our NNFlowVector plugin. This is a release with a bit of everything in it: improved neural networks, better UI and user control, better performance overall, better compatibility with Nuke13.x, etc. Here are the full release notes:

  • Fully re-trained the optical flow neural networks with optimized settings and pipeline. This results in even higher quality of generated vectors, especially for object edges/silhouettes.
  • To better handle high dynamic range material, all training has internally been done in a logarithmic colorspace. This made the “colorspace” knob unnecessary, and hence it has been removed. (Please create new node instances if you are updating and have Nuke scripts containing nodes from the old version.)
  • Implemented a “process scale” knob that controls what resolution the vector calculations happen in. A value of 0.5 will, for example, process the vectors in half res, and then scale them back to the original res automatically.
  • Improved the user control of how many iterations the algorithm will do while calculating the vectors. The knob “iterations” is now an integer knob instead of a fixed drop down menu.
  • Added a knob called “variant”, enabling you to choose between several differently trained variations of the optical flow network. All network variants produce pretty similar results, but some might perform better on a certain type of material, so we encourage you to experiment. If you are unsure, go with the default variant “A”.
  • Speed optimizations in general. According to our own internal testing, the plugin is now about 15% faster to render overall.
  • Added an option for processing in mixed precision. This uses a bit less VRAM, and is quite a lot faster on some GPU architectures that support it (RTX).
  • Added an option for choosing which CUDA device ID to process on. This means you can pick which GPU to use if you have a workstation with multiple GPUs installed.
  • Optimized the build of the neural network processing backend library. The plugin binary (shared library) is now a bit smaller and faster to load.
  • Compiled the neural network processing backend with MKLDNN support, resulting in a vast improvement in rendering speed when using CPU only. According to our own testing it sometimes uses less than 25% of the render time of v1.0.1, i.e. 4x the speed!
  • Updated the NVIDIA cuDNN library to v8.0.5 for the CUDA10.1 build. This means we fully match what Nuke13.x is built against, so our plugin can co-exist with CopyCat nodes as well as other AIR nodes by Foundry.
  • Compiled the neural network processing backend with PTX support, which means that GPUs with compute capability 8.0 and 8.6, i.e. Ampere cards, can now use the CUDA10.1 build if needed (see above). The only downside is that they have to JIT compile the CUDA kernels the first time they run the plugin. Please see the documentation for more information about setting the CUDA_CACHE_MAXSIZE environment variable.
  • Added an internal check that the bounding box doesn’t change between frames (animated bboxes are not supported). The plugin now throws an error instead of crashing.
  • Better error reporting to the terminal.
  • Added support for Nuke13.2.
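If you are running the CUDA10.1 build on an Ampere card, it is worth making sure the JIT-compiled kernels stay cached between sessions. A sketch of the idea (the 4 GiB value below is our example figure, not an official recommendation; see the documentation PDF for the suggested setting):

```shell
# Grow NVIDIA's JIT compilation cache (value in bytes) so the PTX
# kernels compiled on first run are kept instead of being evicted
# and recompiled. Set this before launching Nuke.
export CUDA_CACHE_MAXSIZE=4294967296
```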

Hope you like it, and that you find it even more useful in production!
All the best,
David