Announcing NNSuperResolution v4.0.0

We are happy to finally be able to release NNSuperResolution v4.0.0! This new major release features fully retrained networks for all upscale solutions (stills as well as sequences). The biggest difference is that we have replaced the previously rather simplistic optical flow module with the very successful and much more performant variant “A” model from NNFlowVector. We have also increased the dataset size and the training resolution. Together, these changes make the new models perform better overall in terms of sharpness, with fewer alignment artifacts. We are also adding support for Nuke15.1 (as with the recent update releases of NNCleanup and NNFlowVector).

Have a look below to see a comparison between NNSuperResolution v4.0 and v3.4. We showcase three example frames from three different videos upscaled 4x in sequence mode:

[Image gallery: three comparison sets, each showing the frame upscaled with NNSR v3.4, upscaled with NNSR v4.0, and the original source.]


We did investigate and train different solutions for upscaling stills with alpha channel support. Unfortunately, none of them were good enough to release at this point, so we decided to ship all the other significant updates rather than hold them back any longer. Hopefully this is something we can revisit in an upcoming release, but for now it has been moved down the priority list again.

It’s great to finally be able to release this version, as it has been in the making for almost a full year by now (it has involved A LOT of model training).
We hope you like it!

Cheers,
David

Announcing NNFlowVector v2.3.0 and NNCleanup v1.5.0

We have recently released new point releases of NNFlowVector and NNCleanup. One important feature for both of these releases is support for Nuke15.1, the latest Nuke release from Foundry, made public in June earlier this year. Nuke15.1 updates the bundled version of PyTorch from v1.12.1 to v2.1.1, which is a rather huge step in PyTorch development. This means faster and better execution for machine learning models in general, but the biggest difference is on the macOS platform. The old version (PyTorch v1.12.1) was the first to support MPS acceleration, but quite a lot of operations weren’t yet implemented for it and automatically fell back to the CPU instead. With the new release (PyTorch v2.1.1) the MPS support is far more mature and the speed has improved significantly. Here is a speed test we ran for NNFlowVector on a Full HD clip (with process scale set to 0.5) on a MacBook M3 Pro with 36 GB of unified RAM:

Nuke15.0: Variant “A” – 4.3 sec/frame (MPS)
Nuke15.0: Variant “BB” – 33.9 sec/frame (CPU fallback)

Nuke15.1: Variant “A” – 2.1 sec/frame (MPS)
Nuke15.1: Variant “BB” – 3.0 sec/frame (MPS)

As you can see above, using the same version of NNFlowVector (v2.3.0), the “A” variant runs about twice as fast in the Nuke15.1 build as in the Nuke15.0 build, and the “BB” variant more than ten times faster now that it runs on MPS instead of falling back to the CPU.
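To put numbers on it, here is a quick back-of-the-envelope calculation of the speed-ups implied by the timings above (just arithmetic on the published benchmark numbers, not plugin code):

```python
# Speed-ups implied by the per-frame benchmark timings above (seconds/frame).
timings = {
    "A":  {"Nuke15.0": 4.3, "Nuke15.1": 2.1},
    "BB": {"Nuke15.0": 33.9, "Nuke15.1": 3.0},
}

speedups = {
    variant: t["Nuke15.0"] / t["Nuke15.1"] for variant, t in timings.items()
}

for variant, s in speedups.items():
    print(f'Variant "{variant}": {s:.1f}x faster in the Nuke15.1 build')
```

Which works out to roughly 2x for “A” and over 11x for “BB”.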

NNFlowVector v2.3.0 also features a new transformer variant called “BB”. This variant is similar to the “AA” variant, but it has been trained on larger image areas (using a much larger GPU), which makes it perform even better in most scenarios. Our recommendation is to start with the default “A” model variant, and also try the “AA” and “BB” variants when testing which one works best for your specific material. NNFlowVector v2.3.0 also features an important fix for anamorphic material.

NNCleanup v1.5.0 also features a whole new suite of model variants, called “AAA”, “BBB”, “CCC” and “DDD”. These have been trained using a much larger batch size on a much larger dataset, which makes them perform quite a lot better in general. (Because of this, we have deprecated the original “A”, “B”, “C” and “D” model variants.)

We hope you find these updates as useful as we do!
Cheers, David

Releasing all our plugins with macOS support!

The latest release of Nuke changed a lot for us when it comes to supporting macOS. Since Nuke15.0 is natively compiled for the arm64 architecture, i.e. Apple’s M-series processors/silicon, and comes with PyTorch 1.12 bundled internally, which supports acceleration using MPS (Apple’s Metal Performance Shaders), it is suddenly possible for us to build compatible and accelerated plugins.

This is what we have primarily spent the last couple of months doing, i.e. building, adapting and testing NNSuperResolution, NNFlowVector and NNCleanup for Nuke15.0 on macOS. They all work well and we feel they are ready for a public release, which is very exciting!

However, it’s worth noting that the MPS support in PyTorch 1.12 is still very early, and hence not fully fleshed out. This means that MPS acceleration performs well in some situations and not so well in others; it works better for NNSuperResolution than for NNFlowVector, for example. This is just a matter of time, though: as soon as Foundry updates the PyTorch version shipped with Nuke, MPS support in general will automatically be much better. Hopefully we will already benefit from this in Nuke15.1; time will tell.
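As a side note, you can check whether a PyTorch build sees an MPS device at all. `torch.backends.mps.is_available()` is PyTorch’s own API from v1.12 onwards; the import guard below is just there so this sketch also runs where torch is missing:

```python
# Detect whether PyTorch's MPS backend is available; fall back to CPU
# when torch is not installed or the backend is missing/unavailable.
try:
    import torch
    mps_backend = getattr(torch.backends, "mps", None)
    use_mps = bool(mps_backend is not None and mps_backend.is_available())
except ImportError:
    use_mps = False

device = "mps" if use_mps else "cpu"
print(f"Selected device: {device}")
```

On an Apple silicon machine running Nuke15.0’s bundled PyTorch this should report “mps”; anywhere else it falls back to “cpu”.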

Having noted the above, the only real difference you will notice as an artist while using the plugins is that NNFlowVector on macOS will not accelerate the “AA” transformer-based model for now (it will automatically switch to classic CPU processing, so it will still work, just very slowly). Other than that, it should all feel very familiar compared to use on Linux/Windows.

As a final note, to be super clear: you need a machine with an Apple M-series processor to use these new macOS builds of the plugins (they will NOT work on older Intel-based Macs).

Happy compositing!
Cheers, David

NNSuperResolution updates

We have previously reported on a malloc-related crash with NNSuperResolution when used in Nuke13.2v2 or later. That problem could always be worked around by setting the environment variable NUKE_ALLOCATOR=TBB, which made the plugin stable to use (this has been mentioned both here on our website and in the “Documentation.pdf” included in all NNSuperResolution zip downloads). The root cause of this issue has been very tricky to pinpoint, and we have been in tight communication with Foundry and their engineers over a couple of months to finally be able to solve it.
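For anyone still running an affected build, the workaround is a one-liner before launching Nuke (shell syntax for Linux/macOS; the Nuke launch path in the comment is just an illustration):

```shell
# Workaround for affected builds: switch Nuke to the TBB allocator.
export NUKE_ALLOCATOR=TBB
# ...then launch Nuke as usual, for example:
# /usr/local/Nuke13.2v2/Nuke13.2
```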

We are happy to announce the bug fix release NNSuperResolution v3.4.4. This release is stable without the need for the environment variable mentioned above. It also includes another minor bug fix; please see the release notes.

With that out of the way we are pushing forward with the work on NNSuperResolution v4.0! We have actively been training neural networks for over half a year for this new major release, and are currently closing in on the last bits and pieces. We plan and hope to release it within a month or two, and will of course announce it here as soon as it is out!

Cheers, and happy compositing!
David

All plugins released for Nuke15.0 and more

We have finally released all of our plugins for Nuke15.0!
We have also released all the latest versions of the plugins as Windows builds. This means that you can now go ahead (for Linux and Windows) and download NNSuperResolution v3.4.3, NNFlowVector v2.2.1 and NNCleanup v1.4.1.

The following section is rather technical, but good information if you are interested, or a person that is handling installation of our plugins and such:
A rather big update in general is that the builds for Nuke14.1 and Nuke15.0 now dynamically use Nuke’s own bundled copy of libtorch/PyTorch. This is in contrast to all previous versions, where we shipped our own internal copy of PyTorch, for multiple reasons: we started building plugins before Nuke 13.0 was released, i.e. before there even existed a public version of Nuke with a bundled PyTorch; we later needed a more modern version of PyTorch than what was bundled with Nuke 13; and we wanted to compile against a more modern version of CUDA/cuDNN than what Nuke 13 came with, so we could support new generations of Nvidia GPUs early (without relying on JIT compilation). We also wanted to be in control of which compute capabilities we natively supported in our own releases.
With Foundry’s release of Nuke14.1/Nuke15.0, the reasons above no longer apply: Nuke now ships with a modern enough PyTorch for our needs, built against modern CUDA/cuDNN versions that natively support up to the RTX 40xx series of GPUs (compute capability 8.9). The main benefit of using Nuke’s built-in version of PyTorch is file size. For Nuke14.1/Nuke15.0 our own binaries are way smaller (1-2 MB instead of 1-2 GB), and we don’t need to supply a copy of the CUDA/cuDNN libs since they already exist within the Nuke installation itself. One example: the zip download of the NNFlowVector v2.2.1 build for Nuke15.0 (Linux) with GPU support is 240 MB instead of 2.5 GB.

The documentation about the above is not fully up to date yet. We of course intend to fix this going forward, but felt it was more important to get the latest builds released (rather than delay them for documentation work).

Another good thing to know is that, at the same time as releasing the Nuke15.0 builds, we did a bug fix of NNSuperResolution, bumping the version from v3.4.2 to v3.4.3. This is an important fix for a regression that made RGBA processing not work. Please be sure to download and run the latest released versions. 🙂

Have a great year compositing in 2024!
Cheers, David

New updated versions of all plugins (Linux)

This is an across-the-board maintenance release focusing on compatibility and stability. All plugins now feature:

  • Better automatic release of used GPU memory after processing has finished. You also no longer need to restart Nuke to release all memory; just do a “File / Clear”
  • Added support for Nuke 14.1
  • Added support for CUDA 11.8 and hence native Ada Lovelace/Hopper compatibility, e.g. RTX40xx GPUs
  • Fixes for a few edge case bugs that led to crashes
  • Better info about JIT-compilation and CUDA compatibility
  • Better error logging (telling you if you are running out of GPU VRAM more clearly)

In addition to all the above, NNSuperResolution v3.4 supports single-channel layers and has been internally optimized to run about 20% faster in sequence mode! The full change log can be found here.

In addition to the list above, NNCleanup v1.4 also features four new neural network variants (AA-DD) with improved detail and color stability. The full change log can be found here.

NNFlowVector v2.2 includes the full list of changes above, but no other extra changes. The full change log can be found here.

This release is so far only for the Linux builds of the plugins, sorry about that. We decided it was better to get this release out to the public as soon as possible and then continue the work on the Windows builds (instead of sitting on the Linux builds for some undetermined amount of time). It’s hard to tell exactly how long the Windows builds will take, but a likely estimate is before the end of January (and hopefully earlier!).

If you are unsure which CUDA build variant you should download and run, please have a look at our new CUDA compatibility chart page.

To follow up on the last blog post: we are still very much working on NNSuperResolution v4.0, but it has turned out that our initial estimate of the training time needed for the new upscale network (with the internal optical flow solution from NNFlowVector) was pretty far off. Instead of a couple of weeks for one of the more complex training stages, it turns out it needs to number-crunch for a couple of months(!). This is mainly due to the need for a much higher resolution of the training patches, since the new optical flow network needs larger spatial coverage to work correctly. Hence we need to push the timetable for getting NNSuperResolution v4.0 released to some time this spring, instead of the end of this year as previously communicated. Good things come to those who wait… 🙂

Hope you like these new versions as much as we do!
Cheers, David

Status update and development roadmap

We haven’t posted any new releases for a while, so we feel it’s time to give you an update on what we’re currently up to and what’s to come. But first, a little background, looking back at the year so far. The year started with the release of our third Nuke plugin, NNCleanup. After getting v1.0 out in February, we quickly followed up with a couple of maintenance releases. When we felt NNCleanup was in a stable state, we circled back to our most popular plugin, NNFlowVector.

We adapted a modern, transformer-based optical flow solution for the plugin and trained the network from scratch using the same dataset as the already released solution. After a few months of work we released NNFlowVector v2.1 with the new “AA” variant. It has been welcomed by lots of users across the globe, so we are very happy with the addition.

After the release of NNFlowVector, we felt it was time to circle back to our first plugin, NNSuperResolution. This is where all our development resources are currently going. The most important change is swapping out the rather old and simple optical flow solution that was baked into the sequence mode upscaler, replacing it with the very successful model from our NNFlowVector plugin. With this new and very performant optical flow generator, the upscale solution can much more reliably warp the previous frame into the current one and hence has more information to work with. The result is an overall sharper and more stable image with fewer artifacts. It does mean retraining the whole solution from scratch, though, so it’s taking quite a lot of time. All variants need retraining (Alexa 2x, Alexa 4x, CG 2x and CG 4x), and each is a two-step process taking a week or two of processing time. We are also looking into the possibility of a CG/RGBA solution for stills, basically so you can upscale textures with an alpha channel. This solution is not working satisfactorily yet, though, so we are not making any promises about it just yet. The aim is to get NNSuperResolution v4.0 released at the end of the year.

After the new NNSuperResolution is released, it’s time to circle back to NNCleanup again. The big thing to attack then is a sequence mode, i.e. being able to clean up/paint away objects in moving material. This work has already started, but it’s very complex, so it will take quite a lot of development time. The aim and hope is to get NNCleanup v2.0 released, with a working sequence mode, some time during 2024.

Cheers,
David

New version of NNFlowVector released

We have officially released v2.1 of NNFlowVector. The new version features render license support, Nuke Indie support and a new variant called “AA” that is based on a new and modern transformer model. You can go ahead and download it now!

The render license support means more flexibility when licensing our plugins. You can, for example, buy a couple of GUI licenses for your artists to use interactively while working, and then a bunch of render licenses for the farm to batch-process with (without interfering with the GUI licenses). If you are a bit technically creative, you can also make your farm use the GUI licenses during off-hours to up the license count for overnight batch rendering on the farm.

Nuke Indie support opens up the use of NNFlowVector for more Nuke users. All of our plugins are now supporting Nuke Indie, and are listed as such on Foundry’s 3rd party listing of plugins for Nuke.

The new “AA” variant is an even more complex neural network for solving optical flow / motion vectors for image sequences. It can handle some complicated situations better than the existing “A”-“H” variants. Due to its increased complexity, it also uses more VRAM on your GPU than the other variants. To be able to fit it on your GPU, you might need to lower the “max size” parameter. We have also introduced a new parameter called “separate inferences” that solves the forward and backward passes one after the other instead of in parallel, which also makes the plugin use a bit less memory. Worth knowing is that the new variant is not always better, so it’s still recommended to try out and compare a couple of different variants on your own specific image material.

Take care and happy compositing!
Cheers, David