This is an across-the-board maintenance release focusing on compatibility and stability. All plugins now feature:
Better automatic releasing of used GPU memory after processing has finished. You also no longer need to restart Nuke to release all memory; just do a “File / Clear”
Added support for Nuke 14.1
Added support for CUDA 11.8, and hence native Ada Lovelace/Hopper compatibility (e.g. RTX 40xx GPUs)
Fixed a few edge-case bugs that led to crashes
Better info about JIT-compilation and CUDA compatibility
Better error logging (telling you more clearly if you are running out of GPU VRAM)
In addition to all the above, NNSuperResolution v3.4 supports single-channel layers and has been internally optimized to run about 20% faster in sequence mode! The full change log can be found here.
In addition to the list above, NNCleanup v1.4 also features four new neural network variants (AA-DD), with improved detail and color stability. The full change log can be found here.
NNFlowVector v2.2 includes the full list of changes above, but has no other additions. The full change log can be found here.
This release is so far only for the Linux builds of the plugins, sorry about that. We decided that it was better to get this release out to the public as soon as possible, and then continue the work on creating the Windows builds (instead of just sitting on the Linux builds for some undetermined amount of time). It’s hard to tell exactly how long it will take to create the Windows builds, but a likely estimate is before the end of January (and hopefully earlier!).
If you are unsure which CUDA version of the builds you should download and run, please have a look at our new CUDA compatibility chart page.
To follow up on the last blog post: we are still very much working on NNSuperResolution v4.0, but it has turned out that our initial estimate of the training time needed for the new upscale network (with the internal optical flow solution from NNFlowVector) was pretty far off. Instead of a couple of weeks for one of the more complex training stages, it turns out it needs to number crunch for a couple of months(!) instead. This is mainly due to the need for much higher resolution training patches, since the new optical flow network needs a larger spatial coverage to work correctly. Hence we need to push the timetable for getting NNSuperResolution v4.0 released to some time this spring, instead of the end of this year as previously communicated. Good things come to those who wait… 🙂
Hope you like these new versions as much as we do! Cheers, David
We haven’t posted any new releases for a while, so we feel it’s time to give you an update on what we’re currently up to and what’s to come. But first, a little background, looking back at the year so far. The year started with the release of our third Nuke plugin NNCleanup. After getting v1.0 out in February, we quickly followed up with a couple of maintenance releases. When we felt NNCleanup was in a stable state, we circled back to our most popular plugin NNFlowVector.
We adapted a new, modern transformer-based optical flow solution for the plugin and trained the network from scratch using the same dataset as the already released solution. After a few months of work we released NNFlowVector v2.1 with the new “AA” variant. It has been welcomed by lots of users across the globe, so we are very happy with the addition.
After the release of NNFlowVector, we felt it was time to circle back to our first plugin NNSuperResolution. This is where we are spending all development resources currently. The most important thing we are doing is swapping out the rather old and simple optical flow solution that was baked into the sequence mode upscale solution. We are replacing it with the very successful model from our NNFlowVector plugin. With this new and very performant optical flow generator, the upscale solution can much more reliably warp the previous frame onto the current one, and hence has more information to work with. The result is an overall sharper and more stable image with fewer artifacts. It does mean retraining the whole solution from scratch though, so it’s taking quite a lot of time. All variants need retraining (Alexa 2x, Alexa 4x, CG 2x and CG 4x), and each is a two-step process taking a week or two of processing time. We are also looking into the possibility of a CG/RGBA solution for stills, basically so you can upscale textures with an alpha channel. This solution is not working satisfactorily yet though, so we are not making any promises about it just yet. The aim is to get NNSuperResolution v4.0 released at the end of the year.
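To give a feel for what “warping the previous frame onto the current one” means, here is a minimal sketch of flow-based backward warping in plain NumPy. This is purely illustrative and not the plugin’s actual implementation; the function name and the nearest-neighbour sampling are our own simplifications:

```python
import numpy as np

def backward_warp(prev_frame, flow):
    """Warp the previous frame toward the current one using a dense
    flow field of shape (H, W, 2) holding per-pixel (dx, dy) offsets.
    Nearest-neighbour sampling keeps the sketch short; a real
    implementation would use bilinear or bicubic interpolation."""
    h, w = flow.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    # For each pixel in the current frame, look up where it came from
    # in the previous frame (clamped to the image borders).
    src_x = np.clip(np.round(xs + flow[..., 0]).astype(int), 0, w - 1)
    src_y = np.clip(np.round(ys + flow[..., 1]).astype(int), 0, h - 1)
    return prev_frame[src_y, src_x]

# Toy example: a 4x4 frame that shifted one pixel to the right
# between frames, so each current pixel came from one pixel to its left.
prev = np.arange(16, dtype=float).reshape(4, 4)
flow = np.zeros((4, 4, 2))
flow[..., 0] = -1.0
warped = backward_warp(prev, flow)
```

The principle is the same at film resolution: the better the flow field, the more reliably the warped previous frame lines up with the current one, and the more temporal information the upscaler has to work with.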
After the new NNSuperResolution is released, it’s time to circle back to NNCleanup again. The big thing to attack then is a sequence mode, i.e. being able to clean up / paint away objects in moving material. This work has already started, but it’s very complex, so it will take quite a lot of development time. The aim and hope is to get NNCleanup v2.0 released, with a working sequence mode, some time during 2024.
We have officially released v2.1 of NNFlowVector. The new version features render license support, Nuke Indie support and a new variant called “AA” that is based on a new and modern transformer model. You can go ahead and download it now!
The render license support means more flexibility when licensing our plugins. You can for example buy a couple of GUI licenses for your artists to use interactively when working, and then have a bunch of render licenses for the farm to batch process with (without them interfering with the GUI licenses). If you are a bit technically creative, you can also make your farm use the GUI licenses during off hours to up the license count for overnight batch rendering on the farm.
The new “AA” variant is an even more complex neural network for solving optical flow / motion vectors for image sequences. It can handle some complicated situations better than the existing “A-H” variants. Due to its increased complexity it also uses more VRAM on your GPU than the other variants. To be able to fit it on your GPU, you might need to lower the “max size” parameter. We have also introduced a new parameter called “separate inferences” that solves the forward and backward passes one after the other instead of in parallel, which also makes the plugin use a bit less memory. Worth knowing is that the new variant is not always better, so it’s still recommended to try out and compare a couple of different variants for your own specific image material.
We are pleased to announce that we have released both NNSuperResolution (v3.3.0) and NNCleanup (v1.3.0) with dedicated render license support. This means more flexibility for you to choose how to license our plugins. You can for example buy a couple of GUI licenses for your artists to use interactively when working, and then have a bunch of render licenses for the farm to batch process with (without them interfering with the GUI licenses). If you are a bit technically creative, you can also make your farm use the GUI licenses during off hours to up the license count for overnight batch rendering on the farm. Adding render license support was in response to customers asking for this functionality, and we 100% agree it’s a good thing! If you already have a node-locked license, or if you’re a studio with a site license, there is no need to worry. You don’t have to change anything, and things will keep working as they currently do (i.e. the nodes will continue rendering using the GUI licenses in those cases). NNFlowVector will also get render license support in the next point release, which will probably be available in a few weeks from now.
The render licenses are available for purchase in the Shop already, and are priced at $59 USD/year.
NNCleanup (v1.3.0) has also got support for Nuke Indie with this release! NNFlowVector will follow along with Nuke Indie support as well in the next point release.
Since our last blog post in February, we have also released NNFlowVector v2.0 as a proper public release (i.e. no beta version anymore). This means that you can now go ahead and download and install it, and use the matte input support on both Linux and Windows.
This new version features some new and important controls for what area to process when doing the cleanup/inpainting. You have the option to process either “Full frame”, “Specified region” or “Matte input’s bbox” (the old behaviour of v1.0.0 was always “Full frame”). Because the area that needs processing is usually just a subsection of the image, these options make it possible to work on really large images, for example 4K plates and even 16K HDRIs. We’ve also added a “process_scale” knob that makes it possible to work on really large areas by internally downscaling them before processing (and upscaling them again afterwards). All this makes the memory footprint on the GPU far smaller, and hence makes it possible to keep the processing GPU accelerated.
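The memory argument behind the region options can be illustrated with a tiny NumPy sketch. This is not the plugin’s actual implementation, just an assumed stand-in where `fn` plays the role of the heavy GPU inference:

```python
import numpy as np

def process_in_region(image, bbox, fn):
    """Run a heavy operation `fn` only inside the bounding box
    (x0, y0, x1, y1) instead of on the full frame, then paste the
    result back. The peak working-set size now scales with the
    region's area rather than the full image resolution."""
    x0, y0, x1, y1 = bbox
    out = image.copy()
    out[y0:y1, x0:x1] = fn(image[y0:y1, x0:x1])
    return out

# Tiny stand-in frame; only a small 3x3 region needs processing.
frame = np.zeros((8, 8), dtype=np.float32)
patched = process_in_region(frame, (2, 2, 5, 5), lambda crop: crop + 1.0)
```

The “process_scale” knob takes the same idea one step further: the cropped region is additionally downscaled before `fn` sees it and upscaled again afterwards, shrinking the working set even more at the cost of some detail.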
We have added a couple of HDRI examples on the product page, including downloadable EXRs of before and after.
We’re excited to announce our third product NNCleanup, a Nuke plugin for quick cleanup / inpainting tasks. Read more about the plugin, and have a look at a few more examples, on the product page. If you are keen to try it out yourself in Nuke, please request a free trial license. We already have ideas for improvements, but in the meantime let us know what you think using this form; we are keen to hear your input!
We have just released a public beta version of v2.0.0 of NNFlowVector. This version features a matte input, so you can make the plugin ignore a selected area. To be more precise, it’s not really ignoring the selected area, but rather treating it with inpainting and some machine learning/artificial intelligence to make it appear as an approximation of what it would have looked like if the selected objects hadn’t been there during filming. This makes it possible, for example, to create motion vectors of the background wall even if a character is passing by in front of it. You can then use the vectors to track in footage/image patches onto the wall. You still have to roto the character back in of course, but the tracking is solved.
We have decided to make the v2.0.0 release available as a beta version because we are really keen for you to get your hands on the matte input feature. This has been the single most asked-for feature from you, so we are very excited to deliver on it! The choice to release it as a beta version first is to get it into your hands earlier, instead of postponing the release a couple of months for extra testing. We hope that the plugin is already stable, but if you are experiencing bugs or crashes, please drop us a mail at [email protected] explaining what is going on. Thanks for the help!
We have just released builds for both NNSuperResolution and NNFlowVector for Nuke 14.0. What’s good to know is that Foundry has updated the bundled CUDA version in Nuke to v11.1.1, and the cuDNN version to v8.4.1. We have matched the builds of our plugins to that, so there are no compatibility problems. You basically get native compatibility with all supported GPUs up to compute capability 8.6, i.e. Ampere-type cards (for example the RTX3080 and RTX3090). If you are lucky enough to own a brand new RTX4080 card or similar, you will have to rely on JIT compilation of the kernels. The plugins will work, but you will need to wait for the kernels to compile the first time around (there is more info about this in our documentation PDF). Enjoy!
We are also pleased to announce that we are very close to releasing NNFlowVector with matte support! This will be released as v2.0 in the beginning of the new year. We are very excited about this since it is the most common feature request we get. If you are very keen to test it out and can’t wait, we are interested in having beta testers. Please use the normal contact form, and let us know what build you are using (platform, Nuke version, CUDA version). We will then send you an email with a special download link so you can get up and running.
Hope you like our Christmas presents. 🙂 Merry Christmas and Happy New Year! Cheers, David
We have successfully migrated all of our files available for downloading to Amazon’s AWS S3 cloud system. We received your feedback about our downloads being painfully slow (for some of you it even took days to download the latest builds of our plugins). We agree that this was not acceptable, and have now solved it by upgrading to a much more solid solution. We hope that this will provide nice download speeds going forward, no matter where in the world you are located.