NNSuperResolution is a 4x resolution upscaling plugin for Foundry’s Nuke. It lets you upscale photographed/plate material and sequences to a much higher resolution, directly in your composite. To test it out for yourself, please use the downloads page. You are also welcome to request a free trial license to run it without added watermarks/noise for a limited time period.
Here is an interactive still image before/after example:

(Interactive before/after image comparison)

Features

  • Still mode for upscaling photos and still material like DMP patches, textures etc.
  • Sequence mode for upscaling moving material like filmed plates, final composites etc.
  • High dynamic range upscaling
  • Native overscan handling (i.e. it handles larger bounding boxes than the image frame/format)
  • Processes the RGB channels of an input image (no support for the Alpha channel/4th channel in layers yet)
  • Support for multiple layers, i.e. you can feed multiple RGB layers through the plugin at the same time using Nuke’s native multichannel/layer system.
  • GPU accelerated using CUDA by NVIDIA (requires an NVIDIA graphics card)
  • Internal stitching of several inference/image patches (making it possible to upscale high resolution images even with limited VRAM on the GPU)

The algorithm that does the upscaling is based on modern neural network technology (also commonly referred to as deep learning, machine learning or artificial intelligence). Since v2.0 of the plugin, there are two modes available which use different network solutions for best results on either stills or sequences. While the still mode produces sharper results, it might stutter/flicker on some moving material. The sequence mode makes the result as sharp and detailed as possible while ensuring it is temporally stable. Please try out the plugin for yourself by visiting the downloads page before you buy a license.

Video examples

Here are two longer and more elaborate video examples available on YouTube. Be sure to view them in their native full 4K (UHD) resolution. The examples were created using Alexa footage from Arri’s demo footage site (https://www.arri.com/en/learn-help/learn-help-camera-system/camera-sample-footage). The native 4K Arri Alexa videos were scaled down to 1K (25% of the original resolution), and then run through NNSuperResolution to get back up to 4K resolution. The videos below show only the upscale step from 1K to 4K, using both sequence mode and still mode.

NNSuperResolution before & after, Interior, 1K to 4K upscale example with comparison between sequence mode and still mode
NNSuperResolution before & after, Exterior, 1K to 4K upscale example with comparison between sequence mode and still mode

These videos are also available as high bitrate MP4 video downloads for people interested in frame stepping and doing a more in-depth comparison:
Interior: https://pixelmania.blob.core.windows.net/content/NNSuperResolution_Interior_comparison.mp4
Exterior: https://pixelmania.blob.core.windows.net/content/NNSuperResolution_Exterior_comparison.mp4

If you want to have a quick look at some cropped-in regions of these videos directly in the web browser, have a look at these four example pages:
#1, Interior, close up on girl’s face
#2, Interior, medium on girl in sofa
#3, Exterior, cows in the background
#4, Exterior, close up on berries in the foreground

Knob reference

The algorithm that calculates the high resolution output from the lower resolution input material is pre-trained on a large amount of images/sequences during the creation of the software. The only input to the algorithm is the low resolution material, and hence there are no knobs to tweak the upscaled result. The plugin is hard coded to do a 4x upscale of the input material.

To produce the upscaled result, you simply feed an RGB image/sequence into the node’s single input. The node supports processing of any RGB layer, which means that you can feed multiple layers of RGB material through it at the same time, and write them out using a standard Write node set to “channels: all”. Please note that there is no alpha channel (or 4th channel) support yet.

The knobs that do exist on the plugin mostly relate to scaling up fairly high resolution images using limited VRAM on the graphics card. The neural network requires large amounts of memory even for small resolution input images. To scale up, for example, a 1K input to 4K, the image needs to be split into several passes/patches to fit into memory. This is all handled transparently “under the hood” so the artist can focus on more important things. You might need to tweak the settings though, depending on the use case and the available hardware on your workstation.
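As a minimal sketch of this workflow, here is how the node could be wired up using Nuke’s Python API. The node class name NNSuperResolution is assumed to match the plugin name, and the file paths are placeholders:

```python
import nuke

# Read the low resolution source material (paths are placeholders).
read = nuke.nodes.Read(file='plates/lowres.####.exr', first=1001, last=1100)

# The node class name is assumed to match the plugin name.
sr = nuke.createNode('NNSuperResolution', inpanel=False)
sr.setInput(0, read)

# Write out all layers that were fed through the plugin.
write = nuke.nodes.Write(file='renders/upscaled.####.exr', channels='all')
write.setInput(0, sr)
```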

mode

The default mode is still mode. This is best for upscaling still photos, textures, patches and similar material. When you are processing sequences, i.e. filmed video material, you want to change the mode knob to “sequence”. This activates a different way of processing than in still mode. While the sequence mode doesn’t produce as detailed and sharp still frames as the still mode, it creates temporally stable results instead. This is hugely beneficial when you have moving material, for example when doing VFX work.
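Switching modes from a script could look like this (a sketch; the internal knob name ‘mode’ is an assumption based on the UI label, so verify it with sr.knobs()):

```python
# 'mode' is assumed to be the knob's internal name; 'still' is the default.
sr['mode'].setValue('sequence')
```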

max size

Max size sets the maximum dimension, in one direction, that an image patch can have, and it is the single most important knob on the plugin. The default is 500, which means that the maximum allowed size of an input patch is 500×500 pixels. In our experience that will use up to around 8 GB of VRAM on your graphics card. If you haven’t got that much available and free, the processing will error out with a CUDA memory error, and the node in Nuke will error in the DAG. To remedy this, and also to be able to input resolutions much higher than a single patch size, you can tweak the max size knob to adapt to your situation: lower it to adapt to having less VRAM available. The plugin will split the input image into lots of smaller patches and stitch them together in the background. This will of course be slower, but it makes it possible to still run and produce much larger results. There is a text status knob above the “max size” knob (in between the two dividers) that tells you how many image patches the plugin will run to create the final upscaled image.
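To get a feel for how max size affects the patch count shown in the status knob, here is a rough estimate in Python. The tiling formula is an assumption for illustration only; the plugin’s internal logic may differ slightly:

```python
import math

def patch_count(width, height, max_size=500, overlap=4):
    """Estimate how many overlapping patches a frame is split into.

    Assumed tiling: each new patch advances by (max_size - overlap)
    pixels so that neighbouring patches share 'overlap' pixels.
    """
    step = max_size - overlap
    cols = max(1, math.ceil((width - overlap) / step))
    rows = max(1, math.ceil((height - overlap) / step))
    return cols * rows

# A 2048x1156 (2K) input with the default settings:
print(patch_count(2048, 1156))  # 15 patches (5 columns x 3 rows)
```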

overlap

Since the plugin runs multiple patches through the neural network, there will be lots of image patch edges present. The edges don’t get as good results as areas a bit further into a patch. Because of this, all the patches are processed with some spatial overlap. The overlap knob value sets the number of pixels (in the source image resolution space) that the patches overlap. The default of 4, which corresponds to 16 pixels in the resulting high res image, is usually a good value.

padding

The padding is closely related to the overlap above. While the overlap sets the total number of pixels the patches overlap, the padding reduces the actual cross fading area where the patches are blended, so that the very edge pixels are not used at all. The padding is also specified in pixels in the source resolution, so the default value of 1 means that the 4 edge pixels in the high res result of each patch are thrown away. This way of blending all patches together has proven very successful in our own testing.
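Putting the two knobs together, here is the arithmetic with the default values (assuming the padding is trimmed from both neighbouring patches, which matches the description above):

```python
SCALE = 4      # the plugin always upscales 4x
overlap = 4    # overlap knob, in source pixels
padding = 1    # padding knob, in source pixels

total_overlap = overlap * SCALE        # 16 output pixels of overlap
trimmed = 2 * padding * SCALE          # 8 output pixels discarded (4 per patch)
blend = total_overlap - trimmed        # 8 output pixels of actual cross fade
print(total_overlap, trimmed, blend)   # 16 8 8
```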

(Before/after comparison: default upscale vs. multisample upscale)

Still mode: multisample

The default value of the multisample knob is “off”, which means that each image patch is run through the upscale algorithm once. This usually produces nice and sharp still images. Sometimes it’s more beneficial to get slightly less sharp but smoother results instead. When multisample is “on”, each image patch is instead run through the upscale algorithm 4 times, pre-processed each time by orienting it differently (using rotations and mirrors). This of course makes the process much slower, but in some situations it is worth it. An example result with multisample on is shown in the before & after image above (the rightmost image), picturing the wetsuit.
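Conceptually this resembles the self-ensemble technique from super resolution research: run differently oriented copies of the patch through the network and average the de-oriented results. The sketch below illustrates the idea with NumPy; the exact orientations the plugin uses are not documented, so the four chosen here are an assumption:

```python
import numpy as np

def multisample(patch, upscale):
    """Run the patch through the upscaler in four orientations and
    average the results (a conceptual sketch, not the plugin's code)."""
    orientations = [
        (lambda p: p,              lambda p: p),                # identity
        (np.fliplr,                np.fliplr),                  # mirror
        (lambda p: np.rot90(p, 2), lambda p: np.rot90(p, -2)),  # 180 degrees
        (lambda p: np.rot90(np.fliplr(p), 2),
         lambda p: np.fliplr(np.rot90(p, -2))),                 # mirror + 180
    ]
    results = [undo(upscale(apply(patch))) for apply, undo in orientations]
    return np.mean(results, axis=0)  # smoother, slightly less sharp
```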

Sequence mode: frame range

When you are using sequence mode, it’s very important to set the frame range of the input material correctly. Since the algorithm needs to gather neighbouring frames, it needs to know the extent of the material it can use to be able to produce a good result. If you try to view/process a frame outside of the specified frame range, it will render black.
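From a script, setting this up could look like the sketch below. The internal knob names ‘first_frame’ and ‘last_frame’ are hypothetical, introduced here only for illustration; list the real names with sr.knobs() in your install:

```python
sr['mode'].setValue('sequence')
# Hypothetical knob names; verify with sr.knobs().
sr['first_frame'].setValue(1001)
sr['last_frame'].setValue(1100)
```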

Sequence mode: preroll

The preroll specifies how many frames before the current frame are used to produce the current frame’s upscaled result. Since the algorithm is temporal, it is a bit more complex than that. Basically, the plugin will use the full number of preroll frames if you jump straight to a random frame in a sequence: it will run the whole upscale process for all the preroll frames before it can produce the current high res frame. This is how it makes sure the result is highly detailed and also temporally stable. Doing this takes a lot of processing power and is not something we want to do for every frame. To be efficient, the current frame’s high res result is cached internally in the plugin. So if you then step to the next frame in the timeline, it will use the cache and be able to process the frame directly.
In short: the plugin needs to process the preroll amount of frames before the current one, unless the previous frame was just processed (and hence resides in the cache). Because of this, the first frame will take much longer to process, but if you then step through the frames one at a time the processing will be faster. For the same reason, you want to keep a pretty high batch count (the frame range of a processing chunk) when you upscale material using the plugin in a farm environment. If you are rendering locally, it will just work as long as you are rendering all consecutive frames of a sequence.
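The behaviour can be summarised with the following conceptual sketch (not the plugin’s actual implementation); process(f) stands for one temporal inference step:

```python
def upscaled_frame(frame, preroll, cache, process):
    """Return the high res result for 'frame', rebuilding temporal
    state from the preroll frames only on a cold start."""
    if frame - 1 not in cache:
        # Cold start (e.g. jumping to a random frame, or the first
        # frame of a farm chunk): process the preroll frames first.
        for f in range(frame - preroll, frame):
            cache[f] = process(f)
    cache[frame] = process(frame)  # warm: just one inference step
    return cache[frame]
```

This is also why large farm chunks pay off: every chunk pays the preroll cost once, on its first frame only.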

Use GPU if available

There is another knob at the top of the plugin called “Use GPU if available”, and it’s “on” by default (recommended). This knob is only present if you’ve installed a GPU version of the plugin. The knob doesn’t change the upscale behaviour, but rather how the result is calculated: if it is “on” the algorithm runs on the GPU hardware of the workstation, and if it’s “off” the algorithm runs on the normal CPU. If the plugin can’t detect a CUDA compatible GPU, the knob will automatically be disabled/greyed out. It is similar to the knob you’ll find in Foundry’s own GPU accelerated plugins, such as the ZDefocus node.
We highly recommend always having this knob “on”, since the algorithm will run A LOT faster on the GPU than on the CPU. To give you an idea of the difference, we’ve seen calculation times for the same input image of around 10 secs using the GPU and about 6 minutes(!) on the CPU. These numbers are just a simple example to show the vastly different processing times you will get using the GPU vs. the CPU. For speed references of processing, please download and test run the plugin on your own hardware/system.
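If you want to toggle this from a script across all instances, a sketch could look like this. The internal knob name ‘useGPUIfAvailable’ is an assumption following Foundry’s convention for this knob; verify it with node.knobs():

```python
import nuke

# 'useGPUIfAvailable' follows Foundry's naming convention for this knob
# (an assumption; check with node.knobs() on your installation).
for node in nuke.allNodes('NNSuperResolution'):
    node['useGPUIfAvailable'].setValue(True)
```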


Frequently Asked Questions

We have gathered the most common questions about the plugin on a separate Frequently Asked Questions page.

Downloads

All the downloads are available on the dedicated downloads page.

Licensing

To buy a license, please visit our shop. If you want to request a time limited trial license, please use the form on the Request a trial license page.

More examples

There are some more still examples right below. You can also visit the before & after examples page, which has a lot more still examples to browse. There are also four video example pages, #1, #2, #3, #4, with embedded GIF animations of crop regions from the Interior and Exterior examples higher up on this page.

(Three more before/after still image comparisons)