Author Topic: [SD6] Auto Levels Plus Node - remap an image to a custom range  (Read 3212 times)

Hello everyone,

As Substance Designer 6 introduced new bit depths for nodes (L16F and L32F), some new opportunities and workflows come to life. To facilitate them, I've ended up creating a new node that I thought would be great to share.  ::)

The Auto Levels Plus node allows automatic remapping of an input image to a custom range specified by input parameters. Basically, it works just like you would expect from an Auto Levels node, but with two added benefits:

1) You can remap to a custom range, such as 0.5 - 1 or any other.

2) The node supports HDR data, so an input image can be of any available bit depth (L16F or L32F too).

I have tested the node for some time, but there could be bugs or corner cases where it doesn't work as expected. Please report such occasions in this thread so I can take a look at them and possibly come up with a fix.

Additional considerations before using the Auto Levels Plus node:

The node uses this formula to remap values: NewValue = (OldValue - OldMin) * (NewMax - NewMin) / (OldMax - OldMin) + NewMin. So, at the core, it should be mathematically accurate. However, finding the OldMin and OldMax is quite an expensive operation, so at higher resolutions (2k+) performance starts to drop quite a bit.
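For anyone who wants to play with the formula outside SD, here's a minimal NumPy sketch of it (purely illustrative — the real node is built from Pixel Processor nodes, and the function name here is mine, not part of the node):

```python
import numpy as np

def auto_levels_plus(image, new_min=0.0, new_max=1.0):
    """Remap image values from their current [min, max] to [new_min, new_max].

    A plain NumPy sketch of the formula from the post; names are illustrative.
    """
    old_min, old_max = image.min(), image.max()
    if old_max == old_min:  # flat image: nothing to remap
        return np.full_like(image, new_min)
    remapped = (image - old_min) * (new_max - new_min) / (old_max - old_min) + new_min
    # Clamp the occasional "stray" value that float precision can push
    # slightly outside the target range (e.g. 0.409998 instead of 0.41).
    return np.clip(remapped, new_min, new_max)
```

For example, `auto_levels_plus(np.array([0.2, 0.5, 0.8]), 0.5, 1.0)` returns `[0.5, 0.75, 1.0]`.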

To alleviate this, the node samples the OldMin and OldMax values at a lower resolution. There are countermeasures in place, but in some cases, when this resolution optimization is too extreme for a given image, downsampling can cause OldMin and OldMax to be sampled inaccurately (a small bit higher or lower than they actually are). Most of the time, however, the countermeasures work well, so the node produces a mathematically accurate result at good speed for most cases.

One more note: in preliminary tests I've spotted some strange occurrences when the image is remapped to a range other than 0...1. Some pixels could get a value slightly outside the new range. For example, if you remap to a range of 0.41 - 1, a pixel can end up with a value like 0.409998. I believe this is a precision issue from operating on Float values, but I'm not sure about it. As a workaround, I've decided to clamp such "stray" values; in practice, this shouldn't cause any problems, as the margin of error is minuscule.

Some images to illustrate the node are below and here is a download link:
https://forum.allegorithmic.com/index.php?action=dlattach;topic=15281.0;attach=23839

Cheers!
Last Edit: February 19, 2017, 12:33:49 pm

Uploaded the node to the Share, it's pending review now.  :)

Substance Designer 2017.1 was recently released.  ;D It got an update for the Auto Levels node that now works faster and with HDR data too. Perfect time to update my Auto Levels Plus  :). Download here: https://forum.allegorithmic.com/index.php?action=dlattach;topic=15281.0;attach=25710

I've made some optimizations to the node and now it's really fast. On my hardware, it takes 5 ms to process a 4k map and about 12 ms for an 8k map (vs ~ 100 ms with the Auto Levels node that ships with SD). With speed optimization enabled, the numbers go down even more and it's just 5 ms for an 8k map (so, 8x - 20x faster).

Those are huge speed gains that may greatly benefit anyone who uses Auto Levels functionality in their graphs a lot.

If anyone is curious how this was done, here's a short explanation.

The biggest performance cost for nodes in SD comes from node resolution. Reducing the resolution of nodes that need to process data is the most common way to get better performance.

For the Auto Levels Plus node, I figured it would be more beneficial to perform only the absolute minimum of computations in each Pixel Processor node and to gradually lower the resolution of subsequent nodes in the chain.

This way, Auto Levels Plus processes the input image at full resolution only once, and each subsequent processing step runs at one resolution step lower. For example, an 8192x8192 input map gets processed at 8192x8192, then 4096x4096, then 2048x2048, and so on.
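To get a sense of why this is cheap: halving the resolution at every step means the whole cascade touches only about a third more pixels than a single full-resolution pass, because the per-step pixel counts form a geometric series. A quick back-of-the-envelope check (assuming one pass per resolution step, down to 16x16 as described later in the post):

```python
# Pixels touched by one full-resolution pass vs. the whole cascade
# (8192 -> 4096 -> ... -> 16).
full = 8192 ** 2
cascade = sum((8192 >> i) ** 2 for i in range(10))  # 8192 down to 16
print(cascade / full)  # ~1.333 -- the series 1 + 1/4 + 1/16 + ... tends to 4/3
```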

Quite fortunately, the logic of the node allowed that.

The idea behind Auto Levels Plus is quite simple: we have to find and store, in some way, the luminance values of at least one brightest and one darkest pixel in the input map. Then it's just a matter of some math to remap the entire input map to a new range of values (that is Auto Levels in a nutshell).

So, in the end, we need only two values (basically, two pixels) from the input map and can demolish all the other pixels that dare to stand in our way.  :P By downsampling the input map one resolution step lower, like from 2048x2048 to 1024x1024, we can vastly reduce the number of pixels we need to process in our search for those luminance values.

However, there's a problem: downsampling can skew the luminance values of pixels in the map, because each new pixel is computed from the luminance values of 4 source pixels (repeated across the entire map).

An input map downsampled several times will most likely lose the pixels with the exact luminance values we need and get averaged ones instead. That leaves the node unable to perform the remap correctly, because it has to be supplied with exact values to produce an accurate result.
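The averaging problem can be shown with a tiny NumPy sketch (purely illustrative; the node itself does this on the GPU via Pixel Processors):

```python
import numpy as np

# A 2x2 map whose darkest pixel is 0.0:
img = np.array([[0.0, 1.0],
                [1.0, 1.0]])

# Average-downsampling to 1x1 mixes all four values together...
down = img.reshape(1, 2, 1, 2).mean(axis=(1, 3))
print(down)  # [[0.75]] -- the exact minimum 0.0 is lost
```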

But there's a trick: if we use a Pixel Processor to locally find the darkest (and brightest) pixels in our map before downsampling, we can "spread" them a bit around themselves. I figured that spreading a pixel's luminance value just one pixel around it is enough to get what I wanted. So I put a Pixel Processor function in place that samples neighbouring pixels and paints blocks of 3x3 pixels with the same luminance value (the highest or lowest one, depending on how the function is configured). Check my Randomize Mask node for details on how this "sampling" function works, as I've used it in this node too: https://www.artstation.com/artwork/1alxX

A block of 3x3 pixels of the same luminance can be safely downsampled one resolution step: at least one pixel from that group will retain its original luminance value, because it gets averaged only with pixels from the 3x3 group that share the same value, not with pixels of other luminance values.

We can repeat this step before every downsample operation and safely build a chain of Pixel Processor nodes that goes as low in resolution as 16x16. From there, a couple of extra "spread" operations fill the entire image with the value we're looking for. Repeat that a second time for the other value (we need both the brightest and the darkest pixel), and the output of this chain of Pixel Processor nodes can be used to feed the remap function.
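Here's a rough NumPy sketch of that chain for the darkest-value pass. This is an approximation of the node's logic, not the Pixel Processor functions themselves: I'm using a 3x3 minimum filter to play the role of the "spread" step, and function names are my own:

```python
import numpy as np

def spread_min(img):
    """'Spread' step: replace each pixel with the minimum of its 3x3
    neighbourhood, so the darkest value fills a 3x3 block around itself."""
    p = np.pad(img, 1, mode='edge')
    h, w = img.shape
    neighbours = [p[i:i + h, j:j + w] for i in range(3) for j in range(3)]
    return np.min(neighbours, axis=0)

def downsample(img):
    """One resolution step down via 2x2 averaging."""
    h, w = img.shape
    return img.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

rng = np.random.default_rng(42)
img = rng.random((64, 64))
true_min = img.min()

# Spread before every downsample: the darkest value survives the chain.
x = img
while x.shape[0] > 1:
    x = downsample(spread_min(x))
print(np.isclose(x.item(), true_min))  # True

# Naive chain for contrast: the darkest value is averaged away.
y = img
while y.shape[0] > 1:
    y = downsample(y)
print(np.isclose(y.item(), true_min))  # False
```

The 3x3 block is what makes this alignment-proof: wherever the block lands, at least one aligned 2x2 averaging cell fits entirely inside it, so one output pixel keeps the exact value.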

That's why the node is fast: it uses just Pixel Processor nodes and rather simple operations, in conjunction with resolution optimization.

I hope this explanation will be useful and informative for someone interested in making their own nodes using Pixel Processor.  :)

Hey Sergey,

Just wanted to say thanks for coming up with this. It's exactly what I was looking for :).
Awesome work!