Show Posts


Messages - Sergey Danchenko

Pages: 1 [2] 3 4 ... 10
Substance Designer 2017.1 was recently released.  ;D It brings an update to the Auto Levels node, which now works faster and supports HDR data too. Perfect time to update my Auto Levels Plus  :). Download here: ;topic=15281.0;attach=25710

I've made some optimizations to the node and now it's really fast. On my hardware, it takes 5 ms to process a 4k map and about 12 ms for an 8k map (vs ~ 100 ms with the Auto Levels node that ships with SD). With speed optimization enabled, the numbers go down even more and it's just 5 ms for an 8k map (so, 8x - 20x faster).

Those are huge speed gains, and they may greatly benefit anyone who uses the Auto Levels functionality in their graphs a lot.

If anyone is curious how this was done, here's a short explanation.

The biggest performance factor for nodes in SD is their resolution. Reducing the resolution of nodes that have to process heavy data is the most common way to get better performance.

For the Auto Levels Plus node, I figured it would be more beneficial to perform only the absolute minimum of computations in each Pixel Processor node and gradually lower the resolution of the subsequent nodes in the chain.

This way, Auto Levels Plus processes the input image at full resolution only once, and each subsequent processing step runs at one resolution step lower. For example, an 8192x8192 input map gets processed at 8192x8192, then 4096x4096, then 2048x2048, and so on.
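To put rough numbers on why this helps, here's a quick back-of-the-envelope sketch (plain Python, nothing SD-specific) comparing the pixels touched by a halving chain against running every step at full resolution:

```python
# Rough cost model: total pixels processed by a chain of steps
# that halves resolution each time, vs. every step at full size.
def pixels_in_halving_chain(size, steps):
    """Sum of pixel counts for a chain that halves resolution each step."""
    total = 0
    for _ in range(steps):
        total += size * size
        size = max(size // 2, 1)
    return total

full_cost = 8192 * 8192 * 10              # 10 steps, all at 8192x8192
chained_cost = pixels_in_halving_chain(8192, 10)
print(chained_cost / full_cost)           # the chain touches ~13% of the pixels
```

The chain converges to the geometric series 1 + 1/4 + 1/16 + ..., so however many steps you add, the whole chain costs barely more than a third more than a single full-resolution pass.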

Quite fortunately, the logic of the node allowed that.

The idea of Auto Levels Plus is quite simple: we have to find and store, in some way, the luminance values of at least one brightest and one darkest pixel in the input map. Then it's a matter of some math to remap the entire input map to a new range of values (that's Auto Levels in a nutshell).
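For reference, the remap itself is the standard levels equation. A minimal sketch, assuming the common linear form (the node's exact formula may differ):

```python
def auto_levels_remap(value, old_min, old_max):
    """Linearly remap a luminance value from [old_min, old_max] to [0, 1]."""
    if old_max == old_min:           # flat input: nothing to stretch
        return 0.0
    return (value - old_min) / (old_max - old_min)

# A map whose luminances span 0.2..0.6 gets stretched to the full range:
print(auto_levels_remap(0.2, 0.2, 0.6))  # darkest pixel becomes 0.0
print(auto_levels_remap(0.6, 0.2, 0.6))  # brightest pixel becomes 1.0
print(auto_levels_remap(0.4, 0.2, 0.6))  # midpoint lands at ~0.5
```

Everything else in the node exists only to deliver correct `old_min` and `old_max` values to this function.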

So, in the end, we need only two values (basically, two pixels) from the input map and can demolish all other pixels that dare to stand in our way.  :P By downsampling the input map one resolution step lower, say from 2048x2048 to 1024x1024, we vastly reduce the sheer number of pixels we need to process in our search for those luminance values.

However, there's a problem: downsampling can skew the luminance values of pixels in the map, because it takes the luminance values of four pixels and computes from them a value for a single new pixel (repeated across the entire map).

An input map downsampled several times will most likely lose the pixels with the exact luminance values we need and end up with averaged ones instead. That would leave the node unable to perform the remap correctly, because it has to be supplied with exact values to produce an accurate result.
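A tiny worked example of the averaging problem, with hypothetical pixel values:

```python
# A 2x2 block that happens to contain the darkest pixel in the map...
block = [0.0, 0.8, 0.7, 0.9]

# ...gets box-filtered down to a single pixel on downsample:
downsampled = sum(block) / len(block)
print(downsampled)  # 0.6 — the true minimum (0.0) is gone
```

One downsample and the exact value the remap needs no longer exists anywhere in the map.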

But there's a trick: if we use a Pixel Processor to locally find the darkest (and brightest) pixels in the map before downsampling, we can "spread" them a bit around themselves. I figured it's enough to spread a pixel's luminance value just one pixel in every direction to get what I wanted. So I put a Pixel Processor function in place that samples neighbouring pixels and paints 3x3 blocks of pixels with the same luminance value (the highest or the lowest one, depending on how the function is configured). Check my Randomize Mask node for details on how this "sampling" function works, as I've used it in this node too:

A 3x3 block of pixels with the same luminance can be safely downsampled one resolution step: at least one pixel from that group will retain its original luminance value, because it gets averaged only with pixels from the 3x3 group that share the same value, not with pixels of other values.
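The spread-then-downsample idea can be sketched in 1D (the real node does this in 2D inside a Pixel Processor; this is just an illustration of why the extreme value survives):

```python
def spread_max(pixels):
    """Replace each pixel by the max of its 3-pixel neighbourhood (1D dilation)."""
    n = len(pixels)
    return [max(pixels[max(i - 1, 0):min(i + 2, n)]) for i in range(n)]

def downsample(pixels):
    """Box-filter by 2: average adjacent pairs."""
    return [(a + b) / 2 for a, b in zip(pixels[::2], pixels[1::2])]

row = [0.1, 0.2, 0.9, 0.3, 0.2, 0.1, 0.4, 0.3]
naive = downsample(row)             # the 0.9 gets averaged away
safe = downsample(spread_max(row))  # dilation first: the 0.9 survives intact
print(max(naive), max(safe))
```

After the spread, every pair that falls entirely inside the dilated run averages identical values, so the maximum passes through the downsample unchanged.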

We can repeat this step before every downsample operation and safely build a chain of Pixel Processor nodes that goes as low as 16x16 in resolution. From there, a couple of extra "spread" operations fill the entire image with the value we're looking for. Repeat the whole thing a second time for the other value (we need both the brightest and the darkest pixel), and the output of this succession of Pixel Processor nodes can be used to feed the remap function.

That's why the node is fast — it uses just Pixel Processor nodes and rather simple operations in conjunction with resolution optimization.

I hope this explanation will be useful and informative for someone interested in making their own nodes using Pixel Processor.  :)

I believe it isn't there because not many people asked about such a node.

Fortunately, with the Pixel Processor node we can have any blending mode we want rather simply :). I've made a node that mimics the Hard Light you're looking for. Hope you'll find it useful. Download link below.

Thanks! I'll check it out. I agree that currently there are too many parameters under the hood and it will be beneficial to simplify the node a bit.

Seems that you got it working. Nice job! ;)

Thank you for the kind words. Actually, the implementation concept posted above is my second take on this task. At first I assembled a prototype that used literally 1000+ Pixel Processor nodes to isolate and level out luminance values in a range of 1-255. It worked to some extent, but it wasn't reliable and, more importantly, it was slow. Computation times were acceptable at around 1500 ms, but before computing it had to pump the data through that many nodes, and it lagged for 5-6 seconds before the node would update.

I was quite lucky to find another solution that same morning.

And by the way, would you like to look into one of my nodes?

I made this node for my co-workers to generate unique curve shapes, but it turns out to be difficult to use. Is there some way around that, or some way to optimize it? Any tips would be appreciated.

I'll take a peek, but I'm not really as advanced in SD nodes as it may look. If something useful comes up, I'll let you know.  ;)

I was a bit busy until today, so the node was put on hold. Now I have some time for it, so I will continue.  ::)

Thanks! Check out the first image (the tall one) or the Artstation page - they have just enough resolution for the facial painting. I'll try to add another image here when I get home.


Meet MAT the Colossus – my entry for Allegorithmic's contest.  :D MAT is an ancient stone colossus covered with moss and lichen that stands on swampy ground. Legends say the antique Path of a Painter goes right through the colossus. It's a path full of trials, but the one that should be taken to reach the Heart of a Painter.

For this work, I was looking for a concept that would communicate an idea. So I decided that MAT, in my vision, shouldn't be a character, but rather a canvas for something more than that. "Everything is a canvas for a Painter" — this line came pretty quickly (pun intended ::)), and it fit this concept perfectly well.

Thinking of a canvas, I remembered the primeval cave drawings I had seen and how fascinated I was by the idea that it is Art and Culture that connect the past and the future. Primeval artists created their art just as we do in our time. Thousands and tens of thousands of years have passed, and yet their art still exists for us, while those people have long since turned to dust.

For me, it was a really thrilling idea that for my entry I could use the very first forms of art in conjunction with Substance Painter, whose tech stands at the cutting edge of our progress. It's like bringing together things that at first thought couldn't stand further apart: cave drawings from prehistoric times and Substance technology.

Just think of it: artists of old actually created every drawing you can see on MAT, and there are four of them. These were people of flesh and bone who were born, lived and perished, but in the end managed to leave a mark of their existence that reached us through the ages.

In one part, "Path of a Painter" is my humble tribute to these unknown artists of old. I'm putting their drawings on display one more time, in the year 2017, with the idea that it is Art that can long outlive us.

In another part, this work is a tribute to all artists who lived, live now and will live in the future. I believe there is a link between all artists who climb the path of creativity through various obstacles, pitfalls and disappointments, yet still strive to express themselves through art. May the art created with an open heart be the artist's heritage.

I hope you will enjoy these ideas as much as I do. :)

Check out the Artstation page:

P. S. By the way, reality is cruel.  A particular artist of our time misspelled the word "heart", rendered the images and submitted them to the contest. Oh, dear...  :P

Nope. In my tests, it seems that using 16-bit mode for the node gives it enough values to produce a smooth gradient. In corner cases, 32-bit float depth can be used.

I'm super curious how you managed that bit.

I only managed it because the solution was already lying in front of me, thanks to the two nodes I had made before (lucky!  ;D). It still took me three full days to grasp it.

In a nutshell, there are three key aspects to this node:

1) You need to generate a gradient that can run in different directions based on some "angle" parameter. At first, I thought this would be the most difficult part. Surprisingly, it was the easiest, as there are already some great solutions for gradients on Share that worked perfectly well.

Initially, I was looking for a way to generate a gradient that would run from black to white for each tile in the mask right away. At some point I figured that it's more important to actually HAVE a gradient, and that it would be more productive to look for a way to maximize its range in a separate step.

2) You need a variation map to drive gradient angles. This one was covered by the Randomize Mask node I've made earlier.

3) You need some kind of Auto Levels functionality to maximize the gradient's dynamic range for each individual tile.
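For point 1, a gradient rotated by an angle boils down to projecting each pixel's coordinates onto a direction vector. A minimal sketch of one possible formulation (my own, not necessarily how the Share nodes do it):

```python
import math

def rotated_gradient(u, v, angle_deg):
    """Luminance of a linear gradient running in the given direction.

    u, v are pixel coordinates in 0..1. The projection onto the direction
    vector is shifted and scaled so the output stays within 0..1.
    """
    a = math.radians(angle_deg)
    # Project the pixel onto the direction vector, centred on the tile middle.
    t = (u - 0.5) * math.cos(a) + (v - 0.5) * math.sin(a)
    # t lies in [-sqrt(2)/2, sqrt(2)/2]; normalize into 0..1.
    return t / math.sqrt(2) + 0.5

print(rotated_gradient(0.0, 0.5, 0.0))  # dark end at angle 0
print(rotated_gradient(1.0, 0.5, 0.0))  # bright end at angle 0
```

Note that the output does not span the full 0..1 range for most angles, which is exactly why point 3 (per-tile auto levels) is needed.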

The problem with the "vanilla" Auto Levels shipped with Substance Designer is that it works on a "macro" level only, so it levels out the entire image: it remaps it to a new range so that the darkest pixel in the image becomes the "new black" and the brightest one the "new white". Unfortunately, that isn't helpful in our case.

Here the Auto Levels Plus node comes into play. It is implemented with the Pixel Processor, so it can work on a "micro" level, i.e. on a per-pixel basis. The node uses a specific math formula to remap each pixel to a new range of values. However, it has to be supplied with two values to work correctly: one that defines the "old range minimum" and one for the "old range maximum". In our case, those are the darkest and the brightest pixels in a gradient.

Since the Auto Levels Plus node works on a per-pixel basis, it can level out individual tiles in the mask to a common new range if supplied with the correct data. So we need to get the "old minimum" and "old maximum" values for each tile in the mask, then store them in two additional "data" maps that provide the Auto Levels Plus function with these values for each pixel it processes. The important thing is that these values must be exactly the same throughout an individual tile, so the entire tile gets remapped to the new range using the same data.

From here, the Randomize Mask node comes back into play. The key feature of this node is a function that can spread (propagate) the brightest pixels sampled in a tile throughout the entire tile without crossing its borders. Using this function, we can find the maximum values of the gradients running in each tile and store them in a map. That covers half of our needs for the Auto Levels Plus function. In a similar manner, we can sample the darkest pixels of each tile's gradient and store them in a map as well.

At this point, we have everything we need to auto level the individual tiles with random gradients to their maximum range (for example, 0-1, or black-to-white). The Auto Levels Plus function takes the mask with gradients as the main input and the two data maps as additional inputs. It samples each pixel in all three maps and uses the appropriate values to remap the main input pixels to a new range using the math formula.
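Put together, the per-pixel function reads one pixel from each of the three maps and applies the remap. A sketch with hypothetical values (the real thing runs inside a Pixel Processor):

```python
def level_tile_pixel(gradient_px, tile_min_px, tile_max_px):
    """Remap one gradient pixel using its tile's stored min/max luminance."""
    if tile_max_px == tile_min_px:   # degenerate tile: no gradient to stretch
        return 0.0
    return (gradient_px - tile_min_px) / (tile_max_px - tile_min_px)

# One tile whose gradient only spans 0.15..0.85 (hypothetical values):
# the two data maps hold 0.15 and 0.85 at every pixel of that tile.
pixels = [0.15, 0.5, 0.85]
leveled = [level_tile_pixel(p, 0.15, 0.85) for p in pixels]
print(leveled)  # stretched to the full 0..1 range
```

Because the data maps carry a different min/max pair for each tile, every tile gets stretched to the full range independently of its neighbours.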

Basically, that's it. See the image attached for a visual guide.

Fun fact: while working on this node, I found a couple of shortcomings in the Auto Levels Plus node itself. Fixing them made the node 10x-30x faster than the "vanilla" Auto Levels (it was so-so before). Double profit.  ::)

Yes. If you don't have an Nvidia GPU, you can only use a CPU.

Yes, it will work. The only scenario where it may fail is a very complex, spiral-like pattern with long and thin parts bending in different directions at extreme angles. Something like an ornament, perhaps. But even then, it's really a matter of node configuration. One can open the .SBS and reconfigure the node to allow more "processing" steps, and then it will work at the expense of a longer computation time.

For general usage, it should be pretty universal and reliable.  :P

annunziata3d, great suggestions! Makes perfect sense. Will do.

seanv3d, thanks! Should be pretty useful for anything with tiles.

Been using this for a while now. Just want to say thanks and great work!

Also... would it be possible with a random gradient/slope node next?  ;D

You're on!  ;)

Check it out:

I'm looking into making a node that can fill a mask with gradients/slopes running in different directions. It's a prototype for now, so I'm looking for some input on it.

Good news, everyone! ::)

With a bit of wizardry and some spontaneous insights here and there, I have a functional prototype for this one. :P It's a bit funny that it became possible as a combination of my previous nodes: Randomize Mask and Auto Levels Plus.

A "Slope Mask" prototype takes as an input a variation map and produces a mask filled with gradients, whose direction is determined by the luminance of a corresponding tile in the input mask. Simply put, the variation mask rotates the gradient.

The gradient can span the maximum range for each individual tile in the mask, or it can be controlled with input parameters. For example, it's possible to limit the gradients to a specific range, like 0.5 - 0.75 (or any other). It's also possible to invert the direction of the gradients. There is potential for additional parameters to make the node even more powerful and controllable, but I haven't explored that in full just yet.

For example, using a Levels node, one can clamp some portions of the gradients to produce interesting effects in the height map, like partially cut tiles (see the third image attached).

And, of course, it works with bitmap images too.  :D

Because this is just a prototype and isn't finished, I'm not making the node available yet. I'm looking for feedback to enhance it. How would you use such a node? What parameters would you like to control? Some additional ideas, maybe? Comments?

Thanks and cheers!

