Author Topic: Normalize input to Histogram Scan so that Position = % Coverage

I frequently use Histogram Scan nodes with very high contrast in order to create masks between different material surfaces. Damaged vs. undamaged concrete, painted vs. unpainted wood, rusted vs. unrusted metal, etc.

Ideally, I want the Position slider to be roughly equivalent to the percentage of the mask that is white. Currently it only behaves that way if the input texture has a very even, flat distribution of values.

Is there a node or combination of nodes I can use to redistribute the grayscale values so that the distribution is flatter?

That position slider won't resemble percentage coverage unless I can get the top distribution to look more like the bottom one.
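To make the request concrete, the remap I'm imagining is basically histogram equalization: send every value to the fraction of pixels at or below it (the empirical CDF). Here's a quick NumPy sketch of the math, purely for illustration; this isn't an existing Substance node, and the function name and bin count are just my own choices. Pixels that share a value all map to the same output, so it can only get approximately flat.

Code:
import numpy as np

def equalize(gray, bins=65536):
    # Remap a grayscale image in [0, 1] through its empirical CDF.
    # Afterwards, a threshold at t selects roughly a fraction t of the
    # pixels, which is what would make Position behave like coverage.
    hist, edges = np.histogram(gray, bins=bins, range=(0.0, 1.0))
    cdf = np.cumsum(hist).astype(np.float64)
    cdf /= cdf[-1]                               # normalize to [0, 1]
    centers = (edges[:-1] + edges[1:]) / 2.0
    return np.interp(gray, centers, cdf)

# Example: mask = equalize(noise) <= 0.30 selects roughly 30% of the
# pixels (up to ties), regardless of the original value distribution.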

I don't think that's technically possible. The histogram indicates the number of pixels that have a specific value; to get a flat histogram you'd have to change the value of some pixels while others kept their current value. That raises problems I don't think have good solutions: simply put, what rule should you follow to modify some pixels while not modifying others?
Product Manager - Allegorithmic

There are solutions, I think, but I'll concede I don't have any good ones immediately in mind. It's probably easier to build a node specifically designed to select a given percentage of pixels in a texture than to try to redistribute values for use in a Histogram Scan.


The stupid version of this 'Percentage Selection' node would be to order all of the pixels from darkest to lightest (or by whatever other criterion you want), multiply the requested percentage by the total number of pixels, and take that many pixels off the front of the list.

As you point out, though, you have to decide how to order the pixels that share the same value. The stupid version might just take all of the pixels at whatever value sits at the cutoff, or none of them. Or pick among them randomly.

And it would behave unreliably, especially for heavily quantized images where many pixels share the same value. But with most noise textures at 2K or 4K, I suspect you'd still end up with a result quite close to the percentage you actually want.
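Here's a rough NumPy sketch of that naive idea, with random tie-breaking at the cutoff value, just to show there's nothing conceptually hard about it. The function name and details are mine, not an existing node.

Code:
import numpy as np

def select_fraction(gray, fraction, rng=None):
    # Boolean mask covering ~fraction of the pixels, darkest first.
    # Pixels tied at the cutoff value are admitted at random so the
    # requested pixel count is hit exactly.
    rng = np.random.default_rng() if rng is None else rng
    flat = gray.ravel()
    n_want = int(round(fraction * flat.size))
    if n_want <= 0:
        return np.zeros(gray.shape, dtype=bool)

    # Value at the cutoff rank; np.partition avoids a full sort.
    cutoff = np.partition(flat, n_want - 1)[n_want - 1]

    mask = flat < cutoff                    # strictly darker: always in
    ties = np.flatnonzero(flat == cutoff)   # pixels sitting on the cutoff
    extra = rng.choice(ties, size=n_want - int(mask.sum()), replace=False)
    mask[extra] = True
    return mask.reshape(gray.shape)

# Example: select_fraction(noise, 0.30) returns a mask where ~30% of
# the pixels are True, even if the histogram of `noise` is very lumpy.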

That also seems unlikely to be an efficient node, but I'm not really sure about that.

It might be possible to tend toward a uniform distribution of values, but it'd certainly require many iterations to make the result converge.

Product Manager - Allegorithmic