Show Posts



Messages - Sergey Danchenko

61
Yes, I will share the SBS and SBSAR in the coming days. I'll make a post here once it's ready.  ::)

62
I'm glad that it was of help to you  :) Small note, though — I did not make the Filter Random Tiles Grayscale Generator; its author is Pawel Pluta. I'm working on a similar node, but it hasn't been released to the public yet.

Good luck with your project!

Regards,

63
Hey, Pawel! Thank you for finding the time to post here and for the SBS, of course. So, I was wrong about the FX-Map after all... Too bad  :D I've run through your explanations below — great job, and a clever one! I'll study the graph once it becomes available on Share.

Actually, seeing your node on Share just before the New Year was a great inspiration to me. I had just started learning the ins and outs of the Pixel Processor and FX-Map nodes and was doing some research on them when I stumbled upon this thread by chance. Seeing that there was no solution yet and how useful such a node could be, I kept it bookmarked for a week or so, thinking of a way to implement something like this. Then your node clearly demonstrated that this IS possible, so it got me to work  ::)

Long story short, I've ended up with a solution very much like yours, with a number of distinctions. Some optimizations (covered below) drastically improved the performance, so it can now take less than 150 ms to process a generic 8192x8192 mask, a bit more if the mask contains complex patterns.

Here's how I've done it:

#1 is the noise variants that are multiplied on top of the mask in the Blend node marked as #3. Just like you, I was looking for a way to "tag" each cell of the mask with some unique feature. Looking for the maximum luminance value inside each cell was a good idea. After some experimentation I found that Fractal Sum 1 and Fractal Sum 3 were very good at providing unique maximum values for each cell, the former being less uniform and the latter more uniform. I've made a switch between the two available to the user, as well as a custom noise input (#4) that can be used to create pseudo-random luminance variation (may be useful sometimes). Not all noise types will work correctly, though.
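To illustrate what "unique maximum per cell" means, here's a quick Python sketch of how a noise candidate could be checked outside of SD (not part of the graph; mask and noise are just placeholder NumPy arrays):

    import numpy as np
    from scipy import ndimage

    def per_cell_maxima(mask, noise):
        # Label every connected white region ("cell") of the binary mask.
        labels, count = ndimage.label(mask > 0)
        # Maximum noise value inside each cell -- these are the values that
        # later get flooded through their whole cells.
        return ndimage.maximum(noise, labels=labels, index=np.arange(1, count + 1))

    # A noise is a good "tagging" candidate if no two cells share a maximum:
    # maxima = per_cell_maxima(mask, noise)
    # assert len(np.unique(maxima)) == len(maxima)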

There are also Transform nodes with exposed parameters to allow offsetting the noise — it can be used to "move" the values around as needed. There's also a Levels node with tweaked Level Out Low and Level Out High to ensure the noises contain no pure black or pure white pixels — necessary for the Pixel Processor functions to work correctly later on.

Here's the first optimization — most of the time these noises can be computed at a lower resolution than the rest of the graph. I've set "Parent minus 2" as the default resolution downscale and haven't decided yet whether this should be exposed to the user. It helps shave some tens of milliseconds off the graph, especially at 4k and 8k.

#2 is a custom Pixel Contractor node that basically erodes the mask by a given pixel distance. It uses a Pixel Processor to sample the pixels around the one currently being processed. If a black pixel is found, the processed pixel is considered a "border" pixel and takes a 0 (black) value. This effectively contracts the mask around its borders by one pixel per iteration. It's a supplementary node I made to fight the leaking of a cell's luminance when cells are very close together, and it can be necessary for the biggest speed optimization I've made, which is explained below.
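In Python-like pseudocode, one contraction pass looks roughly like this (just an illustration of the logic described above — the actual node is a Pixel Processor function graph, and the wrap-around indexing is only there to mimic tiling):

    import numpy as np

    def contract_once(mask):
        # mask: 2D array, 0 = black gutter, 1 = white cell interior.
        h, w = mask.shape
        out = mask.copy()
        for y in range(h):
            for x in range(w):
                if mask[y, x] == 0:
                    continue
                # Sample the 8 neighbours; if any of them is black, the current
                # pixel is a "border" pixel and gets set to 0 (black).
                for dy in (-1, 0, 1):
                    for dx in (-1, 0, 1):
                        if mask[(y + dy) % h, (x + dx) % w] == 0:
                            out[y, x] = 0
        return out

    # Contracting by N pixels = running contract_once N times.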

#5 is a Multi-Switch Grayscale node and a bunch of container nodes. Each container hosts a chain of four Pixel Processor nodes that contain the main dilation function. Just like yours, it looks for the maximum pixel luminance around the current pixel in the supplied map and adopts it as its own luminance. Four consecutive passes like this make one iteration, and in #5 you can see how 32 such iterations are plugged into the Multi-Switch node to control how many iterations are used.
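Roughly sketched in the same Python-ish way (again, just the logic, not the actual Pixel Processor graph):

    import numpy as np

    def dilate_once(values):
        # values: the mask multiplied by the noise, so 0 marks the gutter.
        h, w = values.shape
        out = values.copy()
        for y in range(h):
            for x in range(w):
                if values[y, x] == 0:
                    continue                      # gutter pixels stay black
                best = values[y, x]
                # Take the maximum luminance of the 3x3 neighbourhood.
                for dy in (-1, 0, 1):
                    for dx in (-1, 0, 1):
                        best = max(best, values[(y + dy) % h, (x + dx) % w])
                out[y, x] = best
        return out

    # Four chained passes like this form one "iteration"; the Multi-Switch then
    # selects how many iterations (up to 32) are actually applied.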

Now it's time for the biggest optimizations I've made to increase performance.

1) Instead of sampling just the pixels around the current one, I made a kind of ray-casting function that can sample any given number of pixels to the left, right, top, bottom or in the four diagonal directions. This way each iteration is much more effective at finding the maximum luminance value, and overall far fewer iterations are needed to propagate it through the entire cell. By experimenting, I found that a ray length of 32 px is optimal in terms of performance and propagation speed (I tried 8, 16, 32, 64 and 128).

Initially I tried casting these rays in eight directions (straight and diagonal), but found that the straight rays had little effect on the end result while slowing the function down by a factor of two. So the final function looks like this: sample the current pixel, the pixels around it and 32 pixels in each of the four diagonal directions, then assign the maximum luminance found to the currently processed pixel.

The main issue with this ray-casting approach was finding a way to limit the sampling to the inside of the current cell. If even a single sample were taken from another cell, the luminance value would "leak", producing two identically valued cells next to each other. Fortunately, I eventually assembled the main function in the desired way — it samples values only until a black pixel is encountered in the sampled direction. Once a black pixel is found, all further values are treated as 0 (black). Unfortunately, I haven't found a way to actually stop the sampling at that point, otherwise the function would be even faster.

So, in words, the sampling-constraint function reads like this: sample a pixel in the given direction, then check whether it is black (border) OR any of the previous pixels were black (has a border already been encountered?). If false, feed the sampled value to the MAX node; if true, feed 0 (zero) to the MAX node so it doesn't affect the result.

You can see a portion of this function and the use of the OR operator in the attached picture.
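The same logic, written out as a Python-style sketch (the real thing is built from Pixel Processor function nodes feeding a MAX):

    import numpy as np

    DIAGONALS = ((-1, -1), (-1, 1), (1, -1), (1, 1))

    def sample_with_rays(values, y, x, ray_length=32):
        # values: the mask multiplied by the noise, so 0 marks the border/gutter.
        h, w = values.shape
        if values[y, x] == 0:
            return 0.0                            # border pixels stay black
        best = values[y, x]
        # The immediate 3x3 neighbourhood, as in the basic dilation pass.
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                best = max(best, values[(y + dy) % h, (x + dx) % w])
        # Diagonal rays with the sampling constraint described above.
        for dy, dx in DIAGONALS:
            hit_border = False                    # the accumulated OR
            for step in range(1, ray_length + 1):
                sample = values[(y + dy * step) % h, (x + dx * step) % w]
                hit_border = hit_border or (sample == 0)
                # Once a border has been met, feed 0 into the MAX instead of
                # the sample, so values from the next cell can never leak in.
                best = max(best, 0.0 if hit_border else sample)
        return best

    # Running this for every pixel is one pass; four passes make one iteration.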

2) As the Pixel Processor does its computation for each individual pixel, the number of pixels has a direct effect on computation speed. Computation time scales linearly with the pixel count, so a 4k map takes about 4 times longer than a 2k one and 16 times longer than a 1k one. That made node resolution a perfect opportunity to cut computation time by an order of magnitude.

So the question was quite simple — do I really need to perform the maximum luminance sampling and dilation/propagation with the Pixel Processor at full graph resolution, or can it safely be decreased? Some research showed that downscaling the mask and performing the sampling/propagation at a lower resolution indeed produces the result much faster (like 16 times faster if a 4k mask is processed at 1k, or 64 times faster if an 8k mask is processed at 1k ::)). Decreasing the inner resolution works, but not without its shortcomings.
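The math behind those numbers is simple (back-of-envelope only):

    # Per-pixel work means the cost scales with the pixel count, so processing
    # at a lower inner resolution is quadratically cheaper:
    def relative_cost(inner_res, full_res):
        return (inner_res / full_res) ** 2

    print(relative_cost(1024, 4096))   # 0.0625   -> about 16x faster
    print(relative_cost(1024, 8192))   # 0.015625 -> about 64x faster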

The first issue is that after downsampling, the borders of the mask become distorted and no longer match the original mask. To fix this, I multiply the original mask on top of the processed one right after the dilation/propagation node, in the Blend node marked as #6 on the attached picture, to ensure that any stray pixels introduced by the resampling are cut out.

The second issue is that after upsampling the mask back to the original resolution, some pixels on the cell borders will not match the luminance value that was just propagated through the cell. This can be fixed quite easily with some secondary processing using the same dilation/propagation function, but there's another problem: some pixels inside a cell can become completely black, so the function won't work right away — those pixels would be treated as the mask's gutter, and the end result wouldn't match the original mask along the borders. I fix this in another Blend (#7) by copying the original mask over the processed one at very low opacity (0.05) to push those pixels away from black and create room for the secondary dilation/propagation.
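Both fix-ups together amount to something like this (a sketch assuming a 0/1 original mask and normalized values — the actual work is done by the two Blend nodes #6 and #7):

    import numpy as np

    def fixups_after_upscale(original_mask, processed, opacity=0.05):
        # #6: multiply the original mask on top of the upscaled result, cutting
        # off any stray pixels the down/upsampling produced around the borders.
        processed = processed * original_mask
        # #7: copy the original mask over the result at very low opacity, so
        # in-cell pixels that collapsed to pure black become slightly non-zero
        # and the secondary dilation pass (#8) has something to propagate into.
        return processed * (1.0 - opacity) + original_mask * opacity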

The secondary dilation/propagation is necessary only if the inner resolution downscaling optimization is used, and it happens in the node marked as #8. It's basically the same dilation function with sampling and ray casting, but with a ray length of 8 pixels instead of 32 to give more granular control — at this stage it's just about fixing a few pixels on the borders, so expensive sampling isn't really necessary.

3) Lastly, there is one more thing to consider if the inner resolution is decreased. For very tight masks where there is only a one-pixel-wide black gutter between the cells, downsampling could make cells bleed into each other, effectively merging them into one. To fight that, the Pixel Contractor node mentioned above and marked as #2 on the picture is used. In essence, it widens the gutter between individual cells to prevent them from merging in such corner cases. As the mask is overlaid with the original map later on, the contraction does not distort the borders of the end result, and the final mask borders remain identical to the input. I believe most of the time this parameter won't be needed, as such tight masks should be pretty rare.

4) A nice side effect of lowering the inner resolution — far fewer iterations are needed to flood the cells with the maximum luminance. As the sampling reach stays the same (32 px along the diagonals) while the cells shrink in pixel terms, just 1 or 2 iterations can be used instead of 6, making another substantial contribution to performance.

So, in short: reducing the inner resolution of the Pixel Processor containers and exposing this value to the user, combined with some counter-measures against the side effects, provides a tremendous speed boost. 8k masks, as I mentioned at the start of this post, can be processed in less than 150 ms. More complex and larger patterns can take something like 500 ms. Most 4k masks can be processed in 50 to 100 ms.

Lastly, #9 is a few nodes to control the contrast of the resulting random luminance mask. Nothing special here, and I'm still looking for the best way to do it — some kind of Auto Levels function would be best, I think.

It was a fun experience and I've enjoyed it very much. Long live Pixel Processor!  :P

64
I agree that such functionality would be most welcome. Until something like this officially makes its way into the Sampler-type nodes, there are basically two options:

#1 is to customize the Tile Sampler node yourself to expose a random luminance mask. It's not a complex task if you're already familiar with parameters and functions. I had a quick look, and it appears that the Tile Sampler node uses a lot of map inputs, an FX-Map node and plenty of parameters to drive it. If you make a copy of the FX-Map inside this graph, keeping all the connections, it will share all inputs and parameters with the original FX-Map node, thus producing exactly the same result when the node instance is tweaked. But since it is a copy, you can use another custom parameter to drive its random luminance function and give it an additional output node, making the randomized luminance mask available outside the graph.

Actually, while writing this I modified the Tile Sampler nodes (Grayscale and Color versions) for you in the way described above — you can use them if you like (attached below) :) Unfortunately, due to the nature of the Tile Sampler and how this was achieved, the modified nodes will be twice as slow, which could be a problem in certain circumstances. It's pretty much two Tile Samplers sharing parameters and running in parallel inside one node.

#2 is to use a separate node, unrelated to the Tile Sampler, that randomizes your black-and-white mask by assigning random luminance values to each individual "cell". Potentially it could be much faster than solution #1. Unfortunately, there is no such node in SD, but there are some user-contributed solutions that may work for you.

There is a discussion of a node like this here: https://forum.allegorithmic.com/index.php?topic=5158.0 . Luckily, one guy actually managed to implement it, so you can grab it from Share: https://share.allegorithmic.com/libraries/2272

I'm also working on a node like this using the Pixel Processor, so it should be much faster than the one linked above.

Hope that helps. Cheers!

65
I can confirm that in SD 5.6 tuning the Noise and Precision controls has no effect on the currently set gradient keys.

66
Some observations on the node above, if anyone is interested.  :)  It's magic!  ;D

At lower resolutions it works great, but setting a higher graph resolution, like 4096x4096, lets you actually see its inner mechanics. Basically there are four parameters that affect the end result: noiseDistance1, noiseDistance2, bevelDistance and DilationSteps.

noiseDistance1 produces small rectangles that start to swarm the supplied black-and-white mask as the parameter is increased. It's pretty obvious that an FX-Map is involved at this step — most likely this parameter controls the number of iterations in it.

noiseDistance2 controls the luminance of those rectangles. At 0 there is no variation in luminance, and from 1 to 64 it looks like some noise is overlaid on top to introduce luminance variation (that will be important later).

bevelDistance affects how the rectangles are placed inside the mask's "cells". I'm not sure how this is done, but I guess there is a Bevel node in play that creates a gradient inside the "cells", which the FX-Map can sample to somehow figure out the "cell" boundaries and where to place the rectangles.

Lastly, the most important parameter is DilationSteps. As it is increased, the previously placed rectangles start to spread inside the cells, becoming larger, while the rectangles with the maximum luminance value end up on top of the others. This allows them to propagate through the entire "cell" if enough DilationSteps are given. It looks like some blending in "Max" mode produces this result, most likely set inside the FX-Map's Quadrant node(s) used in the iterations that create the rectangles.

So, in the end this node works quite well, and it proves that such functionality is possible inside SD :) Though at higher resolutions or with very large cells it may fail, unfortunately, as there can be cases where 16 DilationSteps aren't enough to fill an entire cell. This is likely to happen at 4096x4096 and very likely at 8192x8192. Also, the node becomes slower as the resolution and DilationSteps are increased.

The most intriguing part of this node is how the rectangles propagate only inside their own cells without affecting the others. I've contacted the author of the node, Pawel Pluta, to see if he would be willing to share the .SBS package and some insight into how this was done. Hope he comes by soon  :P

67
At the top of the Graph View panel (just above where your graph is) there is a magnifying glass icon — use it to bring up the node filtering options. From there, you can choose to highlight nodes of specific types or nodes that use specific parameters.  :)

68
That's good news. Thank you!

69
Hi Nicolas,

Thank you for the reply. I tried that before posting here, to no avail. Have you tried enabling Point Light 1 in the viewport's Lights menu, enabling Light (shape visualization) in the Display menu, and bringing Point Light 1 closer to the surface to actually see it? From a default scene you have to hold Shift + RMB and drag the mouse a bit upward to bring the light closer to the surface and see its shape. The issue still manifests itself through the gigantic light sphere and the inadequate effect of the Height Scale for the Tessellation shader.

I'm pretty sure this should be reproducible, as just a couple of days ago I saw a guy on YouTube using SD 5.6 and facing the same issue, though it looked like he hadn't figured out that it wasn't normal behaviour for SD.

70
It's already in Substance Designer and will come shortly to Substance Painter.

71
If you want an Nvidia video card, there's not much to choose from :) The GTX 1070 is the sweet spot in terms of performance for the money, and it falls within your budget. Just look for a model with a decent cooling solution and stay away from the Founders Edition. Also be careful not to buy an overpriced GTX 1070 — sometimes you can get a GTX 1080 for what they're asking for a GTX 1070.

The GTX 1070 will be very fast in Iray, so you won't be disappointed with the upgrade in any case.  :D

72
Check this out, guys: https://share.allegorithmic.com/libraries/2272

Too bad it's only an .sbsar :( I'm very curious to see how it was actually done.

73
It should work. The only thing I can think of would be resource constraints... Do you have enough RAM and video memory to handle 8k textures? They can be pretty expensive, especially if there are several of them.
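For a rough sense of scale (back-of-envelope math for the raw bitmaps only, not SD's actual memory footprint):

    # Memory cost of a single uncompressed 8192x8192 bitmap:
    pixels = 8192 * 8192
    print(pixels * 4 / 2**20)   # RGBA, 8 bits per channel  -> 256.0 MB
    print(pixels * 8 / 2**20)   # RGBA, 16 bits per channel -> 512.0 MB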

74
Substance tech is gaining momentum in VFX, architectural visualization, automotive, VR/AR and a few other industries. Yes, its core was built around game art, but the toolset is gradually expanding, and it's clear that Allegorithmic seeks to extend its presence in the industries mentioned above. Many studios in those industries are now pioneering Substance to get an edge over their competitors.  ;)

Substance is compatible with many major rendering engines, including V-Ray, Mental Ray, Iray, Corona, Arnold and KeyShot.

So, the answer to your question — yes, it is used by many outside of games. It may not yet be perfect at handling every task, but these days you basically can't go wrong with Substance. I highly recommend it.


75
I know it's just a BIT late, but this was discussed here: https://forum.allegorithmic.com/index.php?topic=14139.0

It's a limitation of how the FX-Map renders patterns, unfortunately, though a "hack" has been suggested as a workaround (see the linked thread).
