Show Posts

This section allows you to view all posts made by this member. Note that you can only see posts made in areas you currently have access to.


Messages - Sergey Danchenko

Pages: [1] 2 3 ... 10
1
That's great, thank you!

2
Hi,

I wonder if there is a way to enable/implement DirectX raytracing in the bakers for Pascal GPUs? Microsoft said that DXR would include some kind of fallback layer for GPUs that don't have dedicated raytracing hardware, so they could still use raytracing features. I guess this is the one: https://github.com/Microsoft/DirectX-Graphics-Samples/tree/master/Libraries/D3D12RaytracingFallback

Considering that RTX graphics cards are quite expensive, I believe that Pascal GPUs like the 1080 Ti, 1080 and 1070 will be around and relevant for at least a couple of years. Yes, raytracing on those would be much slower than on RTX, but Pascal is still a quite capable GPU and would probably accelerate things substantially (even baking AO from Mesh 2x faster would be quite nice  ;)).

Any chance to explore that possibility? Thanks!

3
That was a great idea! Seems like it will support tiling, after all :)

Gradients are generated from the pixel position in the map (UV coordinates), while a random luminance mask controls the gradient rotation matrix. Because of the random luminance mask, the value that rotates the gradient is kept the same for each individual island in the mask — that's the cornerstone of this implementation. So, as the pixel position changes, so does the gradient value, but the overall direction for a given island is preserved.
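
Just to illustrate the idea outside of SD, here's a minimal Python/NumPy sketch of that logic — not the actual Pixel Processor graph; island_value stands for a hypothetical map where every pixel of an island already holds that island's random luminance value:

```python
import numpy as np

def island_gradient(island_value, mask):
    """Gradient from pixel position (UV), rotated per island by the island's
    random luminance value; direction stays constant inside each island."""
    h, w = mask.shape
    v, u = np.meshgrid(np.linspace(0, 1, h), np.linspace(0, 1, w), indexing="ij")

    # The random island value picks a rotation angle. Because it is constant
    # inside an island, the gradient direction is constant per island too.
    angle = island_value * 2.0 * np.pi
    grad = u * np.cos(angle) + v * np.sin(angle)  # position projected on direction

    # Normalize to 0..1 and keep only the masked islands.
    grad = (grad - grad.min()) / (grad.max() - grad.min() + 1e-8)
    return grad * (mask > 0)
```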

At first, I tried to use a Transform node to downscale the map, so islands located at the edges would move inside the map and the gradient would be applied to them without wrapping around. A Transform node scaled down to 50% and set to "Input +1" resolution would allow this without resampling pixels, so no information would be lost. Then apply the gradient and use a Transform node again with 200% scaling and "Input -1" resolution to get back to the original resolution. Unfortunately, this would limit the node's tiling to 4K resolution, as you can't go over 8K in SD yet.

Then I realized that, with very long islands in the map, scaling down to 50% won't be enough. To make tiling work, the map would have to be scaled down one more time, to 25%, and then back up again. That would effectively limit the node to 2K resolution only, which is unacceptable.

I decided to follow your great idea and leveraged the Randomize Mask node to mark islands that are located at the edges and in the corner of the map, then flood-filled those islands to make a map of them. Then I figured that I can modify the gradient function to wrap around the marked islands as necessary, so no actual map offset is needed (the gradient offsets instead). It also became apparent that I don't need to mask islands on both sides of the map — just pick one side and make sure the gradient wraps correctly to the other side.

So, with the wrapping islands masked, I used them to feed some additional parameters into the gradient function. Islands marked as touching the left border of the map get a horizontal U offset of 1 unit, so the gradient wraps there by exactly one texture tile. Same for islands located at the top — mask them and offset the gradient vertically on V by 1 unit. For the corner island that touches both the left and top borders of the map, offset both U and V by 1. Voila! — a seamless gradient.
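
For what it's worth, the offset logic boils down to something like this sketch (Python, with hypothetical touches_left / touches_top boolean maps marking pixels of islands flagged as touching the left/top border — not the actual node function):

```python
import numpy as np

def wrapped_gradient(u, v, touches_left, touches_top, angle):
    """Shift U/V by one full tile for border islands before computing the
    gradient, so it continues seamlessly across the tiling seam."""
    u = np.where(touches_left, u + 1.0, u)   # left-border islands: +1 tile on U
    v = np.where(touches_top, v + 1.0, v)    # top-border islands: +1 tile on V
    # Corner islands satisfy both conditions, so they get both offsets.
    return u * np.cos(angle) + v * np.sin(angle)
```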

Now I have something like this almost ready (first image below): a node that applies a random, variable-direction gradient to individual islands in a mask, with support for donut-like shapes, an adjustable and variable gradient range, a global gradient angle control and a pretty Invert (!) button. :P

I wonder if there is any use for an additional node that marks islands that will wrap when the map tiles (second image below)? I have a feeling that it may come in handy, but I can't tell right away in what way. The map can be neatly packed into an RGBA image, with horizontally wrapping islands in R, vertically wrapping ones in G, corner islands in B and a general mask for all of them in A.

4
 :-\ Looks like my gradient node won't support tiling for now. It does work with donut-like shapes, but I can't figure out how to make the gradient wrap correctly on islands that are located on both sides of the mask (i.e. that need to tile). Can't have everything at the same time, eh?..  ;D

The most unfortunate thing is that I'm pretty confident it can be done, but at this time I lack the math background required to make it happen.

5
No, I've stopped working on it since Flood Fill was released. The node is functional, but requires some love on the organizational side. Judging from this topic, it looks like the node may still be useful in some cases where Flood Fill can't operate properly. I'll look into finishing and releasing it.  ::)

6
I guess my Randomize Mask node implements something close to the Flood Fill algorithm from the OP. It's a brute-force approach with some optimizations, but it's quick enough to be usable in practice. As a benefit, it can tackle almost any shape given enough iterations  ;D.

It was designed to fill an array of islands with random luminance values (keeping the value the same inside a given island) to create a random luminance mask from/for bitmap images.

In the images attached below, you can see how the node spreads values across shapes over several iterations, with the end result in the last frame.

Some time later, I found that a random-angle gradient can be applied to such shapes with some additional processing (see the last image).

The node mentioned above is released and thoroughly commented, so anyone interested can take a peek inside.  ::)

By the way, Eggfruit, you've pretty much described how the Randomize Mask node came together. It's a bunch of Pixel Processor nodes with something like "ray casting" functionality. Each Pixel Processor samples a number of pixels (I stopped at up to 8 px per iteration) to the left/right/top/bottom of the current coordinate and stops if it finds a black one (i.e. an island border). It then takes the min or max sampled value as the value for the processed pixel. In this way, min/max values can be spread throughout the islands without crossing the islands' borders. To optimize it, you can indeed reduce the resolution at which the computations are made, and that drastically improves performance.
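
If it helps, here's roughly what one such pass does, written as plain (and slow) Python rather than a Pixel Processor function — a sketch of the idea, not the actual implementation:

```python
import numpy as np

def spread_step(values, mask, reach=8, mode="max"):
    """One pass of the 'ray casting' spread: look up to `reach` pixels in the
    four axis directions, stop at a black mask pixel (island border), and take
    the min/max of what was seen as the new value for this pixel."""
    h, w = values.shape
    out = values.copy()
    pick = max if mode == "max" else min
    for y in range(h):
        for x in range(w):
            if mask[y, x] == 0:              # black pixel = island border
                continue
            best = values[y, x]
            for dy, dx in ((0, -1), (0, 1), (-1, 0), (1, 0)):
                for step in range(1, reach + 1):
                    ny, nx = y + dy * step, x + dx * step
                    if not (0 <= ny < h and 0 <= nx < w) or mask[ny, nx] == 0:
                        break                # never cross the border
                    best = pick(best, values[ny, nx])
            out[y, x] = best
    return out

# Chaining several such passes spreads each island's min/max value across the
# whole island without leaking into neighbouring islands.
```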

Some links:

https://forum.allegorithmic.com/index.php/topic,5158.0.html
https://forum.allegorithmic.com/index.php/topic,16547.0.html
https://www.artstation.com/artwork/1alxX

7
Hello,

This is my entry for the contest :).

I was looking for something special, something that would harness the power of Iray and MDL, and decided to go with an optical fiber self-illuminated cloth material. It is a material woven from specially fabricated optical fiber wires. Normally, light shouldn't leak from an optical fiber, but for this material a large number of laser cuts are made along the fiber, so the light leaks out along the length of each wire, producing beautiful light sparkles and streaks.

The material is made as an MDL driven by a Substance graph. The images were rendered with Iray using displacement mapping. As it's basically all glass, there's A LOT of refraction in this material, so it was a challenge in itself to render it out.  ;D

On Artstation: https://www.artstation.com/artwork/Ynmq6

8
Cool! Can't wait to check it out  ;D

thephobos, I haven't shared anything yet. I was going to make a push and finish the node, but it seems that it may be too late  ;).

9
You're welcome and good luck!  ;)

10
I meant areas like in the first picture attached — it's really the same issue with missing projection rays that you've (mostly) fixed already. Unfortunately, on shapes like this it's close to impossible to bake a normal map without any artifacts — at most, by moving the geometry around you can choose to have them either on top or on the side.  :D

That's why you should change the geometry to get a better bake. Consider adding a slight bevel on top of your low-poly shape (see the second picture attached for an example). Generally, you should avoid right angles (close to 90 degrees) on your shapes as much as possible, because it's pretty hard to bake them without artifacts. It's good practice to slightly exaggerate such angles and make them a bit rounder and smoother than they are on real objects — you will not only get a better normal map from it, but the object will also catch highlights on those edges better and keep visual integrity at larger distances.

That said, you have really sharp edges on top of your high-poly — it may be realistic, but it will not play so well for the normal map. Try to make them much rounder and smoother — you can follow the profile of the low-poly mesh with the bevel you'll be adding to it.

Lastly, there are some redundant edges on the top side of the low-poly (see the third picture attached). They can affect baking in a negative way, so remove them (and check UV's after that).

With those three steps, you should get a much better normal map.

11
Now, your issue is that your low-poly mesh doesn't follow the shape of the high-poly mesh closely enough. It's a very similar artifact (basically the same) to what you would get when baking a high-poly cylinder into a low-poly with not enough geometry. Check out this thread, it has some great insights on this: http://polycount.com/discussion/81154/understanding-averaged-normals-and-ray-projection-who-put-waviness-in-my-normal-map

Basically, to fix this, you should add some geometry to your low-poly where the artifacts you're seeing in the normal map are. Also, make sure that your low-poly wraps the high-poly as closely as possible — there are a few areas on top where that isn't the case, and you get some thin line-like artifacts on your normal map because of it.

And finally, here's a quick mockup of what's going on with your bike. Blue is the low-poly, red is the high-poly. The blue line is the distance between the low-poly and the cage (a "push" for the cage, or Frontal Distance if you don't use a cage in SP) and an approximation of a pixel for which a particular baking ray will be cast. The black line is a ray cast for a particular point on the side of your mesh (where you see the artifacts). As you can see, the ray cast from the side of the low-poly will hit the top side of the high-poly (black dot). Normals baked from that will produce a visible artifact on your normal map, because it will contain normals from the top of the mesh on the side of the mesh. Additional geometry should alleviate that.

12
You have broken smoothing groups (soft/hard edges) on your low-poly mesh in SP. Everything is soft — hence the weird smoothing on top of the mesh that can be seen even without any maps/materials applied. The normal map you've been baking struggles to compensate for that, thus the artifacts.

The mesh should look more like the right side of the picture below. Try to re-export your meshes from your 3D app and check that Smoothing Groups export is enabled. You can also try to export in FBX format if OBJ won't do for some reason. And don't forget that for every hard edge in your LP mesh you should have a split in your UVs for the normal map to bake seamlessly. Most likely you will need to slightly alter your UVs for this mesh to account for that.

Good luck!  :)

13
I support the request to put this feature back. Substance Designer is for crazy things, and you can't really foretell what bake configuration would be necessary. Being able to override settings for each bake separately was a huge boon.  ::)

14
Nice catch! Thank you. Fixed and re-uploaded  :).

If you can recommend something, I will really appreciate that.

Vincent Gault's YouTube channel has some informative videos on Pixel Processor: https://www.youtube.com/user/vinnysud. Apart from that, there's not so much content around, unfortunately.

In general, I would say that the most important things for getting familiar with the Pixel Processor are the following:

1) The Pixel Processor executes its functions on a per-pixel basis and in parallel. It means that the same function is executed for every pixel in the map. You can't get the result of that processing to influence something that happens inside the same Pixel Processor node. You can, however, process your maps in several steps by chaining several Pixel Processor nodes in succession; intermediate nodes can then act as data storage/providers for the next nodes in the chain. (See the small sketch after this list.)

2) It's all about manipulating pixels' luminance values. For each individual pixel in a map, you can sample (pick the luminance value of) basically any pixel in the same map or in another map plugged into the same Pixel Processor. Alternatively, you can write new values to the pixels right away, without sampling anything, based on some logic implemented through the functions.

3) To manipulate values, you'll need to assemble some functions inside the Pixel Processor. A very basic understanding of programming and math is enough to grasp it. You'll need to have some idea of what Float, Integer and Boolean are, how If-Else conditions work, and what other function nodes are available inside the Pixel Processor. Then it's math — take some value, multiply it by another, add something to it, subtract, etc. The math here CAN be pretty advanced, but it doesn't have to be. You can use what you know and still get nice results.

4) Actually making some simple Pixel Processor functions helps a lot when you want to get familiar with it quickly. Then you can try new and more complex ideas, make some simple filters using the Pixel Processor, etc. This will get things rolling, and in no time you'll find yourself assembling sophisticated functions to make it do what you want. ;D
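
To make point 1 a bit more concrete, here's what a trivial Pixel Processor function boils down to if you write it as plain Python instead of function nodes (just a sketch with made-up inputs, not SD's actual API):

```python
import numpy as np

def pixel_processor(input_a, input_b, threshold=0.5):
    """What a simple Pixel Processor function boils down to: the same small
    function evaluated independently for every pixel of the output map."""
    out = np.zeros_like(input_a)
    h, w = input_a.shape
    for y in range(h):
        for x in range(w):
            a = input_a[y, x]    # sample input 1 at the current coordinate
            b = input_b[y, x]    # sample input 2 at the current coordinate
            # An If-Else condition plus a bit of math, as in point 3:
            out[y, x] = a * b if a > threshold else 0.0
    return out
```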

15
Substance Designer 2017.1 was recently released.  ;D It got an update for the Auto Levels node that now works faster and with HDR data too. Perfect time to update my Auto Levels Plus  :). Download here: https://forum.allegorithmic.com/index.php?action=dlattach;topic=15281.0;attach=25710

I've made some optimizations to the node and now it's really fast. On my hardware, it takes 5 ms to process a 4K map and about 12 ms for an 8K map (vs. ~100 ms with the Auto Levels node that ships with SD). With the speed optimization enabled, the numbers go down even more — just 5 ms for an 8K map (so, 8x-20x faster).

Those are huge speed gains and it may greatly benefit those who like to use Auto Levels functionality in their graphs a lot.

If someone would be curious about how this was done, here's a short explanation.

The greatest performance cost for nodes in SD comes from node resolution. Reducing the resolution of nodes that need to process some data is the most common way to get better performance.

For the Auto Levels Plus node, I figured that it would be more beneficial to perform only the absolute minimum of computations necessary in each Pixel Processor node and gradually lower the resolution of the subsequent nodes in the chain.

In this way, Auto Levels Plus processes the input image at full resolution only once, and each subsequent processing step is done at one resolution step lower. For example, an 8192x8192 input map gets processed at 8192x8192, then 4096x4096, then 2048x2048, etc.

Quite fortunately, the logic of the node allowed that.

The idea behind Auto Levels Plus is quite simple: we have to find and store, in some way, the luminance values of at least one brightest and one darkest pixel in the input map. Then it's just a bit of math to remap the entire input map to a new range of values (that is Auto Levels in a nutshell).
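
The remap itself is just a linear rescale; as a small sketch:

```python
def auto_levels(value, darkest, brightest):
    """Remap a luminance value so that 'darkest' maps to 0 and 'brightest' to 1."""
    return (value - darkest) / max(brightest - darkest, 1e-8)
```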

So, in the end, we need only two values (basically, two pixels) from the input map and can demolish all other pixels that dare to stay in our way.  :P By downsampling the input map one resolution step lower, like from 2048x2048 to 1024x1024, we can vastly reduce the number of pixels we need to process in our search for those luminance values.

However, there's a problem: downsampling can skew the luminance values of pixels in the map, because it takes the luminance values of four pixels and computes from them a value for a single new pixel (repeated across the entire map).

An input map downsampled several times will most likely lose the pixels with the exact luminance values we need and end up with averaged ones instead. In our case, that would make the node unable to correctly perform the remap, because it has to be supplied with exact values to produce an accurate result.

But there's a trick: if we use a Pixel Processor to locally find the darkest (and brightest) pixels in our map before downsampling, we can "spread" them a bit around themselves. I figured that it is enough to spread the luminance value of a pixel just exactly around it to get what I wanted. So, a Pixel Processor function was put in place that sampled neighbouring pixels and painted blocks of 3x3 pixels with the same luminance value (the highest or lowest one, depending on how the function was configured). Check my Randomize Mask node for details on how this "sampling" function works, as I've used it in this node too: https://www.artstation.com/artwork/1alxX
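
Here's a small Python sketch of that trick, assuming a plain 2x2-average downsample (an illustration of the idea, not the actual SD nodes):

```python
import numpy as np

def spread_3x3(values, mode="min"):
    """Paint each pixel with the min (or max) of its 3x3 neighbourhood, so the
    extreme value survives the following 2x2-average downsample."""
    h, w = values.shape
    padded = np.pad(values, 1, mode="edge")
    out = np.empty_like(values)
    for y in range(h):
        for x in range(w):
            block = padded[y:y + 3, x:x + 3]
            out[y, x] = block.min() if mode == "min" else block.max()
    return out

def downsample_2x(values):
    """Average every 2x2 block into one pixel (one resolution step down).
    Assumes even dimensions, which is always the case for SD map sizes."""
    h, w = values.shape
    return values.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

# After spread_3x3, the extreme value fills a full 3x3 block, which always
# contains at least one aligned 2x2 block, so downsample_2x keeps the exact
# value instead of averaging it away.
```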

A block of 3x3 pixels of the same luminance can be safely downsampled to the next lower resolution step, and at least one pixel from that group will retain its original luminance value (it will not be averaged with pixels of other luminance values, but with pixels from the 3x3 group that have the same value).

We can repeat this step before every downsample operation and safely build a chain of Pixel Processor nodes that goes as low as 16x16 in resolution. From there, a couple of extra "spread" operations will fill the entire image with the value we're looking for. Repeat all of that a second time for the other value (we need both the brightest and the darkest pixel), and the output of this succession of Pixel Processor nodes can be used to feed the remap function.

That's why the node is fast — it uses just Pixel Processor nodes and rather simple operations in conjunction with resolution optimization.

I hope this explanation will be useful and informative for someone interested in making their own nodes using Pixel Processor.  :)
