Show Posts


Messages - Sergey Danchenko

Pages: 1 ... 8 9 [10]
Particle brushes can only take into account normals that come from a normal map plugged into Additional maps. So, for now you have to export your Normal channel and re-plug the resulting normal map into the Additional maps slot. This new exported normal map will contain details both from your old baked normal map in the Additional maps slot AND the information that you have painted into the Normal channel.

To me it looks like you got it right  :)

but both images are the same model at different angles!!   :D

Yes, I was talking about the same model, but maybe communicated it poorly. You have two arrows on your picture, both pointing along the same UV split. But near the top arrow you have a seam that comes from both the mask and texture sampling. You can fix the mask seam (with the Clone Tool or a different approach to generating the mask), so there would be no knife-like clipping of your "moss" material layered on top of the rock, and it will look much better. However, the seam from texture sampling will remain, as it can't be fixed via textures - you can't paint it out.

In other words, both types of texture seams are happening along the same UV split area (UV seam), but have a different nature.

On the picture below I marked locations where seams would be much less noticeable. Note that this is just a general idea of the areas where you can place them - as I wrote in my previous post, it's concavities and sharp geometry edges.

You definitely can try to bake in other apps, but I'm pretty confident it wouldn't help much - Substance bakers are really good, and the seams in question don't come from texture images, so this isn't a baker issue.

Oh, and I've checked your lowpoly rock version just this morning and noticed that it has double geometry, i.e. there are two rocks exactly on top of each other. You may want to delete the extra one  ;)

Hope that helps.  :D

I guess I can answer that, as I was researching this just a couple of days back  :) If someone knows better, please correct me, as I'm also interested in understanding it fully.

So, the Normal mode is just... normal. Pixels that are above in the layer stack are displayed on top of the ones that are below. If there are no pixels above (an empty layer or an empty part of it), then whatever is below is displayed, in the same order - from top to bottom.

Passthrough is a tricky one :) I'm not sure that I understood it correctly, but in simple words it looks to me like this blending mode makes a layer kinda "pull" the content of the layers below into itself, while keeping this "passthrough" layer independent of the others in terms of opacity control and painting abilities. For example, normally you can't paint on a Fill layer, and thus can't use the Clone Tool on it. If you add a Paint effect to this Fill layer, or a regular (non-Fill) layer just above it, you can paint over your Fill layer, but you still can't use the Clone Tool, because the layer itself doesn't contain the information you need (no pixels from the Fill layer are there). But if you set the layer above to Passthrough, the content of the Fill layer below is "pulled" up, and now you can use the Clone Tool (on the regular layer above the Fill). You can also notice how the thumbnail of the layer updates when you set it to Passthrough - it starts to represent the computed result of the layer stack below ("pulls" the content inside itself).

Replace is pretty simple - it completely replaces the content of all layers below with the content of the layer itself, including the parts that are empty. In a sense, setting a layer's blend mode to Replace will make it look like there are no other layers below. For example, if you have a layer stack full of data and a layer with a single pixel on the very top of it, setting this top layer's blend mode to Replace will make only this one pixel visible, while the content of the layers below will be discarded completely.
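To make the Normal vs. Replace distinction concrete, here is a minimal sketch in Python. It models each layer as a single optional pixel value and a stack ordered top to bottom; the function name and the `None`-as-empty convention are my own illustration, not Painter's actual API or internals.

```python
def composite(stack, mode="Normal"):
    """Return the visible pixel for a stack ordered top -> bottom.

    Each entry is a pixel value, or None for an empty (transparent) pixel.
    """
    if mode == "Replace":
        # Replace: the top layer wins outright, even where it is empty,
        # so content below is discarded completely.
        return stack[0]
    # Normal: walk down the stack until a layer actually has content.
    for pixel in stack:
        if pixel is not None:
            return pixel
    return None

layers = [None, 0.3, 0.8]  # empty top layer, content below
print(composite(layers, "Normal"))   # 0.3: the first non-empty layer shows
print(composite(layers, "Replace"))  # None: the empty top hides everything
```

This matches the "single pixel on top" example above: with Replace, only that one pixel would ever be visible.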

Hope that helps :) Oh, and I would like to add one more question to this topic. Could anyone kindly explain the practical difference between the AddSub and Linear Light modes in Painter? From what I've learned, I would assume they should produce identical results. Maybe there are some specific cases when one of them should be used and the other shouldn't? Thanks in advance.
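For what it's worth, here is a small sketch of why one might expect the two modes to match. It assumes the commonly cited formulas (Linear Light as `base + 2*blend - 1`, AddSub as "add above 0.5, subtract below", i.e. `base + 2*(blend - 0.5)`); whether Painter implements exactly these is an assumption on my part, not something confirmed here.

```python
def linear_light(base, blend):
    # Commonly cited Linear Light formula, clamped to [0, 1].
    return min(1.0, max(0.0, base + 2.0 * blend - 1.0))

def add_sub(base, blend):
    # One common reading of AddSub: values above 0.5 add, values below
    # 0.5 subtract. Algebraically this is the same expression as above.
    return min(1.0, max(0.0, base + 2.0 * (blend - 0.5)))

# Under these assumed formulas the two modes agree everywhere.
for base in (0.0, 0.25, 0.5, 0.75, 1.0):
    for blend in (0.0, 0.25, 0.5, 0.75, 1.0):
        assert abs(linear_light(base, blend) - add_sub(base, blend)) < 1e-9
```

If a practical difference exists, it presumably lies outside these formulas (e.g. clamping behavior or per-channel handling), which is exactly the question being asked.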

I think that it is impossible to get rid of seams in such places completely via baking only. However, there's still something to think about.

To me it looks like you have two kinds of seams in your images shown just above. The bottom seam is a texture sampling seam that happens in Painter simply because your geometry has a UV seam there. You can't do much about this one, as it has little to do with baking and projecting. You can't paint it out either. The best thing that you probably CAN do is to try and organize your UVs slightly differently, moving such seams to sharp edges or concave areas, where they will naturally be less visible. Try not to leave them in flat areas. Another thing to keep in mind here is that these seams become even less visible when you zoom out a bit, so basically they shouldn't ruin your model in any practical way.

The seam at the top, however, is a different story. This is where your material on top has a sharp cut-like clipping, which is very noticeable. This one most likely comes from a mask generator that can't use triplanar projection due to its mechanics. I'm afraid that to get rid of this one you have to use the Clone Tool, or find some other creative way to generate your masks. If you use the Clone Tool, you can Alt-click on your resulting mask to view it on top of your model, then add a Paint effect just above the mask generator(s) you're using, set its blending mode to Passthrough, and use the Clone Tool to paint over the seams. Given the non-destructive nature of the Clone Tool, this should probably be done only once.

By the way, hiding your UV seams will also help with "mask" seams a lot.  :D

I had this problem once with my mesh. Indeed it turned out to be bad geometry (non-manifold) that caused this. The only solution that I found was to clean the mesh.

I know this forum is for Designer, but still... Could anyone kindly explain the practical difference between the AddSub and Linear Light modes in Painter? From what I see above, I would assume they should produce identical results. Maybe there are some specific cases when one of them should be used and the other shouldn't? Thanks in advance.

Are you sure you're not viewing combined normals from both the height AND normal channels? I guess that could produce the "double intensity" feel you're probably seeing as "strong" relief. Basically, if you paint height details in the Height channel and bake them into a normal map, then import that normal map and plug it into Additional maps, you should disable your layers with the "original" height information, because it will be added on top of your normal map. If you switch between the height and normal channels in turn, you shouldn't see any difference in the amount of relief.

As for your normal map question — I think that generally you shouldn't be worried about seeing seams in your actual normal map, because seams will always be where tangent space is changing, i.e. along a UV shell border or where smoothing groups meet (hard edges in Maya). If your mesh looks good in shaded view, meaning without a visible lighting seam in that particular area, then your normal map is... normal.  ::)

Curvature map is generated from normal map if you're using per-pixel mode, so it is not a surprise that Curvature map also has this seam.

I can suggest that you double-check your UVs and smoothing groups/hard edges. If there is a UV shell border along the seam that causes you trouble, then you have to organize your UVs differently to get rid of it.

Makes sense. Thank you for your response.

Hi everyone, I would like to discuss something in regards to baking done in Substance Painter. It would be really great if some folks from Allegorithmic joined in to speak from the perspective of their technical expertise, because I'm not sure that I understand everything correctly.

So, the thing is that recently I was trying to troubleshoot a problematic Substance Painter normal map bake on Polycount. After the source of the problem was identified, I thought it was a pretty interesting one. The day after, I faced the same issue personally when I was trying to redo the UV mapping and baking of the Sci-Fi Container from Wes McDermott's course "Substance Painter Texturing for Beginners" (by the way, thank you, Wes!). I decided to start this thread after I came to understand that this kind of issue might not be so rare in practice. Here's the link to the Polycount thread in case someone would like to read through it:

In short, some specific lowpoly/highpoly geometry configurations can produce non-obvious shading issues in a baked normal map. Let me illustrate this with an example. See the first image attached below.

So there I have a simplified geometry setup that illustrates a possible scenario that can produce shading issues. In this example I have a lowpoly that consists of two quad faces with a hard edge between them, and a highpoly that has some concavity in its corner. Nothing special about the baking setup - it is Substance Painter 2 baking without a cage, with Average normals = ON.

The problem is that with a configuration like this (and many similar ones), some baking rays projected from the bottom lowpoly face around the corner of the mesh will hit that concave area of the highpoly, due to how baking rays are distributed when using averaged normals projection. To my understanding, because the normal map will be computed relative to the flat lowpoly surface normals (remember, the hard edge is at the corner), it will record a pretty extreme value in the blue channel of the normal map (around RGB 113, actually). On the practical side, this means that when such a normal map is applied to the surface, a rendering engine will treat this part of the polygon face as if it is oriented "backwards" in relation to the actual geometry and its face normal. I have a more detailed explanation of this in the Polycount thread linked above (though it's more like a theory of mine). Short version: if the value in the blue channel is above 128, we "can" see the surface and it is oriented towards us. The value of 128 is the last one that is actually meant to be seen on the object's surface, and it means the surface is parallel to the camera. If the value is 127 or lower, the surface is effectively facing away from us, so we shouldn't really see it.
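The 128 threshold follows from the standard tangent-space encoding, where a normal component in [-1, 1] is remapped to an 8-bit value via (n + 1) / 2 * 255. A quick sketch (function names are mine, for illustration):

```python
def encode_channel(n):
    """Map a normal component in [-1, 1] to an 8-bit channel value."""
    return round((n + 1.0) / 2.0 * 255.0)

def decode_channel(v):
    """Map an 8-bit channel value back to a component in [-1, 1]."""
    return v / 255.0 * 2.0 - 1.0

print(encode_channel(0.0))   # 128: surface parallel to the view direction
print(decode_channel(113))   # ~ -0.114: negative Z, i.e. "backfacing"
print(decode_channel(128))   # slightly positive: the last front-facing value
```

So the RGB 113 recorded in the example decodes to a negative Z component, which is exactly the "looking at the polygon from the inside" situation described above.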

When rendered inside an application, be it Substance Painter or a game engine like UE4, it looks like the picture shown below. The first is the actual geometry from the example above; the second and third are the Sci-Fi crate I talked about, with lighting cast at different angles. Notice how shading works in the problematic area - it is dark when in the light and bright when in the shadow. Looks weird, like an artifact. See the second image attached below.

It is important to note that the shading issues discussed are unlikely to happen when using an all-soft lowpoly mesh, because due to the softer lowpoly surface normals it is less probable that the blue channel of the normal map will have values lower than 128.

So, my question is: should we really allow the baker to record values below 128 in the blue channel of a normal map? As I wrote above, to my understanding such a value means that we're basically looking at the polygon from the inside, i.e. it is backfacing to us. And this normally shouldn't happen, no? Is there any practical scenario where we would want such "backfacing" normals in a normal map applied to the mesh? Maybe some other operations or features use this data in a way I am not aware of?

I mean, if we were to clamp all blue channel values below 128 so they actually record as 128 or 129, this would prevent such shading issues from coming up and would make the baker a bit easier to handle, especially for less experienced users. I totally understand that in the examples shown there are clearly some problems with the shape of the highpoly (no concavity should be allowed in situations like that) and with how closely the highpoly and the lowpoly are aligned, but it's not a question of how to "fix" some particular bake setups. At the same time, I'm pretty sure that such configurations, with lowpolys that use some hard edges to reduce gradients on the normal map, do pop up here and there pretty often, and it can be pretty confusing to troubleshoot them without knowing where to look specifically (the blue channel) if such shading issues come up.
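The proposed fix is trivial to state in code. This is just a sketch of the idea being suggested, not anything the baker actually does:

```python
def clamp_backfacing_blue(blue_values):
    """Clamp 8-bit blue-channel values so no normal points 'backwards'.

    Any value below 128 decodes to a negative Z component (a normal
    facing away from the viewer), so it gets raised to 128.
    """
    return [max(v, 128) for v in blue_values]

# The RGB 113 from the bake above would become a neutral 128; values
# already front-facing are left untouched.
print(clamp_backfacing_blue([113, 127, 128, 200]))  # [128, 128, 128, 200]
```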

Any thoughts on this will be much appreciated. Thanks!

You already don't have to. Setting your normal mixing mode to Replace should do the same thing - that's why we introduced this mode in SP2, so you don't have to copy the normal map into a fill layer.

Now I am confused. When I set the Normal mixing mode to Replace, the baked normals completely disappear from the model, since I don't have anything in my layer stack to replace them with (let's assume it is empty). So, how do I paint out details from the baked normal map when I can't see them inside the viewport? I figured that I can add the baked normals as a fill layer and paint out details with a neutral color in a layer just above it, but I can't figure out a way to do that without using a fill layer with the baked normals.

Am I missing something?
