Show Posts

This section allows you to view all posts made by this member. Note that you can only see posts made in areas you currently have access to.

Messages - Nicolas Wirrmann

Pages: [1] 2 3 ... 104
Use the Unreal tangent space plugin (I don't remember the plugin's exact name; I don't have access to a computer right now).

In Painter, by default, the height map range is remapped to fit within the [-1, 1] range. That's why it looks darker: it now contains negative values.

You can change this behavior either by changing the color space on the Substance output directly in Painter:

or by adding user data on the height output node in SD:

Here is the documentation page about that:
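As a rough sketch (plain Python, not Painter's actual implementation), the default remap simply stretches [0, 1] into [-1, 1], which pushes everything below mid-gray into negative values:

```python
def remap_height(h):
    """Remap a height value from [0, 1] to [-1, 1].

    Mid-gray (0.5) becomes 0, so everything below mid-gray
    turns negative, which is why the map reads darker.
    """
    return h * 2.0 - 1.0

print(remap_height(0.5))   # mid-gray -> 0.0
print(remap_height(0.25))  # below mid-gray -> -0.5
```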

The 3D view has a "quality" setting in the preferences that can dramatically affect the material's response to light. The setting tells how many samples are performed, and the default is actually quite low, resulting in a slightly rougher look in some cases.

Go to Preferences > 3D View and change the sample count to at least 128. I think the result will be closer to Unity.

We have changed the default setting for the next minor version.

Trying to mimic the physical material by altering the specular level below 0.5 for plastic, wouldn't that also count as breaking the PBR fundamentals?

No: the default value is 0.5, which corresponds to an f0 of 4%. It has been defined as the most common f0 for dielectrics, but this value has to be adapted to represent the other dielectrics, especially rocks, cloth, rubber, etc. (I could not find an exhaustive list).

The specularLevel value will be remapped to the [0, 0.08] range as it corresponds to the maximum range that has been observed for dielectrics (by the people who defined the shading model). 0.5 will therefore become 0.04.
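In other words (a sketch of the remap as described, not the shader's actual code):

```python
def specular_level_to_f0(level):
    """Remap a specularLevel in [0, 1] to an f0 in [0, 0.08],
    the maximum range observed for dielectrics."""
    return level * 0.08

print(specular_level_to_f0(0.5))  # the 0.5 default -> 0.04 (4% reflectance)
```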

UE4 also has such a control (with the same range, AFAIR), and it's usually available on other metal/rough-based uber shaders too, though maybe under a different name (and maybe set within a different range).

I guess that for the specular-glossiness workflow, altering the specular map would be preferable as it wouldn’t involve any extra maps.

The specular/gloss workflow we support encodes the base color and specular data entirely in two RGB maps. But it still forces the use of Fresnel (100% reflection at f90), which might not be the case for all uber shaders on the market. So it's safe to say this workflow offers fewer chances to get the same result as in SD.

The bright result probably comes from the f0 driven by the specularLevel.

Our PBR shader follows, for the most part, the Disney model, which has a Fresnel component where the f90 (reflection at grazing angle) is always 1. That's probably what you are observing here.
You can indeed control the f0 using the specularLevel, but there is no control over the f90.
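A minimal sketch of the Schlick Fresnel approximation commonly used in such shading models (an illustration of the idea, not Designer's exact shader code); note that f90 is pinned to 1 here:

```python
def schlick_fresnel(cos_theta, f0, f90=1.0):
    """Schlick approximation: reflectance rises from f0 at normal
    incidence (cos_theta = 1) toward f90 at grazing angles
    (cos_theta = 0). With f90 = 1, every surface becomes a near
    perfect mirror at grazing angles."""
    return f0 + (f90 - f0) * (1.0 - cos_theta) ** 5

print(schlick_fresnel(1.0, 0.04))  # facing the camera -> 0.04
print(schlick_fresnel(0.0, 0.04))  # grazing angle -> ~1.0
```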

So, in regard to the shading model and the lighting conditions, the result you get is probably what you are supposed to get.

Keep in mind the end result highly depends on the HDRI you use, so it's better to test your material under various lighting conditions.

Some of the studio HDRIs contain very large area lights, which produce very soft/uniform lighting, and that definitely affects how the material is perceived.

Edit: here is an example with a 0.1 specularLevel, 0.5 roughness, and the tomocco_studio HDRI.

Could you send me a package where you experience the issue?

Are you working with "Contextual graph editing" enabled? (yellow message in the bottom-left side of the UI)


No, it's not possible to expose the Level or Curve widgets.


The Gradient Dynamic is set to "Horizontal" and the key is to adjust the roughness of the Fractal Sum.
If you want a black&white result, simply replace the Anisotropic Noise by a checker and adjust the tiling value.

Your node chain is 8-bit (that's what the L8 label under the nodes means); this is why you have this quantization issue. You probably have a node where the precision loss happens. Make sure you keep your data in at least 16-bit.
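To illustrate the precision loss (a hypothetical Python sketch, not Designer internals): an 8-bit channel only has 256 levels, so subtle gradients collapse into visible steps:

```python
def quantize(value, bits):
    """Snap a [0, 1] value to the nearest representable level
    at the given bit depth."""
    levels = (1 << bits) - 1  # 255 for 8-bit, 65535 for 16-bit
    return round(value * levels) / levels

# A subtle difference survives at 16-bit but collapses at 8-bit:
a, b = 0.500, 0.501
print(quantize(a, 8) == quantize(b, 8))    # True: both snap to 128/255
print(quantize(a, 16) == quantize(b, 16))  # False: still distinct
```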

It computes the derivatives of the height map (the second input), so basically it generates a normal map internally and uses the resulting vectors to offset the UV coordinates used to sample the first input image. The intensity parameter is a multiplier applied to the offset.
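A rough numpy sketch of that idea (an illustration of the principle, not the node's actual implementation):

```python
import numpy as np

def warp(image, height, intensity):
    """Offset each pixel's sampling position by the height map's
    derivatives (its gradient), scaled by `intensity`."""
    h, w = image.shape
    dy, dx = np.gradient(height)                 # derivatives of the height map
    ys, xs = np.mgrid[0:h, 0:w].astype(float)
    # Offset the sample coordinates by the gradient vectors, clamped to bounds.
    sx = np.clip(xs + dx * intensity, 0, w - 1).astype(int)
    sy = np.clip(ys + dy * intensity, 0, h - 1).astype(int)
    return image[sy, sx]

# A flat height map has zero gradient, so the image passes through unchanged.
img = np.random.rand(8, 8)
flat = np.zeros((8, 8))
print(np.array_equal(warp(img, flat, 10.0), img))  # True
```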

A high memory usage does not sound abnormal when working at 4k. I can't tell for Alchemist, but in SD each node's result is kept in memory as a cache; since a single 4k RGBA 8-bit image is 64 MB (128 MB for 16-bit), a graph with many nodes can fill your RAM pretty quickly.
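The arithmetic behind those figures (plain math, nothing Designer-specific):

```python
# A 4k RGBA image: 4096 x 4096 pixels, 4 channels.
width = height = 4096
channels = 4

bytes_8bit = width * height * channels  # 1 byte per channel
bytes_16bit = bytes_8bit * 2            # 2 bytes per channel

print(bytes_8bit // (1024 * 1024))   # 64  (MB per cached 8-bit node)
print(bytes_16bit // (1024 * 1024))  # 128 (MB per cached 16-bit node)
```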

Using the RAM for the cache prevents swapping to the system disk, which would kill performance. RAM is meant to be used :)

It's not clear if your problem is in Designer or in Alchemist... ?

Also: when using Iray, the surface normal is recomputed based on the displacement intensity; this is not the case in OpenGL.
So when using Iray the mesh has twice the normal information and the shading will look "wrong" / more contrasted.

There is no way to disable this behavior in Iray (nor in most raytraced/pathtraced renderers), so you should probably set the normal intensity to 0, or at least lower it.

It looks like the lighting is not correctly computed in SD, maybe because the shader does not match the node outputs; it's not clear. What outputs do you have in your graph? And what shader is selected in the OpenGL renderer?
