Show Posts

This section allows you to view all posts made by this member. Note that you can only see posts made in areas you currently have access to.

Topics - Sergey Danchenko

Pages: [1]

I wonder if there is a way to enable/implement DirectX raytracing in bakers for Pascal GPUs? When Microsoft announced DXR, they mentioned some kind of fallback layer that would let GPUs without dedicated raytracing hardware use the raytracing features. I guess this is the one:

Considering that RTX graphics cards are too expensive, I believe that Pascal GPUs like the 1080 Ti, 1080 and 1070 will be around and relevant for at least a couple of years. Yes, raytracing on those would be much slower than on RTX, but Pascal is still a quite capable GPU and would probably accelerate things substantially (even baking AO from Mesh 2x faster would be quite nice  ;)).

Any chance to explore that possibility? Thanks!


This is my entry for the contest :).

I was looking for something special, something that would harness the power of Iray and MDL. I decided to go with an optical fiber self-illuminated cloth material — a material woven from specially fabricated optical fiber wires. Normally, light shouldn't leak from an optical fiber, but for this material a large number of laser cuts are made along the fiber, so the light leaks out along the length of each wire, producing beautiful light sparkles and streaks.

The material is made as an MDL driven by a Substance graph. The images were rendered with Iray using displacement mapping. As it's basically all glass, there's A LOT of refraction in this material, so it was a challenge in itself to render it out.  ;D

On Artstation:


Meet MAT the Colossus – my entry for Allegorithmic's contest.  :D MAT is an ancient stone colossus covered with moss and lichen that stands on swampy ground. Legends say the antique Path of a Painter goes right through the colossus. It's a path full of trials, but the one that should be taken to reach the Heart of a Painter.

For this work, I was looking for a concept that would communicate some idea. Because of that, I decided that MAT, in my vision, shouldn't be a character, but rather a canvas for something more than that. "Everything is a canvas for a Painter" — this line came pretty quickly (pun intended ::)), and it fit this concept perfectly well.

Thinking of a canvas, I remembered the primeval cave drawings I had seen and how fascinated I was by the idea that it is Art and Culture that connect the past and the future. Primeval artists were creating their art just as we do in our time. Thousands and tens of thousands of years have passed, and yet their art still exists for us, while these people long ago turned to dust.

For me, it was a really thrilling idea that for my entry I could use the very first forms of art in conjunction with Substance Painter, whose tech stands at the cutting edge of our progress. It's like bringing together things that at first thought couldn't stand further apart — cave drawings from prehistoric times and Substance technology.

Just think of it: artists of old actually created every drawing you can see on MAT, and there are four of them. They were people of flesh and bone who were born, lived and perished, but in the end managed to leave a mark of their existence that has reached us through the ages.

In one part, "Path of a Painter" is my humble tribute to these unknown artists of old. I'm putting their drawings on display one more time, in the year 2017, with the idea that it is Art that can long outlive us.

In another part, this work is a tribute to all artists who lived, live now and will live in the future. I believe there is a link between all the artists who climb the path of creativity through various obstacles, pitfalls and disappointments, but still strive to express themselves through their art. May the art created with an open heart be the artist's heritage.

I hope you will enjoy these ideas as much as I do. :)

Check out the Artstation page:

P. S. By the way, reality is cruel.  A particular artist of our time misspelled the word "heart", rendered the images and submitted them to the contest. Oh, dear...  :P

Hello everyone,

As Substance Designer 6 introduced new bit depths for nodes (L16F and L32F), some new opportunities and possible workflows have come to life. To facilitate them, I've accidentally created a new node that I thought would be great to share.  ::)

The Auto Levels Plus node allows automatic remapping of an input image to a custom range specified by input parameters. Basically, it works just like you would expect from an Auto Levels node, but with two added benefits:

1) You can remap to a custom range, like to 0.5 - 1 or any other.

2) The node supports HDR data, so an input image can be of any available bit depth (L16F or L32F too).

I have tested the node for some time, but there could be bugs or corner cases where it doesn't work as expected. Please report such occurrences in this thread so I can take a look at them and possibly come up with a fix.

Additional considerations before using the Auto Levels Plus node:

The node uses this formula to remap values: NewValue = (OldValue - OldMin) * (NewMax - NewMin) / (OldMax - OldMin) + NewMin. So, at its core, it should be mathematically accurate. However, finding OldMin and OldMax is quite an expensive operation, so at higher resolutions (2K+) performance starts to drop quite a bit.
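In code terms, the remap boils down to the following (a minimal NumPy sketch of the math only, not the node's actual function graph — `auto_levels_plus` is a name I made up for illustration):

```python
import numpy as np

def auto_levels_plus(img, new_min=0.0, new_max=1.0):
    # Find the input range -- this full scan is the expensive part at high resolutions
    old_min, old_max = float(img.min()), float(img.max())
    if old_max == old_min:
        # Flat image: nothing to remap, return the lower bound everywhere
        return np.full_like(img, new_min)
    # NewValue = (OldValue - OldMin) * (NewMax - NewMin) / (OldMax - OldMin) + NewMin
    return (img - old_min) * (new_max - new_min) / (old_max - old_min) + new_min

img = np.array([0.2, 0.5, 0.8], dtype=np.float32)
print(auto_levels_plus(img, new_min=0.5, new_max=1.0))  # remaps 0.2..0.8 onto 0.5..1.0
```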

To alleviate this, the node samples the OldMin and OldMax values at a lower resolution. There are some countermeasures, but in cases where this resolution optimization is too extreme for a given image, downsampling can cause OldMin and OldMax to be sampled inaccurately (slightly higher or lower than they actually are). Most of the time, however, the countermeasures mentioned above work well, so the node produces a mathematically accurate result at good speed in most cases.
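To picture why lower-resolution sampling can be slightly off, here's a toy NumPy sketch of the general idea (my own illustration, not the node's actual countermeasures): scanning only every Nth pixel is much cheaper, but isolated extreme pixels can fall between samples, so the estimated range can never be wider than the true one.

```python
import numpy as np

def coarse_min_max(img, step=8):
    # Sample every `step`-th pixel instead of scanning the full image
    sub = img[::step, ::step]
    return float(sub.min()), float(sub.max())

rng = np.random.default_rng(0)
img = rng.random((512, 512)).astype(np.float32)
lo, hi = coarse_min_max(img)
# The estimate always sits inside the true range: lo >= true min, hi <= true max
print(img.min() <= lo, hi <= img.max())  # True True
```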

One more note: in preliminary tests I've spotted some strange occurrences when the image is remapped to a range other than 0...1. Some pixels could end up with a value slightly outside the new range — for example, when remapping to 0.41 - 1, a pixel can get a value like 0.409998. I believe this is a precision issue from operating on float values, but I'm not sure. As a workaround, I've decided to clamp such "stray" values — in practice this shouldn't cause any problems, as the margin of error is minuscule.
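The workaround can be pictured as a final clamp after the remap (again a NumPy sketch under my own assumptions, not the node's internals):

```python
import numpy as np

rng = np.random.default_rng(1)
img = rng.random(1024).astype(np.float32)
old_min, old_max = img.min(), img.max()
new_min, new_max = np.float32(0.41), np.float32(1.0)

remapped = (img - old_min) * (new_max - new_min) / (old_max - old_min) + new_min
# float32 rounding may leave a few pixels a hair outside [0.41, 1.0]
# (e.g. 0.409998), so clamp them back as a safety net:
clamped = np.clip(remapped, new_min, new_max)
print(bool(clamped.min() >= new_min and clamped.max() <= new_max))  # True
```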

Some images to illustrate the node are below, and here is a download link:



I have some questions about saving rendered images from SD (via Save Render or the Alt+R hotkey) to files with alpha channel/transparency information. I'm using the latest SD 5.6.0 to test this.

A. Does the OpenGL viewport renderer support saving alpha channel data into image file formats that support it? I've tried saving renders to PNG, TIFF, TGA and PSD, with and without the environment visible — no alpha channel or transparency was preserved.

B. It seems that Iray does support saving renders with alpha, but I found something that looks like a bug to me. Could anyone kindly confirm this? I'll describe my steps so they are reproducible.

1) Create a new graph, put any base material into it, and attach outputs to be viewed in the viewport (not strictly necessary — this can be done with an empty scene too).
2) Make sure that Postprocessing in the Camera settings is enabled (important, I'll explain why later), and make the environment map invisible.
3) Render the scene with Iray (to make it quick, 50 samples are fine).
4) Use the Save Render menu (Alt+R) to save the rendered image to a file format that supports alpha/transparency — for example, PNG (the same happens with TIFF and TGA).
5) Check that the image was saved with transparency (or an alpha channel).
6) Immediately (without changing the camera position or any other settings) do another Save Render to another file (or overwrite the first one) — the transparency information will be lost, and no alpha channel will be saved. In short, transparency is preserved only for the first Save Render operation and is lost in any further attempts to save the file.
7) Rotate the camera and make Iray re-render the scene. Try Save Render again — it should repeat the pattern described above: the first operation succeeds, the second and later ones fail.

My question is — this isn't normal, right? It looks like a bug. I often save renders in a few file formats, and this is very inconvenient.

C. Another question: if Postprocessing is disabled, I can't get Iray to save images with transparency/alpha no matter what I do. Is this intended, or is it also a bug?



I'm facing some issues in SD 5.6.0 that I believe are caused by a bug related to primitive shape scale. I had no issues of this kind in SD 5.5.3.

The symptoms are: all geometry shapes except the Rounded Cube appear to be sized incorrectly, as if they were scaled down 100x. As a result:

1) Custom Point Light 1, if visualized inside the viewport, appears disproportionally large (see the picture attached).
2) The Height Scale in the Physically Based Metallic/Roughness Tessellation shader has an extreme effect on the displacement amount. For example, a low value of 0.05 results in heavily displaced geometry, as shown in the second picture attached.
3) Switching between the Rounded Cube and other shapes clearly shows signs of camera movement that suggest a heavy scale mismatch between these objects.

Hope that helps to identify the issue  :D


Here's one of my recent works — a knife machete, "The Regale".

My goal was to make a lowpoly game-ready model with a mix of wood, paint and metal materials. Another goal was to prepare decent model presentation shots and make them fun.

I chose a knife machete for this project. Though it's a pretty simple object, I found it interesting to model. A good amount of attention went into ensuring that the blade captures the feel of a roughly sharpened edge, like I saw in some reference images.

During texturing I had a thought about how to play with those three horizontal flexibility grooves on the blade — the result is in the images. The imprint was made by me, though the concept isn't mine: I saw it on a funny t-shirt one day.

The model has 1530 triangles, with 4096x4096 textures. I'm including some images of rendered materials that were made in Substance Painter specifically for this project.

Artstation link:

Thank you for checking this out,



This fire extinguisher is a project I've done to practice PBR texturing in Substance Painter. I've named it "The Elephant"  ;)

The model is fairly high-poly with ~91k triangles, so it's not really suited for real-time usage.

The final shots were rendered with Iray inside SP. I tried to make them in a way that would let the extinguisher "pop" a bit, so I tuned the scene to make the asset bright and vibrant enough to achieve that goal.

I'm including some rendered images of materials created for use in this project.

Artstation link:

Thank you for checking this out!


Substance Painter - Showcase - Oil Drum / Barrel Prop
 on: September 11, 2016, 02:07:39 pm 

I was curious to see if one can take a really ordinary object and effectively present it as a hero asset. Is it possible to create a showy shot with something plain, or is it only complex models that can pull that off?

So, I wanted to make something really simple and play a bit with textures and presentation. I chose an Oil Drum — something we've all seen countless times basically everywhere, especially in games.

Most of the time these red barrels tend to explode all of a sudden without a reason, but these ones are just flammable… though they're empty inside. :D

A bit of tech info: modeled in Maya, textured in Substance Painter, rendered with Iray. A single barrel is ~7300 triangles. The textures used for rendering are 4096x4096.

Artstation link:

Thank you for checking this out!



I'm having a hard time understanding why my model looks different in Substance Designer compared to Painter. Maybe I'm missing something, so I'm looking for a tip on this.

So, I have a model I textured in Painter. I exported the textures as bitmaps, imported them into Designer, then simply attached them to the corresponding outputs.

The first thing that confuses me is that the same environment with the same settings (exposure, texture scale, etc.) produces a much different shading result in Designer compared to Painter. For example, the "tomoco_studio" environment with exposure set to 0 (default) gives a decent amount of light for a model in Painter, but in Designer the same environment with default exposure (0) makes my model dull and dark. To make it look close to what I get in Painter, I have to set the Exposure to 3.

Now the fun part begins. If I increase the exposure in Designer as I wrote above, I start to get different lighting results compared to Painter, specifically in the amount of specular highlights. This is best illustrated by the pictures I'm attaching to this post below — they're rendered with Iray, though with the regular viewport renderer the results are comparable (in the sense that they're still different between Designer and Painter). They are animated GIFs to allow better comparability — one with the environment shown and one with a black background. I'm also providing a shot from Designer with the same environment, but with the exposure set to zero (default).

As can be seen, the top of the barrel is considerably brighter in Designer than in Painter, and the lighting itself looks noticeably different, though the environment settings (except the exposure, as stated above) are the same. My best guess is that the different exposure settings lead to this result, but that would mean it is impossible to get identical renders in Designer and Painter, because the environments can't be set to have the same effect on lighting.

Any thoughts on what I'm doing wrong (if anything)? I'm a bit surprised that I can't get Designer and Painter to produce identical shading and rendering results.

Thanks in advance for any help!


I'm facing a strange issue with the camera in Substance Designer 5.5. In short, when in Iray rendering mode with the camera set to Orthographic projection, the view jumps close to the scene object and the camera can't be zoomed out or in. I've filed a bug report through the support website, but decided to post here as well.

I suppose this isn't normal, right?..


I've been digging through Share and noticed that the Brushes -> Airbrush category is swamped with submissions that belong in other categories. At first I thought it would be nice to gather the URLs of such submissions and post them here, to help someone from Allegorithmic's staff go through them and assign the correct categories when the right time comes, but then I realized I would have to list roughly half of the Airbrush category like that. There are materials, some shaders, tutorials, alphas and even meshes. To be blunt, it's a total mess and it desperately needs some love.

I understand that resources are limited, but I hope that sometime in the future this will get a bit of attention.  :)

Hi everyone, I would like to discuss something regarding baking in Substance Painter. If some folks from Allegorithmic would join us to speak from the perspective of their technical expertise, it would be really great, because I'm not sure I understand everything correctly.

So, the thing is that recently I was trying to troubleshoot a problematic Substance Painter normal map bake on Polycount. After the source of the problem was identified, I thought it was a pretty interesting one. The day after, I faced the same issue personally when trying to redo the UV mapping and baking on the Sci-Fi Container from Wes McDermott's course "Substance Painter Texturing for Beginners" (by the way, thank you, Wes!). I decided to start this thread after it came to my understanding that this kind of issue may not be so rare in practice. Here's the link to the Polycount thread in case someone would like to read through it:

In short, some specific lowpoly/highpoly geometry configurations can produce non-obvious shading issues in a baked normal map. Let me illustrate this with an example — see the first image attached below.

So there I have a simplified geometry setup that illustrates a possible scenario that can produce shading issues. In this example I have a lowpoly that consists of two quad faces with a hard edge between them, and a highpoly that has some concavity in its corner. Nothing special about the baking setup — it is Substance Painter 2 baking without a cage, with Average normals = ON.

The problem is that with a configuration like this (and many similar ones), some baking rays projected from the bottom lowpoly face around the corner of the mesh will hit the concave area of the highpoly, due to how baking rays are distributed when using averaged normals projection. To my understanding, because the normal map is computed relative to the flat lowpoly surface normals (remember, the hard edge is at the corner), it will record a pretty extreme value in the blue channel of the normal map (around RGB 113, actually). On the practical side, this means that when such a normal map is applied to the surface, a rendering engine will treat this part of the polygon face as if it were oriented "backwards" relative to the actual geometry and its face normal. I have a more detailed explanation in the Polycount thread linked above (though it's more like a theory of mine). Short version: if the value in the blue channel is above 128, we "can" see the surface and it is oriented towards us. The value of 128 is the last one actually meant to be seen on the object's surface, and it means the surface is parallel to the view direction. If the value is 127 or lower, the surface is effectively facing away from us, so we shouldn't really see it.
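For reference, here is how an 8-bit blue channel value translates to the tangent-space Z component under the usual Z = B/255 * 2 - 1 decoding (a small sketch of my own to illustrate the numbers above):

```python
def blue_to_z(blue):
    # Standard 8-bit tangent-space normal map decoding: 0..255 maps to -1..1
    return blue / 255.0 * 2.0 - 1.0

print(blue_to_z(255))            # 1.0    -- pointing straight along the face normal
print(round(blue_to_z(128), 3))  # 0.004  -- roughly parallel to the surface
print(round(blue_to_z(113), 3))  # -0.114 -- Z is negative: a "backfacing" texel
```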

When rendered inside an application, be it Substance Painter or a game engine like UE4, it looks like the pictures shown below. The first is the actual geometry from the example above; the second and third are the Sci-Fi crate I talked about, with lighting cast at different angles. Notice how shading works in the problematic area — it is dark when in the light and bright when in the shadow. It looks weird, like an artifact. See the second image attached below.

It is important to note that the shading issues discussed are unlikely to happen when using an all-soft lowpoly mesh, because with softer lowpoly surface normals it is less probable that the blue channel of the normal map will get values lower than 128.

So, my question is — should we really allow the baker to record values below 128 in the blue channel of a normal map? As I wrote above, to my understanding such a value means we're basically looking at the polygon from the inside, i.e. it is a backface to us. And that normally shouldn't happen, no? Is there any practical scenario where we would want such "backfacing" normals on a normal map applied to a mesh? Maybe some other operations or features use this data in a way I'm not aware of?

I mean, if we were to clamp all blue channel values below 128 so they actually record as 128 or 129, it would prevent such shading issues from coming up and would make the baker a bit easier to handle, especially for less experienced users. I totally understand that in the examples shown there are clearly some problems with the shape of the highpoly (no concavity should be allowed in situations like that) and with how closely the highpoly and lowpoly are aligned, but this isn't a question of how to "fix" particular bake setups. At the same time, I'm pretty sure that such configurations — lowpolys that use hard edges to reduce gradients on the normal map — pop up here and there pretty often, and they can be pretty confusing to troubleshoot without knowing where to look specifically (the blue channel) when such shading issues come up.
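The clamp I'm suggesting would be a trivial post-bake step. A hypothetical NumPy sketch of the idea (my own illustration, not an existing baker option):

```python
import numpy as np

def clamp_backfacing_blue(normal_map, floor=128):
    # For an RGB8 tangent-space normal map, blue values below 128 decode to
    # a negative Z (a "backfacing" texel). Raise them to `floor` so no texel
    # reads as facing away from the viewer.
    out = normal_map.copy()
    out[..., 2] = np.maximum(out[..., 2], floor)
    return out

# One row of two texels: the first has the problematic blue value of 113
nm = np.array([[[128, 128, 113], [128, 128, 200]]], dtype=np.uint8)
print(clamp_backfacing_blue(nm)[..., 2])  # [[128 200]]
```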

Any thoughts on this will be much appreciated. Thanks!
