Author Topic: UE4 / Designer Compatibility and Optimization

Hey there. I started learning Designer last week, and I LOVE it so far. Today I tried getting some of my Substances into UE4, and noticed a few problems that I'd like some help understanding.

Firstly, there is a major difference between the CPU and GPU engines. I understand from reading some other posts that the CPU engine may not be able to accurately represent some nodes in the graph, which is understandable, but is there a guide for which nodes to avoid if you want CPU compatibility? The difference can be quite dramatic; in this case the CPU result is on the left (incorrect) and the GPU result is on the right (exactly how it looks in Designer <3)

[image: CPU result on the left, GPU result on the right]

Secondly, I parameterized the Distance, Softness and Angle inputs of the Shadow node in Designer, not really expecting it to work in UE4 but mainly out of curiosity. Well, it works SERIOUSLY WELL and looks incredible. I plan to have a global shadow parameter in UE4 driven by realtime sun position changes, so the sun will not only affect the normals on the texture but also cast a fake shadow. This is the result:

[image: fake shadow in UE4 driven by the exposed Shadow parameters]

Pretty sweet, BUT the Async Rendering node causes a major, multi-frame hitch in GPU mode. That means not only can I not update it every minute or 30 seconds, which was the plan, but I really couldn't update it at all during runtime, since even a single hitch like that isn't acceptable. I followed this thread:

https://forum.allegorithmic.com/index.php?topic=19320.0

And followed Dan Stover's advice, but it didn't help in GPU mode. In CPU mode there is sometimes a small, single-frame drop as the texture updates, but it's barely noticeable, and acceptable for updating the texture shadows every minute or so to match the sun position.
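
For reference, here's a minimal sketch of how I'd throttle the updates from C++. I'm assuming the Substance plugin's C++ names (USubstanceGraphInstance, SetInputFloat, USubstanceUtility::AsyncRenderSubstances) mirror its Blueprint nodes of the same names; they may differ between plugin versions, and GetCurrentSunAngleDegrees is a hypothetical helper.

Code:
// Assumed plugin headers; the actor declares these members:
//   FTimerHandle ShadowUpdateTimer;
//   UPROPERTY(EditAnywhere) USubstanceGraphInstance* GroundGraphInstance;
#include "Engine/World.h"
#include "TimerManager.h"
#include "SubstanceGraphInstance.h" // assumption: header name per plugin version
#include "SubstanceUtility.h"       // assumption: header name per plugin version

void ADustyGround::BeginPlay()
{
    Super::BeginPlay();

    // Re-render at most once per minute to follow the sun, never per frame.
    GetWorldTimerManager().SetTimer(
        ShadowUpdateTimer, this, &ADustyGround::UpdateFakeShadow,
        /*InRate=*/60.0f, /*bLoop=*/true);
}

void ADustyGround::UpdateFakeShadow()
{
    if (!GroundGraphInstance)
    {
        return;
    }

    // "ShadowAngle" stands in for whatever identifier the exposed parameter
    // was given in Designer; the one-element array matches a float1 input.
    const float SunAngle = GetCurrentSunAngleDegrees(); // hypothetical helper
    GroundGraphInstance->SetInputFloat(TEXT("ShadowAngle"), { SunAngle });

    // Queue the re-render instead of blocking the game thread (some plugin
    // versions take a WorldContextObject as the first argument).
    USubstanceUtility::AsyncRenderSubstances({ GroundGraphInstance });
}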

I would really like to use the full power of Designer, even going so far as having ground that becomes dynamically more or less dusty depending on the weather, so I'd love to sort this issue out. GPU mode certainly looks nicer and more accurate than CPU mode, but CPU mode's performance seems better (in this case at least; I'm not sure about overall), so I'm a little stuck on how to proceed.

To sum it up:

Is there a guide or advice on which nodes to avoid when making graphs for UE4 that need to be both CPU and GPU compatible, and,

In terms of programming, how can real-time updates to materials be made as performance efficient as possible?

Thanks a ton, really looking forward to getting to use Designer more!

Just going to squeeze in a more specific follow-up question about re-rendering a Substance in UE4 with the Sync / Async Rendering nodes.

In this scenario: my entire graph is made up of non-parameterized nodes, but there is a node right at the end which has an exposed parameter (such as a node just before the Ambient Occlusion output to adjust its intensity or power).

In UE4, when this exposed parameter is changed and the Substance is updated at runtime, does it update the entire graph? If so, is there a way to bake out the more "static" sections of the graph, so that all the math isn't recalculated and textures aren't rebaked, and only the simpler operations at the end are recomputed?

I would really love some in-depth technical info about how UE4 handles Substance materials, and what the performance cost is in an actual gameplay scenario.

I've managed to answer a few of my own questions  :)

Quote
Generally speaking, nodes exposing tweaks (custom parameters) that will be modified at run-time should be placed as close to the end of the graph as possible. This is because the output of each node is cached wherever possible. The further up the graph your tweakable node is, the more outputs will need to be regenerated whenever one of those tweaks is modified. If your tweaked node is close to the end of the graph, only the few nodes between it and the output nodes will need to be recomputed.
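
To convince myself of how that works, here's a toy sketch (my own illustration, not actual Substance engine code) of per-node output caching: each node memoizes its result, and changing a tweak only invalidates the nodes downstream of it.

Code:
#include <functional>
#include <optional>
#include <vector>

struct Node
{
    std::vector<Node*> Inputs;                      // upstream nodes
    std::vector<Node*> Downstream;                  // consumers of this node
    std::function<int(const std::vector<int>&)> Op; // the node's work
    std::optional<int> Cached;                      // memoized output

    int Evaluate()
    {
        if (Cached)
            return *Cached;                         // untouched node: free
        std::vector<int> Vals;
        for (Node* In : Inputs)
            Vals.push_back(In->Evaluate());
        Cached = Op(Vals);                          // recompute only if dirty
        return *Cached;
    }

    // Editing a tweak dirties its node and everything after it; all nodes
    // upstream keep their cached outputs.
    void Invalidate()
    {
        if (!Cached)
            return;
        Cached.reset();
        for (Node* Out : Downstream)
            Out->Invalidate();
    }
};

So if the tweaked node sits just before an output, Invalidate() clears only a cache or two, which is exactly why the quoted advice says to push tweakable nodes toward the end of the graph.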

And I learned that Designer itself can also run on the CPU engine, so incompatibilities with game engines can be spotted (and avoided) before export.

Quote
Use 8 bit when 16-bit is not needed. (Note that the Substance CPU engine does not actually support 16-bit color or 8-bit greyscale. The GPU engine supports all 4 combinations of 8/16 bits and greyscale/color. Currently, only the CPU engine is used in Unity and UE4)

https://docs.allegorithmic.com/documentation/display/SDDOC/Performance+Optimization+Guidelines

One thing I'm still concerned about is the large frame hitch with my shadow parameters in GPU mode. My Shadow node in Substance was right at the end of the chain, just before plugging into the AO output, but I'll try making some new Substances following the practices in the performance docs to see if that helps.

One more question: what exactly is the point of CPU mode? I'm assuming it was made specifically for UE4 and Unity; could anyone tell me in what scenario Allegorithmic thinks we should use the CPU engine? All I can find is this:

Quote
The Substance Engine can be CPU or GPU. The GPU engine will allow you to create 4K textures. The CPU engine is capped at 2K.

https://support.allegorithmic.com/documentation/display/integrations/Plugin+Settings

But I'm after an explanation of why there is even an option to use CPU mode.
Edit: Doh, is it because Substance then becomes compatible with mobile devices and integrated graphics? Should have thought that one through.

Thanks! Sorry for the triple post  :-*
Last Edit: February 18, 2018, 06:57:37 am

You can tell UE4 to use the GPU engine. Also, as far as I know, it's the Flood Fill nodes that don't work with the CPU engine.
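
If I remember correctly, the engine choice lives in the Substance plugin settings (Project Settings > Plugins > Substance) and can also be set in DefaultEngine.ini. The section and value names below are from memory, so verify them against your plugin version:

Code:
; Config/DefaultEngine.ini (names assumed; check your plugin's settings class)
[/Script/SubstanceCore.SubstanceSettings]
SubstanceEngine=SET_GPU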

Quote
But I'm after an explanation to why there even is an option to use CPU mode.
Some computers don't have a GPU (e.g. render farm nodes), or have a very weak one with little VRAM. Also, some applications prefer to reserve the GPU entirely for realtime 3D rendering and would rather use the CPU for everything else. Finally, some developers worry about potential driver compatibility issues and find the CPU engine much simpler and safer in that regard.

On the subject of CPU/GPU compatibility, please note that things have changed a bit since the posts you got that information from were made:
  • The CPU engine still only supports 16-bit greyscale and 8 bpc color at this point, but the GPU engine can now also process 32 bpc floating-point greyscale and color textures, which the CPU engine does not support either. We are currently working on floating-point texture support in the CPU engine, which should also provide a slow but functional way to support 16 bpc color textures on the CPU.
  • Support for resolutions above 2K was added to the CPU engine in Designer earlier this year. I don't think this has made it into many third-party integrations yet, but it should be on its way.
 
Last Edit: February 20, 2018, 11:26:14 am