Author Topic: Different result with CPU vs GPU engines

I think this is related to the Pixel Processor; it's as though the sampler nodes are forced to use Nearest filtering instead of Bilinear when in CPU mode. Note the stepping in the detail in the SSE2 version.
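
For illustration, here is a minimal NumPy sketch (not Substance code, just an assumed model of the two sampling modes) of why nearest-neighbour sampling of a ramp produces visible stepping while bilinear sampling stays smooth:

    import numpy as np

    texture = np.linspace(0.0, 1.0, 8)   # small 8-texel ramp
    coords = np.linspace(0.0, 7.0, 32)   # 32 fractional sample positions across it

    def sample_nearest(tex, x):
        # Snap to the closest texel: fractional detail collapses into flat steps.
        return tex[np.clip(np.round(x).astype(int), 0, len(tex) - 1)]

    def sample_bilinear(tex, x):
        # Blend the two neighbouring texels by the fractional part: smooth ramp.
        x0 = np.clip(np.floor(x).astype(int), 0, len(tex) - 2)
        t = x - x0
        return tex[x0] * (1.0 - t) + tex[x0 + 1] * t

    print(np.round(sample_nearest(texture, coords), 3))   # repeated values -> stepping
    print(np.round(sample_bilinear(texture, coords), 3))  # strictly increasing -> smooth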


Here are another few, showing just how horrendous it can get.


Hi,

Are the pixel processor inputs/filter set to 16-bit RGBA?
If you can send us the sbs file, that would be great (my email is in my signature).

Thanks
Lead technical artist
gaetan.lassagne@allegorithmic.com

They are 16-bit, but I didn't think the CPU engine did 16-bit, so I think I know what the issue is: I'm using gradients to remap the X/Y coordinates in a pixel processor, and any gradient wider than 256 pixels will start to double up, since 8-bit only gives 256 distinct values (0-255).
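
For illustration, a minimal NumPy sketch (an assumed model, not the actual .sbs graph) of why an 8-bit gradient used as a remap coordinate "doubles up" once the image is wider than 256 pixels:

    import numpy as np

    width = 512  # any width > 256 exposes the problem

    # Ideal gradient in 16-bit/float precision: one unique value per pixel.
    grad_16 = np.linspace(0.0, 1.0, width)

    # The same gradient stored in 8-bit: values collapse to 256 levels.
    grad_8 = np.round(grad_16 * 255.0) / 255.0

    # Use each gradient as a lookup coordinate back into pixel space.
    coords_16 = np.round(grad_16 * (width - 1)).astype(int)
    coords_8 = np.round(grad_8 * (width - 1)).astype(int)

    print(len(np.unique(coords_16)))  # 512 -> every pixel maps to a distinct source pixel
    print(len(np.unique(coords_8)))   # 256 -> neighbouring pixels double up on the same source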

How come the GPU engine doesn't work in UE? Is it a limitation in the Substance API? I would have thought it was just a matter of passing Substance a handle to the DirectX driver or something.

If you can send us the sbs file, it would help us see whether this comes from the way your graph is set up or from an inconsistency in our engine computation :)

The GPU engine is not delivered with UE4; the integration only uses the CPU engine to avoid performance issues from running both engines (UE4 and Substance) on the GPU. The Substance API does provide a GPU engine, which you can use in applications such as 3ds Max, Maya, Flame, Modo, etc., where there is no constant demand on the GPU.
Lead technical artist
gaetan.lassagne@allegorithmic.com

I'll get that file to you shortly. In our case we have a separate GPU for compute tasks, so resource sharing wouldn't be an issue, and in any case I'm sure people would happily precompute the textures that need 16-bit computation. The small distribution size Substance gives you is great, but the generation abilities are a great capability on their own.