Topic: Why the need for position maps?

Hey,

Having lots of fun revisiting Substance Designer - I was just wondering why baking position maps is necessary for some nodes like Noises? I was hoping you could just plug in a "world position" type of node, and perhaps have nodes to normalise it where necessary?

This might also knock on to more general geometry > shader workflows, as I do love to just load my geo with data and pass it through the same way I would in Mantra / Arnold / Redshift etc. :)

~Craig

Unlike a shader, the graph does not have access to the geometry data; it can only process 2D textures. That's why you have to bake the mesh data first in order to feed the nodes that rely on mesh info.
Product Manager - Allegorithmic
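
For concreteness, here is a minimal sketch (in Python with NumPy, not Substance's actual engine) of what a baked position map buys you: once each texel's world position is stored in a 2D texture, a 3D procedural can be evaluated per texel against it, entirely in image space. The value_noise_3d helper is an illustrative stand-in, not any Substance node.

[code]
import numpy as np

def value_noise_3d(p, seed=0):
    """Cheap hash-based 3D value noise. A stand-in for any 3D
    procedural; illustrative only, not Substance's noise."""
    i = np.floor(p).astype(np.int64)
    f = p - i
    f = f * f * (3.0 - 2.0 * f)  # smoothstep interpolation weights

    def hash3(ix, iy, iz):
        # Integer hash of the lattice corner -> pseudo-random [0, 1)
        h = (ix * 374761393 + iy * 668265263 + iz * 2147483647 + seed) & 0x7FFFFFFF
        h = ((h ^ (h >> 13)) * 1274126177) & 0x7FFFFFFF
        return (h % 1024) / 1024.0

    # Trilinear interpolation of the eight hashed corner values
    c = np.zeros(p.shape[:-1])
    for dx in (0, 1):
        for dy in (0, 1):
            for dz in (0, 1):
                w = (f[..., 0] if dx else 1 - f[..., 0]) * \
                    (f[..., 1] if dy else 1 - f[..., 1]) * \
                    (f[..., 2] if dz else 1 - f[..., 2])
                c += w * hash3(i[..., 0] + dx, i[..., 1] + dy, i[..., 2] + dz)
    return c

def noise_from_position_map(position_map, scale=4.0):
    """position_map: (H, W, 3) float array of baked world positions.
    Returns an (H, W) grayscale texture where the noise is evaluated
    at each texel's world position, so the pattern sticks to the
    surface regardless of the UV layout."""
    return value_noise_3d(position_map * scale)
[/code]

In other words, the bake is the one step that crosses from 3D into 2D; everything downstream stays a pure image operation.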

Thanks Nicolas! I'm surprised that's the case - do you know if there are plans for this in the future?

The Substance engine has been designed and optimized to work exclusively on 2D textures. Changing that would basically mean redoing everything, and some operations and nodes would no longer work (FX-Maps are 2D, blurs and warps are 2D, etc.).

Textures and shaders live in two distinct worlds...
Product Manager - Allegorithmic

Yeah, I see what you mean - it just seems agonizingly close to a perfect workflow: couldn't you, say, internally rasterise the geometry attributes and pass them forward as a "texture" from there? One of those times I wish I were smarter, so I could proof-of-concept this, haha.

The MDL graph can do exactly this (because MDL is essentially creating a shader)... the workflow is currently not as polished as other parts of SD, though.

From my POV, I think that if the MDL portion of SD gets really refined, it could potentially be a way to get that "perfect workflow".

Even if you built a material entirely shader-side, if you want a texture as output you still have to bake the end result.
Keep in mind that a shader rasterizes model information from the camera's perspective, not in UV space.
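
To make that distinction concrete, here is a toy sketch of what a position-map baker does instead: it rasterizes each triangle using its UV coordinates as the "screen" position and writes the barycentrically interpolated world-space position into each covered texel. This is a simplified CPU illustration in Python/NumPy, not SD's actual baker; it skips edge padding, dilation, and UV-overlap handling.

[code]
import numpy as np

def bake_position_map(verts, uvs, faces, size=256):
    """Toy position-map baker: rasterize triangles in UV space
    (instead of projecting through a camera).

    verts: (V, 3) world positions, uvs: (V, 2) in [0, 1],
    faces: (F, 3) vertex indices. Returns (size, size, 3) floats."""
    out = np.zeros((size, size, 3), dtype=np.float32)
    for f in faces:
        p = verts[f]             # (3, 3) world-space corner positions
        t = uvs[f] * (size - 1)  # (3, 2) UV corners in texel coordinates
        lo = np.floor(t.min(axis=0)).astype(int)
        hi = np.ceil(t.max(axis=0)).astype(int)
        # Barycentric denominator; skip degenerate UV triangles
        d = (t[1, 1] - t[2, 1]) * (t[0, 0] - t[2, 0]) + \
            (t[2, 0] - t[1, 0]) * (t[0, 1] - t[2, 1])
        if abs(d) < 1e-12:
            continue
        for y in range(max(lo[1], 0), min(hi[1], size - 1) + 1):
            for x in range(max(lo[0], 0), min(hi[0], size - 1) + 1):
                w0 = ((t[1, 1] - t[2, 1]) * (x - t[2, 0]) +
                      (t[2, 0] - t[1, 0]) * (y - t[2, 1])) / d
                w1 = ((t[2, 1] - t[0, 1]) * (x - t[2, 0]) +
                      (t[0, 0] - t[2, 0]) * (y - t[2, 1])) / d
                w2 = 1.0 - w0 - w1
                if w0 >= 0 and w1 >= 0 and w2 >= 0:  # texel inside triangle
                    out[y, x] = w0 * p[0] + w1 * p[1] + w2 * p[2]
    return out
[/code]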

Just as a thought experiment, if you want to take it one step further, you could skip textures entirely and render the final result shader-side.
This would be useless for real-time applications, as an average material easily takes a few seconds to compute, so a single material would drop your framerate to one frame every couple of seconds.
For ray-traced renders it would be an equivalent slowdown in pure computation time.
In both cases you are also dealing with dozens to hundreds of texture buffers (one per node), which would eat up a lot of memory and would have to be written and read over and over, every frame.
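
A quick back-of-envelope calculation illustrates the memory point (all numbers here are assumptions picked for illustration, not measured from SD):

[code]
# Assumed: a graph with 100 intermediate nodes, each holding
# a 2048x2048 RGBA buffer at 16 bits per channel.
nodes = 100
res = 2048
channels = 4
bytes_per_channel = 2  # 16-bit half float

total = nodes * res * res * channels * bytes_per_channel
print(f"{total / 2**30:.1f} GiB of live texture buffers")  # ~3.1 GiB
[/code]

Several gigabytes of intermediate buffers kept alive and re-traversed every frame is exactly the kind of cost that baking once to a handful of final textures avoids.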