Author Topic: I'm Looking for Critiques on My Pipeline

I’m using Designer and a little bit of Painter in a small scene I’m creating.

However, I’m not certain that what I’m doing is the most efficient way to do it. I’ve been trying to reference the “Atlantis” demo Allegorithmic made for Unreal, but their Substance work files were not included, so there’s only so far I can go down that route. So I'm looking for some more experienced eyes.

My situation is thus:
I have a lot of assets that are all made of a single kind of metal. I want these assets to have mesh-specific effects generated for them (edge wear, dirt, dust) using Designer. However, I also want to maintain the ability to make adjustments to the base metal material that will propagate to all assets using it.

I’ve achieved these points, but I’m just not sure my way is the most efficient.

My flow is as follows:
I’ve set up a Substance graph that takes baked mesh data maps (curvature, AO, normal, world-space normal, position) and outputs an RGBA mask and a normal map. Specifically, the graph generates dirt, edge wear, and dust masks and loads them into the R, G, and B channels respectively. Sometimes I need an emissive mask, and I’ll plug that into the A channel. The normal map output simply takes the input normal and blends in the various effects appropriately.
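
For clarity, the packing convention amounts to something like this (a small numpy sketch; in practice the masks come straight out of the Designer graph):

Code:
import numpy as np

def pack_masks(dirt, edge_wear, dust, emissive=None):
    # Each mask is a 2D float array in [0, 1]; output is the RGBA mask texture.
    h, w = dirt.shape
    rgba = np.zeros((h, w, 4), dtype=np.float32)
    rgba[..., 0] = dirt       # R: dirt
    rgba[..., 1] = edge_wear  # G: edge wear
    rgba[..., 2] = dust       # B: dust
    rgba[..., 3] = emissive if emissive is not None else 0.0  # A: optional emissive
    return rgba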

That is my base “function” graph.

From here I create a new graph for a mesh asset, generate the data maps for that mesh, reference the “function” graph, plug the mesh data into the “function”, and plug the “function” into the new graph’s outputs. I repeat this for each mesh asset.

So by the time I publish, I have a single Substance package containing around 10 graphs, each referencing the “function” graph to generate masks for a particular mesh.

At this point I import the Substance file into Unreal and generate the instances. In UE4, I have a master material that takes an RGBA mask and a normal map. It separates the RGBA mask into its channels and applies the appropriate coloring and metallic/roughness adjustments for each effect. This is overlaid on a base metal material I’ve created. The normal map is more or less just plugged in directly.
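
To make that step concrete, the per-pixel math the master material performs is roughly the following (a numpy sketch; the tint, roughness, and metallic values are placeholders, not my actual settings):

Code:
import numpy as np

def lerp(a, b, t):
    return a + (b - a) * t

def apply_effects(rgba_mask, base_color, base_roughness, base_metallic):
    # rgba_mask: (H, W, 4) packed as R=dirt, G=edge wear, B=dust, A=emissive.
    # base_* describe the shared base metal; the effect values below are placeholders.
    h, w = rgba_mask.shape[:2]
    dirt, wear, dust = rgba_mask[..., 0], rgba_mask[..., 1], rgba_mask[..., 2]

    color = np.broadcast_to(np.asarray(base_color, dtype=np.float32), (h, w, 3)).copy()
    rough = np.full((h, w), base_roughness, dtype=np.float32)
    metal = np.full((h, w), base_metallic, dtype=np.float32)

    # Dirt: darker, rougher, non-metallic.
    color = lerp(color, np.array([0.10, 0.08, 0.06]), dirt[..., None])
    rough = lerp(rough, 0.9, dirt)
    metal = lerp(metal, 0.0, dirt)

    # Edge wear: bare metal showing through, slightly shinier.
    color = lerp(color, np.array([0.70, 0.70, 0.72]), wear[..., None])
    rough = lerp(rough, 0.3, wear)
    metal = lerp(metal, 1.0, wear)

    # Dust: light, very rough, non-metallic.
    color = lerp(color, np.array([0.60, 0.58, 0.55]), dust[..., None])
    rough = lerp(rough, 0.95, dust)
    metal = lerp(metal, 0.0, dust)

    return color, rough, metal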

I make instances of this master material and simply plug in the RGBA and normal maps that the Substance file generates. This gives me mesh-specific materials with editable effects (via exposed Substance parameters) and lets me make adjustments to the base metal material (inside the UE4 master material) should I desire, with those adjustments then propagating to all of the instances.
This method has a number of little caveats and details that are needed to make it work correctly, but that’s the general gist.
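
The instancing step could probably even be scripted. Below is an untested sketch using the editor Python API available in newer UE4 versions; the asset paths and the "RGBAMask"/"Normal" parameter names are made up for illustration, not what my material actually uses.

Code:
import unreal

asset_tools = unreal.AssetToolsHelpers.get_asset_tools()
parent = unreal.EditorAssetLibrary.load_asset("/Game/Materials/M_MetalMaster")

def make_instance(name, rgba_path, normal_path, dest="/Game/Materials/Instances"):
    # Create a new material instance whose parent is the master material.
    factory = unreal.MaterialInstanceConstantFactoryNew()
    instance = asset_tools.create_asset(name, dest, unreal.MaterialInstanceConstant, factory)
    unreal.MaterialEditingLibrary.set_material_instance_parent(instance, parent)

    # Plug the per-mesh textures into the (hypothetical) texture parameters.
    rgba = unreal.EditorAssetLibrary.load_asset(rgba_path)
    normal = unreal.EditorAssetLibrary.load_asset(normal_path)
    unreal.MaterialEditingLibrary.set_material_instance_texture_parameter_value(instance, "RGBAMask", rgba)
    unreal.MaterialEditingLibrary.set_material_instance_texture_parameter_value(instance, "Normal", normal)
    return instance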

It works, but it seems inefficient to need to create a new graph per asset. I don’t know if I would be better off just using the “function” graph as a Substance in Unreal, creating instances of it, and plugging the baked mesh data files into it there. But I know that Substance does some good compression work on bitmaps, and I wonder if I would end up with a larger footprint going that route.

Does anyone have any suggestions on optimizing this process? Or would anyone mind sharing their Substance->UE4 flow?

Thanks!

Hi,

I think you are definitely thinking in the right direction. You don't need to create mesh-specific graphs. Instead, I would create a single graph that takes the normal map as input using an input color node. I would then derive the curvature and AO from the normal so you don't need to include those extra maps. Now you have a single mesh graph that only uses the normal and generates the AO and curvature needed for the edge wear effects. This graph will update any time a new normal map is fed to it.
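
If it helps to see the idea outside of Designer, the core of a curvature-from-normal derivation looks something like this (a rough numpy sketch; Designer's nodes are more sophisticated, and the sign depends on your green-channel convention):

Code:
import numpy as np

def curvature_from_normal(normal_rgb):
    # normal_rgb: (H, W, 3) tangent-space normal map in [0, 1].
    n = normal_rgb * 2.0 - 1.0               # decode to [-1, 1]
    dnx_dx = np.gradient(n[..., 0], axis=1)  # change of the X component across X
    dny_dy = np.gradient(n[..., 1], axis=0)  # change of the Y component across Y
    curv = dnx_dx + dny_dy                   # divergence: positive = convex, negative = concave
    return np.clip(curv * 0.5 + 0.5, 0.0, 1.0)  # remap so 0.5 = flat, like a curvature bake

AO can be approached similarly inside Designer (normal to height, then an ambient occlusion node), so the sketch only covers curvature.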

You can then use that input in UE4. For example, if you import your normal maps into UE4 as the Substance Image type, you can feed a normal map into the input of your Substance graph. This lets you swap the normal map directly in UE4, which causes the mesh graph to recompute the effects based on that specific normal map.

Cheers,
Wes

Head of Substance Demo Art Team
the3dninja@adobe.com
Twitter: The3DNinja

Great, thanks Wes! That certainly sounds better; I didn't realize it was so simple to generate those maps from the normal.

The only loose end is that the dust generator in Designer also requests position and world-space normal maps to do its thing, so I would still need to bring those maps in. Thinking about it, though, it may be better to generate that layer of detail in the Unreal material, since the material already has access to world-space normals and position. That would also have the added benefit of letting me rotate the mesh any which way.
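
For reference, the dust layer I'd move into the Unreal material boils down to something like this (a numpy sketch of the math; the contrast exponent is just an arbitrary starting point):

Code:
import numpy as np

def dust_mask_from_world_normal(world_normal, up=(0.0, 0.0, 1.0), contrast=4.0):
    # world_normal: (H, W, 3) world-space normals in [-1, 1].
    # Dust accumulates on upward-facing surfaces; raising the dot product to a
    # power tightens the mask toward flat, horizontal areas.
    up = np.asarray(up, dtype=np.float32)
    facing = np.clip(np.sum(world_normal * up, axis=-1), 0.0, 1.0)
    return facing ** contrast

Since the material evaluates the world-space normal at runtime, the mask would follow the mesh however it's rotated.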

Thanks again Wes, that helps out a lot.