Author Topic: Performance

Hi,

My team has been using Substance Designer for a long time, and we have decided to automate some of our manual steps.

I need some technical assistance; I have two questions:

1. I have noticed "Cooking optimization options" in sbscooker.
My goal is to make sbscooker as quick as possible (it will run on a dedicated computer). As far as I understand, I should set "crc" to "1" and "full" to "0", but this actually increased cooking time for me. After that, I applied "merge-data 1" and it made the process faster. What settings should I use for the fastest cooking?
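For reference, here is roughly the invocation I'm timing, written as a small Python wrapper (the paths are placeholders, and the exact option spelling should be checked against the "Cooking optimization options" section of sbscooker --help):

Code:
import subprocess

# Placeholder paths; substitute your own package and output directory.
SBSCOOKER = "sbscooker"   # assumes the cooker is on PATH
PACKAGE = "material.sbs"
OUTPUT_DIR = "cooked"

# The cooking optimization options discussed above; verify the exact
# spelling for your version with `sbscooker --help`.
cmd = [
    SBSCOOKER,
    "--inputs", PACKAGE,
    "--output-path", OUTPUT_DIR,
    "--crc", "1",
    "--full", "0",
    "--merge-data", "1",
]
subprocess.run(cmd, check=True)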

2. In the future we would like to use sbsmutator to edit existing graphs, sbscooker to make them "ready for render", and sbsrender to render them. Is this the "official" way to do this? And if it is, what hardware would you recommend? What is more important for this use case, CPU or GPU? How much RAM would you recommend for a moderately complex graph?

Thank you,
Markus

Any update?

Hi Markus,

Although I don't have answers to all of your questions, I'll try to help with my own experience, since it's better than nothing.

Quote
I have noticed "Cooking optimization options" in sbscooker.
My goal is to make sbscooker as quick as possible (it will run on a dedicated computer). As far as I understand, I should set "crc" to "1" and "full" to "0", but this actually increased cooking time for me. After that, I applied "merge-data 1" and it made the process faster. What settings should I use for the fastest cooking?
I haven't strayed from the default settings so I can't provide any input here.


Quote
In the future we would like to use sbsmutator to edit existing graphs...
Edit what, exactly?
Although sbsmutator has editing capabilities, they are mostly limited to graph-level input/output changes.
If you need more advanced capabilities, like adding nodes to a graph or building one from scratch, you will need pysbs.
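For example (a rough sketch from memory, so double-check the calls against the pysbs documentation shipped with the Automation Toolkit), editing a package with pysbs looks roughly like this:

Code:
from pysbs import context, substance

# Placeholder path; pysbs generally expects absolute paths.
PACKAGE = "C:/work/material.sbs"

ctx = context.Context()

# Load and parse an existing .sbs package.
doc = substance.SBSDocument(ctx, PACKAGE)
doc.parseDoc()

# Grab a graph by its identifier ("MyGraph" is a placeholder).
graph = doc.getSBSGraph("MyGraph")

# ... modify the graph here (add nodes, change parameters, rewire I/O)
# using the graph editing calls described in the pysbs documentation.

# Write the modified package back out.
doc.writeDoc("C:/work/material_edited.sbs")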

Quote
In the future we would like to use sbsmutator to edit existing graphs, sbscooker to make them "ready for render", and sbsrender to render them. Is this the "official" way to do this?
Yup, sounds about right.
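In practice that chain is usually just a script that shells out to each tool in turn. A minimal sketch (assumed file names; I've left the sbsmutator arguments out because they depend on what you're editing, and the sbsrender flags should be checked against sbsrender --help):

Code:
import subprocess

def run(cmd):
    # Run one pipeline step and fail fast if it errors out.
    print(">", " ".join(cmd))
    subprocess.run(cmd, check=True)

EDITED_SBS = "material_edited.sbs"   # output of your sbsmutator/pysbs step
COOKED_DIR = "cooked"
RENDER_DIR = "renders"

# 1. Edit the graph with sbsmutator (or pysbs); arguments omitted here.

# 2. Cook the edited package into an .sbsar archive.
run(["sbscooker", "--inputs", EDITED_SBS, "--output-path", COOKED_DIR])

# 3. Render the outputs of the cooked archive.
run(["sbsrender", "render",
     "--input", COOKED_DIR + "/material_edited.sbsar",
     "--output-path", RENDER_DIR])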

Quote
And if it is, what hardware would you recommend? What is more important for this use case, CPU or GPU?
Tough to answer, as it depends heavily on your production needs and how you optimize your subprocesses.
Bakes (from mesh): heavily GPU reliant.
Renders: both GPU and CPU reliant, although I think the bias is more towards the CPU. (I don't know this for sure, and I can no longer find the blog entry that discussed it.)

A recent NVIDIA card (GTX 10 / RTX 20 series) would give you the best performance when baking.
A mid- to high-end multithreaded processor will give you great performance when rendering.

I'm not sure whether RTX acceleration has been optimized for, or is even supported on, Linux, so you may need to look into that if you're on Linux.

Quote
How much RAM would you recommend for a moderately complex graph?
Again, it's very relative. How would you define a moderately complex graph?
Baking:
The resolution of your asset will likely have the most significant impact on memory, as the asset needs to be cached during baking.
I've seen a single subprocess take anywhere from 2 GB to 15 GB, depending on a combination of factors.
Ideally, aim for a minimum of 64 GB.

Rendering:
Highly dependent on the optimization of your graph.
If you're allowing artists to work heavily with procedural scaling (not tiling), expect to use a lot more memory.
In fact, monitoring memory and performance during rendering is how we've caught multiple graphs that weren't optimized properly.
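To give an idea of what I mean, here is a sketch of that kind of monitoring using the third-party psutil package to sample the renderer's memory while it runs (command and paths are placeholders):

Code:
import subprocess
import time

import psutil

# Placeholder command; substitute your real sbsrender invocation.
cmd = ["sbsrender", "render",
       "--input", "cooked/material.sbsar",
       "--output-path", "renders"]

proc = subprocess.Popen(cmd)
ps = psutil.Process(proc.pid)

peak_rss = 0
while proc.poll() is None:
    try:
        # Track the peak resident memory of the render process.
        peak_rss = max(peak_rss, ps.memory_info().rss)
    except psutil.NoSuchProcess:
        break
    time.sleep(0.5)

print("peak resident memory: %.2f GiB" % (peak_rss / 2**30))

# An unusually high peak is a good hint that a graph needs optimizing
# (e.g. heavy procedural scaling instead of tiling).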