Show Posts


Topics - polarathene

Today I was trying to debug some quality issues with 8k textures being processed through Substance. One of these textures contains fine print text that was readable as input but became a blurry mess on output, despite being 8k. I've got the GPU engine set correctly, and our artist found in the settings that Substance Designer was capping inputs at 4k; if I switch that to 8k, the quality issues are resolved in Substance Designer's output previews, though I'm not sure how much this reflects the CLI tools.

Going through the docs I came across the sbscooker option `--size-limit` (it would be good to know the default SAT uses when rendering). I tried that with a value of 12 (for 8192px); this baked to an sbsar fine, but calling sbsrender gives the following error:
"content of input file "filename.sbsar" does not form a valid package."
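A note on sizes: Substance size parameters such as `$outputsize` are log2 exponents. If `--size-limit` follows the same convention (an assumption worth verifying against the SAT docs), then 12 maps to 4096px and 13 to 8192px. A quick plain-Python sanity check:

```python
# Map Substance log2 size exponents to pixel dimensions.
# Assumption: --size-limit uses the same log2 convention as $outputsize.
def log2_to_pixels(exponent: int) -> int:
    """Convert a Substance log2 size exponent to a pixel dimension."""
    return 2 ** exponent

for e in (11, 12, 13):
    print(e, "->", log2_to_pixels(e))  # 11 -> 2048, 12 -> 4096, 13 -> 8192
```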

If I try to load the .sbs in Substance Designer, it's fine at 4k; but after changing the settings to support 8k and loading the graph for rendering, memory usage climbs to about 16GB (out of 128GB available on this system) and SD crashes. Is it likely that sbsrender is hitting the same problem?

The graph itself is set to absolute 8k for the input bitmaps (they just show as 4k unless the limit is adjusted in settings).

It's possible that it's due to Substance setting a memory limit? The docs mention a default of 1000MB for sbsrender, but the parameter is listed as `--engine`, the same as the parameter above it for selecting an engine... That's a bit confusing if the same parameter should be specified twice; maybe it's meant to be `--engine-mem`? I've seen `--memory-budget` elsewhere for this, but it no longer seems to be supported.

I tried `--engine 24000` with no luck. sbscooker used up to 10GB of RAM, and the sbsar file produced is about 5GB (`--compression-mode 2`). The current graph being rendered contains 20 textures: 5 at 2k, 5 at 8k, and 10 at 4k. They're linked externally as far as I know, but I guess the sbsar is packing them?

`createIterationOnPattern` allows for duplicating multiple nodes by their UID, intelligently creating the connections, which is great. However, it seems to duplicate an instance rather than create new unique instances. I used the method where one of the duplicated nodes was an input node; all its copies used the same identifier string, and updating it affected all the other inputs, so the graph only saw them as a single input.

It would be great to support unique copies with their own identifiers, just like how Substance will create a new input node whose default identifier has a number appended so that it is unique. I can create these nodes in a loop with manual connection calls; the iteration method was just a nice convenience.
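For the loop-based workaround, the numbering scheme described above (appending a number until the identifier is unique) is easy to reproduce when creating input nodes one at a time. This is plain Python illustrating the logic, not a Pysbs call:

```python
def unique_identifier(base: str, existing: set) -> str:
    """Mimic Substance Designer's behavior of appending a number
    until an input identifier is unique within the graph."""
    if base not in existing:
        return base
    n = 1
    while f"{base}_{n}" in existing:
        n += 1
    return f"{base}_{n}"

existing = set()
for _ in range(3):
    ident = unique_identifier("input", existing)
    existing.add(ident)
print(sorted(existing))  # ['input', 'input_1', 'input_2']
```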

Presently, for batch rendering with the command-line tools, I've been modifying an SBS file (looking up resource bitmaps and adjusting the externally linked filepath), exporting the SBSAR with sbscooker, and rendering with sbsrender. Performance is a bit slow and GPU/CPU usage does not seem that high; I'm assuming the I/O of writing the modifications and creating a new SBSAR for each render is responsible?
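For context, each iteration of that pipeline boils down to building two command lines. A minimal sketch that only assembles them (paths are hypothetical, and the flag spellings should be checked against your SAT version; the actual calls would go through `subprocess.run`):

```python
def cooker_cmd(sbs_path: str, output_dir: str,
               size_limit: int = 13, compression_mode: int = 2) -> list:
    """Build an sbscooker command line using the flags discussed here."""
    return ["sbscooker",
            "--inputs", sbs_path,
            "--size-limit", str(size_limit),
            "--compression-mode", str(compression_mode),
            "--output-path", output_dir]

def render_cmd(sbsar_path: str, output_dir: str) -> list:
    """Build an sbsrender command line."""
    return ["sbsrender", "render",
            "--input", sbsar_path,
            "--output-path", output_dir]

print(" ".join(cooker_cmd("material.sbs", "out/")))
print(" ".join(render_cmd("out/material.sbsar", "renders/")))
```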

I have seen that the Python demos can split the cooking/rendering into a queue of tasks to make more use of the CPU. To make more use of the GPU, would it be wise to convert the Substance graph to an instance with input nodes (instead of direct bitmap nodes), with a main graph containing the instances and the bitmap nodes connected into the instanced graph, so that I can render a chunk of a sequence at a time? Is there an advisable amount? (I guess not, as it'd depend on the available GPUs and their memory/performance?)
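The task-queue idea from the Python demos can be sketched with the standard library alone. Here the cook/render step is stubbed out with a placeholder, since the real work would shell out to sbscooker/sbsrender:

```python
from concurrent.futures import ThreadPoolExecutor

def cook_and_render(job_id: int) -> str:
    # Placeholder for: run sbscooker, then sbsrender, on one chunk
    # of the sequence.
    return f"job-{job_id}-done"

# Process a chunk of the sequence at a time; worker count would be
# tuned to GPU count/memory in a real setup.
with ThreadPoolExecutor(max_workers=2) as pool:
    results = list(pool.map(cook_and_render, range(4)))

print(results)  # ['job-0-done', 'job-1-done', 'job-2-done', 'job-3-done']
```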

If the substance is set to use parent width/height or bit depth, it doesn't get these values based on the input bitmap resources (although you'd think it might be capable of that via metadata?). I've read that such a substance will use a default size that varies between applications; sbsrender defaults to 256x256, I think? While I can adjust the output size to 4096, it will still be processed/rendered from an input of 256 unless I manually add an exposed parameter/function to each substance to adjust that, or set the node to use a specific size.

It's great that the output size can be specified, but why does sbsrender not allow adjusting the default initial/app size? (If it does, this is not clear in the documentation.)
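From my reading of the SAT docs, sbsrender's `--set-value` can override `$outputsize` when the graph inherits that parameter; values are log2 exponent pairs. A sketch that builds such an argument (the flag usage is my reading of the docs, not something I've verified on this version):

```python
def outputsize_arg(width_px: int, height_px: int) -> str:
    """Build an sbsrender --set-value argument for $outputsize.
    Values are log2 exponents, e.g. 4096 -> 12."""
    wx, wy = width_px.bit_length() - 1, height_px.bit_length() - 1
    assert 2 ** wx == width_px and 2 ** wy == height_px, "must be power of two"
    return f"$outputsize@{wx},{wy}"

cmd = ["sbsrender", "render", "--input", "material.sbsar",
       "--set-value", outputsize_arg(4096, 4096)]
print(cmd[-1])  # $outputsize@12,12
```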

It would be nice if sbsrender could accept a parameter for selecting GPUs for rendering. The system I have has an Intel iGPU and two NVIDIA dGPUs; is there any information on how sbsrender chooses a GPU to render with? If I could give priority to the dGPUs over the iGPU, that'd be great :)

I've currently got my own Python script that modifies the XML in the .sbs files directly; it reads some information for batch processing and updates the filepaths of bitmap assets. The Substance Automation Toolkit was new at the time I wrote it, the documentation was lacking, and I was new to Substance Designer.
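That script amounts to a small XML rewrite. A self-contained sketch with `xml.etree` is below; note the element layout (`<resource>` containing a `<filepath>` with a `v` attribute) is illustrative rather than the exact .sbs schema, which is worth inspecting directly:

```python
import xml.etree.ElementTree as ET

# Illustrative stand-in for a fragment of an .sbs file, not the real schema.
SAMPLE = """<package>
  <resource identifier="diffuse">
    <filepath v="old/diffuse.png"/>
  </resource>
</package>"""

def retarget_bitmaps(xml_text: str, new_dir: str) -> str:
    """Point every resource filepath at a new directory,
    keeping the original file name."""
    root = ET.fromstring(xml_text)
    for fp in root.iter("filepath"):
        name = fp.get("v").rsplit("/", 1)[-1]
        fp.set("v", f"{new_dir}/{name}")
    return ET.tostring(root, encoding="unicode")

print(retarget_bitmaps(SAMPLE, "new"))
```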

Documentation looks like it's improved quite a bit, and it will be easier to migrate my code to the Python API now, especially for the new features I want to add to the batch tool. From what I can see, though, it's not suitable for rendering? Do I still use the batch tools like sbsrender for that? And do I still need sbscooker for creating the sbsar?

Support said that the batch tools were no longer supported, but they seem to be the same tools provided with the Substance Automation Toolkit? Is there an intention to later provide an API for these tools via the Pysbs API?

I've been in touch with support about licenses, but the answers across several back-and-forth exchanges have been mixed.

I want to know what is required to use the Substance Automation Toolkit and/or Batch Tools on multiple machines for batch rendering. I was first told that the Indie license allows this but the Pro does not. I was later told that the Indie license does not allow it and that one license per machine is required, so I asked if Enterprise was required for batch rendering/automation, and was told once again that the Indie license could use multiple machines...

I would like some clarity. I'm absolutely fine with a single machine for Substance Designer itself, but the workflow largely involves modifying the .sbs file to update the input images/bitmaps (or similar modifications), then exporting to .sbsar to render. Currently I'm only using one machine for the automation and batch rendering, but it would be nice to know what is required to use multiple machines.
