Show Posts



Messages - polarathene

1
Loaded the sbs into Substance Designer, raised the limit in settings from 4k to 8k but didn't load the target graph. Exported the package to sbsar; sbscooker.exe peaked at 122GB of RAM (99% of system RAM in use), and it has since dipped by varying amounts, as low as 30GB, then climbed back up to using as much RAM as possible. It's still running 25 mins later, presently at 94GB of RAM usage. This was with the export option that avoids compression, which was meant to speed up the write to disk.

I'll give it a little while longer, but I guess for some reason sbscooker is really struggling to create the sbsar file. It just stopped (presumably crashed) while I was typing this; the sbsar file is still 0kb... so, not enough RAM?

Just switched the setting back to 4k and tried the GUI export to sbsar again; it's showing the same symptoms, already exceeding 40GB of RAM (I take it the limit setting has no effect on this then? Confirmed by ending the sbscooker.exe task and checking the command from the resulting error: --size-limit is always 12). Tried with 128px in the limit setting and best compression, same problem (though the compression mode was updated to 1).

Just to confirm: giving SAT's sbscooker a --size-limit of 12 exports the sbsar fine, taking nowhere near as long or as much memory. At --size-limit 13 it also seems to finish fine, but sbsrender fails (although the error is reported from sbsbaker for some reason).
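For reference, the cook step I'm testing boils down to roughly the following (a sketch run from Python; the paths and the flag names other than `--size-limit` are from memory, so verify against `sbscooker --help`). If `--size-limit` is a power-of-two exponent like `$outputsize` (where 11 maps to 2048), then 12 would correspond to 4096 and 13 to 8192, which might explain the different behaviour between the two values:

Code:
import subprocess

# Sketch only: paths are placeholders and flag names other than
# --size-limit should be verified against `sbscooker --help`.
cook = subprocess.run(
    [
        "sbscooker",
        "--inputs", "my_material.sbs",   # placeholder .sbs
        "--size-limit", "13",            # 2**13 = 8192, assuming it's an exponent
        "--output-path", "cooked",       # placeholder output directory
    ],
    capture_output=True,
    text=True,
)
print(cook.returncode)
print(cook.stderr)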

2
Today I was trying to debug why we were getting some quality issues with 8k textures being processed through Substance. One of these textures contains fine print text, and while readable as input it became a blurry mess on output despite being 8k. I've got the GPU engine set correctly, and our artist found in the settings that Substance Designer was capping inputs at 4k; if I switch that to 8k the quality issues are resolved in Substance Designer's output previews, though I'm not sure how much this reflects the CLI tools.

Going through the docs I came across the sbscooker option `--size-limit` (it would be good to know what default SAT uses when rendering). I tried that with a value of 12 (intending 8192px); it cooked to an sbsar fine, but calling sbsrender gives the following error:
Quote
content of input file "filename.sbsar" does not form a valid package.

If I try to load the sbs in Substance Designer, it's fine at 4k, but after going into settings to support 8k and loading the graph for rendering, memory goes up to about 16GB (out of 128GB available on this system) and SD crashes. Is it likely that sbsrender is having the same problem?

The graph itself is set to absolute 8k for the input bitmaps (they just show as 4k unless the limit is adjusted in settings).

Could it be due to Substance setting a memory limit? The docs mention for sbsrender that the default is 1000MB, but the parameter is listed as `--engine`, just like the parameter above it for selecting an engine... That's a bit confusing if the same parameter is meant to be specified twice; maybe it should be `--engine-mem`? I've seen `--memory-budget` elsewhere for this, but it no longer seems to be supported.

---

I tried `--engine 24000` with no luck. sbscooker used up to 10GB of RAM and the sbsar it produced is about 5GB (`--compression-mode 2`). The graph currently being cooked for rendering contains 20 textures: 5 at 2k, 5 at 8k and 10 at 4k. They're linked externally as far as I know, but I guess the sbsar is packing them?
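For anyone else hitting the "does not form a valid package" error: since sbscooker can die and leave a 0kb .sbsar behind (as in my other thread), I've started sanity-checking the cook result before calling sbsrender. A sketch with placeholder paths and flag names from memory:

Code:
import os
import subprocess

sbs = "my_material.sbs"                              # placeholder
sbsar = os.path.join("cooked", "my_material.sbsar")  # placeholder

cook = subprocess.run(
    ["sbscooker", "--inputs", sbs, "--output-path", "cooked"],
    capture_output=True, text=True,
)

# A crashed cooker or an empty/missing .sbsar only shows up later as the
# "not a valid package" error from sbsrender, so fail early here instead.
if cook.returncode != 0 or not os.path.isfile(sbsar) or os.path.getsize(sbsar) == 0:
    raise RuntimeError(f"cook failed (exit {cook.returncode}): {cook.stderr}")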

3
Fantastic, cheers for clearing all that up! Would be great for others if that information were more easily discoverable, this forum thread should help with that at least :)

4
GPU rendering is already in use with the parameters you mentioned; I discovered it when trying to figure out why 4k outputs were rendering as 2k images.

I'm not asking how to enable GPU rendering; I'm asking how to know which GPU is selected for rendering, or, in the case of multiple GPUs, how priority/distribution of render tasks across GPUs works. E.g., if I want to render 10,000 times, is each sbsrender call going to task the iGPU (which is DX10 capable) each time and thus render slower than a more powerful dGPU like an NVIDIA 1080?

Can I alternatively blacklist the iGPU?

5
As mentioned, I would modify an sbs file, export the sbsar and then render, repeating the process several thousand times. The sbsar files would be several MB with the current substances. I'm guessing that beyond that, there might be some warmup/init time that could be avoided/reduced by having a substance that instances multiple copies of the graph I'm modifying/rendering, so that I could reduce the I/O and render calls by 5-10x, say.

It depends on whether the approach will have Substance use more of the GPU's resources (I note sbsrender has a default VRAM limit of 1000MB that I'd need to be aware of as well). I do know that one of the demos I saw for SAT rendered many texture outputs at once (our substances take several inputs but only one output is rendered by each graph).

I'll be giving it a try soon to see if it reduces rendering time; if not, I'm sure utilising more cores will help too :) I have an idea of how to scale the resource usage to make the most of the given hardware, although I'm curious how sbsrender chooses which GPU(s) to use and whether I can influence priority/selection.
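When I test it, I'll time each stage per iteration to see where the time actually goes (modify vs cook vs render), along these lines (just a sketch; the flag spellings and paths are illustrative):

Code:
import subprocess
import time

def timed(label, cmd):
    # Run one stage of the pipeline and report how long it took.
    start = time.perf_counter()
    subprocess.run(cmd, check=True)
    print(f"{label}: {time.perf_counter() - start:.2f}s")

timed("cook", ["sbscooker", "--inputs", "my_material.sbs", "--output-path", "cooked"])
timed("render", ["sbsrender", "render", "--input", "cooked/my_material.sbsar",
                 "--output-path", "out"])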

Yes, I'm already using the appropriate engine parameter like d3d10pc on Windows; I found out about it in the past when I was confused about why the outputs were capping the render size at 2048px.

6
I have mentioned this in another thread, but I guess I will need to try again to confirm it.

When I used the parameters you're advising previously, input images that were 2k in resolution were being processed as 256x256 (relative to parent, as the FAQ link states). The outputsize value given to sbsrender only seemed to affect the rendered output size, e.g. it could save images at 1024, 2048, 4096 (with GPU engines), etc. However, the content was blurry, as if the input image had been upscaled from 256x256 (despite the external source being 2k).

The parameter name itself suggests it affects the output image size, not the default 256x256 working size that sbsrender uses (other supported programs have varying default sizes, but from my understanding those can be adjusted within the programs themselves). For the time being, I manually set the resolution of the bitmap nodes to their source size; that way the output images don't have the blurry upscaled look, and are instead crisp and detailed as expected.

7
`createIterationOnPattern` allows duplicating multiple nodes by their UIDs, intelligently creating the connections, which is great. However, it seems to duplicate an instance rather than create new unique instances. I used the method where one of the duplicated nodes was an input node: all of its copies used the same identifier string, which when updated affected all the other inputs, so the graph only saw them as a single input.

It would be great to support unique copies with their own identifiers, just like how Substance Designer creates a new input node with the default text plus an appended number so that the identifier is unique. I can create these nodes in a loop with manual connection calls; the iteration method was just a nice convenience.

8
Thank you for the clarification. It'd be great if information like this were documented somewhere too, along with any other differences between the licenses :)

My understanding is: while under 100k revenue, one Indie license subscription is needed per user/machine (multiple can be licensed on the same account?) to use SAT on multiple machines for batching. At or above 100k revenue, a Pro license for Substance Designer would be required (whether single user or sitewide is unclear), and the other tool we use, SAT, would also need a Pro license but could be used on as many machines as we have available?

I had a look at the buy section of the site and under Pro, SAT is not listed as an option.

9
Presently, for batch rendering with the command line tools, I've been modifying an SBS file (looking up resource bitmaps and adjusting the externally linked filepath), exporting the SBSAR with sbscooker and rendering with sbsrender. The performance is a bit slow and GPU/CPU usage doesn't seem that high; I'm assuming the I/O of writing the modifications plus creating a new SBSAR to render each time is responsible?

I have seen that the Python demos using batchtools_utilities.py can split the cooking/rendering into a queue of tasks to make more use of the CPU. To make more use of the GPU, would it be wise to convert the substance graph to an instance with input nodes (instead of direct bitmap nodes), with a main graph containing the instances and bitmap nodes connected into the instanced graph, so that I can render a chunk of a sequence at a time? Is there an advisable amount? (I guess not, as it'd depend on the available GPUs and their memory/performance?)
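To illustrate what I mean by a queue of tasks, here's roughly what I understood batchtools_utilities.py to be doing, reimplemented with a plain process pool (flag names and paths are illustrative, not taken from that script):

Code:
import subprocess
from concurrent.futures import ProcessPoolExecutor
from pathlib import Path

def cook_and_render(sbs_path):
    # Each worker cooks one already-modified .sbs and renders the resulting
    # .sbsar, so several cook/render pipelines run in parallel on the CPU.
    out_dir = "out"
    subprocess.run(["sbscooker", "--inputs", sbs_path,
                    "--output-path", out_dir], check=True)
    sbsar = str(Path(out_dir) / (Path(sbs_path).stem + ".sbsar"))
    subprocess.run(["sbsrender", "render", "--input", sbsar,
                    "--output-path", out_dir], check=True)

if __name__ == "__main__":
    jobs = ["variant_000.sbs", "variant_001.sbs", "variant_002.sbs"]  # placeholders
    with ProcessPoolExecutor(max_workers=4) as pool:
        list(pool.map(cook_and_render, jobs))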

10
I think I came across those; they used a separate Python file, batch_tools.py or something? Being new to Python and Substance at the time, they were not clear to me (smaller examples might have been easier to follow, I guess), and I can't recall what the docs were like for that approach of using the batch tools. I believe I referenced it, plus some gist example and the output/generation from a GUI tool on GitHub that created the CLI command, to learn/understand how to use sbscooker and sbsrender. The command line argument docs weren't clear enough in some areas I was having trouble with; that seemed to be a problem for others too when googling. Some example snippets would be helpful, though a proper Python API also helps as an alternative :)

Here are some examples:


Quote
--engine <arg>
    Switch to specific engine implementation. format of <arg> : <dynamic_library_filepath> or <engine_version_basename_substring> e.g.: ogl3, d3d10pc, ...

This doesn't indicate what the default is, or what the options are, given the e.g./ellipsis. I believe the default is CPU SSE2 or something (I can't remember the string for it). I learned that you can find these listed in a combobox within the Substance Designer UI, or look for the engine library files to see what's available; an example of using the command would also have been useful, as `<arg> : <dynamic_library_filepath>` or `<engine_version_basename_substring>` wasn't clear on its own. For example, `--engine d3d10pc` works, while ogl3 doesn't appear to be supported on Windows, so the user must discover the available options themselves instead of having a reference of what Allegorithmic offers per platform?

Quote
--set-value <arg>
    Set value to a numerical input parameter. Format of <arg> : <input_identifier>@<value>.

It's similar with outputsize, which is exposed by default: `--set-value $outputsize@11,11`, where the two values give `width,height` and 11 maps to 2048px (the default max for CPU rendering; there are bitdepth limitations as well, which took a while to figure out). From memory, I learned how to use this command thanks to the GUI tool on GitHub. There was a similar option for setting the size of nodes I think (inputs/outputs specifically?), but I didn't seem to have much luck getting that to work.
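For example, a single worked invocation like this would have answered most of my questions up front (shown here as a subprocess call, equivalent to running `sbsrender render --input material.sbsar --engine d3d10pc --set-value $outputsize@11,11 --output-path out` directly; the input/output paths are made up):

Code:
import subprocess

subprocess.run(
    [
        "sbsrender", "render",
        "--input", "material.sbsar",         # placeholder .sbsar
        "--engine", "d3d10pc",               # DirectX GPU engine on Windows
        "--set-value", "$outputsize@11,11",  # width,height as exponents: 2**11 = 2048
        "--output-path", "out",              # placeholder output directory
    ],
    check=True,
)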

I was pleasantly surprised to find the .sbs file was XML when I tried to view it in an editor; I could make more sense of the structure there than I could of the docs at the time. It's great that the docs have been improving, and the Pysbs API is really nice to use :)

11
If the substance is set to use the parent width/height or bitdepth, it doesn't get these values from the input bitmap resources (although you'd think it might be capable of that via metadata?). I've read that a substance like this will use a default size that varies by application; sbsrender defaults to 256x256 I think? While I can adjust the output size to 4096, it will still be processed/rendered from an input of 256 unless I manually add some exposed parameter/function to each substance to adjust that, or set the node to use a specific size.

It's great that the output size can be specified, but why doesn't sbsrender allow adjusting the default initial/app size? (If it does, this is not clear in the documentation.)

12
It would be nice if sbsrender supported choosing the GPU for rendering via a parameter. The system I have has an Intel iGPU and two NVIDIA dGPUs; is there any information on how sbsrender chooses which GPU to render with? If I could give priority to the dGPUs over the iGPU, that'd be great :)

13
I've currently got my own Python script to modify the XML in the .sbs files directly; it involves reading some information for batch processing and updating the filepath for bitmap assets. The Substance Automation Toolkit was new at the time I wrote that, and the documentation was lacking, at least for someone as new to Substance Designer as I was.
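For context, the script isn't doing anything fancy, roughly along these lines (the element/attribute names are from memory of what the .sbs XML looked like in my files, so treat them as approximate):

Code:
import xml.etree.ElementTree as ET

def retarget_bitmaps(sbs_path, new_dir):
    # Rewrite the filepath of every linked bitmap resource to point into
    # new_dir, keeping the original filename, then save the .sbs in place.
    tree = ET.parse(sbs_path)
    for filepath in tree.getroot().iter("filepath"):  # tag name as I recall it
        old = filepath.get("v", "")                    # values seem to live in 'v' attributes
        if old:
            name = old.rsplit("/", 1)[-1]
            filepath.set("v", f"{new_dir}/{name}")
    tree.write(sbs_path, encoding="utf-8", xml_declaration=True)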

The documentation looks like it's improved quite a bit, and it will be easier to migrate my code to the Python API now, especially for the new features I want to add to the batch tool. From what I can see it's not suitable for rendering, though? Do I still use the batch tools like sbsrender for that? Do I still need to use sbscooker for creating the sbsar?

Support said that the Batch Tools were no longer supported, but they seem to still be the same tools provided with the Substance Automation Toolkit? Is there an intention to later provide access to these tools via the Pysbs API?

14
I've been in touch with support about licenses, but the responses across the back-and-forth communications have been mixed.

I want to know what is required to use the Substance Automation Toolkit and/or Batch Tools on multiple machines for batch rendering. I've been told that the Indie license allows this but the Pro does not. I was then later told that the Indie license does not allow it and requires one license per machine, so I asked if Enterprise was required for batch rendering/automation and received a response saying the Indie license was fine for multiple machines again...

I would like some clarity. I am absolutely fine with a single machine for Substance Designer, but the workflow largely involves modifying the sbs file to update the input images/bitmaps (or similar modifications), then exporting to sbsar to render. I'm currently only using one machine for the automation and batch rendering, but it'd be nice to know what is required to use multiple machines.
