Hey there! I just released my first Substance Designer plugin. Sometimes I need a bit more control over the generation of MIP levels in DDS files. Since Substance Designer doesn't support exporting compressed DDS files with mipmap generation, I wrote a wrapper for Binomial's crunch app.
Unfortunately, saving the files out first and then compressing them makes the process a bit slow. I was thinking I could compile crunch as a DLL and use ctypes to compress the SDTexture binary data directly instead of writing intermediate files. Any help with this would be highly appreciated.
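If the DLL route works out, the Python side could look something like the sketch below. The DLL name and the exported compression function are assumptions (crunch would first have to be built as a shared library); only the buffer handling is standard ctypes.

```python
import ctypes

def as_ctypes_buffer(pixel_bytes):
    # Wrap the raw SDTexture bytes in a ctypes buffer that a C function
    # can consume directly, without an intermediate file on disk.
    return ctypes.create_string_buffer(pixel_bytes, len(pixel_bytes))

# Hypothetical usage once crunch is built as a DLL (names are assumptions):
# crn = ctypes.CDLL("crunch.dll")
# buf = as_ctypes_buffer(raw_pixel_data)
# crn.some_compress_export(buf, len(buf), ...)
```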
I'm also planning to support customizing the mip levels, so that one could set up the outputs of the graph to be resolution dependent (see the example folder in the repo), and outputs from different resolutions would be stitched together as MIP levels in the DDS file.
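For anyone picturing the stitching idea: each graph output would have to match one level of the mip chain, which halves in each dimension down to 1×1. A quick sketch of the resolutions involved:

```python
def mip_chain(width, height):
    # All resolutions of a full mip chain, halving each dimension
    # (never below 1) until reaching 1x1.
    sizes = [(width, height)]
    while width > 1 or height > 1:
        width, height = max(1, width // 2), max(1, height // 2)
        sizes.append((width, height))
    return sizes

print(mip_chain(8, 8))  # [(8, 8), (4, 4), (2, 2), (1, 1)]
```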
I know this is a niche topic, since many engines generate the mipmaps automatically. But I hope this may be useful for some who need greater control.
Since all the patents on DXT compression expired in 2018, I wonder what's holding developers back from supporting offline compression of DDS files in Substance Designer?
BTW: Does anyone know how to get the real output size of a graph when inheritance is set to relative to parent? I mean the settings in the parent toolbar. I know how to get the size of the output nodes themselves, but I need the graph's size to do some optimization in the code.
Having had another look at your output, I think your results are actually correct; you only provided the wrong size to read from the buffer. It's a 1024² grayscale texture with 16 bits per channel (2 bytes per pixel), but you read only 1024 bytes, which gives you just the first 512 pixels of the first row. You should read a size of 2097152 bytes (1024 × 1024 × 2).
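In other words, the byte count to read is simply width × height × bytes per pixel. A quick sanity check in plain Python:

```python
def expected_buffer_size(width, height, bytes_per_pixel):
    # Total number of bytes in the pixel buffer.
    return width * height * bytes_per_pixel

# 1024x1024 grayscale at 16 bits (2 bytes) per pixel:
print(expected_buffer_size(1024, 1024, 2))  # 2097152

# Reading only 1024 bytes covers just the first 512 pixels of row 0:
print(1024 // 2)  # 512
```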
Hey, were you successful with that in the meantime? I just started using the API and was looking for something similar. The code below should work for a single selected node. I left out all the safety checks for brevity.
I haven't tested the numpy part inside Designer, though, because I didn't install numpy into its Python distribution, but I did check the bytes-to-array conversion in another Python environment.
Basically I only used the ctypes.string_at function to read the buffer and converted with either uint8 or uint16, since the returned values can have 8 or 16 bits per channel.
```python
# Imports from the Substance Designer Python API:
from sd.api.sdproperty import SDPropertyCategory
from sd.api.sdvaluetexture import SDValueTexture


def get_sd_tex(node):
    # Take the first output property of the node.
    prop = node.getProperties(SDPropertyCategory.Output)[0]
    print(prop.getLabel())
    value = node.getPropertyValue(prop)
    sd_texture = SDValueTexture.get(value)
    return sd_texture


def print_tex_info(sd_tex):
    dim_x, dim_y = sd_tex.getSize()
    print(f"Dimensions: {dim_x} x {dim_y}")
    print(f"Bytes per pixel: {sd_tex.getBytesPerPixel()}")
    address = sd_tex.getPixelBufferAddress()
    print(f"Buffer Address: {address}")
```
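To round this off, here's a sketch of the bytes-to-numpy conversion described above. It relies only on the three texture methods already used in print_tex_info; the dtype argument for picking uint8 vs. uint16 and the multi-channel reshape are my own assumptions:

```python
import ctypes
import numpy as np

def buffer_to_array(sd_tex, dtype):
    # Read the texture's pixel buffer into a numpy array.
    # dtype should be np.uint8 or np.uint16, matching the bits per channel.
    dim_x, dim_y = sd_tex.getSize()
    num_bytes = dim_x * dim_y * sd_tex.getBytesPerPixel()
    raw = ctypes.string_at(sd_tex.getPixelBufferAddress(), num_bytes)
    # Derive the channel count from bytes per pixel and the dtype width.
    channels = sd_tex.getBytesPerPixel() // np.dtype(dtype).itemsize
    arr = np.frombuffer(raw, dtype=dtype)
    if channels > 1:
        return arr.reshape(dim_y, dim_x, channels)
    return arr.reshape(dim_y, dim_x)
```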