Author Topic: Strange function node behavior when there's too much stuff going on?

I've been trying to debug my function graphs for a few hours now. I'm using "set as return value" to trace through my function logic, and I'm getting really strange results.

For example, say I have nodes set up like A->B->C. Returning at C gives me a clear image, but dividing C by 2 and returning there gives me a completely black or white (or sometimes even a gradient) image. Is this a sign of some internal failure or limitation?

Even if I replace the divide by 2 with something like adding 1, I still get the strange behavior, so it's not related to the division. I'm wondering if I'm hitting some kind of internal memory limit. My graph attempts to sample pixels around the active pixel to determine how varied the area is. It works great when scanning 32 pixels away, and 64 pixels away, but I've hit this wall when I try 128. It's a shame, because I was really liking the effect, but I can't take it to a large scale until I figure out what's happening.

If this is not likely, then it's possible that I just have a strange bug in one of my function graphs somewhere.

Any advice or just a point in the right direction for the documentation would be appreciated, thanks!

It's hard to advise anything without seeing the issue or looking at the graph...
Product Manager - Allegorithmic

Sure, I can upload the graph and functions. It's a bit convoluted, and I didn't want to (seemingly) ask someone to debug my graph for me. I was hoping there was a simple explanation, like maybe I'm accessing something I shouldn't.

If you want to see the strangeness as it stands, open Interpolation_Test.sbs. Ignore the output node and just check the result of the bottom pixel processor. If you look at that pixel processor's edit function, the variable Depth_Test is supposed to control the number of pixels that get tested in that routine, but changing its value doesn't seem to have any effect at all. However, if I plug in the constant integer value node below it and change that instead, it seems to work. But even then, if you change that value to 5, you get a strange gradient result; change it to 6 and you get pure white.

There are a lot of repeat-like functions to scan various pixels, but the root scanning function is pixelVarianceScanRoot. I'm wondering if I'm doing something wrong in there that causes the strangeness. Maybe I'm writing to pixels outside of the memory I should be accessing. It's one of the few explanations I could come up with that makes any sense.

I've only been using Designer for a week or so, so still feeling my way around, and probably making some mistakes. Thanks for checking it out.
Last Edit: February 20, 2019, 10:14:30 pm

If I were to guess, it looks like I may be running out of some type of internal memory. If I unplug the bottom two connectors from my 8-choice depth switch, it works fine, and it fails again even if I plug constant zeroes into those bottom two connectors. As far as I know, those bottom connectors shouldn't influence anything when the switch variable chooses the top-most option, but I guess that depends on how it works internally.

In many cases, I can return at a certain node and all is well, but if I try to add any type of operation onto that and return after, it fails. And in many cases where it fails, just returning from an earlier point in the function makes it work.

So yeah, I think I'm running out of memory. Is there any type of memory setting in here that I can crank up? I'm only intending to use Substance offline, so I'm not worried about compatibility with any engines.

Thanks again

I'm investigating..

There were some unplugged inputs (depth_start / depth_something), but for now I suspect an issue with the switch node (something related to integers).
Product Manager - Allegorithmic

This is quite a convoluted function indeed..

So it looks like you are indeed hitting a limit. The shader just does not compile.

Fyi, the "switch" node does not optimize the generated code: all functions are inlined and all the nodes will be computed.

It means the "pixelSample" function is instantiated 15552 times in total, if my maths is correct, which results in 62208 samples. That's quite a lot.
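The blow-up from full inlining can be sketched like this. The numbers below are purely illustrative (they are not reconstructed from the actual .sbs file); the point is that nested switches multiply rather than select:

```python
# Illustrative sketch: when a switch node does not prune untaken branches,
# every branch at every nesting level is compiled in and computed, so the
# sample count multiplies with each level of nesting.
# All figures here are hypothetical, not taken from the poster's graph.

def inlined_cost(branches, samples_per_branch, nesting_levels):
    """Total sample calls when every branch at every level is inlined."""
    total = 1
    for _ in range(nesting_levels):
        total *= branches  # each level multiplies by the branch count
    return total * samples_per_branch

# A single 8-way switch whose branches each take 4 samples:
print(inlined_cost(branches=8, samples_per_branch=4, nesting_levels=1))  # 32

# Nest that switch two levels deep and the work multiplies:
print(inlined_cost(branches=8, samples_per_branch=4, nesting_levels=2))  # 256
```

With a few levels of nesting, a count in the tens of thousands is easy to reach even though only one path is logically taken.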

Having everything in a single pixel processor is not a good idea, and you are also not reusing results when computing the larger kernels.

You should split your algorithm into multiple pixel processors, and maybe reuse results when computing the larger kernels (not sure if that's possible?). Also, maybe there is a better algorithm to compute what you are looking for (what is it, exactly?).
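The "reuse results" idea can be sketched in 1-D, where each pass plays the role of one pixel processor. This is an assumed strategy for illustration, not the poster's actual graph: instead of one wide kernel that re-samples every pixel, chain two narrow passes, where the second pass reads the already-averaged output of the first.

```python
# Sketch of splitting a wide-kernel scan into chained narrow passes.
# Hypothetical 1-D example; in Designer each pass would be its own
# pixel processor reading the previous processor's output.

def box_mean(values, radius):
    """Mean over a window of +/- radius around each index (clamped at edges)."""
    n = len(values)
    out = []
    for i in range(n):
        lo, hi = max(0, i - radius), min(n, i + radius + 1)
        window = values[lo:hi]
        out.append(sum(window) / len(window))
    return out

pixels = [0.0, 1.0, 0.0, 1.0, 1.0, 0.0, 0.0, 1.0]

# Direct wide kernel: every output samples up to 2*4+1 source pixels.
wide = box_mean(pixels, radius=4)

# Reuse: run a narrow pass, then a second narrow pass over its output.
# Chaining two radius-2 filters covers a similar footprint while each
# pass only ever touches a small window.
narrow = box_mean(pixels, radius=2)
reused = box_mean(narrow, radius=2)
```

The chained version keeps each pass cheap, which is exactly what avoids blowing up a single shader.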
Product Manager - Allegorithmic

This is quite a convoluted function indeed..
Yeah, sorry, my first attempt at the pixel processor, and any non-trivial function.

There were some unplugged inputs
I noticed that too after I uploaded, along with a few math errors. I was making a lot of alterations to try to make sense of what was happening, so I had a feeling some shoes were left untied. Sorry about that.

Fyi, the "switch" node does not optimize the generated code: all functions are inlined and all the nodes will be computed.
I noticed it seemed to function that way, but I was hoping it was just an interface thing and not the actual logic behind the scenes. That's a bummer. It explains why I'm hitting a limit, and why unplugging some of the switch options fixes it.

I was under the impression that (general) code compilation doesn't require such logic to be processed when its condition is not met. Is this something specific to the pixel processor? Something related to graphics processor limitations? And does it apply to all conditional branches? For example, if I have code such as "if( X ) then do Y()", and Y is completely unsafe to do unless X is true, is the entire thing unsafe to include?
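The behavior being described matches a common GPU compilation strategy sometimes called branch flattening: both sides of a conditional are computed and the result is selected afterwards. A rough model of the difference (a simplified sketch of assumed behavior, not Designer's actual compiler):

```python
# Sketch of branch flattening: instead of skipping the untaken side,
# a flattened conditional computes BOTH sides, then selects one result.
# Simplified model; real shader compilers vary in when they do this.

calls = []

def y():
    calls.append("Y")
    return 2.0

def flattened_if(x, then_fn, else_value):
    """Model of 'if (X) then Y() else else_value' after flattening:
    then_fn runs unconditionally, and only the RESULT is selected."""
    then_result = then_fn()          # evaluated even when x is False
    return then_result if x else else_value

result = flattened_if(False, y, 0.0)
print(result)   # 0.0 -- the selected value is correct...
print(calls)    # ['Y'] -- ...but Y() still ran, so it must be safe to run
```

Under this model, "if( X ) then do Y()" offers no protection: Y() has to be safe (and affordable) to execute regardless of X, which is consistent with the switch node's untaken branches still counting against the compile limit.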


It means the "pixelSample" function is instantiated 15552 times in total, if my maths is correct, which results in 62208 samples. That's quite a lot.
Yep. It was written with the assumption that only the logic in true conditions would be processed. Now that I realize that isn't the case, I may be able to figure out an alternate strategy.

Having everything in a single pixel processor is not a good idea, and you are also not reusing results when computing the larger kernels.
You'll notice there was another pixel processor in the graph. The two were originally a single processor, so it was much worse before. I'm sure I can split them up further, now that I understand how it works internally.

You should split your algorithm into multiple pixel processors, and maybe reuse results when computing the larger kernels (not sure if that's possible?). Also, maybe there is a better algorithm to compute what you are looking for (what is it, exactly?).
Feeling my way around was the primary purpose of it. It's just an interpolation node: you plug in two inputs and it interpolates between them in different ways, for creating grunge maps and such, mixing noises, etc. Most of the logic was removed to simplify what you needed to dig through, but the return value of the code you saw was intended to be used as the interpolation factor for a lerp operation. There's no logic in there important enough to fix; I just needed to understand why it was failing.

Thank you very much for your help. I appreciate you taking the time to look at it.
Last Edit: February 21, 2019, 11:09:21 pm