Show Posts

This section allows you to view all posts made by this member. Note that you can only see posts made in areas you currently have access to.

Messages - CodeRunner

Pages: 1 2 [3] 4 5 ... 12
What would be the best option for initializing variables in a pixel processor node? I'm trying to set up some variables that will be used inside the pixel processor before the per-pixel processing actually begins.

I tried using the "output size" function to generate variables, but for some reason Designer doesn't always execute that function when the graph loads. So the processor ends up running without the variables being initialized, and I have to tweak some of the exposed variable values to get them to initialize.

Does anyone know why this happens, or if there is a safer way to initialize them outside of pixel processing?
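Roughly what I'm after, as a Python sketch (the names are illustrative, not Designer API): compute the graph-level constants once, up front, so the per-pixel function only ever reads them.

```python
# Illustrative sketch (not Designer API): compute per-graph constants
# a single time, before the per-pixel function runs.

def make_processor(width, height):
    # Initialized once, up front - e.g. values derived from the output size.
    inv_size = (1.0 / width, 1.0 / height)

    def per_pixel(x, y):
        # The per-pixel function only reads the precomputed values.
        u = (x + 0.5) * inv_size[0]
        v = (y + 0.5) * inv_size[1]
        return (u, v)

    return per_pixel

shade = make_processor(256, 256)
print(shade(0, 0))  # (0.001953125, 0.001953125)
```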

Substance Designer - Feature Requests - Ghost Manager
 on: May 07, 2019, 11:59:11 pm 
We need a ghost-node manager that works like the dependency manager. Each ghost node should be listed so we can swap it out for a valid existing node that has the same connector names (or types).

There is an issue that causes function nodes in unloaded graphs to turn into ghost nodes when we rename their identifiers, which can cause a lot of distress. This would seriously help with that.

EDIT: Most importantly, make sure to leave some kind of identifier on the ghost node that lets us know what it was. Even if everything else is ignored, this feature should be included for sure, if it's possible.

That was exactly what I needed. Plus additional information that I didn't know. Many thanks!

Do unplugged parameters always default to zero? Is it considered safe practice and normal workflow to leave function node parameters unplugged to specify zero, or should it be avoided?


Thanks much for the information.

I was exposing the seed from a sub-graph that had 3 instances in the current graph, and was offsetting each of those 3 seeds by an (exposed) set amount plus a specific (hard-coded) offset for each one. For some reason, I was encountering a strange problem where they would all remain static after the "set amount" reached a certain threshold. But after changing the hard-coded offset amount, it started working. So I'm not sure what was going on there.

Good to know there is no internal limit apart from data limits.
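For anyone curious, the setup I described amounts to something like this Python sketch (the function names are mine, not Designer's; `random.Random` stands in for Designer's seeded RNG):

```python
# Illustrative sketch (not Designer internals): each sub-graph instance
# gets a distinct seed from an exposed "set amount" plus a hard-coded
# per-instance offset.
import random

def instance_seeds(base_seed, set_amount, hard_offsets):
    return [base_seed + set_amount + off for off in hard_offsets]

seeds = instance_seeds(0, 10, [0, 100, 200])
values = [random.Random(s).random() for s in seeds]
print(seeds)  # [10, 110, 210]
```

As long as the per-instance offsets keep the final seeds distinct, each instance draws a different value.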

Does anyone know what the internal maximum random seed is for Designer? It's not really 10 is it? This seems like it would severely limit the number of random possibilities. Values above 10 seem to function, but are they defective in some way? I haven't been able to find anything about this in the manuals yet.

Anyway, after exposing the parameter and trying to generate random values, it appears as though very high seeds are identical to each other (or I may just be doing something else wrong).

If anyone can help, I would really appreciate it.

Substance Designer - Discussions - Random Integer
 on: May 04, 2019, 03:51:59 pm 
I'm trying to write a simple function node that generates a random integer within a range of values supplied to the function. This was my first attempt:

But this had terrible results. While generating a random number between 0-5, I got the exact same result with random seeds 0, 1, 2, and 3 (maybe 4 too). So then I got a little frustrated and tried this:

Which appears to work much better. Different results for every random seed. Can anyone help me understand why? Am I making an obvious mistake in the first one that I'm not seeing? Or is there some internal operation concept that I don't get?
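For reference, the pattern I'm trying to reproduce is the usual "scale a uniform float and floor it" approach, sketched here in Python (illustrative only; Designer's Random node and my actual node graph may differ):

```python
# Illustrative sketch of a range-bounded random integer (not the actual
# Designer node graph): scale a uniform float in [0, 1) so that the
# inclusive range lo..hi is covered, then floor it.
import math
import random

def rand_int_in_range(rng, lo, hi):
    # rng() returns a float in [0, 1); hi is inclusive.
    return lo + math.floor(rng() * (hi - lo + 1))

rng = random.Random(42).random
samples = [rand_int_in_range(rng, 0, 5) for _ in range(10)]
# every sample lies in 0..5
```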

Thanks for any advice!

Does anyone know if resizing images is a safe way to perform a high-pass operation? Essentially, I'm extracting the small details from larger details by blurring the image and subtracting that blurred image from the original (high-pass). But I've found that simply resizing the image to a smaller scale, then back up to normal is much more efficient than blurring.

Does anyone know if this is a safe method to use? I've noticed that the pixel processor's bilinear sampling is apparently different from the filtering used by Designer when it resizes. I'm just wondering if the resized images are going to differ drastically across different machines. I'm only intending to use my graphs for texturing model assets offline, but would still rather make something stable.
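To make the comparison concrete, here is a 1-D Python sketch of the two low-pass routes (illustrative only; Designer's actual resize filtering may differ, which is exactly my worry):

```python
# Illustrative 1-D sketch: high-pass = original - low-pass, where the
# low-pass comes either from a blur or from downscaling then upscaling.

def box_blur(img, radius=1):
    out = []
    n = len(img)
    for i in range(n):
        lo, hi = max(0, i - radius), min(n, i + radius + 1)
        out.append(sum(img[lo:hi]) / (hi - lo))
    return out

def down_up(img):
    # Halve by averaging pairs, then restore size by repetition
    # (nearest-neighbour upscale; Designer's filtering may differ).
    small = [(img[i] + img[i + 1]) / 2 for i in range(0, len(img) - 1, 2)]
    return [s for s in small for _ in (0, 1)]

img = [0.0, 1.0, 0.0, 1.0, 0.0, 1.0]          # pure small-scale detail
hp_blur = [a - b for a, b in zip(img, box_blur(img))]
hp_resize = [a - b for a, b in zip(img, down_up(img))]
print(hp_resize)  # [-0.5, 0.5, -0.5, 0.5, -0.5, 0.5]
```

Both routes extract the fine detail; they just use different low-pass kernels, which is why the results are similar but not identical.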

Thanks for any help!

Does anyone know the answer? Seems like knowing the answer to this question would dramatically change the way one would design a function graph.

I've been trying to keep mine simple due to the fear that non-executed function nodes may add performance cost. But if it doesn't, it would give us more room to provide extra user customization.

Hey guys, I am (again) having some issues with patterns showing up in the randomness of my FX Map. I've trimmed the node down into something incredibly simple, where each dot has a random position 0-1 and a random luminosity 0-1, and this pattern shows up:

You can pretty easily see the pattern of luminosity running through the lines, where we have a bright-dark-bright-dark thing happening diagonally.

If anyone can shed some light on why this is happening, I would really appreciate it. From what I can tell, it seems to only happen when I call random from the FX switch node. If I use random in the quadrant node, it goes away.

I was once told by a member of the Allegorithmic staff that all paths of a float / switch (if/else branches) are executed during a single run of the Pixel Processor, even if only one of the conditions is met. Is this true?

I've noticed that the random node is a good way to test these situations. The outcome changes if random is called twice per pixel instead of once, or three times instead of two, because random always outputs the same results if called in the same order, but doing this changes that order.

When I create several conditional branches, where only one is true, random does not seem to be executed for those branches. This makes me think these branches are actually *not* being executed. Unless Designer is internally prepared for such situations and compensates for it by "ignoring" the random calls.

I would like to know how the optimization works. Does the entire conditional branch get skipped (in a way that would make the number of sample calls irrelevant), or are the function nodes designed to internally skip execution when their branch should be ignored (where they still cost something even when not used)?

Can anyone straighten this out for me? If this is true, it dramatically changes the way I can go about designing graphs.

Thanks for any help!

There is an issue that occurs when switching between two nodes that have many exposed parameters. When the user double-clicks, the program has to switch both the parameters and the 2D display, but the large number of parameters takes a second to load, which prevents the double-click from registering.

It's hard to guess, but I think this is happening because the double-clicks are not being detected using time stamps. Or perhaps you need an input buffer, or to detect input on a separate thread apart from image processing.

A great feature to have would be an exposed int quadrant member that represents its "depth offset". When the user changes the value +1, the quadrant would behave as if 1 extra empty quadrant was inserted above it, attached to all 4 inputs.

I'm not sure how difficult this would be to program as a feature (maybe very easy), but it would be big in terms of functionality. One could expose the variable and name it scaling, and have instant noise scaling that is similar to the scaling used by the built-in FX noises. It would also make many FX maps simpler - we would only need to create quadrants that render things.
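Since each quadrant level halves the size of what's rendered beneath it, the requested "depth offset" would amount to a simple power-of-two scale, sketched here (hypothetical feature, so the names and behaviour are my assumption, not current Designer behaviour):

```python
# Illustrative sketch of the proposed "depth offset" (hypothetical
# feature): each extra empty quadrant level halves the rendered scale,
# so a depth offset of d acts like a uniform 0.5**d scale.

def effective_scale(base_scale, depth_offset):
    return base_scale * 0.5 ** depth_offset

for d in range(4):
    print(d, effective_scale(1.0, d))  # 1.0, 0.5, 0.25, 0.125
```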

That's great, I appreciate your help as well. We will figure these confusing things out eventually.

The set node doesn't require a sequence node to follow it, but there are times when not using one can lead to problems like the one I just had. I believe the set node just sets the variable and returns its value as the result.

The sequence node just allows you to do "extra stuff" in a situation that is intended to perform some other specific task. The inputs of the sequence node are like execution streams. "In" is executed first, but the value plugged into it is completely ignored after it executes. Then "Final" is executed next, and the value plugged into that is returned as the result. So if you have 3-4 sequence nodes, you can think of them as execution steps, like a list of things that need to be done in a specific order.

It confuses me all the time when trying to plug up a bunch of sequence nodes. It helps me to remember that it works backwards. Like a line of people trading stuff. Each person asks the person behind them to give them what they want, so they can trade it to the person in front of them for whatever they want. The requesting starts at the end, but the trading begins at the beginning.

When you plug a set node into a sequence node, you're telling the program to set the variable before whatever comes next. But plugging the set node into the sequence node doesn't actually do anything, other than make sure the set node is executed first. For example, you could plug your set node into a divide node that divides your set value by a million, then plug that into the sequence node and it won't make any difference at all. The divide node will have no effect, even though it will execute correctly.
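The behaviour I'm describing can be modelled in a few lines of Python (the function names are mine, not Designer API; this is just a mental model of Set and Sequence):

```python
# Illustrative model of the Set and Sequence nodes (names are mine, not
# Designer API): "in" is evaluated first for its side effects and its
# value discarded; "final" is evaluated next and supplies the result.

variables = {}

def set_var(name, value):
    variables[name] = value
    return value                 # a Set node returns the value it stored

def sequence(in_value, final_thunk):
    _ = in_value                 # already evaluated; result ignored
    return final_thunk()         # the Final input supplies the result

# Dividing the Set result by a million changes nothing downstream,
# because the "in" value is thrown away either way.
result = sequence(set_var("x", 8.0) / 1_000_000,
                  lambda: variables["x"] * 2)
print(result)  # 16.0
```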
