Show Posts


Messages - teto45

Pages: [1] 2
Let me explain. The idea of this tool is great, and so is how it works, the workflow. But my problem with it is that the result is not satisfactory. At least for me.

What I don't like: you start with a material (typically the one from the tutorial, built from many photos of the same fabric under many angles) that has many variations of color, tint, and so on. Then you produce a variation from a photo (by choosing 2, 4, or 5 main colors) and bam, new material.

The big problem for me is that the variation doesn't reproduce the gradients and subtle color changes of the original material. A pixel's color is white, blue, or red. No orange, no bluish variation. Red, blue, and white, period. Good enough for a distant material, but that's all.

For me, this feature won't be a good reason to buy Alchemy (though creating a material from a photo is), and it must be improved, imho.

Just unbelievable. I just can't believe it. It's a nightmare.

As my English is not that good, my explanations may be fuzzy, that's true. ^^

I'll try to be more specific.

The main goal of this node is to give a designer an alternative when a grayscale image is absent or below a certain value. Imagine you develop a node and ask the user to supply a grayscale image as an input for an effect. The user plugs in nothing, or an image too dark for the purpose of your node (because... reasons). The node lets you plug in an alternative image that will be used instead, so you always have "something" (a neutral grayscale, for example); otherwise your node doesn't work well.

Of course you can do many other things with this kind of node; this is just an example.

In short:
- It's a node with 2 inputs, A and B, and a color picker.
- If A is not plugged into anything -> B is chosen.
- If a pixel from A has a higher value than the value in the color picker -> B is chosen.

For example, you can decide that if the grayscale image in A is too dark (say, a pixel is lower than 100), you lighten it and put the result in B. In this case, the output of the node will never have a pixel with a value < 100.
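The selection rule above can be sketched per pixel in plain Python (the function and parameter names here are illustrative, not the node's actual parameters):

```python
def bypass_pixel(a, b, threshold):
    """Per-pixel selection: fall back to B when A is missing or too bright.

    a: pixel value from input A, or None if A is unplugged (0..255)
    b: pixel value from the alternative input B (0..255)
    threshold: value set in the color picker (0..255)
    """
    if a is None:       # nothing plugged into A -> use B
        return b
    if a > threshold:   # A's pixel exceeds the chosen value -> use B
        return b
    return a            # otherwise keep A's pixel

# With a threshold of 100, a too-bright pixel falls back to B:
print(bypass_pixel(180, 100, 100))  # -> 100
print(bypass_pixel(None, 64, 100))  # -> 64
print(bypass_pixel(40, 100, 100))   # -> 40
```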

Pretty sure it's because a dependency of your node moved or was deleted somewhere. Say you are making a node A that uses non-standard nodes B, C, and D, and C itself uses a node that moved or disappeared. You have to open all nodes (and sub-nodes) that aren't standard and check whether there's a missing dependency.

Yep, you're right. It's better to do what you want in the center first, and move the result afterwards. :)

Important update 8)

For people who use my nodes, please consider updating most of them. I updated 2 of them, Min_Value_Grayscale and Max_Value_Grayscale, which are used in these other nodes (also updated):
  • To the ceiling
  • To the ground
  • By Pass alternative
  • Elevation advanced
  • Elevation
  • Masking Edge
  • Stack auto

Now these nodes won't freeze/crash your computer with high-resolution heightmaps, and they are dramatically faster than before, around 200x with a 2K image.  ;D

Oh, another one, trickier this time, if someone can answer:

I know that nodes use massively parallel processing, like the pixel processor. But imagine I plug the same image twice into a pixel processor: are both clones, meaning that if I change things in the first, the second remains unchanged?

In other words: when I plug one output (of a node) into many inputs (of other nodes), are the images at those inputs clones? Also, in one node with many inputs, if I plug the same image into several inputs, are all the images clones inside the node, so they are independent?

Very noob questions, but I want to be sure that images between nodes are always clones, with no exceptions.  :D



and thanks!  ;D

By the way, no offense to anyone, but the documentation needs to be improved. Too many explanations are missing. :) For example, in functions we have a variable named $normalsomething, and there's nothing about it. What does the variable do? Knowing might help me a lot. Anyway, thanks again.

Good news about Min_Value_Grayscale and its opposite. The problem, until today, was that the reliable method I used was time- and resource-demanding. At 8K, a decent PC would crash or freeze, and many nodes I made use both of these, so it was a huge problem I had to fix. It's done: the nodes shouldn't be a problem anymore and can be used extensively without issue.

Short story: I reduce the input heightmap from whatever size down to 64x64 without losing the highest pixel value. So 1 pixel in the 64x64 image corresponds to a 128x128 square in an 8K image and holds the highest value of the pixels in that square (or the lowest, for Min_Value). After that, I use the old-fashioned way, as every computer can handle 64x64 easily.
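The downsampling trick described above is essentially max pooling. A minimal NumPy sketch of the idea (assuming a square image whose size divides evenly into the pool blocks; this is an illustration, not the node's implementation):

```python
import numpy as np

def pool_max(img, out_size=64):
    """Reduce a square grayscale image to out_size x out_size,
    keeping the maximum of each block (use .min() for Min_Value)."""
    n = img.shape[0]
    assert img.shape == (n, n) and n % out_size == 0
    block = n // out_size  # e.g. 8192 // 64 = 128 for an 8K input
    return img.reshape(out_size, block, out_size, block).max(axis=(1, 3))

# An 8K image would collapse 128x128 blocks into single pixels;
# here a small 128x128 example collapses 2x2 blocks instead.
img = np.random.rand(128, 128)
small = pool_max(img, 64)
print(small.shape)               # (64, 64)
print(small.max() == img.max())  # True: the global maximum survives pooling
```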

During this process I learned some things worth sharing with people who build their own nodes: the pixel processor is a great node, but it can be tricky to use. I discovered 2 things:

- Never use integer values when you address pixels, never. At a position ($pos) [x, y], where [x, y] is the pixelized version of [0.12865, 0.58962] for example, if you want the value of the next pixel, don't ask for [x+1, y+1]; ask for [x+1.0, y+1.0]. Even though it should work, it doesn't.

- More important: don't hesitate to make one pixel processor per operation. For example, if you want to transform 2 contiguous pixels, with pixel A depending on B and then B transformed depending on A, separate the operations, one per node. Because the node uses massively parallel processing, you can't be sure the 2nd operation will run after the 1st one. So if you transform blocks of pixels, and blocks can overlap, you can't be sure of the order of any operation. The solution: do the 1st operation (A is changed using the value of B) and link that node to another one dedicated to the 2nd operation. That way, the 2nd operation is safely done after the first.
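The two-pass idea can be sketched outside Designer too; the point is that each pass reads only the finished output of the previous pass, never its own partial results (a toy Python illustration, not actual pixel-processor code):

```python
def pass_one(src):
    # Operation 1: each even pixel A takes the value of its right neighbour B.
    out = list(src)
    for i in range(0, len(src) - 1, 2):
        out[i] = src[i + 1]
    return out

def pass_two(src):
    # Operation 2: each odd pixel B doubles, reading the *finished*
    # result of pass one, so the ordering is guaranteed.
    out = list(src)
    for i in range(1, len(src), 2):
        out[i] = src[i] * 2
    return out

row = [1, 2, 3, 4]
# Chaining two dedicated passes mimics chaining two pixel processors:
print(pass_two(pass_one(row)))  # -> [2, 4, 4, 8]
```

If both operations lived in one parallel pass, operation 2 could read a pixel before or after operation 1 touched it, and the result would depend on scheduling.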


There is one way: create a random number, and use it to create a grayscale image (say 64x64) that is an output of your node. That way you can keep the value across all your nodes. If the random number is greater than 1, I don't know. ^^


I have 2 quick questions about function nodes:
- Cast nodes: how do they work? For example, I'd like to cast a float to an integer to cut off the decimal part. If I cast 23.6, is the integer value 23 or 24? Am I right that 12.2 becomes 12?
- Also, if I "write" 12 % 2 with the modulo node, the result is 0, right? Just to be sure, since the documentation lacks examples.
One last question: if I have a float2 like [2,3] and add another float2 [7,5] (with the add node), is the result [9,8]? By extension: [2,3]*[5,7]=[10,21]?
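Assuming the float-to-int cast truncates toward zero rather than rounding (GLSL-style behavior; worth confirming against Designer's docs), the examples above work out like this in Python:

```python
# Float-to-int cast, assuming truncation toward zero (not rounding):
print(int(23.6))   # -> 23, not 24
print(int(12.2))   # -> 12

# Modulo:
print(12 % 2)      # -> 0

# float2 arithmetic is component-wise:
a, b = (2, 3), (7, 5)
print(tuple(x + y for x, y in zip(a, b)))  # -> (9, 8)
c, d = (2, 3), (5, 7)
print(tuple(x * y for x, y in zip(c, d)))  # -> (10, 21)
```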

Thanks for your answers!  ;D

Thanks for your answer.  ;D

By the way, just a thought: in Designer, the library mixes nodes for "normal" substances and MDL substances, and it's starting to be a mess. Two separate libraries, with a tab each, would be appreciated.  :D

I wouldn't call it closed.
I hope so; remember PhysX...
Because, you know, for the moment the technology is free, but soon enough it could be usable only with an NVIDIA graphics card...

But OK, thanks for your explanations. I don't know many renderers; I hope it'll be available for game engines soon. I must admit the technology is fantastic.

Ah, I see... Thanks.

So I hope that Alle. won't spend too much time on a closed technology.

This thread is a success, obviously !  8)
