Show Posts

This section allows you to view all posts made by this member. Note that you can only see posts made in areas you currently have access to.

Messages - Cory McG

Pages: [1] 2 3 ... 24
I think this is all about preference. Some people like having all the information there to be used in one graph, some like it divided into many. But either way, sometimes a material is being built that SEEMS like it will be simple, but quickly grows larger than expected. So if I plan to make a one-graph material that ends up clearly split down the middle between generator and effect, I might decide at some point that it would be easier to read and understand if I start splitting it up.

I rarely know ahead of time how the material will work, except for a vague idea, so I don't usually know how many pieces to make right away. But as I'm going along, I'll realize I need graphs that generate a specific thing, and start a new one there. Rather than splitting, therefore, I tend to add. But I can see how splitting would happen.

For me, one thing I've found is that it's easier to return to a graph that's broken into smaller pieces, where each piece does its own job. If I'm only doing something once and won't need to look at it in the future (very rare; re-using your old work is a great way to get things done faster), I'll do it all in one, without worrying about keeping stuff clean. But usually I think of it like object-oriented programming and make functions and subroutines.

I suspect he means Photoshop's Radial Blur tool with the Zoom option selected. It can make images look like the camera is moving forward through space with a slow shutter speed. This would be possible to rebuild in SD, and looking at the inner workings of other blurs like Slope Blur and High Quality Blur would give hints on how to do it. Some of the creators here might have some tricks up their sleeves, too.
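For anyone curious what a zoom blur actually does, here's a rough numpy sketch (the function name and parameters are mine, not anything from SD or Photoshop): each pass samples the image at coordinates pulled a little further toward the center, and the passes get averaged together.

```python
import numpy as np

def zoom_blur(img, center=(0.5, 0.5), strength=0.2, samples=16):
    """Average progressively center-scaled copies of img (H x W grayscale).

    A rough sketch of a 'Radial Blur / Zoom' effect: each pass samples the
    image at coordinates pulled toward the center by a growing factor.
    """
    h, w = img.shape
    ys, xs = np.mgrid[0:h, 0:w].astype(float)
    cy, cx = center[1] * (h - 1), center[0] * (w - 1)
    acc = np.zeros_like(img, dtype=float)
    for i in range(samples):
        t = 1.0 - strength * i / (samples - 1)   # scale factor toward center
        sy = np.clip(cy + (ys - cy) * t, 0, h - 1).astype(int)
        sx = np.clip(cx + (xs - cx) * t, 0, w - 1).astype(int)
        acc += img[sy, sx]
    return acc / samples
```

In SD you'd get the same effect by chaining scaled Transformation 2D nodes into a Blend stack, which is roughly what the built-in blurs do internally with FX-Maps.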

This is probably just how the image was saved. The pixels get recorded as black or white, and their transparency/translucency is recorded as another channel entirely. The system is probably just discarding the translucency channel, and what's left is black (which your shapes are) until the pixels hit 0 visibility, where it won't have been recorded and so defaults to white.

What you can do is use a Blend node to combine the black, translucent bitmaps with a white plain color, white on the bottom, and then convert that to grayscale. That should solve your problem.
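If it helps to see the math behind that Blend-over-white trick, here's a small numpy sketch (the function name is mine): composite the RGBA pixels over solid white with the standard "over" operator, then average the channels down to grayscale.

```python
import numpy as np

def flatten_on_white(rgba):
    """Composite an RGBA image (H x W x 4, floats in 0..1) over solid white,
    then convert to grayscale -- the same idea as a Blend node with a white
    plain color underneath, followed by a grayscale conversion."""
    rgb, alpha = rgba[..., :3], rgba[..., 3:4]
    over_white = rgb * alpha + 1.0 * (1.0 - alpha)   # standard 'over' operator
    return over_white.mean(axis=-1)                  # naive grayscale
```

A fully opaque black pixel stays black, a fully transparent one becomes white, and everything in between lands on the gray ramp, which is exactly the mask you want.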

Substance Designer - Discussions - Re: Blend node
 on: July 25, 2018, 12:48:59 am 
This dilemma highlights my absolute favorite aspect of Substance Designer... its modularity. The ability to create the tools we need to work more effectively. A multi-input blend node is something that can quite easily be made as a resource, once you know how, and then put in the library with the other assets for use in your future projects.

I did a quick search on Source and didn't find anything, but a blend node with optional additional inputs should be quite possible to put together if there's interest in it.

Edit: I looked a little further and found this brilliant resource...
I don't know if it has all the features of Blend, but adding in a built-in transform option would be quite useful in a lot of cases, and it's already set up to take up to 9 inputs.

Substance Designer - Discussions - Re: Odd UV positions
 on: July 24, 2018, 04:15:03 am 
I suspect this is more of a Blender issue than an SD one (or SP in this case) but I think I can help you out anyway.

I suspect that in Blender, your file has more than one UV map stored in memory, and the one Blender is set to isn't the same one Substance Painter is reading by default. I would suggest looking for this extra UV map, and deleting the one you don't want.

In Blender you can do this by going to the Object Data tab of the Properties window (its icon is three verts connected by edges into an upside-down triangle) and scrolling to the section called UV Maps. If my guess is right, there will be at least two. Figure out which one you want to get rid of by clicking on each and watching the effect in the UV/Image Editor, then use the - button to delete the one you don't want.

This should make everything consistent.
If this wasn't the problem, maybe some screenshots of the object settings could shed more light, but this is my best guess for now.

You mention here the difficulty in using two different terrain types in a system like this... And I think you're right; the tutorials and examples use very visually different assets because that makes it easy for a person learning the system to see how it works. In practice, though, sometimes you really Would want to blend two different terrain types, and it can be a challenge. By default the colors are simply interpolated: color1 × weight1 + color2 × weight2 (with splat maps set up specifically so that all channels sum to 1). And this can look muddy and unrealistic.

One solution people have found is to use custom shaders that do height-based blending. Height detail can easily be included in the alpha channel of either the basecolor or the normal map, and that can be used (with influence from the splat, of course) to choose which type shows. This way you can have features like sand that gets into the cracks of stone, or gravel at the edges, and it looks like one type is simply submerged into the other.
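To make the math concrete, here's a small numpy sketch of both approaches (the function names and the `depth` transition parameter are my own choices; real terrain shaders do this per-fragment in HLSL/GLSL):

```python
import numpy as np

def linear_blend(c1, c2, w1, w2):
    """Default splat blending: a straight weighted average (can look muddy)."""
    return c1 * w1 + c2 * w2

def height_blend(c1, c2, h1, h2, w1, w2, depth=0.2):
    """Height-based blending sketch: whichever material is 'taller' after
    adding its splat weight wins, with a transition band of width `depth`.
    h1/h2 are heightmaps in 0..1; w1 + w2 is assumed to sum to 1."""
    b1, b2 = h1 + w1, h2 + w2
    peak = np.maximum(b1, b2) - depth
    f1 = np.maximum(b1 - peak, 0.0)   # only material near the local peak
    f2 = np.maximum(b2 - peak, 0.0)   # contributes to the final color
    return (c1 * f1 + c2 * f2) / (f1 + f2)
```

With the height term included, the taller material (stone) completely masks the shorter one (sand) except where the splat weight pushes the sand up into the cracks, which is what sells the "submerged" look.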

My favorite post about this can be found here: (I like it because it very clearly illustrates the math involved)

It's probably possible, but it seems like it'd be tricky. If you are trying to make the cracks seem independent for each brick, I usually find Directional Warp works just fine. Use a diagonal warp angle, and plug the bricks in as the intensity input... Be sure to turn Intensity up quite high. And it will chop up your cracked pattern nicely.

If you REALLY want to use rotation... well, maybe I'll check by this thread again in a few days and drop my idea for that if you're still interested.

Substance Designer - Discussions - Re: Tile sampler?
 on: July 15, 2018, 06:38:37 am 
You can often get these sorts of results by having multiple tile samplers with all the same settings, and layer them afterwards. This can often be annoying, though, trying to get everything to line up right, and to get the pieces to overlap when they're supposed to.
In your case, it might be possible to use color in a sneaky way here... By using Tile Sampler Color you might be able to pull in the required maps as separate colors with an RGBA Merge node. Red for height, green for metallic, blue for basecolor (if you have paper labels or writing on the surface, for example). And an alpha map for the general shape, of course. Then make sure the tile sampler has its blending mode set to Alpha Blend, and it should take care of the layering for you. Then separate the colors back out into their proper greyscale places, use a gradient map or some other trick to bring color to the Base Color, and you should be set.
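The packing idea is just putting one grayscale map in each channel and splitting them back out afterwards. A tiny numpy illustration (function names are mine; in SD this is RGBA Merge on the way in and RGBA Split on the way out):

```python
import numpy as np

def pack_maps(height, metallic, basecolor_gray, shape_alpha):
    """Pack four grayscale maps (H x W each) into one RGBA array --
    the same idea as feeding an RGBA Merge node into Tile Sampler Color."""
    return np.stack([height, metallic, basecolor_gray, shape_alpha], axis=-1)

def unpack_maps(rgba):
    """Split the channels back out after tiling (RGBA Split equivalent)."""
    return rgba[..., 0], rgba[..., 1], rgba[..., 2], rgba[..., 3]
```

Because all four maps travel through the tile sampler as one image, the alpha-blended layering stays perfectly in sync across height, metallic, and basecolor.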

This is an interesting question, in general, and one that doesn't really have a single answer. I think this is a perfect example of where the "artist" part of "texture artist" comes in... You can make a very technically excellent and accurate material, but you need an artistic eye to find parameters and settings that hide the limitations of the medium and 'sell' the image. For single-image tiling it's often just a matter of choosing the right random seeds, trying out different combinations until they look balanced. With the right distribution of lights, darks, detail, and resting area, an image can be tiled without a casual viewer ever noticing. Or, those very same functions, with some subtly different choices, can distract even the most immersed viewer. There's no formula for this, though.

There were some technical questions in your post, too, though. The first was how to use textures in large assets like terrain in a way that hides the inevitable repetition... The best answer for terrain, I think, is one that's already very well-supported in Unity and Unreal, and that's splat maps. Making a good splat-mapped terrain is a whole other skill, and there are some clever tools to assist as well, but the basic idea is that the terrain has several textures loaded into it (I usually use four, but more or fewer can work), and an additional map indicates through its colors which of the textures to sample at any given point.
If the final map (the splat map) is varied enough, and matches the shape of the terrain well, the repetition of the terrain textures (depicting things like grass, bare dirt, gravel, rock, etc) is hidden. Before one texture can fully repeat it's already changing into another type.
It's a time consuming process, though, and in many cases, even professional studios get lazy with areas that won't get much traffic... my favorite example is half-way up the far side of the big mountain in Grand Theft Auto V. The otherwise impressive splatting done on the rest of the terrain starts to lose its momentum, and suddenly the repeating patterns are very visible.
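The sampling formula itself is simple: at every pixel, each terrain texture is multiplied by its splat channel and the results are summed. A numpy sketch (names are mine; engines do this per-fragment in the terrain shader):

```python
import numpy as np

def splat_blend(textures, splat):
    """Blend terrain textures with a splat map.

    textures: (N, H, W, 3) array of tiling texture samples.
    splat:    (H, W, N) weight map; the N channels at each pixel sum to 1.
    Returns the (H, W, 3) blended terrain color.
    """
    # weighted sum over the N texture layers at every pixel
    return np.einsum('nhwc,hwn->hwc', textures, splat)
```

Because the weights sum to 1, pure-colored splat regions show a single texture cleanly, and only the transition zones mix.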

You mentioned decals. Decals can be very useful, since they represent something we'd expect to see anyway... variation. Patches of plant life, cracks in a road, roads in an otherwise natural setting, lichen... Plastic bags seems like a popular one. All these sorts of things can be placed around a texture in a way that looks natural, and also simply covers up the parts that would have repeated too much.

I've tried using a combination of big textures and detail textures. I haven't really had good results. The assets usually seem boring from far away, and sort of bumpy up close. I had better luck with a range of differently sized pieces that could be repeated... I'm thinking here of a large boulder I once had to make. I tried having the macro features, including AO, cracks, and weathering, handled with a custom, UV-fitted texture, plus a plain rock texture that could repeat to make it seem high-res when standing next to it. What worked better was having small rock shapes make up the human-height part, with medium-sized assets above that (which would never be seen too closely since they were up in the air) and one large piece on top to show the shape (which could be very low-res compared to the ground-level assets).

Finally, your question about SD-specific zooming. Perlin noise is a good example for this question, not just because it's a feature you enjoy, but because it works in a unique way that may give you some ideas of how to handle scaling in your own substances.
It can be thought of best as an algorithm, rather than a texture, I think. You may have read this elsewhere, but perlin noise is the result of several layers of 2d noise at different levels of blurriness. In this case, because SD has features like FXMap (which can generate tiled images very powerfully), the 2d noise can be thought of as a field of randomly placed white blobs on a black background. Combine this with another field of randomly placed blobs, but this time smaller, and more of them, and combine this again... and you get something like perlin noise.
The scaling trick is now quite simple... if you're comfortable with Substance Designer function graphs with variables, that is. If you set the size of the blobs to be an inverse of the number, you have a generator that can fill the screen with just enough blobs no matter how many you decide to put. And the more you add, the smaller they are.... Make their position be somewhat tile-like (also controlled by the inverse of the number of blobs...), and adding more blobs will look like the current ones are shrinking and more are being revealed. The fact that all SD functions repeat in X and Y by default means that this will always automatically tile.
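Here's a toy numpy version of that layered-blobs idea (all names and numbers are my own, and this is far simpler than the real Perlin Noise Zoom FX-Map): each layer doubles the blob count while halving both the blob radius and the layer's weight, with wrap-around distances so everything tiles in X and Y.

```python
import numpy as np

def blob_layer(size, count, rng):
    """One layer of randomly placed white blobs whose radius shrinks as
    `count` grows, so the layer stays evenly filled (the inverse-size trick)."""
    img = np.zeros((size, size))
    radius = size / (2.0 * count)            # more blobs -> smaller blobs
    ys, xs = np.mgrid[0:size, 0:size]
    for _ in range(count * count):
        cy, cx = rng.uniform(0, size, 2)
        # wrap-around distances so the layer tiles seamlessly
        dy = np.minimum(np.abs(ys - cy), size - np.abs(ys - cy))
        dx = np.minimum(np.abs(xs - cx), size - np.abs(xs - cx))
        img = np.maximum(img, np.clip(1 - np.hypot(dy, dx) / radius, 0, 1))
    return img

def perlin_like(size, octaves=4, seed=0):
    """Sum blob layers of increasing count and decreasing weight."""
    rng = np.random.default_rng(seed)
    acc = sum(blob_layer(size, 2 ** (o + 1), rng) / 2 ** o
              for o in range(octaves))
    return acc / sum(1 / 2 ** o for o in range(octaves))   # normalize to 0..1
```

Zooming then just means shifting which octaves you draw: drop the largest layer, add a finer one, and rescale, and the pattern appears to flow past the camera forever.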

This can get complicated, as you'll see by opening up Perlin Noise Zoom, going to Edit FX Map, and clicking on the function icon for Branch Offset. All that math controls the position of the blobs. But (somewhat) simpler solutions could be found, too. And I've found once I start in on creating a function like that, I can take one step at a time without any trouble until I look back and see a big mess like that and wonder how I ever figured out how it all worked...

One final note. Any good algorithm that works in one place can work anywhere else that runs code, given enough time and memory. In this case, texturing through algorithms like this using nothing but shaders is possible, where you'd have a non-repeating asset that could go on for miles, if you liked. I had a lot of fun with these sorts of textures when I was learning how to use POV-Ray to create images... although I prefer this more modern workflow.

I hope you found this interesting. I know it's a long reply... But, of course, sometimes a complex subject like this needs a long reply.

Saying Substract might be a side effect of being a Substance user... We type "subst" so much that trying to type the first four letters makes our brain fill in the fifth.

But on to the question... If you want to make sure an input of white noise never gets darker than an input image (levels-out-low style), then it sounds to me like you want to blend the input image and the noise, making sure the blending mode is set to "Screen".
You could reproduce these sorts of calculations inside Pixel Processor without too much trouble, if there's something slightly different you had in mind (like, maybe, making the input image lighter AND darker, but more subtly if the input image is lighter in color), but it sounds like Screen will cover your needs.
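For reference, Screen is just multiply performed on the inverted images, which is why the result can never be darker than either input. A one-line numpy sketch (the function name is mine):

```python
import numpy as np

def screen(a, b):
    """Screen blend: invert both layers, multiply, invert back.
    The result is always >= both inputs, so it can only lighten."""
    return 1.0 - (1.0 - a) * (1.0 - b)
```

This is also exactly what you'd type into a Pixel Processor if you wanted to tweak the behavior.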

Substance Designer - Discussions - Re: Wet Fabric
 on: March 27, 2018, 08:21:59 pm 
I was thinking about this too.

My first thought was about how a (thin) wet t-shirt clings to the skin and you see the skin below it. So I was thinking you'd be able to use a World Normal map, because the "upward-facing" parts are where the shirt would hang tight on the skin, and the "downward-facing" parts are where the shirt would hang away from the skin. But then I realized you'd need the shirt mesh to reference the World Normal map of the body. That would be an interesting challenge.

If you're going with this pre-baked method, it might be best to bake out a height map for the shirt mesh, using a high-poly shirt as the "high poly" and the skin as the low poly... Then project this onto the low-poly shirt's UVs. That way you'll be able to tell which parts of the shirt are close to touching by their low heightmap values.

The above solution is a good one! But, if you want something that's a little more exact, I wrote a little function to allow image warping like this... I thought it might be useful. For example, you could avoid having a hot-spot in the middle, it would be an even gradient up and down.

You're trying to connect an output to a node that's partially responsible for the creation of that output... which would create an infinite loop.

It's hard to tell which node is supposed to be attached to what, here, but I suspect that something got hooked up to something it wasn't supposed to.

For the second poster using Perlin Noise...

Almost all warp nodes you'll find work by telling each pixel where to sample from an image. The reason this isn't working for you is that for any significant warping amount, you're asking the program to sample this pixel from, say, (6, 10), and the next pixel from (6, 14). It's not that the noise input isn't smooth enough; it's that there's that much of a value distance between those two pixels, and the sampling reaches across the gap between a white pixel and a black pixel. Your results come back when the next one says something like (6, 13), which lands back on a white pixel...

A solution to this might be to use a blurred input for the image, then blur a bit and snap back into focus using Levels. This would only work for mask-like images, though, and not height maps...  A warp function that could PUSH pixels instead of pulling them would be best, but aside from vector graphic programs, that sort of function is quite hard to achieve (imagine an image that's not made up of a grid, but a collection of colored dots in arbitrary places, that you then have to connect up and place on a grid)
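To see why the "pull" behavior creates those seams, here's a minimal numpy warp along X (names are mine; real warp nodes work in both axes with interpolation): a hard step in the offset map makes two neighboring destination pixels fetch their colors from sources several pixels apart.

```python
import numpy as np

def pull_warp(img, offset, intensity):
    """Typical 'pull' warp: each destination pixel fetches its color from a
    source position shifted along X by the offset map. A steep step in
    `offset` makes neighboring pixels sample far-apart sources, which is
    what produces the hard seams described above."""
    h, w = img.shape
    ys, xs = np.mgrid[0:h, 0:w]
    sx = np.clip((xs + offset * intensity).round().astype(int), 0, w - 1)
    return img[ys, sx]
```

A "push" warp would instead scatter each source pixel to a new destination, leaving no gaps in the source data, but then you'd have to re-grid the scattered samples, which is the hard part mentioned above.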

If you're content with only slight warping, something with less high-frequency noise like Gaussian Spots, or a Fractal Sum Base with the Max Level set fairly low might let you get away with using regular Warp.

For the original poster, though, I think the issue must be your resolution. Bell shape should definitely be smooth enough to warp without artifacts at 1k, but when testing at higher res, it does start breaking down. It must be the pixel count getting too close to the difference between one shade and the next. I'm not quite sure about the solution to that. Maybe downscaling at first is possible... either way, good luck!

There are ways to do this... I'll set function parts as outputs as I go along, dividing by the max expected result if necessary, so I can get a grey value as a result. If I want a pixel specifically I can set the input X and Y's as float values temporarily, and switch back to my swizzled $Pos later.
