Show Posts

This section allows you to view all posts made by this member. Note that you can only see posts made in areas you currently have access to.

Messages - Cory McG

Pages: 1 [2] 3 4 ... 24
The functions are good for setting up dynamic substances that respond to input, or for complicated filters. I don't think they can be used for custom shaders (although the developments in the new versions of Designer always surprise me!)
One thing to point out... the colors of the connectors are significant. The green/blue ones are floats, shifting from green toward blue as the number of dimensions goes up (dimensions, channels, or entries in a vector, depending on how you think about it). So a three-dimensional float and a one-dimensional float can't be added directly (which is why the line turns red). However, there are plenty of Vector and Swizzle nodes to get the dimensions to line up!
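To picture the dimension rule outside Designer, here's a tiny Python sketch (the function names are mine, not actual Designer nodes): adding a float3 to a float1 fails until the scalar is expanded to match.

```python
def add_vectors(a, b):
    # Component-wise addition; mismatched dimensions are an error
    # (the function-graph equivalent of the red connection line).
    if len(a) != len(b):
        raise ValueError("dimension mismatch: %d vs %d" % (len(a), len(b)))
    return tuple(x + y for x, y in zip(a, b))

def swizzle_to_float3(scalar):
    # Broadcast a one-dimensional float to three dimensions,
    # the way a Vector/Swizzle node lets you line things up.
    return (scalar, scalar, scalar)

color = (0.25, 0.5, 0.75)  # a float3
gain = 0.25                # a float1
result = add_vectors(color, swizzle_to_float3(gain))  # (0.5, 0.75, 1.0)
```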

Well, to start with, your friend is wrong... Designer is perfectly capable of generating mesh data maps: AO, curvature, and normals, among others. And with these you can make high-quality, custom-fitting materials for your imported meshes without trouble.

I had Designer for a year or two as a stand-alone program before I bought Painter (and I haven't actually USED Painter yet... still in the 'figuring out the interface' phase). It's just fine for many types of texturing. You can export individual maps or SBSARs for dynamic, in-engine substances, and you can import custom maps for things like blending two materials together.

What it can't do is let you add hand-painted details. Placing screws on a machine, for example, is probably easier in Painter. In Designer you could create a screw adder and place them one at a time by specifying X and Y coordinates, but obviously that isn't as user-friendly.

I'd say go ahead and get Designer... It's a great piece of software and a powerful tool.

For splatmaps, this is, by the design of the splatmap system, quite straightforward. Since splatmaps use red, green, blue, and alpha, you can use the RGBA Split node found in Filters/Channels to separate them all out in a user-friendly way.
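As a rough sketch of what that split amounts to (plain Python lists standing in for an image; this is the idea, not Designer's actual implementation):

```python
def rgba_split(image):
    # Split an RGBA image (rows of (r, g, b, a) pixels) into four
    # grayscale masks, one per channel.
    return {name: [[px[idx] for px in row] for row in image]
            for idx, name in enumerate("rgba")}

# A 2x2 splatmap: each pixel fully weights one terrain layer.
splat = [[(1.0, 0.0, 0.0, 0.0), (0.0, 1.0, 0.0, 0.0)],
         [(0.0, 0.0, 1.0, 0.0), (0.0, 0.0, 0.0, 1.0)]]
masks = rgba_split(splat)
# masks["r"] is now a standalone mask for the first layer.
```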

I was reminded of this brilliant filter on Share...
I think it will help you. The issue you're running into is that non-90-degree rotations don't tile without resizing. This filter fixes that. I'd suggest borrowing the code used inside and putting it in a function in your own Pixel Processor. You'll probably want to restrict the possible rotation results to a subset of rotations that you know won't end up tiny... some of the really fractional rotations have to be scaled down quite heavily.
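For reference, the underlying math (my assumption about how such a filter works, not a claim about its actual code): a rotation by atan(p/q), for integers p and q, maps a square tiling lattice onto itself, provided the pattern's features also shrink by 1/sqrt(p^2 + q^2). That's exactly why the "really fractional" rotations come out so small.

```python
import math

def tileable_rotation(p, q):
    # A rotation by atan2(p, q) keeps a square tiling seamless if the
    # pattern is also scaled down by 1 / sqrt(p*p + q*q).
    angle = math.degrees(math.atan2(p, q))
    scale = 1.0 / math.hypot(p, q)
    return angle, scale

# A few "safe" rotations: the steeper the ratio, the heavier the shrink.
options = {(p, q): tileable_rotation(p, q) for p, q in [(1, 1), (1, 2), (1, 3)]}
```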

I hope this helps!

You could duplicate the same subgraph's node throughout, but if you want your graph to work quickly, from what I understand this would be the wrong thing to do... So I agree with you! That would also be a good place to use a radio node. Another way to think about this could be just a feature to hide the connection lines... replace _____ with _... and  ..._ maybe? Since really it's an interface thing rather than a construction thing. I'm pretty sure I'd use this a lot, actually.

So, it looks to me like this is a system for teleporting inputs, so the lines are easier to follow. It's a neat idea. I think a lot of this usefulness can be done the same way in Designer using referenced sub-graphs, where you build a new graph with custom inputs and outputs, and drag that onto your main graph. There are some situations where this isn't the best solution (like when you need many of the in-between steps in what would have been your subgraph) and in these situations I can see the radio node being the best sort of tool to make things clean.

I think either you're wrong, or I'm not understanding correctly. Offset is only the side-to-side motion of the new image, relative to the old image. Skew and rotation would be handled by the Matrix values.

Well, mental health aside (although that is an important topic as well), the Transform node is really driven mostly by a four-value transform matrix, which is... not very intuitive at all. What SD shows you is a set of controls that alter the matrix values in programmed ways, so you can make the changes you want intuitively. To see the settings that result in something like skew (which can also be thought of as rotation and stretch combined), click the Matrix button and you'll see four numbers.

I can't really tell you what those numbers mean, but those, with help from Offset, are the values that drive the transform. So if you want to replicate a transform, copy those four numbers into the replication and it should work just fine.
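Here's a small Python sketch of my reading of it (an assumption on my part, not Designer's actual internals): the four numbers act as a 2x2 linear transform on the coordinates, and Offset translates the result afterwards.

```python
def transform_point(matrix, offset, point):
    # matrix = (m11, m12, m21, m22); offset = (ox, oy); point = (x, y).
    m11, m12, m21, m22 = matrix
    ox, oy = offset
    x, y = point
    return (m11 * x + m12 * y + ox,
            m21 * x + m22 * y + oy)

# An off-diagonal value shears: the skew-as-rotation-plus-stretch case.
skew = (1.0, 0.5, 0.0, 1.0)
p = transform_point(skew, (0.0, 0.0), (0.0, 1.0))  # (0.5, 1.0)
# Copying the same four numbers (plus Offset) reproduces the transform exactly.
```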

The trick is making sure the Tile Generator is set to Max blending mode rather than the default Add. This makes two adjacent parabolas compete for visibility... the highest (brightest) one wins each pixel.
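The difference is easy to see with a couple of pixels (a quick sketch, not Designer code):

```python
def blend_max(a, b):
    # Max blending: each pixel is won by whichever input is brighter there.
    return [[max(x, y) for x, y in zip(ra, rb)] for ra, rb in zip(a, b)]

def blend_add(a, b):
    # Add blending: overlapping falloffs pile up and clip toward white.
    return [[min(1.0, x + y) for x, y in zip(ra, rb)] for ra, rb in zip(a, b)]

parabola_a = [[0.9, 0.4]]
parabola_b = [[0.3, 0.8]]
crisp = blend_max(parabola_a, parabola_b)   # [[0.9, 0.8]]
washed = blend_add(parabola_a, parabola_b)  # [[1.0, 1.0]]
```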

The 2d Transform node will do what you need it to, I think.

This sort of thing is something where Substance Designer will really shine. Believable, sculptural, procedural tiles are definitely something you can do, although if you're new to the software, it may take some time to learn and adjust until you get a result you like.

Even if you're only going to be seeing the end result from a top-down, non-perspective view, I think a procedural texture on a flat-plane 3d model is still the way to go. Will this be a digital game, or printed out as physical map pieces? Either way, having a full material that can be rendered in 3d might give you a nice edge. I'm thinking about the effects of lighting and whatnot... A room with a torch on the wall can cast realistic light patterns and reflections on your SD tiles in the right 3d engine... then either screenshot and print, or just have that engine be the one the game renders in. (I'm familiar with Unity, and a system like this would work just fine there.)

One of the features of SD substances is that you can publish SBSAR files, which many game engines can use to create textures on the fly from custom parameters. In this case, you could set up a tile creator that colors and places tiles at random, expose a custom damage amount (some rooms might look newer than others!), and then create several in-engine instances of that substance to build a small collection of unique materials for your floor objects.

I would not bother with Zbrush or anything, since nice looking tiles can be done quite effectively with pure SD. Look through Substance Share for some ideas of what can be done (and download them to see HOW). There are also some tutorials floating around. I won't get into the specifics of how to make nice tiles here, since I really only came to post responses to your questions about work-flow. But there's plenty of info out there, and lots of folks here who would be happy to give advice!

While a clock would reference all those materials within the Substance software, the practical pipeline processes would merge those maps into one set: a single 'clock' material, in a sense, or three or four 'clock' maps.

Tiling textures is just the first part of texture creation, in my opinion. The real power comes from filters that alter the substance using mesh-data. Baked maps of the mesh can tell a graph what areas are exposed to damage, or where dirt-collecting crevices might be. In the end, your clock might have damaged, un-varnished corners in the wood, patina or rust in the grooves of the metal, and slightly scratched, less glossy plastic in areas that have flat surfaces.

I like to use exported maps in my workflow, although many engines can read SBSAR files now, too. So I'll export these combined and filtered materials as Clock_BaseColor, Clock_Normal, and Clock_MetallicGlossiness and move them all into Unity. In Unity, gloss is stored in the metallic map's alpha channel, and I only really find AO necessary if I'm aiming for photorealism, so baking it into BaseColor makes sense for me. No ID map is necessary, because all the materials are combined.
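The packing step itself is trivial. A sketch with plain pixel tuples (a real pipeline would use an image library; the Clock_* names are just the ones above):

```python
def pack_metallic_gloss(metallic, gloss):
    # metallic and gloss are grayscale images (rows of floats in 0..1).
    # Returns an RGBA image: metal in RGB, gloss in the alpha channel,
    # matching Unity's Metallic/Smoothness convention.
    return [[(m, m, m, g) for m, g in zip(mrow, grow)]
            for mrow, grow in zip(metallic, gloss)]

metal = [[1.0, 0.0]]
gloss = [[0.3, 0.9]]
packed = pack_metallic_gloss(metal, gloss)  # a Clock_MetallicGlossiness, in effect
```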

The same sort of thing is true if you use SBSARs in-engine. The engine will bake out composite maps before rendering anyway; it just does it before runtime, or as the code demands (for dynamic textures).

For something like this, I think you could pull it off with some Transforms, some Blends, and a plain white square.

One thing that might be quite useful to you is that you can change the tiling of Transform nodes so items don't repeat. If you want to make a mask (to feed to a Blend node, so you can blend your repeating diagonal square pattern with the big tiles), you can feed a plain white square into a Transform, set the Transform's tiling setting to "Horizontal Only" or "None", and shrink the square down vertically until the strip of diagonals is in the right place.

One trick I like to use for predictable randomness is to use a second input image as the random seed... usually just channel mixes of white noise or something. It might help you here.

For your 9x9 grid, you could sample the random input image with something like floor(pos*9)/9 to get the top-left pixel of each cell (which would be random but consistent).
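Spelled out (a sketch of the idea in Python, not actual Pixel Processor syntax):

```python
import math

def cell_corner(u, v, n):
    # Snap a UV coordinate to the top-left corner of its cell in an n x n
    # grid: every pixel inside a cell snaps to the same corner, so they all
    # read the same (random but consistent) value from the noise input.
    return (math.floor(u * n) / n, math.floor(v * n) / n)

def sample_cell(noise, u, v, n):
    # noise is any callable noise(u, v) -> float, e.g. a white-noise lookup.
    cu, cv = cell_corner(u, v, n)
    return noise(cu, cv)
```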

The Material Blend node would help you here. Use some sort of noise as a mask (Clouds could work nicely) and adjust it with a Histogram Scan node with the contrast turned up and the other parameter exposed (that parameter would control how much ripping there is). Then create a bare wall material and use it as material 1, use your wallpaper as material 2, and use your (black and white) noise image as the mask. Some normal map/height blending with the mask can be good to give the wallpaper edges some depth.

To go a step further, you could do this twice: once blending the wallpaper with ripped paper, using a slightly less contrasty version of the noise (black and white with a tiny bit of grey between), and then blending again with the blank wall using the noise at full contrast (just black and white this time).
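In pseudo-form, with hard thresholds standing in for the two Histogram Scan settings and single grayscale values standing in for materials (a sketch of the layering, not Designer's blend math):

```python
def threshold(noise, cutoff):
    # Full-contrast black/white mask, like a cranked-up Histogram Scan.
    return [[1.0 if v > cutoff else 0.0 for v in row] for row in noise]

def blend(mask, top, bottom):
    # Per-pixel mix: mask 1.0 shows the top material, 0.0 the bottom one.
    return [[m * t + (1 - m) * b for m, t, b in zip(mr, tr, br)]
            for mr, tr, br in zip(mask, top, bottom)]

# One noise image, three zones: intact wallpaper, ripped paper, bare wall.
noise = [[0.2, 0.5, 0.9]]
wallpaper, ripped, bare = [[1.0] * 3], [[0.6] * 3], [[0.1] * 3]

pass1 = blend(threshold(noise, 0.4), ripped, wallpaper)  # rip the paper
result = blend(threshold(noise, 0.7), bare, pass1)       # expose bare wall
# result: [[1.0, 0.6, 0.1]] -- intact, ripped, bare
```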
