Show Posts


Messages - jtk700cln

Thanks for the reply! Funny, I actually realized this weekend that 3D noise could potentially be a solution. I jumped into Houdini to do point-based noise deformation on a super high-res mesh and realized that 3D noise in SD could achieve the same effect with ease. They both rely on point positions rather than UVs, and that's the key. Along the way it made me realize that I need to do a better job maintaining consistent size between UV shells, so there's that, lol: you can see resolution discrepancies across the tiles, though the texture itself no longer has splits. As for the hardened normals affecting displacement, I'll have to look into that. The edges of the object are fused at this point. By that I mean the points are all fused, so the surface is watertight, but the seam edges do have separate vertices because they belong to separate tiles. Does that technically mean they have split normals? I'd imagine not, but I'll make sure. Thanks for the tips!
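To make the point-position idea concrete, here's a minimal sketch (Python, with a made-up hash-based value noise; SD's own 3D noises are implemented differently) of why noise keyed on world position is seam-proof: two vertices that share a position get the same value regardless of which UV shell or tile they belong to.

```python
import math

def hash3(x, y, z):
    """Deterministic pseudo-random value in [0, 1) for an integer lattice point."""
    h = (x * 374761393 + y * 668265263 + z * 2147483647) & 0xFFFFFFFF
    h = ((h ^ (h >> 13)) * 1274126177) & 0xFFFFFFFF
    return (h ^ (h >> 16)) / 0xFFFFFFFF

def lerp(a, b, t):
    return a + (b - a) * t

def value_noise_3d(px, py, pz):
    """Trilinearly interpolated 3D value noise sampled at a world-space position."""
    x0, y0, z0 = math.floor(px), math.floor(py), math.floor(pz)
    fx, fy, fz = px - x0, py - y0, pz - z0
    # smoothstep fade so the interpolation is C1-continuous across lattice cells
    fx, fy, fz = (t * t * (3 - 2 * t) for t in (fx, fy, fz))
    c = [[[hash3(x0 + i, y0 + j, z0 + k) for k in (0, 1)]
          for j in (0, 1)] for i in (0, 1)]
    return lerp(
        lerp(lerp(c[0][0][0], c[0][0][1], fz), lerp(c[0][1][0], c[0][1][1], fz), fy),
        lerp(lerp(c[1][0][0], c[1][0][1], fz), lerp(c[1][1][0], c[1][1][1], fz), fy),
        fx,
    )

# Two "vertices" at the same world position but in different UV shells:
# the noise value depends only on position, so the seam cannot show.
a = value_noise_3d(1.25, 0.5, 3.75)  # vertex on tile 1001
b = value_noise_3d(1.25, 0.5, 3.75)  # fused twin on tile 1002
assert a == b
```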

Hey all, methodology question here.
We have a very large-scale texturing task at hand, on organic surfaces, with multiple procedurally generated objects that will all call the same Substance Designer graph to generate different textures; the only varying inputs are the position, curvature, and UV map data baked per object. We are trying to do everything we can procedurally in Designer for the final look, and that includes covering seams. The ideal solution would be triplanar projection: hammering the seam crevices with grunge or noise that is projected procedurally to cover our tracks, not just for color but for the normal and height maps as well. I've noticed, though, that triplanar projection has limitations. If you turn the height displacement up too high (in certain cases, if you turn it on at all), you can see the mesh splitting along the seams despite accurate position and world-space maps. It makes sense: triplanar projection isn't magically going to make each side of the seam have perfectly lined-up values. It's a very good helper, but not perfect. However, the discrepancy in this case is high enough to make height maps a no-go along the seams, which is unfortunate. We can get away with normal maps, and once the whole object is colored the seams are very hard to see, but I'm just here to clarify things.
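For reference, the core of a triplanar blend can be sketched in a few lines (Python standing in for the shader math; `pattern2d` is a stand-in for whatever grunge or noise is being projected). The blend depends only on world position and normal, so wherever the baked position or world-space values differ across a seam, even by a texel's worth, the two sides get different heights, and displacement turns that difference into a visible crack:

```python
import math

def pattern2d(u, v):
    """Stand-in 2D height pattern (any grayscale grunge/noise would do)."""
    return 0.5 + 0.5 * math.sin(6.0 * u) * math.cos(6.0 * v)

def triplanar_height(pos, normal, sharpness=4.0):
    """Blend three axis-aligned projections of a 2D pattern, weighted by the normal."""
    px, py, pz = pos
    nx, ny, nz = (abs(n) ** sharpness for n in normal)
    s = nx + ny + nz
    wx, wy, wz = nx / s, ny / s, nz / s   # normalized blend weights
    hx = pattern2d(py, pz)  # projection along the X axis
    hy = pattern2d(px, pz)  # projection along the Y axis
    hz = pattern2d(px, py)  # projection along the Z axis
    return wx * hx + wy * hy + wz * hz
```

Note that nothing in the function knows about UVs at all, which is exactly why it covers seams for color; but it also means any mismatch in the baked position/normal inputs on either side of the seam feeds straight through into a height discontinuity.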

The only alternative as we see it now is to do this seam-edge work on the mesh itself inside Houdini, on a super high-res version, and then bake the normal and height maps out in a traditional high-to-low-res workflow; we could then bring those into Substance Designer to help leverage the process. Does that seem like the correct approach to you? Or perhaps there are some things I am missing. Unfortunately we won't be able to have as much complexity in our seam coverage, but it is what it is. I've attached a screenshot of a sample object, with the top portion untextured atm. I do admit we are almost splitting hairs here, but it would be helpful to get an answer on this. Thank you in advance for your help!

Thanks for all the help, Palirano. If nothing else, the first option is an awesome piece of information, and the BBox_size technique to scale the size is HUGELY helpful.

Moving forward!

Hey all, interesting conundrum here. I'm trying to find a way to intuitively scatter objects within Voronoi cell patterns. From what I'm seeing, there are two fundamental ways to approach this in Substance Designer. The first is to use the preexisting Cells nodes to generate your cells, and then drive other shapes within the center of these using filters to affect the interior shape (see pic one). This works fairly well. However, what if you wanted to actually scatter objects at the center of each cell, and have each object be scaled to fit within the shape itself?

The second pic shows a method I tried, without success. I built a very, very small shape, then splattered it. This shape was duplicated, made larger and more complex, and then scattered in the exact same way the first shape was.
Using the splatter of the first tiny shape and a distance function, I obtained a Voronoi cell pattern. When I blend the second splattered shape with this pattern, the object obviously doesn't scale intuitively, and computation-wise it isn't technically sitting at each cell's center either. Is there a way to achieve this in SD? I'm okay with using functions if necessary, but I don't have experience with them in SD.

If this were in Houdini, I would make the Voronoi cells first, place a point at the center of each cell with the area of the cell stored as an attribute, and then, when I scatter my object to that point, just multiply its size by that attribute. I have no idea how to do this in Substance Designer, though. I was hoping someone could help me out with this? Thanks! I've attached a pic from Houdini to demonstrate a simple version of what I'm trying to achieve.
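For what it's worth, the Houdini-style attribute math is simple enough to sketch in 2D (pure Python; the cell polygons are assumed to be given as vertex lists, which is the part a procedural cell/flood-fill setup would have to provide): centroid and area come from the shoelace formula, and each instance is scaled by the square root of its cell's area so size tracks the cell.

```python
def polygon_area_centroid(pts):
    """Shoelace formula: returns (area, (cx, cy)) for a simple 2D polygon."""
    a = cx = cy = 0.0
    n = len(pts)
    for i in range(n):
        x0, y0 = pts[i]
        x1, y1 = pts[(i + 1) % n]
        cross = x0 * y1 - x1 * y0
        a += cross
        cx += (x0 + x1) * cross
        cy += (y0 + y1) * cross
    a *= 0.5
    # dividing by the signed area keeps the centroid correct for either winding
    return abs(a), (cx / (6 * a), cy / (6 * a))

def scatter_scaled(cells, base_size=1.0):
    """One instance per cell: position = centroid, size scaled by sqrt(area)."""
    out = []
    for cell in cells:
        area, centroid = polygon_area_centroid(cell)
        out.append({"pos": centroid, "size": base_size * area ** 0.5})
    return out

# A unit square cell and a 2x2 square cell
cells = [[(0, 0), (1, 0), (1, 1), (0, 1)],
         [(2, 0), (4, 0), (4, 2), (2, 2)]]
for inst in scatter_scaled(cells):
    print(inst)
```

The sqrt is the 2D analogue of the Houdini trick: area scales quadratically with cell size, so sqrt(area) gives a linear size multiplier for the scattered shape.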

I actually haven't gotten a chance to check this yet; some back and forth on the design for the flowers themselves has slowed the texturing process >:(. I'd imagine the same process should work for me as it does for you, though. As I understand it, as long as the images coming into the Designer graphs have frame ranges, say matte1.$F.tiff, then when you bring the graph into Substance Player it should ask you whether these image files are sequences, and if you say yes, it should really just be plug and play on the export side. That probably doesn't help much; I'm just reiterating what was said earlier. However, when I get to it, if it works, I'll post the workflow here.

Thanks Nicolas!  Great to hear!
I'm gonna do a test this weekend and see if it works out.

Houdini artist here, relatively new to Substance Designer, and LOVING it so far!

My question is about frame-based exporting. I know that Substance Player is the tool used to generate multiple textures from a graph; it's a great way to create procedural variations very quickly.
My question, though, is: can a graph itself source bitmap file sequences, such that when the graph is fed into Substance Player to generate texture sets on a per-frame basis, it can pull the mattes that have a corresponding $F token?

For example:
I have a series of leaf venation displacement maps.  They were built using simulations in houdini. 

These maps are being imported into Substance Designer as bitmaps, and are a critical part of the node network we have been building within Designer to get the final result.

Currently we are using one batch of three files to help push the texturing process.
At the moment we have:

These three mattes are a critical part of the template, and are themselves just one set in a much larger group of matte sets (300 or so altogether).

Ideally we would like to get a finished graph template, and then use Substance Player to kick out a series of textures using the mattes in sequence:


etc., until all 300 are exported.

Is it possible to have Substance Designer know that it is reading sequence information at the inputs of the graph in a situation like this? Apologies if my syntax is poor here; I'm just getting to know the program.

Thank you in advance for your help!

Just to reiterate:
the maps are input as masks into a Substance Designer graph;
leaf_venation.01.tiff, for example, drives things on frame 01, while
leaf_venation.02.tiff (ideally) will drive the next group of outputs on frame 02,
when it comes time to output everything.
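The naming scheme above can be expanded ahead of time to sanity-check the sequence; a minimal sketch (Python; `expand_sequence` is a made-up helper, and it hard-codes two-digit padding rather than Houdini's full $F/$F2/$F4 conventions):

```python
import re

def expand_sequence(pattern, start, end, pad=2):
    """Expand a Houdini-style $F token into per-frame filenames.
    '$F' is replaced by a zero-padded frame number (pad digits)."""
    return [re.sub(r"\$F", f"{f:0{pad}d}", pattern)
            for f in range(start, end + 1)]

print(expand_sequence("leaf_venation.$F.tiff", 1, 3))
# → ['leaf_venation.01.tiff', 'leaf_venation.02.tiff', 'leaf_venation.03.tiff']
```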

Thanks again!
