Show Posts

This section allows you to view all posts made by this member. Note that you can only see posts made in areas you currently have access to.

Topics - HapZungLam

I am pretty new to the FX-Map. I have an FX-Map that does iteration. Right now I just put random rotation in the sequence, so each iteration is randomly positioned and randomly rotated.

Is there a way or function to extract the direction from a vector flow map inside the FX-Map, and use it to rotate each iteration?

I have no idea how to do that. I saw there are Sample Grayscale and Sample Color nodes, but I have no idea how to use them to get the vector rotation data from a flow map and then use it as a rotation.
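For what it's worth, outside the FX-Map the maths of turning a flow-map pixel into a rotation is just an atan2. A minimal Python sketch, assuming the flow map stores a 2D direction in its red/green channels remapped from [-1, 1] to [0, 1] (the usual flow-map convention):

```python
import math

def flow_to_rotation(r, g):
    """Turn one flow-map pixel (red/green in 0..1) into a rotation in turns.

    Assumes the map encodes a 2D direction remapped from [-1, 1] to [0, 1].
    FX-Map rotation expects turns, where 1.0 is a full revolution.
    """
    x = r * 2.0 - 1.0                       # red channel back to [-1, 1]
    y = g * 2.0 - 1.0                       # green channel back to [-1, 1]
    angle = math.atan2(y, x)                # radians in -pi..pi
    return (angle / (2.0 * math.pi)) % 1.0  # wrap into 0..1 turns

print(flow_to_rotation(1.0, 0.5))  # points along +x -> 0.0 turns
print(flow_to_rotation(0.5, 1.0))  # points along +y -> 0.25 turns
```

Inside an FX-Map function graph, I'd guess the same steps would be a Sample Color on the flow-map input, a couple of multiply/subtract nodes for the remap, and an atan2, wired into the Quadrant node's rotation parameter.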


Does this have something to do with PBR, or should I simply force it to 1?

From the PBR Guide volume 2, it says: "The range for dark values could be more tolerant at 30 sRGB and stricter at 50 sRGB."

The DONTNOD physically based rendering chart for Unreal Engine 4 (the link is included in the guide) suggests: "The diffuse part of the base color (the one used by non-metallic materials) must be in the range of the first gradient, 50-243."

What do the tolerant range and the stricter range mean?

Is "dark values don't go under 30-50" a general rule of PBR, or is it just for UE4?
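To make the numbers concrete, here is what those 8-bit sRGB thresholds work out to in linear reflectance, using the standard sRGB decoding curve (my own sketch, not something from the guide):

```python
def srgb_to_linear(v8):
    """Decode an 8-bit sRGB value (0..255) to linear reflectance (0..1)."""
    c = v8 / 255.0
    if c <= 0.04045:               # low end of the sRGB curve is linear
        return c / 12.92
    return ((c + 0.055) / 1.055) ** 2.4

for v in (30, 50):
    print(v, round(srgb_to_linear(v), 4))
# 30 sRGB is roughly 1.3% linear reflectance, 50 sRGB roughly 3.2%
```

So the "tolerant" 30 sRGB floor allows albedos down to about 1.3% linear reflectance, while the "stricter" 50 sRGB floor stops at about 3.2%, which is around where the darkest common real-world materials sit.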

I've been on subscription for over a year now. I am a personal user, mostly using it for my own art at home, and I've discovered that I don't use it every day. I usually have to model and sculpt my subject before going into the texturing phase, and that takes a bit of time, a month or two depending on how lazy I am. But when I get into the texturing phase I'll be using Substance every day. So it's on and off every month or two. So I figure: what if I unsubscribe during my modeling phase and resubscribe when I need it? What would I lose by doing that? Will I still get all the updates when I resubscribe?

I am not entirely sure if it is 2017.3's problem, but I'm sure I've exported 8K without a problem in 2.2.

My PC uses a GTX 970, and I've heard about bugs with the 970's memory, so I switched and tried it on my laptop, which runs an i7-6700 and a GTX 960M. I left it overnight and it's still going. It's been over 10 hours; the progress bar seems to be moving, but extremely slowly. It is using 10% of my CPU and 50,700 K of memory.


Can anyone tell me what those baker parameters do?
Does the bent normal map have to come from a mesh? I tried to load my low poly and add the high poly mesh in the high definition meshes section, and it baked out very badly.

What does a bent normal do, by the way? My colleague suggested I bake a bent normal map to fix some reflection issues when the light isn't hitting the object, and I wonder what the mechanic behind it is. I've read a lot of documents about it, and it seems like bent normal = tangent normal + AO, but it doesn't look that way in other bent normals I've seen. If the AO were added to the tangent-space normal I'd expect to see black, but I don't. I wonder why.

Here is one of those confusing topics again. I remember asking long ago why, when using Designer to convert from metal/rough to spec/gloss, the spec value comes out grey. Someone answered that it's because all objects have specular reflectance, and it's 4% in linear and 2% in sRGB (or the other way around).

Assuming my render engine will "add" gamma to any linear input (the image gets brighter when I set the input as linear), do I still put 4% grey on my spec/reflection map, or should I de-gamma that to 2% (or simply set the input as sRGB)?

What should the reflectance of a regular dielectric material with an IoR between 1.4 and 1.6 be? I believe this has been hardcoded into the PBR shader of the render engine that Substance uses. I know the minimum value is 4% sRGB and 2% linear, but what should the maximum be?
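For reference, dielectric reflectance at normal incidence (F0) follows directly from the IoR via the Fresnel equation, so the 1.4-1.6 range can be worked out. A minimal sketch using the standard formulas (nothing Substance-specific):

```python
def f0_from_ior(n):
    """Normal-incidence reflectance of a dielectric from its IoR."""
    return ((n - 1.0) / (n + 1.0)) ** 2

def linear_to_srgb8(c):
    """Encode a linear value (0..1) to an 8-bit sRGB value."""
    s = 12.92 * c if c <= 0.0031308 else 1.055 * c ** (1.0 / 2.4) - 0.055
    return round(s * 255.0)

for ior in (1.4, 1.5, 1.6):
    f0 = f0_from_ior(ior)
    print(f"IoR {ior}: F0 = {f0:.3f} linear ({linear_to_srgb8(f0)} sRGB)")
```

On those numbers, an IoR of 1.4 gives about 2.8% linear (~46 sRGB) and 1.6 about 5.3% linear (~65 sRGB), with the familiar 4% (~56 sRGB) landing exactly at IoR 1.5, which would explain why shaders that hardcode F0 = 0.04 are picking the middle of that range.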

Hey guys, when I use the "export to Photoshop" button in SP, the file exports nicely into Photoshop. However, if I drag the layers (from the SP export file) into any already-open file in Photoshop, or even into a newly created document, they get a lot darker. Then I tried creating a new doc, making a greyscale layer, and dragging that layer back into the SP-exported document; it definitely doesn't match. I did that with the base colour SP export document, and it gives me this:

Version: SP 2.4, PS CS6.

Both docs are in RGB/8, using sRGB.

Left is the Photoshop file after dragging the layer over from the SP export doc. The colour on the right is the one from the SP export.

I remember 6.0 still had the old AO node, and I can't find it in the current version. Did you get rid of it altogether? The old AO node is still useful to me for converting height to AO; it gives a better result on lower-resolution height maps. Can we bring it back?

I was watching this tutorial

I believe some of you have tried the Mari extension pack. They claim this extension makes Mari work like Painter/Quixel, but I think Allegorithmic could also get some ideas from Mari.

What fascinates me is the power of the node editor versus traditional Photoshop-style layering. The Mari extension seems able to strike a balance between both.

I often find Painter gets a bit messy when a material has a lot of layers stacked on top of another stack of layers in the mask. On top of that, some operations (filters) that can easily be added in SD can eventually achieve the same thing in SP, but require a few more steps.

E.g. if I want to apply a slope blur, a level, and a directional warp to my mask, I need to look through the filters, drag and drop or click "add filter", wait for the list of available filters to pop up, then hit add. For a slope blur I need to open SD, make a new filter graph, pipe everything together, and export; then get back to SP, import that sbsar, and hope it works; then apply it to my layer and browse again for a grunge map for it to slope-blur with.

I mean, it would be better if SP also had a node interface like SD does... well, they are basically two different pieces of software, I know. Would it be possible to somehow bridge SD better with SP? I don't know. People who have experience with the Mari extension probably have the same feeling. What I feel is that Allegorithmic already has all the pieces; it just needs a good bridge between the two great programs.

What are the differences between the Tile node, the Tile Sampler node, and the Tile Generator node?

I see the attributes are different, but they don't seem very different; all of them give me the same outcome.

Can anyone explain to me which node fits best in which scenario?

I've just finished watching the Naughty Dog GDC 2016 presentation on their pipeline:

As they (Naughty Dog) rely hugely on a procedural database built by their artists, what will happen with scan data? I'd assume photogrammetry materials will slowly take over from SD procedural materials, or even ZBrush sculpts. I know that Allegorithmic and many other developers, including render engines, texture websites, and scan houses like Texture XYZ, are slowly moving towards preparing scanned materials (very much like using photo reference for texturing back in the day). Substance Designer would then become more of a scan-data prep tool.

I am not currently working in a studio that has an SD pipeline for games. Since games have memory limitations, I do not know how they will handle the memory issue for scan data (though DICE somehow managed to pull it off for Star Wars).

I hope this can be a topic open for discussion about where our industry goes in the future.

My bigger concern is: should I even bother studying and practising how to build complex procedural materials in SD? And what's the advantage in the industry for an artist who has that knowledge? If scan data is going to take over, maybe people who know how to process scan data for SD will have more of an advantage?

Since SD6 is out, I can finally upgrade to both SD6 and SP2. Do I just purchase Substance Live with my current account? Do I need to worry about anything else?

Like in Photoshop, where you can punch in a code to get an exact colour, e.g. #ff0000 is red and #00ff00 is green.

It's very useful for copying and pasting specific colours.
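As a side note, the conversion itself is trivial; a quick Python sketch of what such a hex field would do under the hood:

```python
def hex_to_rgb(code):
    """Parse a Photoshop-style hex colour like '#ff0000' into (r, g, b) 0..255."""
    code = code.lstrip('#')
    return tuple(int(code[i:i + 2], 16) for i in (0, 2, 4))

def rgb_to_hex(r, g, b):
    """Format 0..255 channel values back into a '#rrggbb' string."""
    return f"#{r:02x}{g:02x}{b:02x}"

print(hex_to_rgb('#ff0000'))   # (255, 0, 0) -> red
print(rgb_to_hex(0, 255, 0))   # #00ff00 -> green
```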
