Show Posts

This section allows you to view all posts made by this member. Note that you can only see posts made in areas you currently have access to.


Messages - paradox00

Pages: [1] 2
1
Hello!

I have been learning a lot about the Pixel Processor lately, especially thanks to Andrei Zelenco (https://www.artstation.com/andreizelenco) and the Facebook group he created to share knowledge.

Creating Pixel Processor functions is great, but there is what I think is a major drawback right now: there is no way to make loops. You end up doing what Andrei shows in his header image, repeating the nodes that are supposed to be in the loop as many times as you need, which, as you might imagine, is cumbersome and far from convenient.

Having a loop node in the Pixel Processor seems possible, as we already have something like it in the FX-Map node.

The idea could be to have something like in Houdini, where a node defines and contains a loop, or to have a 'solver' node that calls a function.
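For illustration, here is roughly what such a loop node could compute per pixel. This is a hypothetical Python sketch of the concept, not Designer's actual API; `pixel_loop` and the example body are made up:

```python
# Hypothetical sketch of what a Pixel Processor "loop node" could compute.
# Today the loop body has to be duplicated N times as separate node chains.
def pixel_loop(value, iterations, body):
    """Apply `body` to `value` `iterations` times, as a loop node might."""
    for _ in range(iterations):
        value = body(value)
    return value

# Example: the repeated node chain is just "halve and offset" here.
result = pixel_loop(1.0, 4, lambda v: v * 0.5 + 0.1)
```

The point is that `iterations` would be a single exposed parameter instead of a fixed number of copy-pasted node groups.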

Whatever is best from your perspective.

In any case, having this would be awesome, and I would like to know how to make a request (and hope it gets implemented hahahah).

Thanks.

2
Substance Designer - Discussions - VR considerations
 on: August 21, 2017, 07:03:51 pm 
Hi! I hope everyone is doing great and creating awesome art.

I was wondering if anyone has some knowledge to drop regarding substances in VR. Not textures from Designer, of course, but actual substances imported into Unreal to use in VR. Maybe someone has something to say about performance, good practices, etc.?

I would like to know any info before running my own tests.

Thanks!

3
Substance Designer - Discussions - Designer for VR
 on: November 08, 2016, 11:51:24 pm 
Hi folks! Looking for some guidance.

I'm about to start a VR project for PC using UE4, and I would like to know if you have some references about the optimization side of things using Designer. I do know the page where the general optimization guidelines for Designer are, but I would like any comments or help regarding Designer for VR, especially comparing using the substance instead of the actual textures and things like that.

Of course, I already found some information on Google, but maybe you can comment and point to more specific references, or talk about your own experience with the medium.

Thanks!

4
Copying this from the thread on optimization.

On the topic of optimization: I read the newly published optimization guidelines and I still have some doubts.

I started a substance our team is aiming to use in-game. We need to texture hard-surface props, which are the main objects in the game and have different materials on them. The idea is to nail down our texture/material pipeline to be optimized for VR and mobile from the very start, even though for now the game is aiming at PCs. The art direction allows us to aim for a general optimization from the beginning.

I tried two approaches, one which to me seems minimal (even though it uses more nodes) and one more streamlined:

The minimal approach is: create masks from the ID map using the Color to Mask node, then create a bunch of Blend nodes to combine each mask accordingly. That is, imagine there are 3 materials on the prop; then I use 3 Color to Mask nodes, then there is a section with 3 Blend nodes for color, another section with 3 Blend nodes for roughness, and similarly for metallic (there is no height, and the normal and AO are baked). The Blend nodes combine the required color with the mask, and then with the next color, and so on.

The streamlined approach is to use the Multimaterial Blend. In this case I have a custom node with exposed parameters to define the materials I need, then combine them using the Multimaterial Blend.
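To make the comparison concrete, the minimal approach boils down to something like this per pixel. This is a toy Python sketch with made-up ID colors and single-float channel values; none of these functions are Designer's actual nodes or API:

```python
# Sketch of the "minimal" approach: extract one mask per ID color,
# then chain blends, each material composited over the previous result.
def color_to_mask(id_pixel, target_color):
    """1.0 where the ID map matches the material's color, else 0.0."""
    return 1.0 if id_pixel == target_color else 0.0

def blend(base, material_value, mask):
    """Linear blend, like a Blend node with the mask in its opacity slot."""
    return base * (1.0 - mask) + material_value * mask

# Three hypothetical materials identified by ID colors, with one
# channel value (e.g. roughness) per material.
materials = [("red", 0.8), ("green", 0.3), ("blue", 0.1)]

def shade(id_pixel):
    value = 0.0  # background
    for target_color, material_value in materials:
        mask = color_to_mask(id_pixel, target_color)
        value = blend(value, material_value, mask)
    return value
```

The Multimaterial Blend presumably does the same chained compositing internally, which is why I'd like to understand how to compare the two beyond the per-node timings.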

Judging by the time of each node on the graph, it seems that the minimal approach is better, but I want to be sure about how to judge these kinds of constructions.

On the other hand, I have a similar question regarding the Mask Builder (and nodes derived from it). I created a simple two-Blend-node setup using the curvature and AO (plugged into the mask of each Blend) to create wear on our props, which mostly works. Time-wise it is faster than the Mask Builder, but I need a slightly more complex mask creator, probably adding two more Blend nodes. Is it worth creating a custom one, or am I wasting time and should just use the Mask Builder?

In general:

How much does the time displayed on the nodes matter when considering optimization?

Is there any way of prioritizing which nodes I should primarily use and which are meant to be used only if needed (or not at all)? Something along the lines of: "Blend, thisNoise, thatNoise, thatNode... are usually OK to use, but forget about these other ones." For example, the Grunge Maps are a bad idea if you want to use the substance in an engine (even more so on mobile), right? Which nodes are as bad, or in the next tier of "badness", and so on?

I hope I explained myself.

Thanks.

5
On the topic of optimization.

I started a substance our team is aiming to use in-game. We need to texture hard-surface props, which are the main objects in the game and have different materials on them. The idea is to nail down our texture/material pipeline to be optimized for VR and mobile from the very start, even though for now the game is aiming at PCs. The art direction allows us to aim for a general optimization from the beginning.

I tried two approaches, one which to me seems minimal (even though it uses more nodes) and one more streamlined:

The minimal approach is: create masks from the ID map using the Color to Mask node, then create a bunch of Blend nodes to combine each mask accordingly. That is, imagine there are 3 materials on the prop; then I use 3 Color to Mask nodes, then there is a section with 3 Blend nodes for color, another section with 3 Blend nodes for roughness, and similarly for metallic (there is no height, and the normal and AO are baked). The Blend nodes combine the required color with the mask, and then with the next color, and so on.

The streamlined approach is to use the Multimaterial Blend. In this case I have a custom node with exposed parameters to define the materials I need, then combine them using the Multimaterial Blend.

Judging by the time of each node on the graph, it seems that the minimal approach is better, but I want to be sure about how to judge these kinds of constructions.

On the other hand, I have a similar question regarding the Mask Builder (and nodes derived from it). I created a simple two-Blend-node setup using the curvature and AO (plugged into the mask of each Blend) to create wear on our props, which mostly works. Time-wise it is faster than the Mask Builder, but I need a slightly more complex mask creator, probably adding two more Blend nodes. Is it worth creating a custom one, or am I wasting time and should just use the Mask Builder?
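For reference, the two-Blend wear setup described above amounts to something like this per pixel. This is a toy Python sketch under my own assumptions about the blend modes; it is not the Mask Builder's internals:

```python
# Sketch of a simple edge-wear mask from curvature and AO,
# approximating two chained Blend nodes set to Multiply.
def blend_multiply(a, b, opacity=1.0):
    """Multiply blend, like a Blend node set to Multiply mode."""
    blended = a * b
    return a * (1.0 - opacity) + blended * opacity

def wear_mask(curvature, ao, intensity=1.0):
    """Edge-wear mask: bright on curvature edges, attenuated in crevices."""
    # First blend: curvature masked by AO, so wear avoids occluded areas.
    edges = blend_multiply(curvature, ao)
    # Second blend: scale by a global wear intensity, clamped to 1.
    return min(1.0, edges * intensity)
```

With only two or four blends like this, the node cost stays trivial, which matches the setup being faster than the full Mask Builder.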

In general:

How much does the time displayed on the nodes matter when considering optimization?

Is there any way of prioritizing which nodes I should primarily use and which are meant to be used only if needed (or not at all)? Something along the lines of: "Blend, thisNoise, thatNoise, thatNode... are usually OK to use, but forget about these other ones." For example, the Grunge Maps are a bad idea if you want to use the substance in an engine (even more so on mobile), right? Which nodes are as bad, or in the next tier of "badness", and so on?

I hope I explained myself.

Thanks.

6
Hi! I haven't found a proper conversation about this subject. If you can redirect me to the proper thread, if there is one, that would also be great.

Here is the thing:

- Is there any sort of guideline to decide when it is more optimal, in terms of computation time (vs. file size or customizability, maybe?), to export the textures rather than publishing the substance directly to the engine?

- What are the proper things to take into account when making the call that publishing the substance is better than simply exporting the textures?

For example, this graph is rather complex https://twitter.com/lobachevscki/status/746784097066954752 and I sincerely don't know if it is optimal (in whatever terms should be accounted for) to publish it to Unreal rather than simply export the textures the graph produces and tweak them in Designer for further iterations https://twitter.com/lobachevscki/status/746780042638790656

I'm in some kind of conundrum because the game I'm working on made me ask this question, due to the kind of assets we are delivering.

I saw that people at Naughty Dog chose to export textures from Designer (to my knowledge), but there is the possibility of publishing the substance, so I wonder what considerations should be taken into account to make the call.

Hope my point gets across.

Thanks in advance.

7
Btw, I haven't tested your solution yet; it sounds simple enough that I'm surprised I didn't try it before.

8
The pattern looks like this (a close-enough WIP):



The way I proceeded was this:

1. Made the original pattern at a 2K x 1K resolution and exported it to Photoshop.
2. Cropped it to trim the borders. Of course, it is possible to create a cropped image that roughly respects the same ratio, but for now it isn't that way.
3. Imported it back into Substance and made some nodes.
4. Exported the output maps.
5. Even though all the output maps are at 2K x 1K resolution, they looked squarish in Marmoset, so I manipulated the geometry UVs in order to achieve the rectangular (and original) look. Something is missing here.

It works, and now I'm detailing and polishing the substance, but it's kind of cumbersome not to have a crop tool, given that you can import non-power-of-two texture resolutions anyway.
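For what it's worth, the Photoshop crop in step 2 could also be scripted outside of Photoshop. A stdlib-only Python sketch on a toy 2D pixel grid (a real image would need an image library such as Pillow, which I'm not assuming here):

```python
# Crop a row-major 2D pixel grid by trimming the given border widths,
# as a stand-in for the missing crop tool in Designer.
def crop(pixels, left, top, right, bottom):
    """Return the grid with `left`/`top`/`right`/`bottom` borders removed."""
    height = len(pixels)
    width = len(pixels[0])
    return [row[left:width - right] for row in pixels[top:height - bottom]]

# Toy 4x4 "image" of (y, x) pairs: crop 1 pixel off every border,
# leaving the 2x2 center.
image = [[(y, x) for x in range(4)] for y in range(4)]
center = crop(image, 1, 1, 1, 1)
```

The same idea works for trimming unequal borders, which is where keeping the original aspect ratio gets fiddly.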

9
I changed the resolution from 2K x 2K to 1K x 2K after exporting the PNG I got from Photoshop. It kind of worked, but it stretches the image a bit.

If you have any suggestions, that would be nice.

10
The cropping-in-Photoshop strategy didn't work either; when I export it back to Substance, it squares it.

I sincerely don't know what to do.

11
Hi, I'm trying to recreate a pattern that looks like the one on the left in the image below:



I pretty much nailed the pattern shape, but now I need to crop it because the reference tile is actually pretty tight and packed together.

When I input the pattern above into the Tile Generator, it (obviously) creates huge spaces between patterns. It would be awesome if there were a way or workflow to reduce the amount of space indicated in red in the left image (it actually applies to the borders on the sides as well).

Cropping the Blend nodes I used to create the shape doesn't seem to work at all. Trying to tile it by hand using a Transformation 2D after creating the pattern doesn't work either (at least I didn't make it work), because the shape is not a square and it has no offset in the reference tile. Any change (size, interstices, etc.) on the Tile Generator destroys the pattern; it turns it into a squarish shape, and that's not the idea.
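One way to think about the gap, for whatever it's worth: if the visible pattern occupies only part of its canvas, each tile needs a per-axis scale factor for the patterns to touch. A toy Python sketch with made-up dimensions, not a Designer workflow:

```python
# Per-axis scale a tile would need so the visible pattern
# (pattern_w x pattern_h inside a canvas_w x canvas_h input)
# touches its neighbours instead of leaving empty borders.
def tile_scale_to_close_gaps(pattern_w, pattern_h, canvas_w, canvas_h):
    """Return (scale_x, scale_y) that makes the pattern fill its cell."""
    return canvas_w / pattern_w, canvas_h / pattern_h

# Hypothetical case: the pattern fills 1600x800 of a 2048x1024 input,
# so each tile would need to be scaled up by 1.28 on both axes.
sx, sy = tile_scale_to_close_gaps(1600, 800, 2048, 1024)
```

If the two factors differ, uniform scaling can't close the gap without distorting the shape, which sounds like what the Tile Generator tweaks were running into.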

Do I have to export the pattern, crop it in Photoshop, and import it back? That seems like the only solution to me.

Any suggestions?

Thanks.

12
I don't know. Let me tell him.

13
Bringing this back again.

A coworker tried to use the plugin on his machine and it is crashing it.

He installed the plugin, opened Painter, and it crashed; now Painter doesn't start unless he deletes the plugin from the Plugins folder.

The error log doesn't have a particular message; it only says that the tool stopped working.

Any ideas?

14
Done! It was me, of course.

I was copying it into the Programs folder and not into Documents. I got confused because both of them have a Plugins folder.

Thanks.

15
Thanks.

I don't see this "Export Masks View" you are mentioning. Is it supposed to be in the plugin? Because Painter is not recognizing the plugin. I just copied the file I downloaded from Substance Share into the Plugins folder, I did nothing else, and Painter is not seeing anything.

Of course I know I'm missing something, but that's the step I'm trying to locate.

Thanks again.
