Author Topic: UV Seam Repair Filter

I'm considering writing a custom filter that repairs UV seams in any texture or layer stack. The user would stack this filter on top of any layer that has seams in Painter (caused by extreme blur, noise, some pattern, etc.), and the filter would repair the broken UV borders using a distance threshold.

Does anyone know if this has been attempted before, or if it's even possible?

This is my thought process:

  • input layer data (or texture with seams)
  • input position map (to convert 3D <-> 2D)
  • expose "distance" threshold parameter (create a pixel falloff using this)
  • probably expose some type of averaging interpolation value
  • compute (3D) distance from every pixel on position map to every other pixel on it (time consuming - maybe ignore pixels that are already close in 2D)
  • factor in difference in colors between pixels, so we ignore those that are already similar enough to blend
  • average color pixels based on their distance from each other, limited by the distance parameter

I think this would be enough to blur the line, but there are probably dramatic improvements and tweaks that could be made, such as ignoring pixels during the averaging step that are already adjacent to each other, so they don't get averaged unless they are separated by a UV border. It may be possible to detect this by comparing 3D distance with 2D distance.
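For what it's worth, here is the whole brute-force idea written out as a Python/NumPy sketch, purely illustrative and outside Substance entirely. The function name, thresholds, and linear falloff are all placeholder choices of mine, and it omits the "skip pixels already close in 2D" optimization for brevity:

Code:
import numpy as np

def blend_seams(color, position, dist_threshold=0.05, color_threshold=0.1):
    # color and position are (H, W, 3) float arrays; position is the
    # baked world-space position map. Names and defaults are made up.
    col = color.reshape(-1, 3)
    pos = position.reshape(-1, 3)
    out = col.copy()
    for i in range(len(pos)):
        d3 = np.linalg.norm(pos - pos[i], axis=1)  # 3D distance to every pixel
        dc = np.linalg.norm(col - col[i], axis=1)  # color difference to every pixel
        # keep pixels inside the falloff whose color differs enough to matter
        near = (d3 < dist_threshold) & (dc > color_threshold)
        near[i] = True                             # always include the pixel itself
        weights = 1.0 - d3[near] / dist_threshold  # linear pixel falloff
        out[i] = np.average(col[near], axis=0, weights=weights)
    return out.reshape(color.shape)

Even this naive version is O(N^2) in the pixel count, which is exactly the million-samples-per-pixel problem discussed below.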

Does anyone know if something like this is possible, or if it would even work? I was considering the Pixel Processor, but I'm not sure how one would average the pixels using it, since it only writes to one at a time. I'm new to Substance programs, so I'm sure someone out there has thought of this before. Thanks for any input!
Last Edit: February 14, 2019, 06:27:49 pm

I've thought about doing something similar to generate seamless patterns without explicitly generating a 3D noise, but I ran into the same hurdle you did here:
Quote
compute (3D) distance from every pixel on position map to every other pixel on it (time consuming - maybe ignore pixels that are already close in 2D)


My thought process is as follows, but take it with a grain of salt as I'm not at all an expert in this particular field:
To sample each pixel in the image for every existing pixel, you end up doing roughly a million samples per pixel for a 1024x1024 image. The pixel processor tends to crash after a couple hundred samples (per pixel) or so in my experience.

Also, just think about how much data that actually is. An uncompressed color image in an 8-bit-per-channel RGBA format takes up 4 bytes per pixel, so a 1024x1024 image takes up about 4 million bytes (4 MB) of raw, uncompressed data. Now multiply this by the total number of pixels in the image (roughly a million) and you end up with about 4e12 bytes (4 terabytes!) of data that needs to be pumped through the process.
You would need to write your own renderer to pipeline such a mass of information. The pixel processor will simply try to do it all at once and store everything in RAM, which is not going to work with terabytes of data.
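The back-of-the-envelope version, for anyone who wants to check the numbers:

Code:
pixels = 1024 * 1024                        # texels in a 1K image: 1,048,576
bytes_per_pixel = 4                         # 8 bits x 4 channels (RGBA)
image_bytes = pixels * bytes_per_pixel      # 4,194,304 bytes, about 4 MB
total_bytes = image_bytes * pixels          # one full image read per pixel
print(total_bytes)                          # 4,398,046,511,104 bytes, ~4.4 TB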


So yeah, you'd have to find a good workaround to find these pixels that are close in 3D space, but not in UV space.
Esger van der Post.
Game design student and texturing addict.

I agree with most of that. Another big hurdle (for me at least) is figuring out how to iterate over pixels. If I could find a way to iterate over all of the pixels of an image from inside the pixel processor, I might be able to come up with something to optimize the process, at least enough to avoid it taking 10 minutes.

Also, if there was some way to sample the object's texels in 3D space, one could locate the exact pixels to average, rather than scanning the entire image. So essentially it would be:

Position map pixel -> locate that pixel in 3D space -> scan a small area around it (in 3D) -> for any pixel that differs from the current one by more than the color threshold and is far enough away in 2D space, include it in the average for the current pixel.

In that case, you would only need to scan a 3x3 or 5x5 area (etc.) around each pixel. Much less data to process. But as far as I know, this isn't possible with existing nodes. I will be on the lookout for any process that would help here.
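Outside of Designer, the standard trick for this kind of "who is near me in 3D" query is a uniform grid (spatial hash) built over the position map. A rough sketch of what that could look like, with hypothetical helpers since nothing like this exists as a node:

Code:
import numpy as np
from collections import defaultdict

def build_grid(position, cell_size):
    # Bucket every texel index by its quantized 3D position, so a lookup
    # only touches the 27 cells around a point instead of the whole image.
    grid = defaultdict(list)
    for i, p in enumerate(position.reshape(-1, 3)):
        grid[tuple((p // cell_size).astype(int))].append(i)
    return grid

def neighbors_3d(grid, position, i, cell_size):
    # All texel indices within one grid cell of texel i in 3D space.
    cx, cy, cz = (position.reshape(-1, 3)[i] // cell_size).astype(int)
    hits = []
    for dx in (-1, 0, 1):
        for dy in (-1, 0, 1):
            for dz in (-1, 0, 1):
                hits.extend(grid.get((cx + dx, cy + dy, cz + dz), []))
    return hits

With cell_size set to the distance threshold, each pixel only ever considers the handful of texels that could actually contribute, which turns the O(N^2) scan into something close to O(N).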

Thanks for your thoughts. Much appreciated.
Last Edit: February 15, 2019, 12:58:47 pm

After thinking about this, I realized this process is very similar to Substance Painter brush behavior. When the user paints onto a model, it is converting 2D screen space -> 3D model space -> 2D texture space. If we had a way to grab and use that functionality to paint pixels where we want, we could aim it at problem pixels and average them out. Maybe this is turning into more of a feature request than a buildable graph.

Another way to approach it, which would also be a feature request, is to somehow tell the pixel processor to only process pixels in a given mask. This way you could mask out the edges of your UV islands and scan just those pixels, discarding the rest entirely.

Iteration is also a problem in SD. Right now you have to copy and paste your function to hardcode iterations. I've done this to create some blur filters, but there seems to be a limit to how many copies a processor can handle before crashing Designer; I haven't had a stable processor with more than 256 of these iterations. You can chain multiple pixel processors in a row to get higher numbers, but transferring data between them is an issue, since a pixel processor can output at most a 4-channel image.
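To illustrate what that copy-paste workaround amounts to, in Python rather than function graphs (blur_step here is just a stand-in for the pasted function):

Code:
import numpy as np

def blur_step(img):
    # Stand-in for one pasted copy of the function graph: average each
    # pixel with its four neighbors (wrapping at the edges, like a tiling texture).
    return (img
            + np.roll(img, 1, axis=0) + np.roll(img, -1, axis=0)
            + np.roll(img, 1, axis=1) + np.roll(img, -1, axis=1)) / 5.0

def blur_unrolled(img):
    acc = blur_step(img)   # pasted copy 1
    acc = blur_step(acc)   # pasted copy 2
    acc = blur_step(acc)   # pasted copy 3
    # ...and so on, one pasted copy per iteration, up to the crash limit.
    return acc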

So yeah, I'm sure looking forward to proper loops being implemented one day.
Esger van der Post.
Game design student and texturing addict.

I'm surprised someone hasn't found a way to hack in loops. I tried it myself by creating recursive functions, but it appears Designer prevents that completely: I can't even use function A in B if A already calls B in any way. I even tried going three levels deep.

I'm not understanding the need to protect us from loops. Having the program hang in an infinite loop once in a while is much better than copying and pasting the same code 16 times, and being completely cut off from certain functionality.


I highly doubt they would intentionally make loops unusable if they were possible. Considering that a pixel processor will crash SD if you copy-paste an operation a couple hundred times, I'm under the impression SD can't handle many loops either way.

SD's pipeline is generally a bit different from similar image processors in that each node reads the previous image, processes it, and writes a new image. Most similar tools, like Shadertoy or Unreal's shader graph, process everything in one continuous cycle and only write an image at the very end, not at every node.
It's a stab in the dark, but perhaps conventional methods like loops became impossible as they designed SD around this pipeline.
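In code terms, the difference looks something like this (a loose analogy on my part, not how either tool is actually implemented; levels and invert are stand-in nodes):

Code:
import numpy as np

def levels(img): return np.clip(img * 1.2, 0.0, 1.0)  # stand-in node 1
def invert(img): return 1.0 - img                     # stand-in node 2

# SD-style: every node materializes a full intermediate image.
def graph_style(img):
    a = levels(img)   # a full texture is written here...
    b = invert(a)     # ...and another one here
    return b

# Shadertoy / shader-graph style: one fused pass, one write at the end.
def fused_style(img):
    return invert(levels(img))  # conceptually evaluated per pixel, no intermediates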

That said, the FX-Map can run hundreds of thousands of iterations, but it can't process on a per-pixel level.
Esger van der Post.
Game design student and texturing addict.

Honestly, I think the lack of loops is just them protecting us from hangs. Logic-wise, it would literally be the same as repeating the same function node x number of times. Any data that changes inside the loop is no different from a parameter being passed to that function and/or returned from it.

If the iteration count were hard-coded, they could literally unroll our loops to provide the feature without adding any functionality to their engine. I have a feeling this will be the first type of looping they provide, if they ever do.
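That unrolling is mechanical enough to express in a few lines; a generic version of the idea in plain Python:

Code:
from functools import reduce

def unroll(f, n):
    # A fixed, build-time-known iteration count means the loop can be
    # expanded into n chained applications of f -- exactly what pasting
    # the function n times does by hand. Any loop-carried value is just
    # the argument each copy passes to the next.
    def chained(x):
        return reduce(lambda acc, _: f(acc), range(n), x)
    return chained

halve_8 = unroll(lambda x: x / 2, 8)
print(halve_8(256.0))   # 1.0, same as eight pasted "divide by 2" nodes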

I don't know enough about Designer to be positive about anything, but that's my best guess.
Last Edit: February 22, 2019, 12:27:26 am

I've managed to get things to run for 10k+ iterations without crashing using some graph shenanigans, but it's definitely not as nice as writing a loop in code. I'm currently writing up how to do that and will share once it's done.

As to why they won't expose looping functionality to us in some form or another, I don't know. I would love to write custom atomic nodes, as they seem to be able to do very performant looping (the 4K distance node is comparatively fast). Then again... with looping you would see a lot of very powerful nodes crop up, and I'm not sure how that would interfere with the yearly subscription thing long-term.