Author Topic: 3ds Max limitations? Cloud/network rendering, map input

Hello everyone,

I have just started learning Designer and creating materials for a film project in Max. I must say that as much as I am impressed by SD, I am not so impressed by the Max integration...

Before I commit further, I'd like to know if the following limitations are my mistakes or real limitations.

I am using Max 2019, V-Ray, Allegorithmic's Substance2 plugin, and Designer 2018.3.3.

- Cloud/distributed rendering: is that possible? Allegorithmic says here "What is to Come: Network/Cloud rendering support". I plan to use a commercial service, which one is not yet decided.
- Exposed parameters cannot receive input maps/data in the Slate editor? Meaning that:
       - you have to create as many Substance nodes as you need variations? (instead of using, for example, a VRayUserScalar node)
       - you cannot use scene-related textures (like the regular Falloff or V-Ray's Distance node) as inputs



Hi Jerome,

For cloud rendering, it likely will not work. Cloud rendering services normally run standalone instances of the renderer, such as V-Ray Standalone. When running standalone, there are three pieces in the pipeline that take a Max scene to the renderer: the DCC tool, a translator that converts the scene to the renderer's native scene file, and the standalone renderer that processes that file. Nearly every production renderer works this way in some form, so this applies to more than just V-Ray. In Arnold, for example, those three pieces have separate names: 3ds Max, MAXtoA and Arnold.
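As a rough illustration of those three pieces, the Arnold path might look like the sketch below. The scene and file names are hypothetical; `kick` is Arnold's standalone command-line renderer, and the exact export step depends on your MAXtoA version.

```shell
# 1. DCC tool: build and shade the scene interactively in 3ds Max.
# 2. Translator: MAXtoA exports the Max scene to Arnold's native
#    .ass scene description (e.g. via Max's Arnold scene export).
# 3. Standalone: render the exported file with no Max involved,
#    which is what a render farm or cloud service actually runs.
kick -i shot010.ass -o shot010.exr
```

Any material that only exists as a 3ds Max plugin node has to survive step 2, which is why the translator is the piece that matters here.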

To get our materials to render in V-Ray Standalone, and thus in distributed/network rendering, we would need either to get Chaos Group to edit their translator, or to see if we can write plugins for it. With Arnold, for example, we could write a MAXtoA plugin, as they expose that functionality. It's something we want to do in the future, but it isn't as simple as just finding the correct way to write the 3ds Max integration; it'll require either separate pieces of engineering or negotiations, depending on the renderer. We will also have to find ways to distribute these, as we can't make the integration require V-Ray to function, for example.

I'll need to ask for clarification, but I believe the Bitmap node is the only thing supported for image inputs, as Max doesn't have as clean a system for that as Maya. We will try to look at supporting procedurals in the future.

Galen Helfter
Last Edit: March 17, 2019, 01:11:18 am
Software Engineer, Integrations
Maya 2.0, 3ds Max & Modo

Thank you for the clarification.

As for the lack of map inputs, I believe I can do without them, but the lack of cloud rendering is a big no-no.

Would 'regular' network rendering work, if all the render nodes have their own copy of Max with the V-Ray renderer (and, obviously, Substance2)?


Network rendering where the machines run through Max should work; it did when we tested it. I understand that's not a very convenient setup.

In the future, we're going to need to add the translation layers for network rendering, and perhaps even the ability to render the sbsars directly in the standalone renderer. Since Arnold is one we can do on our own, it will probably come first. If we add direct rendering into the standalone renderers, we also gain easier compatibility in other integrations, such as Maya and Houdini, so I think that's our best long-term option. It's a problem we need to address, but it's going to take some time.
Last Edit: March 17, 2019, 04:09:33 pm