Wednesday, August 17, 2016

Render Farm - 3D Edition!

A few months ago I wrote a simple render farm in PHP using ImageMagick, but it could only handle very simple animations in 2D space. What if we could deal with complex scenes in 3D space? And what if, instead of using hacky HTTP transactions to move data back and forth, we used Socket.io to form a true protocol? Well, that's exactly what we're going to do here.

To do this, we're going to use the majesty and splendor of Three.js. This magical library is somehow a complete 3D rendering suite, and it's perfect for this. Unfortunately, I couldn't figure out how to make it run headlessly (I had Chrome working with xvfb for a bit, but then it stopped working and I couldn't work out why; it is possible, though). That's alright: we can just run Chrome on a bunch of computers and they'll be the rendering workhorses.

So here's the plan. We're going to design a scene in the Three.js editor. Then we're going to write a file that describes the movie to be produced. A NodeJS server will listen for willing render nodes to ask for a chunk of frames, and the nodes will send frames back to the server as they finish. The server will collect the frames and, once it has them all, merge them together into a finished product.

I designed the following scene in the editor (simple.json in the repo). It has three spot lights, a sphere, and a cube. Simple enough, but our system should be able to handle anything the editor supports and exports.


I added a point light after I had taken these pictures to add some extra lighting and shadows. Next, I wrote scripts to move the ball and the cube. This part is very important: the update function you write is passed an "event" parameter, and you want to make sure your motion is calculated off of event.time, because that's where our render nodes will pass in the frame number for the scene. Also, I've found that the preview in the editor is much more liberal with time, which means your animations will run a lot faster there than in the renderer; about six times faster on my machine. (You can check by looking at event.delta to see how much time has passed since the last frame. My render code increments the time by 1 per frame to keep the math easy, although I could scale the frame number on the client side.)

For the ball, I wanted it to bounce, so I used a sine function to make it go up and down forever:
function update( event ) {
    // Bounce forever: sin() swings between -1 and 1, so y oscillates between 1 and 3.
    this.position.y = (Math.sin(event.time/30)+2);
}
For the cube, I simply added rotation to two of the axes.
function update( event ) {
    // Spin slowly around the x and y axes in opposite directions.
    this.rotation.x = event.time/30;
    this.rotation.y = -event.time/30;
}
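To make the timing concrete: on a render node, event.time is just the frame number. Here's a hypothetical sketch of how a node could step the scripts one frame at a time (the editor's generated player dispatches these events a bit differently, so treat this as illustration, not the repo's actual code):

function renderFrame(frameNumber) {
    scene.traverse(function (object) {
        if (typeof object.update === 'function') {
            // event.time is the frame number; event.delta is a constant 1,
            // unlike the editor's wall-clock preview.
            object.update({ time: frameNumber, delta: 1 });
        }
    });
    renderer.render(scene, camera);
}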
Next, I selected File > Publish. This will download a zip file of everything you need to preview the scene as if you clicked play in the editor. The only file we need from this zip file, however, is app.json. Everything else is included with the renderer and slightly modified to work on one frame at a time.

Now then, we have a scene and its behavior, but nothing that describes the render job. Let's make a file called job.json that describes the final product.
{
  "fps": 60,
  "width": 1920,
  "height": 1080,
  "length": 3000,
  "chunk": 20
}
Here, we define the FPS of the resulting movie (my movie was too slow at 30 FPS, so I figured why not go for gold). We're also rendering in Full HD, with a length of 3000 frames. The chunk property determines how large individual render tasks should be. The right value really depends on the number of nodes you have and how disproportionately powered they are. If you have a massive workhorse and a tiny little netbook, you may want to set the chunk lower, because the workhorse will get through more chunks while the netbook takes its sweet time.
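To make the chunking concrete, here's a minimal sketch of how a server might carve the job into tasks (the job.json fields are the real ones above; the tasks queue is my own illustration, not necessarily how the repo does it):

// Split job.length frames into chunk-sized tasks.
var job = require('./job.json');

var tasks = [];
for (var start = 0; start < job.length; start += job.chunk) {
    // The last task may be shorter than job.chunk (here 3000/20 divides evenly).
    tasks.push({ start: start, length: Math.min(job.chunk, job.length - start) });
}
// 3000 frames / 20 frames per chunk = 150 tasks to hand out, first come, first served.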

Next, I wrote the protocol that the server and client share. First, the client fetches all of the required assets from the server over basic HTTP (all of the scripts plus your app.json) and renders the first frame to make sure everything is initialized. Then it connects to the server over socket.io. The server immediately sends it the size of the frame (it actually sends the whole job object specified above, but we only care about the width and height). The client resizes the player and sends a "ready" event, meaning the node is ready to take some frames. The server responds with a "render" event containing a start frame and a length (the chunk size, or shorter if we're at the end). The node renders each frame and posts it to /postframe along with its frame number; the server writes it to disk and waits for the next one. When the node finishes its chunk, it sends "ready" again. Once the server has all of the frames, it emits a "done" event and all clients disconnect (this prevents some issues I was having where restarting the server would automatically kick off incorrect renders). The server then runs avconv to stitch all of the frames together, and then it exits.
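For a sense of what that looks like in code, here's a minimal server-side sketch, assuming the tasks queue from the earlier snippet. The "ready", "render", and "done" event names and the /postframe endpoint are the real ones; everything else (the "job" event name, the frames/ file layout, the helper names) is my own illustration:

var spawn = require('child_process').spawn;
var io = require('socket.io')(3000);
var framesDone = 0;

io.on('connection', function (socket) {
    socket.emit('job', job);                    // node reads width/height from this

    socket.on('ready', function () {            // node is asking for work
        var task = tasks.shift();               // next unclaimed chunk, if any
        if (task) socket.emit('render', task);  // e.g. { start: 40, length: 20 }
    });
});

// Called each time a frame arrives at /postframe and is written to disk.
function frameReceived() {
    framesDone++;
    if (framesDone === job.length) {
        io.emit('done');                        // tell every node to disconnect
        // Stitch the numbered frames into a video, then exit.
        spawn('avconv', ['-r', String(job.fps), '-i', 'frames/%d.png', 'out.mp4'])
            .on('close', function () { process.exit(0); });
    }
}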

Okay, so assuming that all works, we should see a video of the ball bouncing and the cube rotating, just as we saw in the editor. And, lo and behold, it worked! There are some serious issues with the code, but for a proof of concept it works exceptionally well. Next, I benchmarked the system using the same sample scene: Full HD, 60 FPS, 3000 frames, a chunk size of 20, scaling from one node to three.

I found out how to run Chrome headlessly from this gist. It runs Chrome in a virtual frame buffer, which is exactly what we want. But after a while it stopped working, so I settled for a regular Chrome instance. Here's an image of the first frame produced from the test:



I was still fairly disappointed with the performance of the network: Socket.io was having to reconnect many times. Perhaps this was because I was uploading frames as soon as they finished rendering, without waiting for one upload to complete before firing off the next. So I rewrote the code to wait for a frame to finish uploading before moving on to the next one, and it seemed a lot more stable. Sometimes there were still missing frames toward the end; I'm not sure what happens to them, but it's odd and it doesn't always happen.
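The serialized version amounts to something like this on the client, where each upload must complete before the next frame starts (/postframe is the real endpoint; renderFrame and the payload shape are made up for illustration):

function renderChunk(start, length, done) {
    var frame = start;
    (function next() {
        if (frame >= start + length) return done();  // chunk finished; send "ready" again
        renderFrame(frame);                          // advance scripts to event.time = frame
        var xhr = new XMLHttpRequest();
        xhr.open('POST', '/postframe');
        xhr.onload = function () {
            frame++;                                 // only move on once this frame
            next();                                  // has fully uploaded
        };
        xhr.send(JSON.stringify({
            frame: frame,
            data: renderer.domElement.toDataURL('image/png')
        }));
    })();
}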

So I tried it on multiple computers to see how well it'd work if I kept tacking on nodes. Basically, I wanted to see if the network made this so slow that parallelization was pointless. Here are the results with three nodes. I was going to add my phone as a fourth but WebGL kept hitting snags.

Three laptops all rendering the same video in parallel.


There was a massive speedup from adding the second node and only a slight one from the third, leading me to believe that the network itself was the bottleneck. But, as this is a proof of concept and the concept has been more or less proved, let's move on to something else.

The scene we made was simple. Very simple. So I spiced it up a little with a texture, a complicated object from Thingiverse, and a moving light.


Everything gets stored in app.json, including the textures and the models, so that's still the only file you need to load into the renderer.
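That's because the published player just parses the one file with Three.js's ObjectLoader. Roughly, it does something like this (a hedged sketch of what the editor's generated app.js amounts to, not a copy of it):

var xhr = new XMLHttpRequest();
xhr.open('GET', 'app.json');
xhr.onload = function () {
    var json = JSON.parse(xhr.responseText);
    var loader = new THREE.ObjectLoader();
    var scene = loader.parse(json.scene);    // geometry, materials, and
    var camera = loader.parse(json.camera);  // embedded textures all come along
};
xhr.send();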

It worked very well, so that's a good sign. I decided to take it a step further by rendering the scene at 4K. I wanted to see just how far I could push this before it broke. I only used one node for this test to see if it'd even work. And, well, it did! I mean it crashed all of my other tabs, Atom, and almost everything else that was running, but it still got through it.

If I wanted to spend more time on this project, I'd certainly make it more error resistant. If a node craps out halfway through a job, you're left without those frames; there's currently no check to make sure everything is accounted for, and there should be. I'd also make the shutdown at the end (currently every client just disconnects) a little less glitchy, as that's where the lost frames are going.
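If I did fix it, the core of the fix would be tracking which chunk each socket owns and putting it back in the queue on disconnect. A sketch, under the same assumptions as the server snippet above (names are mine):

var outstanding = {};                        // socket.id -> task the node is rendering

io.on('connection', function (socket) {
    socket.on('ready', function () {
        delete outstanding[socket.id];       // the previous chunk finished
        var task = tasks.shift();
        if (!task) return;
        outstanding[socket.id] = task;
        socket.emit('render', task);
    });

    socket.on('disconnect', function () {
        var task = outstanding[socket.id];
        if (task) {
            tasks.push(task);                // requeue the abandoned chunk
            delete outstanding[socket.id];
        }
    });
});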

You can find the code in this GitHub Repo and the sample videos down below. The simple scene is in 1080p60 and the advanced one was in 2160p60, but YouTube prefers framerate over resolution, so it dropped the 4K and went with 1080p60. This project has some slight issues here and there, but it was never supposed to be RenderMan. I just wanted it to work, and I think it's super cool that it did. I'm also perpetually floored by what Three.js is capable of.




1 comment:

  1. Very cool project! It seems odd at first to distribute a "real-time" renderer like WebGL but once you start talking about rendering 4K scenes it makes sense. Also, you could further justify this if your renderer included ray-tracing or ambient occlusion shaders in GLSL that would allow scenes to use really advanced lighting, FX or physics that would never render in real-time on a single machine.

    If you could solve the I/O issues this could be a killer app that would give anyone a massive render farm for "free" to make the next Toy Story or whatever.
