TL;DR: We just uploaded our “experiences with getting Moana rendering interactively (and with all bells and whistles!) in Embree/OSPRay” whitepaper to arXiv:
https://arxiv.org/pdf/2001.02620.pdf.
Background…
Quite a while ago (by now), Matt wrote an excellent article named “Swallowing the Elephant” – basically, his experiences with all the unexpected things that happen when you try to make even a well-established renderer like PBRT “swallow” a model on the scale of Disney’s “Moana Island”.
This Moana model (graciously donated to the research community by Disney about two years ago; link here: https://www.technology.disneyanimation.com/islandscene) was the first time a major studio released a real “production asset” – with unbelievably detailed geometry, millions of instances, lots and lots of textures, full material data, lights, etc. – basically, “the full thing” as it would be used in a movie. The reason this is/was such a big deal is that when you develop a renderer you need data to test and profile it with, and truly realistic model data is hard to come by – sure, you can take any model and replicate it a few times, but that only takes you so far. As such: Disney, if you’re reading this – you’ve done an immeasurable service to the rendering community; we can’t thank you enough!
While I was still at Intel, we had actually gotten a very early version of this model; we worked on it for quite a while, and it has since been shown several times at major events. Just as a teaser for the rest of this article, here are a few “beauty” shots of our Embree/OSPRay based renderer on this model:
(click on the images for full res versions)
And just to give you an idea, here is a close-up of the region under the palm tree in the previous image:
Digesting the Elephant
Anyway, Matt at some point wrote an article about exactly this experience: what happens when you take even an established renderer that’s already pretty amazing – and then try to drop Moana into it. That article – or actually, series of articles – made for excellent reading, and in particular rang some bells for us, because “we” (meaning I and my now-ex Intel colleagues) had gone through many of exactly the same experiences, too, plus a few additional ones, because our goal was not just offline rendering, but getting this thing to render interactively on top of Embree and OSPRay. And to be clear, our goal was not to get just “some” interactive version of Moana, but the full thing: every instance, every texture, curves, subdiv, full path tracing with the Disney BRDF, every-frigging-thing … and all of it at interactive rates.
And yes, many of the things Matt wrote about sounded incredibly familiar (we had actually worked on this well before his set of articles): you think you have a good software system to start with, you’ve rendered models with lots of triangles or lots of instances before, you’ve done path tracing, cluster-based parallel rendering, etc., and the model is available in a human-readable file format (even two different ones), so: how bad could it be?
For us, as for Matt, the answer to this naive “how bad could it be” question turned out to be (to put it mildly) “worse than you might think”: In fact, I think I spent an entire week just fixing/extending my then-already-existing pbrt-format parser to handle this thing even remotely correctly (or completely), and initially it took hours to parse; at some point, I vividly remember Jim (my then manager) sending me an additional 512 GB of memory for just that purpose (by now we need much less than that, but at the time …). And all that doesn’t include the time spent on several aborted attempts at using the JSON format instead, let alone that at the time we hadn’t even started on the real work of getting that data into the OSPRay scene graph, into OSPRay, Embree, the cluster, etc. – let alone dealing with things like Ptex textures, subdiv, hair/curves, the Disney BRDF, denoising, …. hm.
In fact, the number and kind of pitfalls you run into once you get such a “pushing the boundaries” model to test with is truly astounding. For example, at the time we started, the OSPRay scene graph still used a lot of strings for maintaining things like the names of scene graph nodes, or the parameters of those nodes; all of that had been used plenty of times before without issues – creating an instance node took only a millisecond or so, which surely can’t be too bad, right? … unless you throw ~100 million instances at it, at which point even a one-millisecond processing time per instance node suddenly becomes an issue: 100M milliseconds may still sound benign, but it’s actually 100,000 seconds … or roughly 1,700 minutes … or about 28 hours … just to create the instance nodes in the scene graph. And there were plenty of such examples (see the sketch below).
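To make that pitfall concrete, here is a minimal, self-contained C++ sketch – hypothetical code, not the actual OSPRay scene graph – of how innocuous-looking per-node string handling adds up at Moana scale:

// Hypothetical sketch of the pitfall (NOT the real OSPRay scene graph):
// per-instance work that looks harmless at small scale.
#include <chrono>
#include <cstddef>
#include <cstdio>
#include <map>
#include <string>

struct Node { std::string name; /* ...parameters, children, ... */ };

int main() {
  const std::size_t numInstances = 1000000; // scale to ~100M for Moana
  std::map<std::string, Node> graph;        // string-keyed node lookups

  auto t0 = std::chrono::high_resolution_clock::now();
  for (std::size_t i = 0; i < numInstances; ++i) {
    // each instance pays for string formatting, comparisons, and
    // allocations – cheap once, fatal 100 million times over:
    graph["instance_" + std::to_string(i)] = Node{"instance"};
  }
  auto t1 = std::chrono::high_resolution_clock::now();
  double secPerNode =
      std::chrono::duration<double>(t1 - t0).count() / numInstances;
  std::printf("%g s per node -> %g hours for 100M instances\n",
              secPerNode, secPerNode * 1e8 / 3600.0);
}

A map insert is of course much cheaper than the full millisecond the real scene graph paid per node, but the scaling argument is the same: any constant per-instance cost gets multiplied by 100 million.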
And just to demonstrate a bit of what one has to deal with in this model geometry-wise (Matt has several similar shots), here are a few color-coded “instance ID” shots from various places in this model:
Again – every different color in these pics is a different instance, often of thousands of triangles each. My favorite here: a single coral in the water off the coast – barely covering a full pixel in some of the overview shots above:
I digress….
Yes, I really love this model… but anyway, I digress.
As said above, reading Matt’s articles reminded me of all that, and once I read those posts I realized that Matt had actually done something pretty useful: not just “battle through” and “make it work”, but actually share what he learned by writing it up – so others would know what to expect, and hopefully have it a bit easier for knowing it (when writing a piece of software it’s so easy to just say “oh, that will surely never happen” – yes, it does!). As a result, we decided to follow his lead and also write up our experiences and “lessons learned”. And since our task was not only to “swallow” that model into an existing renderer, but also to add lots and lots of things to that renderer that it had never been designed for (remember, OSPRay was a purely “Sci-Vis” focused renderer back then), I concluded that a fitting title for this write-up would be “Digesting the Elephant” (after all, digesting something is what you have to do after you’ve managed to swallow it … :-/).
Anyway, I digress. Again.
The problem with this write-up was that I had since switched to NVidia, so writing articles about my pre-NVidia work wasn’t exactly top priority any more. I spent pretty much all of the 2018(!) Christmas holidays writing a first draft, then all my ex-colleagues started adding in all the stuff I no longer had access to (plus a lot of new text and results, too, of course), but eventually we all kind-of “dropped” it because we all had other stuff to do. At some point we did submit it as a paper, but an “experiences” story just doesn’t fit that well at academic conferences, so, unsurprisingly, it didn’t get in, and we completely forgot about it for another few months. Finally, a few days ago I stumbled across the paper repository and realized we had all completely forgotten about it … but since the paper was already written, I asked around, and we decided we should at least make it publicly available; “technical novelty” or not, I still firmly believe (or hope?) that the story and experiences we share in there will be useful to those treading in our footsteps.
Long story short: We finally uploaded the PDF to arXiv, which just published it under the following URL: https://arxiv.org/pdf/2001.02620.pdf.
With that, I do hope this will be useful; and for those for whom it is: Enjoy!
PS: As usual – any feedback, comments, criticism on what we did: let me know…
PPS: And since still images show only so much, here is a video that Will Usher made and uploaded to YouTube: https://youtu.be/p0EJo0pZ3QI
PPPS: And a final little personal plug: If you’re playing with this model – or intending to do so – also have a look at my GitHub “pbrt-parser” library, which I also use when working with this model. It has a binary file format, too, so once the model is converted you can load the full thing in a few seconds, as opposed to half an hour of parsing ASCII files :-). Link here: https://github.com/ingowald/pbrt-parser/
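For the curious, the basic round-trip looks roughly like this – a sketch along the lines of the repo’s README; the exact API (header path, importPBRT, saveTo/loadFrom) may have evolved since, and the file names are placeholders, so check the repository for current usage:

// Sketch only – based on the pbrt-parser README; verify names against
// https://github.com/ingowald/pbrt-parser/ before relying on them.
#include "pbrtParser/Scene.h"

int main() {
  // parse the original ascii .pbrt file (slow – minutes for Moana):
  pbrt::Scene::SP scene = pbrt::importPBRT("island.pbrt");

  // convert to the parser's binary .pbf format, once:
  scene->saveTo("island.pbf");

  // from then on, load the binary version in a few seconds:
  pbrt::Scene::SP reloaded = pbrt::Scene::loadFrom("island.pbf");
  return 0;
}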