Time to Image: 2 Hours … :-)

If you read this title for the first time, you may be excused for wondering why I'm happy about a 2-hour "time to image": usually "time to image" means the time it takes for an interactive renderer to get from starting to load a model to when it is ready to display the first image on screen … and anything over "seconds" is usually not all that useful. In this case, however, I meant "time to first image" in the sense of "starting with a completely blank repo, to having an interactive OWL/OptiX-based viewer that can render a model". And in that meaning, two hours IMHO is pretty awesome!

Let’s take a step back. Why did I actually do this?

On Sunday, I finally posted a first article on OWL, which only recently got tagged as "1.0", i.e., with some reasonable claim to "completeness". In this article, I made a big deal about how much easier OWL can make your life if you want to write an OptiX/RTX accelerated GPU ray tracer …. but what could I actually base this claim on? Sure, I/we have by now several non-trivial OWL-based renderers that show what you can do with it … but how would one actually measure how easy or productive its use would be?

While on the way to get my coffee I decided to simply run a little self-experiment – initially not to prove a point, but simply because I was curious myself: I'd pick something simple I always wanted to write but never got around to (I picked a structured-volume direct volume renderer), and then spend a day or two trying to write one. Using OWL, of course, but otherwise starting with a blank repo, with no other libraries, etc. … and I'd keep an eye on the clock, to see where the time goes while doing that.

Stage 1

Oh-kay …. how'd it go? I got home from the coffee run that gave me that idea (had only sipped at this coffee so far…), went to gitlab, created a new blank repo, and called it owlDVR. Cloned it, added owl as a submodule, created a CMakeLists.txt that is about 15 lines long, created a 'deviceCode.cu' with a dummy raygen program, and started on some simple host code: created a 'Model' class to hold an initially procedurally generated N^3 float volume, created a new windowed viewer by deriving from the owlViewer that comes with the owl samples, and overrode the 'render()', 'cameraChanged()' and 'resize()' methods. Then I started on the actual OWL code: create context, launch params, and raygen in the constructor; upload the volume into a buffer and assign that buffer and the volume size to the launch params (and of course, build programs, pipeline, and SBT); then add an owlLaunch2D() in Viewer::render() and some owlParamsSet()'s for the camera in Viewer::cameraChanged() ….. and that was pretty much it from the host side.
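
To give a rough idea of what that host side boils down to, here is a heavily abridged sketch – the OWL calls are the real API, but the 'Model' class, the 'deviceCode_ptx' symbol (the PTX string that CMake embeds for deviceCode.cu), and the exact launch-params layout are just assumptions of this sketch, not the actual owlDVR code:

// rough sketch of the host-side setup; 'Model', 'deviceCode_ptx', and the
// LaunchParams layout are assumptions of this sketch
#include <owl/owl.h>
#include <owl/common/math/vec.h>
#include <cstdint>
#include <vector>
using namespace owl;

extern "C" char deviceCode_ptx[];   // PTX embedded by CMake (assumed)

// minimal stand-in for the 'Model' class mentioned in the text
struct Model { vec3i dims; std::vector<float> voxels; };

struct LaunchParams {
  uint32_t *fbPointer;  vec2i fbSize;
  float    *volumeData; vec3i volumeDims;
  vec3f     camera_org, camera_dir_00, camera_dir_du, camera_dir_dv;
  float     dt;
};

OWLVarDecl launchParamsVars[] = {
  { "fbPointer",  OWL_BUFPTR, OWL_OFFSETOF(LaunchParams,fbPointer) },
  { "fbSize",     OWL_INT2,   OWL_OFFSETOF(LaunchParams,fbSize) },
  { "volumeData", OWL_BUFPTR, OWL_OFFSETOF(LaunchParams,volumeData) },
  { "volumeDims", OWL_INT3,   OWL_OFFSETOF(LaunchParams,volumeDims) },
  { "camera_org", OWL_FLOAT3, OWL_OFFSETOF(LaunchParams,camera_org) },
  /* ... remaining camera entries and 'dt' ... */
  { /* sentinel */ nullptr }
};

OWLContext context; OWLParams lp; OWLRayGen rayGen;

// done once, in the viewer's constructor:
void createOWLState(const Model &model)
{
  context = owlContextCreate(nullptr,1);
  OWLModule module = owlModuleCreate(context,deviceCode_ptx);
  lp     = owlParamsCreate(context,sizeof(LaunchParams),launchParamsVars,-1);
  // everything lives in the launch params, so the raygen itself has no vars:
  rayGen = owlRayGenCreate(context,module,"renderFrame",0,nullptr,0);

  // upload the procedurally generated N^3 float volume, and hook it up:
  OWLBuffer volumeBuffer
    = owlDeviceBufferCreate(context,OWL_FLOAT,
                            model.dims.x*size_t(model.dims.y)*model.dims.z,
                            model.voxels.data());
  owlParamsSetBuffer(lp,"volumeData",volumeBuffer);
  owlParamsSet3i(lp,"volumeDims",model.dims.x,model.dims.y,model.dims.z);

  owlBuildPrograms(context);
  owlBuildPipeline(context);
  owlBuildSBT(context);
}

// Viewer::resize(): (re-)create the frame buffer:
void resized(const vec2i &fbSize)
{
  OWLBuffer frameBuffer
    = owlHostPinnedBufferCreate(context,OWL_INT,fbSize.x*(size_t)fbSize.y);
  owlParamsSetBuffer(lp,"fbPointer",frameBuffer);
  owlParamsSet2i(lp,"fbSize",fbSize.x,fbSize.y);
}

// Viewer::cameraChanged(): push the new camera into the launch params:
void cameraChanged(const vec3f &org /*, ... */)
{
  owlParamsSet3f(lp,"camera_org",org.x,org.y,org.z);
  /* ... same for camera_dir_00, camera_dir_du, camera_dir_dv, and dt ... */
}

// Viewer::render(): launch one frame:
void render(const vec2i &fbSize)
{
  owlLaunch2D(rayGen,fbSize.x,fbSize.y,lp);
}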

Then take the deviceCode.cu, and start filling out the raygen program: create a ray from the camera parameters that I put into the launch params, intersect ray with the bounding box of the volume, step the ray in dt’s through the resulting [t0,t1] interval, add a tri-linear interpolation on the buffer holding the volume data; put the result into a hand-crafted procedural “transfer function” (pretty much a clamped ramp function), and get this (and of course, you can interactively fly around in this):
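
For the device side, that raygen program conceptually looks something like the following sketch – again an illustration rather than the actual code: the LaunchParams layout matches the host sketch above, and the volume is simply assumed to occupy [0,volumeDims] in world space:

// deviceCode.cu - sketch of the ray-marching raygen program; helper names,
// the ramp range, and the world-space convention are assumptions
#include <owl/owl_device.h>
#include <cstdint>
using namespace owl;

struct LaunchParams {   // same layout as on the host, see sketch above
  uint32_t *fbPointer;  vec2i fbSize;
  float    *volumeData; vec3i volumeDims;
  vec3f     camera_org, camera_dir_00, camera_dir_du, camera_dir_dv;
  float     dt;
};
extern "C" __constant__ LaunchParams optixLaunchParams;

// clamped fetch from the linear voxel buffer
inline __device__ float fetch(int x, int y, int z)
{
  const LaunchParams &lp = optixLaunchParams;
  x = min(max(x,0),lp.volumeDims.x-1);
  y = min(max(y,0),lp.volumeDims.y-1);
  z = min(max(z,0),lp.volumeDims.z-1);
  return lp.volumeData[x + lp.volumeDims.x*(y + size_t(lp.volumeDims.y)*z)];
}

// manual tri-linear interpolation of the eight voxels surrounding 'p'
inline __device__ float sampleVolume(const vec3f &p)
{
  int   ix=int(floorf(p.x)), iy=int(floorf(p.y)), iz=int(floorf(p.z));
  float fx=p.x-ix, fy=p.y-iy, fz=p.z-iz;
  float v00 = (1.f-fx)*fetch(ix,iy,  iz  ) + fx*fetch(ix+1,iy,  iz  );
  float v10 = (1.f-fx)*fetch(ix,iy+1,iz  ) + fx*fetch(ix+1,iy+1,iz  );
  float v01 = (1.f-fx)*fetch(ix,iy,  iz+1) + fx*fetch(ix+1,iy,  iz+1);
  float v11 = (1.f-fx)*fetch(ix,iy+1,iz+1) + fx*fetch(ix+1,iy+1,iz+1);
  float v0  = (1.f-fy)*v00 + fy*v10;
  float v1  = (1.f-fy)*v01 + fy*v11;
  return (1.f-fz)*v0 + fz*v1;
}

// hand-crafted "transfer function": a clamped ramp, in color and opacity
inline __device__ vec4f transferFunction(float f)
{
  float v = fminf(fmaxf(f,0.f),1.f);   // assumed data range [0,1]
  return vec4f(v,v,v,0.1f*v);
}

// one slab of the ray/box test, shrinking the [t0,t1] interval
inline __device__ void slab(float org, float dir, float lo, float hi,
                            float &t0, float &t1)
{
  float inv = 1.f/dir, tn = (lo-org)*inv, tf = (hi-org)*inv;
  if (tn > tf) { float tmp=tn; tn=tf; tf=tmp; }
  t0 = fmaxf(t0,tn); t1 = fminf(t1,tf);
}

OPTIX_RAYGEN_PROGRAM(renderFrame)()
{
  const LaunchParams &lp = optixLaunchParams;
  const vec2i pixel = getLaunchIndex();

  // generate a primary ray from the camera stored in the launch params
  const vec2f s((pixel.x+.5f)/lp.fbSize.x,(pixel.y+.5f)/lp.fbSize.y);
  const vec3f org = lp.camera_org;
  const vec3f dir = normalize(lp.camera_dir_00
                              + s.x*lp.camera_dir_du
                              + s.y*lp.camera_dir_dv);

  // intersect with the volume's bounding box -> [t0,t1] interval
  float t0 = 0.f, t1 = 1e20f;
  slab(org.x,dir.x,0.f,float(lp.volumeDims.x),t0,t1);
  slab(org.y,dir.y,0.f,float(lp.volumeDims.y),t0,t1);
  slab(org.z,dir.z,0.f,float(lp.volumeDims.z),t0,t1);

  // front-to-back ray marching in steps of dt
  vec4f color(0.f,0.f,0.f,0.f);
  for (float t = t0; t < t1 && color.w < .98f; t += lp.dt) {
    const vec3f p      = org + t*dir;
    const vec4f sample = transferFunction(sampleVolume(p));
    color += (1.f-color.w)*sample.w*vec4f(sample.x,sample.y,sample.z,1.f);
  }
  lp.fbPointer[pixel.x + lp.fbSize.x*pixel.y]
    = make_rgba(vec3f(color.x,color.y,color.z));
}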

Not really super impressive as a volume renderer per se …. but finally looking at the clock, the whole project took slightly less than two hours, on nothing more than a laptop, while sipping a coffee. And full disclosure: these two hours also include helping my son with some linear-algebra math homework (yes, covid-homeschooling is just great…).

Of course, I've now written a ton of other OWL samples myself, so I've gone through the motions of creating an OWL project before, and knew exactly what to do – somebody that's never done that before might take a while longer to figure out what goes where. Still, I was surprised by just how quickly that worked – after all, this is not just a simple blank window, but already includes volume generation, upload, and a more or less complete ray marcher with transfer function etc. … so the actual time spent on writing OWL code proper is even way less than those two hours.

Stage 2: Fleshing it out

After this "first light" was done, I spent another two hours fleshing this sample out a bit more: first adding CUDA hardware texturing for the volume and the transfer function, then later some actual model loaders for various formats, and some prettier shading via gradient shading. Those two hours were roughly spent in equal parts on

a) creating textures, in particular the 3D volume texture: the transfer function texture was simple, because OWL already supports 2D textures, so that part was trivial; for the volume texture, however, I first had to figure out how CUDA 3D textures even work, which took a while.
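
For reference – since this is exactly the part I had to go figure out – the 'dance' for getting a float volume into a CUDA 3D texture looks roughly like this (a minimal sketch using the plain CUDA runtime API; error checking omitted):

#include <cuda_runtime.h>

cudaTextureObject_t createVolumeTexture(const float *hostVoxels,
                                        int nx, int ny, int nz)
{
  // 3D textures live in a cudaArray with a matching channel description
  cudaChannelFormatDesc desc = cudaCreateChannelDesc<float>();
  cudaExtent extent = make_cudaExtent(nx,ny,nz);
  cudaArray_t volumeArray;
  cudaMalloc3DArray(&volumeArray,&desc,extent);

  // copy the linear host data into the (pitched) 3D array
  cudaMemcpy3DParms copy = {};
  copy.srcPtr   = make_cudaPitchedPtr((void*)hostVoxels,
                                      nx*sizeof(float),nx,ny);
  copy.dstArray = volumeArray;
  copy.extent   = extent;
  copy.kind     = cudaMemcpyHostToDevice;
  cudaMemcpy3D(&copy);

  // create the texture object: tri-linear filtering, clamped, normalized coords
  cudaResourceDesc resDesc = {};
  resDesc.resType         = cudaResourceTypeArray;
  resDesc.res.array.array = volumeArray;

  cudaTextureDesc texDesc = {};
  texDesc.addressMode[0] = texDesc.addressMode[1] = texDesc.addressMode[2]
    = cudaAddressModeClamp;
  texDesc.filterMode       = cudaFilterModeLinear;
  texDesc.readMode         = cudaReadModeElementType;
  texDesc.normalizedCoords = 1;

  cudaTextureObject_t tex = 0;
  cudaCreateTextureObject(&tex,&resDesc,&texDesc,nullptr);
  return tex;   // sample in device code via tex3D<float>(tex,u,v,w)
}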

b) creating the actual transfer functions and color maps – I liberally stole some of Will Usher's code from our exabricks project for that, but then spent half an hour finding a bug in re-sampling a color map from N to M samples. (I guess the coffee had worn off by that time :-/ ).
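
(For the record, the resampling itself is conceptually trivial – something along the lines of the sketch below, which is my own illustration rather than the actual exabricks code; which of course makes hiding a bug in there all the more embarrassing.)

#include <algorithm>
#include <cmath>
#include <vector>
#include <owl/common/math/vec.h>
using owl::vec4f;

// re-sample an RGBA color map with N entries into one with M entries,
// via piece-wise linear interpolation over the map's [0,1] parameter range
std::vector<vec4f> resampleColorMap(const std::vector<vec4f> &in, int M)
{
  const int N = (int)in.size();
  std::vector<vec4f> out(M);
  if (N == 1) { std::fill(out.begin(),out.end(),in[0]); return out; }
  for (int i = 0; i < M; i++) {
    // position of output sample i in the input's index space
    const float x  = (M == 1) ? 0.f : i*float(N-1)/float(M-1);
    const int   lo = std::min(int(std::floor(x)),N-2);
    const float f  = x-lo;
    out[i] = (1.f-f)*in[lo] + f*in[lo+1];
  }
  return out;
}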

c) adding loaders, fleshing out float vs uint8 format differences, adding gradient shading, etc.

Altogether, that second stage took another two hours (and in that case this does include the time to help our "tile guy" unload his truck for our bathroom remodel…). Here are some images that show the progression of that second stage: left, the original first-stage volume with a simple hard-coded ramp transfer function and manual tri-linear interpolation; then the first picture with CUDA textures and a real color map in the transfer function; then after adding a loader for the 302^3 float "heptane" model; and on the right, after adding uint8 volumes and gradient shading, with the 2048^2×1920 LLNL Richtmyer-Meshkov-250 model (I had to move to my 3090 for the latter – the laptop didn't have enough memory).

The main thing I haven’t added yet is a real transfer function editor widget, and of course, as with any GUI code that may well end up taking more time than all the above combined… but as a “proof of concept” I’d still argue this experiment was quite successful, because the one thing that I did not have to spend much time on in the entire project was anything involving OWL, SBTs, programs, buffers, etc.

One valid objection that any observant reader could raise is that in this sample I didn't actually use anything that really required RTX and OptiX – I did not create a single geometry, and could have done the same with plain CUDA. This is true (obviously), and in retrospect I might have picked another example. However, adding the few additional lines of code to create some triangle meshes at this point would indeed be trivial, and would almost certainly take less than 5 minutes… In fact, I might do just that for adding space skipping: all I'd need is to create some rough description of which regions of the volume are active, then trace rays against that to find entry- and exit-points, and done.

Where to go from here?

As mentioned above, this little experiment started as a little exercise in "I want to know for myself"; and the main motivation for this blog post was to share that experience. However, literally while writing this blog post I also realized how useful it would be for some users if I documented the individual steps of this toy project in a bit more detail, ideally in enough detail that somebody interested in OWL and/or OptiX could follow the steps one by one, and end up with something that would be "my first OWL / OptiX program, in less than a day"….. That would indeed make a lot of sense; but that'll have to wait for another post…

Introducing OWL: A Node Graph Abstraction Layer on top of OptiX 7

Finally, the day’s arrived: I’m hereby officially “introducing” OWL to the world.

This post is long overdue – I've already mentioned / hinted at OWL in previous posts (in fact, we have already used and cited it in several papers), but have never actually written a dedicated post about it: initially, I didn't want to write about it before it was "ready enough" for public consumption; then later, I got too busy using OWL for my own projects … and the more we did with it, and the more features it supported, the harder it got to find a good place to start talking about it.

Anyway; I finally tagged OWL as version 1.0 last week, so it’s about time to write a bit about it.

What is OWL?

OWL is, as the title of this post suggests, a "node graph" abstraction layer on top of OptiX 7. To be clear, OWL is not a "renderer" on top of OptiX (such as PBRT is), nor is it a real-time "rendering engine" on top of OptiX 7 (as, for example, OSPRay is one on top of Embree); instead, its sole purpose is to take all the features of OptiX 7.x, and add a little bit of "magic" on top of them, in order to:

a) make it easier to get started with – and to properly use – OptiX, RTX technology, and hardware accelerated ray tracing; in particular for those who are not the kind of "Ninja"-level RTX practitioners that can define the proper data layout and element ordering of a Shader Binding Table (SBT) at 3 am. Somebody recently described OWL as "training wheels for using RTX"; and though I think it's more than that, it's still a good picture.

b) make it more productive to use OptiX 7 and RTX ray tracing, even for those that do dream about SBTs at night (yes, yes, I know…) – by automating some things that do need to get done in any OptiX program, but that the user really shouldn't need to worry about, such as building the SBT, acceleration structures, etc. By making some of the more common (but time-consuming and bug-prone) tasks simpler, OWL lets the user concentrate on what he/she really wants to do (the shader programs that implement the renderer!), not the care and feeding of device buffers, acceleration structures, and SBTs. E.g., once all geometries and groups have been created, building a shader binding table in OWL (even if you have no clue what it is) is as simple as calling "owlBuildSBT()".

Now before I go into some more detail, here are just a few sample pictures that have been rendered with OWL over the last few weeks…. or, as I should probably more accurately say, pictures that "have been rendered with several different RTX accelerated renderers built with OWL in the last few weeks":

Why do we really need something like this?

If you wanted to write a GPU ray tracer a few years ago, you had two options: Either use OptiX, or write your own in CUDA. Today, you not only have much faster ray tracing thanks to hardware accelerated ray tracing, you also have more choice in the sense that you could also use DirectX Ray Tracing, or Vulkan Ray Tracing extensions. With all this choice, the question is why one would need anything more than that.

To fully understand why one would need something like OWL, it is useful to take a look back at OptiX before RTX and OptiX 7 came around: Initially – and up to version 6.5 – OptiX was a rather high-level abstraction library, where it was quite easy to get something going rather quickly, by writing the desired closest-hit, any-hit, ray-gen, etc, programs, then defining a few "attributes", "buffers", and "variables" in the device code that these programs could use. One would create a usually rather simple node graph on the host (that would define, say, a triangle mesh and an acceleration structure), set a few variables to parameterize the device-side programs and geometries, and done. Sure, it still took some time to wrap one's head around what all these programs were for, and how to map one's conceptual ray tracer to these ray-gen/closest-hit/etc programs …. but once you had that figured out, the mechanics of creating this "pipeline" of ray tracing programs – and mapping it to the device – were relatively simple. In particular, you wouldn't even need to know what, say, a "Shader Binding Table" even was (let alone how to build it), or which data structures would need to get built when, etc … OptiX 6 would do all that, fully automatically.

The downside of that approach was that OptiX 6 was pretty opaque: once you started becoming a power-user you might end up with OptiX 6 doing things that you didn’t intend it to do, or at times you’d rather it wouldn’t, etc… and because it was closed source, you could easily end up not even knowing why it sometimes did what it did, or what to do to avoid it. So very easy to get started, but sometimes too opaque for power-users.

When OptiX 7 came around, it made short shrift of this problem, by pretty much stripping away all the "convenience" functionality, all the node graph, etc, and instead exposing the (then newly added) RTX technology at what is pretty much a driver-level abstraction (which, by the way, is similar to the DXR and Vulkan RT abstraction levels). In that new "driver-level" abstraction the user has full control over everything, including what CUDA streams get used at what point in time, which memory allocations happen where, with which type of CUDA memory, etc. This change unlocked a whole new level of performance that users have since made impressive use of, and that was the key behind the last two years' rapid developments in high-end GPU ray tracing. For a power-user (well, at least for me!), the switch from OptiX 6 to OptiX 7 was an experience that was just amazing, plain and simple.

The downside of this change to a driver-level API was that it has become much harder to get started with OptiX (or DXR or VKR, for that matter): instead of setting up a simple node graph on the host, you now have to understand the intricacies of acceleration structures, of setting up build inputs and building/compacting/refitting acceleration structures; of compiling programs and pipelines, building shader binding tables, etc. With OptiX 7 this is still much easier than with, say, DXR or VKR, but for the uninitiated it can still be daunting…. and even for those that do by now fully understand all these low-level technical details, their low-level nature leaves a lot of opportunities to shoot oneself in the foot by overlooking something or committing copy-and-paste bugs. This can easily eat up time that could more productively be spent somewhere else.

What OWL aims to do is help users bridge this gap between productivity and convenience on one hand, and performance and low-level control on the other: Like OptiX 6, it offers a node graph abstraction in which the user can create and parameterize relatively high-level entities like “Buffers”, “Geometries”, “Groups” (ie, acceleration structures), and “LaunchParams”, with OWL then doing all the menial tasks of managing the required device memory, building/compacting/refitting the acceleration structures, setting up launch constants, handling multi-device and async launches, and in particular building programs, pipelines, and, yes, the infamous shader binding table.

While thus clearly aiming for convenience, OWL also borrows a lot of the "give the user control" philosophy from OptiX 7: In particular, OWL is a much "thinner" abstraction layer on top of OptiX 7 than OptiX 6 was: there is no magic compiler technology anywhere in OWL, and all device-side shader code is pretty much exactly the same OptiX code as without OWL (plus a few convenience functions). OWL is also much more "explicit" than OptiX 6 was, in that, for example, it is the user who says when the SBT gets built. Third, OWL is completely "transparent" in the sense that, unlike OptiX 6, it is completely open source (https://github.com/owl-project/owl) – so even if he or she may never want to touch any code in OWL itself, the user can still always see exactly what OWL does at any point in time, and what to do to avoid bottlenecks or crashes. Finally, OWL explicitly aims to allow easy and efficient "inter-op" with CUDA: one can, for instance, at any time query the device addresses of buffers or the CUDA streams used for a launch …. so it is, for example, absolutely possible to launch a CUDA kernel that reads from or writes to any of OWL's data, or to launch into the same CUDA streams used by OWL, to run CUDA kernels asynchronously to OWL launches, etc.
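
As a small, concrete illustration of that kind of CUDA inter-op (the 'tonemap' kernel and the 'accumBuffer' name are made up for this sketch, but owlBufferGetPointer() is the real API call):

// a CUDA kernel operating directly on OWL-managed memory; 'tonemap' and the
// buffer name 'accumBuffer' are purely illustrative
__global__ void tonemap(float4 *pixels, int count)
{
  int i = threadIdx.x + blockIdx.x*blockDim.x;
  if (i >= count) return;
  pixels[i].x = sqrtf(pixels[i].x);
  pixels[i].y = sqrtf(pixels[i].y);
  pixels[i].z = sqrtf(pixels[i].z);
}

// ... on the host, after (or asynchronously to) an OWL launch:
float4 *pixels = (float4*)owlBufferGetPointer(accumBuffer,/*deviceID:*/0);
int     count  = fbSize.x*fbSize.y;
tonemap<<<(count+127)/128,128>>>(pixels,count);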

How “real” is this?

The web (or even just the github section of it) is full of libraries at varying levels of completeness, often abandoned, or doing only exactly what the project's author needed it to do, and hopelessly incomplete for anything else. And yes, I'm absolutely sure that there will also be some bugs or missing features in OWL that simply haven't been found yet, because nobody has yet used it in the specific way that would trigger the respective bug or missing feature. In particular, OWL is not an official "product" with a big team of engineers whose sole job it is to maintain this code – it is a library I originally wrote because I needed it myself, and that has simply grown to be much more than that.

At this stage OWL is still relatively new, and will undoubtedly still have some teething problems. However, it is now more than two years in the making, and at least judging from the last few months the teething problems seem to be mostly over. OWL has now been used successfully by several different users, and for several very different kinds of applications; in fact, I don't think I've done a single project in the last year that did not get easier by using it, and the list of things that had to be newly added over the last few months is rather small. To give an idea of how far OWL has come, and what it can already do: here is a brief selection of what has already been publicly written about (or is otherwise publicly accessible) – all the pictures here have been rendered with renderers built on OWL.

"Mowlana". Mowlana started out as a sandbox for comparing/stress-testing different rendering back-ends (e.g., OptiX 6 vs OptiX 7), but has since developed into a bit more of a "Moana on OptiX" viewer. I've recently written about this (https://ingowald.blog/2020/10/26/moana-on-rtx-first-light/), and though it's clearly still "work in progress", virtually all the work of the last few weeks and months has revolved around things like data wrangling, with the entire OptiX component done by OWL, period.

OWL Samples. OWL itself comes with a few intentionally simple and self-contained samples; simple as they are, these already demonstrate features like triangle and user geometry, different buffer and acceleration structure types, multi-GPU (pretty much free in OWL), different ray types, multi-level instancing, refitting, motion blur, etc. Here are a few screenshots showing an OWL version of Pete Shirley's "Ray Tracing in One Weekend" (plus some extensions, just because with OWL it was so easy to add them :-)), an OWL version of our OptiX 7 Siggraph course viewer, and some simple ones with multi-level instancing, IAS updating/refitting, and motion blur:

"Exabricks": Arguably the prettiest images I've been involved in in a while (though most of the credit for the right transfer functions etc goes to Will, Nate, and Stefan…) – our "ExaBricks" Adaptive Mesh Refinement (AMR) rendering project that we presented last week at IEEE Vis – with all the actual ray tracing and rendering of course done in OWL. (OWL also automatically handled the multi-GPU rendering that this challenging scene could really make use of.)

Paper: "Ray Tracing Structured AMR Data Using ExaBricks". I Wald, S Zellmann, W Usher, N Morrical, U Lang, and V Pascucci. IEEE TVCG (Proceedings of IEEE Vis 2020).
https://www.willusher.io/publications/exabrick

“OWL Tubes”: This started as a proof of concept that RTX ray transforms can also be used to accelerate intersections for thin primitives like tubes and hair (see our HPG 2020 Paper on this through this link) … but even for this rather low-level operation, it was eventually easier to do it through OWL than through OptiX natively. For these pictures, too, all the credit goes to the other authors of the mentioned paper:

"Unstructured Mesh Rendering": Though some of the latest results that produced these particular images are not actually published yet, here are a few screenshots from our latest unstructured-mesh rendering, on the NASA Mars Lander Retropulsion Study data set:

We also used OWL for some of our recent papers on fast tet-mesh and unstructured-mesh point location (and some applications of that), but I’ll skip these for now.

"OWL Prime, and Primer": While the "main" interface to OptiX 6 was the node graph layer – with closest hit, intersection, miss, and raygen programs etc – it also came with an additional API that allowed users to set up only the geometry part of the scene, and then trace entire wave-fronts of rays, and get back wave-fronts of intersections. This abstraction comes with a lot of caveats (that I will not go into here), but – based on feedback I got – was still quite useful for a lot of users. For OWL, I developed a similar library called "OWL Prime" – though by now I mostly refer to it as "Prime Owl", as if it were a feathery animal – that offers the same abstraction level, including asynchronous launches, automatic (and async) upstreaming/downstreaming of ray/hit streams (if the data lives on the host), etc.

And just to put that library through its paces I also wrote a "little" wavefront renderer (fittingly called "primer") that, by now, has stolen almost all the shading / material / fresnel / distribution / sampling / microfacet / etc code from PBRT – doing the PBRT shading in CUDA, and all the tracing in prime-owl. Again, here are a few proof-of-concept pics, these ones fresh from Sunday night:

This is clearly still “work in progress” (PBRT isn’t ported in a weekend …), but as a proof of concept it was more than successful: Literally all the remaining work is entirely on the shading/sampling/materials side – I didn’t even have to change anything in prime-owl, let alone in OWL.

I also have some version of Pete Shirley's "Ray Tracing in One Weekend" scene where all the shading is done on the host (using a literal copy of Pete's code), and all the tracing is done asynchronously on the GPU (with prime-owl using OWL's launches and support for CUDA interop to do this, of course).

VisII: VisII is a python-scriptable, ray tracing based "scene imaging interface" (ie, a renderer) that allows a python user to easily create, modify, and render both photo-realistic images as well as certain derived images like normals, object IDs, etc. The main use case for this is in robotics and other AI / Machine Learning based algorithms, where users can easily create high-quality images in python, including the additional "labels" that many algorithms require. Here again are a few pictures:

Of particular interest to this post is that VisII was the first renderer that I did not write at least a significant portion of myself; it is almost entirely written by Nate Morrical (latest code is on https://github.com/owl-project/ViSII), and though initially there were certainly things missing in OWL that I had to add for some of his more advanced use cases – in particular, I had to add buffers of textures, different texture settings, and motion blur, if I remember correctly – it is nevertheless a non-trivial application that could be done by somebody other than the author of the underlying library… which is always a good litmus test.

There are, in fact, a few more projects using OWL, but for the sake of space I’ll leave those out for now.

Now finally, how does one actually use it??

Oh-kay .. this has been a loong post; there will, eventually, be much more to write about, but at least for now, I hope this has given a rough idea of what OWL is, how it works, and what it can already do. If, as I hope, this has made you at least curious, then your next question will likely be "well, but what does this actually look like to a potential user?".

For that, I had initially written a rather detailed "primer" on all the OWL concepts, with sample code, etc, but that turned out to be a bit too long for a single post, so I'll defer it to a later post. For now, all I'll do is provide a tiny (and obviously incomplete!) example of creating a simple scene of a set of triangle meshes (and a single instance thereof), very similar to the kind of content that the "optix7course" sample would have used. Leaving out all the pieces of loading data, etc, this would look roughly like this:

// declares what the device-side triangle mesh data looks like; used in
// owlGeomTypeCreate(), which is not shown here
OWLVarDecl triMeshVars[] = {
  { "diffuseColor",OWL_FLOAT3,OWL_OFFSETOF(MyTriMeshClass,diffuseColor) },
   ...
};
...
OWLGroup createWorld(host::Scene *scene)
{
  std::vector<OWLGeom> geoms;
  for (auto mesh : scene->meshes) {
    // first, create the vertex/index buffers:
    OWLBuffer vertices
       = owlDeviceBufferCreate(context,OWL_FLOAT3,
               mesh->vertices.size(), mesh->vertices.data());
    ...
    // second, create the geometry, and assign buffers
    OWLGeom geom = owlGeomCreate(context,triMeshGeomType);
    owlTrianglesSetVertices(geom,vertices,....);
    owlTrianglesSetIndices(geom,indices,...);
    
    // third, assign user data (_user_ declared what that is!)
    owlGeomSet3f(geom,"diffuseColor",mesh->diffuseColor.x,...);
    ...
    geoms.push_back(geom);       
  }
  // build bottom level accel:
  OWLGroup blas 
    = owlTrianglesGeomGroupCreate(context,geoms.size(),geoms.data());
  owlGroupBuildAccel(blas);

  // build top-level/instance accel struct:
  OWLGroup ias
    = owlInstanceGroupCreate(context,1,&blas);
  owlGroupBuildAccel(ias);

  //done:
  return ias;
}

Of course, I left out a whole lot of stuff here – context creation, definition of the geometry type, creating a module that contains the device programs, the device-side closest-hit and other device programs themselves, the launch call, etc. However, if you can read the above code then all these other things I left out should be rather simple to use, too. For example, once all geometries and accel structs are built, creating the shader binding table is a single call:

owlBuildSBT(context); // that's it ...

Similarly, assuming a launch params object has already been created, launching a frame is as simple as this:

void render() {
   // note _what_ variables are in the launch params is _user_'s choice!
   owlParamsSetGroup(myLaunchParams, "world", world);
   owlParamsSetBuffer(myLaunchParams, "fb",frameBuffer);
   ...
   owlLaunch2D(rayGenProgram, fbSize.x, fbSize.y, myLaunchParams);
   float4 *pixels = (float4*)owlBufferGetPointer(frameBuffer,0);
   ...
} 

Again, I’ve left out a whole lot of stuff; I’ll write a more detailed “primer” on OWL soon, but for this post, all I wanted to do is give you an idea that OWL exists, what it is, and that it is now ready for use.

If you're interested in learning more: first, have a look at the OWL repo on github, namely https://github.com/owl-project/owl. In particular, OWL comes with a set of samples that are intentionally built in an almost "tutorial" style, from very simple command-line ones that create a single triangle mesh, to more advanced ones with interactive model viewers, etc; these are all part of the github repo, under https://github.com/owl-project/owl/tree/master/samples. If you just want to get an idea of how OWL works, most of these should be rather self-explanatory, though I'd suggest starting with the simple command-line ones, so you don't get distracted by any windowing code. And finally, I just started creating a github wiki as well, to provide some pages with explanations of what variables and launch params are, how to use them, etc: https://github.com/owl-project/owl/wiki. Going forward, this wiki will likely become the main entry point for documentation, how-to's, and frequently asked questions.

Wrap up

If this post did entice you to "maybe" give OWL a try, please feel free to do so; it's completely free and Apache "do-as-you-please" licensed. If you run into issues, or have any specific "but how do I …" questions, most certainly let me know, and/or file an issue on the github issue tracker. And if you end up using OWL as "training wheels" to get the hang of RTX, and then later on decide to go OptiX native – maybe even by copying useful pieces from OWL and discarding the rest – then that's perfectly fine, too. I really do hope you'll like it – there's a tremendous amount of work in OWL, and I personally couldn't imagine not using it anymore … so I really do hope others will find it as useful as I do. And finally: if you do anything cool with it, let me know!

With that: back to work 🙂


PPS: Just because I know the question will come up: OWL stands for "OWL Wrappers Library", because that's exactly what it originally started out as: namely, a set of simple "wrappers" that would help with things like GPU memory allocation, up/download, building acceleration structures, setting up build inputs, etc. It just turned out that this abstraction level wasn't nearly enough to address what was the real elephant in the room (the beloved SBT… :-/ ), so by now that name is a total misnomer …. but it sticks, so OWL it still is.

“The Elephant on RTX” – First Light. (or: “Ray Tracing Disney’s Moana Island using RTX, OptiX, and OWL”)

TL/DR: After Matt’s original “Swallowing the Elephant” with PBRT, and my own “Digesting the Elephant” with OSPRay, I finally got some “first light” on my “Moana on OptiX” sandbox; not fully done yet, but good enough to at least show a glimpse of. I’ll briefly say something about the voyage I took to get there, and will give a brief overview of what’s currently implemented, and what’s still missing.

Moana on OptiX/RTX: Background

As already mentioned/hinted at last week, I had actually been working on "Moana on OptiX", on and off, since the very day I joined NVidia… which by now is over two years ago. In fact, there's more than one such sandbox: at some point I had an OptiX 6 based Moana viewer (no RTX), an OptiX 6.5 one (with RTX), an OptiX Prime one (pre-7, obviously), an optix-7-alpha one (way before 7 got released); then another pure CUDA one (with my own CUDA ray tracing core), and even one that ran the same CUDA shading code (with a few #define's and template magic) on both CUDA and on the CPU, with either embree or OptiX Prime as backends … and probably a few more that I've by now forgotten about. I also had a version that used NVLink to split the model over multiple GPUs; one that used managed memory to "page" to the host; and even one that can run mpi-data parallel (with full path tracing!) across different GPUs and/or several different GPU nodes….

Of course, most of those Moana viewers were rather "prototypical" in the sense that they all looked at different aspects of the problem. For example, the CUDA-only version was heavily optimized towards lowest possible memory consumption, etc. Moreover, several of these sandboxes tried to be useful for both Moana and other heavily instanced PBRT models like "landscape" and "ecosys" – but since these all use very different materials, textures, lights, etc, trying to support them all ended up being a giant distraction….

All these different sandboxes were nice and well …. but none of them ever went all the way, and not one ever rendered the model in the way you'd expect it to look: either the respective sandbox didn't have a path tracer, or it didn't include the water, or it didn't use the lights, or the textures, or the curves, or … something that made it "un-showable". That's not to say that I didn't ever have any of these individual components: in fact, I had some reservoir-sampling based direct lighting (from both quad lights and HDR envmap) over a year ago; I also at some point borrowed the Disney BRDF implementation from Will "TwinkleBear" Usher's ChameleonRT ray tracer (https://github.com/Twinklebear/ChameleonRT); I had baking of the PTex textures in my github pbrtParser repo (https://github.com/ingowald/pbrt-parser) a long while back; Dave Hart gave me his tessellation code for the PBRT curves a loooong time ago, etc …. I just never had all of those particular pieces in the same viewer at the same time.

Anyway – I still don't have all these pieces together right now, but triggered by Chris' blog post last week I at least sat down and started pulling together all the pieces I still had, trying to get my Moana back to rendering on a GPU. In particular, I finally sat down and extended my path tracer to also handle water, by pretty much stealing Pete Shirley's "Dielectric" material from his "Ray Tracing in One Weekend" series (because yes, I have no clue about things like Schlick, Fresnel, etc) …. and as of Saturday night, I finally have some "first light", including something that at least looks like water:

BTW: The above renders quite interactively; currently at something like 25-ish fps on an RTX 8000 (at 2560×1080), and using about 32 of the 48GB of RAM. I (obviously) use progressive refinement, so while you move around the image is somewhat more noisy – but by the time you can even click on the screenshot app you pretty much get the above. (I'll release the code at some point, once I've cleaned it up a bit).

Moana on the GPU: Main Challenges

As several previous posts/articles have pointed out, there are a multitude of challenges in this model. I’d particularly point to Matt Pharr’s original “Swallowing the Elephant” series; to my own blog post (and accompanying paper) on “Digesting the Elephant”; and to Chris Hellmuth’s recent “GPU-Motonui” blog.

Most of these issues revolve around the sheer amount of data involved in this model, and in particular the data wrangling required to even get it loaded into a form in which you can start rendering it. For the GPU version, however, a few of these particularly stuck out:

  • Textures: All the textures in Moana are in Disney’s PTex format …. but there’s no PTex on the GPU, yet (at least, none that I could find). My first versions rendered without textures, but without textures, this model looks really different, as you can see by playing with this fancy new wordpress “image compare” feature:
  • Envmap: The envmap not only comes in two forms (EXR for HDR, and png for the default PBRT model), it's also super-important to exactly match the orientation used in the pbrt file: in particular, the envmap is tilted to the back to account for the fact that the default camera points downwards; if you don't do that you see the envmap's lower "ground" half peeking through between the ocean and the clouds, which is really disturbing. I had to literally copy pieces of Matt's PBRT code over to make it match :-/
  • Instance count: The model has close on 40 million instances (even if you make sure to only create those you need ….) – but early versions of OptiX only allowed 16 million per instance BVH. Early versions of OptiX didn't allow multi-level instancing, either, so at times I had to have three "root" BVHes to trace into serially. Later versions used two-level instancing, with one root IAS over smaller second-level IASes …. which of course raises the question of how you best partition 40 million instances into N groups of less than 16M each …. but I digress – since 7.1, OptiX can handle more than 40 million instances, so this problem is gone.
  • Number of tiny meshes: Though the PBRT file contains some really big meshes, there are also a lot (!) of tiny ones. This was an issue mostly for OptiX 6, but even in OptiX 7 this would create a lot of different build inputs and SBT entries. Even worse, for some of the objects one ends up with a few really large meshes plus a ton of tiny ones, all in the same group/BVH…. which caused some other issues.
  • Water: The water in this scene is actually particularly tricky: not only do you need a water shader in the first place, the water is also – in some parts of the model – modeled twice: there's the main body of water in a giant box (with some low-frequency wave pattern on top), but there's also a second surface with a higher-resolution tessellation of the waves within the default camera frustum. As long as you only use that default camera it'll be OK, but once you move around, and some pixels refract through the water twice, you get some really disturbing pictures.
  • Memory: This model is big. Really big.

Moana on RTX: Current State

In its current state, my sandbox does the following:

  • Textures: To make the ptex textures appear on the GPU I currently use a bake-out tool I wrote for exactly that purpose: for each of the input model's polygon meshes I first re-construct the original quads, and bake out a tiny 16×16 texel "micro-texture" for each of those quads; these then get thrown into a larger texture atlas that gets uploaded as a 2k-x-whatever CUDA texture. During rendering, the material shader then reconstructs the current triangle's texture coordinates within its mesh's atlas, and uses a CUDA bi-linear texture lookup (see the sketch after this list). Total memory for that – at 16×16 texels per quad – is about 3.5GB, which isn't too bad. I'm sure this baking out creates somewhat lower texture quality than PTex, but right now I'm pretty happy with it – it also saves a ton of memory (3.5GB vs 40-ish GB in the original PTex files), and is super-fast ('cause I can use texture hardware …).
  • Materials: Though I did have some Disney BRDF at some point in time (borrowed from Will’s ChameleonRT) I could never make this work with the water. Currently, I use the material’s “specTrans” value to determine if the material is water or not, and use either Pete Shirley’s RTOW “Dielectric” material (for water), or his “Lambertian” (for everything else). For the water I’m tracking whether I’m already in/out of the water, and just pass straight through any second water surface the path may encounter.
  • Lights: I currently use a plain forward path tracer, until the path hits the environment map. All other light sources get ignored, and even for the envmap I currently use only the LDR version, since that's what the original PBRT file uses.
  • Curves: Curves currently get tessellated into triangle meshes during loading. Since this generates a ton of additional geometry I use a very low tessellation rate, but still can’t see any artifacts from that, likely because the input patches are already pretty small.
  • Triangle Meshes: every other geometry in this model is a triangle mesh anyway. To avoid the many small meshes I currently merge all triangle meshes within a PBRT "object" into a single, large triangle mesh, and store, for each triangle in this merged mesh, an index to a tiny struct describing the corresponding sub-mesh's data. This triangle mesh then gets uploaded to OWL, by creating the proper OWLGeom and OWLTrianglesGeomGroup.
    To save on memory I do not use the texcoord and primitiveID arrays from the PBRT file, and simply compute this information on the fly in the CH program.
  • Instances: For older versions of OptiX I had to do some extra steps with multiple BVHes and/or multi-level instancing; in the latest version this is no longer required: I simply create a single list of all instances, throw those into an OWLInstanceGroup, and done.
  • Model import: I use my github pbrtParser project for all model importing – this library allows one to first convert the ascii PBRT model to a binary ".pbf" version (with the exact same data), and loading from this format is a few orders of magnitude faster than from the ASCII version … so very useful. Some of the set-up stages then get done right away on the "scene graph" that this library loads: ie, transforming into a strict single-level instancing model, tessellating the curves into triangle meshes, extracting (and then removing!) the light sources, extracting the default camera pose and screen resolution, etc.
  • OptiX use: The actual OptiX usage in my latest viewer is all through OWL: OWL makes it (so!) much easier to deal with things like buffer uploads, launch params, building of data structures, constructing SBTs, etc, that I wouldn’t want to part with it (well, it got written largely for this very purpose…).
    In particular, having all the low-level OptiX code “hidden” through owl is a great help in debugging: getting this model wrangled is enough of an issue in itself, so knowing one doesn’t even have to look for bugs in things like setting up build inputs or building SBTs is a huge help.
    Other than that, the use of OptiX is pretty straightforward: I create the per-object BLASes and the instance accel struct as described above; there's one closest-hit program for the triangle mesh that "deconstructs" the merged-mesh information, and stores primitive ID, instance ID, material ID, texture ID, etc, in the per-ray data. All textures, materials, etc, first get "serialized" on the host (ie, collected into a single linear array each), then uploaded into an OWLBuffer of the respective OWL_USER_TYPE(DisneyMaterial), OWL_TEXTURE, etc; these buffers then get attached to the global LaunchParams from which they are accessible to both the CH program and the raygen program. All path tracing currently happens in the raygen program.
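
To make the texture-atlas lookup from the first bullet above a bit more concrete, here is a hypothetical sketch of what such a per-quad micro-texture lookup boils down to – the actual layout in my code differs, and 'quadID', the atlas width, and the uv convention are all assumptions of this sketch:

// hypothetical sketch of a per-quad micro-texture lookup in a 2048-wide
// atlas of 16x16 tiles; 'quadID' is the quad's index within its mesh's
// atlas, 'u'/'v' the interpolated [0,1) coordinates within that quad
inline __device__ float4 sampleAtlas(cudaTextureObject_t atlas,
                                     int atlasHeight,
                                     int quadID, float u, float v)
{
  const int tileRes     = 16;
  const int atlasWidth  = 2048;
  const int tilesPerRow = atlasWidth/tileRes;
  const int tileX = quadID % tilesPerRow;
  const int tileY = quadID / tilesPerRow;
  // normalized atlas coordinates of this quad's tile; a real implementation
  // would inset by half a texel (or add gutters) to avoid bilinear bleeding
  // across neighboring tiles
  const float tu = (tileX*tileRes + u*tileRes)/float(atlasWidth);
  const float tv = (tileY*tileRes + v*tileRes)/float(atlasHeight);
  return tex2D<float4>(atlas,tu,tv);
}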

Moana RTX: What’s (still) Missing

OK, with all this implemented, what’s still missing? A lot, actually:

  • Disney BRDF: The current material model I have is "either it's Dielectric, or it's Lambertian" – for this particular model that's actually not too far off, since most of the materials are actually configured to look pretty much like that. However, it would still be useful to have the full Disney BRDF working.
  • HDR Env-map: The env-map should be HDR, but in the code above it isn't – I did have that in the past, but then took the EXR loader out to avoid some windows issues…
  • Quad Lights: The model contains a lot of "accent (quad-)lights", in particular over the beaches; and these give the scene a nice reddish-warm "glow" that makes it look totally different. In the current code the light geometry is already separated from the surface geometry (else they show up as annoying white quads all over the place), but the lights themselves are not yet used. Given the large number of them one has to do some importance sampling, for which I have in the past used some reservoir sampling…. but that code got lost somewhere, so isn't hooked up right now.
  • Next-event Estimation and MIS: Right now the path tracer just traces a ray until it hits a light source; and for the LDR envmap that works just fine … but the moment I make that envmap HDR again, that'll get noisy… so I'll have to (re-)add some NEE and MIS.
  • "Real" Curves: I currently tessellate the curves (palm fronds, mostly) into triangle meshes; but since OptiX 7.2 can also do curves natively I should now be able to save that memory.
  • Denoising: Currently not yet done – mostly because I'm still not sure whether I should do "the right thing" and first add denoising cleanly in owl, and then get it for free here … or rather do the "quick hack" and get it done here first …. we'll see.
  • Animated water: The one piece I’ve never done yet – but in theory, the water in this model is animated…
  • More memory squeezing: I currently use about 32GB of memory for this model – but since last week I have a shiny new 3090, so I'd obviously like to get that below 24GB. There are several obvious ways of doing that; in particular the normal arrays can be encoded with way fewer bits, and right now I'm not even freeing the vertex and index arrays after the BVH build, which since OptiX 7.1 I could actually do (one can query the hit triangle's vertices from OptiX).
  • Model/material fixes: there are several objects in the PBRT model that have obviously broken/missing material data. Most of those I've now found and eliminated, but for some reason I haven't yet identified, my beach and ocean floor surfaces are all gray – which may well be the most annoying visual artifact in the current viewer. This may of course be a bug in my parser/importer, but whatever it is, I haven't found it yet.

Anyway, having worked on this model for so long, it was a real "high" moment on Saturday night to finally see something that looks at least roughly as one would expect.

I’ll be working on those missing features on and off going forward; will update once I get some of those things working.

Fun with Moana…

As I woke up this morning I got greeted by a WordPress-ping that somebody (Chris Hellmuth, from https://www.render-blog.com/) had cross-referenced an earlier blog post of mine, about the first few fun steps of wrangling the Moana Motunui Island Model (see original post here)…. back then still with Embree and OptiX.

Chris' blog talks about the next step of that voyage: making Moana render well on a GPU (and in particular, with OptiX): for his full post, look here: https://www.render-blog.com/2020/10/03/gpu-motunui/. It's an interesting read indeed.

Now looking at this post, I got reminded that I never wrote about my own experiences with this … because of course, the very first thing I did when switching to NVidia two years ago was to start writing an OptiX-based renderer for Moana – there's nothing like a challenging model for finding weak spots and pain points, so of course that's the first thing I did. At the time, getting Moana to run on a GPU wasn't actually all that easy: Turing had only just come out, OptiX 7 wasn't even out yet, and though the – back then just released – RTX 8000 cards did finally have some "real" GPU memory (48GB, to be exact), dealing with a model of that size and complexity still wasn't exactly easy.

The whole journey is actually a story of many fits and starts (and re-starts, and re-re-starts, ….) because this model is so challenging on so many fronts – but still, I did get this to render some two years ago … and never shared any results from that. Initially this was because early versions contained a lot of work-arounds for limitations that have since gone away: For example, OptiX 7 is much more effective at rendering this model than OptiX 6 was; and even OptiX 7 initially had some restrictions that required workarounds (eg, at most 16 million instances). For the latter problem I actually wrote some really cool tools at some point – and most of a never-published paper – about the best way of transforming a more-than-16-million-instance single-level model into a multi-level instancing model…. which is an interesting problem indeed … but I digress.

Another issue I initially had to fight a lot with was memory consumption: For the first version I did indeed need two 8000s coupled with NVLink, at least when loading textures and/or tessellating curves – I could actually replicate the "high-frequency" data like acceleration structures, but vertex positions, shading normals, textures, etc, were shared over NVLink. The latest version can squeeze it all into less than 48GB (at some point I squeezed it into a 24GB RTX 6000!), so if you have more than one card you can actually have them render in parallel …. but again, I digress.

Yet another issue I ended up spending a truly unholy amount of time and brain power on is textures: the original textures for Moana come in Disney's Ptex format, for which I couldn't find any CUDA sampling code …. and doing the texturing on the host (which yes, one can do, through managed memory and some nasty multi-pass thingys… just saying) kind-of defeats the purpose. So, eventually I ended up writing some additional tools that would bake out the ptex'es into little 8×8 or 16×16 textures for each individual pair of triangles in this model, throw those into some larger texture atlas (because no, CUDA does not like a few million tiny textures :-/ ), and then do some fancy texture coordinate mapping to make bilinear hardware texturing work with that. Thinking about it, that problem alone would be a fun blog post to write about….

Anyway. I digress. Again.

Having read Chris' blog I was reminded that I should probably dig out some of that code I wrote these two years back. A lot has changed since then – in particular, the OWL library (that I initially wrote for pretty much this project!) has become quite a bit more mature; and a lot of the early limitations have since been lifted in OptiX. Anyway, I did manage to at least make this code compile again, so here's the very first picture I got after the first 10 minutes of re-activating whatever code of that I could still find:

Quite obviously, this'll need some more work to make it look pretty again: For some reason the water is now entirely white (it's still a full path tracer, though, as you can see from the indirect illumination on the trees). Also, the environment map is missing in this pic, and the light sources are floating around as actual red square geometry pieces (on the beach, in the lower left). Also, memory usage is a bit higher than I remember (close to using the full 48GB; it should be less). Anyway – it still works. I'll see if I can fix those missing pieces, and will post those, too, once done.

BTW: The entire project is – obviously – all written on top of OWL; in fact, it was one of the main drivers for this library, because it became pretty clear early on that if you want to focus your thoughts on what matters (like how to wrangle the content), then you really do want something that takes the job of acceleration structure builds, Shader Binding Table construction, uploading of textures, buffers, etc, off your hands, and that you can reasonably rely on. Which, of course, reminds me of the fact that I still haven't written about OWL. Which I should. And will. I promise.

With that: back to fixing that path tracer….

PS: In case anybody is wondering: roughly half of the 47GB that it uses is textures ….

Update 1: Textures

Huh, as usual it took a bit longer than expected, but here is a first pic with real textures. The image above did contain some texels (quite a few, actually), but was buggy …. in my defense, I did say that dealing with this baking out is tricky. It also helps to realize that the pbrt file may contain the same logical texture in multiple different pbrt "ImageTexture" nodes. Here is a first picture of one (!) of the corals. It gives you an idea of how massive this model really is: in the default view, that entire shot probably maps to a single pixel (and is under the water).

Update 2: Envmap …

Turns out the github pbrtParser library I was using to load the model had somehow been broken over the course of the last few years, and didn't actually "remember" any of the light sources any more. Well, I fixed that (which was a lot of work), after which making the environment map appear was as simple as using OWL's recent support for 2D textures, and copying a few lines of code from Matt's pbrt-v3 library (to match his fancy way of mapping from a ray direction to texture u/v coordinates).
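
For those wondering what that direction-to-u/v mapping conceptually looks like: it is the usual lat/long (spherical) parameterization, roughly as in the sketch below – pbrt's actual convention differs in its choice of axes and orientation (plus the tilt mentioned earlier), which is exactly why I copied Matt's code rather than re-deriving it:

#include <owl/common/math/vec.h>

// map a (normalized) ray direction to lat/long texture coordinates - the
// generic version; pbrt's actual convention differs in axis choice and
// orientation, plus the tilt mentioned above
inline __device__ owl::vec2f dirToLatLongUV(const owl::vec3f &d)
{
  const float PI    = 3.14159265358979f;
  const float theta = acosf(fminf(fmaxf(d.z,-1.f),1.f));  // polar angle, z "up"
  float       phi   = atan2f(d.y,d.x);                    // azimuth
  if (phi < 0.f) phi += 2.f*PI;
  return owl::vec2f(phi/(2.f*PI), theta/PI);
}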

And with that (four hours for the former, 15 minutes for the latter :-/) here we are once again with a proper environment map (and proper environment map lighting, of course):

(obviously that’s without the water and beach – still need to re-convert the full model after the parser fixes).

Also interesting is how much of an impact the choice of lighting can have on the objects: here's the same coral as above, but now with the env-map lighting rather than the default "oversaturated white" I used above:

Sharing “my” NASA Mars Lander Unstructured-Mesh Data Set

TLDR: If any vis (or other) researcher is in need of a large unstructured-mesh data set (tets and/or other linear elements): I'm hereby sharing a properly wrangled and pre-processed version of the "NASA Mars Lander" data set (link at bottom).

For those that haven't yet heard of this data set: It's one of the most amazing data sets I've ever gotten my hands on – partly because of the amazing back-story (simulating a landing on Mars, how much geekier could it get?), but also because

  1. it’s gigantic (over 6 billion tets – yes, billion, not million)
  2. it's not – as most 'big' data sets are – some artificial test case, but a "real world" data set (yes, they did simulate at that accuracy)
  3. it even contains multiple time steps, so you can make cool animations (see here: https://developer.nvidia.com/techdemos/video/d2s20)
  4. it's a "raw" data drop in the sense that this is really what the sim code (Fun3D) wrote out (ie, it's useful for "in situ" and "data-parallel rendering" research, too); and
  5. it actually looks awesome when rendered:


    (image credits: Nate Morrical, UofU)

The full, unadulterated data for that "Mars Lander Retropulsion Study" is available from the "Fun3D Retropulsion Data Portal" at https://data.nas.nasa.gov/fun3d/, and has been made available to the wider vis and rendering community by the scientists that ran this simulation (for full attribution, see https://data.nas.nasa.gov/fun3d/), with a lot of help from Pat Moran.

Unfortunately, if you start working on that data you'll quickly realize that the main reasons for its awesomeness – its "raw data dump" nature, and sheer size – have a flip side in that getting this data into any form useful for rendering comes with a "non trivial" amount of data wrangling: you need to get the thousands of different files downloaded, parse them, strip ghost cells, extract variables and time steps, re-merge hundreds of per-rank results into a single mesh, etc. Doing so has been a lot of fun, but it was also a lot (!) of work (even for somebody that has a lot of hardware, and a lot of experience dealing with those things)… so to make it easier for others to use this data I decided to make both the result of my wrangling, and the code used for doing it, available to anybody that might want to work with it.

The resulting data is – for both the "small" lander (on the order of three-quarters of a billion tets) and the large one (about six billion – with a 'b') – a single unstructured mesh, with a single per-vertex scalar field (in my case 'rho', for one of the later time steps). For really high-quality rendering you probably want more than one variable and/or multiple time steps …. but since my google drive space is limited I'll provide only these two dumps, which should be more than enough to get you started. I'll also provide the library I wrote to deal with this data set, so anybody serious enough to deal with data of that size should be able to follow my steps and extract other variables, time steps, etc.

With that:

And as usual: Any comments/praise/feedback/criticism…. let me know. I’d be particularly interested in hearing from you if you actually use this data!

Cheers

Ingo

PS: If you end up using this data, please do not forget to properly attribute the researchers that made this data available; please check the corresponding info on the original data portal!

Fun Problems #1: How to tetrahedralize an unstructured mesh with pyramids, wedges, and hexes, without losing face connectivity

Having introduced the idea of a series of articles on "fun problems" earlier today, let's get the first one out of the door: how to tetrahedralize an unstructured mesh that contains pyramids, wedges, and hexes into a mesh that contains only tets, in a way that does not lose any shared-face connectivity in the output mesh. Most of the unstructured meshes I dealt with in the past have been "pure" tetrahedral meshes, but more recently more and more of those I encountered in my unstructured-mesh visualization projects also contained wedges, pyramids, and hexes – sometimes because those are natively the most obvious choices (eg, in the Agulhas data set the multiple water-depth layers under a triangulated ocean surface natively form wedges, etc), and sometimes because the dual mesh of a structured AMR data set actually contains pyramids, wedges, and hexes … so there we are. Maybe the most prominent example of such data is the NASA Mars Lander Retropulsion study data set that was recently released – almost all tets, but a few wedges thrown in, too. But since triangles and tets are so much easier to deal with, the most obvious choice for such data sets (if only for comparison purposes) is typically to simply tetrahedralize them into a tet-only mesh.

Now: Why is this tricky? After all, splitting a given wedge, pyramid, or hexahedron into tetrahedra isn't all that complicated (in fact, I actually do remember that as a toddler I had a toy puzzle with colored plastic tetrahedra that did just that!). After googling for solutions I found that you can actually also split a hex into five tets (rather than the obvious six), but that aside, the basic concept of tetrahedralizing such unstructured meshes isn't all that complicated.

The problem, in fact, comes in through the back-door if one innocently expects this tetrahedralization to maintain proper face connectivity: i.e., if two elements shared a face in the input, we want the generated tetrahedra to also share faces in the output. For those faces in the input mesh that were triangles – i.e., the four faces of any input tet, the four sides of an input pyramid, and the front and back sides of a wedge – that’s actually not a problem: those are triangles, and those won’t change. For those faces that are quadrilaterals, however – i.e., the base of a pyramid, the left side, right side, and bottom of a wedge, or the six faces of a hexahedron – it’s a bit more complicated: if the element itself gets split into tets, these sides will, by necessity, end up as pairs of triangles … and there are two ways of doing that, depending on which of the two opposing pairs of vertices in that quadrilateral one ends up connecting.

Now if both elements that originally shared a quadrilateral face do this split in exactly the same way, then both elements end up with the “same” pair of triangles, so each triangle from the one element will have a matching one from the other, and all is good. If, however, those two adjoining elements end up splitting that face in different ways, then the one element’s triangles will not match the other’s – which can be, ahem, “problematic” for all kinds of algorithms that assume that each face can tell what the neighboring tet on the other side of that face is (e.g., for face-to-face ray marching, or for the “shared faces” method in our 2019 “RTX Beyond Ray Tracing” paper).

In fact, even for algorithms that do not need to know this “neighbor on the other side”, ending up with an inconsistent way of splitting these shared faces can result in nasty surprises: quadrilateral faces in an unstructured mesh do not (!) have to be planar, but can actually be general bilinear patches (to visualize this, imagine a hexahedron that’s a perfect cube, then take one of its vertices and pull it in an arbitrary direction … the three faces adjoining that vertex will become curved, bilinear patches). Now if two elements share such a bilinear patch but split it into two triangles using different edges, then the resulting tet mesh will either have a gap between these elements (if each picks the edge that flips the face towards its own inside), or the two elements will overlap in a tet-shaped region (if both flip outwards … and in case you were wondering: one flipping inwards and the other outwards only happens if they pick the same edge, in which case all is fine anyway).
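
(To make the “bilinear patch” part concrete: the following little helper – just a sketch for illustration, not taken from any actual code – evaluates the bilinear surface spanned by four quad corners; only if the four corners happen to be co-planar does it degenerate to a flat quad.)

#include <array>

// evaluate the bilinear patch spanned by the four corners p00,p10,p01,p11
// at parameters (u,v) in [0,1]^2; for non-co-planar corners this surface
// is curved, which is exactly what makes inconsistent diagonal splits
// produce gaps or overlaps
std::array<float,3> evalBilinearPatch(float u, float v,
                                      const std::array<float,3> &p00,
                                      const std::array<float,3> &p10,
                                      const std::array<float,3> &p01,
                                      const std::array<float,3> &p11)
{
  std::array<float,3> P;
  for (int k = 0; k < 3; k++)
    P[k] = (1-u)*(1-v)*p00[k] + u*(1-v)*p10[k]
         + (1-u)*   v *p01[k] + u*   v *p11[k];
  return P;
}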

Though this problem is kind-of obvious once one thinks a bit about it, I actually had to learn it the hard way; most illustrations of wedges and hexes show only planar faces, so it took me a while to figure out why rendering my first tetrahedralized meshes seemed to produce some nasty “pockets” of empty space. Well, learned that one, didn’t we?

Anyway, I digress again. How to fix that problem? The first obvious idea is to just make sure that all pairs of elements that share a face will always flip the same way; e.g., by always routing the splitting edge through the vertex with the smallest index, or always picking the shorter of the two diagonals, etc. Again I spent quite a while trying to do just that, only to realize that this isn’t actually that trivial, either: to do this one would have to be able to independently choose the edge orientation in each of a hex’s faces … but you can’t do that, because at least one pair of opposing faces in a hex always has to have the same orientation (in case you’re wondering: tetrahedralizing a hex works by first using a diagonal plane to split it into two wedges, but that means that the two opposing faces split by that plane will have the same edge orientation). Now I’m sure there’s a global optimization problem somewhere in there that would allow one to somehow rescue that idea, but at that point I realized that this is a good deal trickier than I had initially thought.

So, what else to do? Instead of always splitting each quad face with a single edge into two triangles, I decided to instead insert a new vertex into the center of each such face, and split it into four triangles by connecting this new vertex to the face’s four corners (for unstructured meshes with per-vertex scalar data we obviously also have to interpolate the scalar value for this new vertex). The obvious downside of this is that one has to create new vertices, which has all kinds of issues (more vertices, more tets, more memory; the need to interpolate scalar values, etc) … but if we’re willing to do that, we can guarantee that any shared bilinear patch will always end up with the same center vertex, and thus with the same four triangles.
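
In code, creating such a face-center vertex could look roughly like the sketch below – to be clear, this is not my actual code, and the little ‘Mesh’ struct is just a hypothetical, stripped-down stand-in for whatever mesh representation you happen to use:

#include <array>
#include <vector>

// hypothetical minimal mesh representation, just for illustration
struct Mesh {
  std::vector<std::array<float,3>> position;  // per-vertex positions
  std::vector<float>               scalar;    // per-vertex scalar field
  std::vector<std::array<int,4>>   tets;      // output: tet vertex indices
};

// create a new vertex in the center of the (possibly bilinear) quad face
// (v0,v1,v2,v3), interpolating both position and scalar value; returns the
// index of the newly created vertex
int addFaceCenter(Mesh &m, int v0, int v1, int v2, int v3)
{
  std::array<float,3> p{0.f,0.f,0.f};
  float s = 0.f;
  for (int v : {v0,v1,v2,v3}) {
    for (int k = 0; k < 3; k++) p[k] += 0.25f * m.position[v][k];
    s += 0.25f * m.scalar[v];
  }
  m.position.push_back(p);
  m.scalar.push_back(s);
  return (int)m.position.size() - 1;
}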

Having decided to do this for the faces, the next question is how to actually tetrahedralize the elements such that the faces end up with this pattern. Here are some little illustrations:

First, a tet will obviously remain a tet – no changes whatsoever.

Second, let’s look at a pyramid, which has exactly one bilinear face (see sketch below this paragraph; red and blue faces just for illustration): we create a new vertex C in the center of this face (by averaging vertices 0, 1, 2, and 3; green arrow), then connect this new vertex with the top (4) and the four base vertices (0-3, new edges in green), and end up with four nice tets that, on the bottom face, create exactly the pattern we’re aiming for (green for new edges on that face). Perfect.

pyramid

(and, yes, I absolutely did steal my kid’s school pencils for this awesomely professional illustration… and free-hand sketching is so much faster than all the fancy programs!)
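
For those who prefer code over colored pencils, here is what the pyramid case could look like – again only a sketch, re-using the hypothetical Mesh and addFaceCenter() from above, and ignoring tet winding/orientation for simplicity:

// pyramid with quad base (v0,v1,v2,v3) and apex v4: create a center vertex
// C on the base, then form one tet per base edge -> four tets, and exactly
// the desired four-triangle pattern on the base face
void tetrahedralizePyramid(Mesh &m, int v0, int v1, int v2, int v3, int v4)
{
  int C = addFaceCenter(m, v0, v1, v2, v3);
  m.tets.push_back({v0, v1, C, v4});
  m.tets.push_back({v1, v2, C, v4});
  m.tets.push_back({v2, v3, C, v4});
  m.tets.push_back({v3, v0, C, v4});
}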

Third, let’s look at a wedge with front face 0,1,2 (blue) and back face 3,4,5 (red). We now have three quadrilaterals to deal with, for each of which we want the “four triangles” pattern described above. Luckily, there’s an easy way to create just that: we simply ignore the faces completely, and create a new vertex C in the center of the wedge (again, by averaging); if we then connect this new vertex to the existing five faces we end up with two tets (C to the front and back triangles), and three pyramids (C to the bottom, left, and right quadrilaterals). For the pyramids we do the same as described above, so our desired pattern appears on all quad faces of our wedge, too … perfect.

wedge

One will obviously have to further subdivide the three pyramids in this sketch (ending up with a total of 14 tets: two tets plus three pyramids of four tets each), but that got to be too much to draw …. and should be obvious.
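
A corresponding sketch for the wedge case, under the same assumptions as above (in particular, I’m assuming a vertex ordering where the edges 0–3, 1–4, and 2–5 connect the front and back triangles – adapt the quad orderings below to whatever convention your mesh format uses):

// wedge with front triangle (v0,v1,v2) and back triangle (v3,v4,v5):
// create a vertex C in the center of the wedge, connect it to the two
// triangle faces (two tets) and to the three quad faces (three pyramids),
// then split those pyramids as above -> 14 tets in total
void tetrahedralizeWedge(Mesh &m, int v0, int v1, int v2, int v3, int v4, int v5)
{
  // center of the wedge: average of all six vertices (position and scalar)
  std::array<float,3> p{0.f,0.f,0.f};
  float s = 0.f;
  for (int v : {v0,v1,v2,v3,v4,v5}) {
    for (int k = 0; k < 3; k++) p[k] += m.position[v][k] / 6.f;
    s += m.scalar[v] / 6.f;
  }
  m.position.push_back(p);
  m.scalar.push_back(s);
  int C = (int)m.position.size() - 1;

  // two tets: C to the front and back triangles
  m.tets.push_back({v0, v1, v2, C});
  m.tets.push_back({v3, v4, v5, C});

  // three pyramids: C is the apex over each of the three quad faces
  tetrahedralizePyramid(m, v0, v1, v4, v3, C);
  tetrahedralizePyramid(m, v1, v2, v5, v4, C);
  tetrahedralizePyramid(m, v2, v0, v3, v5, C);
}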

Finally, compared to the wedge case the hexes are almost trivial: create a center vertex C, then connect it to the six faces, and get six pyramids …. feed those into the pyramid case above, and done.

hex
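
And the same kind of sketch for the hex case (same caveats as before; the face orderings below assume a hex with bottom face 0–3 and top face 4–7, with vertex 0 below vertex 4, 1 below 5, and so on):

// hexahedron with bottom face (v[0..3]) and top face (v[4..7]): create a
// center vertex C, form one pyramid per face (six in total), and split each
// of those as above -> 24 tets in total
void tetrahedralizeHex(Mesh &m, const std::array<int,8> &v)
{
  // center of the hex: average of all eight vertices (position and scalar)
  std::array<float,3> p{0.f,0.f,0.f};
  float s = 0.f;
  for (int i = 0; i < 8; i++) {
    for (int k = 0; k < 3; k++) p[k] += m.position[v[i]][k] / 8.f;
    s += m.scalar[v[i]] / 8.f;
  }
  m.position.push_back(p);
  m.scalar.push_back(s);
  int C = (int)m.position.size() - 1;

  // one pyramid per face, with apex C
  const int face[6][4] = {
    { v[0],v[1],v[2],v[3] }, { v[4],v[5],v[6],v[7] },  // bottom, top
    { v[0],v[1],v[5],v[4] }, { v[1],v[2],v[6],v[5] },  // sides
    { v[2],v[3],v[7],v[6] }, { v[3],v[0],v[4],v[7] } };
  for (int f = 0; f < 6; f++)
    tetrahedralizePyramid(m, face[f][0], face[f][1], face[f][2], face[f][3], C);
}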

All in all, that algorithm is trivially simple to code up (oh, if only I had found that one earlier …). Only one thing to consider: thanks to limited floating-point accuracy it is, in theory, possible that two elements sharing a face might end up with slightly different results for the center vertex, which might then throw off the code that finds the “matching” triangles. This can, however, easily be avoided by always sorting the four vertex indices before averaging the vertex positions, in which case both elements always perform exactly the same computations, and end up with exactly the same vertex (note we only have to do that for the face vertices; the inner vertices in the wedge and hex cases are only ever created once, anyway).
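
In the sketches above, that simply means replacing the naive addFaceCenter() with a version that sorts the indices before averaging, so that both elements sharing a face add the very same values in the very same order – for example:

#include <algorithm>

// same as before, but with the four indices sorted before averaging: both
// elements that share this face now perform bit-identical computations and
// thus end up with bit-identical center vertices
int addFaceCenter(Mesh &m, int v0, int v1, int v2, int v3)
{
  std::array<int,4> idx{v0,v1,v2,v3};
  std::sort(idx.begin(), idx.end());
  std::array<float,3> p{0.f,0.f,0.f};
  float s = 0.f;
  for (int v : idx) {
    for (int k = 0; k < 3; k++) p[k] += 0.25f * m.position[v][k];
    s += 0.25f * m.scalar[v];
  }
  m.position.push_back(p);
  m.scalar.push_back(s);
  return (int)m.position.size() - 1;
}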

As said above, the algorithm is simple, and foolproof; the main disadvantage is that the additional vertices require more memory, and produce more tets, than tessellating without introducing new vertices. For example, a hex can be represented with as few as 5 tets … but with this method it ends up as 24.

Hope that whoever finds this page finds it useful – I only wish I had found one like it before I tried the other routes. I wouldn’t be surprised at all if lots of people had done exactly the same before … in fact, I’d be surprised if nobody had … I just couldn’t find it. Any comments, suggestions, or issues with this: let me know!

PS: Fun fact – this all started with “just” needing to produce some more testing data for our “RTX Beyond Ray Tracing” paper: why not simply take an AMR data set (of which we had plenty), compute the dual mesh, and then tetrahedralize the result … can’t be that hard, right? Well …

“Fun Problems”

It’s been a while since I’ve written anything; for a lot of reasons (it’s 2020, what can I say….).

Anyway. One thing I recently realized is that there’s a ton of stuff I should be writing either papers and/or blog articles about (such as the OWL project, https://owl-project.github.io/, some of my “Moana on GPUs” experiences, or some of my recent work on data-parallel ray tracing) … but that I don’t get to, because I spend far too much time worrying about writing up other things that are fun, but significantly less important. These are typically “side problems” that I ran into while working on something totally different – often things I thought were trivial, but that suddenly turned out to be unexpectedly tricky, and that I (read: Google) simply couldn’t find any solutions for.

Anyway. Many of these (solutions to) fun problems do indeed need to get documented, if only so I can reference them in my code or papers … but in “proper paper form” – with previous work, discussions, comparisons to other solutions, and in particular all the insane latex-polishing and perfect-figure-crafting – it just takes up too much time, and distracts from the real problems.

So. To break that log-jam I’ve decided to instead share some of these ideas in blog form; using hand-drawn-and-scanned scribbles rather than perfectly designed illustrations (if only I could have all the time back I spent experimenting with ever-new sketching programs …), using wordpress rather than latex (oh my beloved \vspace*{…}, \multicolumn{}, and \includegraphics{} …), and doing away with all the stuff that otherwise takes up so much time. To distinguish these write-ups from any other “update”-style articles I’ll explicitly tag each one with a “Fun Problems:” prefix; in the same spirit as the “ISPC bag of tricks” series I wrote a few years ago. Basically they’re the same category: something worth sharing that’ll hopefully(?) help others, but that’s not worth making a real paper about.

As such: on to the first one – how to tetrahedralize an unstructured mesh with pyramids, wedges, and hexes, without losing proper shared-face connectivity …

Quick note on CUDA/OWL on Ubuntu 20

Not sure if a “blog post” is the right medium for that, but just in case anybody stumbles over this: I just upgraded my Ubuntu 18-based laptop to Ubuntu 20, and after that ran into some issues compiling my CUDA projects – apparently, the gcc version 9 that Ubuntu 20 ships with isn’t yet supported by CUDA (at least, not in the version I have installed!?), meaning I ended up with an error of

/usr/local/cuda/include/crt/host_config.h:129:2: error:
#error -- unsupported GNU version! gcc versions later than 8 are not supported!

Just in case anybody else stumbles over this, the solution is actually quite simple: First, make sure to install the ‘g++-8’ package:

sudo apt install g++-8

(note: mind the “++”: my machine already had `gcc-8` installed by default, but you need g++-8)

Once installed, just tell nvcc to always use ‘/usr/bin/gcc-8’. If you’re using cmake under linux, that’s as simple as setting CUDA_HOST_COMPILER to /usr/bin/gcc-8; cmake and make should then run through just fine.
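
For example – and this is just a sketch; the exact variable name depends on whether the project uses the classic FindCUDA module (CUDA_HOST_COMPILER) or CMake’s native CUDA language support (CMAKE_CUDA_HOST_COMPILER):

# FindCUDA-style projects:
cmake -DCUDA_HOST_COMPILER=/usr/bin/gcc-8 ..

# projects using CMake's native CUDA language support:
cmake -DCMAKE_CUDA_HOST_COMPILER=/usr/bin/gcc-8 ..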

Hope that helps.

PS: I also previously experimented with using ‘alternatives’ to change my system to use gcc-8 by default, but that didn’t go all too well: yes, setting gcc-8 as default compiler did make nvcc work, but the next apt update ended up being quite confused …. not good. Setting it in the project seems to work just fine.

PPS: Update – for another machine I also installed U20 “from scratch” (rather than upgrading). In this case, installing U20 was actually an amazingly impressive experience: not only did it install out-of-the-box on an RTX-enabled Thinkpad laptop that I had not gotten any other distribution to run on at all, it even came with a pre-packaged nvidia 440 driver, with apt packages for CUDA, etc. You still need the “g++-8” thing from above, but overall it was a very pleasant experience.

(Short-)Paper Preprint: Using RTX for Glyph Rendering

Haven’t written in a while … (very busy …) but currently so down with allergies that updating my web presence is pretty much the only thing I’m still useful for, so: just uploaded another paper pre-print, this time for the following (short) paper:

High-Quality Rendering of Glyphs Using Hardware-Accelerated Ray Tracing
Stefan Zellmann, Martin Aumueller, Nate Marshak, and Ingo Wald
Eurographics Parallel Graphics and Visualization (EGPGV) Short Papers, 2020.

(paper, as usual, on my  publications page at SCI: http://www.sci.utah.edu/~wald/Publications/index.html)

The core idea behind this paper was to look into how hardware ray tracing – and in particular, the ability to easily and cheaply create millions of “copies” of an object via instancing – would impact glyph rendering. At least in theory, there are three key benefits that ray tracing brings to the table for glyph rendering: First, the ability to create lots of copies of objects, very cheaply, is good because there are typically a lot of glyphs involved in rendering an image (and animating them is cheap, too, because all you have to do is change some transforms). Sure, you could previously do this with fragment shaders, too (at least for primary visibility), but with ray tracing this is somewhat more “cleanly” integrated into the overall rendering system, you have fewer issues with overdraw, etc. … so this should be useful.

Second, the ability to have arbitrary intersection programs allows for more easily creating non-trivial glyph shapes without having to worry about tessellation, which can either be tricky (say, for superquadrics) or – at least when done naively – unnecessarily expensive (say, tessellating millions of arrows into hundreds of triangles each). Again, even before hardware ray tracing you could fix this with fragment shaders, but still …

Third – and something that’s significantly less easy to fix with fragment shaders – with ray tracing it’s relatively easy to add secondary shading effects like shadows, AO, or indirect illumination … and while this may also make images look “better”, the core motivation is that when you draw lots and lots of glyphs without such effects you often end up with just a garbled mess on the screen, where it is hard to see where each glyph actually is relative to the others (fancy jargon: “visual clutter”). In the last few years we’ve seen again and again how much shadows and AO can help with that in, e.g., particle visualization … and whether particles, arrows, superquadrics, or other glyphs – it’s exactly the same problem.

In theory, all three of those advantages were kind-of obvious; the big question was just how well this would work in practice, how much effort it would be, whether there were any unforeseen pitfalls, and whether we could actually get this to render fast enough. And as shown in this (short) paper: it does actually work out pretty well; in fact, we even stumbled over some additional ideas we hadn’t initially expected – for example, you can actually use motion blur to convey motion, and I’m sure there are other things we haven’t looked at yet, too (e.g., would motion blur work for uncertainty visualization? For conveying error ranges? Would defocus or warping of the primary rays be useful, too? …).

Anyway, it’s been a fun paper to play with; Stefan, Nate, and Martin have done a great job at that …. Enjoy!

PS: This was also one of the first (public) papers made with my new “OWL” library, with which building that framework turned out to be pretty easy… (well, that was the idea of OWL! 🙂 ) … but since I realize that the long-planned blog article about OWL still hasn’t actually been written yet I’ll say no more for now …

Digesting the Elephant…

TL;DR: We just uploaded our “experiences with getting moana rendering interactively (and with all bells and whistles!) in embree/ospray” whitepaper to ArXiv:
https://arxiv.org/pdf/2001.02620.pdf.

digesting-the-elephant-header

Background…

Quite a while ago (by now), Matt wrote an excellent article named “Swallowing the Elephant” – basically, his experiences with all the unexpected things that happen when you want to make even a well-established renderer like PBRT “swallow” a model on the scale of Disney’s “Moana Island” model.

This Moana model (graciously donated to the research community by Disney about two years ago; link here: https://www.technology.disneyanimation.com/islandscene) was the first time a major studio released a real “production asset” – with unbelievably detailed geometry, millions of instances, lots and lots of textures, full material data, lights, etc. … basically, “the full thing” as it would be used in a movie. The reason this is/was such a big deal is that when you develop a renderer you need data to test and profile it with, and truly realistic model data is hard to come by – sure, you can take any model and replicate it a few times, but that takes you only so far. As such: Disney, if you’re reading this – you’ve done an immeasurable service to the rendering community, we can’t thank you enough!

While I was still at Intel, we had actually gotten a very early version of this model; we worked on it for quite a while, and it has since been shown several times at major events. Just as a teaser for the rest of this article, here are a few “beauty” shots of our embree/ospray based renderer on this model:

moana-extra-beauty-beach-dof
moana-extra-beauty-plants

(click on the images for full res versions)

And just to give you an idea, here is a close-up of the region under the palm in the previous image:
roots-beauty

Digesting the Elephant

Anyway, Matt at some point wrote an article about exactly this experience: what happens when you take even an established renderer that’s already pretty amazing – and then try to drop Moana into it. That article – or actually, series of articles – made for excellent reading, and in particular rang some bells for us, because “we” (which means I and my now-ex Intel colleagues) had gone through many of exactly the same experiences, too, plus a few additional ones, because our goal was not just offline rendering, but getting this thing to render interactively on top of Embree and OSPRay. And to be clear, our goal was not to just get “some” interactive version of Moana, but the full thing, with every instance, every texture, curves, subdiv, full path tracing with the Disney BRDF, every-frigging-thing... but also, at interactive rates.

And yes, many of the things Matt wrote about sounded incredibly familiar (we had actually worked on that way before his set of articles): you think you have a good software system to start with, you’ve rendered models with lots of triangles or lots of instances before, you’ve done path tracing, cluster-based parallel rendering, etc., it’s available in a human-readable file format (even two different ones), so: how bad could it be?

For us, as it was for Matt, the answer to this naive “how bad could it be” question turned out to be (to put it mildly) “not as easy as you might have thought”: in fact, I think I spent an entire week just working on fixing/extending my then-already-existing pbrt-format parser to be able to handle this thing even remotely correctly (or completely), and initially it took hours to parse; at some point, I vividly remember Jim (my then manager) sending me an additional 512 GB of memory for just that purpose (by now we need much less than that, but at the time …). And all that doesn’t include the time spent on several aborted attempts at using the JSON format instead, let alone that at the time we hadn’t even started with the real work of getting that data into the ospray scene graph, into ospray, embree, the cluster, etc. pp., let alone dealing with things like PTex textures, subdiv, hair/curves, the Disney BRDF, denoising, …. hm.

In fact, the number and kind of pitfalls you run into once you get such a “pushing the boundaries” model to test with is truly astounding: for example, at the time we started, the ospray scene graph still used a lot of strings for maintaining things like names of scene graph nodes or parameters of those nodes; all of that had been used plenty of times before without issues – creating an instance node took only a millisecond or so, which surely can’t be too bad, right … unless you throw ~100 million instances at it, at which point even a one-millisecond processing time for an instance node suddenly becomes an issue (though 100M milliseconds still sounds benign, it’s actually 100,000 seconds … or almost 1,700 minutes … or about 28 hours … just to create the instance nodes in the scene graph). And there were plenty of such examples.

And just to demonstrate a bit of what one has to deal with in this model geometry-wise (Matt has several similar shots), here are a few color-coded “instance ID” shots from various places in this model:

moana-instances-size-overlap-close
moana-instid-ironwood-leaves
moana-instances-overview

Again – every different color in these pics is a different instance of often thousands of triangles. My favorite here: a single coral in the water off the coast – barely covering a full pixel in some of the overview shots above:

moana-coral

I digress….

Yes, I really love this model… but anyway, I digress.

As said above, reading Matt’s articles reminded me of all that, and once I read those posts I realized that Matt had actually done something pretty useful: not just “battle through” and “make it work”, but actually share what he learned by writing it up – so others would know what to expect, and hopefully have it a bit easier (when writing a piece of software it’s so easy to just say “oh, that will surely never happen” – yes, it does!). As a result, we decided to follow his lead, and to also write up our experiences and “lessons learned” … and since our task was not only to “swallow” that model into an existing renderer, but also to add lots and lots of things to that renderer that it had never been designed to do (remember, OSPRay was a purely “Sci-Vis”-focussed renderer back then), I concluded that a fitting title for that write-up would be “Digesting the Elephant” (after all, digesting something is what you have to do after you managed to swallow it … :-/).

Anyway, I digress. Again.

The problem with this write-up was that I had since switched to NVidia, so writing articles on my pre-NVidia work wasn’t exactly top priority any more. I spent pretty much all of the 2018(!) Christmas holidays writing up a first draft, then all my ex-colleagues started adding in all the stuff I no longer had access to (plus a lot of new text and results, too, of course), but eventually we all kind-of “dropped” it because we all had other stuff to do. At some point we did submit it as a paper, but an “experiences” story just doesn’t fit that well at academic conferences, so not surprisingly it didn’t get in, and we completely forgot about it for another few months. Finally, a few days ago, I accidentally stumbled over the paper repository and realized we had all completely forgotten about it … but since the paper was already written, I asked around, and we decided we should at least make it publicly available; “technical novelty” or not, I still firmly believe (or hope?) that the story and experiences we share in there will be useful to those treading in our footsteps.

Long story short: We finally uploaded the PDF to ArXiv, which just published it under the following URL: https://arxiv.org/pdf/2001.02620.pdf.

digesting-the-elephant-header

With that, I do hope that this will be useful; and for those for whom it is : Enjoy!

PS: As usual – any feedback, comments, criticism on what we did: let me know…

PPS: And as still images show only so much, here a video that Will Usher made, uploaded to youtube: https://youtu.be/p0EJo0pZ3QI

PPPS: and a final little personal plug: if you’re playing with this model – or intending to do so – also have a look at my github “pbrt-parser” library, which I also use when working with this model. It has a binary file format, too, so once the model is converted you can load the full thing in a few seconds as opposed to half an hour of parsing ASCII files :-). Link here: https://github.com/ingowald/pbrt-parser/