Recreating the “Graphics Gems 2” Cover for Ray Tracing?

Usually I use this blog to share updates, such as newly published papers, etc … but this time, I’m actually – kind of – calling for some “community help”: In particular, I’m curious whether there are any graphics people out there who know how to use a modelling program (I don’t; not if my life depended on it :-/), and who would be able to create a renderable model of the “scene” depicted on the “Graphics Gems 2” book. To be more specific: to create a model that looks roughly like this:

[Image: the “Graphics Gems 2” cover art]

Now you may wonder “where did that suddenly come from?” … so let me give a tiny bit of background: We – Adam Marrs, Pete Shirley, and I – recently brainstormed a bit about a “potentially” upcoming “Ray Tracing Gems II” book (which hasn’t been officially announced yet, but ignore that for now), and while doing so we all realized how much we are, in fact, channeling the original “Graphics Gems” series (which, back in the day, was awesome, by the way!).

At one point, I was actually googling for the old books (to check whether they used “II/III/IV” or “2/3/4” in the titles, in case you were wondering), and while doing so I stumbled across this really nice cover … which would probably look amazing if it were a properly ray traced 3D model rather than an artist’s sketch. And what could possibly be a better cover image – or test model – for RTG2 than the original cover of GG2!?

So – if there is anybody around who knows their modelling tools and is looking for a fun challenge: here is one! If anybody does model this and has anything to share – ideally a full model with a public-access license, or even just images – please send it over. Of course I can’t promise that we’ll actually use such material in said book (which so far is purely hypothetical, anyway) – but I’d find it amazing if we could find a way of doing so.

PS: Of course, a real-time demo of that scene would be totally awesome, too 🙂

New Preprint: “BrickTree” Paper @ LDAV

Aaaand another paper that just made it in: Our “BrickTree” paper (or – using its final, official title – our paper on “Interactive Rendering of Large-Scale Volumes on Multi-core CPUs“) just got accepted at LDAV (i.e., the IEEE Symposium on Large Data Analysis and Visualization).

The core idea of this paper was to develop a data structure (the “BrickTree”) that has intrinsic “hierarchical representation” capabilities similar to an octree, but with much lower memory overhead … (because if your input data is already terabytes of voxel data, then you really don’t want to spend a 2x or 3x overhead on encoding tree topology :-/). The resulting data structure is more or less a generalization of an octree with an NxNxN branching factor, but with some pretty nifty encoding that keeps memory overhead really low, while at the same time having some really nice cache/memory-related properties and (relatively) cheap traversal to find individual cell values (several variants of this core data structure have been used before; the key here is the actual encoding).
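To give a rough flavor of what such a structure can look like – and to be clear, this is only a generic sketch of the general idea, with made-up field names; the paper’s actual encoding is quite a bit niftier, so read the paper for the real thing – an N³ “brick” stores one value per cell plus a bitmask marking which cells are refined, so fully coarse regions pay almost nothing for tree topology:

```cpp
// Generic sketch only -- NOT the paper's actual encoding. For N=4 each brick
// has 64 cells, so a single 64-bit mask can mark which cells have children.
#include <cstdint>

template<int N>
struct Brick {
  float    value[N*N*N];   // per-cell scalar (or the average over that cell's subtree)
  uint64_t childMask;      // bit i set -> cell i is refined into a child brick
  uint32_t firstChild;     // children of this brick are stored contiguously in one big array
};

// Rank query: children are stored in cell order, so the child of cell `cellID`
// sits at firstChild + (number of set mask bits below cellID).
// (__builtin_popcountll is the GCC/Clang popcount builtin.)
template<int N>
inline uint32_t childBrickOf(const Brick<N> &b, int cellID) {
  return b.firstChild + __builtin_popcountll(b.childMask & ((1ull << cellID) - 1ull));
}
```

The nice property of this kind of layout is that a cell lookup is just a short chain of mask-and-popcount steps, with no per-cell pointers to chase – but again, the encoding actually used in the paper differs from this sketch.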

Such a hierarchical encoding will, of course, allow for some sort of progressive loading / implicit level-of-detail rendering, where you can get a first impression of the data set long before the full thing is loaded – because even if your renderer can handle data of that size, loading a terabyte data set can literally take hours to first pixel! (And just to throw this in: this problem of bridging load times is, IMHO, one of the most under-appreciated problems in interactive data vis today: yes, we’ve made giant progress in rendering large data sets once the data has been properly preprocessed and loaded … but what good is an eventual 10-frames-per-second frame rate if it takes you an hour to load the model?!)

Anyway – read the paper … there are tons of things that could be added to this framework; I’d be curious to see some of those explored! (If you have questions, feel free to drop me an email!) Maybe most of all, I’d be curious how that same idea would work on, say, an RTX 8000 – yes, the current paper mostly talks about bridging load times (assuming you’ll eventually load the full thing anyway), but what’s to stop one from simply no longer loading once a certain memory budget has been filled!? This should be an obvious approach to rendering such large data, but I’m sure there’ll be a devil or two hiding in the details … so I’d be curious if somebody were to look at that (if anybody wants to: drop me an email!).

Anyway – enough for today; feedback of course appreciated!

PS: Link to PDF of paper is embedded above, but just in case: PDF is behind this link, or as usual, on my publications page.

IEEE Short Paper Preprint: RTX Accelerated Space Skipping and Adaptive Sampling of Unstructured Data

As a follow-up to our “RTX Beyond Ray Tracing” paper, we (Nate, Will, Valerio, and I) also worked on using that paper’s sampling technique within a larger rendering context: we took that technique and added an efficient method for space skipping and, in particular, adaptive sampling on top of it. And I’m happy to say: the paper finally got accepted in the IEEE Vis 2019 short papers track, so Nate will present it in Vancouver later this year.

In the meantime I have already uploaded an “author’s preprint” of that paper to my publications page, so if you want to read about it before his talk, please feel free to download and read.

The core idea of that paper is, of course, to realize that in a time where we all have access to fast “traceRay()” operations (in our case on the GPU, but the same should work on a CPU, too), you can use that operation to do some sort of hierarchical acceleration of volume rendering, too. In our case, we build a hierarchy over regions of the volume, compute min/max opacity for those regions (for a given transfer function), then use this traceRay operation to step through the leaves of this data structure. And for each leaf, you can then either skip it, or adjust the sample rate, based on how “important” it is (at least for the space-skipping side of this, a similar idea was recently proposed by David Ganter at HPG, too, though in a somewhat different context).
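In (heavily simplified) CUDA-style code, the marching loop looks roughly like the following. This is only a sketch of the general idea, not the paper’s actual code: traceNextLeaf() and sampleAndClassify() are hypothetical stand-ins for what a real renderer would provide, and the float4 arithmetic assumes the usual CUDA helper_math.h operators:

```cpp
// Sketch only: march a ray leaf-by-leaf through a hierarchy built over volume
// regions, using each leaf's precomputed opacity range (under the current
// transfer function) to skip empty space or to pick an adaptive step size.
struct LeafHit { float tEnter, tExit, maxOpacity; };

// assumed provided elsewhere: find the next leaf along the ray via traceRay()
__device__ bool   traceNextLeaf(float3 org, float3 dir, float t, LeafHit &leaf);
// assumed provided elsewhere: sample the field and apply the transfer function
__device__ float4 sampleAndClassify(float3 p);

__device__ float4 integrate(float3 org, float3 dir, float baseStep)
{
  float4 color = make_float4(0.f, 0.f, 0.f, 0.f);
  float  t = 0.f;
  LeafHit leaf;
  while (color.w < 0.99f && traceNextLeaf(org, dir, t, leaf)) {
    t = leaf.tExit;                          // next traceRay starts behind this leaf
    if (leaf.maxOpacity <= 0.f) continue;    // fully transparent leaf: space skipping
    // adaptive sampling: march more finely where the leaf can be more opaque
    float dt = baseStep / fmaxf(leaf.maxOpacity, 0.1f);
    for (float tt = leaf.tEnter; tt < leaf.tExit && color.w < 0.99f; tt += dt) {
      float4 s = sampleAndClassify(org + tt * dir);
      color += (1.f - color.w) * s * dt;     // (simplified) front-to-back compositing
    }
  }
  return color;                              // loop also stops on early ray termination
}
```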

Details in the paper; go read it… enjoy!

PS: Of course the technique is not restricted to tet meshes. That’s what we demonstrated it on, but it should work for any other type of volumetric data…


HPG Paper Preprint: Using RTX cores for something other than ray tracing….


Hey – just a quick heads-up: our HPG (short) paper on using RTX cores for something other than ray tracing got accepted; for everybody interested, I just uploaded an “author’s preprint” (i.e., without the final edits) to my usual publications page.

The core idea of this paper was to play with the idea of “now that we have this ‘free’ hardware for tracing rays, what else can we use it for, in applications that wouldn’t otherwise use these units?” – after all, the hardware is already there, it’s actually doing some non-trivial tree traversal, it’s massively powerful (billions of such tree traversals per second!), and if it’s not otherwise being used then pretty much anything you can offload to it is a win… (and yes, pretty much the first thing we tried worked out well).

For this paper we only looked into one such application (point location, in a tet-mesh volume renderer), just as a “proof of concept” … but yes, there’s a ton more where it’d make sense: I’ve already used it for some AMR rendering, too (same basic concept), but there’s sure to be more. If you play with it and find some interesting uses, let me know – I’m curious to see what others will do with it!
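To give a rough idea of the trick – and this is only a sketch of one way of phrasing point location as a ray query; the paper discusses and measures the variants we actually used – in classic OptiX-style code, each tet can become a custom primitive whose “intersection program” is really a point-in-tet test, and a query then becomes tracing a (near) zero-length ray from the query point. Here, pointInTet() and tetBVH are hypothetical stand-ins:

```cpp
#include <optix_world.h>

rtDeclareVariable(optix::Ray, ray, rtCurrentRay, );
rtDeclareVariable(int, containing_tet, attribute containing_tet, );

// assumed helper: four plane-side (or signed-volume) tests against tet primID
__device__ bool pointInTet(const optix::float3 &p, int primID);

// Sketch only: this "intersection" never intersects anything in the usual
// sense -- it just checks whether the ray *origin* (our query point) lies
// inside tet primID, and reports a hit at t=0 if so. The RT cores still do
// all the BVH traversal to cull which tets even get tested.
RT_PROGRAM void tet_intersect(int primID)
{
  if (pointInTet(ray.origin, primID)) {
    if (rtPotentialIntersection(0.f)) {
      containing_tet = primID;      // attribute: which tet contains the point
      rtReportIntersection(0);
    }
  }
}

// Query side: a sample point P turns into a tiny ray, and the closest-hit
// program reads back which tet contained it, e.g.:
//   optix::Ray query(P, make_float3(1.f, 0.f, 0.f), /*rayType=*/0, 0.f, 1e-6f);
//   rtTrace(tetBVH, query, prd);
```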

Hope you’ll like it – this has been a lot of fun, hope you’ll enjoy reading it, too…


Preprint: linked from my usual publications page (see above).

Full citation: RTX Beyond Ray Tracing – Exploring the Use of Hardware Ray Tracing Cores for Tet-Mesh Point Location, Ingo Wald, Will Usher, Nathan Morrical, Laura L Lediaev, and Valerio Pascucci, Proceedings of High Performance Graphics (HPG) 2019. (to appear).

“Accidental Art”: PBRT v3 ‘landscape’ model in my RTOW-OptiX Sample…

Just produced an (accidental) pic that in some undefinable way struck me – dunno why, but IMHO it’s “got something” – and wanted to quickly share it (see below).

For the curious: The way that pic was produced was that I took the latest version of my pbrt parser, hooked it up to my RTOW-in-OptiX sample, and ran that on a few of the PBRT v3 sample models. And since that RTOW-in-OptiX sample can’t yet do any of the PBRT materials, I just assigned Pete’s “Lambertian” model (with a per-material random albedo value), which for the PBRT v3 “landscape” model (view0.pbrt) produced the following pic. I personally find it kind-of cute, so … enjoy!

PS: The buddha with random Metal material looks cool, too 🙂


PBRTParser V1.1

For all those planning on playing around with Disney’s Moana Island model (or any other PBRT-format models, for that matter): check out my recently re-worked PBRTParser library on github.

The first version of that library – as I first wrote it a few years ago – was rather “experimental” (in plain English: horribly incomplete, and buggy), and only did the barest necessities to extract triangles and instances for some ray traversal research I was doing back then. Some brave users had already tried using that library, but as I just said, it was never really intended as a full parser, didn’t do anything meaningful with materials, etc. … so bottom line, I’m not really sure how useful it was back then.

Last year, however – when I/we first started playing around with Moana – I finally dug up that old code, and eventually fleshed it out to the point where we could use it to import the whole of Moana – now also with textures, materials, lights, and curves – into an internal format we were using for the 2018 Siggraph demo. That version still didn’t do anything more than required for Moana (e.g., it only did the “Disney” material, and only Ptex textures), but anyway, it was a major step – not so much in functionality, but in completeness, robustness, and general “usability”.

And finally, after switching employers (and thus no longer having access to that ospray-internal format) – yet still wanting to play with this model – I spent some time on and off over the last few months cleaning that library up even more, fleshing it out to the point that it (apparently?) reads all PBRT v3 models, and in particular, to a point where materials, textures, etc. are all “fully” parsed into specific C++ classes (rather than just sets of name:value pairs as in the first version). And maybe best of all – in particular for those planning on playing with Moana!: the library can not only parse existing ASCII .pbrt files, but can also load and store any parsed model in an internal binary file format that is a few gazillion times faster to load and store than parsing those ASCII files (to give you an idea: parsing the 40GBs of PBRT files for Moana takes close to half an hour … reading the binary takes … wait … less time than it took me to write that sentence).
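In code, the intended workflow looks roughly like this – going from memory, so check the repo’s README for the exact, current API (names may have drifted, and the file path is purely illustrative):

```cpp
#include "pbrtParser/Scene.h"   // from the pbrt-parser repo
#include <iostream>

int main()
{
  // slow path: parse the original ASCII .pbrt file(s), once ...
  pbrt::Scene::SP scene = pbrt::importPBRT("moana/island/pbrt/island.pbrt");

  // ... cache the parsed result in the binary format ...
  scene->saveTo("island.pbf");

  // ... and from then on, load the binary version in (almost) no time:
  pbrt::Scene::SP cached = pbrt::Scene::loadFrom("island.pbf");
  std::cout << "world has " << cached->world->shapes.size() << " shapes\n";
  return 0;
}
```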

Mind – this is still not a PBRT “renderer” of any sort – but for those who do want to play around with PBRT-style renderers (and in particular, with PBRT-style models!), this library should make it trivially simple to at least get the data in, so you can then worry about the renderer. In particular, it should by now be reasonably complete and stable enough to work out of the box. No, I cannot guarantee the library’s working state on Windows or Mac (it did compile at some point, but I’m not regularly testing those two), but at least on Linux I’d expect it to work – and will gladly help fix whatever bugs come up. Of course, despite all these claims about completeness and robustness (and yes, I do use it on a daily basis): this is an open-source project, and I’m sure there will be some bugs and issues as soon as people start using it on models – or in ways – that I haven’t personally tried yet. If so: I’d be happy to fix them, just let me know (preferably on gitlab).

Anyway: If you plan on playing with it, check it out on either github or gitlab. I will keep those two sites’ repositories in sync (easy enough with git …), so they should always contain the same code, at least in the master branch. However, gitlab is somewhat easier to use with regard to issue tracking and, in particular, pull requests from users, so if you do plan on filing issues or sending pull requests, I’d suggest gitlab. Of course, any criticism, bugs, issues, or requests for improvement are highly appreciated …


PS: Just to show that it can really parse all of Moana, I added a little screenshot from a normal shader in a totally unrelated renderer of mine. Note that the lack of shading information is entirely due to that renderer; the parser does have the full material and texture information – it’s just that the renderer doesn’t support all effects yet, so I don’t want to prematurely post any images of it.

RTOW in OptiX – Fun with CuRand…

Bottom line: with the new random number generator, the RTOW-OptiX sample on Turing now runs in ~0.5 secs …

Since several people have asked for Turing numbers for my “RTOW in OptiX” example, I finally sat down and ran it. First result – surprise: in my original code there was hardly any difference between Turing and Volta – and that just didn’t make sense. Sure, you do still need a special development driver to even use the Turing ray tracing cores from within OptiX, but I actually had that, so why didn’t it get faster? And sure, there’s only so much speedup you can expect in a scene that doesn’t have any triangles at all, and only a very small number of primitives to start with. But still, that didn’t make sense. There also was hardly any difference between the iterative and recursive versions … and none of that made sense whatsoever.

Well – in cases like that, a good first step is always to have a look at the assembly (excuse me: PTX) code that one’s code is actually generating. In our OptiX example, that’s actually super easy: not only is PTX way easier to read than regular assembly, the very nature of OptiX’s “programs” approach also means that you don’t have to sift through an entire program’s worth of asm output to find the one function you’re interested in … instead, you only look at the PTX code for the one kernel you care about. And even simpler, the cmakefile already generates all these ptx files (that’s the way OptiX works), so looking at that was very easy.

Now looking at the ray gen program, I was at first – for lack of a better word – “dumbfounded”: thousands of lines of cryptic PTX code, with movs, xors, loads, and stores, all apparently randomly thrown together, and hardly anything that looked like “useful” code. Clearly my “actual” ray gen program was at the end of this file, and looked great – but what was all that other stuff?? No wonder this wasn’t any faster on Turing than on Volta – all it did was garble memory!

Turns out the culprit was something I had absolutely not expected: CuRand. I hadn’t even known about CuRand before I saw Roger Allen’s CUDA example, but when I first saw it, it looked like an easy-to-use equivalent to Pete’s use of drand48(), so I simply used it for my sample, too. Now CuRand does indeed seem to be a very good random number generator, and to have some really nice properties – but it also has a very, very – did I say: very! – expensive set-up phase, where it’s taking something like a 25,000-sized scratchpad and garbling around in it. And since I ran that once per pixel, it turns out that just initializing that random number generator was more expensive in this example than all the rendering taken together …

Of course, the solution to that was simple: Pete already used ‘drand48()’ in his reference CPU example, and though that function doesn’t exist in the CUDA runtime, it’s trivially simple to implement. Throwing that into my example – and taking CuRand out – and lo and behold, my render time went down to something like 0.5 sec. And in that variant I also saw exactly what I had expected: that iterative is way faster than recursive, and Turing way faster than Volta. Of course, changing the random number generator also changed the image (I haven’t looked in detail yet, but it “feels” as if the CuRand image was better), and has of course also made the Volta code faster. Either way – for now, 500ms is good with me 🙂
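For reference, “trivially simple” means something like the following – a minimal sketch, not necessarily the exact code in the repo: drand48() is just a 48-bit linear congruential generator with well-known constants, so a per-pixel-state version is only a handful of lines:

```cpp
// Minimal drand48()-style LCG for CUDA device code (sketch; repo code may differ).
// Same multiplier/increment as the libc drand48() family, 48 bits of state.
struct DRand48 {
  unsigned long long state;

  __device__ void init(unsigned long long seed) {
    state = seed;
    for (int i = 0; i < 4; i++) (void)next();   // warm up so per-pixel seeds decorrelate
  }

  __device__ float next() {
    state = (0x5DEECE66DULL * state + 0xBULL) & 0xFFFFFFFFFFFFULL;
    // use the top 24 bits so the result stays strictly in [0,1)
    return float(state >> 24) / float(1 << 24);
  }
};

// usage in a ray-gen program, e.g. (pixelIndex is a hypothetical per-pixel ID):
//   DRand48 rng; rng.init(pixelIndex);
//   float u = rng.next(), v = rng.next();
```

Unlike CuRand’s per-thread curand_init(), initializing this costs a few integer ops per pixel – which is exactly the point.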

With that – back to work….

RTOW in OptiX – added iterative variant…

Huh, how fitting: “Ray Tracing in One Weekend” – and here I am, sitting over a coffee on a Sunday morning, writing about ray tracing in one weekend … on a weekend. And if that wasn’t recursive enough, I’m even writing about recursion in … uh-oh.

Aaaaanyway. For reference, I just added a purely iterative variant of the “RTOW-in-OptiX” example that I wrote about in my previous two posts. The original code I published Friday night tried to stay as close as possible to Pete’s example, and therefore used “real” recursion, in the sense that the “closest hit” programs attached to the spheres did the full “Material::scatter” of their respective material (lambertian vs. dielectric vs. metal), plus a recursive “rtTrace()” to continue the path – thus doing real recursive ray (actually: path) tracing.
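In (heavily abbreviated) code, that recursive variant looks roughly like this – a sketch only, with scatter() and hitPoint() as hypothetical stand-ins for what the actual sample defines:

```cpp
#include <optix_world.h>

struct PRD { optix::float3 color; int depth; };

rtDeclareVariable(PRD, prd, rtPayload, );
rtDeclareVariable(optix::Ray, ray, rtCurrentRay, );
rtDeclareVariable(rtObject, world, , );

// assumed helpers: sample a scatter direction/attenuation for this material,
// and reconstruct the hit position from the ray and hit distance
__device__ bool scatter(const optix::Ray &ray, optix::float3 &dir, optix::float3 &atten);
__device__ optix::float3 hitPoint();

// Sketch of a recursive closest-hit program: scatter at the hit point, then
// recursively rtTrace() the scattered ray, just like Pete's CPU code does.
RT_PROGRAM void lambertian_closest_hit()
{
  optix::float3 scatterDir, attenuation;
  if (prd.depth < 50 && scatter(ray, scatterDir, attenuation)) {
    PRD rec; rec.depth = prd.depth + 1;
    optix::Ray scattered(hitPoint(), scatterDir, /*rayType=*/0, 1e-3f, RT_DEFAULT_MAX);
    rtTrace(world, scattered, rec);            // the recursion happens here
    prd.color = attenuation * rec.color;
  } else
    prd.color = optix::make_float3(0.f);       // absorbed (or max depth reached)
}
```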

Now if you read the previous paragraph very closely you may have seen that I put “real” in quotes, for good reason: OptiX will internally refactor that code so it does not really recurse in the way Pete’s CPU version did – with a very deep stack and everything – but will likely do something more clever with it, which you can read more about in the original OptiX SIGGRAPH paper.

All that said – no matter what OptiX may or may not do with it, from a programmer’s standpoint it’s true recursion … and though OptiX may do some refactoring to avoid the “gigantic stacks” problem, it still has to do something to handle all the recursive state – and that, of course, is not cheap. Consequently, real recursion is generally something to be avoided (which, BTW, typically also makes the renderer simpler to reason about, anyway).

Roger Allen’s CUDA-version already did this transformation, and used a recursive version: Since his example used CUDA directly, there was no way for any compiler framework to re-factor the code, so if he had used recursion the CUDA compiler would really have had to use enough stack space per pixel to store up to 50 recursive trace contexts, which would probably not have ended well.

In my original OptiX example, I didn’t have this problem, and could trust OptiX to handle that recursion for me in a reasonable way. Nevertheless, as said above, real recursion is usually not the right way to go about it (and BTW: on a CPU it usually isn’t, either!), so the downside of staying close to Pete’s original solution was that my original example might actually have led some readers to think that I wanted them to write such recursive code – which of course is not what I intended.

As such, for reference, I just added an iterative version to my example as well. The particular challenge in this example is that while the CPU and CUDA versions have real “Material” classes with real virtual functions, in OptiX it’s a bit tricky to attach real virtual classes to OptiX objects (yes, you can do it – after all, programs are written in general CUDA code – but let’s not go there right now). The way I went about this in my version is to have the closest-hit programs do one Material::scatter() operation for the material associated with that geometry, and return the resulting scattered ray and attenuation back to the ray generation program via the PRD (per-ray data). Of course, this approach works only because the Material in Pete’s code does exactly one thing – scatter() – and wouldn’t have worked if the ray generation program had had to call multiple different material methods … but hey, this example is not about “how to write a complex path tracer in OptiX” – that may come at a later time; for now, this is only about how to map Pete’s example, nothing more.
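The corresponding ray generation side then looks roughly like this – again only a sketch with hypothetical helper names (see the repo for the real code): the closest-hit programs fill the PRD with one scatter result, and a loop replaces the recursion:

```cpp
#include <optix_world.h>

// Sketch of the iterative loop: one rtTrace() per bounce, with the closest-hit
// program doing a single scatter() and reporting back via the PRD; the miss
// program sets 'event' to rayMissed.
enum ScatterEvent { rayBounced, rayMissed, rayAbsorbed };

struct PRD {
  optix::float3 attenuation;                  // filled by closest-hit: this bounce's color scale
  optix::float3 scattered_org, scattered_dir; // filled by closest-hit: the continued path
  ScatterEvent  event;
};

rtDeclareVariable(rtObject, world, , );

__device__ optix::float3 missColor(const optix::Ray &ray);  // assumed: Pete's sky gradient

__device__ optix::float3 traceColor(optix::Ray ray)
{
  optix::float3 color = optix::make_float3(1.f);
  for (int depth = 0; depth < 50; depth++) {  // same max depth as the recursive version
    PRD prd;
    rtTrace(world, ray, prd);
    if (prd.event == rayMissed)   return color * missColor(ray);
    if (prd.event == rayAbsorbed) return optix::make_float3(0.f);
    color = color * prd.attenuation;          // accumulate, then continue the path
    ray = optix::make_Ray(prd.scattered_org, prd.scattered_dir,
                          /*rayType=*/0, 1e-3f, RT_DEFAULT_MAX);
  }
  return optix::make_float3(0.f);             // path got too long: treat as absorbed
}
```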

I do hope the reference code will be useful; and as usual: any feedback is welcome!

With that – back to …. work?

PS: For those interested in having a look: I already pushed the code to github. I’ll be running some more extensive numbers when I’m back at a real machine (no, I don’t bring my Turing to my Sunday-morning coffee …), but at least on my “somewhat dated” Thinkpad P50 laptop, I get the following (both at 1200x800 pixels, 128 samples per pixel):

  • pete’s version (with -O3, and excluding image output), on a Core i7-6700HQ@2.6Ghz(running at 3.2Ghz turbo): 12m32s.
  • optix version, on a Quadro M1000M: 18 sec.

Of course, this comparison is extremely flawed: Pete’s version doesn’t even use threads, let alone an acceleration structure, both of which my OptiX version does. So take this with a grain of salt – or an entire salt truck’s worth, for that matter! That said, the parallelism in the OptiX version comes for free, and the acceleration structure … well, all that took was adding a single line of code (‘gg->setAcceleration(g_context->createAcceleration(“Bvh”))‘) …

PPS: First performance numbers on some more powerful cards (driver 410.57, OptiX 5.1.1):

  • 1070, recursive: 0.58s build, 6s render
  • 1070, iterative: 0.66s build, 5.5s render
  • Titan V, recursive: 0.57s build, 2.6s render
  • Titan V, iterative: 0.63s build, 2.1s render
  • Turing: to come…

“RTOW in OptiX” sample code now on github…

As promised in last night’s post, I cleaned up the sample code and pushed to github:

I haven’t tried the cleanups on windows yet, but it should work. If you run into trouble, let me know!

One note on the code: I’ll very happily accept pull requests that cover bugs, typos, build fixes, etc. Please note I do want to stay as close as possible to the original example, though, so please don’t send pull requests with major restructurings, general improvements, or feature additions, even if they’d be useful in their own right … this is not supposed to be a “how to do cool things in OptiX” repo; just an OptiX “port” of Pete’s example.

And now – back to work 🙂