RTOW in OptiX – Fun with CuRand…

Bottom line: With the new random number generator, the RTOW-OptiX sample on Turing now runs in ~0.5 secs…

Since several people have asked for Turing numbers for my “RTOW in OptiX” example, I finally sat down and ran it. First result – surprise: in my original code there was hardly any difference between Turing and Volta – and that just didn’t make sense. Sure, you do still need a special development driver to even use the Turing ray tracing cores from within OptiX, but I actually had that, so why didn’t it get faster? And sure, there’s only so much speedup you can expect in a scene that doesn’t have any triangles at all, and only a very small number of primitives to start with. But still, that didn’t make sense. There was also hardly any difference between the iterative and recursive versions … and none of that made sense whatsoever.

Well – in cases like that, a good first step is always to have a look at the assembly (excuse me: PTX) code that one’s program actually generates. In our OptiX example, that’s actually super-easy: not only is PTX way easier to read than regular assembly, the very nature of OptiX’ “programs” approach also means that you don’t have to sift through an entire application’s worth of asm output to find the one function you’re interested in … instead, you only look at the PTX code for the one program you care about. And even simpler, the cmakefile already generates all these ptx files (that’s the way OptiX works), so looking at them was very easy.

Now looking at the ray gen program, I was at first – for lack of a better word – dumbfounded: thousands of lines of cryptic PTX code, with movs, xors, loads, and stores, all apparently randomly thrown together, and hardly anything that looked like “useful” code. Clearly my “actual” ray gen program was at the end of this file, and looked great – but what was all that other stuff?? No wonder this wasn’t any faster on Turing than on Volta – all it did was garble memory!

Turns out the culprit was something I had absolutely not expected: CuRand. I hadn’t even known about curand before I saw Roger Allen’s CUDA example, but when I first saw it there it looked like an easy-to-use equivalent to Pete’s use of drand48(), so I simply used it for my sample, too. Now CuRand does indeed seem to be a very good random number generator, and to have some really nice properties – but it also has a very, very – did I say: very! – expensive set-up phase, where it takes something like a 25,000-sized scratchpad and churns around in it. And since I ran that once per pixel, just initializing that random number generator turned out to be more expensive in this example than all the rendering taken together (the sketch below shows the pattern) …
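
To make that concrete, here is a hedged sketch of the kind of per-pixel setup that was eating all the time – written as a plain CUDA kernel for brevity (the OptiX ray gen program did the moral equivalent), with all names made up rather than taken from the actual sample:

  #include <cuda_runtime.h>
  #include <curand_kernel.h>

  #define NUM_SAMPLES 128   // samples per pixel, as in the example

  // Illustrative only -- not the actual sample code. The point: every pixel
  // builds its own curand state from scratch. curand_init() with a per-pixel
  // sequence number does a costly skip-ahead/scratchpad setup, so for a scene
  // this small that setup alone can cost more than all rays traced afterwards.
  __global__ void renderKernel(float3 *fb, int width, int height, int seed)
  {
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x >= width || y >= height) return;
    int pixelID = y * width + x;

    curandState rngState;
    // one full generator initialization per pixel -- this is the culprit
    curand_init(/*seed*/ seed, /*sequence*/ pixelID, /*offset*/ 0, &rngState);

    float3 color = make_float3(0.f, 0.f, 0.f);
    for (int s = 0; s < NUM_SAMPLES; s++) {
      float u = (x + curand_uniform(&rngState)) / float(width);
      float v = (y + curand_uniform(&rngState)) / float(height);
      // ... build the camera ray for (u,v), trace it, add its contribution ...
      color.x += u; color.y += v;   // placeholder so the sketch compiles
    }
    fb[pixelID] = color;
  }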

Of course, the solution to that was simple: Pete already used ‘drand48()’ in his reference CPU example, and though that function doesn’t exist in the CUDA runtime it’s trivially simple to implement (one possible version is sketched below). Throwing that into my example – and taking curand out – and lo and behold, the render time goes down to something like 0.5 sec. And in that variant I also see exactly what I had expected: iterative is way faster than recursive, and Turing is way faster than Volta. Changing the random number generator of course also changed the image (I haven’t looked in detail yet, but it “feels” as if the curand image was better), and it has of course also made the Volta code faster. Either way – for now, 500ms is good with me 🙂
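
For reference, here is one possible way of writing such a drand48() stand-in – a plain 48-bit linear congruential generator with the same constants the libc version uses (multiplier 0x5DEECE66D, increment 0xB). This is a hedged sketch with made-up names, not necessarily the exact code that ended up in the repo:

  // Illustrative drand48()-style RNG for device code; names are made up.
  struct DRand48 {
    unsigned long long state;

    inline __device__ void init(unsigned long long seed)
    { state = (seed << 16) | 0x330Eull; }            // same seeding layout as srand48()

    inline __device__ float operator()()
    {
      state = (0x5DEECE66Dull * state + 0xBull) & 0xFFFFFFFFFFFFull;
      return (state >> 24) * (1.f / 16777216.f);     // top 24 bits, mapped to [0,1)
    }
  };

The nice part: seeding one of these per pixel (say, from the launch index) is a handful of integer ops instead of curand’s scratchpad initialization – and the state that has to travel with each path is a single 64-bit integer.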

With that – back to work….

RTOW in OptiX – added iterative variant…

Huh, how fitting: “Ray Tracing on a Weekend”, and I’m sitting here, Sunday morning, over a coffee, writing about ray tracing on a weekend … on a weekend. And if that wasn’t recursive enough, I’m even writing about recursion in ….. uh-oh.

Aaaaanyway. For reference, I also just added a purely iterative variant of the “RTOW-in-OptiX” example that I wrote about in my previous two posts. The original code I published Friday night tried to stay as close as possible to Pete’s example, and therefore used “real” recursion, in the sense that the “closest hit” programs attached to the spheres did the full “Material::scatter” for their respective material (lambertian vs dielectric vs metal), plus a recursive “rtTrace()” to continue the path – thus doing some real recursive ray (actually: path) tracing, roughly along the lines of the sketch below.
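
In heavily simplified (and hedged) classic-OptiX terms – struct and variable names are mine, not taken from the repo – that recursive variant looks something like this:

  #include <optix.h>
  #include <optix_world.h>
  using namespace optix;

  // Illustrative sketch of the recursive scheme, not the actual sample code.
  struct PRD_Recursive {
    float3 color;   // color accumulated for the rest of the path
    int    depth;   // current recursion depth
  };

  rtDeclareVariable(PRD_Recursive, prd, rtPayload, );
  rtDeclareVariable(rtObject, world, , );
  rtDeclareVariable(optix::Ray, currentRay, rtCurrentRay, );
  rtDeclareVariable(float, hitT, rtIntersectionDistance, );

  RT_PROGRAM void closestHitRecursive()
  {
    float3 hitPoint = currentRay.origin + hitT * currentRay.direction;
    // Material::scatter() would go here; placeholders keep the sketch compilable:
    float3 attenuation  = make_float3(0.5f);
    float3 scatteredDir = make_float3(0.f, 1.f, 0.f);

    if (prd.depth < 50) {
      PRD_Recursive childPrd;
      childPrd.depth = prd.depth + 1;
      Ray scattered = make_Ray(hitPoint, scatteredDir, 0, 1e-3f, RT_DEFAULT_MAX);
      rtTrace(world, scattered, childPrd);        // the actual recursion
      prd.color = attenuation * childPrd.color;
    } else {
      prd.color = make_float3(0.f);               // path too long: give up
    }
  }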

Now if you read the previous paragraph very closely you may have noticed that I put “real” in quotes, for good reason: OptiX will internally re-factor that code so that it doesn’t really recurse the way Pete’s CPU version does – with a very deep stack and everything – but will likely do something more clever instead, which you can read more about in the original OptiX SIGGRAPH paper.

All that said, no matter what OptiX may or may not do with it, from a programmer’s standpoint it’s true recursion … and though OptiX may do some refactoring to avoid the “gigantic stacks” problem, it’ll still have to do something to handle all the recursive state – and that, of course, is not cheap. Consequently, real recursion is generally something to be avoided (which, BTW, typically makes the renderer simpler to reason about, anyway).

Roger Allen’s CUDA version already did this transformation, and used an iterative version: since his example used CUDA directly, there was no compiler framework that could have re-factored the code for him, so if he had used recursion the CUDA compiler would really have had to use enough stack space per pixel to store up to 50 recursive trace contexts – which would probably not have ended well.

In my original OptiX example, I didn’t have this problem, and could trust OptiX to handle that recursion for me in a reasonable way. Nevertheless, as said above, real recursion is usually not the right way to go about it (and BTW: on a CPU it usually isn’t, either!), so the downside of staying close to Pete’s original solution was that this original example might actually have led some readers to think that I wanted them to write such recursive code – which is of course not what I intended.

As such, for reference, I just added an iterative version to my example as well. The particular challenge in this example is that while the CPU and CUDA versions have real “Material” classes with real virtual functions, in OptiX it’s a bit tricky to attach real virtual classes to OptiX objects (yes, you can do it – after all, programs are written in general CUDA code – but let’s not go there right now). For my particular version, the way I went about this is to have the closest hit programs do one Material::scatter() operation for the material associated with that geometry, and return the resulting scattered ray and attenuation back to the ray generation program via the PRD, which then loops (see the sketch below). Of course, this approach works only because the Material in Pete’s code does exactly one thing – scatter() – and it wouldn’t have worked if the ray generation program had had to call multiple different material methods … but hey, this example is not about “how to write a complex path tracer in OptiX” – that may come at a later time; for now, this is only about how to map Pete’s example, nothing more.
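
The ray generation side of that scheme then looks roughly like the following – again a hedged sketch with made-up names (and the camera/sampling code stripped out), not the literal repo code:

  #include <optix.h>
  #include <optix_world.h>
  using namespace optix;

  // Illustrative sketch of the iterative scheme, not the actual sample code.
  struct PRD_Iterative {
    float3 attenuation;   // attenuation of this bounce (or background color on a miss)
    float3 origin;        // origin of the scattered ray
    float3 direction;     // direction of the scattered ray
    int    scattered;     // 1 if the material scattered, 0 if the ray missed/was absorbed
  };

  rtDeclareVariable(rtObject, world, , );
  rtDeclareVariable(uint2, launchIndex, rtLaunchIndex, );
  rtBuffer<float3, 2> fb;

  RT_PROGRAM void rayGenIterative()
  {
    // camera setup omitted -- just shoot a dummy ray straight ahead here
    Ray ray = make_Ray(make_float3(0.f), make_float3(0.f, 0.f, -1.f),
                       /*rayType*/0, /*tmin*/1e-3f, /*tmax*/RT_DEFAULT_MAX);

    float3 color = make_float3(1.f);
    for (int depth = 0; depth < 50; depth++) {
      PRD_Iterative prd;
      rtTrace(world, ray, prd);    // closest hit does ONE scatter(); miss writes background
      color *= prd.attenuation;
      if (!prd.scattered) break;   // path done: ray left the scene or got absorbed
      ray = make_Ray(prd.origin, prd.direction, 0, 1e-3f, RT_DEFAULT_MAX);
    }
    fb[launchIndex] = color;
  }

No per-pixel recursion, no growing stack – just one small PRD struct that gets re-filled every bounce.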

I do hope the reference code will be useful; and as usual: any feedback is welcome!

With that – back to …. work?

PS: For those interested in having a look: I already pushed the code to github (https://github.com/ingowald/RTOW-OptiX). I’ll be running some more extensive numbers when I’m back at a real machine (no, I don’t bring my Turing to my Sunday-morning coffee…), but at least on my “somewhat dated” Thinkpad P50 laptop, I get the following (both at 1200x800 pixels with 128 samples per pixel):

  • Pete’s version (with -O3, and excluding image output), on a Core i7-6700HQ @ 2.6GHz (running at 3.2GHz turbo): 12m32s.
  • OptiX version, on a Quadro M1000M: 18 sec.

Of course, this comparison is extremely flawed: Pete’s version doesn’t even use threads, let alone an acceleration structure, both of which my OptiX version does. Take this with a grain of salt – or an entire salt-truck’s worth of it, for that matter! That said, the parallelism in the OptiX version comes for free, and the acceleration structure …. well, all that took was adding a single line of code (‘gg->setAcceleration(g_context->createAcceleration("Bvh"))’) …

PPS: First performance numbers on some more powerful cards (driver 410.57, OptiX 5.1.1):

  • 1070, recursive: 0.58s build, 6s render
  • 1070, iterative: 0.66s build, 5.5s render
  • Titan V, recursive: 0.57s build, 2.6s render
  • Titan V, iterative: 0.63s build, 2.1s render
  • Turing: to come…

“RTOW in OptiX” sample code now on github…

As promised in last night’s post, I cleaned up the sample code and pushed it to github: https://github.com/ingowald/RTOW-OptiX.

I haven’t tried the cleaned-up version on Windows yet, but it should work. If you run into trouble, let me know!

One note on the code: I’ll very happily accept pull requests that fix bugs, typos, build issues, etc. Please note that I do want to stay as close as possible to the original example, though, so please don’t send pull requests with major restructurings, general improvements, or feature additions, even if they’d be useful in their own right … this is not supposed to be a “how to do cool things in OptiX” repo; it’s just an OptiX “port” of Pete’s example.

And now – back to work 🙂

Ray Tracing in a Weekend … in OptiX (Part 0 of N :-) )

Yay! I finally have my first OptiX version of Pete Shirley’s “Ray Tracing in a Week-end” tutorial working. Not the whole series yet (that’s still to come), but at least the “final scene” … pic below.

Background

Ever since Pete’s now-famous “Ray Tracing in a Week-end” came out (see, e.g., this link for more details), lots of people have used his mini-books to learn more about ray tracing. Those books are, in fact, absolutely amazing learning material (if you have not read them yet – you should!), but they suffer from one big disadvantage: yes, they’ll teach you the fundamentals (and in particular, the elegance and beauty!) of ray tracing – but they won’t teach you how to use modern GPUs for that. And particularly since the introduction of Turing, one really should know how to do that.

To fix that shortcoming, I recently suggested to Pete that “somebody” should actually sit down and write up how to do that same book series – step by step – in OptiX. Roger Allen has since done that same exercise for CUDA (see here for that (also excellent!) article), but that still has a shortcoming in that by using “plain” CUDA it doesn’t use Turing’s ray tracing hardware acceleration. To use the latter, one would have to either use the Windows-only DXR (e.g., through Chris Wyman’s – equally excellent! 🙂 – DXR samples), or use OptiX.

Long story short: I did eventually start on an “OptiX On a Week-End” (“OO-Awe”!) equivalent of Pete’s book series (and hope Pete will jump in – he’s such a much better writer than I am :-/) … but writing an entire mini-book, with examples and everything, turns out to be even more work than I had feared. So, following my motto of “better something useful early than something perfect too late”, I finally sat down, skipped all the step-by-step introductions and detailed explanations, and just wrote the final-chapter example in OptiX. I’ll still write all that other stuff, but for now there’s a much shorter version with just the final chapter.

So, what’s to come:

First, I’ll clean up the code a bit, and push that one final-chapter example (with cmake build scripts etc) to github (I’ll write another post when that’s done). Once that’s public, I’ll write a series of little posts on how that sample works, relative to Pete’s CPU-only book. And only when all of that is out and written will I go back to doing the longer mini-book version. As such, this blog post is actually “part 0” of a series of posts that will soon be coming … I hope you’ll find it useful!

With that – back to work…. 🙂

 

[image: finalChapter – render of the final-scene example]

Joining NVidia…

As I’m sure some of you will have heard by now, today is my last day at Intel, and starting on Monday, I’ll be working for NVidia.

Looking back, I’ve now been working for Intel for almost exactly 11 years, and if you were to include all the time I worked “closely with Intel technologies” during my PhD and post-doc times, it’s actually closer to two decades: even before starting my PhD (while working on Alex Keller’s ray tracer in Kaiserslautern) I was already drilling holes into Celeron chips (and soldering on cables) to make them dual-socket capable (they were supposed to be single-socket only 🙂 ); and at the start of my PhD we (including Carsten, in Saarbruecken) were writing the first interactive SSE ray tracer prototypes, at a time when the Linux kernel didn’t even save the SSE registers yet (yes, that makes for fun-to-replicate bugs on a dual-socket machine!). Later on, while finally working for Intel, I’ve been lucky to have worked on virtually every cool technology that came out, from Larrabee, to Knights-anything, to pretty much every Xeon architecture built in the last two decades, to lots of other cool stuff. It’s been fun, and I’ve worked with truly talented people (some of whom are, in their field, hands-down the best in the world, and some of whom I’ve known for longer than I’ve had my kids!). And yes, we’ve done some pretty cool projects, too: from the first real-time ray tracers on Larrabee, to things like compilers (my IVL, and Matt’s ISPC), to several prototype ray tracers that never made it into the public, all the way to projects like Embree and OSPRay, both of which turned into massively successful projects. In other words, I’ve had the chance to work on pretty much anything I wanted, which was typically anything that either involves, requires, or is required for, the tracing of rays.

All that said, as Matt recently wrote on his blog: “the world it is a-changing” (see this link for his blog article); and once again channeling Matt (man – that seems to be becoming a pattern here!?), I felt like I needed “to be in the thick of all of that and to help contribute to it actually happening” … so when the opportunity to do so came up, I simply couldn’t say no. So with all that: today is my last day at Intel, and Monday will be my first at NVidia – looking forward to it, that’ll be interesting indeed!

One final note…

While trying to figure out how best to break this news, I had a second, closer look at the article Matt had written when he joined NVidia a few weeks back. While doing so, I realized for the first time just how deeply he had thought about this whole “ray tracing for real time” topic. Of course I had “read” that article before, but I had never really appreciated how much thought went into it.

Anyway – just to follow up on that particular topic from my point of view: for me personally, it’s never been a question of the “if”, but only of the “when”, and of the “who” will be the first to make it happen. To explain: even when I was still in the middle of my master’s degree (say, ’96 or so), it was already clear that all high-quality rendering was done via ray tracing – sure, there were interesting discussions on whether it’d be path tracing, backwards/reverse/forward path tracing, photon mapping, bidirectional path tracing, or Metropolis (all of which I had played with at some point back then 🙂 ) … but in the end, they all used ray tracing. At the same time, anything that was primarily time-constrained was doing something else (at that time: REYES’s “split-and-dice”, the equivalent of rasterization), but even then it seemed clear to me that with “computers” getting “faster” every year it would only be a question of time until the time constraint went away, and that eventually “the simpler/more elegant algorithm” would get used (because at the end of the day, that’s what it always comes down to: once you can afford it, you always pick the more elegant, and more general, solution).

And sure enough, over the last decade and a half we’ve already seen this happening in the movie industry: when I started my PhD, the general opinion was still that this industry would “never” switch to ray tracing, because it needed too much memory (REYES could do streaming), because it was too slow (REYES was faster), because it needed nasty acceleration structures, and because all this photo-realism wasn’t all that important (and at least apparently, sometimes detrimental!) to the artistic process, anyway … yet by today virtually every production renderer has switched to ray tracing, because within the budget allocated for a frame it became possible to do it, and once it was, it was just simpler to express that renderer in ray-based terms. As such, at least in my eyes it’s always been merely a matter of time until real-time graphics goes through what the movie industry has already gone through – at some point ray tracing will be fast enough to do in real time, and once it is – if history is any guide – people will use it.

Anyway – no matter how you reach that same conclusion, whether you think deeply about it or simply extrapolate into the future – it does look like ray tracing is here to stay. Let’s see where it takes us. It’ll be an interesting few years ahead.

Preprint of our Vis’19 paper on Iso-surface ray tracing of AMR Data now available …

It took me a while to get around to making an “author’s copy” and uploading it to my blog, but here it now is – a preprint of our Vis 2019 paper on “CPU Isosurface Ray Tracing of Adaptive Mesh Refinement Data” (link to pdf).

[image: paper-amr-iso-header – paper teaser figure]

A few notes:

  • This paper is a direct follow-up to our previous AMR volume ray tracing paper (published at last year’s SigAsia Vis Symposium), but adds implicit iso-surface ray tracing capability (using a correct, analytic intersection method). The “octant method” reconstruction scheme was actually already sketched in the original submission of that previous paper, but wasn’t explained well enough back then, so got axed in the final version.
  • The “octant method” that this paper introduces is actually – if I may say so – pretty neat, because it’s both interpolating and continuous, even in corner cases. It may, however, well be the one thing in my career on which I had to expend the most brain power to get right – it’s trivial in 1D, but even in 2D it took a while, and 3D has even more corner cases that some earlier attempts failed on (if only you could see the stack of notebooks full of sketches: at one point I used xfig to draw a 2D AMR example, printed a few hundred pages full of that template, and pretty much used them all up going through the algorithm step by step, for each cell, until it finally worked!). I worked on this – on and off – for almost 3 years, which is kind-of ridiculous …
  • The code is all implemented in OSPRay (of course?), as a loadable ospray module that is fully compatible with all other ospray actors (renderers, other geometry types, MPI-parallel rendering, etc). This module is not yet part of any official ospray release, but it is already available upon request (Ethan should be able to provide it – it’s all Apache-licensed, so fully free), and will hopefully “at some point” be included in mainline ospray as well.
  • Though the paper’s title focuses exclusively on the adaptive mesh refinement (AMR) part, the actual code is just as much about the general implicit iso-surfacing code itself – the “impi” module (for imp-licit i-sosurface) is generally applicable to other volume types as well, and does come with an implementation for structured volumes, too. The paper itself is actually kind-of two papers in one: part on the IMPI module, and part on the octant method that uses it for iso-surface ray tracing of AMR data. As such, I’d fully expect this module to be used as much without AMR as with it.
  • One reviewer (correctly!) pointed out that with all the “theoretical” continuity we claim in this paper there is still a chance of pixel-sized “shoot-throughs” due to numerical accuracy issues: even if we make the boundaries between levels fully continuous in a mathematical sense, the fact that different voxels/octants on different sides of the boundary use different floating point values for the cell coordinates (and combine those in a different order of computations) means there can be elimination effects in the (limited-precision) floating point computations. Yes, that is perfectly correct, and I had fully overlooked it in the original submission (maybe one of the best reviewer catches I’ve ever seen!) – the little example after this list illustrates the underlying effect. But then, exactly the same effect will happen even for voxels in structured volumes, without any level boundaries …
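
Just to make that reviewer’s point concrete – this is a generic illustration, not code from the paper: two mathematically identical expressions, evaluated in a different order, need not produce the same floating point value, and that is exactly the kind of mismatch that can open up a pixel-sized crack along a boundary even when the reconstruction is continuous on paper.

  #include <cstdio>

  // Generic illustration (not code from the paper): re-associating a sum
  // changes the rounded float result.
  int main()
  {
    float tiny = 1e-8f, one = 1.f;
    float lhs = (tiny + one) - one;   // tiny gets absorbed into 1.0f  -> 0
    float rhs = tiny + (one - one);   //                               -> 1e-8f
    printf("lhs = %g, rhs = %g, equal = %d\n", lhs, rhs, int(lhs == rhs));
    return 0;
  }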

 

CfI: Embree on ARM/Power/…?

Executive Summary: This is a “CfI” – a “call for involvement” – for anybody interested in building and, in particular, maintaining an Embree version for non-IA ISAs such as ARM, Power, etc. If you’re mostly based on – or at least, very active on – any of those platforms, and interested in being involved in creating and maintaining a version of Embree for them …. let me know!

Background

As most of those who read this blog will surely know already, Embree (http://embree.github.io) is a ray tracing library that focuses on accelerating BVH construction and ray traversal, thus allowing the user to build their own renderer with minimal constraints yet good performance. Arguably the biggest strength of Embree – other than performance – is its versatility, in that it allows things like user-programmable geometries, intersection callbacks, different ray layouts and BVH types, etc.

Another big strength of Embree is that it will automatically select the best vector instructions available on your CPU, and make full use of them: if your CPU has AVX512 it’ll use that; if it doesn’t, it falls back to AVX2, or AVX, or SSE … you get the point. The caveat, though, is that Embree today only supports Intel-style vector extensions; yes, it supports SSE, AVX, AVX2, and AVX512; and yes, it works on AMD CPUs just as well as it does on Intel CPUs …. but if you’re on Power, ARM, SPARC, etc, it currently won’t work.

Embree on Non-IA CPUs?

On the face of it, only supporting IA (Intel Architecture) style CPUs isn’t too big a limitation … in particular since in high-end rendering almost all rendering is done on Xeons, anyway. However, if you are an ISV whose software is supposed to also run on non-IA CPU types – think of a game studio, or the recently announced Steam Audio 2 (see here) – then you’re currently faced with two choices: either don’t use Embree at all (even where it would be highly useful), or change your software to support two different ray tracers, depending on which platform you’re on (ugh). As such, I would personally argue – and yes, this is my own personal view – that it would be highly useful to have a version of Embree that will also compile – and run – on non-IA CPUs.

In fact, doing just that is way simpler than you might think! Of course, everybody’s first thought is that with all the man-years of development that went into Embree “as is”, doing the same for other vector units would have to be a major undertaking. But if you take a look at Embree’s source (if you want to, you can conveniently browse it on its github page) you’ll very quickly realize that almost everything in Embree is written using some SIMD “wrapper classes” (see the code in embree/common/simd) that implement things like a logical “8-wide float” or “16-wide bool”, etcpp … and that these wrapper classes are then implemented once in SSE, once in AVX, once in AVX512, etc – roughly along the lines of the sketch below.
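
Just to make the idea concrete, here is a purely illustrative sketch – NOT Embree’s actual class, just the shape of it: a logical “8-wide float” whose operators map straight onto the vector instructions of one particular ISA (here: AVX).

  #include <immintrin.h>   // AVX intrinsics

  // Illustrative only -- not Embree's actual wrapper class.
  struct vfloat8 {
    __m256 v;
    vfloat8() {}
    vfloat8(__m256 val) : v(val) {}
    explicit vfloat8(float x) : v(_mm256_set1_ps(x)) {}
  };

  inline vfloat8 operator+(const vfloat8 &a, const vfloat8 &b)
  { return vfloat8(_mm256_add_ps(a.v, b.v)); }

  inline vfloat8 operator*(const vfloat8 &a, const vfloat8 &b)
  { return vfloat8(_mm256_mul_ps(a.v, b.v)); }

  inline vfloat8 min(const vfloat8 &a, const vfloat8 &b)
  { return vfloat8(_mm256_min_ps(a.v, b.v)); }

All the traversal and build code on top is written against types like this, which is exactly what makes a port tractable: swap the intrinsics inside, and the rest doesn’t care.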

In other words, once you implement those wrappers in your favorite non-IA vector intrinsics, you’re 95% of the way towards having all of Embree compile – and run – on that ISA. After that, there are still a few more things to do, in particular relating to the build system (adapting the cmake scripts to your architecture), to properly “registering” the new kernels (because Embree’s automatic CPU detection currently only looks at Intel CPUs), etc … but all that is “small beer” – the traversal kernels, API implementation, BVH building, threading, etcpp, should all work out of the box.

I am, of course, not the first one to figure that out: in fact, almost two years ago a github user “marty1885” already did exactly that for ARM (see this blog article of his for an excellent write-up!). The funny thing back then was that I had just done “almost” the same thing, writing purely scalar implementations for those wrapper classes, while he independently did his port to ARM NEON. (And just to make this clear: with “scalar” I do not mean that Embree itself was re-written without vectors; I’m talking about an implementation that realizes, say, the 16-wide float “vectors” as 16 scalar adds/muls/etc, rather than using an explicit _mm512_add_ps etc; this still compiles on any platform, but the rest of Embree still “sees” a 16-wide vector – as in the sketch below.)
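
For illustration (again: not the actual code, and using the same hypothetical 8-wide type as in the earlier sketch), the “scalar” flavor of such a wrapper is nothing more than plain loops that any decent auto-vectorizer can map back onto whatever vector instructions the target ISA happens to have:

  // Illustrative "scalar" wrapper -- not the actual Embree code.
  struct vfloat8_scalar {
    float v[8];
    vfloat8_scalar() {}
    explicit vfloat8_scalar(float x) { for (int i = 0; i < 8; i++) v[i] = x; }
  };

  inline vfloat8_scalar operator+(const vfloat8_scalar &a, const vfloat8_scalar &b)
  { vfloat8_scalar r; for (int i = 0; i < 8; i++) r.v[i] = a.v[i] + b.v[i]; return r; }

  inline vfloat8_scalar operator*(const vfloat8_scalar &a, const vfloat8_scalar &b)
  { vfloat8_scalar r; for (int i = 0; i < 8; i++) r.v[i] = a.v[i] * b.v[i]; return r; }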

Both of those test implementations yielded interesting results. For mine, it was the fact that this purely “scalar” implementation of float4, float8, etc worked really well – auto-vectorizers may be, well, “problematic” for complex code, but for something that continuously does 8 adds, 8 muls, etc at a time, they absolutely do see that those can be mapped to vector instructions – maybe not quite as good as manual intrinsics, but surprisingly close. I did, however, never go through the exercise of changing the makefiles, so I never even tried it on ARM etc (well, I don’t have an ARM!). Marty, on the other hand, went “all the way”, and in particular got his entire renderer working on ARM. His finding? That performance was pretty good out of the box, without any rewriting of kernels at all … which is pretty cool … and highly promising.

So, what (still) needs to be done?

OK, proof of concept done – what would we still need? Well – the problem is that both mine and Marty’s implementations are about two years old, and as such pretty outdated by now. It’s not hard to re-do that for the latest Embree, but it has to be done (and I sure won’t have time for that!).

Second, even if that exercise were re-done, it would still have to be maintained: every new release of Embree adds a few features, fixes bugs, adds improvements, etc … and to be really useful for ISVs, the “ported” Embree would have to keep up to date with those updates. Now thanks to git, that “keeping it updated” shouldn’t be too hard – very few of those release updates would touch the wrapper classes, CPU detection, or build system, so in 95% of cases I’d expect such an update to be as simple as a “git pull” from upstream to make a new release …. but it would still have to be done. In fact, if I were to build and maintain such a “non-IA” version of Embree, I’d do it exactly like that: clone the embree repo once, port the wrapper functions and makefiles just like Marty did, then push that to another, easy-to-find repo … and of course, pull in the master diffs every time Embree makes a new release, fix whatever needs to get fixed (probably not much), and push that release too.

Be that as it may – I will likely not have the time to maintain such a project …. but if anybody out there is eager to make a name for themselves in non-IA CPU ray tracing land (and possibly, with some of the existing users of Embree that want to be platform-agnostic) – well, here’s a relatively easy project to do just that!

Anyway – ‘nough for today … back to work!

PS: And of course, once you have a non-IA version of Embree, it’s trivially simple to also have a non-IA version of OSPRay: OSPRay itself doesn’t use vector intrinsics at all; it only uses them indirectly, through ISPC and through Embree. ISPC can already emit code for “scalar” targets, so as soon as Embree compiles for whatever ISA you require (or for scalar, as I had done) ….. well, all you’d need is a C++11-compliant compiler, and likely MPI … which aren’t all too rare nowadays :-). As such, if you do have a non-IA supercomputer (hello there, Summit, Sierra, Tianhe, Sunway, Titan, Sequoia, etc!), and you need a good, scalable, fast ray tracer …. consider this your sign!