Haven’t written in a while … (very busy …) but I’m currently so down with allergies that updating my web presence is pretty much the only thing I’m still useful for, so: just uploaded another paper pre-print, this time for the following (short) paper:
High-Quality Rendering of Glyphs Using Hardware-Accelerated Ray Tracing
Stefan Zellmann, Martin Aumueller, Nate Marshak, and Ingo Wald
Eurographics Parallel Graphics and Visualization (EGPGV) Short Papers, 2020.
(paper, as usual, on my publications page at SCI: http://www.sci.utah.edu/~wald/Publications/index.html)
The core idea behind this paper was to look into how hardware ray tracing – and in particular, the ability to easily and cheaply create millions of “copies” of an object via instancing – would impact glyph rendering. At least in theory, there are three key benefits that ray tracing brings to the table for glyph rendering: First, the ability to create lots of copies of objects very cheaply is good because there are typically a lot of glyphs involved in rendering an image (and animating them is cheap, too, because all you have to do is change some transforms – see the sketch below). Sure, you could previously do this with fragment shaders, too (at least for primary visibility), but with ray tracing this is integrated somewhat more “cleanly” into the overall rendering system, you have fewer issues with overdraw, etc. … so this should be useful.
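(To make the “just change some transforms” point a bit more concrete: here’s a minimal CPU-side sketch – not the paper’s actual code, and independent of any particular ray tracing API – of building one 3x4 affine transform per glyph, orienting a unit arrow along a per-glyph direction. Animating then just means re-filling this array each frame; the glyph geometry itself never changes.)

```cpp
#include <cmath>
#include <vector>

struct vec3 { float x, y, z; };

// A 3x4 row-major affine transform -- the kind of per-instance
// transform that instancing APIs typically consume.
struct Transform3x4 { float m[12]; };

// Build a transform that places a unit arrow (modeled along +Z) at
// `pos`, pointing along `dir` (assumed nonzero), scaled by `scale`.
Transform3x4 glyphTransform(vec3 pos, vec3 dir, float scale)
{
  // normalize the direction; it becomes the glyph's local z axis
  float len = std::sqrt(dir.x*dir.x + dir.y*dir.y + dir.z*dir.z);
  vec3 w { dir.x/len, dir.y/len, dir.z/len };
  // pick any vector not parallel to w, build an orthonormal frame
  vec3 a = (std::fabs(w.x) < 0.9f) ? vec3{1,0,0} : vec3{0,1,0};
  vec3 u { w.y*a.z - w.z*a.y, w.z*a.x - w.x*a.z, w.x*a.y - w.y*a.x };
  float ul = std::sqrt(u.x*u.x + u.y*u.y + u.z*u.z);
  u = { u.x/ul, u.y/ul, u.z/ul };
  vec3 v { w.y*u.z - w.z*u.y, w.z*u.x - w.x*u.z, w.x*u.y - w.y*u.x };
  // columns are the (scaled) frame axes; last column is the translation
  return Transform3x4{{
    scale*u.x, scale*v.x, scale*w.x, pos.x,
    scale*u.y, scale*v.y, scale*w.y, pos.y,
    scale*u.z, scale*v.z, scale*w.z, pos.z }};
}

// one transform per glyph -- re-run this each frame to animate
std::vector<Transform3x4> buildInstances(const std::vector<vec3> &pos,
                                         const std::vector<vec3> &dir)
{
  std::vector<Transform3x4> xfms(pos.size());
  for (size_t i = 0; i < pos.size(); ++i)
    xfms[i] = glyphTransform(pos[i], dir[i], 0.1f);
  return xfms;
}
```

The row-major 3x4 layout is just an illustrative choice here (it happens to match what, e.g., OptiX instance transforms look like); the point is only that each glyph costs one small transform, not its own copy of the geometry.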
Second, the ability to have arbitrary intersection programs makes it much easier to create non-trivial glyph shapes without having to worry about tessellation, which can either be tricky (say, for superquadrics) or – at least when done naively – unnecessarily expensive (say, tessellating millions of arrows into hundreds of triangles each). Again, even before hardware ray tracing you could fix this with fragment shaders, but still …
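(For a feel of what such an intersection program boils down to: below is the classic analytic ray-sphere test, written as plain self-contained C++. In an actual renderer this logic would live in a device-side intersection program, and a superquadric would swap the closed-form quadratic for a numerical root finder, but the principle is the same: the ray hits the exact analytic surface, no triangles involved.)

```cpp
#include <cmath>

struct vec3 { float x, y, z; };

inline vec3  operator-(vec3 a, vec3 b) { return {a.x-b.x, a.y-b.y, a.z-b.z}; }
inline float dot(vec3 a, vec3 b)       { return a.x*b.x + a.y*b.y + a.z*b.z; }

// Analytic ray-sphere test: returns true and the hit distance `t` if
// the ray org+t*dir hits the sphere within [tMin,tMax]. With hardware
// ray tracing this is what a custom intersection program computes --
// no tessellation anywhere.
bool intersectSphere(vec3 org, vec3 dir,
                     vec3 center, float radius,
                     float tMin, float tMax, float &t)
{
  vec3 oc = org - center;
  float a = dot(dir, dir);
  float b = 2.f * dot(oc, dir);
  float c = dot(oc, oc) - radius*radius;
  float disc = b*b - 4.f*a*c;
  if (disc < 0.f) return false;          // ray misses the sphere
  float sq = std::sqrt(disc);
  // try the nearer root first, fall back to the farther one
  t = (-b - sq) / (2.f*a);
  if (t < tMin) t = (-b + sq) / (2.f*a);
  return (t >= tMin && t <= tMax);
}
```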
Third – and something that’s significantly less easy to fix with fragment shaders – with ray tracing it’s relatively easy to add secondary shading effects like shadows, AO, or indirect illumination … and while this may also make images look “better”, the core motivation is that when you draw lots and lots of glyphs without such effects you often end up with just a garbled mess on the screen, where it is hard to see where each glyph actually is relative to the others (fancy jargon: “visual clutter”). In the last few years we’ve seen again and again how much shadows and AO can help with that in, e.g., particle visualization … and whether particles, arrows, superquadrics, or other glyphs – it’s exactly the same problem.
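(Again, just to illustrate: here’s a minimal CPU sketch of the shadow-ray idea, testing against a brute-force list of sphere glyphs. A real renderer would of course trace a single any-hit ray into the BVH instead of looping, and AO would shoot a handful of hemisphere rays through the same machinery; none of this is the paper’s code, it’s only meant to show how little extra logic a hard shadow actually takes.)

```cpp
#include <cmath>
#include <vector>

struct vec3   { float x, y, z; };
struct Sphere { vec3 center; float radius; };

inline vec3  operator-(vec3 a, vec3 b) { return {a.x-b.x, a.y-b.y, a.z-b.z}; }
inline float dot(vec3 a, vec3 b)       { return a.x*b.x + a.y*b.y + a.z*b.z; }

// Does any sphere block the segment org+t*dir, t in (eps,maxT)?
// On the GPU this would be one "any-hit" shadow ray into the BVH.
bool occluded(vec3 org, vec3 dir, float maxT,
              const std::vector<Sphere> &spheres)
{
  const float eps = 1e-4f; // offset to avoid self-intersection
  for (const Sphere &s : spheres) {
    vec3 oc = org - s.center;
    float a = dot(dir, dir);
    float b = 2.f * dot(oc, dir);
    float c = dot(oc, oc) - s.radius*s.radius;
    float disc = b*b - 4.f*a*c;
    if (disc < 0.f) continue;
    float t = (-b - std::sqrt(disc)) / (2.f*a);
    if (t > eps && t < maxT) return true;
  }
  return false;
}

// Shade a point: full diffuse if the light is visible, dimmed if not.
// That one extra visibility query per pixel is what visually separates
// the glyphs from each other.
float shade(vec3 p, vec3 n, vec3 lightDir, float lightDist,
            const std::vector<Sphere> &spheres)
{
  float diffuse = std::fmax(0.f, dot(n, lightDir));
  if (occluded(p, lightDir, lightDist, spheres))
    diffuse *= 0.1f; // in shadow: keep a bit of ambient
  return diffuse;
}
```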
In theory, all three of those advantages were kind of obvious; the big question was just how well this would work in practice, how much effort it would be, whether there were any unforeseen pitfalls, and whether we could actually get this to render fast enough. And as shown in this (short) paper, it does actually work out pretty well; in fact, we even stumbled over some additional ideas we hadn’t initially expected – for example, you can actually use motion blur to convey motion, and I’m sure there are other things we haven’t looked at yet, too (e.g., would motion blur work for uncertainty visualization? For conveying error ranges? Would defocus or warping of the primary rays be useful, too?).
Anyway, it’s been a fun paper to play with; Stefan, Nate, and Martin have done a great job on it … Enjoy!
PS: This was also one of the first (public) papers made with my new “OWL” library, which made building that framework pretty easy … (well, that was the idea of OWL! 🙂 ) … but since I realize that the long-planned blog article about OWL still hasn’t actually been written yet, I’ll say no more for now …