Ray Tracing the “Gems” Cover, Part II

As Matt Pharr just let me know: the original Graphics Gems II cover image was in fact a 3D model, and was even ray traced! (that really made my day 🙂 ).

That notwithstanding, the challenge still stands: anybody out there willing to create (and donate) a 3D model of that scene that looks somewhat more “ray trace’y”, for us all to play with?

PS: And I promise, we’ll work very hard to ray trace that faster than what was possible back then … I’ll leave it to you to find out how fast that was! (Tip: The book has an “About the Cover” page 🙂 )

Recreating the “Graphics Gems 2” Cover for Ray Tracing?

Usually I use this blog to share updates, such as newly published papers, etc. … but this time, I’m actually – kind-of – calling for some “community help”: In particular, I’m curious whether there are any graphics people out there who know how to use a modelling program (I don’t; not if my life depended on it :-/), and who would be able to create a renderable model of the “scene” depicted on the cover of the “Graphics Gems 2” book. More specifically, a model that looks roughly like this:

[Image: the “Graphics Gems 2” cover]

Now you may wonder “where does that suddenly come from?” … so let me give a tiny bit of background: We – Adam Marrs, Pete Shirley, and I – recently brainstormed a bit about a “potentially” upcoming “Ray Tracing Gems II” book (which hasn’t been officially announced yet, but ignore that for now), and while doing so we all realized how much we are, in fact, channeling the original “Graphics Gems” series (which was awesome back in its day, by the way!).

At one point I was googling for the old books (to check whether they used “II/III/IV” or “2/3/4” in the title, in case you were wondering), and while doing so I stumbled over this really nice cover … which would probably look amazing if it were a properly ray traced 3D model rather than an artist’s sketch. And what could possibly be a better cover image – or test model – for RTG2 than the original cover from GG2!?

So – if there is anybody around who knows their modelling tools and is looking for a fun challenge: here is one! If anybody does model this and has anything to share – ideally a full model with a public-access license, or even just images – please send it over. Of course I can’t promise that we’ll actually use such material in said book (which so far is purely hypothetical, anyway) – but I’d find it amazing if we could find a way of doing so.

PS: Of course, a real-time demo of that scene would be totally awesome, too 🙂

New Preprint: “BrickTree” Paper@LDAV

Aaaand another paper that just made it in: our “BrickTree” paper (or – using its final, official title – our paper on “Interactive Rendering of Large-Scale Volumes on Multi-core CPUs”) just got accepted at LDAV (i.e., the IEEE Symposium on Large Data Analysis and Visualization).

The core idea of this paper was to develop a data structure (the “BrickTree”) that has intrinsic “hierarchical representation” capabilities similar to an octree, but with much lower memory overhead … (because if your input data is already terabytes of voxel data, then you really don’t want to spend a 2x or 3x overhead on encoding tree topology :-/). The resulting data structure is more or less a generalization of an octree with an NxNxN branching factor, but with some pretty nifty encoding that keeps memory overhead really low, while at the same time having some really nice cache/memory-related properties and (relatively) cheap traversal to find individual cell values (several variants of this core data structure have been used before; the key here is the actual encoding).
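Just to make the flavor of that a bit more concrete, here is a tiny C++ toy sketch of such an NxNxN-branching structure with a “use the coarser value if the finer brick isn’t loaded (yet)” lookup. To be clear: this is my own simplified illustration – all names are made up, and it is not the paper’s actual compact encoding (which is exactly where the interesting bits are):

```cpp
// Toy sketch of an N^3-branching "brick tree" (NOT the paper's encoding):
// each brick stores N*N*N cell values plus, per cell, an optional index
// of a finer child brick.
#include <cstdint>
#include <vector>

constexpr int      N        = 4;    // branching factor per dimension
constexpr uint32_t NO_CHILD = ~0u;  // marker: "no finer data for this cell"

struct Brick {
  float    value[N][N][N];          // (averaged) cell values at this level
  uint32_t child[N][N][N];          // index of the finer brick, or NO_CHILD
};

struct BrickTree {
  std::vector<Brick> bricks;        // bricks[0] is the root; finer bricks
                                    // get appended as they are streamed in

  // Look up the finest *currently loaded* value at a point in [0,1)^3 --
  // coarser bricks double as an implicit level of detail while the finer
  // bricks are still loading.
  float lookup(float x, float y, float z) const {
    uint32_t cur = 0;
    while (true) {
      const Brick   &b  = bricks[cur];
      const int      ix = int(x * N), iy = int(y * N), iz = int(z * N);
      const uint32_t c  = b.child[ix][iy][iz];
      if (c == NO_CHILD || c >= bricks.size())
        return b.value[ix][iy][iz]; // leaf, or child not loaded yet
      // descend: re-map the point into the child's local [0,1)^3 coords
      x = x * N - ix; y = y * N - iy; z = z * N - iz;
      cur = c;
    }
  }
};
```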

Such a hierarchical encoding will, of course, allow for some sort of progressive loading / implicit level-of-detail rendering, where you can get a first impression of the data set long before the full thing is loaded – because even if your renderer can handle data of that size, loading a terabyte data set can literally take hours to first pixel! (And just to throw this in: this problem of bridging load times is, IMHO, one of the most under-appreciated problems in interactive data vis today: yes, we’ve made giant progress in rendering large data sets once the data has been properly preprocessed and loaded … but what good is an eventual 10-frames-per-second frame rate if it takes you an hour to load the model?!)

Anyway – read the paper … there are tons of things that could be added to this framework; I’d be curious to see some of those explored! (if you have questions, feel free to drop me an email!). Maybe most of all, I’d be curious how that same idea would work on, say, an RTX 8000 – yes, the current paper mostly talks about bridging load times (assuming you’ll eventually load the full thing, anyway), but what’s to stop one from simply stopping the loading once a certain memory budget has been filled!? That should be an obvious approach to rendering such large data, but I’m sure there’ll be a devil or two hiding in the details … so I’d be curious if somebody were to look at that (if anybody wants to: drop me an email!).

Anyway – enough for today; feedback of course appreciated!

PS: The link to the paper’s PDF is embedded above, but just in case: the PDF is behind this link, or, as usual, on my publications page.

IEEE Short Paper Preprint: RTX Accelerated Space Skipping and Adaptive Sampling of Unstructured Data

As a follow-up to our “RTX Beyond Ray Tracing” paper, we (Nate, Will, Valerio, and I) also worked on using that paper’s sampling technique within a larger rendering context: we took this technique and added some nifty machinery for doing efficient space skipping and, in particular, adaptive sampling on top of it. And happy to say: the paper finally got accepted in the IEEE Vis 2019 short papers track, so Nate will present it in Vancouver later this year.

In the meantime I have already uploaded an “author’s preprint” of that paper to my publications page (http://www.sci.utah.edu/~wald/Publications/index.html), so if you want to read about it before his talk, please feel free to download it.

The core idea of that paper is to realize that now that we all have access to fast “traceRay()” operations (in our case on the GPU, but the same should work on a CPU, too), you can use that operation for some sort of hierarchical acceleration of volume rendering, too. In our case, we build a hierarchy over regions of the volume, compute min/max opacity for those regions (for a given transfer function), and then use this traceRay operation to step through the leaves of this data structure. For each leaf, you can then either skip it entirely or adjust the sample rate, based on how “important” it is (at least for the space-skipping side of this, a similar idea was recently proposed by David Ganter at HPG, too, though in a somewhat different context).
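To make that a bit more tangible, here is a heavily simplified, CPU-style sketch of what that per-leaf decision boils down to. The hardware BVH traversal and the actual tet-mesh sampling / transfer-function lookup are replaced with made-up stubs (nextLeafAlongRay, sampleAndClassify), so this is purely an illustration of the idea, not the paper’s code:

```cpp
// Per-leaf "skip or adapt" loop for ray marching a volume, illustration only.
#include <algorithm>

struct Leaf  { float tEnter, tExit, minOpacity, maxOpacity; };
struct Color { float r, g, b, a; };

// Stand-ins for the real machinery (hardware BVH traversal over leaf
// regions; sampling + transfer function) -- trivially stubbed out here:
bool  nextLeafAlongRay(float tStart, Leaf &leaf) { (void)tStart; (void)leaf; return false; }
Color sampleAndClassify(float t) { (void)t; return {1.f, 1.f, 1.f, 0.05f}; }

Color integrateRay(float tMax, float baseStep)
{
  Color accum = {0.f, 0.f, 0.f, 0.f};
  Leaf  leaf;
  float t = 0.f;
  while (accum.a < 0.99f && nextLeafAlongRay(t, leaf) && leaf.tEnter < tMax) {
    if (leaf.maxOpacity <= 0.f) { t = leaf.tExit; continue; } // space skip
    // adaptive sampling: (nearly) transparent leaves get a coarser step
    const float step = baseStep / std::max(leaf.maxOpacity, 0.1f);
    for (t = leaf.tEnter; t < std::min(leaf.tExit, tMax); t += step) {
      const Color s = sampleAndClassify(t);   // front-to-back compositing
      accum.r += (1.f - accum.a) * s.a * s.r;
      accum.g += (1.f - accum.a) * s.a * s.g;
      accum.b += (1.f - accum.a) * s.a * s.b;
      accum.a += (1.f - accum.a) * s.a;
    }
    t = leaf.tExit;
  }
  return accum;
}
```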

Details in the paper; go read it… enjoy!

PS: Of course the technique is not restricted to tet meshes. That’s what we demonstrated it on, but it should work for any other type of volumetric data…

 

HPG Paper Preprint: Using RTX cores for something other than ray tracing….

[Teaser image from the paper]

Hey – just a quick heads-up: our HPG (short) paper on using RTX cores for something other than ray tracing got accepted; and for everybody interested I just uploaded an “author’s preprint” (i.e., without the final edits) to my usual publications page (at http://www.sci.utah.edu/Publications).

The core idea of this paper was to play with the question “now that we have this ‘free’ hardware for tracing rays, what else can we use it for, in applications that wouldn’t otherwise use these units?” – after all, the hardware is already there, it’s doing some non-trivial tree traversal, it’s massively powerful (billions of such traversals per second!), and if it’s not otherwise being used, then pretty much anything you can offload to it is a win … (and yes, pretty much the first thing we tried worked out well).

For this paper we only looked into one such application (point location, in a tet-mesh volume renderer), just as a “proof of concept” … but yes, there’s a ton more where it’d make sense: I’ve already used it for some AMR rendering, too (same basic concept), but there’s sure to be more. If you play with it and find some interesting uses, let me know – I’m curious to see what others will do with it!
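For anybody who just wants the gist of that point-location trick without digging into the paper: here is a rough, purely CPU-side illustration of how such a reformulation can look. This is my own sketch, not the paper’s actual OptiX code – the brute-force loop below merely stands in for what the RT cores (and their BVH) would do in hardware, and all names are made up:

```cpp
// Point location phrased as a ray query: each shared tet face is stored
// once as a triangle that knows the tet on either side; locating a point
// means tracing a ray from it and asking the closest face which side we
// started on.
#include <cmath>
#include <cstdint>
#include <limits>

struct Vec3 { float x, y, z; };

static Vec3  sub  (Vec3 a, Vec3 b) { return {a.x-b.x, a.y-b.y, a.z-b.z}; }
static Vec3  cross(Vec3 a, Vec3 b) { return {a.y*b.z-a.z*b.y, a.z*b.x-a.x*b.z, a.x*b.y-a.y*b.x}; }
static float dot  (Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

struct Face {
  Vec3     v0, v1, v2;         // one shared tet face, stored as a triangle,
  uint32_t tetFront, tetBack;  // wound so the normal points into tetFront's
                               // half-space; ~0u means "boundary"
};

struct Hit {
  uint32_t faceID      = ~0u;
  float    t           = std::numeric_limits<float>::infinity();
  bool     frontFacing = false;
};

// Brute-force stand-in for the hardware BVH traversal: closest face along
// the ray, via Moeller-Trumbore.
Hit traceClosestFace(const Face *faces, int numFaces, Vec3 org, Vec3 dir)
{
  Hit best;
  for (int i = 0; i < numFaces; i++) {
    const Face &f  = faces[i];
    const Vec3  e1 = sub(f.v1, f.v0), e2 = sub(f.v2, f.v0);
    const Vec3  pv = cross(dir, e2);
    const float det = dot(e1, pv);
    if (std::fabs(det) < 1e-12f) continue;       // ray parallel to face
    const float inv = 1.f / det;
    const Vec3  tv  = sub(org, f.v0);
    const float u   = dot(tv, pv) * inv;
    if (u < 0.f || u > 1.f) continue;
    const Vec3  qv  = cross(tv, e1);
    const float v   = dot(dir, qv) * inv;
    if (v < 0.f || u + v > 1.f) continue;
    const float t   = dot(e2, qv) * inv;
    if (t <= 0.f || t >= best.t) continue;
    best.faceID = uint32_t(i);
    best.t      = t;
    best.frontFacing = (det > 0.f);              // ray started on tetFront's side
  }
  return best;
}

// Point location: shoot a ray in an arbitrary fixed direction from the
// query point; the closest face hit tells us which of its two tets the
// point lies in (via which side of the face we hit it from).
uint32_t locatePoint(const Face *faces, int numFaces, Vec3 p)
{
  const Vec3 dir = {0.f, 0.f, 1.f};
  const Hit  hit = traceClosestFace(faces, numFaces, p, dir);
  if (hit.faceID == ~0u) return ~0u;             // point is outside the mesh
  const Face &f = faces[hit.faceID];
  return hit.frontFacing ? f.tetFront : f.tetBack;
}
```

The nice part is that, once phrased this way, the inner loop is exactly the kind of “closest hit along a ray” query the RT cores are built for – the application never has to know it is secretly doing point location.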

This has been a lot of fun to work on – hope you’ll enjoy reading it, too …

PS:

Preprint here: http://www.sci.utah.edu/Publications

Full citation: RTX Beyond Ray Tracing – Exploring the Use of Hardware Ray Tracing Cores for Tet-Mesh Point Location, Ingo Wald, Will Usher, Nathan Morrical, Laura L Lediaev, and Valerio Pascucci, Proceedings of High Performance Graphics (HPG) 2019. (to appear).

“Accidental Art”: PBRT v3 ‘landscape’ model in my RTOW-OptiX Sample…

Just produced an (accidental) pic that in some undefinable way struck me – dunno why, but IMHO it “got something” – and wanted to quickly share it (see below).

For the curious: the way that pic was produced was that I took the latest version of my pbrt parser (https://github.com/ingowald/pbrt-parser), hooked it up to my RTOW-in-OptiX sample (https://github.com/ingowald/RTOW-OptiX), and ran that on a few of the PBRT v3 sample models (https://pbrt.org/scenes-v3.html). And since that RTOW-in-OptiX sample can’t yet do any of the PBRT materials, I just assigned Pete’s “Lambertian” material (with a per-material random albedo value), which for the PBRT v3 “landscape” scene (view0.pbrt) produced the following pic. I personally find it kind-of cute, so … enjoy!
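In case anybody wants to reproduce that look: the material “hack” boils down to something like the following sketch (made-up names, not the actual pbrt-parser / RTOW-OptiX API) – each distinct parsed material gets one randomly chosen but fixed Lambertian albedo:

```cpp
// Replace every parsed material with a Lambertian of random, per-material
// fixed albedo -- illustrative stand-in code only.
#include <cstdlib>
#include <map>
#include <string>

struct Lambertian { float albedo[3]; };

static float rnd01() { return float(std::rand()) / float(RAND_MAX); }

// Map each distinct parsed material (identified by name here) to one
// randomly colored Lambertian; geometry sharing a material thus also
// shares an albedo, which is what gives the image its look.
Lambertian &lambertianFor(const std::string &parsedMaterialName)
{
  static std::map<std::string, Lambertian> cache;
  auto it = cache.find(parsedMaterialName);
  if (it == cache.end())
    it = cache.insert({parsedMaterialName,
                       Lambertian{{rnd01(), rnd01(), rnd01()}}}).first;
  return it->second;
}
```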

PS: The buddha with random Metal material looks cool, too 🙂

[Image: landscape.accidental-art.jpg]