New Preprint: “BrickTree” Paper@LDAV

Aaaand another paper that just made it in: our “BrickTree” paper (or, using its final, official title, our paper on “Interactive Rendering of Large-Scale Volumes on Multi-core CPUs”) just got accepted at LDAV (i.e., the IEEE Symposium on Large-Data Analysis and Visualization).

The core idea of this paper was to develop a data structure (the “BrickTree”) with intrinsic “hierarchical representation” capabilities similar to an octree, but at much lower memory overhead … (because if your input data is already terabytes of voxel data, you really don’t want to spend a 2x or 3x overhead on encoding tree topology :-/). The resulting data structure is more or less a generalization of an octree with an NxNxN branching factor, but with a pretty nifty encoding that keeps memory overhead really low, while at the same time having some really nice cache/memory-related properties and (relatively) cheap traversal to find individual cell values (several variants of this core data structure have been used before; the key here is the actual encoding).
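Just to make the “generalized octree with NxNxN branching” idea concrete, here is a minimal C++ sketch – purely illustrative, with made-up names and N=4, and emphatically not the paper’s actual low-overhead encoding (that encoding is the whole point of the paper):

```cpp
#include <cstdint>
#include <vector>

constexpr int      N       = 4;    // branching factor per axis (assumption; N=2 would be a plain octree)
constexpr uint32_t INVALID = ~0u;  // marks "no finer brick below this cell"

struct Brick {
  float    value[N][N][N];         // per-cell (average) values at this brick's level
  uint32_t child[N][N][N];         // index of the finer brick below each cell, or INVALID
};

// Find the value of a single cell by descending from the root brick.
// Cell coordinates are given on the finest level, i.e. in [0, N^levels).
float lookup(const std::vector<Brick> &bricks, int x, int y, int z, int levels)
{
  uint32_t cur = 0;                              // assumption: root brick is stored at index 0
  for (int l = levels - 1; l >= 0; --l) {
    const int cx = (x >> (2 * l)) & (N - 1);     // N=4 -> 2 coordinate bits per level and axis
    const int cy = (y >> (2 * l)) & (N - 1);
    const int cz = (z >> (2 * l)) & (N - 1);
    const Brick &b = bricks[cur];
    if (l == 0 || b.child[cx][cy][cz] == INVALID)
      return b.value[cx][cy][cz];                // leaf cell (or uniform region): done
    cur = b.child[cx][cy][cz];                   // otherwise descend one level
  }
  return 0.f;                                    // not reached for levels >= 1
}
```

Note that this naive layout spends a full child-index array on every brick – exactly the kind of topology overhead the paper’s encoding avoids; the sketch only shows the branching factor and the per-cell traversal.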

Such a hierarchical encoding will, of course, allow for some sort of progressive loading / implicit level-of-detail rendering, where you can get a first impression of the data set long before the full thing is loaded – because even if your renderer can handle data of that size, loading a terabyte data set can literally take hours to first pixel! (And just to throw this in: this problem of bridging load times is, IMHO, one of the most under-appreciated problems in interactive data vis today: yes, we’ve made giant progress in rendering large data sets once the data has been properly preprocessed and loaded … but what good is an eventual 10-frames-per-second frame rate if it takes you an hour to load the model?!)
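Reusing the (made-up) `Brick`, `N`, and `INVALID` definitions from the sketch above, the “first impression before everything is loaded” idea could look roughly like this: descend only as far as bricks that are already resident, and fall back to the coarser value otherwise. The `resident` flags are my own assumption for illustration, not the paper’s actual streaming interface:

```cpp
// Hedged sketch only: return the finest value that is already in memory,
// so the renderer can show *something* while finer bricks are still loading.
float progressiveLookup(const std::vector<Brick> &bricks,
                        const std::vector<bool>  &resident,   // assumed per-brick "already loaded" flags
                        int x, int y, int z, int levels)
{
  uint32_t cur = 0;                                // root brick (assumption: always resident)
  for (int l = levels - 1; l >= 0; --l) {
    const int cx = (x >> (2 * l)) & (N - 1);
    const int cy = (y >> (2 * l)) & (N - 1);
    const int cz = (z >> (2 * l)) & (N - 1);
    const Brick &b = bricks[cur];
    const uint32_t child = b.child[cx][cy][cz];
    if (l == 0 || child == INVALID || !resident[child])
      return b.value[cx][cy][cz];                  // best approximation available right now
    cur = child;
  }
  return 0.f;                                      // not reached for levels >= 1
}
```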

Anyway – read the paper … there are tons of things that could be added to this framework; I’d be curious to see some of those explored! (If you have questions, feel free to drop me an email!) Maybe most of all, I’d be curious how that same idea would work on, say, an RTX 8000 – yes, the current paper mostly talks about bridging load times (assuming you’ll eventually load the full thing anyway), but what is to stop one from simply stopping the load once a certain memory budget has been filled?! This would seem an obvious approach to rendering such large data, but I’m sure there’ll be a devil or two hiding in the details … so I’d be curious if somebody were to look at that (if anybody wants to: drop me an email!).

Anyway – enough for today; feedback of course appreciated!

PS: The link to the paper’s PDF is embedded above, but just in case: the PDF is behind this link, or, as usual, on my publications page.

2 thoughts on “New Preprint: “BrickTree” Paper@LDAV”

    1. Ouch – completely missed this comment – sorry!
      How it compares: VDB is far more general in how it organizes its data, but yes, you could in fact think of using VDB to implement the BrickTree.
      The main design goal for the BrickTree was to create a memory layout that is as straightforward and non-nested as possible, so as to easily map it to different hardware platforms, and ideally be able to modify it “atomically” very easily (for demand loading). Hence there are only three different “types” of data in the BrickTree – value bricks, index bricks, and indexof bricks – and exactly one array for each. I don’t know VDB well enough to understand how it’d map at that low level, but at a high level they’re both sparse N-ary trees, so certainly similar in concept.
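      In code, that “one flat, non-nested array per brick type” layout might look roughly like the following – a hedged sketch only; the names, and in particular what I put into the indexof bricks, are my guesses rather than the paper’s actual definitions:

      ```cpp
      #include <cstdint>
      #include <vector>

      constexpr int N = 4;   // branching factor per axis (assumption)

      struct ValueBrick   { float    value[N][N][N]; };  // N^3 cell values
      struct IndexBrick   { uint32_t child[N][N][N]; };  // N^3 references to finer bricks
      struct IndexOfBrick { uint32_t firstChild;     };  // guess: where a brick's children start

      // Three brick "types", exactly one flat array per type - easy to map to
      // different hardware, and easy to grow or patch during demand loading.
      struct BrickTree {
        std::vector<ValueBrick>   valueBricks;
        std::vector<IndexBrick>   indexBricks;
        std::vector<IndexOfBrick> indexOfBricks;
      };
      ```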

