Joining NVidia…

As I’m sure some of you will have heard by now, today is my last day at Intel, and starting on Monday, I’ll be working for NVidia.

Looking back, I’ve now been working for Intel for almost exactly 11 years, and if you were to include all the time I worked “closely with Intel technologies” during my PhD and post-doc years, it’s actually close to two decades: Even before starting my PhD (while working on Alex Keller’s ray tracer in Kaiserslautern) I was already drilling holes into Celeron chips (and soldering on cables) to make them dual-socket capable (they were supposed to be single-socket only 🙂 ); and at the start of my PhD we (including Carsten, in Saarbruecken) were writing the first interactive SSE ray tracer prototypes, at a time when the Linux kernel didn’t even save the SSE registers yet (yes, that makes for fun-to-replicate bugs on a dual-socket machine!). Later on, once finally working for Intel, I was lucky enough to work on virtually every cool technology that came out, from Larrabee, to Knights-anything, to pretty much every Xeon architecture built in the last two decades, to lots of other cool stuff. It’s been fun, and I’ve worked with truly talented people (some of whom are, in their field, hands-down the best in the world, and some of whom I’ve known for longer than I’ve had my kids!). And yes, we’ve done some pretty cool projects, too: from the first real-time ray tracers on Larrabee, to compilers (my IVL, and Matt’s ISPC), to several prototype ray tracers that never made it out to the public, all the way to projects like Embree and OSPRay, both of which turned into massively successful projects. In other words, I’ve had the chance to work on pretty much anything I wanted – which was typically anything that either involves, requires, or is required for, the tracing of rays.

All that said, as Matt recently wrote on his blog: “the world it is a-changing” (see this link for his blog article); and once again channeling Matt (man – that seems to be becoming a pattern here!?) I felt like I needed “to be in the thick of all of that and to help contribute to it actually happening”… so when the opportunity to do so came up, I simply couldn’t say no. So with all that: Today is my last day at Intel, and Monday will be my first at NVidia – I’m looking forward to it; that’ll be interesting indeed!

One final note…

While trying to figure out how best to break this news, I had a second, closer look at the article Matt had written when he joined NVidia a few weeks back. In doing so, I realized – actually for the first time – just how deeply he had thought about this whole “ray tracing for real time” topic. Of course I had “read” that article before, but I had never really appreciated how much thought went into it.

Anyway – just to follow up on that particular topic from my point of view: For me personally, it’s never been a question of the “if”, but only of the “when” – and of the “who” that would be first to make it happen. To explain: Even when I was still in the middle of my master’s degree (say, ’96 or so), it was already clear that all high-quality rendering was done via ray tracing – sure, there were interesting discussions on whether it’d be path tracing, backwards/reverse/forward path tracing, photon mapping, bidirectional path tracing, or Metropolis (all of which I had played with at some point back then 🙂 )… but in the end, they all used ray tracing. At the same time, anything that was primarily time-constrained was doing something else (at that time: REYES’s “split-and-dice”, the equivalent of rasterization); but even then it seemed clear to me that with “computers” getting “faster” every year, it would only be a matter of time until that constraint went away – and that eventually, “the simpler/more elegant algorithm” would get used (because at the end of the day, that’s what it always comes down to: once you can afford it, you always pick the more elegant, and more general, solution).

And sure enough, over the last decade and a half we’ve already seen this happen in the movie industry: When I started my PhD, the general opinion was still that this industry would “never” switch to ray tracing, because it needed too much memory (REYES could do streaming), because it was too slow (REYES was faster), because it needed nasty acceleration structures, and because all this photo-realism wasn’t all that important (and at least apparently, sometimes detrimental!) to the artistic process anyway… yet by today virtually every production renderer has switched to ray tracing, because within the budget allocated for a frame it is now possible to do so – and once it was, it was simply easier to express that renderer in ray-based terms. As such, at least in my eyes, it has always been merely a matter of time until real-time graphics goes through what the movie industry already has: at some point ray tracing will be fast enough to do in real time, and once it is – if history is any guide – people will use it.

Anyway – no matter how you reach that conclusion, whether you think deeply about it or simply extrapolate into the future – it does look like ray tracing is here to stay. Let’s see where it takes us. It’ll be an interesting few years ahead.
