Since it was getting closer to Christmas I decided to treat myself to a new toy – so since Friday evening, I’m the proud owner of a shiny, new Jetson Xavier AGX (Newegg had a special where you could get one for 60% off…). And since I found some of my experiences at least somewhat un-intuitive, I thought it might help other newbies to just write it up.
Why a Xavier?
Okay, first: Why a Xavier? Having for years worked only on x86 CPUs I had actually wanted to play with an ARM CPU for quite a while (also see this past post on Embree on ARM), but never gotten my hands on one to do so. And in particular after NVidia showed some ARM reference designs at SC this year I really wanted to get my hands on one to experiment with how all my recent projects would do on such an architecture.
Now last year my wife had already gotten me a Raspberry Pi for Christmas – but though this thing is kind-of cute, I found myself struggling with the fact that it’s just too wimpy on its own: yes, you can attach a keyboard and a monitor, and it kind-of runs Linux, but everything feels just a tiny bit too stripped down (Ubuntu Core doesn’t even have ‘apt’!?), so actually developing on it turned out to be problematic – and cross-compiling is just not my cup of tea (yes, I understand how it works; yes, I’ve done it in the past; and yes, it’s a … well, let’s keep this kid-friendly). Eventually I want to use “my ARM system” also as a gitlab CI runner, and with the Raspberry that just sounds like a bridge too far.
In contrast, if you look at a Xavier it does have an 8-core, 64-bit CPU in it, 32GB of memory, a pretty powerful Volta GPU, NVidia driver, CUDA, cuDNN, etc – so all in all, this thing should be at least as capable as what I’m doing most of my development on (my laptop), and since it has the full NV software/driver stack and a pretty decent GPU I should in theory be able to not only do automated compile CIs, but even automated testing. So. Xavier it is.
First Steps – or “Where did you say the deep end is?”
OK – now that I got one, the first thing you do is plug in a monitor and a keyboard, press the power button, and up comes a Linux. Yay. That was significantly easier than the Pi’s “create an account somewhere, then download an image from there, then burn that”. So far, so good. There’s also a driver install script (so of course I went ahead and installed that), then there’s your usual ‘apt’ (so I went ahead and did apt update/apt upgrade), and yes, there’s a full Linux (so I created a new user account, started installing packages, etc.). Just like a regular laptop. Great.
Except – the first roadblock: I have the nvidia driver installed, I have gdm running, have cmake, gcc, etc, but where’s my CUDA? And wait – that driver is from early 2018; I somehow doubt that’ll have OptiX 7 on it?
So, start searching for ARM NVidia drivers to install – and there is one, but it’s only 32-bit? Wait, what? Turns out that even though the thing looks like a regular laptop/PC, that’s apparently not how you’re supposed to use it, at least not yet, and at least not from the developer-tools point of view: The right way to get all that to work – as I eventually had to realize after I had already done the above – is to use the NVidia “JetPack” tool set (https://developer.nvidia.com/embedded/jetpack). Good news: this tool is actually quite powerful – bad news: it flashes your Xavier, so all the time I spent installing the driver, creating users, updating the system, etc … hm; I hope you read this first.
Doing it the right way: JetPack
So, JetPack. The way JetPack works is that you download it for a host system, install it there, and use that host system to flash your Xavier. Initially I was a bit skeptical when I read that, because this entire “host system” business smacked a lot of exactly the “cross-compile workflow” I wanted to avoid in the first place. Luckily, it turns out you really only need it for the initial flashing and for installing the SDK – once that’s done, you no longer need the host system (well, “apparently” – it’s not that I’ve written all that much software on it yet).
OK, so just to summarize: To do it the right way, go to https://developer.nvidia.com/nvidia-sdk-manager, and download the .deb file (sdkmanager_0.9.14-4964_amd64.deb in my case), then install it with
sudo apt install ./sdkmanager_0.9.14-4964_amd64.deb
Then start the newly installed ‘sdkmanager’ tool, and you should see something like this:
Now this being me, I had of course already clicked through the first two steps before I took that picture, but all the values in those two steps were correct by default, so there’s not much to show for those first two steps, anyway. SDKManager now downloads a ton of stuff, until in step 4 you can then start installing.
Install Issues
During install, you first have to connect your Xavier – with the accompanying USB cable – to your host PC, then get it to do a factory install. I was a bit surprised it couldn’t just do that through ssh (after all, my system was already up and on the network), but factory reset it is. To do that, the tool tells you to “press left button, then press center button, then release both” – which isn’t all that complicated, except … apparently this only works if your Xavier is off (at least mine simply ignored it while it was still on). So, first unplug, plug back in, and then do this button magic. That tiny aside, flashing then went as expected.
Some time later, the Xavier is apparently flashed, and SDKManager wants to install the SDK (driver, CUDA, etcpp) – for which it apparently also wants to use the USB connection (the weird IP it shows in that dialog is apparently where the IP-over-USB connection is supposed to be!?). Two tiny problems: a) For some reason my host system complained about an “error establishing network connection”, so there was no USB connection. And b) that dialog asks for a username and password to log into the Xavier with – but since you haven’t even created one yet, what do you use?
Turns out, after the first flash and reboot your Xavier is actually waiting for you to click on “accept license”, create a user account, etc (very helpful to know if your screen has already gone to screensaver, and you unplugged keyboard/mouse to plug in the host USB cable :-/). So before you can even do the SDK install you first have to plug in a keyboard and mouse, accept the license, and create a user account … then you can go back to SDKManager on the host system to install the SDK through that user account.
That is, of course, if your host PC could establish an IP-over-USB connection, which as stated before mine didn’t (and unplugging the host USB cable to connect keyboard and mouse probably didn’t help, either). Solution: ignore the error messages, plug an ethernet cable into your Jetson, open a terminal, and do ‘ifconfig’ (not ‘ipconfig’ – that’s the Windows spelling) to figure out the IP address on the ethernet network. Then back on the host PC, change the weird default USB IP to the system’s real ethernet IP, and voilà, it starts installing.
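For reference, this is roughly what that step looks like on the Jetson side (interface names vary by board, so treat eth0 as an assumption):

```shell
# On the Jetson (keyboard and ethernet attached): list all IPv4 addresses.
# 'ip' is the modern replacement for ifconfig on Ubuntu; either works here.
ip -o -4 addr show
# Then, on the host PC, enter the address shown for eth0 into SDK Manager.
# It's worth ssh-ing into the Jetson with your newly created user first,
# just to verify that the login actually works.
```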
And voilà, we have an ARM Development Box…
These little stumbling-stones aside, once the driver and SDK are installed, everything seems to be working just fine: reboot, apt update and apt upgrade, apt install cmake, emacs, etc – suddenly everything works exactly as you’d expect from any other Ubuntu system: the right emacs with the right syntax highlighting, a cmake that automatically picks up gcc and CUDA, etcpp – everything out of the box.
Now git clone one of my projects from gitlab, run cmake on it, and it even automatically picks up the right gcc, CUDA (only 10.0, but that’ll do), etc – so from here on it looks like we’re good. I haven’t run any ray tracing on it yet, but when I got my first “hello world” to run last night, it really did look like a regular Linux box. Yay!!!
Now where do we go from here? Of course, it only gets real fun once we get OptiX to work; and of course I’d really like to put a big, shiny Titan RTX or RTX8000 into that PCIe slot … but that’s for another time.
PS: Core Count and Frequency
PS – or as Columbo would say “oh, only one last thing” – one last thing I stumbled over when compiling is that the system seemed unnaturally slow; and when looking into /proc/cpuinfo it showed only four cores (it should be 8!), and showed “revision 0” for the CPU cores, even though the specs say “revision 2”. Turns out that by default the system starts in a low-power mode in which only four cores are active, at only half the frequency (the ‘rev 0’ is OK – it’s just that the CPU reports something different from what you’d expect; it is a revision 2 core).
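If you prefer a terminal over the GUI widget, you can check and switch this from the command line as well – nvpmodel and jetson_clocks are Jetson-specific tools (so the last three lines are shown as comments here; mode 0 being “MAXN” on the Xavier AGX is my reading of NVidia’s docs):

```shell
# count the cores Linux currently has online (should say 8 once in MAXN mode)
nproc
grep -c '^processor' /proc/cpuinfo
# Jetson-only equivalents of the GUI power-mode switcher:
#   sudo nvpmodel -q      # query the current power mode
#   sudo nvpmodel -m 0    # mode 0 should be MAXN: all cores, full clocks
#   sudo jetson_clocks    # pin the clocks to their maximum
```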
To change that, look at the top right of your screen, where you can change the power mode. Click on that, and change it to look like this:

Once done, you should have 8 cores at 2.2 GHz, which makes quite a difference when compiling with “make -j” :-).
So, for now, that’s it – I’ll share some updates if and when (?) I get some first ray tracing working on those thingies (wouldn’t it be great to have a cluster of a few of those? 🙂 ). But at least for now, I’m pretty happy with it. And as always: feedback, comments, and suggestions are welcome!