Silly(?), Useful Tools: Generating Data for Scaling Experiments

Over the many different rendering projects I’ve done over the years, I’ve frequently stumbled – again and again – over the same problem: how to get “useful” data for scaling-style stress-testing of one’s software. Sure, you can always take N random spheres, or, if you need triangle meshes, take the bunny and create N copies of it at random positions … but then you still quickly run into lots of other issues, like: “now that I added all these copies they all overlap?!”, or “ugh, how can I create the scene such that multiple scales still make sense with the same camera for rendering?”, or “what if I want more instances rather than more triangles?”, or “what if I also want ‘shading’ data like textures?”, etc.

All of those questions are “solvable” (this is not rocket science), but I’m always amazed how much time I’ve spent over the years – again and again – writing (and debugging, and re-debugging) those mini-tests. And since I just did that again, I decided this time should be the last: I added all the features I wanted, and pushed the result into my miniScene repo on GitHub.

The way this tool works is actually quite simple: it generates a whole lot of spheres (or actually, “funnies” – see my tweet on how I stumbled over those), but lets the user control a whole lot of different parameters that influence things like how much instantiation vs “real” geometry there is, what tessellation level the funnies use (i.e., triangle density per sphere), what texture resolution each funny gets, etc. In particular, one can control:

  • how many “non-instantiated” spheres to generate
  • how many different kinds of spheres to generate for instantiation
  • how many instances of those spheres to generate
  • what tessellation density to use per sphere
  • what texture resolution to use per sphere (each sphere gets its own checkerboard-pattern texture; see the sketch right below)
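
(As a sketch of that last bullet: generating such a checkerboard is trivial. The RGBA-float texel layout below is my assumption, not something read off the tool’s code – though at 16 bytes per texel it happens to match the per-texture byte counts in the stats further down, e.g., 32×32×16 bytes = 16 KB per texture.)

#include <vector>

// RGBA float texel -- 16 bytes per texel. This layout is an assumption
// for illustration, not necessarily what the tool actually stores.
struct Texel { float r, g, b, a; };

// Generate a res x res checkerboard; cell size and colors are arbitrary.
std::vector<Texel> makeCheckerboard(int res, int cellSize = 4)
{
  std::vector<Texel> texels(res * res);
  for (int y = 0; y < res; y++)
    for (int x = 0; x < res; x++) {
      bool odd = ((x / cellSize) + (y / cellSize)) & 1;
      float v = odd ? 1.f : .2f;
      texels[y * res + x] = { v, v, v, 1.f };
    }
  return texels;
}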

These spheres then all get put into a fixed-size slab that covers the space from (0,0,0) to (1000,100,1000), with sphere radii and instance scaling adjusted such that there is always a roughly equal density within that slab. Note that the slab is intentionally 10x less high than it is wide, so we end up neither with just a 2D plane nor with a cube (in which most interior geometry would be occluded by the geometry at the boundary).
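
The density bookkeeping behind that is simple: if N spheres are to fill a fixed fraction of the slab’s volume, each sphere’s volume must shrink like 1/N, i.e., its radius like N^(-1/3). A back-of-the-envelope sketch (mine, not the tool’s actual code):

#include <cmath>
#include <cstddef>

// Slab extents as described above: (0,0,0)-(1000,100,1000).
const float slabVolume = 1000.f * 100.f * 1000.f;

// Pick a radius such that N spheres fill a fixed fraction of the slab's
// volume, so density stays roughly constant as N grows. The fill
// fraction is an arbitrary knob, not a value from the tool.
float radiusForCount(size_t N, float fillFraction = .1f)
{
  const float pi = 3.14159265f;
  const float perSphereVolume = fillFraction * slabVolume / float(N);
  // V = 4/3 * pi * r^3  =>  r = cbrt(3*V / (4*pi))
  return std::cbrt(3.f * perSphereVolume / (4.f * pi));
}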

In particular, this tool makes it easy to control whether you want to scale in instance count (increase the number of instances) or in triangle count (increase the number of non-instanced spheres and/or the sphere tessellation level); whether to put more triangles into finer surface tessellation or into more distinct meshes; how much of the output size should be textures vs geometry; etc.
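To make those knobs concrete, the expected scene size falls straight out of the parameters. Here’s a hypothetical little helper for that bookkeeping (struct and field names are mine; the distinction mirrors the *unique* vs *actual* counts in the stats below):

#include <cstdint>
#include <cstdio>

// How the knobs translate into scene size: "unique" counts what has to
// be stored, "actual" counts what the renderer effectively has to trace.
struct ScaleConfig {
  uint64_t numBaseSpheres;  // non-instantiated spheres
  uint64_t numSphereKinds;  // distinct sphere meshes used for instancing
  uint64_t numInstances;    // instances created from those kinds
  uint64_t trisPerSphere;   // driven by the tessellation density
};

void printExpectedCounts(const ScaleConfig &c)
{
  const uint64_t uniqueMeshes = c.numBaseSpheres + c.numSphereKinds;
  const uint64_t actualMeshes = c.numBaseSpheres + c.numInstances;
  printf("unique triangles: %llu\n",
         (unsigned long long)(uniqueMeshes * c.trisPerSphere));
  printf("actual triangles: %llu\n",
         (unsigned long long)(actualMeshes * c.trisPerSphere));
}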

Here are a few examples:

./miniGenScaleTest -o scaleTest.mini (i.e., with trivially simple default settings) generates this:

num instances		:   2
num objects		:   2
----
num *unique* meshes	:   101
num *unique* triangles	:   40.40K	(40400)
num *unique* vertices	:   22.22K	(22220)
----
num *actual* meshes	:   101
num *actual* triangles	:   40.40K	(40400)
num *actual* vertices	:   22.22K	(22220)
----
num textures		:   101

Which, with my latest renderer, looks like this:

Now let’s change that to use 10k instances: ./miniGenScaleTest -o scaleTest.mini -ni 10000, and we get this:

num instances		:   10.00K	(10001)
num objects		:   101
----
num *unique* meshes	:   200
num *unique* triangles	:   80.00K	(80000)
num *unique* vertices	:   44.00K	(44000)
----
num *actual* meshes	:   10.10K	(10100)
num *actual* triangles	:   4.04M	(4040000)
num *actual* vertices	:   2.22M	(2222000)
----
num textures		:   200
 - num *ptex* textures	:   0
 - num *image* textures	:   201
total size of textures	:   204.80K	(204800)
 - #bytes in ptex	:   0
 - #byte in texels	:   204.80K	(204800)
num materials		:   200
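
A quick sanity check on those numbers: assuming the defaults are 100 non-instanced spheres and 100 sphere kinds, at 400 triangles and 220 vertices per sphere (which is exactly what the counts above work out to – I haven’t read those defaults off the code), the unique-vs-actual numbers are simple multiplication:

#include <cassert>
#include <cstdint>

int main()
{
  // Assumed defaults -- chosen because they reproduce the printed stats:
  const uint64_t baseSpheres = 100, sphereKinds = 100, instances = 10000;
  const uint64_t trisPerSphere = 400, vertsPerSphere = 220;

  assert((baseSpheres + sphereKinds) * trisPerSphere  ==   80000); // unique tris
  assert((baseSpheres + instances)   * trisPerSphere  == 4040000); // actual tris
  assert((baseSpheres + instances)   * vertsPerSphere == 2222000); // actual verts
  return 0;
}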

Rendered, that scene looks like this:

But since that scene’s complexity is mostly in instances (which for “large model rendering” is often considered “cheating”), let’s instead add a few non-instanced spheres as well (and more instances, too, just for the fun of it): ./miniGenScaleTest -o scaleTest.mini -ni 10000000 -nbs 100000 -tr 32. This creates 10 million instances of spheres (each with some 400 triangles), plus another 100,000 spheres that are not instanced, for a total of this:

num instances		:   10.00M	(10000001)
num objects		:   101
----
num *unique* meshes	:   100.10K	(100100)
num *unique* triangles	:   40.04M	(40040000)
num *unique* vertices	:   22.02M	(22022000)
----
num *actual* meshes	:   10.10M	(10100000)
num *actual* triangles	:   4.04G	(4040000000)
num *actual* vertices	:   2.22G	(2222000000)
----
num textures		:   100.10K	(100100)
 - num *ptex* textures	:   0
 - num *image* textures	:   100.10K	(100101)
total size of textures	:   1.64G	(1640038400)
 - #bytes in ptex	:   0
 - #byte in texels	:   1.64G	(1640038400)
num materials		:   100.10K	(100100)

(and zooming in a bit)

(Note: the “artifacts” on some of those spheres are intentional – they’re “funnies”, not spheres. I find these funnies more useful as test geometry; but of course, if you want to generate “non-funny” spheres, there’s a flag for that as well.)

Now, finally, let’s use this to push my two RTX 8000 cards to the limit: ./miniGenScaleTest -o /slow/mini/scaleTest.mini -ni 10000 -nbs 2000000 -tr 32 … with which we end up at a whopping 800M unique triangles plus an additional 32 GB of texture data:

num instances		:   10.00K	(10001)
num objects		:   101
----
num *unique* meshes	:   2.00M	(2000100)
num *unique* triangles	:   800.04M	(800040000)
num *unique* vertices	:   440.02M	(440022000)
----
num *actual* meshes	:   2.01M	(2010000)
num *actual* triangles	:   804.00M	(804000000)
num *actual* vertices	:   442.20M	(442200000)
----
num textures		:   2.00M	(2000100)
 - num *ptex* textures	:   0
 - num *image* textures	:   2.00M	(2000101)
total size of textures	:   32.77G	(32769638400)
 - #bytes in ptex	:   0
 - #byte in texels	:   32.77G	(32769638400)
num materials		:   2.00M	(2000100)

The result looks like this:

… and just to show that this really does push my GPUs to the limit (even with my latest data-parallel multi-GPU renderer), here is also the output from nvidia-smi:

I guess I could have squeezed in a bit more (some 3 GB still unused on each GPU!), but the goal of this exercise was to have something that can bring my renderer to its limits, and I guess that’s pretty much it for now.

BTW: The result still runs at 17 fps 🙂

If you want to have a look at this tool: check out the miniScene repo, in particular tools/genScaleTest.cpp. The resulting .mini file should be trivial to read and use for your own stuff, so … enjoy!
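
And in case it helps getting started: loading and walking such a scene should look roughly like the sketch below. I’m writing the miniScene class and member names from memory, so treat every identifier as an assumption and double-check against the headers in the repo:

#include "miniScene/Scene.h"  // from the miniScene repo
#include <cstdio>

int main(int argc, char **argv)
{
  // NOTE: all names below are from memory and may not exactly match
  // the current repo -- check miniScene/Scene.h before relying on them.
  mini::Scene::SP scene = mini::Scene::load(argv[1]);

  size_t numActualTriangles = 0;
  for (auto &inst : scene->instances)           // assumed: instance list
    for (auto &mesh : inst->object->meshes)     // assumed: per-object meshes
      numActualTriangles += mesh->indices.size(); // one vec3i per triangle
  printf("total (actual) triangles: %zu\n", numActualTriangles);
  return 0;
}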
