Grazing Jellies

Grazing Jellies is an augmented reality project I had the pleasure to work on. I teamed up with Neil Mendonza and Hudson-Powell, commissioned by the Abandon Normal Devices festival. Grazing Jellies takes place in a forest: a real-time portal into a colorful, dream-like world of jelly creatures. The creatures were made to react to movement in the environment and from people, and they can also be called by making noise. When nothing is going on, they wander around the world and hunt for food. My work on this project was mostly about generating, animating and rendering the jellies.

At first, the idea was to use metaballs for the creatures; we wanted to give the jelly surfaces some wobbly, rounded forms. After some testing we decided against it, as I did not see any real advantage, especially in performance, so we went with a more "traditional" method. The creatures were generated in two steps: the body and the head.

The body: create a skeleton line and generate a deforming cylindrical body from it. After some time and tweaking, the body looked right, and from there I just had to start playing with values to get the animation going. Some trig plus the creature's motion parameters worked just fine. We got it right, but there was still a problem: the creatures' heads.

The head: I tried different methods, like interpolating the body's end toward a spherical shape with some easing, but it did not work out. The head textures just didn't look good; "pinched", as Jody said several times. We ended up creating the head as a second step, building a hemisphere and then "attaching" it to the body's end. The good news is that this came in handy later, as it made mapping the head's texture a lot easier. Some things just come in handy sometimes. I had to tweak a bit to get the head and body animations to get along, but in the end it turned out looking pretty good.
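To make the body step concrete, here is a minimal sketch of the idea: sample a skeleton line that wobbles over time with a bit of trig, then sweep a ring of vertices around each sample to get the cylindrical body. Everything here (names, constants, the sine-based wobble) is illustrative rather than the project's actual code, and a proper version would orient each ring along the skeleton's tangent:

```cpp
#include <cmath>
#include <vector>

struct Vec3 { float x, y, z; };

// Sweep a ring of `sides` vertices around `segments + 1` skeleton samples.
// The skeleton is a line along z with a time-driven sine wobble, and the
// radius tapers toward the tail; index the result as a grid of quads/strips.
std::vector<Vec3> buildBody(float time, int segments, int sides, float radius)
{
    std::vector<Vec3> verts;
    for (int i = 0; i <= segments; ++i) {
        float t = (float)i / segments;                    // 0..1 along the body
        Vec3 c = { 0.25f * std::sin(4.0f * t + time),     // skeleton sample
                   0.0f,
                   2.0f * t };
        float r = radius * (1.0f - 0.5f * t);             // taper the body
        for (int j = 0; j < sides; ++j) {
            float a = 6.2831853f * j / sides;             // angle around the ring
            verts.push_back({ c.x + r * std::cos(a),
                              c.y + r * std::sin(a),
                              c.z });
        }
    }
    return verts;
}
```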

On the lighting side, a kind of "ambient light" + Phong lighting + cubemap reflections made it into the final version.
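In fragment-shader terms, that combination could look something like the sketch below (GLSL embedded as a C++ string; the uniform and varying names are my own assumptions, not the project's code):

```cpp
// Constant ambient term + Phong diffuse/specular + a cubemap reflection lookup.
const char* kJellyFrag = R"(
#version 150
uniform samplerCube envMap;   // reflection cubemap
uniform vec3 lightDir;        // surface-to-light, normalized
uniform vec3 baseColor;
in vec3 vNormal;              // interpolated from the vertex shader
in vec3 vEyeDir;              // surface-to-eye
out vec4 fragColor;
void main() {
    vec3 N = normalize(vNormal);
    vec3 E = normalize(vEyeDir);
    vec3 ambient = 0.25 * baseColor;                       // the "ambient light"
    float diff   = max(dot(N, lightDir), 0.0);             // Phong diffuse
    vec3  R      = reflect(-lightDir, N);
    float spec   = pow(max(dot(R, E), 0.0), 32.0);         // Phong specular
    vec3  refl   = texture(envMap, reflect(-E, N)).rgb;    // cubemap reflection
    fragColor = vec4(ambient + diff * baseColor + spec + 0.3 * refl, 1.0);
}
)";
```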

This was an awesome project and I really enjoyed working with the team. As said before, this is not meant to be a closed project; it was designed to live and change so it can fit other environments, so expect to see more from the *mighty* Grazing Jellies.

Some videos on the development blog.

Full article on the festival @ Wired.





Mass cube rendering

I have recently done some experimental work on rendering massive amounts of cubes. I was inspired by this video from Smash, and I wanted to know how far I could go on my NVIDIA GT240M (on the rendering side). My first choice was geometry shaders.
I quickly wrote an app that sent a list of points in space to the GPU, where a geometry shader would generate a cube mesh for each point. I tested it with 100,000 cubes and the framerate was bad (10fps or so). It was time to optimize.
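That first version looked roughly like the sketch below: a geometry shader that expands each incoming point into a cube, one face at a time, as six 4-vertex strips (24 vertices total). This is a hedged reconstruction; the `mvp` and `halfSize` uniforms and all names are illustrative:

```cpp
const char* kCubeGS = R"(
#version 150
layout(points) in;
layout(triangle_strip, max_vertices = 24) out;
uniform mat4 mvp;
uniform float halfSize;
// One face as a 4-vertex strip: corners c +/- a +/- b, pushed out along n.
// The (a, b, n) triples below satisfy cross(a, b) == n, so winding is consistent.
void emitFace(vec4 c, vec3 a, vec3 b, vec3 n) {
    gl_Position = mvp * (c + vec4((-a - b + n) * halfSize, 0.0)); EmitVertex();
    gl_Position = mvp * (c + vec4(( a - b + n) * halfSize, 0.0)); EmitVertex();
    gl_Position = mvp * (c + vec4((-a + b + n) * halfSize, 0.0)); EmitVertex();
    gl_Position = mvp * (c + vec4(( a + b + n) * halfSize, 0.0)); EmitVertex();
    EndPrimitive();
}
void main() {
    vec4 c = gl_in[0].gl_Position;   // cube center
    emitFace(c, vec3(1,0,0), vec3(0,1,0), vec3(0,0, 1));  // +z
    emitFace(c, vec3(0,1,0), vec3(1,0,0), vec3(0,0,-1));  // -z
    emitFace(c, vec3(0,1,0), vec3(0,0,1), vec3( 1,0,0));  // +x
    emitFace(c, vec3(0,0,1), vec3(0,1,0), vec3(-1,0,0));  // -x
    emitFace(c, vec3(0,0,1), vec3(1,0,0), vec3(0, 1,0));  // +y
    emitFace(c, vec3(1,0,0), vec3(0,0,1), vec3(0,-1,0));  // -y
}
)";
```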

The next step was to optimize the cube generation by going from a 24-vertex cube down to a 14-vertex triangle strip. Things got better, but nowhere near my expectations. I was not satisfied; I mean, I had a lot of cubes on screen (100K, which was not that much) and that was it, nothing else. We're talking about 20fps or so for 100,000 cubes (around 1.2 million triangles per frame). Later on I added vertex normals to the geometry shader and started working on some lighting/shadowing, but I ended up going back to the rendering side of the job. Meanwhile, I was talking with a friend of mine about this idea and we were discussing ways to compute lighting, but I couldn't stop thinking about my real problem. Then it came to me.
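The 14-vertex trick works because a whole cube can be drawn as a single triangle strip: 14 vertices give exactly the 12 triangles needed, two per face, with no degenerates. With corners numbered so that corner i sits at ((i&1), ((i>>1)&1), ((i>>2)&1)), one valid ordering is shown below (this is one such ordering, not necessarily the exact one I used):

```cpp
// 14 strip vertices -> 12 triangles, each cube face covered exactly twice.
static const unsigned char kCubeStrip[14] =
    { 0, 1, 2, 3, 7, 1, 5, 0, 4, 2, 6, 7, 4, 5 };

// Corner positions come from the bit pattern, centered at the origin:
// pos[i] = vec3(i & 1, (i >> 1) & 1, (i >> 2) & 1) - vec3(0.5);
```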

I had previously done some experiments with OpenGL hardware instancing, but never got around to doing much with it. What better time than now? So I grabbed that project and took it for a spin. After a few hours I had the same amount of cubes on screen with a much, much better framerate. I quickly implemented some eye candy (coloring, texturing, vertex lighting), did some tweaking here and there, and, as I was listening to Mr. Peter Broderick (hi, I love you man), added some audio analysis to the feature list.
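For reference, the core of an instanced version boils down to something like this sketch (GL 3.3-style; it assumes a current context and a VAO whose attribute 0 already holds the 14-vertex cube strip, and the buffer names and attribute slots are illustrative):

```cpp
#include <GL/glew.h>

// Draw `count` cubes in one call. The matching vertex shader would compute
// something like: position = cubeCorner * size + instanceOffset.
void drawCubes(GLuint vao, GLuint offsetVbo, const float* offsets, int count)
{
    glBindVertexArray(vao);

    // Per-instance cube centers, re-uploaded every frame since they move.
    glBindBuffer(GL_ARRAY_BUFFER, offsetVbo);
    glBufferData(GL_ARRAY_BUFFER, count * 3 * sizeof(float),
                 offsets, GL_STREAM_DRAW);
    glEnableVertexAttribArray(1);
    glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, 0, (void*)0);
    glVertexAttribDivisor(1, 1);   // advance this attribute once per instance

    // One 14-vertex strip per cube, `count` instances in a single draw call.
    glDrawArraysInstanced(GL_TRIANGLE_STRIP, 0, 14, count);
}
```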
Last but not least, a kind of "Brownian motion" was used to generate the points in space. I increased the cube count to 512*512 and watched it flow (at 20fps).
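The motion step itself is simple. A hedged sketch, assuming each point just takes a small Gaussian random step per frame before being re-uploaded as instance offsets:

```cpp
#include <random>
#include <vector>

// One "Brownian" step: every coordinate of every point gets a small Gaussian
// nudge. `pts` holds xyz triples; `stepSize` is an illustrative parameter.
void brownianStep(std::vector<float>& pts, float stepSize)
{
    static std::mt19937 rng{ 1234 };
    std::normal_distribution<float> gauss(0.0f, stepSize);
    for (float& p : pts) p += gauss(rng);
}

// Usage: std::vector<float> pts(512 * 512 * 3, 0.0f);   // 262,144 cubes
// Per frame: brownianStep(pts, 0.01f); then re-upload as instance offsets.
```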

In conclusion, hardware instancing was much easier to implement and performance seems much better at first sight. Above is a video of 262,144 audio-reactive cubes with GPU animation and basic lighting at around 20fps. For my video card I think that is very good. On a side note, I have not given up on geometry shaders. I am not sure what my next step on the subject will be (back to geometry shaders?), but for now this is it. Hardware instancing kicked geometry shaders in the ass.




