AntTweakBar with Core GL OSX

EDIT: As pointed out to me by Christoph Ruß, here’s a solution to this problem that doesn’t require changing the code:

“I would like to add that the problem really is that the OpenGL library cannot be found because it simply isn’t in the dynamic linker’s path. Your solution of providing a fixed path at compile time certainly works, but may not be ideal if you want to avoid re-compiling the lib.

I would recommend adjusting your DYLD_LIBRARY_PATH instead. This could go in ~/.bash_profile:

export DYLD_LIBRARY_PATH=$DYLD_LIBRARY_PATH:/System/Library/Frameworks/OpenGL.framework/Versions/Current/”

I’ve been trying to get AntTweakBar (version 116) to compile and run as my default GUI, but I ran into a few obstacles that took me a couple of days to solve. It should have been a smooth ride; instead it turned into a couple of crazy days, with an easy solution found just minutes ago.

Let’s begin with some background. It was pretty easy to compile following the instructions found on the website. Under Windows it took a few minutes to get it up and running, even as a static library. Problems began when I tried to compile on OS X 10.9:

  1. It reports two errors in the declarations of glDrawElements and glShaderSource, which differ from the declarations in the GL header.
    Easy fix: just copy the declarations from the GL header into ATB (the prototypes are listed right after this list).
  2. Compiling as a static library was fine. After reading its change log I realised it already has static library support, but I had taken the longest path and was trying to do it manually, so it took more time than I had planned.
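
For reference, here are the two prototypes roughly as they appear in Apple’s OpenGL/gl3.h (quoted from memory, so double-check against your own header); the extra const-qualifiers are what ATB’s own declarations disagreed with:

// Core-profile prototypes from the system GL header (from memory)
void glDrawElements( GLenum mode, GLsizei count, GLenum type,
                     const GLvoid *indices );
void glShaderSource( GLuint shader, GLsizei count,
                     const GLchar* const *string, const GLint *length );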

After some time I finally got the static library compiled and linking successfully, only to run into the following error:

“AntTweakBar: OpenGL Core Profile functions cannot be loaded.
ERROR(AntTweakBar) >> Cannot load OpenGL library dynamically”

I tracked that problem down to this line:

gl_dyld = dlopen("OpenGL", RTLD_LAZY );

gl_dyld was always NULL, which tells me the library wasn’t being loaded/found (really?!). The fix I found is to change the first parameter to use the full path to OpenGL, as follows:

gl_dyld = dlopen("/System/Library/Frameworks/OpenGL.framework/Versions/Current/OpenGL", RTLD_LAZY );


It is a very simple fix and worked just fine for me, but it cost me a few days of frustration and travel. After all, I was supposed to have this up and running last Thursday; it’s now nearly a week later.





Kinect Disintegrates

Another of my recent experiments with the Kinect. Since I keep wanting to play with particles running on the GPU, I thought it would be much more fun to connect them to the Kinect, and I came up with this: a million particles form the user and disintegrate as time goes by. It actually looks like a man made of sand. Have a look:

This is how it all started:





OpenNI Block of Code

OpenNI is a library created by PrimeSense. It’s quite famous for its ability to use the Kinect to track users and feed us a usable skeleton per user. Since I wanted to give this a try and play around with skeleton tracking, I had to have it. So I did, and I ended up with a wrapper around the API. It’s open-source and available to everyone. It’s not perfect and needs more work and more features, but it’s a start. Use it at your own risk, with whatever creative framework you prefer.

You can find it at github.
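
For context, talking to OpenNI directly looks roughly like the sketch below. This is the raw OpenNI 1.x C++ API, not my wrapper’s interface; the calibration/pose callbacks are left out, so treat it as an outline of the flow rather than a complete program.

#include <XnCppWrapper.h>

int main()
{
	xn::Context context;
	context.Init();

	// The user generator is what detects users and exposes the skeleton.
	xn::UserGenerator users;
	users.Create( context );
	users.GetSkeletonCap().SetSkeletonProfile( XN_SKEL_PROFILE_ALL );

	context.StartGeneratingAll();

	while( true )
	{
		context.WaitAndUpdateAll();

		XnUserID ids[8];
		XnUInt16 count = 8;
		users.GetUsers( ids, count );

		for( XnUInt16 i=0; i<count; i++ )
		{
			if( !users.GetSkeletonCap().IsTracking( ids[i] ) )
				continue;   // user not calibrated/tracked yet

			XnSkeletonJointPosition head;
			users.GetSkeletonCap().GetSkeletonJointPosition( ids[i], XN_SKEL_HEAD, head );
			// head.position.X/Y/Z are real-world coordinates in millimetres
		}
	}
}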





CgFX Block of Code

I have been doing more and more work away from Java/Processing, so I had to bring some tools with me to help improve development time. One of my favourite shader languages is CgFX, a sibling of HLSL. I spend a lot of time using it for my shader programming, so I wrapped it up in a simple-to-use block of code that should be easy to drop into your favourite framework.

You can find it at github.
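
To give an idea of what the wrapper hides, driving a .cgfx effect with the raw Cg runtime goes roughly like this (the effect file name and parameter name are placeholders, not taken from my actual code):

#include <Cg/cg.h>
#include <Cg/cgGL.h>

// Load the effect once at startup
CGcontext context = cgCreateContext();
cgGLRegisterStates( context );                 // let CgFX drive GL state assignments
CGeffect effect = cgCreateEffectFromFile( context, "myeffect.cgfx", NULL );

// Pick the first technique that validates on this hardware
CGtechnique technique = cgGetFirstTechnique( effect );
while( technique && cgValidateTechnique( technique ) == CG_FALSE )
	technique = cgGetNextTechnique( technique );

// Per frame: update parameters, then run every pass of the technique
CGparameter mvp = cgGetNamedEffectParameter( effect, "ModelViewProj" );
cgGLSetStateMatrixParameter( mvp, CG_GL_MODELVIEW_PROJECTION_MATRIX,
                             CG_GL_MATRIX_IDENTITY );

CGpass pass = cgGetFirstPass( technique );
while( pass )
{
	cgSetPassState( pass );
	// ... draw your geometry here ...
	cgResetPassState( pass );
	pass = cgGetNextPass( pass );
}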





Ping-pong technique on GPU

Hello there. Here is a new tutorial, this time about ping-ponging on the GPU. I’ve been wanting to write about it for some time, and it’s finally off my todo list. Let’s get down to business. The ping-pong technique is normally used when a shader needs its own result as a source parameter for its next iteration. It’s common on the GPU because, for now, a fragment program cannot write its result back into the buffer it is reading from, so we need a second, identical buffer to hold the current result for the next step/iteration. Was that too hard?

So imagine we have two image buffers, Image1 and Image2. Normally, to change data in Image1, you would simply read it and write directly back to the same position. When we’re talking about a fragment shader program, we can’t do that. You may ask, what’s the solution? Ping-pong it!

Here is what i’m talking about:

//
// Initialization
//
int W = 100;
int H = 100;
int* ImageArray[2];            // Two buffers of W*H ints each
ImageArray[0] = new int[W*H];
ImageArray[1] = new int[W*H];
int CurrActiveBuffer = 0; // Current active buffer index
 
 
//
// Mainloop
//
for( int j=0; j<H; j++ )
{
	for( int i=0; i<W; i++ )
	{
		// ERROR! Write to same buffer. Not possible in gpu shader
		//Image1[i+j*W] = Image1[i+j*W] * 2;  // Mul by 2
 
		// Ping-pong version
		int src = CurrActiveBuffer; // Current active buffer (Input)
		int dest = 1-CurrActiveBuffer;  // Back buffer (Output)
		ImageArray[dest][i+j*W] = ImageArray[src][i+j*W] * 2;  // Mul by 2
	}
}
 
// Swap back and front buffers (read becomes write and vice-versa)
CurrActiveBuffer = 1-CurrActiveBuffer;    // CurrActiveBuffer ? 0 : 1;

As you can see, we start with buffer 0. That’s where we get our data from (read), and we write it to buffer 1. Once the operation is done we swap the buffers, and we do that every frame. This way we can use the last iteration’s data as input for the next iteration.
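
To make the iteration part explicit, here is the same CPU example run for a few passes in a row (the pass count is arbitrary, just for illustration):

// Run the multiply-by-2 pass 4 times; each pass reads the buffer the
// previous pass wrote into, thanks to the swap at the end.
for( int pass=0; pass<4; pass++ )
{
	int src  = CurrActiveBuffer;       // read from the current buffer
	int dest = 1-CurrActiveBuffer;     // write into the other one

	for( int p=0; p<W*H; p++ )
		ImageArray[dest][p] = ImageArray[src][p] * 2;

	CurrActiveBuffer = 1-CurrActiveBuffer;   // swap for the next pass
}
// After 4 passes every value holds its initial value multiplied by 16.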

Now let’s put this in OpenGL terms. I will be using a Framebuffer Object (FBO) and two textures. The framebuffer will hold both textures as two color attachments. So let’s get coding:

//
// Initialization
//
int W = 100;
int H = 100;
GLuint FboID;
GLuint TexID[2];
int CurrActiveBuffer = 0;  // Current active buffer index
 
// Create the framebuffer
glGenFramebuffersEXT( 1, &FboID );
glBindFramebufferEXT( GL_FRAMEBUFFER_EXT, FboID );
 
// Create 2 textures for input/output.
glGenTextures( 2, TexID );
 
for( int i=0; i<2; i++ )
{
	glBindTexture( GL_TEXTURE_2D, TexID[i] );
	glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR );
	glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR );
	glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE );
	glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE );
        // RGBA8 buffer
	glTexImage2D( GL_TEXTURE_2D, 0, GL_RGBA, W, H, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL );
	if( _hasMipmapping )  glGenerateMipmapEXT( GL_TEXTURE_2D );
}
 
// Now attach textures to FBO
int src = CurrActiveBuffer;
int dest = 1-CurrActiveBuffer;
glFramebufferTexture2DEXT( GL_FRAMEBUFFER_EXT, 
                           GL_COLOR_ATTACHMENT0_EXT, 
                           GL_TEXTURE_2D, TexID[src], 0 );
glFramebufferTexture2DEXT( GL_FRAMEBUFFER_EXT, 
                           GL_COLOR_ATTACHMENT1_EXT, 
                           GL_TEXTURE_2D, TexID[dest], 0 );
 
 
 
//
// Mainloop
//
int src = CurrActiveBuffer;
int dest = 1-CurrActiveBuffer;
 
glBindFramebufferEXT( GL_FRAMEBUFFER_EXT, FboID );
 
glDrawBuffer( GL_COLOR_ATTACHMENT0_EXT + dest );  // render into the back texture
 
glBindTexture( GL_TEXTURE_2D, TexID[src] );
ShaderProgram.SetTextureUniform( 0 );
RenderScene();
 
glBindFramebufferEXT( GL_FRAMEBUFFER_EXT, 0 );
 
 
// Swap back and front buffers (read becomes write and vice-versa)
CurrActiveBuffer = 1-CurrActiveBuffer;    // CurrActiveBuffer ? 0 : 1;

Why would you want to ping-pong? Well, imagine you are doing a water effect on the GPU. You’ll need access to the data from your previous buffer, right? On the CPU that would be trivial, since the last frame’s memory buffer is allocated somewhere you can read from and write to directly. It doesn’t really work that way on the GPU side (though we’re getting closer with OpenCL, CUDA and DirectCompute).

Have fun.






