Chrome doesn’t support Depth Textures on Android. Chrome supports Depth Textures on iOS. Firefox supports Depth Textures on Android….
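Given how patchy this support is across browsers and platforms, one way to cope is to detect the extension at runtime instead of keying off the browser name; a minimal sketch (the helper name is mine, the extension names are the standard and vendor-prefixed variants):

```javascript
// Hypothetical runtime check: ask the WebGL context whether depth
// textures are available instead of relying on per-browser tables.
function supportsDepthTextures(gl) {
  // Any of the three extension names indicates support.
  return !!( gl.getExtension('WEBGL_depth_texture') ||
             gl.getExtension('MOZ_WEBGL_depth_texture') ||
             gl.getExtension('WEBKIT_WEBGL_depth_texture') );
}
```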
Three.js includes a nice FBX (and other formats) converter script to Three.js's JSON format, but there were some issues and improvements I felt the need for, so I've created a modified version of it.
This modified version includes the following changes:
- Maps the AmbientColor texture slot to the Three.js LightMap
- Keeps material map filenames only, with the full path removed
- Adds a new console arg “-b” to override MeshLambertMaterial with MeshBasicMaterial
- Creates a local “/maps” folder next to the .js file and saves all textures to it
The first thing I noticed was that there was no way to get lightmaps working when exporting from Maya, so I needed a slot that did just that. Also, when adding textures to materials, the script would always fail when attempting to copy textures, because the source would try to copy onto itself. I'm not sure why it was made like that, but it didn't work for me. Copying the textures into a local folder was a need, as I want things relative to my project root.
You can find the modified script here: GIST
Update: I was told by Mr.Doob that it is possible to render partial geometry using offsets/draw calls, so there is no need to change the library. There is a catch, though: it only works for indexed geometry, which in my specific case would not do the trick. Still, it is good to know such an option exists.
One thing I missed when using Three.js was the possibility of rendering only a subset of a Geometry; in my case, a large BufferGeometry pool.
Setting up one was quite easy:
// Non-indexed geometry data
var numVertices = 1024 * 10;
var cubePoolGeometry = new THREE.BufferGeometry();
cubePoolGeometry.addAttribute( "position", new THREE.Float32Attribute( numVertices, 3 ) );
cubePoolGeometry.addAttribute( "color", new THREE.Float32Attribute( numVertices, 3 ) );
cubePoolGeometry.addAttribute( "uv", new THREE.Float32Attribute( numVertices, 2 ) );
cubePoolGeometry.addAttribute( "normal", new THREE.Float32Attribute( numVertices, 3 ) );
When rendering the geometry, Three.js will internally use the size of the buffer, i.e. the total length allocated at init time. The internal code shows exactly that:
_gl.drawArrays( _gl.TRIANGLES, 0, position.array.length / 3 );
_this.info.render.vertices += position.array.length / 3;
_this.info.render.faces += position.array.length / 9;
This always renders the full batch, which I did not want. The change was easy and could be a good thing for future releases of the library:
_gl.drawArrays( _gl.TRIANGLES, 0, position.numItems / 3 );
_this.info.render.vertices += position.numItems / 3;
_this.info.render.faces += position.numItems / 9;
This makes it possible to render a subset of the geometry by changing the attribute's numItems variable, like so:
// Render only the first 20% of the pool
var numVertices = 1024 * 2;
cubePoolGeometry.attributes.position.numItems = numVertices * 3;
// The patched drawArrays call only reads position.numItems,
// so updating the other attributes is optional:
//cubePoolGeometry.attributes.color.numItems = numVertices * 3;
//cubePoolGeometry.attributes.uv.numItems = numVertices * 2;
//cubePoolGeometry.attributes.normal.numItems = numVertices * 3;
The full buffer is untouched, and you can still get the total size by querying the array length (or save it for future reference). That was it.
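If you flip the subset around often, the bookkeeping can be wrapped in a small helper; a sketch that assumes the patched renderer above (`setDrawCount` is my own name, not a Three.js API):

```javascript
// Hypothetical helper: limit how many vertices of a pooled,
// non-indexed BufferGeometry the patched renderer will draw.
// Only position.numItems drives the draw call, but keeping all
// attributes in sync avoids surprises if the patch changes.
function setDrawCount(geometry, numVertices) {
  var attributes = geometry.attributes;
  attributes.position.numItems = numVertices * 3; // 3 floats per vertex
  attributes.normal.numItems   = numVertices * 3;
  attributes.color.numItems    = numVertices * 3;
  attributes.uv.numItems       = numVertices * 2; // 2 floats per vertex
}
```

Usage would then be a one-liner, e.g. `setDrawCount(cubePoolGeometry, 1024 * 2)` to draw the first fifth of the 10k-vertex pool.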
After watching Rome, a project created by Google, my interest grew, so I downloaded a framework called Three.js. This framework is more than just WebGL, but I didn't really care; I just wanted that part. If you get curious, check it out: it's growing, and even Google used it.
It was quite easy to start playing with, as it ships with loads of samples (it would be a lot easier if I knew where to find the documentation), so I started from there: I created a simple page capable of rendering WebGL to a canvas and explored the framework more or less until I actually made something with it.
The next screenshots include one from the application running with OpenGL; the other two were captured directly from the browser.
Last but not least, I want to mention the work of Tristan Bethe. The 3D scan model seen in these pictures is all his.