Tagged with 3D

Control Room

For a company hackathon I came up with a 3D web conferencing idea: I wanted it to feel like I'm inside a control room, with everything and everyone around me. So I wrote a prototype in C and OpenGL that can use an arbitrary number of pre-recorded videos to demonstrate the idea. Although I didn't win :(, you can try it here, now, in the browser. Listen to the videos carefully and use the arrow keys to navigate.



The Minimal Multiplatform C/OpenGL Project

In the past year I've created a few very nice wrappers for different platforms and operating systems that all wrap the same controller file, to simplify multi-platform C development for my stuff. For your edification and pleasure they are now yours on GitHub!

TMMCP is a wrapper/project collection for C programmers who like it quick & simple & with total control. A TMMCP wrapper provides:

  • an OpenGL context
  • device input events
  • native audio/video playback
  • native video to texture rendering
  • some other functions I found useful during game development

The wrappers are very simple; after a little reading you can easily extend them if you need some special functionality.

Current list of wrappers and projects

  • iOS - Xcode
  • MacOS - Xcode
  • Android - Android Studio
  • asmjs/html5 ( emscripten ) - bash script

What is the license?

Everything in this repo is in the public domain. Take it, use it, learn from it.

How to use it

Open "template/sources/controller.c" in your favorite editor and start coding. After you finished open the wanted platform's project file and build/compile/run/deploy. You may have to add newly created source files / include paths to the project settings.

Cool, do you have any documentation?

The documentation is inline in controller.c. If you don't get something, check out the demo projects. demo_dragbox shows a draggable white box over a purple background - it demonstrates basic OpenGL rendering, input handling and audio playback. demo_conference is a more advanced demo: it creates a 3D conference room with video avatars using the wrapper's video-to-texture rendering - even in html5! It doesn't work on Android yet; I'm still in the middle of that implementation, so it will show blank avatars there.

Contributors Wanted

Windows and Linux wrappers/projects would be awesome for a start, but any platform is warmly welcome!


I'm using some stuff in the demos from these beautiful people:

Check it out at https://github.com/milgra/tmmcp


WebGL Performance

I'm working on a multi-platform C and OpenGL based UI renderer, and displaying things at 60 fps is essential.

It works well on desktop and mobile OSes, and it looks good in WebGL on my non-retina MacBook Air, but sadly on retina MacBook Pros the framerate dies with big browser windows ( more than ~50% of the screen ).

After a few days of trial and research I figured out the following:

  • Google Chrome's WebGL implementation on OS X is much faster than Safari's
  • enabling/disabling the preserved drawing buffer makes no difference
  • using the scissor test to draw only a fragment of the screen makes no difference
  • switching off antialiasing and the context's alpha channel makes a difference ( see the sketch after this list )
  • disabling alpha blending speeds up frame rendering big time
  • if the frame rate is dying then every single JavaScript call in between makes everything slower ( thanks to single-threadedness )
  • in OS X's high and maximum display scaling modes the maximum framerate you can achieve with a full-size browser window and a full-size canvas is 35-40 fps; in lower modes 60 fps is reachable
  • a full texture upload at any time kills performance
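For the html5/emscripten build, the antialiasing and alpha findings above map directly to the context attributes you request. Here's a minimal sketch using emscripten's html5 API; the "#canvas" selector is an assumption, use whatever target your shell page defines:

    #include <emscripten/html5.h>

    /* request a WebGL context with no antialiasing and no alpha channel,
       so the browser has less work compositing the canvas into the page */
    EMSCRIPTEN_WEBGL_CONTEXT_HANDLE create_fast_context( void )
    {
        EmscriptenWebGLContextAttributes attrs;
        emscripten_webgl_init_context_attributes( &attrs );

        attrs.alpha     = EM_FALSE;   /* opaque canvas */
        attrs.antialias = EM_FALSE;   /* no MSAA resolve every frame */

        EMSCRIPTEN_WEBGL_CONTEXT_HANDLE ctx =
            emscripten_webgl_create_context( "#canvas", &attrs );
        emscripten_webgl_make_context_current( ctx );
        return ctx;
    }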

So what did I learn?

  • WebGL will never be as fast as standard (windowed) OpenGL because the browser has to blend the WebGL canvas into the web page with every frame, and this slows down rendering big time
  • WebGL is smart enough not to swap frame buffers if the GL context hasn't changed
  • for a full-scale retina OS X WebGL UI renderer experience I have to wait for the next generation of MacBooks

Anyway, I rewrote my UI renderer to use as few GL calls as possible, to use glTexSubImage2D and glBufferSubData instead of full uploads wherever possible, and to render text fields as single bitmaps instead of individual textured quads per letter, and now I'm almost satisfied.
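To illustrate the partial-upload pattern ( the helper names and parameters here are mine, not from any particular library ): allocate the texture and the vertex buffer once, then push only the region that actually changed.

    #include <GLES2/gl2.h>

    /* update only the changed rectangle of an already-allocated RGBA texture;
       pixels must be a tightly packed w x h block */
    void update_texture_region( GLuint tex, int x, int y, int w, int h,
                                const unsigned char* pixels )
    {
        glBindTexture( GL_TEXTURE_2D, tex );
        glTexSubImage2D( GL_TEXTURE_2D, 0, x, y, w, h,
                         GL_RGBA, GL_UNSIGNED_BYTE, pixels );
    }

    /* update only the changed byte range of an already-allocated vertex buffer */
    void update_buffer_region( GLuint vbo, GLintptr offset, GLsizeiptr size,
                               const void* data )
    {
        glBindBuffer( GL_ARRAY_BUFFER, vbo );
        glBufferSubData( GL_ARRAY_BUFFER, offset, size, data );
    }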


Laser Scanning

Last week I badly wanted to see my face as a voxel cloud on the screen, so I entered 3D scanning territory. There are two affordable ways for a regular person to do it: reconstruction from a photo series, or triangulation based on a laser line. The first method is quite inaccurate and needs complex algorithms ( Autodesk 123D is the biggest free solution ); the second one is dead simple and accurate, but it is very hard to add texture data. So I chose the latter. It needs only two things: a line laser and my mobile's camera.

So I went to the local home improvement store and bought a Bosch Quigo laser level, set up a simple scene and started coding.

The theory: you know the distance between the laser emitter and the camera lens' focal point, you know the angle between the laser and the camera axis, and you also know the field of view of the camera. From these data you can assign an angle to every pixel of an image created by the camera, and from that angle you can tell the distance of the laser hit point from the camera.

The red line is the laser, the blue line is the camera axis. d1 is the distance between them; it is set by you. a1 is the angle of the camera axis; it is also set by you. The black line is the line between the camera and the point where the laser touches an object. The yellow line is the projection plane of the camera, and the green lines mark the field of view of the camera. The field of view is also known: you can look it up from your phone's vendor, or calculate it from the focal length and sensor size ( in the case of my iPhone 6s they are 4.15mm and 4.5mm, so the field of view is atan( ( 4.5 / 2.0 ) / 4.15 ) * 2, roughly 57 degrees when you hold it horizontally ), or put a one-meter-wide object on the ground, take a picture of it from 2 meters away and do the math based on the image.
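As a quick sanity check on the focal-length route, the whole calculation is one atan ( using the iPhone 6s numbers from above ):

    #include <math.h>
    #include <stdio.h>

    int main( void )
    {
        const double pi        = 3.14159265358979323846;
        const double focal_mm  = 4.15; /* iPhone 6s focal length */
        const double sensor_mm = 4.5;  /* sensor width in the shooting orientation */

        double fov_rad = 2.0 * atan( ( sensor_mm / 2.0 ) / focal_mm );
        printf( "horizontal FOV: %.1f degrees\n", fov_rad * 180.0 / pi ); /* ~57 degrees */
        return 0;
    }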

So the point is that the black line has a corresponding pixel on the image. You know angle a2 ( which is the field of view divided by two ), so you can calculate angle a3, since angles correlate linearly with pixel distance from the image center: ( image width / 2 ) / ( FOV / 2 ) = ( black dot's pixel distance from center ) / ( wanted angle ). And once you have angle a3 you know the angle between the red line and the black line, and a cosine function gives you the length of the black line: cos( angle ) = d1 / wanted length.
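Put together, the per-pixel depth comes straight out of those two formulas. A sketch of it in C; the variable names are mine, and exactly how a1 and a3 combine into the final angle depends on the geometry of the figure, so treat the a1 - a3 line as an assumption:

    #include <math.h>

    /* d1 in meters, a1 and fov in radians, px_from_mid in pixels measured
       from the image center towards the laser dot */
    double depth_for_pixel( double d1, double a1, double fov,
                            double image_w, double px_from_mid )
    {
        /* ( image_w / 2 ) / ( fov / 2 ) = px_from_mid / a3  =>  */
        double a3 = px_from_mid * ( fov / 2.0 ) / ( image_w / 2.0 );

        /* angle between the red ( laser ) line and the black ( camera -> dot ) line;
           a1 - a3 is an assumption, it may be a1 + a3 in your setup */
        double angle = a1 - a3;

        /* cos( angle ) = d1 / length  =>  */
        return d1 / cos( angle );
    }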

This is the current state; check it out here.

The next steps will be to make it freely movable, and possibly to transform the current scan based on motion sensor data to get a full scan of anything. Yaaay.
