Ray Tracing Part One

Let's create a simple ray tracing engine, just for fun and to showcase how simple, and how CPU intensive, the underlying algorithm is compared to rasterization. We won't use any libraries, frameworks, GPU drivers or other terrible things; we'll stick with the Linux framebuffer and the standard C library, it can still be done within a hundred lines of code.

We need two things in our scene : a camera, defined by a focus point and a camera window, and some geometry, for example a rectangle. The process is simple : we create lines ( rays ) starting from the camera focus point and going through the camera window, starting from the top left corner with a specific step size, and we check whether these rays collide with the geometry ( whether the lines intersect the rectangle ). If there is an intersection, the screen point will be white, if not, it will be blue for demonstration purposes. This is what we do for a start.

Check out the code :

#include <stdio.h>
#include <stdint.h>     // for uint8_t and uint32_t
#include <fcntl.h>      // for open()
#include <float.h>      // for FLT_MIN
#include <sys/mman.h>   // for mmap()
#include <sys/ioctl.h>  // for framebuffer access
#include <linux/fb.h>   // for framebuffer access

typedef struct _v3_t
{
    float x, y, z;
} v3_t;

/* add two vectors */

v3_t v3_add( v3_t a , v3_t b )
{
    v3_t v;

    v.x = a.x + b.x;
    v.y = a.y + b.y;
    v.z = a.z + b.z;

    return v;
}

/* subtracts b from a */

v3_t v3_sub( v3_t a , v3_t b )
{
    v3_t v;

    v.x = a.x - b.x;
    v.y = a.y - b.y;
    v.z = a.z - b.z;

    return v;
}

/* returns the dot product of two vectors */

float v3_dot( v3_t a , v3_t b )
{
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

/* scales vector */

v3_t v3_scale( v3_t a , float f )
{
    v3_t v;

    v.x = a.x * f;
    v.y = a.y * f;
    v.z = a.z * f;

    return v;
}

/* returns squared length to avoid square root operation */

float v3_length_squared( v3_t a )
{
    return a.x * a.x + a.y * a.y + a.z * a.z;
}

/* creates pixel color suitable for the actual screen info */

uint32_t pixel_color( uint8_t r, uint8_t g, uint8_t b, struct fb_var_screeninfo *vinfo )
{
    return  ( r << vinfo->red.offset ) | 
            ( g << vinfo->green.offset ) | 
            ( b << vinfo->blue.offset );
}

int main( )
{

    // get framebuffer

    struct fb_fix_screeninfo finfo;
    struct fb_var_screeninfo vinfo;

    int fb_fd = open( "/dev/fb0" , O_RDWR );

    // get and set screen information

    ioctl( fb_fd, FBIOGET_VSCREENINFO, &vinfo);

    vinfo.grayscale = 0;
    vinfo.bits_per_pixel = 32;

    ioctl( fb_fd, FBIOPUT_VSCREENINFO, &vinfo );
    ioctl( fb_fd, FBIOGET_VSCREENINFO, &vinfo );
    ioctl( fb_fd, FBIOGET_FSCREENINFO, &finfo );

    long screensize_l = vinfo.yres_virtual * finfo.line_length;

    uint8_t *fbp = mmap( 0 , screensize_l , PROT_READ | PROT_WRITE , MAP_SHARED , fb_fd , ( off_t ) 0 );

    v3_t screen_d = { vinfo.xres , vinfo.yres };    // screen dimensions

    // start ray tracing

    // set up camera

    v3_t camera_focus_p = { 0.0 , 0.0 , 100.0 };        // camera focus point

    // set up geometry

    v3_t rect_side_p = { -50.0 , 0.0 , -100.0 };    // left side center point of the rectangle
    v3_t rect_center_p = { 0 , 0 , -100.0 };        // center point of the rectangle
    v3_t rect_normal_v = { 0 , 0 , 100.0 };         // normal vector of the rectangle

    float rect_wth2_f = 50.0;                       // half width of the rectangle
    float rect_hth2_f = 40.0;                       // half height of the rectangle

    // create corresponding grid in screen and in camera window 

    int grid_cols_i = 100;

    v3_t window_d = { 100.0 , 100.0 * screen_d.y / screen_d.x };    // camera window dimensions

    float screen_step_size_f = screen_d.x / grid_cols_i;    // screen block size
    float window_step_size_f = window_d.x / grid_cols_i;    // window block size

    int grid_rows_i = screen_d.y / screen_step_size_f;

    // create rays going through the camera window quad starting from the top left corner

    for ( int row_i = 0 ; row_i < grid_rows_i ; row_i++ )
    {
        for ( int col_i = 0 ; col_i < grid_cols_i ; col_i++ )
        {

            float window_grid_x = - window_d.x / 2.0 + window_step_size_f * col_i;
            float window_grid_y =   window_d.y / 2.0 - window_step_size_f * row_i;

            v3_t window_grid_v = { window_grid_x , window_grid_y , 0.0 };

            // ray/pixel location on screen

            int screen_grid_x = screen_step_size_f * col_i;
            int screen_grid_y = screen_step_size_f * row_i;

            // pixel location in framebuffer

            long location = ( screen_grid_x + vinfo.xoffset ) * ( vinfo.bits_per_pixel / 8 ) + 
                            ( screen_grid_y + vinfo.yoffset ) * finfo.line_length;

            // project ray from camera focus point through window grid point onto the rectangle's plane
            // line - plane intersection point calculation with dot products :
            // C = A + dot(AP,N) / dot( AB,N) * AB

            v3_t AB = v3_sub( window_grid_v , camera_focus_p );
            v3_t AP = v3_sub( rect_center_p , camera_focus_p );

            float dotABN = v3_dot( AB , rect_normal_v );
            float dotAPN = v3_dot( AP , rect_normal_v );

            if ( dotABN > FLT_MIN * 10.0 || dotABN < - FLT_MIN * 10.0 )
            {
                // if the dot product is not close to zero the ray is not parallel to the plane, there is an intersection

                float scale_f = dotAPN / dotABN;
                v3_t isect_p = v3_add( camera_focus_p , v3_scale( AB , scale_f ) );

                // let's find if intersection point is in the rectangle

                // project intersection point to center line of rectangle
                // C = A + dot(AP,AB) / dot(AB,AB) * AB

                AB = v3_sub( rect_center_p , rect_side_p );
                AP = v3_sub( isect_p , rect_side_p );

                float dotAPAB = v3_dot( AP , AB );
                float dotABAB = v3_dot( AB , AB );

                scale_f = dotAPAB / dotABAB;

                v3_t proj_p = v3_add( rect_side_p , v3_scale( AB , scale_f ) );

                // check squared horizontal and vertical distances of the intersection point from the rectangle center

                float dist_x = v3_length_squared( v3_sub( proj_p , rect_center_p ) );
                float dist_y = v3_length_squared( v3_sub( proj_p , isect_p ) );

                // compare them with the squared half width and half height of the rectangle

                if ( dist_x < rect_wth2_f * rect_wth2_f && 
                     dist_y < rect_hth2_f * rect_hth2_f )
                {
                    // intersection point is inside the rectangle, we draw it white

                    *((uint32_t*)(fbp + location)) = pixel_color( 0xFF, 0xFF, 0xFF, &vinfo );
                }
                else
                {
                    // intersection point is outside the rectangle, we draw it blue

                    *((uint32_t*)(fbp + location)) = pixel_color( 0x00, 0x00, 0xFF , &vinfo );          
                }

            }

        }

    }

    return 0;
}

In the upper part of the code we define a vector structure with three members for the three dimensions, and a few functions to manipulate them ( addition, subtraction, dot product, scaling and squared length ).

The first part of the main function deals with the Linux framebuffer, we map it to a uint8_t buffer. There's not much to explain in this, I just copied this part of the code from a Linux pro's tutorial.

The second part is the ray tracing itself. We set up our world here, create the grid on the screen and on the camera window and start iterating. The most complicated parts are the line - plane ( ray - rectangle ) intersection and the projection of a point onto the rectangle's center line. Both are done with dot product calculations, in slightly different ways.

Ray-Rectangle intersection in our scene setup :

[figure: ray-rectangle intersection]

Point to Line projection :

[figure: point to line projection]

After we have both points we can easily check whether the line intersects the rectangle or not, and we can draw our pixels.
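If it's easier to see in one place, here is the whole test pulled out into a standalone helper. This is just a sketch built from the v3_* functions and the FLT_MIN check in the listing above, not a separate implementation; the real program runs the same steps inline in the pixel loop :

/* returns 1 if the ray from origin through point hits the rectangle, 0 otherwise */
/* relies on the v3_t type and the v3_* helpers defined above */

int ray_hits_rect( v3_t origin , v3_t point ,
                   v3_t rect_center_p , v3_t rect_side_p , v3_t rect_normal_v ,
                   float rect_wth2_f , float rect_hth2_f )
{
    v3_t AB = v3_sub( point , origin );             // ray direction
    v3_t AP = v3_sub( rect_center_p , origin );

    float dotABN = v3_dot( AB , rect_normal_v );
    float dotAPN = v3_dot( AP , rect_normal_v );

    // ray is parallel to the rectangle's plane, no intersection

    if ( dotABN < FLT_MIN * 10.0 && dotABN > - FLT_MIN * 10.0 ) return 0;

    // line - plane intersection : C = A + dot(AP,N) / dot(AB,N) * AB

    v3_t isect_p = v3_add( origin , v3_scale( AB , dotAPN / dotABN ) );

    // project the intersection point onto the rectangle's horizontal center line
    // C = A + dot(AP,AB) / dot(AB,AB) * AB

    AB = v3_sub( rect_center_p , rect_side_p );
    AP = v3_sub( isect_p , rect_side_p );

    v3_t proj_p = v3_add( rect_side_p , v3_scale( AB , v3_dot( AP , AB ) / v3_dot( AB , AB ) ) );

    // compare squared horizontal and vertical distances with the squared half extents

    float dist_x = v3_length_squared( v3_sub( proj_p , rect_center_p ) );
    float dist_y = v3_length_squared( v3_sub( proj_p , isect_p ) );

    return dist_x < rect_wth2_f * rect_wth2_f &&
           dist_y < rect_hth2_f * rect_hth2_f;
}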

How to run it :

Just copy the code above, create a file called raytrace.c and paste everything in it. Then compile the file : 'gcc raytrace.c -o raytrace'

And then run it : './raytrace'

On some Linux distros you may need root access rights to access the framebuffer, in that case type : 'sudo ./raytrace'

If you don't see anything you probably have to switch to console mode because your window manager interferes with your framebuffer. Just switch to a console with CTRL+ALT+F1.

You should see this :

[figure: the rendered output]

In the next part we make our camera movable, use squares instead of dots and we add light sources!!!


The Non-Sysadmin Manifesto

I’m a computer user. 90 percent of people in the developed countries are computer users in some way.

Every second spent fighting the computer instead of just using it is a waste of time.

I’m a software engineer. Every second spent fighting the computer instead of just coding is a waste of time.

The software industry is pretty much a mess no matter what architecture and operating system you use, because you have to waste a lot of time and energy.

But we can make it better!

We need absolute standalone applications, independent from the OS version and the shared library versions the OS has

Problems : App doesn’t start, quits or hangs immediately or during runtime, installation fails because of missing software or dependency hell

Apps should contain EVERY dependency that is needed to run the app on the specific architecture. App size is not a problem anymore, we have terabyte SSDs and gigabyte/s networks now.

Apps should run out of the box on the target architecture regardless of the OS version. Of course it can be tricky with hardware drivers that are manufacturer-dependent, but OSes should use industry-standard APIs to let programs communicate with hardware and should hide drivers to make this possible.

Of course this can raise security concerns — what if somebody replaces libs and hacks your memory management? Well, it is possible on current OS’s already. You should know where your app is coming from.

What OSes can do to make using a computer safer is provide total transparency! The OS should log and show you in a human readable way where the app is connecting to ( TCP/IP requests ) and its actual connections, and what hardware it is using; it should ask for permission to use specific hardware if the user wants it to, and ask for permission to connect to remote hosts if the user wants it to. Apps should report their actual progress to the OS and to the user if they are doing something CPU intensive.

With these rules computers can be perfect, safe and stable production instruments.

We need absolute standalone development projects

Modern software projects are a hell to set up. Dependencies, dependencies of dependencies, parameters, settings scattered between a thousand config files, makefiles, old project files using old IDE versions, old scripts using old script language versions and old dependencies…

Why does programming have to be like this? Why don’t we make self-contained development projects containing everything? The closest thing to this is maintaining a virtual machine image with everything installed, the proper OS version, IDE version, all downloaded libraries, but it’s a drag.

We should make dev projects download/install/move in one package with everything that is needed for immediate development/deployment.

In addition, just as apps should become first class citizens on the OS, code should be the first class citizen in development projects; we should do more setup/environment checking in the code itself and write fewer setup scripts and config files.

I’m dreaming of a world where everything is out-of-the-box, straightforward and just working. Let’s do this!


MilGraPi

Preparing a Raspberry Pi for OpenCV development is really time consuming, OpenCV takes hours to compile and a lot of other things have to be set up, so I just share my SD Card image here to speed up Raspberry OpenCV development for others.

For Raspberry Pi 3 Model B :

I shrank the root partition to 7GB to make it suitable for smaller SD cards. It has only 300MB of free space, so you'd better expand it to fit the target SD card. You can do this right on your Raspberry with an additional USB SD card stick and gparted. User/pass is pi/raspberry. After startup it autologins directly into OpenBox. Right click -> Terminal emulator to open a terminal. To test and run the OpenCV examples type "workon cv" to activate the python virtual environment, go into "/home/pi/Desktop/OpenCV-Face-Recognition-master/FacialRecognition" and type "python 03_face_recognition.py". If you have a Raspberry camera installed and enabled with raspi-config, a camera window should pop up and face detection should start. For a USB camera you have to modify the scripts a little.

For Raspberry Pi 3 Model B+ :

It is a 16 GByte SD image file in Mac dmg format, balenaEtcher can handle it. User/pass is pi/raspberry. After login you can start the GUI by typing startx. Right click -> Terminal emulator to open a terminal. To test and run the OpenCV examples type "workon cv" to activate the python virtual environment, go into "/home/pi/Desktop/OpenCV-Face-Recognition-master/FacialRecognition" and type "python 03_face_recognition.py". If you have a Raspberry camera installed and enabled with raspi-config, a camera window should pop up and face detection should start. For a USB camera you have to modify the scripts a little.

If you find this image useful please donate at the top of the page.

What does it contain

Base System

  • Raspbian Lite

GUI

  • openbox for window manager
  • tint2 for taskbar
  • slim for autologin - Model B image only
  • pcmanfm for file manager - Model B image only
  • chromium for stack overflow

Dev Tools

  • lxterminal for terminal
  • vim/nano for python
  • codeblocks for c/c++ development - Model B image only
  • python for opencv development
  • opencv 4.0 for computer vision
  • picamera python module for the raspberry camera
  • opencv face recognition examples - Model B image only
  • steamberry face and motion recognition - Desktop/SteamBerryMotionDetector and SteamBerryFaceDetector

Games

  • Scratch, Termite, Cortex, Brawl for short rests ( enable full KMS OpenGL support in raspi-config to play them ) - Model B only

Download


Cortex And Brawl Is Open Source

And finally, my second and third games, Cortex and Brawl, are also open source!!! Check them out.

https://github.com/milgra/cortex

https://github.com/milgra/brawl


KineticUI Is Open Source

I really like UI renderers. I really like beautiful UIs. I'm really not satisfied with the current state of UI. With today's technology, super sophisticated animations and font animation can be done and should be used. This is my take on a modern, smooth UI renderer.

Check it out on github

Demo :


WebConfThreeD Is Open Source

For a company hackathon I came up with the idea of a 3D web conference tool. I wanted it to look like a control room where you can see everything at once and can zoom to individual elements. Here it is :

Check it out on github

Demo :


PreziThreeD Is Open Source

I was always curious why prezi didn't create a 3D version of their app. It turned out that they experimented with it and dropped it in the end. So I created my own version and I really like how it looks, how "rooms" can improve the mood of the presentation. But it complicates presentation creation big time for sure.

Check it out on github

I've created a demo presentation about my life! :)


Remotion Is Open Source

One of my earlier projects with moderate success was to make a wiimote out of an iphone with the help of the accelerometers. I created an iphone app and a small macos host application, they find each other via bonjour, and the host injects the motion information as mouse coordinates. It's six years old, it might not compile on newer macos versions.

Check it out on github


Issue Report

I received this on github today, I love it! :)

[image: the issue]


Termite Is Open Source

The time has come, finally I was able to release the full multiplatform, multi-store-integrated code of Termite on github to help everyone who is in the same boat. I've been working on the multiplatformization of Termite for months now and I was struggling a lot of the time. The plan was to finish it up quickly but life ( and the systems created by programmers ) had other plans! I got stuck many times on issues that seemed so tiny at first glance and I had to spend hours figuring them out.

The code, compilation&deployment guide and tips&tricks can be found here :

https://github.com/milgra/termite

I did all development on an early 2016 12'' MacBook with a fanless intel core m7, 8GB RAM and 512GB SSD and it kicked ass! It runs Windows and Linux smoothly in VMWare, the game ran at 60 fps inside the virtual machine. XCode/CodeBlocks building is also superfast.

What kills it is Android Studio. I don't think CPU development will ever reach a state where Java desktop applications run smoothly. And it is not only slow because of Java, the slowness is amplified by the gradle scripts that run between the IDE and the project, so there is a very loose connection between the IDE and the code. Actually my general feeling about Android development is that there is a very loose connection between everything and you don't know what is really happening and why it is happening. The learning curve is super steep. I can imagine developers who gave 5-6 years of their lives to android development and have a mostly clear picture of the whats and whys, but I'm not planning to be one :) Anyway, great respect to android developers, it seems to be the biggest suck factor in the industry nowadays.

To be a decent desktop operating system Linux needs a default GUI and a simple way to install binary/closed source applications. GNOME is okay but all developers should stand behind it and push it together towards perfection, and a bundle-based application structure would be awesome ( like on MacOS ) without dependency magic. For open-source programs apt-get install is fun until you have to add new sources to the sources list or deal with older versions with removed dependencies, etc. Compiling from source is also fun, for sysadmins and time-millionaires :)

iOS and its APIs became way too complicated. Doing autolayout in Interface Builder is a lifelong journey, things that were super simple back in 2010 are now super complicated ( hiding the status bar, rotation, etc ), and entitlements files are everywhere for increased security. The biggest pain was an fopen issue, it worked a few years earlier but now it only creates the file and then it cannot be read/written. It turned out that fopen on iOS creates files with 0000 permissions instead of 0666, which caused a 2-hour head scratching. Using open with explicit permissions solves the problem, but why did fopen become obsolete?
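For reference, this is roughly what the workaround looks like, a minimal sketch with a made-up file name : create the file with open and explicit permissions, then wrap the descriptor so the usual stdio calls still work.

#include <fcntl.h>     // for open()
#include <stdio.h>     // for fdopen(), fprintf(), fclose()
#include <sys/stat.h>  // for the permission bits

int main( )
{
    // create the file with explicit read/write permissions
    // instead of relying on fopen's default mode, which ended up as 0000 on iOS

    int fd = open( "save.dat" , O_RDWR | O_CREAT , S_IRUSR | S_IWUSR );

    if ( fd < 0 ) return 1;

    FILE *file = fdopen( fd , "w+" );   // wrap the descriptor for stdio

    fprintf( file , "hello\n" );
    fclose( file );                     // also closes the underlying descriptor

    return 0;
}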

Raspberry is a super cool little machine. It was super easy to port the game to it, runs well, I love it.

Steamworks is a mess. The API is a mess and the site is a mess. I spent days clicking through the site and I still have no idea how to go to the steamworks admin/store admin/the main page of the application with three clicks, I think it's impossible. Settings are scattered everywhere and the whole thing is backed by Perforce!!! You have to publish your changes every time to Perforce, it's insane. It's like a high school project. The documentation is not really talkative, I used the Steamworks sample project, the documentation and google together to fix issues but I wasn't prepared for random persistence errors which can be solved by disabling and re-enabling Inventory Service for example. But they are the biggest, have infinite money, they can do this :)

The best OS for multiplatform development is definitely MacOS. It puts everything under your ass out of the box and then gets out of your way. It has everything that linux has and everything that windows has and much much more.
