Posted by Milan Toth

SteamBerry Suite

steamberry

This year I opened the café of my dreams : it is part café, part co-working space, part geeky-nerdy community space with Raspberry Pis, musical instruments, Japanese kimonos and live cat and puppy streams. I use Raspberry Pis for everything : for cat/puppy streaming, for synthesizing sound for the MIDI keyboard, for playing music from YouTube, for Scratch for kids and for motion/face detection. Motion/face detection is mainly for educational purposes : you can check out how it works on the upper level, and you can store your face/name if you want to save it for eternity! :)

The code is Python-based and uses OpenCV. For a pre-installed Raspberry Pi image check out this post :

http://milgra.com/milgrapi.html

It works with both the Raspberry Pi camera and USB cameras.

The main function grabs the camera image in an infinite loop and sends it to the motion and face detector functions. The motion detector checks the differences between the previous and the current image; the face detector uses OpenCV's face detection. If space is pressed it starts saving the current face into the data model and writes out the training images and the id-name pair under the dataset folder. If you want to give a specific name to a face, you have to edit idtoname.txt. After a restart it checks for a model file and an idtoname file and loads them if they exist.
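The core of the motion detector, comparing the previous and the current frame, can be sketched in plain C ( the actual project is Python/OpenCV; frame_diff_score and the raw grayscale buffers here are just an illustration of the idea, not the project's code ):

```c
#include <stddef.h>     // for size_t
#include <stdint.h>     // for uint8_t

/* returns the fraction of pixels whose grayscale value changed by more
   than threshold between two frames of size pixels each */

float frame_diff_score( const uint8_t* prev , const uint8_t* curr , size_t size , uint8_t threshold )
{
    size_t changed = 0;

    for ( size_t i = 0 ; i < size ; i++ )
    {
        int diff = ( int ) curr[ i ] - ( int ) prev[ i ];

        if ( diff < 0 ) diff = -diff;
        if ( diff > threshold ) changed++;
    }

    return ( float ) changed / ( float ) size;
}
```

If the score goes above some tuned ratio the frame is treated as containing motion; the OpenCV version does essentially this, plus blurring and thresholding over whole regions.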

check it out on github

Comment on SteamBerry Suite

Ray Tracing Part Four

Okay, we have diffuse and specular colors, refraction and reflection but the algorithm is not generic. We need a recursive function that follows the ray throughout its lifetime, splits up the ray into multiple rays in case of refraction and reflection and averages the resulting colors.

#include <math.h>       // for M_PI , M_PI_2
#include <stdint.h>     // for uint32_t

// returns ray color recursively

uint32_t get_ray_color( v3_t s_p , v3_t e_p , rect_t* source_rect , uint32_t* iterations_u )
{

    uint32_t color = bckgrnd_col_u;

    if ( (*iterations_u)++ > 30 ) return color;

    nearest_res_t nearest = get_nearest_rect( s_p , e_p , source_rect );

    if ( nearest.rect != NULL )
    {

        // check for direct connection with light for diffuse color

        nearest_res_t blocker_r = get_nearest_rect( nearest.isect_p , light_p , nearest.rect );

        if ( blocker_r.rect == NULL ) 
        {
            // diffuse color

            color = nearest.rect->col_diff_u;

            // specular color

            // mirror light point on normal vector to get perfect reflection

            v3_t light_mirr_p = point_line_mirror( nearest.isect_p , v3_add( nearest.isect_p , nearest.rect->norm_v ) , light_p );

            v3_t tofocus = v3_sub( s_p , nearest.isect_p );
            v3_t tomirrored = v3_sub( light_mirr_p , nearest.isect_p );

            float angle_f = v3_angle( tomirrored , tofocus );

            // the smaller the angle the closer the mirrored ray and the real ray are - reflection 

            float colorsp_f = ( float ) 0xFF * ( ( M_PI - angle_f ) / M_PI );
            uint32_t colorsp_u = ( uint8_t ) colorsp_f;
            colorsp_u = colorsp_u << 24 | colorsp_u << 16 | colorsp_u << 8 | 0xFF;

            color = color_average( color , colorsp_u );

        }
        else 
        {
            // shadow

            color = color_multiply( nearest.rect->col_diff_u , 1.0 - blocker_r.rect->transparency_f );
        }

        color = color_multiply( color , nearest.rect->transparency_f );

        // if rect is transparent calculate refraction and check for further intersections

        if ( nearest.rect->refraction_f > 1.0 )
        {

            v3_t tofocus = v3_sub( s_p , nearest.isect_p );
            float angle = v3_angle( tofocus , nearest.rect->norm_v );
            float length = v3_length( tofocus );

            // get refraction angle
            // n1 * sin( Theta1 ) = n2 * sin( Theta2 ), n1 is 1 ( vacuum )
            // Theta2 = asin( sin( Theta1 ) / n2 ), and asin( x ) = M_PI_2 - acos( x )

            float theta = M_PI_2 - acosf( sinf( angle ) / nearest.rect->refraction_f );

            // rotate tofocus vector in new position

            v3_t cam_to_normal_p = point_line_projection( nearest.isect_p , v3_add( nearest.isect_p , nearest.rect->norm_v ) , s_p );

            v3_t cam_normal_ycomp_v = v3_sub( cam_to_normal_p , s_p );
            v3_t cam_normal_xcomp_v = v3_sub( nearest.isect_p , cam_to_normal_p );

            // get needed x length and y length for theta

            float y_d = sinf( theta ) * length;
            float x_d = cosf( theta ) * length;

            cam_normal_xcomp_v = v3_resize( cam_normal_xcomp_v , x_d );
            cam_normal_ycomp_v = v3_resize( cam_normal_ycomp_v , y_d );

            v3_t newtarget_p = v3_add( cam_normal_xcomp_v , cam_normal_ycomp_v );
            newtarget_p = v3_add( nearest.isect_p , newtarget_p );

            uint32_t refr_color_u = get_ray_color( nearest.isect_p , newtarget_p , nearest.rect , iterations_u );

            if ( refr_color_u != bckgrnd_col_u ) color = color_average( color , refr_color_u );

        }

        // reflect ray on intersection point and check for further intersections

        if ( nearest.rect->reflection_f > 0.0 )
        {

            v3_t light_mirr_p = point_line_mirror( nearest.isect_p , v3_add( nearest.isect_p , nearest.rect->norm_v ) , s_p );

            uint32_t refl_color_u = get_ray_color( nearest.isect_p , light_mirr_p , nearest.rect , iterations_u );

            if ( refl_color_u != bckgrnd_col_u ) color = color_average( color , refl_color_u );

        }

    }

    return color;

}

As you can see, not much happened compared to the previous version : I just pulled the ray-handling part out of the screen window point iteration loop and turned it into a more generic function. To make things more spectacular, the arrow keys now move the first rectangle instead of the camera, so you can see shadows, refraction and reflection better on the second rectangle.

Final result :

raytrace

Download the code : raytrace_part_four.c

Comment on Ray Tracing Part Four

Ray Tracing Part Three

We will add another rectangle, shadows and an animated camera in this part.

To use an arbitrary number of rectangles we need a rectangle structure and an array that holds them. A rectangle is defined by its base point, side point and normal vector. We pre-calculate the width and height of the rectangle. We also need members for transparency and for the diffuse and specular colors.

#include <stdint.h>     // for uint32_t

typedef struct
{

    v3_t base_p;
    v3_t side_p;
    v3_t norm_v;

    float wth;
    float hth;

    float transparency_f;

    uint32_t col_diff_u;
    uint32_t col_spec_u;

} rect_t;

int rect_cnt_i = 2;
rect_t rectangles[ 2 ];

With this we can move intersection detection into a separate function that returns the nearest rectangle intersecting the given line.
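The post doesn't show the result type of get_nearest_rect; a minimal definition consistent with how it is used below could look like this ( the field names match the snippets, the rest is my assumption ):

```c
#include <stddef.h>     // for NULL

typedef struct { float x , y , z; } v3_t;   // the vector type from part one
typedef struct rect_s rect_t;               // the rectangle struct above

typedef struct
{

    rect_t* rect;       // nearest intersecting rectangle, NULL when nothing was hit
    v3_t isect_p;       // intersection point on that rectangle

} nearest_res_t;
```

Zero-initializing it ( `{ 0 }` ) gives the "no hit" state the callers check for.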

#include <float.h>      // for FLT_MAX

nearest_res_t get_nearest_rect( v3_t start_p , v3_t end_p , rect_t* exclude_r )
{

    nearest_res_t result = { 0 };
    float dist_f = FLT_MAX;

    if ( rect_cnt_i > 0 )
    {

        // iterate through all rectangles

        for ( int index_r = 0 ; index_r < rect_cnt_i ; index_r ++ )
        {

            rect_t* rect = &rectangles[ index_r ];

            if ( rect == exclude_r ) continue;

            // project ray from camera focus point through window grid point to square plane

            v3_t isect_p = line_plane_intersection( start_p , end_p , rect->base_p , rect->norm_v );

            if ( isect_p.x != FLT_MAX )
            {

                // let's find if intersection point is in the rectangle

                v3_t proj_p = point_line_projection( rect->side_p , rect->base_p , isect_p );

                // check x and y distance of intersection point from square center

                float dist_x = v3_length_squared( v3_sub( proj_p , rect->base_p ) );
                float dist_y = v3_length_squared( v3_sub( proj_p , isect_p ) );

                // compare squared distances with squared distances of rectangle

                if ( dist_x < ( rect->wth / 2.0 ) * ( rect->wth / 2.0 ) && 
                     dist_y < ( rect->hth / 2.0 ) * ( rect->hth / 2.0 ) )
                {
                    // cross point is inside square, let's calculate its color based on light reflection angle

                    float distance = v3_length_squared( v3_sub( isect_p , start_p ) );

                    if ( distance < dist_f )
                    {

                        result.rect = rect;
                        result.isect_p = isect_p;

                        dist_f = distance;

                    }

                }

            }

        }

    }

    return result;

}

To introduce shadows we need to modify the window grid iteration loop a little. If there is an obstacle ( a rectangle ) between the intersection point and the light, we set that point to the shadow color; if not, we calculate the diffuse and specular colors.

#include <math.h>       // for M_PI

// create rays going through the camera window quad starting from the top left corner

for ( int row_i = 0 ; row_i < grid_rows_i ; row_i++ )
{

    for ( int col_i = 0 ; col_i < grid_cols_i ; col_i++ )
    {

        v3_t window_grid_v = camera_target_p;

        window_grid_v = v3_add( window_grid_v , v3_scale( window_stepx_v , grid_cols_i / 2 - col_i ) );
        window_grid_v = v3_add( window_grid_v , v3_scale( window_stepy_v , - grid_rows_i / 2 + row_i ) );

        // ray/pixel location on screen

        int screen_grid_x = screen_step_size_f * col_i;
        int screen_grid_y = screen_step_size_f * row_i;

        // check for intersection

        uint32_t color = 0x000000FF;    // background color

        nearest_res_t result = get_nearest_rect( camera_focus_p , window_grid_v , NULL );

        if ( result.rect != NULL )
        {

            // check for direct connection with light for diffuse color

            nearest_res_t blocker_r = get_nearest_rect( result.isect_p , light_p , result.rect );

            if ( blocker_r.rect == NULL ) 
            {
                // diffuse color

                color = result.rect->col_diff_u;

                // specular color

                // mirror light point on normal vector to get perfect reflection

                v3_t light_mirr_p = point_line_mirror( result.isect_p , v3_add( result.isect_p , result.rect->norm_v ) , light_p );

                v3_t tofocus = v3_sub( camera_focus_p , result.isect_p );
                v3_t tomirrored = v3_sub( light_mirr_p , result.isect_p );

                float angle = v3_angle( tomirrored , tofocus );

                // the smaller the angle the closer the mirrored ray and the real ray are - reflection 

                float colorsp_f = ( float ) 0xFF * ( ( M_PI - angle ) / M_PI );
                uint32_t colorsp_u = ( uint8_t ) colorsp_f;
                colorsp_u = colorsp_u << 24 | colorsp_u << 16 | colorsp_u << 8 | 0xFF;

                color = color_average( color , colorsp_u );

            }
            else 
            {
                // shadow

                color = 0x111111FF;
            }

            // if rect is transparent calculate refraction and check for further intersections

            // reflect ray on intersection point and check for further intersections

        }

        framebuffer_drawsquare( screen_grid_x , screen_grid_y , screen_step_size_f , color );

    }

}

Finally let's make the camera movable with the arrow keys. Let's put the whole grid iteration in an infinite loop and check for keypress in every iteration.


int code = getch( );

if ( code == 67 ) camera_focus_p.x += 10.0;
if ( code == 68 ) camera_focus_p.x -= 10.0;

Final result :

raytrace

Download the code : raytrace_part_three.c

Comment on Ray Tracing Part Three

Ray Tracing Part Two

Let's create a movable camera first. In the first part we fixed the camera to the z axis and the camera window was lying on the xy plane, so it was simple to build up the camera window grid. But if we want to move the focus point or the target point to an arbitrary place in 3D space, we have to build up the camera window grid in 3D space. We get the camera window normal by subtracting the target point from the focus point. To get the camera window's horizontal axis we take the cross product of the up vector ( 0 , 1 , 0 ), which is the xz plane's normal, and the window normal. With that, another cross product gives us the vertical axis. Once we have these two vectors we resize them to the window grid stepping size and use them to build up any point on the window grid.

raytrace


v3_t camera_focus_p = { 40.0 , 20.0 , 100.0 };
v3_t camera_target_p = { 20.0 , 0.0 , 0.0 };

// camera window normal
v3_t window_normal_v = v3_sub( camera_focus_p , camera_target_p );
// xz plane normal
v3_t xzplane_normal_v = { 0.0 , 1.0 , 0.0 };
// create vector that is on the camera window plane and parallel with the xz plane ( horizontal screen axis )
v3_t window_haxis_v  = v3_cross( window_normal_v , xzplane_normal_v );
// create vector that is on the window plane and perpendicular to both the window normal and the previous vector ( vertical axis )
v3_t window_vaxis_v  = v3_cross( window_normal_v , window_haxis_v );
// resize horizontal and vertical screen vector to window step size
v3_t window_stepx_v = v3_resize( window_haxis_v , window_step_size_f );
v3_t window_stepy_v = v3_resize( window_vaxis_v , window_step_size_f );

// create rays going through the camera window quad starting from the top left corner

for ( int row_i = 0 ; row_i < grid_rows_i ; row_i++ )
{

    for ( int col_i = 0 ; col_i < grid_cols_i ; col_i++ )
    {

        v3_t window_grid_v = camera_target_p;
        window_grid_v = v3_add( window_grid_v , v3_scale( window_stepx_v , grid_cols_i / 2 - col_i ) );
        window_grid_v = v3_add( window_grid_v , v3_scale( window_stepy_v , - grid_rows_i / 2 + row_i ) );

Let's draw squares instead of dots to make the result better looking.

#include <stdint.h>     // for uint8_t , uint32_t

void framebuffer_drawsquare( int sx , int sy , int size , uint32_t color )
{
    for ( int y = sy ; y < sy + size ; y++  )
    {
        for ( int x = sx ; x < sx + size ; x++ )
        {

            long location = ( x + vinfo.xoffset ) * ( vinfo.bits_per_pixel / 8 ) + 
                            ( y + vinfo.yoffset ) * finfo.line_length;

            *( ( uint32_t* )( fbp + location ) ) = pixel_color( 
                    ( color >>24 ) & 0xFF , 
                    ( color >> 16 ) & 0xFF , 
                    ( color >> 8 ) & 0xFF, &vinfo );

        }
    }
}

Finally, let's create a light source and a specular-ish reflection of the light from the surface. It's quite simple : we mirror the light ray on the surface normal and check if the resulting vector's angle is close enough to the angle of the focus point - intersection point vector.

#include <math.h>       // for M_PI

// set up light

v3_t light_p = { 0.0 , 30.0 , 0.0 };

//
...
//

v3_t light_proj_p = point_line_projection( isect_p , v3_add( isect_p , rect_normal_v ) , light_p );
v3_t light_mirr_p = v3_sub( light_proj_p , light_p );

light_mirr_p = v3_scale( light_mirr_p , 2.0 );
light_mirr_p = v3_add( light_p , light_mirr_p );

v3_t tofocus = v3_sub( camera_focus_p , isect_p );
v3_t tomirrored = v3_sub( light_mirr_p , isect_p );

float angle = v3_angle( tomirrored , tofocus );

// the smaller the angle the closer the mirrored ray and the real ray are - reflection 

float colorf = ( float ) 0xFF * ( ( M_PI - angle ) / M_PI );
uint8_t coloru = ( uint8_t ) colorf;
uint32_t color = coloru << 24 | coloru << 16 | coloru << 8 | 0xFF;

framebuffer_drawsquare( screen_grid_x , screen_grid_y , screen_step_size_f , color );

Final result :

raytrace

Download the code : raytrace_part_two.c

Comment on Ray Tracing Part Two

Ray Tracing Part One

Let's create a simple ray tracing engine just for fun and to showcase how simple, and how CPU intensive, the underlying algorithm is compared to rasterization. We won't use any libraries, frameworks, GPU drivers or other terrible things; let's stick with the Linux framebuffer and the standard C library. It can still be done within a hundred lines of code.

We need two things in our scene : a camera, defined by a focus point and a camera window, and some geometry, for example a rectangle. The process is simple : we create lines ( rays ) starting from the camera focus point and going through the camera window, starting from the top left corner with a specific stepping, and we check whether these rays collide with the geometry ( whether the lines intersect the rectangle ). If there is an intersection, the screen point will be white; if not, it will be blue for demonstration purposes. This is what we do for a start.

Check out the code :

#include <stdio.h>
#include <stdint.h>     // for uint8_t and uint32_t
#include <fcntl.h>      // for open()
#include <float.h>      // for FLT_MAX
#include <sys/mman.h>   // for mmap()
#include <sys/ioctl.h>  // for framebuffer access
#include <linux/fb.h>   // for framebuffer access

typedef struct _v3_t
{
    float x, y, z;
} v3_t;

/* add two vectors */

v3_t v3_add( v3_t a , v3_t b )
{
    v3_t v;

    v.x = a.x + b.x;
    v.y = a.y + b.y;
    v.z = a.z + b.z;

    return v;
}

/* substracts b from a */

v3_t v3_sub( v3_t a , v3_t b )
{
    v3_t v;

    v.x = a.x - b.x;
    v.y = a.y - b.y;
    v.z = a.z - b.z;

    return v;
}

/* creates dot product of two vectors */

float v3_dot( v3_t a , v3_t b )
{
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

/* scales vector */

v3_t v3_scale( v3_t a , float f )
{
    v3_t v;

    v.x = a.x * f;
    v.y = a.y * f;
    v.z = a.z * f;

    return v;
}

/* returns squared length to avoid square root operation */

float v3_length_squared( v3_t a )
{
    return a.x * a.x + a.y * a.y + a.z * a.z;
}

/* creates pixel color suitable for the actual screen info */

uint32_t pixel_color( uint8_t r, uint8_t g, uint8_t b, struct fb_var_screeninfo *vinfo )
{
    return  ( r << vinfo->red.offset ) | 
            ( g << vinfo->green.offset ) | 
            ( b << vinfo->blue.offset );
}

int main( )
{

    // get framebuffer

    struct fb_fix_screeninfo finfo;
    struct fb_var_screeninfo vinfo;

    int fb_fd = open( "/dev/fb0" , O_RDWR );

    // get and set screen information

    ioctl( fb_fd, FBIOGET_VSCREENINFO, &vinfo);

    vinfo.grayscale = 0;
    vinfo.bits_per_pixel = 32;

    ioctl( fb_fd, FBIOPUT_VSCREENINFO, &vinfo );
    ioctl( fb_fd, FBIOGET_VSCREENINFO, &vinfo );
    ioctl( fb_fd, FBIOGET_FSCREENINFO, &finfo );

    long screensize_l = vinfo.yres_virtual * finfo.line_length;

    uint8_t *fbp = mmap( 0 , screensize_l , PROT_READ | PROT_WRITE , MAP_SHARED , fb_fd , ( off_t ) 0 );

    v3_t screen_d = { vinfo.xres , vinfo.yres };    // screen dimensions

    // start ray tracing

    // set up camera

    v3_t camera_focus_p = { 0.0 , 0.0 , 100.0 };        // camera focus point

    // set up geometry

    v3_t rect_side_p = { -50.0 , 0.0 , -100.0 };    // left side center point of square
    v3_t rect_center_p = { 0 , 0 , -100.0 };        // center point of square
    v3_t rect_normal_v = { 0 , 0 , 100.0 };         // normal vector of square

    float rect_wth2_f = 50.0;                       // half width of square
    float rect_hth2_f = 40.0;                       // half height of square

    // create corresponding grid in screen and in camera window 

    int grid_cols_i = 100;

    v3_t window_d = { 100.0 , 100.0 * screen_d.y / screen_d.x };    // camera window dimensions

    float screen_step_size_f = screen_d.x / grid_cols_i;    // screen block size
    float window_step_size_f = window_d.x / grid_cols_i;    // window block size

    int grid_rows_i = screen_d.y / screen_step_size_f;

    // create rays going through the camera window quad starting from the top left corner

    for ( int row_i = 0 ; row_i < grid_rows_i ; row_i++ )
    {
        for ( int col_i = 0 ; col_i < grid_cols_i ; col_i++ )
        {

            float window_grid_x = - window_d.x / 2.0 + window_step_size_f * col_i;
            float window_grid_y =   window_d.y / 2.0 - window_step_size_f * row_i;

            v3_t window_grid_v = { window_grid_x , window_grid_y , 0.0 };

            // ray/pixel location on screen

            int screen_grid_x = screen_step_size_f * col_i;
            int screen_grid_y = screen_step_size_f * row_i;

            // pixel location in framebuffer

            long location = ( screen_grid_x + vinfo.xoffset ) * ( vinfo.bits_per_pixel / 8 ) + 
                            ( screen_grid_y + vinfo.yoffset ) * finfo.line_length;

            // project ray from camera focus point through window grid point to square plane
            // line - plane intersection point calculation with dot products :
            // C = A + dot(AP,N) / dot( AB,N) * AB

            v3_t AB = v3_sub( window_grid_v , camera_focus_p );
            v3_t AP = v3_sub( rect_center_p , camera_focus_p );

            float dotABN = v3_dot( AB , rect_normal_v );
            float dotAPN = v3_dot( AP , rect_normal_v );

            if ( dotABN > FLT_MIN * 10.0 || dotABN < - FLT_MIN * 10.0 )
            {
                // if the dot product is not close to zero the ray is not parallel to the plane, so there is an intersection

                float scale_f = dotAPN / dotABN;
                v3_t isect_p = v3_add( camera_focus_p , v3_scale( AB , scale_f ) );

                // let's find if intersection point is in the rectangle

                // project intersection point to center line of rectangle
                // C = A + dot(AP,AB) / dot(AB,AB) * AB

                AB = v3_sub( rect_center_p , rect_side_p );
                AP = v3_sub( isect_p , rect_side_p );

                float dotAPAB = v3_dot( AP , AB );
                float dotABAB = v3_dot( AB , AB );

                scale_f = dotAPAB / dotABAB;

                v3_t proj_p = v3_add( rect_side_p , v3_scale( AB , scale_f ) );

                // check x and y distance of intersection point from square center

                float dist_x = v3_length_squared( v3_sub( proj_p , rect_center_p ) );
                float dist_y = v3_length_squared( v3_sub( proj_p , isect_p ) );

                // compare squared distances with squared distances of rectangle

                if ( dist_x < rect_wth2_f * rect_wth2_f && 
                     dist_y < rect_hth2_f * rect_hth2_f )
                {
                    // cross point is inside square, we draw it white

                    *((uint32_t*)(fbp + location)) = pixel_color( 0xFF, 0xFF, 0xFF, &vinfo );
                }
                else
                {
                    // cross point is outside square, we draw it blue

                    *((uint32_t*)(fbp + location)) = pixel_color( 0x00, 0x00, 0xFF , &vinfo );          
                }

            }

        }

    }

    return 0;
}

In the upper part of the code we define a vector structure with three members for the three dimensions, and a few functions to manipulate them ( addition, subtraction, dot product, scaling and length ).

The first part of the main function deals with the Linux framebuffer : we map it to a uint8_t buffer. There's not much to explain here, I just copied this part of the code from a Linux pro's tutorial.

The second part is the ray tracing itself. We set up our world here, create the grid on the screen and on the camera window, and start iterating. The most complicated parts are the line-plane ( ray - rectangle ) intersection and the projection of a point onto the rectangle's center line. Both are done with dot products, in slightly different ways.
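Written out as formulas ( these match the comments in the code ) : a ray through points $A$ and $B$ hits the plane through point $P$ with normal $N$ at $C_{isect}$, and projecting a point $P$ onto the line through $A$ and $B$ gives $C_{proj}$ :

```latex
% line - plane intersection : C = A + dot(AP,N) / dot(AB,N) * AB
C_{isect} = A + \frac{(P - A)\cdot N}{(B - A)\cdot N}\,(B - A)

% point - line projection : C = A + dot(AP,AB) / dot(AB,AB) * AB
C_{proj} = A + \frac{(P - A)\cdot(B - A)}{(B - A)\cdot(B - A)}\,(B - A)
```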

Ray-Rectangle intersection in our scene setup :

raytrace

Point to Line projection

raytrace

After we have both points we can easily check whether the line intersects the rectangle, and we can draw our pixels.

How to run it :

Just copy the code above, create a file called raytrace.c and paste everything in it. Then compile the file : 'gcc raytrace.c -o raytrace'

And then run it : './raytrace'

On some Linux distros you may need root access rights to access the framebuffer; in that case type : 'sudo ./raytrace'

If you don't see anything you probably have to switch to console mode because your window manager interferes with your framebuffer. Just switch to a console with CTRL+ALT+F1.

You should see this :

raytrace

In the next part we'll make the camera movable, use squares instead of dots and add light sources!!!

Comment on Ray Tracing Part One

The Non-Sysadmin Manifesto

I’m a computer user. 90 percent of the people in developed countries are computer users in some way.

Every second spent fighting the computer instead of just using it is a waste of time.

I’m a software engineer. Every second spent fighting the computer instead of just coding is a waste of time.

Software industry is pretty much a mess no matter what architecture and operating system you use because you have to waste a lot of time and energy.

But we can make it better!

We need absolutely standalone applications, independent of the OS version and of the shared library versions the OS has

Problems : the app doesn’t start, quits or hangs immediately or during runtime; installation fails because of missing software or dependency hell

Apps should contain EVERY dependency that is needed to run the app on the specific architecture. App size is not a problem anymore : we have terabyte SSDs and gigabyte-per-second networks now.

Apps should run out of the box on the target architecture regardless of the OS version. Of course this can be tricky with hardware drivers that are manufacturer-dependent, but OSs should use industry-standard APIs to let programs communicate with hardware, and should hide drivers to make this possible.

Of course this can raise security concerns : what if somebody replaces libs and hacks your memory management? Well, it is possible on current OSs already. You should know where your app is coming from.

What OSs can do to make using a computer safer is provide total transparency! The OS should log and show you, in a human-readable way, where an app is connecting to ( TCP/IP requests ) and what its actual connections are, and what hardware it is using; it should ask for permission to use specific hardware and to connect to remote hosts if the user asks for that. Apps should report their actual progress to the OS and to the user when they are doing something CPU-consuming.

With these rules computers can be perfect, safe and stable production instruments.

We need absolute standalone development projects

Modern software projects are hell to set up. Dependencies, dependencies of dependencies, parameters, settings scattered across a thousand config files, makefiles, old project files using old IDE versions, old scripts using old script-language versions and old dependencies…

Why does programming have to be like this? Why don’t we make self-contained development projects containing everything? The closest thing to this is maintaining a virtual machine image with everything installed : the proper OS version, IDE version, all downloaded libraries. But it’s a drag.

We should make dev projects download/install/move in one package with everything that is needed for immediate development/deployment.

In addition, just as apps should become first-class citizens on the OS, code should be the first-class citizen in development projects : we should do more setup and environment checking in the code itself and write fewer setup scripts and config files.

I’m dreaming of a world where everything is out-of-the-box, straightforward and just working. Let’s do this!

Comment on The Non-Sysadmin Manifesto

MilGraPi

Preparing a Raspberry Pi for OpenCV development is really time-consuming : OpenCV takes hours to compile and a lot of other things have to be set up, so I’m sharing my SD card image here to speed up Raspberry Pi OpenCV development for others.

For Raspberry Pi 3 Model B :

I shrank the root partition to 7GB to make it suitable for smaller SD cards. It has only 300MB of free space, so you'd better expand it to fill the target SD card. You can do this right on your Raspberry Pi with an additional USB SD-card stick and gparted. User/pass is pi/raspberry. After startup it logs in automatically, directly into OpenBox. Right click -> Terminal emulator to open a terminal. To test and run the OpenCV examples type "workon cv" to activate the Python virtual environment, go into "/home/pi/Desktop/OpenCV-Face-Recognition-master/FacialRecognition" and type "python 03_face_recognition.py". If you have a Raspberry Pi camera installed and enabled with raspi-config, a camera window should pop up and face detection should start. For a USB camera you have to modify the scripts a little.

For Raspberry Pi 3 Model B+ :

It is a 16 GByte SD image file in Mac dmg format; balenaEtcher can handle it. User/pass is pi/raspberry. After login you can start the GUI by typing startx. Right click -> Terminal emulator to open a terminal. To test and run the OpenCV examples type "workon cv" to activate the Python virtual environment, go into "/home/pi/Desktop/OpenCV-Face-Recognition-master/FacialRecognition" and type "python 03_face_recognition.py". If you have a Raspberry Pi camera installed and enabled with raspi-config, a camera window should pop up and face detection should start. For a USB camera you have to modify the scripts a little.

If you find this image useful please donate at the top of the page.

What does it contain

Base System

  • Raspbian Lite

GUI

  • openbox for window manager
  • tint2 for taskbar
  • slim for autologin - Model B image only
  • pcmanfm for file manager - Model B image only
  • chromium for stack overflow

Dev Tools

  • lxterminal for terminal
  • vim/nano for python
  • codeblocks for c/c++ development - Model B image only
  • python for opencv development
  • opencv 4.0 for computer vision
  • picamera python module for the raspberry camera
  • opencv face recognition examples - Model B image only
  • steamberry face and motion recognition - Desktop/SteamBerryMotionDetector and SteamBerryFaceDetector

Games

  • Scratch, Termite, Cortex, Brawl for short rests ( enable full KMS OpenGL support in raspi-config to play them ) - Model B only

Download

Comment on MilGraPi

Cortex And Brawl Is Open Source

And finally, my second and third games, Cortex and Brawl, are also open source!!! Check them out.

https://github.com/milgra/cortex

https://github.com/milgra/brawl

Comment on Cortex And Brawl Is Open Source

KineticUI Is Open Source

I really like UI renderers. I really like beautiful UIs. I'm really not satisfied with the current state of UI. With today's technology super-sophisticated animation, even font animation, can be done and should be used. This is my take on a modern, smooth UI renderer.

Check it out on github

Demo :

Comment on KineticUI Is Open Source

WebConfThreeD Is Open Source

For a company hackathon I came up with the idea of a 3D web conference tool. I wanted it to look like a control room where you can see everything at once and can zoom in on individual elements. Here it is :

Check it out on github

Demo :

Comment on WebConfThreeD Is Open Source