Our project sprouted from the idea of developing software that explores the possibilities of an open-source image processing library. The first members of our team discussed two initial proposals with José Dominguez, both involving the analysis of images captured in real time. One proposal was to develop Rock-Paper-Scissors (RPS); the other, no less interesting but more difficult given the biometric computing knowledge required, was a lie detector based on images of the eyes. We settled on RPS because its problem domain did not require us to study subjects too far from our own fields. After all, every child knows RPS!

We began researching the technologies needed to integrate the OpenCV libraries into our .NET environment. Since OpenCV is written in C and intended to be used by C/C++ programmers, we considered using those languages, but we realized our focus was not on low-level memory management but on high-level interaction with the user. This led us to Emgu CV, a C# wrapper that handles the OpenCV primitives and the low-level memory management for us.

We found that the problem of gesture recognition for our game could be approached in two broad ways: one is the analysis of geometric patterns in the captured images, and the other is training the system to learn which images should be recognized as a valid gesture. Although OpenCV implements algorithmic support for both approaches, we found that the training approach, while conceptually simple, required a great deal of time just to assemble the picture sets needed to train the machine.

All things considered, we decided on the geometric approach. The problem then has two parts: the first is to extract the required data from a raw RGB image captured by the camera, and the second is to effectively analyze the geometric nature of the extracted data.

Extracting the required data means determining the contour of the hand in the captured images. In our case, this contour is obtained by applying a series of operations supported by OpenCV that convert the RGB image into a black-and-white image containing only the hand (think of it as a 'white glove' on a black background). We searched the internet and found plenty of blue-sky research papers on how to determine which parts of an image constitute skin; in short, the basic algorithms are already implemented in OpenCV, but they require the elicitation of constants that can only be determined under controlled conditions in labs with special camera equipment. We relaxed that restriction and instead required that the background of the scene be a single color, preferably one 'far from red or orange' (e.g. grey).
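The idea behind the single-color-background trick can be sketched with a simple channel-dominance threshold: against a grey background, skin pixels are roughly those where the red channel clearly dominates green and blue. This is a hypothetical illustration in Python/NumPy, not the project's actual Emgu CV code, and the `red_margin` constant is an assumed stand-in for the lab-calibrated constants mentioned above:

```python
import numpy as np

def segment_hand(rgb, red_margin=30):
    """Return a binary mask (0/255) marking skin-like pixels.

    Assumes a uniform background far from red/orange (e.g. grey),
    so skin pixels are roughly those where the red channel exceeds
    both green and blue by more than red_margin. A sketch of the
    thresholding step, not the exact OpenCV operations used.
    """
    rgb = rgb.astype(np.int16)  # avoid uint8 wrap-around on subtraction
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    mask = (r - g > red_margin) & (r - b > red_margin)
    return mask.astype(np.uint8) * 255
```

A morphological open/close pass would normally follow to clean up stray pixels before contour extraction.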

Analyzing the geometric nature of the image means taking the contour and extracting features that a specific algorithm can process. We tried many algorithms, answering questions like 'how can one finger be distinguished from the others?' or 'which fingertip is farthest from the center of the hand?'. All of those approaches had their pros and cons, but we eventually found some geometric variations of the curves that let us process the images the way we needed.
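One of the geometric questions above ('which fingertip is farthest from the center of the hand?') can be sketched in a few lines: take the contour's centroid as the center of the hand and pick the contour point at maximum distance from it. A minimal NumPy sketch, assuming the contour arrives as an (N, 2) array of (x, y) points such as OpenCV would produce; the function name is ours, not the project's:

```python
import numpy as np

def farthest_fingertip(contour):
    """Return (centroid, farthest point) for a hand contour.

    contour: iterable of (x, y) points along the hand outline.
    The point at maximum Euclidean distance from the centroid is
    a crude fingertip candidate for an extended finger.
    """
    pts = np.asarray(contour, dtype=float)
    center = pts.mean(axis=0)                       # centroid of the outline
    dists = np.linalg.norm(pts - center, axis=1)    # distance of each point
    tip = pts[np.argmax(dists)]
    return center, tip
```

In practice the real analysis was richer than this (e.g. convexity defects to separate individual fingers), but the centroid-and-distance idea is the common starting point.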

Finally, we got help from a UI designer, who improved the presentation of the app. This made the application easier on the player's eye and better aligned with our usability requirements.
