Intel Perceptual Computing Hackathon “Best Educational” Winner

Stashed away in the upper chambers of Sacramento’s Hacker Lab, fueled by cola, beer, and chips, BOGWAC (Bunch of Guys Working at Concordus) pushed the limits of Intel’s Perceptual Computing SDK. We started the evening with a plan to build a “Math Blaster”-style game for teaching American Sign Language fingerspelling. We would use Unity3D to build the game and the SDK to integrate Creative’s Interactive Gesture Camera for sign recognition. Two points bear mentioning: 1) no one on our team had ever used Unity3D, and 2) no one on our team had ever seen or used the SDK. Were we ambitious? Maybe.

With our sights set, we broke into two teams: Game Development and SDK Integration. The Game Development team was pleasantly surprised to find many free assets readily available for building games; importing one of the environment assets had our game environment ready within a few minutes. Next, we built the “enemy” and player objects. After a night and day of scripting, by 5 p.m. we had what we wanted: a shooter game controlled by the user’s input of specific alphabet characters.
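
To give a feel for how simple the core loop really is, here is a minimal sketch of the letter-driven gameplay. Our actual game was a set of Unity3D scripts; this standalone C++ console version, with made-up names like Enemy and targetLetter, only illustrates the logic of enemies that are destroyed by presenting their target letter.

```cpp
#include <cstdio>
#include <cstdlib>
#include <ctime>
#include <vector>

// Illustrative sketch only: the real game was a set of Unity3D scripts.
// Each enemy carries a target letter; presenting that letter (typed here,
// signed via the gesture camera in the real game) destroys it.
struct Enemy {
    char targetLetter;  // the letter the player must present
    float distance;     // steps left before the enemy reaches the player
};

int main() {
    std::srand(static_cast<unsigned>(std::time(nullptr)));
    std::vector<Enemy> wave;
    for (int i = 0; i < 5; ++i)
        wave.push_back({static_cast<char>('A' + std::rand() % 26), 10.0f});

    int lives = 3;
    while (!wave.empty() && lives > 0) {
        Enemy &e = wave.back();
        std::printf("Incoming enemy: present '%c' (distance %.0f)\n",
                    e.targetLetter, e.distance);
        char input = 0;
        if (std::scanf(" %c", &input) != 1) break;  // end of input: quit
        if (input == e.targetLetter) {
            std::puts("Hit! Enemy destroyed.");
            wave.pop_back();
        } else if (--e.distance <= 0.0f) {  // wrong letter: enemy advances
            std::puts("An enemy reached you!");
            --lives;
            wave.pop_back();
        }
    }
    std::puts(lives > 0 ? "Round won!" : "Round lost.");
    return 0;
}
```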

The SDK Integration team had an interesting challenge. When we arrived, we were told that the basic poses and gestures built into the SDK would not accommodate our goal; we would have to build our own pose mappings using either depth data or geonode (hand and finger node) data. Several people told us the mappings were too complex to build before the presentation. So we took that as a challenge and ran with it. Our revised goal was to map at least a single sign to prove the concept. Interestingly enough, by the end of our time we not only had seven characters mapped (U, V, C, G, H, Y, L), but had also added an initial calibration step that measures the radius of each finger, letting the game correctly distinguish between G and H, and among U, V, and Y. We ran out of time before we could add depth and color data to differentiate very similar signs like A, S, T, N, and M; given one more day, we could have had a complete alphabet that stayed accurate across different hand sizes.
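
To make the calibration idea concrete, here is a hedged sketch of the kind of per-user classification it enables. In the real project the fingertip positions came from the SDK’s geonode tracking; here they are plain structs, and every name and threshold (Point3, Calibration, the 1.5x spread factor) is an illustrative assumption, not our actual source.

```cpp
#include <cmath>
#include <cstdio>

// Hypothetical sketch of geonode-style fingerspelling classification.
// In the real project fingertip positions came from the SDK's hand and
// finger tracking; here they are plain structs with illustrative values.
struct Point3 { float x, y, z; };

static float dist(const Point3 &a, const Point3 &b) {
    float dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
    return std::sqrt(dx * dx + dy * dy + dz * dz);
}

// Captured once per player during the calibration step: the measured
// radius of each finger, so spread thresholds scale with hand size.
struct Calibration {
    float indexRadius;
    float middleRadius;
};

// Distinguish 'U' (index and middle held together) from 'V' (spread apart)
// by comparing the fingertip gap against the calibrated finger radii.
// 'Y' would add checks that the thumb and pinky are extended instead.
char classifyUorV(const Point3 &indexTip, const Point3 &middleTip,
                  const Calibration &cal) {
    float gap = dist(indexTip, middleTip);
    float touching = cal.indexRadius + cal.middleRadius;  // tips side by side
    return (gap <= touching * 1.5f) ? 'U' : 'V';
}

int main() {
    Calibration cal{0.8f, 0.8f};          // cm, from the calibration pass
    Point3 indexTip{0.0f, 0.0f, 0.0f};
    Point3 middleTip{3.0f, 0.0f, 0.0f};   // 3 cm apart: clearly spread
    std::printf("Classified as '%c'\n",
                classifyUorV(indexTip, middleTip, cal));
    return 0;
}
```

The design point is that once fingertip spreads are expressed in units of the player’s own finger radii, the same thresholds hold for small and large hands, which is exactly what the calibration step bought us.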

During our demonstration we showed the game rejecting similar but inaccurately presented signs as well as accepting accurately presented ones. What we were told the SDK was not ready to do, we did. What we were told we wouldn’t have time to do during the Hackathon, we did. All of this we did with no experience with Unity3D (thank you, Unity, for being easy to use) and no prior exposure to the SDK (thank you, Intel, for making it easy to extend).

In the end, we won the “Best Educational” award for our implementation of the SDK. All of the other participants developed wonderful implementations as well. We are grateful to Intel for the opportunity to play with their SDK, and to the Hacker Lab for hosting the event.

Below is a demo of our game: one round lost and one round won.

  • John Feagans

    Wondering if you have yet attempted to solve the problem of repeated fingerspelled letters in the ASL manual alphabet, and whether you have done further development on this learning game? I am looking at how to do recognition of signs in JSL like shi and ji (shi moves horizontally) and yo, where the sign moves inward. KSL might be easier, as it has only one compound, ssang siot. We do formative learning quizzes, which are uni-directional, but the addition of sign recognition makes the learning experience bi-directional, something you could previously only do in a classroom setting.