Sunday, December 2, 2007

Cracking Interactivity with Simon Conlin

On Thursday, November 29, 2007, Simon Conlin delivered the final guest lecture for the Multimedia Pioneering course at Sheridan College, Canada.

Simon Conlin is the co-founder of Flash In The Can, also known as FITC. FITC is a Canadian company that produces engaging design and technology events that inspire, educate, and challenge the best new media designers and developers from around the globe. Simon is also a consultant on technologies related to gesture interactivity.

Simon originally has a music background, but a few years ago he developed a taste for multimedia, and now he is a pioneer of today's and tomorrow's interactive/new media technology. Simon had an accent that I guessed was British, and yes, he moved to Canada about five years ago.

In the beginning Simon directed us to a website where users could take pictures using their webcams and deform them in different ways to create abstracts.

Simon had a list of YouTube links to share; I have included the links to the ones I liked at the end of this entry. Right after the introduction he played a video by Zach Booth Simpson, an installation artist with a biology background. He connects science with interactivity and creates interactive art using projectors, cameras, and shadow detection algorithms under the name Mine-Control.

Simon gave us tips on how we can combine what we learn in the Interactive Multimedia course with our past backgrounds and future interests or knowledge. I think he was somehow relating to Zach Booth Simpson, a biologist turned artist, and to himself as well, a musician turned multimedia pioneer.

Next, Simon showed us a video of the interactive stage setup for George Michael's concert, with real-time audio analysis and position tracking by a German design team called Meso. In layman's terms I would say it's like a media player's visualizations, but at the same time it's a lot more than just a visualization. I'm impressed by how dramatically technology is changing the way things are or were. Wow! Make the floors and walls come alive.

At last Simon showed us a multi-touch interactive wall where the user could play musical instruments through motion capture of their hands. It reminded me of the GestureTek trip, where we saw a similar kind of interactivity and/or game.

I liked the video where Zach Booth Simpson talked about art and interactivity.

It was a brief and quick lecture, consisting more of YouTube videos than of the speaker actually speaking. In the end Simon forwarded us his list of YouTube links so that we could explore more on our own.

Related Links:


Sunday, October 14, 2007

Field Trip: GestureTek

On Friday morning, October 12, 2007, I, along with my classmates and my teacher Dan Zen, visited GestureTek in downtown Toronto.

At GestureTek we met Mr. Vincent John Vincent, president of GestureTek Inc. Almost twenty years ago he, in collaboration with his business partner Francis Macdougall, pioneered the idea of human-computer interaction using video cameras. Today GestureTek is a world leader in gesture recognition technology. GestureTek has developed a system where you can use your hands and body instead of a mouse, keyboard, joystick, or touchpad to control interaction with the computer: accessing information, controlling an interactive interface, and immersing yourself in interactive 3D games, somewhat like the Wii, though technically different from it. GestureTek's system is designed to work with computing devices like desktops, laptops, mobile phones, consoles, and public displays, and can work on surfaces like floors, walls, panels, and screens.

The moment I entered GestureTek I saw an interactive screen display right in front of the entrance, showing GestureTek's logo on a white background. As I moved my hands in front of the screen, the logo's characters started bouncing with my gestures, and the white background wiped away with the movement of my hands so that I could see myself in the screen for a moment, like clearing the vapour from a mirror.

The first thing Mr. Vincent introduced was called HoloPoint, which tracks the movement of a finger placed in a control frame that has two cameras, one on the left and one on the right side of the frame. This was very interesting, and it was my first time seeing anything like it in real life. You don't need a mouse or pointing device to control the interface. It is going to change the way we use computers and interactive environments.
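I don't know HoloPoint's actual algorithm, but the basic geometry of locating a fingertip from two cameras on a shared frame is classic triangulation. Here is my own simplified 2D sketch (all names and the camera layout are my assumptions, not GestureTek's design):

```python
import math

def triangulate(baseline, angle_left, angle_right):
    """Locate a fingertip from two cameras on a shared baseline.

    The cameras sit at (0, 0) and (baseline, 0); each reports the angle
    (in radians) between the baseline and its line of sight to the finger.
    The fingertip is where the two sight lines intersect.
    """
    t1, t2 = math.tan(angle_left), math.tan(angle_right)
    # Sight lines: y = x * t1  and  y = (baseline - x) * t2
    x = baseline * t2 / (t1 + t2)
    y = x * t1
    return x, y

# A finger seen at 45 degrees by both cameras sits midway between them,
# half a baseline out from the frame edge.
print(triangulate(1.0, math.radians(45), math.radians(45)))  # approx (0.5, 0.5)
```

A real system would first have to find the fingertip in each camera image and convert its pixel position to an angle, which is where most of the engineering would be.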

Then he took us to a system called GestureXtreme. It was a green screen where we could virtually immerse ourselves in a 3D environment. It transports your image into a computer-generated landscape without the use of any gear or interactive device like a joystick. I was fascinated when I saw myself on screen and interacted with on-screen characters in real time. My body motion controlled the program. I tried the soccer game, where I was the goalie and had to stop the ball from entering the goal with the movement of my hands, head, or legs. The best part of this type of gaming environment is that we also get physical exercise. This GestureTek system is already being used by news channels to present the weather forecast.

The next thing we saw was HoloFrame, a fingertip-controlled 60" projection screen that creates the sense of a floating image in a dark room and allows the user to control and interact with the display just by pointing and moving fingers on it. Initially I thought it was a conventional touch screen, but then I asked Mr. Vincent how it works: two tiny cameras embedded in the screen track the movement of the hand, or where we point. He mentioned that this technology has been used for Ford. The most interesting part was when Mr. Vincent mentioned the possibility of trying on clothes in a store without actually wearing them.

Afterwards we saw an interactive floor with eye-catching, realistic visuals that intelligently responded to the body's movements. I saw a couple of ads where we could play with the interactive floor. Three of my classmates tried the interactive racing game on the floor, where they had to continuously shake their feet over the steps. Karen was the winner every time. I was impressed by how it has revolutionized, or is going to revolutionize, advertising and entertainment in public spaces. Just use your hands and feet to control and interact in real time with dynamic ads, games, and images.

Next we saw a multi-touch interactive table that looked like Microsoft Surface. I'm still wondering if GestureTek has licensed its technology to Microsoft, or if Microsoft created its own. Technically the interactive table was similar to the interactive floor, but in this case the "floor" was the glass tabletop. I got a chance to play a ping-pong game. Though it was not very accurate, it was interesting. I also tried an interactive table with a virtual photo album on the screen, where we could move, place, and resize the pictures however we liked. It would be a good idea to have such a thing in restaurants or clubs.

After that Mr. Vincent showed us the interactive wall, which, like the interactive floor and the interactive table, is based on their GestureFX System Series. This is a very effective way to utilize walls for advertising and interactive games.

At last we saw GestureTek's AirPoint System, which belongs to the GestPoint family of products along with HoloPoint and HoloFrame. It has an architecture similar to HoloPoint's: two cameras and a couple of infrared capture devices that track the motion of the hands within a matrix. It is designed for public installations like conference rooms and can handle any lighting conditions.

GestureTek has created so many possibilities with this technology. DoCoMo, a Japanese company, has started providing this innovative technology on mobile phones in collaboration with GestureTek, letting the user browse maps and play video games by tracing a finger or by tilting or waving the handset. Examples can be seen here, here, and here.

GestureTek has also designed a system called IREX, which will help healthcare professionals use gesture recognition technology with their patients in rehabilitation and exercise, and may help them recover from disabilities. It will also help physically impaired people interact with computers by interpreting sign language.

As far as the technology and ideas are concerned, it was all very interesting, and I can't wait for the day when it is implemented in our daily lives. No more being couch potatoes with those old computer gaming interfaces, because now we will be able to exercise physically. I don't know if this will interest people in terms of health, but everyone has a different vision of it. The tour was a demo of how technology is going to impact our future lives.

Related Links:

GestureTek Home
About Computer Vision Technology
GestureTek Mobile™ in Use
Motion Sensing in Camera Phones
Microsoft Surface
Rehabilitation in Action
Xbox 360 using GestureTek’s technology
Working of Multi-Touch System
Play Station to sense user emotions
EyeToy using computer vision technology
Replacing Keyboard and Mouse

Friday, October 5, 2007

Visualization Design Tour

Today, October 4, 2007, students of Interactive Multimedia visited the Visual Design Institute (VDI) at Sheridan College, where we met Mr. Song Ho Ahn, Visualization Researcher and Developer, and Mr. Ian Howatson, Senior Web Developer.

VDI is dedicated to research and excellence in computer visualization and simulation, developing a wide range of applications in the scientific, medical, engineering, educational, cultural, and environmental fields.

In the beginning Ian introduced an animator/simulator called "Collision Investigation: Skid Marks," designed for police officers, which can calculate the speed of a car from the skid marks on the road. The customizable environment was designed to explore different results based on road conditions. Finally, the system used mathematical algorithms to calculate the car's speed from the length of the skid mark. The project involved five years of research and development to help Canadian police forces catch suspects. The system mainly used XML, Flash, and 3D modeling/rendering tools like Maya and OGRE 3D (Object-Oriented Graphics Rendering Engine) for graphics. Ian presented another simulation project called "Greater Toronto Airport Authority (GTAA)," in which the user could virtually park a car using a steering-wheel controller with brake and accelerator pedal controls. Ian demonstrated a third project, called "Vikings (Realtime)," which explores the history and mystery of a Vikings village (Canada) as it would have looked hundreds of years ago.
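I don't know the exact algorithms in VDI's simulator, but the textbook skid-to-stop physics gives a feel for the calculation: the car's kinetic energy is dissipated by friction over the skid, so ½mv² = μmgd, which gives v = √(2μgd). A quick sketch under that assumption (the function name and the friction value are mine):

```python
import math

def speed_from_skid(skid_length_m, mu=0.7, g=9.81):
    """Estimate the pre-braking speed (m/s) from a skid mark's length.

    Derived from the textbook skid-to-stop model: kinetic energy equals
    friction work, so (1/2) m v^2 = mu * m * g * d  =>  v = sqrt(2*mu*g*d).
    mu is the road's friction (drag) factor; ~0.7 is typical dry asphalt.
    This is the standard formula, not necessarily VDI's exact algorithm.
    """
    return math.sqrt(2 * mu * g * skid_length_m)

v = speed_from_skid(30.0)            # a 30 m skid on dry asphalt
print(round(v * 3.6, 1), "km/h")     # convert m/s to km/h: roughly 73 km/h
```

The simulator's value presumably comes from varying the friction factor with the road conditions the officer selects, which is what the "customizable environment" would feed into this kind of formula.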

In the next part, Song took us to his lab, where he demonstrated software called "Facial Animation Communication Engine (F.A.C.E.)," developed in C++, which can control the movement of a 3D character by means of a video camera focused on the face. Technically, the camera sends the captured video to the software in real time, and the software uses customized plug-ins to sense and transform the motion. The plug-ins convert the stream of colored images received from the video camera into a high-contrast black-and-white image sequence to clearly define facial features. Depending on the movement of those light and dark patterns, the system controls the movement of the 3D object or character. The system is still under development and so far can only sense the movement of the face; it has to be taken to a more advanced level to recognize every emotion or expression on the face. Compared to the Wii, a system like F.A.C.E., when fully developed, could let us play interactive video games by sensing the gestures and movements of the body. A system like this would not be limited just to playing video games; it could also be used in character animation, making the movements of 3D characters more realistic. An example of such technology can be seen here.
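The pipeline Song described, colour frames reduced to high-contrast black-and-white and then tracked by the movement of light/dark patterns, can be sketched in a few lines. This is a toy version of the general idea (thresholding plus frame differencing), with my own names; the real F.A.C.E. plug-ins are surely far more sophisticated:

```python
import numpy as np

def to_high_contrast(frame_rgb, threshold=128):
    """Reduce a colour frame to a binary black/white image, roughly the
    first step the F.A.C.E. plug-ins were described as performing."""
    gray = frame_rgb.mean(axis=2)           # naive grayscale conversion
    return (gray > threshold).astype(np.uint8)

def motion_mask(prev_frame, next_frame):
    """Pixels whose light/dark value flipped between two frames: the
    moving regions that would drive the 3D character's motion."""
    diff = (to_high_contrast(next_frame).astype(int)
            - to_high_contrast(prev_frame).astype(int))
    return np.abs(diff).astype(np.uint8)

# Two tiny synthetic "frames": a bright patch moves one pixel to the right.
a = np.zeros((4, 4, 3)); a[1, 1] = 255
b = np.zeros((4, 4, 3)); b[1, 2] = 255
print(motion_mask(a, b).sum())  # 2 pixels changed: where the patch left and arrived
```

A real system would then map where those changed regions are (eyes, mouth, head outline) onto the joints of the 3D character in each frame.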

At last, we visited "Immersion Studios," another unit of VDI. It had a high-tech multi-screen display system where different movies coincided to form a single wide-screen movie. We were shown a science fiction movie where the actions on the screen were controlled from wireless tablet(s) provided to the user(s), as prompted. The actions were based on the choice made by the group of users in common, or by majority. Each phase of the game, from start to finish, depended on the selections the users made jointly. An example can be seen here and here. At last we had to kill the space virus that had infected the space traveler by constantly poking the screen of the tablet, which was interesting but not very effective because it kept halting. With such technology we will be able to watch the same movie multiple times and get a different ending every time, based on what we selected.

The educational tour of VDI helped us explore various emerging technologies of today and tomorrow. It will help us pioneer for tomorrow.

Related Links:

Sony technology focuses on smile

Saturday, September 22, 2007

Portrait Assignment

Finally, I have completed my portrait assignment!!! I wanted to do it with a mask, but it did not work. So I came up with a new idea. You can see it working here.

Wednesday, September 19, 2007

Trial of Zen Mix

Tried Zen Mix for myself and came up with this. All you need is a URL to your picture and a video in FLV format. The picture is one of my assignments and can be seen here.

You can try it for yourself at

Sunday, September 16, 2007

Happy Birthday

Today, September 16, 2007, a blog for Multimedia Pioneering is born.