New Contributions to Vision-Based Human-Computer-Interaction in Local and Global Environments

Author: Kohler, M.R.J.
Pub. date: January 1999
Pages: 276
Binding: softcover
Volume: 2 of Dissertations in Computer Graphics
ISBN print: 978-1-58603-137-4
Subject: Computer & Communication Sciences, Computer Science

Vision-based human-computer interaction means using computer-vision technology for the interaction of a user with a computer-based application. This idea has recently attracted particular research interest. Among the many possibilities of implementing interaction, we focus on hand-based interaction, expressed by single hand postures, sequences of hand postures, and pointing. Two system architectures are presented which address different scenarios of interaction, and which establish the frame for several problems for which solutions are worked out.

The system ZYKLOP treats hand gestures performed in a local environment, for example, on a limited area of the desktop. The goal with respect to this classical scenario is a more reliable system behaviour. Contributions concern color-based segmentation, forearm-hand separation as a precondition for shape-based hand gesture classification, and classification of static and dynamic gestures.

The ARGUS concept takes a first step toward the systematic analysis of hand-gesture-based interaction combined with pointing in a spatial environment with sensitive regions. Special topics addressed within the architectural framework of ARGUS are the recognition of details from a distance, compensation of varying illumination, changing orientation of the hand with respect to the cameras, estimation of pointing directions, and object recognition. These and other problems are analyzed, and solutions to them are suggested. The performance of many of the proposed solutions is investigated experimentally, based on prototype implementations of the ZYKLOP and ARGUS architectures.
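The color-based segmentation mentioned above is commonly realized by classifying each pixel in a chromaticity space that is largely insensitive to brightness. The following is a minimal illustrative sketch of such a skin classifier using normalized-RGB thresholds; the function name and threshold values are assumptions for illustration, not the thesis's actual method or parameters.

```python
import numpy as np

def skin_mask(rgb):
    """Label pixels as skin via a simple normalized-RGB (chromaticity)
    threshold. Illustrative stand-in for color-based segmentation;
    thresholds here are hypothetical, not taken from the dissertation.

    rgb: (H, W, 3) uint8 image. Returns a boolean (H, W) mask.
    """
    img = rgb.astype(np.float64)
    total = img.sum(axis=2) + 1e-6          # avoid division by zero
    r = img[..., 0] / total                 # normalized red chromaticity
    g = img[..., 1] / total                 # normalized green chromaticity
    # Skin tones tend to cluster at high r and moderate g in this space.
    return (r > 0.35) & (r < 0.60) & (g > 0.25) & (g < 0.40)

# A reddish "skin-like" pixel next to a blue pixel:
patch = np.array([[[180, 120, 90], [30, 60, 200]]], dtype=np.uint8)
mask = skin_mask(patch)   # first pixel classified as skin, second not
```

Dividing by the pixel's intensity sum makes the decision depend on color proportions rather than absolute brightness, which helps under the varying illumination discussed for the ARGUS scenario.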