MobileASL

MobileASL is a video compression project at the University of Washington and Cornell University with the goal of making wireless cell phone communication through sign language a reality in the U.S.

Below is a movie explaining the research and showing the phones in action!

With the advent of cell phone PDAs with larger screens and photo/video capture, people who communicate with American Sign Language (ASL) could utilize these new technologies. However, due to the low bandwidth of the U.S. wireless telephone network, even today's best video encoders likely cannot produce the quality video needed for intelligible ASL. Instead, a new real-time video compression scheme is needed to transmit within the existing wireless network while maintaining video quality that allows users to understand the semantics of ASL with ease. For this technology to exist in the immediate future, the MobileASL project is designing new ASL encoders compatible with the H.264/AVC compression standard, using x264 (which nearly doubles the compression ratio of MPEG-2). The result will be a video compression metric that takes into account empirically validated visual and perceptual processes that occur during conversations in ASL.
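As a rough illustration of the region-of-interest idea behind this kind of encoder (not the project's actual implementation), the sketch below biases quantization toward macroblocks that contain "important" pixels such as the signer's face and hands, spending more bits there and fewer on the background. The qp_offset_map function, the 0.2 coverage threshold, and the QP offsets are hypothetical values chosen only for illustration; the 16x16 macroblock size matches H.264.

# Hypothetical sketch of region-of-interest rate allocation for ASL video:
# macroblocks overlapping "important" pixels (e.g. face and hands) get a
# lower quantization parameter (finer detail), the background a higher one.
import numpy as np

MB = 16  # H.264 macroblock size in pixels

def qp_offset_map(importance_mask, fg_offset=-4, bg_offset=+6):
    """Map a per-pixel boolean importance mask to per-macroblock QP offsets."""
    h, w = importance_mask.shape
    rows, cols = h // MB, w // MB
    offsets = np.full((rows, cols), bg_offset, dtype=int)
    for r in range(rows):
        for c in range(cols):
            block = importance_mask[r*MB:(r+1)*MB, c*MB:(c+1)*MB]
            if block.mean() > 0.2:         # enough important pixels in this macroblock
                offsets[r, c] = fg_offset  # spend more bits here
    return offsets

# Example: QCIF-sized frame (176x144) with a fake "signer" region marked important.
mask = np.zeros((144, 176), dtype=bool)
mask[30:120, 60:130] = True
print(qp_offset_map(mask))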

People already use cell phones for sign language communication in countries like Japan and Sweden where 3G (higher bandwidth) networks are ubiquitous. See videos from Sweden.

This material is based upon work supported by the National Science Foundation under Grant Nos. IIS-0514353 and IIS-0811884, Sprint, Nokia, and HTC.


Figure 1: A hypothetical version of a video phone being used for ASL.


Figure 2: Using skin detection algorithms to find important areas in the video.
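One common way to find skin regions like those in Figure 2 is to threshold the chrominance (Cb, Cr) channels of each frame; a minimal sketch of that approach is below. The rgb_to_ycbcr and skin_mask functions and the numeric thresholds are rule-of-thumb values used for illustration, not MobileASL's published detector.

# Minimal skin-detection sketch: flag pixels whose chrominance falls in a
# typical skin-tone range after converting RGB to YCbCr (ITU-R BT.601).
import numpy as np

def rgb_to_ycbcr(rgb):
    """Convert an HxWx3 uint8 RGB frame to Y, Cb, Cr planes."""
    rgb = rgb.astype(np.float32)
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y  =  0.299 * r + 0.587 * g + 0.114 * b
    cb = 128 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return y, cb, cr

def skin_mask(rgb):
    """Return a boolean mask of pixels with skin-like chrominance."""
    _, cb, cr = rgb_to_ycbcr(rgb)
    return (cb >= 77) & (cb <= 127) & (cr >= 133) & (cr <= 173)

# Usage: frame = np.asarray(PIL.Image.open("frame.png")); mask = skin_mask(frame)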


Figure 3: (a) Using motion vectors in the video to (b) distinguish macroblock-level encoding.
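In the spirit of Figure 3, the sketch below separates high-motion macroblocks (hands, arms) from static background using the per-macroblock motion vectors an encoder already computes, so that encoding effort could then differ across the frame. The classify_macroblocks function and the 4-pixel threshold are assumptions for illustration, not the project's actual heuristic.

# Sketch: classify macroblocks by motion-vector magnitude.
import numpy as np

def classify_macroblocks(mv_x, mv_y, thresh=4.0):
    """Given per-macroblock motion vector components (in pixels),
    return a boolean grid where True marks high-motion macroblocks."""
    magnitude = np.hypot(mv_x, mv_y)
    return magnitude > thresh

# Example: a 9x11 macroblock grid (QCIF), mostly still, with one moving patch.
mv_x = np.zeros((9, 11))
mv_y = np.zeros((9, 11))
mv_x[3:6, 4:7] = 6.0   # pretend the signer's hands moved ~6 px to the right
print(classify_macroblocks(mv_x, mv_y).astype(int))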


last updated 2/2014