The wristwatch computing project is investigating how we can interact with very small, body-worn devices. The first such device under investigation is the wristwatch. Wristwatch computers have been around for many years, but almost all of them are complicated, multi-purpose devices. Our approach is different. Rather than building a be-all, do-all watch, why not make it a lightweight (both physically and mentally) interface to another, nearly-ubiquitous device? We propose that a watch computer makes more sense when wirelessly connected to a mobile phone.
There are three directions to our research:
There are a number of tasks commonly done on a mobile phone that shouldn't actually require touching the phone itself. These include sending a call to voicemail, reading an SMS, or dialing a number when using a Bluetooth headset. These tasks are so short that the time taken to find the phone (in a pocket, purse, or bag), perform the task, and put the phone away is likely to be much longer than the time the task itself takes; for example, reading an old SMS takes as many as 14 steps on a Nokia 6630 and 9 steps on an Apple iPhone!
How can we decrease the overhead of performing such simple tasks? Our approach is to put the interface onto the wristwatch, and control the interface with gestures.
The issue with gesture control, however, is false positives: the gesture recognition system mistakes a random hand movement for an intentional gesture and performs an undesired action. One solution to this problem is “push-to-gesture”: requiring a button press before gesture recognition begins. The problem here is that the non-watch-wearing hand is now involved, so why not just add some more buttons to perform actions?
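The push-to-gesture idea can be sketched as a small gating layer between the recognizer and the actions it triggers. This is a minimal illustration, not the project's implementation; the class name, two-second arming window, and one-gesture-per-press policy are all assumptions made for the example.

```python
import time

class PushToGesture:
    """Hypothetical sketch: accept gestures only within a short window
    opened by a button press; everything else is treated as incidental
    hand movement (a would-be false positive) and ignored."""

    def __init__(self, window_s=2.0, clock=time.monotonic):
        self.window_s = window_s    # assumed arming window, in seconds
        self.clock = clock          # injectable clock, eases testing
        self.armed_until = 0.0      # time at which the window closes

    def button_pressed(self):
        """Arm gesture recognition for the next window_s seconds."""
        self.armed_until = self.clock() + self.window_s

    def on_gesture(self, gesture):
        """Return the gesture if the window is open, else None."""
        if self.clock() < self.armed_until:
            self.armed_until = 0.0  # assume one gesture per press
            return gesture
        return None
```

A usage sketch with a fake clock: a gesture arriving before any button press returns None, while one arriving inside the window is passed through.

```python
t = [0.0]
ptg = PushToGesture(window_s=2.0, clock=lambda: t[0])
ptg.on_gesture("flick")    # ignored: no button press yet
ptg.button_pressed()       # window now open until t = 2.0
t[0] = 1.0
ptg.on_gesture("flick")    # accepted: within the window
```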