The Virtual Console research project introduced the notion of wearable displays to replace the large, heavy, expensive workstations onboard airborne platforms. Adding head-mounted displays, which obscure some or all of the user's external vision, introduced new challenges around standard input methods. This study evaluated traditional and non-traditional input methods using a prototype version of the Virtual Console hardware and software.
Results of this study were presented on 2010/09/10.
This is a summary of our methodology. The full final report is unavailable due to distribution restrictions.
As a primary researcher, I co-authored the project test plan and conducted several of the testing sessions. Additional sessions were conducted by a fellow researcher. Following the completion of testing, I collated all data and wrote the final test report.
Participants were given input tasks consisting of short phrases (e.g., “watch out for low objects”) and alpha-numeric codes (e.g., “123fd”) while wearing a head-worn display (HWD) similar to that pictured below. The six input methods tested were:
- Control: using a standard keyboard and monitor
- Peek: See the physical keyboard by peeking underneath the HWD
- Voice Recognition: Speak aloud inputs to the Windows 7 Voice Recognition software
- WebCam: Video of the keyboard is displayed inside the HWD
- On-Screen Soft-Keyboard: Use a mouse to type with the Windows Software Keyboard app
- Touch Type: The physical keyboard cannot be seen.
The evaluation used standard Windows software, as opposed to the Virtual Console prototypes, in order to minimize development work and focus participants' attention on the input method rather than the prototype system.
The Virtual Console’s unique visual interface creates new challenges and new opportunities for inputting text and system commands. This test was designed to evaluate the feasibility of six different input methods while wearing a head-worn display similar to one that might be used with the Virtual Console.
The design of the test focused on two independent variables: input method and task type.

Input method (six conditions):
- Control: No HWD, standard keyboard and monitor input
- Peek: Wearing HWD, but can “peek” under the bottom of the HWD to see the standard keyboard
- Voice Recognition: Wearing HWD, use Windows voice recognition software with the standard keyboard removed
- WebCam: Wearing HWD, a window displays a video of the standard keyboard positioned at a comfortable typing position
- On-Screen Soft-Keyboard: Wearing HWD, use a mouse to type with the Windows software keyboard
- Touch Type: Wearing HWD, use the standard keyboard with no visual line-of-sight

Task type (two conditions):
- Mixed Alpha-Numeric: Five-character string of the format NNNaa (e.g., “123ge”), mimicking track numbers
- Short Phrases: Multi-word phrases intended to mimic instant message chat.
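The two task types above can be sketched as a small stimulus generator. This is an illustrative sketch, not the study's actual tooling: the phrase list is a hypothetical stand-in built from the example given, and only the NNNaa code format (three digits followed by two lowercase letters) comes from the summary.

```python
import random
import string

# Hypothetical phrase pool; only the first entry appears in the summary.
PHRASES = [
    "watch out for low objects",
    "report position to base",
]

def make_code() -> str:
    """Build a five-character NNNaa code: three digits, then two lowercase letters."""
    digits = "".join(random.choices(string.digits, k=3))
    letters = "".join(random.choices(string.ascii_lowercase, k=2))
    return digits + letters

def make_phrase() -> str:
    """Pick a short phrase mimicking instant-message chat."""
    return random.choice(PHRASES)

# Sanity check on the code format, e.g. "123ge"
code = make_code()
assert len(code) == 5 and code[:3].isdigit() and code[3:].islower()
```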
Metrics were collected both automatically and through subjective interviews after each test. Objective metrics automatically collected included: task completion time, error rate and characters per second. Subjective metrics collected after the test included information such as perceived effectiveness and comfort of each input method.
Participants were provided with written instructions for the overall evaluation and for each specific input method. All participants performed the control condition first with the remaining conditions randomized. The tasks within each condition were administered in a fixed order, consisting of 1) Short Phrases and 2) Mixed Alpha-Numeric codes.
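The presentation order described above (control first, the remaining five conditions randomized, and a fixed task order within each condition) can be sketched as follows. The condition and task names are taken from the summary; the seeding scheme is an assumption for illustration.

```python
import random

# The five non-control conditions; Control is always presented first.
CONDITIONS = ["Peek", "Voice Recognition", "WebCam",
              "On-Screen Soft-Keyboard", "Touch Type"]
# Fixed task order within every condition.
TASKS = ["Short Phrases", "Mixed Alpha-Numeric"]

def session_order(seed: int) -> list[tuple[str, str]]:
    """Return (condition, task) pairs for one participant's session."""
    rng = random.Random(seed)  # per-participant seed is an assumption
    order = ["Control"] + rng.sample(CONDITIONS, k=len(CONDITIONS))
    return [(cond, task) for cond in order for task in TASKS]

schedule = session_order(seed=1)
assert schedule[0] == ("Control", "Short Phrases")
```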
Participants were allowed to practice with each input method until they felt comfortable. Because participants were unlikely to be familiar with the voice recognition and on-screen keyboard input methods, they were given additional instruction and practice time in those conditions.
Quantitative metrics collected during each test included task completion time, characters per second, and error rates. Following the completion of all input methods each participant was given a questionnaire consisting of a Likert scale to gather the perceived effectiveness and comfort of each input method.
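The objective metrics might be computed along these lines. This is a minimal sketch under assumed definitions: characters per second from wall-clock task time, and a position-wise character error rate against the target string (the study's exact error metric is not specified in this summary).

```python
def characters_per_second(typed: str, seconds: float) -> float:
    """Input speed: characters entered divided by task completion time."""
    return len(typed) / seconds if seconds > 0 else 0.0

def error_rate(typed: str, target: str) -> float:
    """Fraction of character positions that differ from the target string.

    A simple position-wise comparison; length differences count as errors.
    This definition is an assumption, not the study's exact metric.
    """
    length = max(len(typed), len(target))
    if length == 0:
        return 0.0
    mismatches = sum(a != b for a, b in zip(typed, target))
    mismatches += abs(len(typed) - len(target))
    return mismatches / length

print(characters_per_second("123fd", 4.0))  # 1.25
print(error_rate("123fe", "123fd"))         # 0.2
```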
It was found that when typing phrases, the On-Screen Soft-Keyboard was slowest for all participants and had some of the lowest effectiveness ratings. All participants, regardless of how fast they were in the other conditions, slowed to around 1-2 characters per second when using the On-Screen Soft-Keyboard. The Peek, WebCam, and Touch Type conditions did not appear to substantially alter the speed at which participants were able to input phrases. We believe this is because participants were simply touch typing phrases most of the time; even when the keyboard was visible, participants did not look at it while typing phrases.
Typing alpha-numeric codes was slower (in terms of characters per second) for all participants than typing phrases. Participants cited two reasons for this: 1) they were less comfortable touch typing numbers than letters, and 2) repositioning the hands from the number keys back to the home (letter) keys took additional time.
The difficulty participants had touch typing numbers is evident in the substantially higher total error rate for touch-typed codes than for touch-typed phrases. Touch typing phrases was not substantially different from the control; however, touch typing codes was the most error-prone condition for all participants.
Participants identified several ways to improve the Voice Recognition and WebCam input methods.
Effectiveness ratings for the Peek, WebCam and Touch Typing conditions were generally more positive than the Voice Recognition and On-Screen Keyboard ratings.