Flight Test Evaluating Virtual Console Technology

The virtual console project is developing an alternative to the current hardware consoles. By using head-worn displays, the operator can be given more accessible display space while consuming less space, weight, and power. While such a workstation could be useful in many applications, the primary target is airborne command and control platforms. As part of Boeing's research into this concept, it is necessary to understand how the system operates in an airborne environment.

This is a summary of our methodology. The full final report is unavailable due to distribution restrictions.

My Role

As the principal researcher, it was my job to see the test process through from beginning to end. I authored the project test plan, worked with our developer to sketch out wireframes for the presented content and story flow, conducted two in-flight tests myself, and collated all data and wrote the final report.

Our Goals

The purpose of the test was threefold:

  • Determine how well the infrared tracking technology performs in the airplane environment.
  • Determine how stable the image is when the tracker is attached to a vibrating airplane cabinet.
  • Understand issues relating to the design of such a system so that risk can be minimized when the design concept is transitioned to an airplane program.

The following specific objectives were defined:

  1. Determine the effect of airplane vibration on motion tracking and image stability.
  2. Determine if the text stability is sufficient for use by AWACS operators.
  3. Determine if the image stability is sufficient for use by AWACS operators.
  4. Determine if the image quality of the Rockwell Collins SX45 is sufficient for AWACS PAD use.
  5. Evaluate issues related to the limited field of view.

Constraints

  1. We will be able to use the TrackIR for motion tracking.
  2. We will have the Rockwell-Collins SX45 HWD with HALO headgear.
  3. We will be able to load our own software onto the borrowed laptops.
  4. We will only have the operators for a short time. The flight down will be scheduled for setup and preparation. The operators will not be as busy as during the exercise, but they will have tasks to perform.
  5. Operator comfort with the Rockwell Collins SX45 cannot be tested at this time. Before comfort testing in the air will be worthwhile, we will need a system in the lab that can be worn for 8 hours. The system we evaluated is clearly not ready, and significant redesign will be needed first.
  6. We cannot connect to the airplane’s mission systems.
  7. We cannot run the AWACS 40/45 PAD and the simulation on the laptop.
  8. We cannot run the “virtual display” version of the virtual console. It requires a special computer with video capture cards. We can only use the “extended desktop” version.
  9. We will be using computers borrowed from the AWACS flight test group. They will most likely run Windows XP, and we may need to destroy the hard drives after the flight.

Test Design

Test subjects were AWACS operators assigned to the Joint Test Fleet. These operators are experienced in AWACS operation and trained in the new block 40/45 software. They were not tested for color or stereo vision.

The aircraft used was the AWACS E-3 Sentry Test System 3. I flew on two separate Joint Expeditionary Force Experiment (JEFX) flights, in which the head-worn display (HWD), infrared motion tracking system, and test laptop were set up at an operator station. Subjects were called back when their duties allowed a window of opportunity to participate in the test scenario.

Method

Before the test, the operators were briefed on the objectives of the project and of the test itself. The limitations of the test system were described, along with a brief description of how the test would be conducted.

Participants were shown a series of static images and simple animations in a slide show sequence. During each slide the operator was asked to perform a specific task, such as read a paragraph of text or locate a specific object among multiple similar objects. During each slide, quantitative measures were taken (e.g., time to complete) as well as qualitative feedback about the task. A subjective questionnaire, concerning the system as a whole, was given following the test.
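The per-slide measurement loop can be sketched in pseudocode form. This is a hypothetical illustration only; the actual test software, slide names, and the `present` and `get_response` callbacks are assumptions, not part of the original report.

```python
import time

def run_slide_tasks(slides, present, get_response):
    """Show each slide, time the operator's task, and record the response.

    `present` and `get_response` are hypothetical callbacks standing in for
    the real display and operator-input steps of the test software.
    """
    results = []
    for slide in slides:
        present(slide)                       # display the slide on the HWD
        start = time.monotonic()
        answer = get_response(slide)         # operator performs the task
        elapsed = time.monotonic() - start   # quantitative measure
        results.append({
            "slide": slide["name"],
            "time_to_complete_s": elapsed,
            "response": answer,              # qualitative feedback
        })
    return results
```

A subjective whole-system questionnaire would follow separately, outside this loop, as described above.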