
Augmented Reality Assistant

The Augmented Reality Assistant for Spacewalks project was developed to provide astronauts with a virtual assistant and Mission Control support during spacewalks on the lunar or Martian surface. This technology aims to enhance the efficiency and safety of spacewalks by providing real-time information and guidance to astronauts as they conduct their extravehicular activities.



Purpose of Augmented Reality Assistant

"In high-stakes environments like spacewalks, where communication is limited and errors can be costly, our AR assistant was designed to empower astronauts with real-time, intuitive support—reducing cognitive load and enhancing mission success."

Research Questions

  • Does the design have any areas that might confuse users?

  • Where is the best location to display alerts so that they are noticed and understood?

  • How should task instructions be presented to enhance performance and reduce cognitive workload?


Methods

This project employed quantitative and qualitative methodologies to address the research questions and gain a comprehensive understanding of user behavior in the AR environment.

Methods Used

  • Questionnaire

  • Completion Rate and Time

  • Think-Aloud Protocol

  • Expert Testing



Cognitive Walkthrough and Resulting Design Improvements

To identify potential usability issues in the early stages of design, we conducted a cognitive walkthrough of the initial AR interface. This method was selected because it allowed us to systematically evaluate the user’s experience when navigating tasks without training—an especially important consideration for high-stakes, time-sensitive EVA activities.


The walkthrough was performed by team members with expertise in human factors and user-centered design, using task-based scenarios and key questions such as:


  • Will the user know what to do at each step?

  • Will they notice and understand the correct options?

  • Will the feedback clearly indicate progress or completion?



Evaluating Alert Visibility and Comprehension in AR

To test how effectively users could detect and respond to in-situ alerts, we conducted a study simulating real-time task interruptions—mirroring the types of attention-demanding interactions astronauts might experience during a spacewalk.


Method Overview: Participants completed a task using the AR assistant while alerts appeared randomly in different locations within the visual field of the headset. Each alert prompted a specific action (e.g., "Draw a circle") that required both noticing and understanding the message.


  • If the action was performed, we inferred the alert was both seen and understood.

  • If the action was not performed, it indicated a potential failure in visual salience or clarity of the instruction.
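The inference rule above can be sketched in code. This is a hypothetical illustration of the study's analysis logic, not the project's actual logging format; the trial fields and region names are assumptions.

```python
# Illustrative sketch: infer "seen and understood" from whether the prompted
# action was performed, and summarize detection rates by alert location.
from dataclasses import dataclass

@dataclass
class AlertTrial:
    region: str             # where the alert appeared (e.g., "center", "periphery")
    action_performed: bool  # did the participant do the prompted action?

def detection_rate_by_region(trials):
    """Proportion of alerts in each region that were acted upon."""
    totals, hits = {}, {}
    for t in trials:
        totals[t.region] = totals.get(t.region, 0) + 1
        hits[t.region] = hits.get(t.region, 0) + (1 if t.action_performed else 0)
    return {r: hits[r] / totals[r] for r in totals}

trials = [
    AlertTrial("center", True), AlertTrial("center", True),
    AlertTrial("periphery", False), AlertTrial("periphery", True),
    AlertTrial("lower", False),
]
print(detection_rate_by_region(trials))  # {'center': 1.0, 'periphery': 0.5, 'lower': 0.0}
```

A per-region summary like this makes it easy to spot whether peripheral or lower placements underperform central ones.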


Post-Task Questionnaire: To complement the behavioral data, participants completed a brief questionnaire asking how many alerts they noticed and how clear they found them. This helped us identify discrepancies between perceived and actual alert recognition and understand how placement, wording, or competing visual elements influenced alert effectiveness.


Why This Matters: In high-stakes environments like spacewalks, missing a single alert could have mission-critical consequences. This study provided direct insights into optimizing alert placement, clarity, and timing—ensuring that critical information is not just delivered, but noticed and understood under cognitive load.


Task Instruction Comparison

One of the primary goals of the AR assistant was to reduce cognitive workload during multi-step procedures, particularly in environments where users may have limited communication or high task demands. To assess the impact of instruction delivery style on performance and mental effort, we compared two formats: just-in-time (step-by-step) and all-at-once.


Method Overview: Participants were asked to complete a complex origami task while wearing the AR headset. One group saw all procedural instructions displayed at once, while the other group received just-in-time instructions—one step at a time—advancing only after pressing a virtual button in the AR interface. Task completion time and success (correct final product) were recorded for both groups.
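The two delivery conditions can be sketched as a minimal presenter model. This is an illustrative assumption about the interface logic, not the project's implementation; class and method names are invented for the example.

```python
# Minimal sketch of the two instruction-delivery modes compared in the study:
# "all_at_once" shows every step; "just_in_time" shows one step and advances
# only when the user presses the virtual "next" button.
class InstructionPresenter:
    def __init__(self, steps, mode="just_in_time"):
        self.steps = list(steps)
        self.mode = mode
        self.index = 0

    def visible_steps(self):
        """Return the steps currently shown in the AR display."""
        if self.mode == "all_at_once":
            return list(self.steps)
        return [self.steps[self.index]] if self.index < len(self.steps) else []

    def advance(self):
        """Advance to the next step (only meaningful in just-in-time mode)."""
        if self.mode == "just_in_time" and self.index < len(self.steps):
            self.index += 1

jit = InstructionPresenter(["Fold in half", "Fold corners", "Crease the base"])
assert jit.visible_steps() == ["Fold in half"]
jit.advance()
assert jit.visible_steps() == ["Fold corners"]
```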


Cognitive Workload Questionnaire: Following the task, participants completed a cognitive workload questionnaire that captured both overall workload and key subscales such as mental demand, effort, frustration, and temporal demand. This allowed us to understand not only how well participants performed but also how taxing the experience was for them.
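As a concrete illustration of scoring such a questionnaire: the unweighted average below (in the style of a "raw" workload score over the subscales named above) is an assumption about aggregation, and the ratings shown are invented example values.

```python
# Illustrative workload scoring: average 0-100 subscale ratings into one score.
def overall_workload(ratings):
    """Unweighted mean of the subscale ratings."""
    return sum(ratings.values()) / len(ratings)

participant = {
    "mental_demand": 70,
    "effort": 60,
    "frustration": 40,
    "temporal_demand": 50,
}
print(overall_workload(participant))  # 55.0
```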


Why This Matters: By comparing these instruction styles, we were able to determine which approach best supports focus, accuracy, and cognitive ease—critical qualities for systems designed for high-stakes environments like space missions. The findings directly informed our decision to implement just-in-time guidance as the default mode in the AR assistant.


Results

The results from each method provided actionable insights that directly informed design improvements to enhance usability, reduce cognitive load, and support task performance in high-stakes AR environments.


Cognitive Walkthrough Results

Key Findings:


  • Menu Structure Confusion: The walkthrough revealed that some menu options were nested too deeply, which increased cognitive effort and created opportunities for user error.

  • Missing Waypoint Cues: There was no clear method for displaying waypoint locations, a critical feature for navigating tasks in unfamiliar or disorienting environments.

  • Task Switching Ambiguity: Users could not easily track task progress and occasionally switched tasks before completing the current one.


Design Changes:


  • Simplified Menu Structure: All menu options were restructured to be accessible within one degree of separation from the main interface, supporting usability principles such as simplicity and consistency.

  • Persistent Waypoint Indicator: A visual cue was added to the AR display to continuously show waypoint locations, improving spatial orientation.

  • Clear Task Progress Display: The interface was updated to clearly differentiate between completed, active, and pending tasks, reducing confusion and enhancing situational awareness.
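The task-progress change can be modeled as a simple state derivation. This is a hypothetical sketch of the display logic, with invented names and example tasks, assuming a single active task at a time.

```python
# Illustrative model behind the updated progress display: each task is shown
# as completed, active, or pending, derived from the index of the active task.
from enum import Enum

class TaskState(Enum):
    COMPLETED = "completed"
    ACTIVE = "active"
    PENDING = "pending"

def task_states(tasks, active_index):
    """Derive every task's display state from the single active task index."""
    states = []
    for i, _ in enumerate(tasks):
        if i < active_index:
            states.append(TaskState.COMPLETED)
        elif i == active_index:
            states.append(TaskState.ACTIVE)
        else:
            states.append(TaskState.PENDING)
    return states

tasks = ["Collect sample", "Photograph site", "Deploy beacon"]
print([s.value for s in task_states(tasks, 1)])  # ['completed', 'active', 'pending']
```

Making the state explicit like this is what lets the interface visually differentiate the three categories and discourage premature task switching.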


Alert Evaluation Results

Key Findings:


  • Alert Visibility Issues: Participants did not consistently see alerts that were displayed in peripheral or lower regions of the visual field. Post-study questionnaires also revealed that some alerts went unnoticed entirely, especially when users were focused on task-relevant visuals.

  • Comprehension Gaps: Even when alerts were seen, users occasionally failed to follow the instruction, indicating that the messaging was either unclear or not contextually grounded enough for quick understanding.

  • Mismatch Between Perceived and Actual Performance: Participants overestimated how many alerts they had seen. This mismatch highlighted a gap between user confidence and actual performance—underscoring the need for more salient and memorable alert design.
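The perceived-vs-actual mismatch is straightforward to quantify: compare each participant's self-reported alert count with the number of alerts they demonstrably responded to. The pairing below uses invented example values; the data format is an assumption.

```python
# Illustrative computation of the overconfidence gap: positive values mean a
# participant reported noticing more alerts than they actually responded to.
def overconfidence(reported_seen, actually_responded):
    return reported_seen - actually_responded

# (reported_seen, actually_responded) per participant -- example values only
participants = [(8, 5), (6, 6), (7, 4)]
gaps = [overconfidence(r, a) for r, a in participants]
print(gaps)  # [3, 0, 3]
```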


Design Changes:


  • Alert Placement Optimization: Alerts were repositioned to appear within the user’s primary line of sight during key tasks. Peripheral alerts were reduced or supplemented with directional cues (e.g., subtle arrows or motion).

  • Standardized Alert Formatting: Instructions were rewritten using consistent phrasing and action verbs. We also paired brief visual icons with text to improve recognition speed and comprehension.

  • Reinforcement Through Redundancy: For critical alerts, both visual and auditory cues were used. This multimodal approach ensured that users had multiple chances to detect and process important instructions.
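The redundancy rule above reduces to a small priority-to-channel mapping. This is an illustrative sketch, not the project's code; the priority labels and channel names are assumptions.

```python
# Illustrative multimodal-redundancy rule: critical alerts are dispatched on
# both the visual and auditory channels, giving users multiple chances to
# detect them; routine alerts stay visual-only.
def alert_channels(priority):
    """Map alert priority to output channels."""
    if priority == "critical":
        return ["visual", "audio"]
    return ["visual"]

assert alert_channels("critical") == ["visual", "audio"]
assert alert_channels("routine") == ["visual"]
```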

Task Instruction Presentation Results

Key Findings:


  • Task Performance: Participants had similar completion times and success rates regardless of whether instructions were presented all at once or step-by-step.

  • Cognitive Workload: When all instructions were shown simultaneously, participants reported higher overall cognitive workload and exertion.

  • User Frustration: Participants using the just-in-time instruction format reported greater frustration, likely due to the need to press a virtual button to advance through steps.


Design Changes:


  • Balanced Instruction Delivery: To reduce cognitive load while minimizing frustration, the final design presents a full task overview at the start of each new activity.

  • Just-in-Time Step Presentation: Following the overview, each step is delivered one at a time, reducing memory demands while maintaining clarity and structure.
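The hybrid delivery described above can be sketched as a simple display sequence: one overview stage, then one stage per step. This is an illustrative sketch with invented names, not the shipped implementation.

```python
# Illustrative hybrid delivery: the display first shows the full task overview,
# then reveals the steps one at a time.
def delivery_sequence(steps):
    """Yield (stage_kind, visible_steps) pairs in display order."""
    yield ("overview", list(steps))
    for step in steps:
        yield ("step", [step])

stages = list(delivery_sequence(["A", "B"]))
# stages == [("overview", ["A", "B"]), ("step", ["A"]), ("step", ["B"])]
```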


Final Design

Below are some of the functions of the AR assistant as tested at NASA's rock yard.

Navigation Feature

Task Instructions

Manual Markers Feature
