
OnVUE Workspace Scan

Using AI guidance to make environment validation easier for candidates and lighter for operations teams.

AI-Guided UX · Computer Vision · Global · Figma
Role
Turned a high-friction four-photo room scan into a guided 360° experience
Helped move workspace validation earlier in the journey
Built rapid prototypes to test and refine the concept quickly
Type of Work
Human-AI collaboration
Rapid Prototyping (Figma)
Usability Testing
High-Level Impact
Reduced live greeter workload and helped lower check-in failure rates
Helped candidates fix environmental issues before entering the live proctoring queue
Reduced operational cost while maintaining strict security compliance

Overview

AI‑guided workspace scan that reduces friction before it becomes a failure.

Business context: Workspace scans are a private, high‑stress step in remote proctoring. When guidance is unclear, candidates make preventable mistakes, and those mistakes become support cost, delays, or exam failure.

What I led: I redesigned the workspace scan experience into an AI‑guided flow that coaches candidates earlier and hands off clearer evidence to reviewers.

Challenge

Manual review created bottlenecks, inconsistency, and unnecessary anxiety.

Human‑reviewed scans were slow and inconsistent at scale. Candidates experienced the process as enforcement rather than support, increasing anxiety during a moment where trust and compliance must coexist.

Role & Team

Lead designer partnering with product, engineering, and policy stakeholders.

  • I led the UX strategy and detailed UI for the guided scan flow.
  • I partnered with engineering to translate model outputs into understandable, actionable feedback.
  • I collaborated with compliance and operations stakeholders to ensure the experience remained policy‑accurate and globally consistent.

OnVUE workspace scan flow screens.

Constraints

Ethical AI, privacy, inclusivity, and reliable fallbacks.

  • Privacy‑sensitive environments require respectful, transparent guidance.
  • False positives and edge cases must degrade gracefully (no dead‑ends).
  • Global consistency matters: rules must be clear across regions and user contexts.

Approach / Process

Shift from enforcement to prevention with coach‑like feedback.

I reframed the experience as coaching: validate earlier, explain “why,” and provide specific fixes candidates can act on. The flow is designed to reduce downstream reviewer load while improving candidate confidence.

Key Design Decisions

Itemized guidance + confidence handoff for trustworthy decisions.

  • Guided capture: structured the scan into clear steps with minimal ambiguity.
  • Itemized feedback: surfaced specific issues and recommended fixes, not generic errors.
  • Confidence handoff: packaged evidence so reviewers can make faster, better‑supported calls.

Outcome / Impact

Reduced operational load and improved consistency at scale.

The redesigned flow reduced time spent per candidate at the “greeter” stage and improved global consistency for workspace validation.

See the Outcomes panel for the key metrics tracked for this initiative.

Reflection

AI works best as a coach, not an enforcer.

The senior design lesson was balancing accuracy with humanity: making AI feedback understandable, actionable, and fair, especially when the user is stressed.

Outcomes
Drastically Reduced
Greeter Time per Candidate
Globally Consistent
Workspace Validation