
OnVUE Smart Review

Designing an AI-assisted review workspace for high-stakes decisions at global scale.

AI/ML UX · HAII · Decision Systems · Figma
Role
Reworked the high-stakes exam review process
Designed the end-to-end workstation for AI-assisted review
Led the design of evidence capture and audit-ready workflows
Type of Work
Internal operations UX
AI-human collaboration
High-stakes decision support systems
High-Level Impact
Supported the shift from costly live proctoring to a scalable record-and-review model
Reduced time to decision for 3-hour sessions to under 10 minutes
Maintained strong reviewer satisfaction and high agreement on decisions

Overview

Evidence-first AI-assisted review for high-stakes decisions.

Business context: Recorded exam review at scale creates a signal-to-noise problem. Reviewers can’t watch thousands of hours manually, and AI flags alone aren’t sufficient for decisions with real consequences.

What I led: I designed an evidence-first, triage-based workstation that ranks sessions by risk, surfaces contextual proof, and keeps human judgment firmly in the loop — a human-in-the-loop design pattern for agentic AI operating under high-stakes constraints.


Challenge

AI flags without context increase risk and reviewer fatigue.

Moving from live proctoring to recorded review shifts the bottleneck: reviewers need to find brief infractions quickly, but raw AI flags can be noisy and can introduce automation bias if presented without supporting evidence.

Role & Team

Owned the review experience and partnered with AI/ML and investigation teams.

  • I led UX strategy and UI design for the end-to-end reviewer workflow.
  • I partnered with AI/ML stakeholders to translate model outputs into understandable evidence cues.
  • I collaborated with investigators/reviewers to ensure decisions were well-supported and auditable.
Figure: Smart Review investigator workflow screens.

Constraints

High-stakes adjudication, auditability, and automation-bias risk.

  • Decisions must meet explainability standards — defensible, auditable, and traceable across the review chain.
  • The design must reduce automation bias by prioritizing evidence over “confidence scores.”
  • Throughput matters, but not at the cost of accuracy and fairness.

Approach / Process

Triage first, evidence always, and structured decisions.

I structured the workflow around triage, helping reviewers spend attention where it matters most, then built an evidence model that makes each flag inspectable and comparable. The UI supports fast review without turning the human into a rubber stamp.
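The triage idea above can be sketched in code. This is a minimal, hypothetical illustration of risk-ranked session ordering; the flag types, weights, and field names are assumptions for the example, not the production Smart Review model.

```python
from dataclasses import dataclass

@dataclass
class Flag:
    kind: str          # e.g. "second_person", "audio_anomaly" (illustrative labels)
    confidence: float  # model confidence, 0.0-1.0
    timestamp_s: int   # offset into the recording, for evidence playback

@dataclass
class Session:
    session_id: str
    flags: list[Flag]

def risk_score(session: Session) -> float:
    # Weight severe flag types more heavily than raw confidence alone,
    # so one high-severity event outranks many low-value flags.
    severity = {"second_person": 3.0, "audio_anomaly": 1.5}
    return sum(severity.get(f.kind, 1.0) * f.confidence for f in session.flags)

def triage(sessions: list[Session]) -> list[Session]:
    # Highest-risk sessions first; clean sessions sink to the bottom of the queue.
    return sorted(sessions, key=risk_score, reverse=True)
```

Keeping the timestamp on each flag is what makes the queue evidence-first: the reviewer lands on the moment in question rather than scrubbing the full recording.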


Key Design Decisions

Design for adjudication: proof packaging and confidence handoffs.

  • Evidence-first layout: surfaced the “why” behind flags, not just the flag itself.
  • Structured decisions: made outcomes consistent and telemetry-ready.
  • Decision confidence: enabled reviewers to close cases with the right level of certainty.
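A structured, telemetry-ready decision might look like the sketch below. The outcome labels and record fields are assumptions made for illustration, not the actual schema.

```python
from dataclasses import dataclass, asdict
from enum import Enum

class Outcome(Enum):
    NO_VIOLATION = "no_violation"
    VIOLATION = "violation"
    ESCALATE = "escalate"  # hand off to an investigator

@dataclass
class Decision:
    session_id: str
    outcome: Outcome
    evidence_timestamps: list[int]  # moments the reviewer actually inspected
    rationale: str                  # free-text justification for the audit trail
    reviewer_id: str

def to_telemetry(d: Decision) -> dict:
    # Flatten to a serializable record so every decision is traceable
    # and comparable across reviewers.
    record = asdict(d)
    record["outcome"] = d.outcome.value
    return record
```

Constraining outcomes to an enum and requiring cited evidence is what makes decisions consistent across reviewers and defensible in an audit.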

Outcome / Impact

Higher throughput with stronger agreement and clearer decisions.

The workstation improved review throughput versus live proctoring while supporting consistent decisions and strong agreement between reviewers and investigators.

See the Outcomes panel for the throughput and agreement metrics tracked.

Reflection

In agentic AI systems, the design contract is explainability — not the model.

The senior design work is making complex signals usable and fair: clear evidence, defensible decisions, and a workflow that respects human judgment.

Outcomes
10x Increase
Throughput vs. Live Proctoring
98% Agreement
Between Reviewers and Investigators