Designing a human-centered clinical decision support experience grounded in physician research and AI safety requirements

Timeline: Dec 2025 – Jan 2026

Summary

Lumia IA is a concept-to-design initiative for a clinical AI assistant aimed at helping physicians confirm diagnoses faster, reduce cognitive overload, and keep the clinical decision process clearly human-led. The project started with a key tension: clinicians want speed and clarity, while AI specialists require safety, transparency, and auditability.

I led the UX/UI work end-to-end, combining discovery interviews with physicians and AI specialists, defining the product experience principles, and designing an interface that supports decision-making without positioning AI as “infallible.” The result was a cohesive experience direction and a set of approved layouts ready for validation and iteration.

Context

Clinical workflows are time-constrained and high-stakes. Physicians routinely deal with incomplete information, shifting hypotheses, and the need to justify decisions. In this environment, an AI assistant must be designed as decision support, not decision replacement—making sources, confidence, and limitations explicit while fitting naturally into how doctors think and document.

My Role

I served as Lead UX/UI Designer, working with the AI specialists who requested the project and interviewing practicing physicians to ground the product in real workflow needs.

The problem

Goals

Constraints and approach

The project needed to move quickly while staying credible to two audiences with different expectations: clinicians and AI stakeholders.

Key decisions:

Discovery and research

I grounded the design in two perspectives:

Physician interviews

AI specialist interviews (project requesters)

Key improvements (iteration highlights)

1) A clinician-first information hierarchy

Designed screens to prioritize what clinicians need in sequence:

2) Transparent “suggestion” model with controlled language

To prevent over-trust and reduce friction in review:
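One way this controlled-language model could be represented in the front end is sketched below. This is a hypothetical illustration, not the actual Lumia IA implementation: the `Suggestion` shape, field names, and confidence thresholds are assumptions, but they show the core idea of making sources, confidence, and limitations explicit while keeping the UI copy non-absolute.

```typescript
// Hypothetical shape for an AI suggestion payload. Field names are
// illustrative, not taken from the actual Lumia IA spec.
interface Suggestion {
  condition: string;     // candidate diagnosis being suggested
  confidence: number;    // model confidence in [0, 1]
  sources: string[];     // evidence the model drew on (e.g. lab results)
  limitations: string[]; // explicit caveats surfaced to the clinician
}

// Map numeric confidence to controlled, hedged language so the UI
// never presents a suggestion as a conclusion. Thresholds are assumed.
function confidenceLabel(confidence: number): string {
  if (confidence >= 0.8) return "Strongly consistent with";
  if (confidence >= 0.5) return "Consistent with";
  return "May warrant consideration of";
}

function renderSuggestion(s: Suggestion): string {
  return `${confidenceLabel(s.confidence)} ${s.condition}`;
}
```

Centralizing the phrasing in one function also makes the vocabulary auditable: AI stakeholders can review every wording the interface is capable of producing in a single place.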

3) A review-and-confirm workflow that matches clinical decision-making

Instead of “accepting AI outputs,” the flow supports:
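A review-and-confirm flow like this can be modeled as a small state machine in which the AI can only ever propose, and every later transition requires an explicit physician action. The states and action names below are assumptions sketched for illustration, not the product's actual implementation:

```typescript
// Hypothetical review states for a suggestion. The AI can only place a
// suggestion in "proposed"; all subsequent transitions are physician-driven,
// which keeps the decision process clearly human-led and auditable.
type ReviewState = "proposed" | "underReview" | "confirmed" | "dismissed";
type PhysicianAction = "openReview" | "confirm" | "dismiss";

// Allowed transitions, keyed by current state.
const transitions: Record<ReviewState, Partial<Record<PhysicianAction, ReviewState>>> = {
  proposed:    { openReview: "underReview" },
  underReview: { confirm: "confirmed", dismiss: "dismissed" },
  confirmed:   {}, // terminal: decision recorded
  dismissed:   {}, // terminal: suggestion set aside
};

function applyAction(state: ReviewState, action: PhysicianAction): ReviewState {
  const next = transitions[state][action];
  if (!next) throw new Error(`Action "${action}" not allowed in state "${state}"`);
  return next;
}
```

Because invalid transitions throw, there is no code path by which a suggestion becomes "confirmed" without a physician first opening it for review.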

4) Feedback loop designed for safety and iteration

Designed lightweight feedback moments that do not interrupt workflow:
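One plausible way to make such feedback moments non-blocking is to capture them locally and submit them later, off the clinical path. The event shape and queue below are a sketch under that assumption, not the shipped design:

```typescript
// Hypothetical lightweight feedback event, captured in a single tap and
// queued for later submission so it never interrupts the clinical workflow.
interface FeedbackEvent {
  suggestionId: string;
  verdict: "helpful" | "notHelpful";
  note?: string;      // optional free text; never required
  capturedAt: number; // epoch milliseconds
}

class FeedbackQueue {
  private events: FeedbackEvent[] = [];

  // Record feedback instantly; no network call on the clinical path.
  record(suggestionId: string, verdict: FeedbackEvent["verdict"], note?: string): void {
    this.events.push({ suggestionId, verdict, note, capturedAt: Date.now() });
  }

  // Drain the queue when the app is idle (e.g. on background sync).
  drain(): FeedbackEvent[] {
    const batch = this.events;
    this.events = [];
    return batch;
  }
}
```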

5) Visual system aligned to clinical trust + human warmth

Applied the Lumia visual direction (clean, minimal, humanized tech):

Approved layouts

The final layouts were designed by me, reviewed with the AI specialists, and validated through physician feedback sessions.

Outcomes

Based on validation sessions and early prototype feedback:

What this demonstrates

Next steps