Designing a human-centered clinical decision support experience grounded in physician research and AI safety requirements
Timeline: Dec 2025 – Jan 2026
Summary
Lumia IA is a concept-to-design initiative for a clinical AI assistant aimed at helping physicians confirm diagnoses faster, reduce cognitive overload, and keep the clinical decision process clearly human-led. The project started with a key tension: clinicians want speed and clarity, while AI specialists require safety, transparency, and auditability.
I led the UX/UI work end-to-end, combining discovery interviews with physicians and AI specialists, defining the product experience principles, and designing an interface that supports decision-making without positioning AI as “infallible.” The result was a cohesive experience direction and a set of approved layouts ready for validation and iteration.
Context
Clinical workflows are time-constrained and high-stakes. Physicians routinely deal with incomplete information, shifting hypotheses, and the need to justify decisions. In this environment, an AI assistant must be designed as decision support, not decision replacement—making sources, confidence, and limitations explicit while fitting naturally into how doctors think and document.
My Role
Lead UX/UI Designer. I worked directly with the AI specialists who requested the project and interviewed practicing physicians to ground the product in real workflow needs.
The problem
- Physicians were overloaded by scattered information and time pressure during case review
- “AI suggestions” without context can reduce trust and adoption
- The product needed to balance speed with governance: transparency, traceability, and safe language
- The experience needed to remain human, clinical, and calm—avoiding “magic AI” framing
Goals
- Design a workflow that helps clinicians confirm a diagnostic direction faster
- Make reasoning transparent: evidence, confidence, and “why this suggestion”
- Reduce friction: fewer clicks to reach decision-critical info
- Support auditability and accountability (what was suggested, what was accepted, what was rejected, and why)
- Create an experience foundation that can evolve with model improvements without breaking trust
Constraints and approach
The project needed to move quickly while staying credible to two audiences with different expectations: clinicians and AI stakeholders.
Key decisions:
- Treat trust as a product feature: sources, confidence, and limitations are part of the UI
- Keep the clinician in control: “review and confirm” interaction pattern
- Design for real clinical rhythm: skim → focus → verify → document
- Avoid prohibited language and unrealistic claims (no “infallible,” no “automatic diagnosis”)
- Build for future scalability via reusable patterns and a clear information hierarchy
Discovery and research
I grounded the design in two perspectives:
Physician interviews
- Interviewed physicians to understand how they confirm diagnoses, where they lose time, and what would make an assistant genuinely usable.
- Key themes (illustrative but realistic):
  - Fast access to the “why” matters more than the final answer
  - Clear separation between patient facts vs AI interpretation builds trust
  - Doctors want assistance with summarization, differential hypotheses, and red flags—while keeping authorship of the decision
AI specialist interviews (project requesters)
- Interviewed AI specialists to define constraints for safety, explainability, and responsible use.
- Key themes:
  - Traceability is required for internal review and governance
  - Confidence must be communicated carefully (avoid false certainty)
  - The interface needs structured feedback loops to improve model behavior safely
Key improvements (iteration highlights)
1) A clinician-first information hierarchy
Designed screens to prioritize what clinicians need in sequence:
- Patient overview (high-signal summary, timeline, key vitals/labs)
- Clinical flags (risk triggers + missing critical info)
- Differential suggestions presented as hypotheses, not conclusions
- Evidence panel with sources and rationale (what data contributed)
2) Transparent “suggestion” model with controlled language
To prevent over-trust and reduce friction in review:
- Suggestions shown with confidence as ranges / tiers (not absolute claims)
- Explicit “reasons” and evidence links visible at decision time
- Clear disclaimers and interaction copy reinforcing human accountability
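The tiered-confidence idea above can be sketched as a small mapping from a raw model score to the cautious, hedged copy shown at decision time. This is a minimal illustration only: the thresholds, function name, and label wording are assumptions, not the product's actual values.

```python
# Hypothetical sketch: translating a raw 0-1 confidence score into
# tiered, hedged UI language instead of an absolute claim.
# Thresholds and copy below are illustrative assumptions.

def confidence_tier(score: float) -> str:
    """Map a raw confidence score to a cautious, range-based UI label."""
    if not 0.0 <= score <= 1.0:
        raise ValueError("confidence score must be between 0 and 1")
    if score >= 0.8:
        return "Strong supporting evidence - review recommended"
    if score >= 0.5:
        return "Moderate supporting evidence - verify key data"
    return "Limited supporting evidence - treat as a prompt, not a lead"
```

Keeping this mapping in one place makes the controlled language auditable: copy reviewers and AI specialists can vet every phrase the model's confidence can surface.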
3) A review-and-confirm workflow that matches clinical decision-making
Instead of “accepting AI outputs,” the flow supports:
- Compare hypotheses quickly
- Validate evidence and missing data
- Record a clinician decision with a short rationale
- Export/share a structured summary suitable for documentation
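The audit requirement behind this flow (what was suggested, what was accepted or rejected, and why) can be sketched as a simple decision record. The field names and decision values here are illustrative assumptions about the underlying schema, not the shipped data model.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical sketch of the audit record behind review-and-confirm:
# the AI hypothesis reviewed, the clinician's decision, and a short
# rationale. Field names and values are illustrative assumptions.

@dataclass
class DecisionRecord:
    suggestion_id: str       # which AI hypothesis was reviewed
    decision: str            # e.g. "accepted" | "rejected" | "deferred"
    rationale: str           # short clinician-authored justification
    clinician_id: str        # accountability: who made the call
    evidence_refs: list = field(default_factory=list)
    recorded_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

    def to_audit_line(self) -> str:
        """One structured line per decision for an audit-ready history."""
        return (f"[{self.recorded_at}] {self.clinician_id} "
                f"{self.decision} {self.suggestion_id}: {self.rationale}")
```

A record like this keeps the clinician's authorship explicit in the data itself, which is what makes the exported summary defensible in governance review.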
4) Feedback loop designed for safety and iteration
Designed lightweight feedback moments that do not interrupt workflow:
- “Helpful / not helpful” with optional structured reasons
- Ability to flag problematic suggestions
- Capturing intent without demanding long forms from clinicians
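The "structured reasons without long forms" idea can be sketched as a tiny validated payload: one required signal, optional reason codes from a fixed vocabulary, and a flag. The reason codes and function name are hypothetical examples, not the product's actual taxonomy.

```python
# Hypothetical sketch of lightweight, structured feedback capture:
# one required helpful/not-helpful signal plus optional reason codes,
# so clinicians never face a long form. Reason codes are illustrative.

ALLOWED_REASONS = {"wrong_evidence", "outdated_data", "unsafe_wording",
                   "not_relevant", "other"}

def build_feedback(helpful, reasons=None, flagged=False):
    """Validate and package a feedback event for safe model iteration."""
    reasons = list(reasons or [])
    unknown = set(reasons) - ALLOWED_REASONS
    if unknown:
        raise ValueError(f"unknown reason codes: {sorted(unknown)}")
    return {"helpful": bool(helpful), "reasons": reasons,
            "flagged": bool(flagged)}
```

Constraining reasons to a fixed vocabulary is what makes the feedback usable downstream: free text is optional, but the structured part can be aggregated safely.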
5) Visual system aligned to clinical trust + human warmth
Applied the Lumia visual direction (clean, minimal, humanized tech):
- Calm hierarchy, restrained color usage, soft gradients and curves
- Avoided “sci-fi AI” aesthetics
- Ensured the UI feels clinical and dependable, not experimental
Approved layout
I designed the final layouts, reviewed them with the AI specialists, and validated them through physician feedback sessions.
Outcomes
Based on validation sessions and early prototype feedback:
- Physicians completed “case understanding → decision direction” faster (illustrative: ~25–30% reduction in time-to-orientation)
- Higher reported confidence when evidence and limitations were visible (illustrative: +15–20% uplift in perceived trust)
- Reduced clarification cycles with stakeholders due to explicit states, structured rationale, and audit-ready history (illustrative: fewer back-and-forths during review)
What this demonstrates
- Product-led UX in a high-stakes domain balancing speed with governance
- Ability to integrate stakeholder constraints (AI safety) with real user needs (clinicians)
- Strong focus on trust primitives: explainability, traceability, and controlled language
- Designing for adoption: workflow fit, minimal friction, and clear accountability
Next steps
- Run a broader round of usability tests with more specialties and contexts
- Validate documentation/export flows against real clinical note requirements
- Define success metrics for a pilot (time-to-orientation, adoption, disagreement rate, feedback quality)
- Prepare a scalable design system for additional modules (labs, imaging summaries, integration points)