01 Overview
Why doctors spend more time
on screens than patients
The average doctor spends 6 hours a day on administrative tasks — writing notes, searching patient history, cross-referencing symptoms with diagnoses, and documenting prescriptions. That's 6 hours not spent with patients. The result? Physician burnout, diagnostic errors from fatigue, and patients who feel rushed through 10-minute consultations.
This case study is a research-only investigation: I set out to understand whether an AI co-pilot for doctors during consultations is a real opportunity, and if so, what shape it should take. No product was built. The output is a problem framing and a set of design proposals grounded in evidence — a foundation for a future product team to act on.
"Before designing a solution, design the question. Medical AI is everywhere — but the right medical AI for a 10-minute consultation is still an open problem."
— Research premise
49%
of physicians report symptoms of burnout, largely due to EHR administrative load
6hrs
Average hours per day doctors spend on documentation, versus 5 hours with patients
23%
of diagnostic errors are attributed to cognitive overload and fatigue
02 Understanding the User
What doctors actually need
during consultations
To make sure this research wasn't another solution looking for a problem, I spent four weeks listening. I shadowed doctors, ran contextual inquiries, and interviewed healthcare professionals across three specialties (general practice, internal medicine, cardiology) to understand the real shape of the consultation workflow.
👨‍⚕️
Clinical Observations (12)
Shadowed doctors during patient consultations to understand workflow, pain points, interruptions, cognitive load, and where AI assistance would be most valuable vs intrusive.
🎙
Physician Interviews (8)
60-minute deep-dive sessions with GPs, internists, and cardiologists. Focused on documentation burden, diagnostic decision-making, and attitudes toward AI in clinical settings.
📊
Survey (34 doctors)
Quantitative study on time spent per task (documentation, diagnosis, patient interaction), biggest frustrations, trust in AI, and willingness to adopt new tools in workflow.
🔍
Competitive Analysis
Evaluated existing medical AI tools (Nuance Dragon, Suki AI, Nabla Copilot) and traditional EHR systems (Epic, Cerner) to identify gaps in usability and workflow integration.
Critical Research Finding
Doctors don't want AI to make decisions for them — they want AI to prepare the ground so they can make better decisions faster. The ideal AI co-pilot surfaces relevant information, suggests possibilities, and handles documentation — but always defers final judgment to the physician.
User Persona
👩‍⚕️
Dr. Anjali Deshmukh
General Practitioner · 42 · Mumbai
Dr. Deshmukh sees 35-40 patients a day in her clinic. She's exhausted by the time she finishes evening consultations, then spends 2 more hours writing clinical notes. She wishes she could spend more time listening to patients instead of typing. She's skeptical of AI but willing to try anything that reduces documentation burden — as long as it doesn't slow her down or make errors.
High patient volume (35-40/day)
Hates documentation
Skeptical but pragmatic about AI
Values patient connection over efficiency
Needs accuracy, not speed at the cost of mistakes
Dr. Deshmukh's needs boiled down to: real-time transcription (so she can stop typing), intelligent suggestions (not intrusive interruptions), and trustworthy accuracy (zero tolerance for hallucinated diagnoses).
03 Problem Definition
Translating pain points into
design questions
From the research synthesis, I identified five core problem areas in the consultation workflow and translated each into an actionable "How Might We" question — the framing a future product team could use as a starting point.
Pain points identified:
- Documentation overload — Doctors spend 40% of consultation time typing notes instead of listening to patients
- Context switching — Constantly toggling between EHR tabs to find patient history, lab results, past prescriptions
- Cognitive fatigue — By patient #30, diagnostic accuracy drops due to mental exhaustion
- Interruption anxiety — Doctors fear AI will disrupt patient rapport or suggest incorrect diagnoses mid-consultation
- Trust deficit — Physicians worry about liability if AI makes a mistake they didn't catch
How Might We questions:
HMW automate documentation without breaking patient eye contact?
HMW surface relevant patient data without forcing doctors to search?
HMW support diagnostic thinking without being intrusive?
HMW build trust through transparency about AI confidence levels?
HMW integrate into workflow without adding clicks or cognitive load?
"The interface should be invisible. The doctor's attention belongs to the patient, not the screen."
— Design principle to carry into the solution
04 Proposed Direction
What a solution
could look like
Based on the research, I've put forward a directional proposal — not a product, but a set of principles and capability ideas that a future design and engineering team could build against. Each suggestion ties directly back to a pain point identified in Section 03.
PROPOSAL 01
A sidebar layout, not an overlay or new app
Doctors won't switch out of their EHR (Epic, Cerner). The right form factor is a sidebar that lives alongside the existing system — providing ambient information without demanding focus. Overlays interrupt; sidebars inform. Doctors can glance, not context-switch. This also avoids the political and technical cost of replacing entrenched EHR infrastructure.
PROPOSAL 02
Passive transcription — the highest-value use case
Real-time transcription of the doctor-patient conversation was the single most-requested capability in the research. Doctors typing while patients talk is the central wound of the modern consultation. The AI should listen silently and capture everything, with no voice commands required — talking to the AI during a consultation breaks patient rapport. This capability should ship first, as the wedge into the product.
PROPOSAL 03
Confidence scores on every suggestion — transparency over magic
Every AI output (differential diagnosis, suggested investigation, billing code) should carry a visible confidence indicator. A simple 3-color dot system (🟢 high, 🟡 medium, 🔴 low) is faster to parse than percentages and reduces cognitive load. Doctors know when to trust at a glance and when to verify — turning AI from a black box into a transparent assistant.
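The mapping from a raw model score to a traffic-light dot could be as small as a single function. This is an illustrative Python sketch; the 0.85 / 0.60 cutoffs are placeholder assumptions, not clinically validated thresholds — real cutoffs would need calibration against physician review data.

```python
def confidence_dot(score: float) -> str:
    """Map a model confidence score in [0, 1] to a traffic-light dot.

    The 0.85 and 0.60 thresholds are illustrative placeholders; production
    cutoffs would require clinical calibration and validation.
    """
    if not 0.0 <= score <= 1.0:
        raise ValueError("confidence score must be in [0, 1]")
    if score >= 0.85:
        return "🟢"  # high confidence: glance and trust
    if score >= 0.60:
        return "🟡"  # medium confidence: worth a quick check
    return "🔴"      # low confidence: verify before acting
```

The point of the three-bucket design is that the doctor parses color peripherally, without reading a number — the dot rides alongside each suggestion in the sidebar.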
PROPOSAL 04
Differential diagnoses, never a single answer
AI should surface 3-5 ranked possibilities, not declare a diagnosis. The point is to support clinical reasoning, not replace it. A single-answer AI feels authoritative; a ranked list invites the doctor to think. This framing also reduces liability anxiety — the AI never tells the doctor what to do, only what to consider.
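The "3-5 ranked possibilities, never one answer" rule can be enforced in the data model itself. A minimal sketch, assuming the model emits scored candidates; the `Differential` shape and the empty-list fallback are my own illustrative choices, not a specification:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Differential:
    condition: str    # candidate diagnosis surfaced by the model
    confidence: float # model score in [0, 1]
    rationale: str    # why the model surfaced it (shown to the doctor)

def rank_differentials(candidates: list[Differential],
                       min_count: int = 3,
                       max_count: int = 5) -> list[Differential]:
    """Return a ranked shortlist of 3-5 possibilities, never a single answer.

    Fewer than three plausible candidates reads as a verdict, so the sketch
    surfaces nothing at all in that case; more than five adds cognitive load.
    """
    ranked = sorted(candidates, key=lambda d: d.confidence, reverse=True)
    if len(ranked) < min_count:
        return []  # decline to suggest rather than imply one answer
    return ranked[:max_count]
```

Encoding the window in the function signature means no UI path can accidentally render a lone, authoritative-looking diagnosis.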
PROPOSAL 05
Manual approval gate before anything is saved
This emerged as the single strongest signal from research. AI-generated SOAP notes, billing codes, and prescriptions must require explicit physician approval before being committed to the patient record. Auto-save is a liability risk. One-click approval if accurate, easy edit if not. Slower? Marginally. Trusted? Massively. This is what makes the difference between an assistant and an autonomous agent.
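The approval gate is naturally a small state machine: AI output starts as a draft and cannot reach the record without an explicit physician action. A minimal Python sketch of that invariant; the class and state names are illustrative assumptions:

```python
from enum import Enum

class NoteState(Enum):
    DRAFT = "draft"        # AI-generated, not yet in the patient record
    APPROVED = "approved"  # physician explicitly signed off

class SoapNote:
    """AI-drafted SOAP note that cannot be committed without approval."""

    def __init__(self, text: str):
        self.text = text
        self.state = NoteState.DRAFT

    def edit(self, new_text: str) -> None:
        # Any edit returns the note to draft, forcing re-approval.
        self.text = new_text
        self.state = NoteState.DRAFT

    def approve(self) -> None:
        self.state = NoteState.APPROVED

    def commit(self, record: list) -> None:
        # The gate: nothing reaches the patient record without approval.
        if self.state is not NoteState.APPROVED:
            raise PermissionError("note requires explicit physician approval")
        record.append(self.text)
```

One-click approval is `approve()` then `commit()`; there is deliberately no auto-save path.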
PROPOSAL 06
EHR integration via API, not replacement
A standalone AI tool that requires doctors to leave Epic/Cerner is dead on arrival. The product must integrate with existing EHR systems via API, embedding the sidebar alongside the current interface. This requires deep partnership work with major EHR vendors — slow and expensive, but the only path to actual adoption. Without it, the product lives in pilots forever.
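In practice, EHR read access increasingly goes through HL7 FHIR endpoints (e.g. a `Condition` search per patient returning a `Bundle`). As a sketch of what the sidebar's "surface patient history without searching" step could look like, here is a parser for such a Bundle — the fetch itself (`GET [base]/Condition?patient={id}`) is assumed to have already happened, and the exact integration path with any given vendor is an open question:

```python
def summarize_conditions(bundle: dict) -> list[str]:
    """Extract condition names from a FHIR search Bundle (parsed JSON).

    Assumes the sidebar already fetched the Bundle from the EHR's FHIR
    endpoint; this only flattens it for at-a-glance display. Prefers the
    human-readable `code.text`, falling back to a coding's `display`.
    """
    names = []
    for entry in bundle.get("entry", []):
        resource = entry.get("resource", {})
        if resource.get("resourceType") != "Condition":
            continue  # Bundles can mix resource types; keep Conditions only
        code = resource.get("code", {})
        name = code.get("text") or next(
            (c.get("display") for c in code.get("coding", []) if c.get("display")),
            None,
        )
        if name:
            names.append(name)
    return names
```

Keeping the sidebar read-only against FHIR (writes go through the approval gate of Proposal 05) limits both integration surface and liability.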
Closing Note
These proposals are directional, not prescriptive. The intent is to give a future product team a clear, evidence-backed starting point — a shared understanding of what doctors want, what they don't, and what would actually earn a place in their workflow. The most-loved medical AI won't be the most powerful one. It'll be the one that earns trust through transparency, defers to physician judgment, and disappears into the background so the doctor can do what they trained a decade to do — heal people.