Your doctor sits across from you, fully present, listening—not typing or glancing at a screen. Yet every important detail you share makes it into your medical record. This is the vision of Christopher Sharp, a physician at Stanford Health Care and chief medical information officer at Stanford University Medical Center. For Sharp, technology shouldn’t create barriers between doctors and patients; it should free clinicians from tiring administrative tasks so they can provide better care. At Stanford, he was an early adopter of artificial intelligence tools to transcribe and analyze medical histories.
Sharp arrived at Stanford University School of Medicine as a resident in internal medicine in the late 1990s. A graduate of Dartmouth College’s Geisel School of Medicine, he continues to see patients as a primary care doctor at Stanford Health Care—and it is this work that most clearly shows him the benefits and risks of technology.
Scientific American spoke to Sharp about how AI is changing medicine and how to use it to support patients and doctors.
[An edited transcript of the interview follows.]
Why did you decide to begin working with AI at Stanford?
AI provides an important window to access data locked up in narratives somewhere in the record that would be very hard to identify or find. It also provides the opportunity to utilize data in new ways that do not require as much effort from our clinicians.
Our clinicians spend a lot of time digging through electronic data, summarizing it and making decisions. Documentation is very important—it’s how we convey clinical information forward, mitigate risk, and meet legal, compliance and billing requirements. But all of that creates added burden and is not the primary value of providing direct care. AI tools that help with those administrative functions are a really big win for our clinical providers.
How does it help with summarizing patient records?
We use a tool that summarizes the key activities described in clinician notes. It helps us say, “The medicine doctor has been treating them for diagnosis A and B, the urologist has seen them for diagnosis C, the neurologist has been seeing them for diagnosis E”—without our having to go into each area of the chart manually. What makes this powerful is that it has citations, which allow the doctor to do validation and deeper exploration.
The other exciting tool is something we call ChatEHR (“EHR” stands for electronic health record). Different clinicians have different questions at any given moment, so we’ve started experimenting with an open platform where users can use a chat interface to engage with patient data. It offers flexibility to ask about a certain aspect of care and then continues chatting to go deeper.
Can you give an example of how ChatEHR has been useful?
We needed to screen multiple charts to find patients eligible for a particular care pathway. Previously, many people had to read through charts manually. We used ChatEHR to experiment, and once optimized, built it into an automation. With a single click, multiple charts could be reviewed and presented back to one screener. For instance, some patients might be eligible to go to a lower acuity unit rather than to the general hospital where they’re mixed with higher-acuity patients. If we can identify those patients, we can help them go to the most appropriate location of care. What might take hours now takes minutes.
You also use ambient AI scribe software that listens to appointments. How has that been received?
This has been one of our biggest successes. We rolled it out more than a year ago with very rapid adoption. The AI scribe is easy to adopt—clinicians use their phones to record the conversation, and the tool creates a transcript and generates a medical summary within a minute of completing the interaction.
It’s medically focused. If you and your patient have a long chat about their golf game before discussing their clinical problem, that won’t be transcribed into the summarization. Only the clinically important points appear.
Has this reduced doctor burnout?
Absolutely. Our clinicians felt this approach was much better in terms of their cognitive load and their overall wellness in the workplace. The cognitive work of summarizing such a conversation is significant. I should note we thought we’d see tremendous efficiency—that doctors might go home sooner or see more patients because they’d spend less time documenting. What we found was that clinicians spent a fair amount of time reviewing, editing and approving the documentation, so they weren’t taking much less time. It wasn’t efficiency they gained as much as reduced cognitive burden.
You also use AI to draft responses to patient messages. How is that working?
We saw a 200 percent increase in patient messaging during COVID, and it hasn’t gone down. That created a challenge for clinicians to absorb all that engagement. We were one of the first in the nation to use AI-generated draft responses as starting points. This requires clinicians to evaluate for accuracy and voice—patients like hearing back from their clinician in their own voice. Again, this is not an immense time-saver, but it reduces the burden of coming up with language that is both accurate and empathetic. It creates the opportunity for clinicians to spend more time honing language rather than developing it from scratch. The AI also looks back at information in the patient’s chart for context. I’ve been struck that sometimes it reminds me of something I might not have remembered myself.
Where do you see this technology going next?
The evolution is amazingly quick. The AI scribe had many more errors when we started than it does today. We’re also seeing additions. We’re experimenting with suggested orders. If I say, “I want to make sure you get a chest x-ray to rule out pneumonia,” the listening tool can tee up that order for review and approval.
The next significant change will be when these technologies become more directly available to patients. Instead of navigating our portal, patients may be able to just ask a question and have AI navigate to the right interaction.
With doctors’ cognitive burden decreasing, do you think we’ll eventually see differences in patient outcomes?
This is the holy grail—tools so beneficial that we’d see changes in care. We’ve not studied this enough to know yet. But there are fascinating studies showing that time of day affects care—patients who are seen early are more likely to have preventive care reminders discussed than [those who are seen] late in the day, when doctors are tired. My hope is that these tools will even out those unwanted variations.
Is there a specific moment that convinced you this was the right path?
I vividly recall sitting with a patient who told me about how her sister had died. It was important not to be typing and just to be really looking at her and supporting her. During that conversation, she shared important details about her family’s health history. I never reached over to my keyboard to document those clinical details, but they were captured by the AI.
I was struck when I read the summarization—it simply said the patient’s sister had died and noted her health condition, even though I’d had a very emotional connection with my patient during that moment. That was an example where the machine did what the machine does really well, and I did what a human does well.
A version of this article appeared in the March 2026 issue of Scientific American as “Christopher Sharp.”