“Stop worrying and let AI help save your life,” a New York Times op-ed prescribed in January. So I wanted to ask Dr. Robert Wachter, author of that essay, “why not worry?” and read his latest book, A Giant Leap, before speaking with him.

I spent time with Bob on a Zoom session last week to dig into some patient/consumer-facing issues regarding AI and health care. Before I launch into our engaging conversation, I’d like to share some context about Dr. Wachter and his evolution that I believe makes him someone to trust at this noisy and chaos-before-creation time for AI in health care.
After completing medical school at the University of Pennsylvania, Bob did a residency in internal medicine at UC San Francisco and eventually joined UCSF in 1990. An important and less-known aspect of this early part of his career as a physician in San Francisco is that Bob was in the first generation of doctors dealing with a new and deadly disease that would come to be known as AIDS. He shared his experience in his 1991 book, The Fragile Coalition: Scientists, Activists and AIDS, which is an important resource on patients, activism, and health citizenship that sits on my own bookshelf next to Randy Shilts’ masterwork, And the Band Played On.
In 1996, Bob coined the term “hospitalist” in the New England Journal of Medicine and is credited as a pioneer in the specialty – a crucial pillar for patient safety in hospitals.

Three decades later, Bob wrote A Giant Leap to explore how AI is transforming healthcare and what it means for our future – the subtitle of the book. Because I see Bob as an ever-evolving lifelong learner, I see this new book as an informative, engaging follow-up to The Digital Doctor: Hope, Hype, and Harm at the Dawn of Medicine’s Computer Age, which he published in 2015 (and another must-read that has aged very well).
To kick off our chat, I asked about the promise of AI for the doctor-patient relationship.
AI and the doctor-patient relationship
Jane: Can AI help strengthen shared-decision making and trust?
Bob: The promise is that it resurrects it in some ways. The relationship is frayed partly because of a general mistrust of experts, but mostly because patients recognize that it’s hard to get in to see your doctor. Your doctor is distracted; your doctor’s looking down at the keyboard. Your doctor seems to be spending more time checking boxes on a computer screen than actually paying attention to you and your needs. The relationship is laden with friction and prior authorizations. So the promise is that if AI works, it should and I think can take a lot of that stuff away and therefore liberate the relationship to be one that’s a little bit more pure about these two human beings: one, an expert, the second, an expert in their own problems. What layers on top of that is the patient now having the opportunity to be more of an expert than they ever were before – the opportunity to understand what’s going on and ask a very intelligent agent the questions they might have about their health.
I could see that going either way. I think if doctors are trained correctly and socialized correctly, they will see that as a net positive: “I now have a more informed partner.” I could also see some tension growing. In some ways, when the relationship is “I’m the expert, you are not; I know everything, you don’t,” that’s not very healthy. But the patient now is not only an expert in their own experience but actually has access to a lot more information. Many physicians will feel challenged by that, and we’re going to have to work through that in order to get it right.

From Paging Dr. Google to patient-grade AI
Jane: Well, you lived through the Dr. Google phase, right? People coming to the exam room with inches-thick printouts from Internet searches.
Bob: We’re used to that, but this thing’s much smarter than Google was. And not just smarter, but more user-friendly in a way that’s meaningful for patients and meaningful for physicians as well. I could put into Google, “my left leg is swollen today,” but I could not put into Google, “I’ve got a patient who’s an 82-year-old man with CLL who comes in with a fever, a swollen leg, an infiltrate on chest x-ray, and a creatinine of 2.7. What do you think is going on?” Now we can, and get useful results. So here is the ability to sort of speak to AI almost as you would speak to a physician.
There’s a risk there for patients. When I pick up my phone and I talk to Open Evidence or ChatGPT or Gemini, I can give it a narrative. One of the things doctors learn to do is take a hundred facts that we might have at our disposal and distill them down into the kernel of the case. When we ask patients to do that, it’s not obvious that they would know how. They may have a hundred facts, but there’s no reason they should know that one fact in their past history, or a medication they’re on, or one symptom they’re having is particularly salient while five others are not. So my own belief is that the tools we need to build for patients to use this unbelievable new technology are going to have to be more customized for patients than they currently are.
It could become more doctor-like: if you were talking to a doctor, the doctor wouldn’t immediately say, “Oh, your headache is a migraine, or you have meningitis.” We would ask a bunch of questions, such as, “Do you have a fever? Does your neck hurt? Does the bright light hurt your eyes?” You, the patient, would answer. We’d go back and forth. But just having a blank chat box prompt I don’t think gets patients exactly where they need to be. However, it provides the substrate around which I think we can develop tools that allow patients to do a whole lot more for themselves. And to me that’s very exciting….although I think some people in medicine will find that threatening.
The nuanced argument for/against humans in the loop
Jane: You talk about humans in the loop and Nicholas Christakis’s thought that our trust in humans goes beyond the rational. In the last section of the book you discuss scenarios of whether patients will receive primary care from a human or a virtual agent. Given the growth of digital front doors and digital-first primary care, is it inevitable that the doctor will be pulled out of the visit, as the CTO of Akido recently said in an MIT Technology Review article?
Bob: When I first was thinking about writing the book, my concern was that it would be out of date five minutes after it was done. Both my wife and my publisher said, “well, if that’s true, you’ve written the wrong book.” That forced me to helicopter up and ask: what are the big questions when you have a tool that is now smarter than I am…but not perfect? In some ways we are potentially supplanting a human, but we’re not talking about supplanting your accountant or even your Uber driver. We’re talking about a human at a time of great need and anxiety. Those are the big questions. And the human in the loop aspect really was one of my favorite questions to chew on.
I tried to go into it without a huge amount of bias. I’m rooting for the humans, and I recognize people believe that seeing a human at a time of great need (e.g., at a cancer diagnosis) is valuable for all sorts of reasons. But I think we have to put that to the test. Because what if the AI is smarter than the human and actually more likely to get you the right answer? What if, yes, you can see a human, but that will cost you $500, while you can see the AI for free? What if you can see the human, but it will take you a month to get an appointment, while you can see the AI now? What if studies show the human in the loop can degrade the overall performance of the dyad because the AI is smarter than you are?
I tried to treat this as a really important and interesting question that we don’t quite know the answer to. My bias is that if you’ve got a serious illness, a complex illness, and you’re anxious, you will find real value in that relationship between humans. What is funky about AI is that we’ve never had a technology that acts so humanlike, and there’s a risk there: the risk is that we will grant it undue trust because it seems so human. And so we’ve got to test to be sure that it is really good.
I think the other risk of the human in the loop is something I discussed with Michelle Mello of Stanford, whom I interviewed for the book and who I think is the top scholar in the country on the legal issues at the intersection of law and medicine. She posed, “is it fair to the physician to be the human in the loop if the AI is wrong, but it is so human-like and you’ve learned to trust it? It was right the last 20 times, and then something goes off the rails, and you (that is, the physician) are the one who’s going to get sued. You’re the one who’s holding the bag.” So there’s just a lot of complexity in this.
The case for digital primary care for access and a sustainable health care system
I really do think it’s going to be important for us not to be too wedded to the idea that there has to be a doctor for everything. Primary care is so broken that, if we’re going to insist on human oversight for everything, I don’t know if we can unbreak it. If you’re going to tackle primary care, you have to ask the question: are there things that right now we say you need to see a doctor for that you really don’t?
It is a very important first step, because at the end of the day, if the AI can successfully diagnose your high blood pressure or your cholesterol or even your urinary tract infection, and the medicine is safe and the AI is really good at managing it, why do you have to see a doctor just to get your prescription? But then I think you do need to see a doctor for more complex problems, more emotionally laden problems. Trying to figure out how that works and what things you can manage yourself in a world with really good AI — can the regulatory system become flexible enough so that there are things you can actually get autonomously through AI? I think these are just really interesting questions that are going to consume us for the next five to ten years.
If the alternative is nothing, then I think there are certain problems where, if we can pull the doctor out of the loop, it would be fine. And to the doctor who says, “no, no, no, that can’t work,” I remember 30 years ago, when we first considered nurse practitioners seeing patients, wondering, “how’s that going to work?” Now we all accept that, where there’s not enough bandwidth among physicians, we bring in another person who is less expensive and less extensively trained, but quite competent. Deploying AI is just extending that to the next level.
Still, we’ve got to draw a line and not be too flippant about this and say doctors are worthless—that there’s nothing that they do that adds value. I think that’s going to be wrong. We have to look at the totality of what patients need and the inability of our current system to deliver that. And when it makes sense for the system to be an AI-enabled copilot with a doctor, great. But I’m absolutely confident that there will be parts of it where we say, actually, that can be managed by a patient themselves with the right AI tools and let’s be a little more flexible on the regulatory side so that if the AI makes the diagnosis correctly and wants to prescribe a medicine, if the medicine is safe and the AI can do this effectively, why should the patient have to see a doctor?
One study I love, published a couple of years ago, found that if a primary care doctor just did the preventive care they’re supposed to do, it would take 27 hours a day, and that’s assuming no patient has the temerity to be sick. So the thing is impossible (JSK edit – that is, unsustainable). The only way you can solve that equation is if, for certain things, a patient can see an AI doctor instead.
The last six months have been interesting. In the book, I have a chapter about a 2024 conference where I listened to Munjal Shah, the CEO of Hippocratic AI. He was about the only person who said they were trying to build an AI nurse or AI doctor. While everybody else might have thought that, no one uttered that vision at the time.
In the last six months, it feels like that’s tipped.
Health Populi’s Hot Points: Here in Health Populi in 2015, I wrote about 3 doctors who “write right” about health care — including Dr. Wachter, along with Atul Gawande, MD, and Eric Topol, MD. That year these physicians wrote three milestone books tracking the impact of digital transformation in health care on patients and doctors.
And now in this tumultuous adoption phase of AI in health care, we have another milestone book in A Giant Leap. You’ll find the voice of a physician with empathy for both patients and fellow physicians, and an understanding of the complexities and difficulties in changing healthcare workflow and cultures.
Furthermore, beyond technology adoption and friction, Bob recognizes, in his words,
“More fundamental challenges in healthcare: unsustainable costs, a dysfunctional payment structure, inequalities in care and access, conflicts of interest, and our habitual focus on treatment over prevention.”
With A Giant Leap, Dr. Wachter has now written six books. I must note that his writing style can even be poetic at times. In A Giant Leap, he brings into the text nods to Tolstoy’s Anna Karenina and Mitch Albom’s Tuesdays with Morrie, and even conjures the image of Franz Kafka in a nightmare, on hold with an insurance company.
Without meaning to spoil the end of the book, let me conclude this post with Bob’s hopeful conclusion about keeping the human in health care: that “there will always be a need for a human guide,” with deep medical knowledge, refined clinical judgment, as well as the “emotional intelligence to recognize and address their unspoken fears, the leadership skills to orchestrate their care across diverse teams, the patience and wisdom to navigate the inherent uncertainties of medicine and health care’s bureaucratic cul-de-sacs, the uniquely human capacity for deep compassion that transcends both the practical and the algorithmic.”
Then there’s my favorite sentence in the entire book…which I think captures the ultimate grace in the doctor-patient relationship…indeed, humanity in the loop….
“….And someone to be on the receiving end of a speechless patient’s kisses.”
A personal PS — In full transparency, I happened to meet, in quite a random way, a young student wearing a University of Pennsylvania t-shirt on a train bound for Geneva, Switzerland from Florence, Italy, one summer a few decades ago when we were all traveling through Europe on Eurail passes. I had intended to board a train to Genoa from Firenze, but got on the wrong track. There I found two U-Penn undergraduate students with whom I rode into Switzerland: Bob Wachter and his BFF Frank.
I’ve followed Bob’s work since he wrote The Fragile Coalition, so I admit to a positive bias, having tracked his journey for many, many years. It was a joy to catch up with him in this conversation, rooted in our uncanny meeting well before either of us had begun our respective health care careers.



