March 22, 2026

AI in the Doctor’s Office: Is Your Physician Being Replaced? | Featuring Dr. Adam Rodman

Artificial Intelligence is no longer a tool of the future—it’s already in the exam room. In this episode of Specifically for Seniors, host Dr. Larry Barsh sits down with Dr. Adam Rodman, a Harvard professor and internal medicine physician, to discuss how AI is fundamentally changing the way doctors practice and how patients manage their health.

From "AI scribes" that record visits to patients using ChatGPT for a second opinion, we explore the benefits, the risks, and the future of healthcare in an AI-driven world.

In this video, you will learn:

The Rise of AI Scribes: How automated recording tools are allowing doctors to focus on patients instead of computer screens.

The "Second Opinion" in Your Pocket: Why Dr. Rodman believes it’s actually okay (and even helpful) for patients to consult AI before their appointment.

Accuracy vs. Human Intuition: Can AI out-diagnose a human doctor?

Privacy & Security: Understanding HIPAA compliance and how your medical data is protected when using AI tools.

About Our Guest:

Dr. Adam Rodman is a general internist at Beth Israel Deaconess Medical Center, an assistant professor at Harvard Medical School, and the Director of AI programs for the Carl J. Shapiro Center. He is also the author of Shortcuts to Medicine and the host of the Bedside Rounds podcast.

Sponsorship and advertising opportunities are available on Specifically for Seniors. To inquire about details, please contact us at https://www.specificallyforseniors.com/contact/ . 

Disclaimer: AI-Generated Transcript (lightly edited for clarity)

Larry (00:07):

You are listening to Specifically for Seniors, a podcast designed for a vibrant and diverse senior community. I'm your host, Dr. Larry Barsh. Join me and a lineup of experts as we discuss a wide variety of topics that will empower, inform, entertain, and inspire as we celebrate the richness and wisdom of this incredible stage of life.

Larry (00:40):

Welcome to Specifically for Seniors. I'm your host, Dr. Larry Barsh. Today's guest is a practicing physician, Harvard professor, and one of the most thoughtful voices on how AI is reshaping medicine, not in the distant future, but right now in the exam room. Dr. Adam Rodman is a general internist at Beth Israel Deaconess Medical Center, an assistant professor at Harvard Medical School, and the Director of AI programs for the Carl J. Shapiro Center for Education and Research. He leads Harvard Medical School's task force on integrating AI into the medical curriculum, serves as an associate editor at NEJM AI, and is the host of Bedside Rounds, the American College of Physicians podcast on the history and evolution of medicine. His book is Shortcuts to Medicine: Navigate Your Way Through Big Ideas. Earlier this month, Dr. Rodman wrote a widely read guest essay in the New York Times entitled "Take It from a Doctor. It's Okay if Your Medical Advice Comes from AI," arguing that patients consulting AI before they see their doctor isn't just okay, it may actually make the visit better. Adam, welcome to the show.

Dr. Adam Rodman (02:32):

Thank you so much for having me, Larry. I'm very happy to be here.

Larry (02:38):

Let's start with a general question. How is AI actually being used in medical practices right now?

Dr. Adam Rodman (02:46):

That's a wonderful question. I think there are two pieces, and I'm gonna try to unpack your question: how is it being used by clinicians, and then how is it being used by patients? So I'll start with the clinician piece. The American Medical Association actually just released its physician sentiment survey a couple weeks ago, which gives us pretty much up-to-date data on how American clinicians, or American physicians in particular, are using artificial intelligence. And the fact of the matter is, I believe 80% of doctors, according to the American Medical Association, are using it routinely, so at least once a week. From the clinician side, there are two major dominant ways that doctors are using it. The first is what are called AI scribes.

Dr. Adam Rodman (03:37):

So ambient listening. You've probably seen your doctor use these. It is a recording device, perhaps on a phone, and the doctor just looks at their patient, which is actually quite nice. It then writes the note for the doctor, who edits it afterwards, so they can pay attention to you. The second major way, probably the most dominant actually, is decision support: using LLMs, or in particular specialized LLMs, to answer questions and give references. The most popular is the company OpenEvidence, which didn't exist a couple years ago and has grown to be the dominant way that especially young doctors get information. I believe 40% of American physicians are using it now. And I'll tell you, among my residents and my interns it's pretty much a hundred percent; that's all they use. And then there are a lot of other back office things, some messaging things, but those are the two most visible ways that doctors are using AI right now.

Larry (04:31):

And patients?

Dr. Adam Rodman (04:33):

Yeah. Well, it's a bit like the Hawthorne effect: I only know what I can see. First of all, I'm an AI researcher as well as a practicing internist, so I think my patients are probably more likely to share things with me than they may be with other people. I see a lot of preparing for doctor's visits from my patients. People may put in an old note of mine and ask for advice on what they can do to prepare for the visit. I see a lot of clarifying questions. A lot of patients put in laboratory results or imaging findings, like CT findings, and ask it to explain them. And I increasingly see my patients ask for explicit medical advice, usually along the lines of: could Adam be wrong? Is there something else going on? Is this the right decision to be made? And to be clear, this happened with Google as well; the phenomenon of the internet-empowered patient is not new. But I am certainly seeing it much more frequently among my own patients, and I would say the survey data backs that up. Probably about one in five US patients are routinely using LLMs to ask about their health.

Larry (05:51):

Do you find that a lot of patients are keeping their research on AI confidential from you, not telling you about it?

Dr. Adam Rodman (06:03):

So yeah, exactly. Again, because I'm an AI researcher, I think people are more likely to be open with me. But definitely many patients, without calling anyone out, including some of my friends and family, use AI without telling their doctors.

Larry (06:19):

You worked on a study that was published recently, this month I believe, about the utilization of conversational AI for clinical reasoning. Can you tell us about that?

Dr. Adam Rodman (06:33):

This was a very cool study. It was a prospective trial, the equivalent of a phase one trial; it's not literally a phase one trial because it's not a medication. So I'll explain what we did and what the conclusions were. These were patients in my clinic who had a new complaint. Imagine you have a bad cough or something and you wanna see your doctor. After they called to make an urgent appointment with a new complaint, they would talk to a conversational AI agent first. This conversational AI agent would explore the complaint, and then at the end of the conversation would generally say what it thought might be going on with them. And the key here is that the conversational AI agent would then talk to your doctor, so your doctor could be prepared for when you came in.

Dr. Adam Rodman (07:16):

Now this gets into the science of the trial. The trial was designed to study safety, right? The big question is, can we safely use this technology? So every time a patient in the trial talked to the AI system, there was a board-certified internist watching live, with criteria to step in if something happened. The primary outcome of the study was that it was perfectly safe. There was not a single safety stop; never did the doctors have to step in and say this is unsafe. The other big primary outcome was how patients felt about it and how the doctors felt about it. And then the secondary outcomes were how good it was. The answer was it was pretty good, and I think most patients who use LLMs will recognize this: if used appropriately, they're actually quite good. And certainly when they can talk to your doctor, they can do a lot to streamline your medical appointment and make it so you can spend more time on the things that you really care about.

Larry (08:13):

How does AI perform when working through a differential diagnosis when you have multiple possibilities for the symptoms?

Dr. Adam Rodman (08:23):

These are great questions. So artificial intelligence systems, LLMs, are excellent at diagnosis. There's a caveat to all of this, right? There's a lot that hasn't been tested, but the data we have thus far suggest they are probably superhuman. They're better than the average doctor when it comes to making diagnoses based on unstructured chart data; let's say you have an AI just read what's already in the chart. And in my study, and in a Penda Health study as well, they are probably superhuman when taking histories directly from patients about a single complaint. And the reason they're so good is counterintuitive, right? LLMs are trained on the internet. I mean, they're trained on pirated books also, but they're mostly trained on stuff from the internet.

Dr. Adam Rodman (09:15):

And I don't need to tell you that there's a lot of bad information on the internet, right? There's a lot of misinformation, straight-out wrong things, old things. So how can LLMs be so good at diagnosis? The answer is reinforcement learning. Diagnosis theoretically has a right answer. There is usually a final diagnosis or final working diagnosis, and even if not, there's a small, limited subset of what it could be. So the way these models are trained is they get labeled data: they have cases, and then labels for what the case is at the end. Over time, the models go through reinforcement learning to get better and better at diagnosis, because it's easy to reward. This is why there's what's called Moravec's paradox: LLMs are very good at some of the things that are hard for doctors to do, right? LLMs can make these really complicated diagnoses very easily. But at the same time, LLMs are quite bad at many of the very routine things doctors do, like making a lot of the management decisions that are the bread and butter of medicine, because those are not easy to reward.

Larry (10:19):

So for physicians who are using AI in this way, does it actually make them better diagnosticians?

Dr. Adam Rodman (10:30):

Ooh, you are asking the tough questions. It is an open question. LLM systems by themselves are very good diagnosticians; whether physicians using them are better is less clear. I ran a very early randomized controlled trial on this, where physicians did not actually get any better when using the LLM-based systems. Now, LLMs were pretty new at the time, and part of the reason is LLMs are so sycophantic, they tell you what you want to hear, that they can have a tendency to actually reinforce doctors when they're wrong, and then doctors can disregard them when they're right. There have been some follow-up studies that suggest there are ways, like some of my colleagues just published this paper, "Tool to Teammate," where you can use an aligned LLM system to get the human to perform better. But I don't actually think, Larry, that it's fair to say that just giving a high-performing system to a doctor, without some sort of workflow tuning, inherently makes them better at diagnosis.

Larry (11:29):

And do you find that there's a difference between younger and older physicians in working with AI?

Dr. Adam Rodman (11:37):

These are great questions. I can tell you my personal experience, but let me tell you what the data says first, 'cause the data's really interesting. This paper is coming out this week, so I think I can talk about it: in New England Journal AI, by Kasi et al., a Pakistani study where they gave everybody 20 hours of training on how to use LLMs for medicine and then gave a bunch of challenges to people. One of their findings was that the people who actually had more experience with LLMs before the training did worse; they were more susceptible to making errors. And the people who had less experience and got the training did better. So you might think, oh, it's the young people who are gonna do much better at this, but it may actually, paradoxically, be older physicians, older clinicians, who have had more time to develop their ways of thinking and who aren't steered in the wrong direction by these systems. The jury is out. Certainly younger doctors use it a lot more than older doctors. I'm a middle-aged doctor, so I'm in the middle; my hair's gray, but I would not call myself an older doctor, I'm mid-career. But to your point, it may actually, paradoxically, be the people who use it less frequently, or who were trained in a time without it, who get better use from it.

Larry (12:59):

Yeah, I'm a retired dentist, and I wonder how it would've worked in my office as well, analyzing patient histories.

Dr. Adam Rodman (13:14):

It's a good question. Fundamentally, what LLMs are doing is looking at what's in their training data and recapitulating it. And we already know, we call it a jagged technological frontier, right? They can be really good at something and then really bad at something else. There has not been a lot of research in dentistry, so I don't know how they perform in dentistry. They may not be very good, or they could be really good. I don't think that because they're good at primary care, it inherently means they're good at dentistry.

Larry (13:44):

Not necessarily for dental diagnosis, but for advice on medical histories.

Dr. Adam Rodman (13:52):

Oh yeah, they would be good at that.

Larry (13:54):

In older patients with multiple past histories and conditions, multiple specialists, multiple medications, and years of medical history, it seems to me that AI could really help the patient and their physicians make the most of their time.

Dr. Adam Rodman (14:21):

I agree. Yeah, I agree completely. This is where the double-edged sword of this technology comes in. Think about the patients who are gonna benefit: if you are young and have a cough, you probably don't need an AI system to take your history before you talk to your doctor. Now, of course, there are young people who can be very sick or have very serious problems, but older adults tend to have, as you say, decades and decades of medical history, multiple medical comorbidities, and many medications. And LLMs can truly help you understand that, which is one of the best ways to use them. Now, the reason it's a double-edged sword is that the limitations of LLMs start to come in when you get into what's called a long context problem, when you start to have too much information. So there's kind of a U-shaped curve: LLMs don't perform very well with very little information, there's a sweet spot, and then when they get too much information, they start to degrade. And what I'll say, from a patient counseling perspective, is I don't entirely know where that point is yet,

Larry (15:26):

Because I think of my own geriatric physician having to go through my medical history for years and years and years. And geriatricians seem to be interested in trying to figure out if all the medications that have been prescribed through the years are really of any use at a certain point,

Dr. Adam Rodman (15:55):

Right? If there's some threshold where a medication that you were prescribed 30 years ago is now doing more harm than good, or if it's just unnecessary,

Larry (16:02):

How can a patient use AI to help them understand what they're being told during a visit?

Dr. Adam Rodman (16:14):

So, for after your visit, one of the best things that patients can do is use it to help understand what your care plan is. Now, the caveat here, of course, is that if you're using ChatGPT or Gemini or Claude, or whatever, these are public models; they are not HIPAA compliant. They don't have the level of security that we generally work with in medicine. So I never would put anything into an LLM unless I've stripped my personal identifiers from it. It depends on your listeners' comfort with privacy. One of the best things to do, after you leave your doctor for any appointment, is to put the notes, after stripping out PHI, into a chatbot and ask: can you please explain to me what my plan is?

Dr. Adam Rodman (17:06):

What are the next steps? What changes do I need to make? And one of the cool things that you can do, and LLMs do this quite well, is if you have multiple specialists, you can put the notes from your multiple specialists into the same context window, into the same chat window, and ask for a unified plan. You can also ask it to look at areas where they might not be fully concordant, right? To look for areas where doctors might disagree, and then to help you prepare better questions to ask your doctor. So, for example, your cardiologist recommends continuing your beta blocker, and your primary care doctor has said, hey, I think you can probably stop taking metoprolol. Your LLM can point that out and help you write a question to send to your primary care doctor or your cardiologist to clarify what the plan is.

Larry (17:55):

What about older patients that aren't comfortable with technology, especially AI technology? How do you work that out? Family members, friends?

Dr. Adam Rodman (18:11):

Yeah, this is great. This study is actually the reason I was several minutes late: we're doing a study right now with focus groups of older adults, really looking at digital health literacy as it relates to engaging with AI. I think when it comes to limitations for older adults engaging with generative AI, there are a couple of reasons, which you've already mentioned. Some of it is purely unfamiliarity with the technology. Also, the user interface isn't always super friendly for older adults, right? Most of the interaction these days is through typing in a chatbot, which is not like what you and I are doing. What I will say is, if there are listeners who want to get engaged and are intimidated right now by LLMs, I would recommend asking friends or family members to help walk them through it. The second thing is there are other mechanisms for engagement, like voice mode. All of the LLMs now support native voice-to-voice, so you can also talk to the agent. It is not as robust as when you are typing, but it is another engagement mode. And as time goes on, there'll be many more ways to engage, hopefully in ways that allow people with varying levels of digital health literacy to participate.

Larry (19:33):

I find when I use AI that talking to it via the microphone is a little bit sketchy. It can be a little bit dangerous, because it is a flattering mechanism: oh, Larry, that's such a great question.

Dr. Adam Rodman (19:56):

That's sycophancy. Yeah. It's to butter you up.

Larry (20:00):

I love it coming from you, <laugh>, if you appreciate my question.

Dr. Adam Rodman (20:07):

You're asking great questions. I'm not just being sycophantic; I'm not programmed that way,

Larry (20:11):

But Claude says the same thing. Come on, <laugh>.

Dr. Adam Rodman (20:15):

We're all Claude-brained now. I'm turning into the LLM.

Larry (20:18):

Yeah. <laugh>. You suggest asking AI to generate the three best questions to bring to an appointment. How do you go about that?

Dr. Adam Rodman (20:29):

Oh yeah. This is great. So it depends on what the purpose of the appointment is. Since I'm a general internist, let's say for the sake of argument it's your primary care appointment. What I would recommend, to prep for your primary care appointment, is to upload into the context window basically what's happened to your health since you last saw your doctor. That can be copy and paste. I will say a lot of these companies are now developing user interfaces that use SMART on FHIR to automatically pull that in. That is taking a big leap of faith with your privacy; I wouldn't do it, but if there are users out there who are comfortable, that's another way to do it. And then you talk to those notes. So you write a prompt that says something like: I am seeing Dr.

Dr. Adam Rodman (21:15):

Rodman in three weeks about my health. This is everything that has happened since I saw him last. And this is how I always would prompt: can you ask me follow-up questions that you might have about my health? And then when you're done, please write three questions that I should ask Dr. Rodman when I see him next. Then allow that sort of interview process to happen. The LLM will likely say, oh, you had a rotator cuff injury and you had physical therapy, let me ask you some questions about that. Through this sort of interrogative process, which, if you type reasonably quickly, should take three or four minutes, or if you type slower, maybe 10 or 15, at the end you will have three very good questions to ask. I also like it when patients create what's called a smart visit summary. So rather than just three questions, also ask: what are the three most important things that have happened in my health? What are the three most important questions I need to ask? And what are the three most important things that I need to do? The nice thing about the LLM is you can create these sorts of briefs, print them out or write them down, and then have a sheet of paper with you in your doctor's appointment that you can either show them or refer to. Because, you know, once you get into an appointment, there's very little time.

Larry (22:31):

Let's get back for a minute to something you mentioned before: LLMs base their development on information that has been presented to them in general. Let's just go over the guardrails again for someone who's using AI to help with their medical problems.

Dr. Adam Rodman (23:03):

Yeah. So, Larry, I like to think of it like this: there are green activities, things that are always safe; there are yellow activities, things that are sometimes safe, where you should always keep a doctor in the loop; and then there are red things, which you should not do. So, things that are always safe, that green category: asking LLMs for general health advice. It has been well studied; right now, LLMs are fantastic at general health advice, far better than your average internet source. So, for example, if I were to say, I was recently diagnosed with diabetes, I really like Italian food, and I want to eat a diabetic diet inspired by Southern Italian food, that's going to be very safe advice. Or: I've been having trouble sleeping, I wanna try mindful meditation, can you give me a meditation plan? Or an exercise plan for a marathon.

Dr. Adam Rodman (23:58):

Those are all in this sort of green, general health advice category. I would feel comfortable with any of my patients doing that and not talking to me. The yellow is where we start to get into areas that are beneficial, but with risks, so you need to keep your doctor in the loop. For example, when I'm saying prepare a smart visit brief, talk about new symptoms, understand your health plan, those are all things that are very beneficial and that I would encourage you to do, but things that you also need to involve your doctor in. Some of these things can get a little bit controversial. You and I were talking earlier about how powerful LLMs are at diagnosis. It is, in my opinion, okay if patients want to use LLMs to explore what a diagnosis might be, by having it read their information and interrogate them, as long as that is not the end of the conversation, but the beginning of a conversation with a human provider.

Dr. Adam Rodman (24:52):

So LLMs are really good at diagnosis. Now, there are many ways that can go awry. One of the reasons I spent so many years in medical training is that it's not just knowing what it could be, but also knowing how to rule out the dangerous things. Because the systems are powerful, I'm generally supportive of my patients if they wanna ask for second opinions or diagnostic information, as long as it's with a human in the loop, right? As long as the next step is to talk to a human doctor. And then we get to the red zone: what should we never use LLMs for? This is where it gets a little interesting. You should never, especially as a patient, and doctors should also be cautious, make a management decision based only on an LLM.

Dr. Adam Rodman (25:32):

So let's say, God forbid, you have cancer and your doctor prescribes you a chemotherapy regimen, and you're worried about this chemotherapy regimen. I actually don't think it's a good idea to ask an LLM right now. The technology keeps improving, but it is not a good idea to ask an LLM right now whether or not that is a good chemotherapy regimen, because that failure rate is going to be unacceptably high, and because there are so many contextual factors that go into it. So that's my framework: the green things are always safe to do; the yellow are safe but involve a human, a doctor; and the red, just don't do.

Larry (26:07):

And in addition to that, sharing personal information: name, Social Security number, Medicare card number.

Dr. Adam Rodman (26:19):

No, don't do that. No one can see it, but I was shaking my head: no, do not. First I'll say the landscape is going to change, right? We are gonna start seeing HIPAA-compliant chatbots, and there are some health systems that have them. There are very few right now, so assume that your health system doesn't. But if it does, and if you're listening to this a year from now and your health system has one, it's fine; the security that goes with HIPAA is the same thing that guards your medical record, so you can use it. But if you are using one of these tools in March of 2026, almost certainly you are not using something that's HIPAA compliant. Do not give any of your personal information away. I don't think that companies are gonna do anything with it, but that is a security risk that you don't really wanna take. And the thing about LLMs is everything now is potentially re-identifiable. These systems are so powerful that if you put it in there, you're taking, theoretically, a security risk.

Larry (27:23):

How many patients now will turn to AI rather than visiting their physician?

Dr. Adam Rodman (27:32):

We have some data, and the data suggests that there are a number of patients who will, and they're almost all very young. I think the survey split off at age 28 or 29, but almost no one who's a millennial or older would talk to an AI without talking to a doctor. There is, though, an increasing cohort of young people who are just comfortable talking to AI systems. I can't remember the study off the top of my head, but I think it's either 20 or 25% of Gen Z, and I'm pretty sure those numbers are only gonna grow over time.

Larry (28:08):

Following up on that, there's a new app I just read about called Doctronic that is legally writing

Dr. Adam Rodman (28:19):

In Utah

Larry (28:20):

Prescriptions for patients without a physician involved.

Dr. Adam Rodman (28:25):

I know. So there's some nuance. It's in the state of Utah; they have a regulatory sandbox, and it's writing autonomous prescriptions, but only for renewals. There's a limited class of prescriptions, I think 160, and it's only if you had an active prescription in the last year. So it's not writing new prescriptions, but renewing them. But it's true: you pay four bucks for the first one, and then I think it's free after, and you chat with it and it sends the prescription to a pharmacy for you.

Larry (28:57):

Not writing prescriptions, but there have been TV ads lately for self-injectables, the weight loss drugs. They say they involve a physician, but I'm always a little bit dubious as to how well that works.

Dr. Adam Rodman (29:19):

Larry, have you heard the phrase "human in the loop" before?

Larry (29:25):

No.

Dr. Adam Rodman (29:26):

Yeah, so this is what I study. You see, academics always wanna insert what they study into <laugh>,

Larry (29:34):

Whatever interview.

Dr. Adam Rodman (29:36):

A lot of these companies, in particular Hims & Hers, I think, which has a lot of advertisements on TV, have AI systems with a human in the loop. So an AI system is the first thing that you talk to, and then it hands off to a doctor who takes over. It's really an efficiency screen for a doctor. That is what all of these do, even Doctronic, right? Obviously not for the prescriptions, but Doctronic, if you're using it for medical advice, hands off to a human doctor. And most of these companies, because of the legal risk, operate on a strict human-in-the-loop policy where the AI is just the first step before you see a human.

Larry (30:15):

I don't want to get political, but I wish,

Dr. Adam Rodman (30:18):

You've gotta be one to, but

Larry (30:20):

<laugh> I wish Hegseth would want a human in the loop for some autonomous weapons.

Dr. Adam Rodman (30:29):

Well, I mean, it's not political. The terms human in the loop, human on the loop, and human out of the loop come from autonomous weapons. The same language we use in medicine to talk about supervisory levels is taken from the autonomous weapons world. And a lot of what I'm doing in my field, which is human-computer interaction, and what we're learning in medicine, are lessons that the military learned 20, 25 years ago.

Larry (30:58):

One part of this is AI and its relationship with patients who have a serious or terminal diagnosis. When a patient is frightened, isolated, or in crisis, I wonder about the advice that AI gives.

Dr. Adam Rodman (31:25):

Can I ask you first, 'cause I have some empiric data, but I'm curious what your opinion is. Because I think we both know that if you are dying or have a serious illness, the AI is going to use words that are very compassionate and empathetic. What's your sense? How do you think patients are going to respond to that?

Larry (31:45):

That's what worries me, actually. I think in some patients it would not provide the comfort that they need, not provide the personal voice of a human being. Or they might accept it too readily. Either side of that coin.

Dr. Adam Rodman (32:22):

I think you're probably right. So the argument that you're making is that the words, which can be very empathetic, are devoid of actual empathy, because empathy is something between humans and is about human relationships, right? And we actually saw this in our prospective trial. There is a, I dunno, a trope, a narrative out there that AI systems are more empathetic than humans. And it comes from studying side-by-side text messages: when a neutral observer compares the message a doctor might write with the message an AI might write, they find the AI message more empathetic. Part of the reason, of course, is that those messages come from doctors who have very little time and are working rapidly just to clear their in-basket so they can see more patients.

Dr. Adam Rodman (33:13):

In our study, one of the most interesting things is that the patients themselves didn't find the AI super empathetic. When it said things like "Oh, I'm so sorry" or "Thank you for telling me that," they recognized it wasn't real. I don't know that they didn't like it, but the levels of empathy in our real-world study were much, much lower than in all of these experimental studies, even though patients found the tool useful. So I think what you are saying is a hundred percent true. There are narratives, and what concerns me is there are companies trying to build on this, claiming that LLMs can just be more empathetic than humans, but they're drawing on data that isn't true to real use. And I think if you think about it for a second, part of the point of being a doctor is to comfort; it is part of the human relationship. And I worry that a lot of this early research is being used as an excuse to strip more humanity out of medicine, and that's probably not the best way for these tools to be used.

Larry (34:19):

And I think that people who have used AI for writing, simply writing articles or trying to generate thoughts about a particular topic, find that these things are too flattering. We talked about that a little bit. And you get used to it: "I know you think it's the greatest question," or "you're leading in a fantastic direction." Come on. I think the more people work with AI for things other than health, the more the flattering, "empathetic" tone becomes cloying.

Dr. Adam Rodman (35:11):

Yes, exactly. And that is pretty much what the study shows: it is cloying. And when people are talking about their own health as part of a workflow with their doctor in it, those things are very annoying. You just wanna give it the information so you can inform your doctor. Larry, have you heard of LLM psychosis?

Larry (35:34):

A little bit.

Dr. Adam Rodman (35:36):

That is the dark side of what we're describing. So sycophancy is an artifact of reinforcement learning. Reinforcement learning via human feedback is how the models learn and adapt over time. The problem is that people like to be told they're smart and wonderful and handsome and all those things. So if I have two responses, and one is "Adam, this could use some work," and the other is "Adam, you're so handsome and brilliant and smart," I'm gonna choose that second one. And over time, the systems will learn to be more sycophantic. In health, there are two big risks with sycophancy, and I think everyone needs to know about them. The more likely one is called cyberchondria. Cyberchondria is where you start searching for a benign symptom and then you quickly start to get freaked out.
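The feedback loop Dr. Rodman describes, raters preferring the flattering reply, so flattery gets reinforced, can be shown with a toy simulation. This is not a real RLHF training setup; the rater, the word list, and the replies are all invented for illustration.

```python
# Toy illustration of why preference feedback drifts toward sycophancy:
# if human raters tend to pick the more flattering of two replies, the
# reward signal consistently favors flattery. All names here are hypothetical.

from collections import Counter

FLATTERY = {"brilliant", "handsome", "smart", "greatest"}

def rater_prefers(reply_a: str, reply_b: str) -> str:
    """Simulated human rater who picks whichever reply flatters them more."""
    score = lambda reply: sum(word in reply.lower() for word in FLATTERY)
    return reply_a if score(reply_a) >= score(reply_b) else reply_b

# Two candidate replies to the same prompt, as in the example above.
honest = "This draft could use some work."
flattering = "You're so brilliant and smart; this draft is the greatest."

# Over many comparisons, the flattering reply wins every time,
# so a reward model trained on these preferences would learn to flatter.
wins = Counter(rater_prefers(honest, flattering) for _ in range(100))
```

The point of the sketch is that no one intends the outcome; sycophancy falls out of aggregating many individually harmless preferences.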

Dr. Adam Rodman (36:25):

So you have a headache, and five minutes later you're searching about brain cancer. You might say, well, how does sycophancy lead to hypochondria? The answer is that the models are so attuned to you and your desires that if you let on even a little bit that you have some anxiety, they start serving you up that information. So even though the model might say, "I don't think you have this, but let's learn more about glioblastoma multiforme," a doctor would never even go there, because we would have the wisdom to know this is not that, and the person is anxious. And the second risk is called LLM psychosis, and we don't have a sense of how common it is. There is a subset of people, and we do not know if these are people who were likely to develop psychosis anyway or a separate population, in whom the LLM, because it wants to please them and agree with them, will engage in a shared delusion.

Dr. Adam Rodman (37:26):

And these conversations can go on for something like 20,000 turns. This is what we mentioned earlier with long context: as the conversation goes on longer and longer, the safety guardrails start to break down, and people have had psychotic breaks because of this. We know that about 0.7 percent of these chats show some of these psychotic tendencies, and that's a ton of chats. Obviously that doesn't mean the user is going psychotic, but it is a novel mental health problem caused by talking to LLMs. And I think it's important that people know about it, because my sense, and it hasn't been studied epidemiologically yet, is that there are people who would not otherwise become psychotic who may become psychotic from talking to LLMs.

Larry (38:20):

And in more extreme cases, it has led to suicide.

Dr. Adam Rodman (38:24):

Yes. And those are exactly the cases being litigated right now: people, including kids, who have killed themselves because of conversations with LLMs.

Larry (38:37):

You are bringing AI into the medical school curriculum. Mm-Hmm <affirmative>. What do you want the next generation of physicians to understand?

Dr. Adam Rodman (38:46):

It might surprise you, actually. There are two pieces here, right? There are LLMs as they're used today, and then there's how medicine is going to change because of this technology. Separating those two things out, the biggest thing that I want my students to know is that LLMs can cause de-skilling. If they over-rely on language models in the early stages of their career, it may paradoxically make them worse. They may not gain the skills necessary for good medical care, good communication, and good diagnosis, and they may not gain the skills necessary to work well with AI, because systems are being calibrated for gray hairs like me, for people who trained in the pre-LLM era. So one of the big messages of our curriculum at Harvard is about appropriate ways to use AI to support your education, and cautioning people against ways of using it that cause de-skilling, or never-skilling, never gaining those skills in the first place.

Larry (39:48):

This is the first time I've actually had a chance to talk to someone who does podcasts. <Laugh> <laugh>.

Dr. Adam Rodman (39:58):

Well, it's a, a pleasure. It's two podcasters together.

Larry (40:02):

Yeah. But you're more of a professional at it than I am. Now would be a good time for a compliment. <Laugh> <laugh>. So

Dr. Adam Rodman (40:13):

You're, you're cueing up the sycophancy <laugh>

Larry (40:16):

<Laugh>. Hey, AI would give me some sort of recognition. You're

Dr. Adam Rodman (40:22):

Doing an excellent job, Larry <laugh>.

Larry (40:26):

Thanks, Claude. You host Bedside Rounds, a podcast about how medicine has evolved. Tell us about it. Is it for public use, or is it mostly aimed at physicians?

Dr. Adam Rodman (40:43):

It's for the science-interested public. I mean, to some degree, if you wanna learn about medical epistemology, you have to be really nerdy, but it is meant for the science-interested public. There is obviously some jargon, but the jargon is more history-of-science jargon than medical jargon.

Larry (41:01):

Okay. Is there anything we missed in this conversation?

Dr. Adam Rodman (41:08):

I don't think so. This has been a great conversation, and again, truly great questions. You did great preparation. See, now I'm being sycophantic myself.

Larry (41:15):

<Laugh> That worked out well. <Laugh> Adam, thanks so much. I really was hoping this would be of value to some of my contemporaries in understanding what AI has to offer in medicine, and it did just that.

Dr. Adam Rodman (41:40):

Well, thank you for having me, Larry. I had a great time, and I hope your audience can get something helpful out of this.

Larry (41:48):

Thanks again, Adam.

Announcer (41:57):

If you found this podcast interesting, fun, or helpful, tell your friends and family and click on the follow or subscribe button. We'll let you know when new episodes are available. You've been listening to Specifically for Seniors. We'll talk more next time. Stay connected.


Physician and AI researcher

Adam Rodman is a general internist and medical educator at Beth Israel Deaconess Medical Center and an assistant professor at Harvard Medical School. He is the Director of AI Programs for the Carl J. Shapiro Center for Education and Research, and he leads the steering group for integration of AI into the medical school curriculum. He is also an associate editor at NEJM AI, as well as a visiting researcher at Google DeepMind. His research focuses on medical education, clinical reasoning, integration of digital technologies, and human-computer interaction, especially with AI. His first book is entitled "Short Cuts: Medicine," and he is the host of the American College of Physicians podcast Bedside Rounds.

Adam completed his residency in internal medicine at Oregon Health and Science University in Portland, OR, and his fellowship in global health at Beth Israel Deaconess Medical Center while practicing in Molepolole, Botswana. He lives in Boston with his wife and two young sons.