National Headlines

CMT White Paper AI: The Trust Gap Is Already Here

What Patient Behavior, AI Adoption, and a 30-Point Confidence Collapse Mean for Primary Care in 2026. Half of America has already made a health decision using AI without consulting a physician. Patient trust in doctors and hospitals has fallen 30 points in four years. And the tools patients are turning to at 2 a.m. are not going away. This is not a technology story. It is a relationship story — and primary care is running out of time to read it correctly. 

By the Editor-in-Chief, Concierge Medicine Today
Access Full Version: Download White Paper (PDF)

Published/Written: April 15, 2026
An Independent Editorial Analysis of Patient Behavior, Physician Resistance, and the Trust Gap That Is Reshaping Primary Care. 


We all can relate to a time in our career when we recognized that there is a particular kind of professional blind spot that only becomes visible from the outside. Business professionals call it “insider-itis.” And we have talked about it here at Concierge Medicine Today and at our annual industry conference, the Concierge Medicine Forum, for many years now.

Or better said — when you sit on the other side of the exam room, you can see it clearly.

It is the moment when an industry’s internal conversation stops matching the lived reality of the people it exists to serve. When the experts are debating one thing and the public has already moved on to something else entirely. When the defense of a standard sounds less like wisdom and more like someone who hasn’t looked up in a while or is simply ‘out of touch.’

Most physicians have had that moment in their own life — as a patient, as a family member sitting in a waiting room, as a spouse who got news nobody wanted to hear. In those moments, the system looks completely different as a physician sitting (or waiting) on the outside. The forms feel longer. The wait feels longer. The distance between you and the person who can actually help you feels longer.

That moment has arrived in American medicine at large. And patients can see it clearly, even if many physicians cannot.


I. The Headline That Started a Conversation — And the Nuance That Got Lost

In April 2026, a study published in JAMA Network Open by researchers at Mass General Brigham’s MESH Incubator generated a headline that spread quickly across physician forums, medical blogs, and social media: AI chatbots misdiagnose in over 80% of early medical cases.

Physicians shared it. Some cited it as confirmation of what they had long believed — a ready-made opportunity to say “See, I told you so” and reinforce existing assumptions about a technology many had already decided to distrust from the jump. A few used it to counsel patients away from AI tools altogether.

But here is what the headline did not say — and what the study actually found.

On final diagnosis, with full clinical information, failure rates fell to less than 40 percent across all models. The best performers exceeded 90 percent accuracy. [11]

The researchers tested 21 large language models on 29 standardized clinical vignettes, feeding models information in sequence — the way a real physician encounters it — beginning with age, gender, and symptoms, then adding physical examination findings and laboratory results. All tested models failed to produce an appropriate differential diagnosis more than 80 percent of the time when patient data was incomplete. [11]

That 80 percent figure is real. It refers to a specific, rigorously defined clinical task — generating a complete and appropriately bounded list of candidate diagnoses at the earliest, most information-limited stage of a case. The methodology was strict by design. Models were scored as failing if they listed too few diagnoses or, notably, if they listed even one diagnosis too many — penalized, in other words, for being too cautious.

Two independent commenters — one a physician — noted publicly on LinkedIn just this week that the headline was misleading. One wrote that models correctly diagnosed cases more than 60 percent of the time, that the 80 percent referred specifically to the differential question under strict criteria, and that it was unwise to extrapolate from what are essentially controlled laboratory tests. The other wrote simply: “The headline here is misleading but not surprising.”

The study’s own authors were measured in their conclusions. Co-author Marc Succi said the goal was to help separate hype from reality, and that the results reinforce that large language models in healthcare continue to require a human in the loop and very close oversight. [11]

That is a responsible, calibrated conclusion. It is not a declaration that AI is dangerous. It is not a recommendation that patients stop using these tools. It is a call for informed, supervised, appropriately contextualized use — which is, notably, exactly how most patients are already using them.

The problem was not the research. The research was serious, and the methodology was stronger than most in the AI-and-healthcare space. The problem was the way some physicians received the headline — as confirmation of a conclusion they had already reached internally rather than as data to be examined carefully. That is not a technology problem. That is a closed-mindset problem. And closed mindsets in healthcare today are widespread — as they are in every field — and have consequences that extend well beyond the individual who holds them.


II. This Is Bigger Than Access: Trust, Presence, and a Device That Never Goes Offline or Distant From Our View

Here is the framing that most physician commentary gets wrong related to AI — including commentary that is genuinely well-intentioned.

The conversation about patients using AI for health decisions almost always defaults to the access argument. The waiting room was closed. The phone went to voicemail. The appointment was six weeks out. I don’t know how much it will cost. I do not want to drive across town again. I don’t like the people there. The list could go on and on as we know.

And so the patient turned to their phone. A swipe, a click, and a few prompts later, they have a reply in minutes (or seconds).

That is true. And it matters. But it is not the whole story. Not even close.

In 2022, a survey found that 85 percent of U.S. internet users had searched online for health or medical information in the past year. A third of internet users between the ages of 16 and 64 cited researching health issues as a main motivation for using the internet, with over a quarter checking health symptoms online weekly. It is estimated that 7 percent of all Google searches — roughly 580 million per day — are health related. [25]

Those numbers are no joke. Yet all too often, large parts of the healthcare community dismiss these results or simply ignore them.

“Patients are not embracing AI because they want less humanity in their healthcare. They are using AI precisely because they want more of it — and the current system is not consistently delivering it. They want their physician present, focused, and genuinely engaged. They are turning to AI for the information layer so that when they do see their physician, that time can be more human, not less.”

Editor-In-Chief, Concierge Medicine Today

Patients are not turning to their devices because their doctor’s office is closed. They are turning to their devices because their devices are always open. Always present. Always available at the exact moment a question forms — not six weeks from now, not after navigating a phone tree, not after sitting in a waiting room. Right now. In this moment. Wherever they are.

The integration of AI into mobile health applications has been accelerated by the widespread adoption of smartphones. Studies on AI-powered health apps have increased at a mean rate of 45 percent per year over the past decade — nearly three times the rate of non-AI health apps. [26] A physician will never be as readily accessible as the device a patient carries in their back pocket or sets on the table at a restaurant while having lunch with a friend. Most physicians will never be accessible from the nightstand at 2 a.m., ready to take the call on the other side. Most physicians will never be available in the parking lot while a patient sits alone in the car after a difficult diagnosis, or in the moment a new symptom appears late on a Sunday night, before the fear has had time to become rational.

A device will. An AI tool will answer the question. And patients know this. They are not confused about the difference between a chatbot and a physician. They are making a rational choice about which resource is available to them in the specific moment they need help. Most patients are weighing their options and risks carefully.

Research from Brown University using a think-aloud protocol with participants ages 19 to 80 found that people across all age groups are actively using AI-powered tools including ChatGPT and voice assistants to seek health information — and that these tools are reshaping traditional patterns of health information seeking by providing single, direct answers rather than a list of sources to investigate. [27] Patients are not just searching for information the way they once Googled a symptom. They are having a conversation. They are asking follow-up questions. They are getting responses that feel, in the moment, like dialogue.

That is not a replacement for the physician relationship. But it is filling a space that the physician relationship, structurally, was never designed to occupy.

And here is where the trust dimension becomes impossible to ignore.

The Philips Future Health Index 2025 — the largest global survey of its kind, surveying over 16,000 patients across 16 countries — found that patients are more open to AI when it frees up doctors for personal interactions, easing their fears of a less human healthcare experience. More than half of patients — 52 percent — worry about losing the human touch in their care. [28]

Read that carefully.

Patients are not embracing AI because they want less humanity in their healthcare. They are using AI precisely because they want more of it — and the current system is not consistently delivering it. They want their physician present, focused, and genuinely engaged. They are turning to AI for the information layer so that when they do see their physician, that time can be more human, not less.

Is this statement (above) accurate?

The trust collapse data backs it directly and completely. A joint MGH and Harvard survey documented a thirty-point drop in patient confidence in physicians and hospitals between 2020 and 2024. That is not an opinion. That is a measured, sourced, peer-reviewed finding. The sentence (above) is a logical consequence of that data — not an editorial invention.

The proportion of adults reporting a lot of trust for physicians and hospitals decreased from 71.5% in April 2020 to 40.1% in January 2024, according to a survey of 443,455 U.S. adults across all 50 states conducted by researchers at Massachusetts General Hospital and Harvard University.

THREE ADDITIONAL THINGS WORTH KNOWING ABOUT:

First, the sample size is extraordinary. The study included 582,634 responses from 443,455 adults across 24 survey waves conducted every one to two months from April 2020 through January 2024. This is not a small poll. This is one of the most comprehensive longitudinal trust studies in American healthcare history.

Second, the decline is not partisan. The features associated with lower trust persisted even after accounting for partisanship — they were not simply an indication of someone’s political affiliation. This means the finding cannot be dismissed as a political artifact. It cuts across demographics consistently.

Third, Harvard Medicine Magazine confirmed the trend has continued. In the most recent survey conducted by the research group in April 2025, confidence in hospitals and doctors stood at just over 40 percent — meaning the collapse documented in the study has not meaningfully recovered.

A study published in Nature Medicine in 2024 offered identical medical advice to two groups — one told the advice came from a human physician, the other told it came from an AI chatbot. Those who believed the advice was AI-generated were less likely to deem it reliable and less willing to follow it. [29] Patients still want their physician’s voice on their care. The authority of the physician relationship has not collapsed. What has changed is patients’ willingness to wait passively for that relationship to show up on its own schedule.

A study published in JAMA Network Open in February 2025 found that patient trust in AI is deeply embedded in their prior experiences seeking care and their broader trust in health systems. [30] A patient who feels dismissed, rushed, or underserved in the exam room does not develop more trust in the institution when that institution tells them not to trust AI. They develop less trust in both.

“The physician who understands this is not threatened by the device in the patient’s pocket. They see it for what it is — a tool their patient is already using, a conversation already in progress, an opportunity to be the most trusted voice in a space that is filling up whether they participate or not.”

Editor-In-Chief, Concierge Medicine Today

The physician who does not understand this will keep delivering patient safety messages to people who stopped fully listening several years ago.


III. How Half of America Is Already There

A national poll of 1,007 adults commissioned by the Ohio State University Wexner Medical Center found that 51 percent of Americans used AI to make an important health decision without first consulting a medical professional. [1]

Once again, these numbers are nothing to scroll by.

Here is what that number actually means — and what it doesn’t.

It does not mean patients have abandoned their doctors or decided technology is better than medicine. Look closer at the data. Among those who used AI for health-related purposes, 62 percent used it to understand symptoms before deciding whether to seek care. Forty-four percent used it to help explain a test result or diagnosis. Twenty percent used it to prepare for an upcoming medical appointment. [1]

These are not patients trying to replace their physician. These are patients trying to prepare for one, understand one, or reach one — and finding AI in the gap when nothing else was available.

It’s like we’ve said at the Concierge Medicine Forum and Concierge Medicine Today for years… “It’s no longer about being the best Doctor in the world anymore, it’s about being the best Doctor FOR the world, FOR your Patients and FOR your local community.”

Public openness to AI in healthcare has actually declined — from 52 percent in 2024 to 42 percent in 2026. [1] Patients are not running toward AI with unchecked enthusiasm. They are using it carefully, selectively, and — when the data is read honestly — largely as a bridge to the physician relationship they still want. The hype cycle is cooling. The underlying need is not.

This is not a generational flaw in humanity. It is not naivety about technology. It is the rational, reasonable behavior of people doing the best they can inside a system that has made timely, affordable, accessible healthcare genuinely hard to find.

And here is where the conversation gets uncomfortable — not because the observation is unfair, but because it is accurate.

Many physicians entered medicine with a deep, sincere commitment to the patient-first experience. That commitment is real and it is honorable. Most physicians practicing today chose this profession at significant personal cost — years of training, financial sacrifice, and a genuine desire to help people. That foundation matters and it should never be dismissed.

But first principles require us to ask a harder question: when the structural reality of how care is delivered means a patient cannot reach their physician for 59 days, and the response to that patient’s use of available tools is dismissal rather than curiosity — who is being protected in that moment? The patient? Or a version of practice the patient has already had to adapt around?

That is not an indictment of any individual physician, practice, institution, or organization. It is one patient’s observation about a systemic gap that patients (like me) are living inside every single day.

Globally, 73 percent of patients have faced delays in care, waiting an average of 70 days for an appointment. In the United States, patients wait an average of 59 days. One in three patients said their health worsened during the wait. One in four eventually required hospitalization. [2]

Into that 59-day gap — and into every night, every weekend, every quiet moment of worry that precedes a physician visit by days or weeks — AI became available. Patients used it. Not because they trust it more than their physician. Not because they believe it is perfect or a substitute for clinical judgment. Because their physician wasn’t reachable, the waiting room wasn’t open, their phone was right there, and they had a question that needed at least the beginning of an answer.

That is not reckless behavior. That is human behavior.

Please do not gloss over these next numbers. Fifty-nine percent of patients now believe AI can improve healthcare. Seventy-three percent say they welcome more technology if it genuinely enhances their care. [2]

These are not patients trying to replace their doctor. These are patients trying to find their doctor — or someone, or something, that will meet them where they are, when they need it, in the moment the question forms.

The physician who responds to that reality with dismissal — who tells a patient not to trust the tool they used at 2 a.m. because the office was closed — is not protecting that patient. They are protecting a version of medicine the patient has already had to move on from.

Are warnings and cautionary concerns about AI important? Absolutely. No credible voice in this conversation — including CMT — is arguing otherwise. The limitations of AI in clinical settings are real, documented, and worth communicating clearly to patients. That is part of the physician’s job and it is a valuable part.

But there is a meaningful difference between cautionary guidance offered from a posture of engagement and dismissal offered from a posture of resistance. One serves the patient. The other serves the status quo. And patients — quietly, consistently, across every demographic — have learned to tell the difference.

That distinction is what “out of touch” looks like from the other side of the exam table. And the data suggests it may be part of the reason patient trust in the institution of the physician-patient relationship has reached a historic low.

In fact, a peer-reviewed study published in JAMA Network Open, led by researchers at Massachusetts General Hospital and Harvard University, examined 582,634 survey responses from 443,455 U.S. adults across all 50 states over a four-year period. It found that the proportion of adults reporting a high level of trust in physicians and hospitals fell from 71.5 percent in April 2020 to 40.1 percent in January 2024 — a decline that held consistently across all demographic groups regardless of age, gender, income, race, or political affiliation. [3]

That is not a technology problem. That is a relationship problem. And it was forming long before most patients had ever downloaded an AI app or typed a symptom into a search bar.

That is a thirty-point collapse in four years. It did not happen because of AI. It happened because patients felt unseen, unheard, and underserved.

Dismissing the AI tools patients are turning to is not a strategy for rebuilding that trust. Showing up — with curiosity, with measured caution, with availability, with a genuine willingness to meet patients in the world they are actually living in — is the only strategy that has ever worked.


IV. Do Physicians Really Believe Patients Are That Naive?

So here is where the conversation requires both sides to be honest.

Do physicians really believe that patients — as a general population — are so naive, so impulsive, so reckless, so easily seduced by a chatbot that they will simply abandon their doctors for a device? That the generations of people who built relationships with their family physicians, who called the nurse line at midnight, who drove 45 minutes to see a specialist they trusted — that those same people are now going to hand their healthcare over to an app and never look back?

The data says otherwise. Clearly and consistently.

Research published in Frontiers in Psychology examining 1,183 participants found that patients still strongly prefer human physicians — particularly for complex or emotionally significant medical situations. [4] A Sermo poll found that 42 percent of physicians believe their roles will endure specifically because people will always value empathy and human-to-human interaction in healthcare. [5] Researchers at Northwestern University’s Kellogg School concluded that the need for human interaction in healthcare is likely to keep AI as a complement rather than a substitute for physicians for the foreseeable future — because physicians elicit information, explain procedures, and provide a level of compassion in communication that AI is unable to replicate. [6]

Even the most cited framing in this debate — the one that has made its way from academic medicine to the American Medical Association itself — does not say AI replaces physicians. It says physicians who understand how to use AI will replace those who do not. [7] That is not a replacement story. That is an adaptation story. And there is a very large difference between the two.

“Dismissing the AI tools patients are turning to is not a strategy for rebuilding that trust. Showing up — with curiosity, with measured caution, with availability, with a genuine willingness to meet patients in the world they are actually living in — is the only strategy that has ever worked.”

– Editor-In-Chief, Concierge Medicine Today

So no. Patients are not abandoning their doctors for devices. The fear that they are is largely a conflation — understandable, but a conflation nonetheless.

Now here is where intellectual honesty requires acknowledging something real on the other side of the exam room — or at your practice’s service window.

The risk is not that patients replace their physician with AI. The risk is that patients arrive at their physician having already made decisions — or formed strong beliefs — based on AI information they could not adequately evaluate. A 2024 KFF survey found that 57 percent of U.S. adults do not feel confident determining whether AI-generated health information is true or false. [8] That is a genuine clinical problem. A patient who arrives convinced of a diagnosis they found at 2 a.m. — anchored to it, emotionally invested in it — presents a real challenge to the physician who has to redirect them toward what the evidence actually shows.

That concern is worth taking seriously. It is worth addressing directly, educationally, and with patience rather than condescension.

But here is the critical distinction the dismissive response misses entirely: the solution to a patient arriving with incomplete AI information is not to tell patients to stop using AI. More than 40 million people globally turn to AI tools specifically for daily health information.

Suffice it to say, you and I will probably never attain that kind of reach or be able to persuade that many people to do something different.

That said, seventy-nine percent of U.S. adults use the internet to find answers to health questions, with 75 percent saying AI-generated responses adequately address their queries sometimes, often, or always. [9]

The solution, then, is to become the physician patients want to bring their AI findings to!

You see, the physician who reacts to a patient’s chatbot research and curiosity with dismissal teaches that patient one thing: don’t bring it up next time. And a patient who stops sharing what they are doing outside the office is a patient whose care just got harder to coordinate, not easier.

The physician who responds with curiosity — who says “let’s look at what you found and talk through it together” — builds something irreplaceable. Not because they validated the AI output. Because they validated the patient’s instinct to be engaged in their own health. And that, every piece of evidence confirms, is the foundation on which treatment adherence, long-term trust, and genuine health outcomes are actually built.

In fact, a 2024 Harvard study found that AI made the correct diagnosis 88 percent of the time on standard prompts — compared with 96 percent among human physicians given the same information. [10] That eight-point gap matters clinically. It also confirms what patients intuitively understand: AI and physicians are not equivalent, and most patients are not operating under the illusion that they are. They are using AI because it is available. Not because they believe it is better.

The fear that patients think otherwise — that they are naive enough to trust a chatbot over a physician who knows their history, their family, their context, and their face — is the conflation. And that conflation, more than any specific piece of AI technology, is what is making some physicians look out of touch to the very patients they are trying to protect.


V. Before Physicians Cite AI’s Error Rate, a Word About Their Own

The Mass General Brigham study matters and its core finding is legitimate: AI with incomplete information struggles at early differential diagnosis. Physicians bring irreplaceable clinical reasoning to exactly those high-uncertainty, information-limited moments at the start of a case. That is real and worth defending.

But intellectual honesty requires applying the same scrutiny to both sides of the comparison.

The commonly cited figure for overall medical diagnostic error rates in the United States is 10 to 15 percent. Hospital autopsy studies corroborate this, with estimated major error rates of 8 to 24 percent. In primary care specifically, diagnostic errors are estimated to affect 6.3 percent of encounters — translating to more than 12 million Americans experiencing errors each year. [12]

A study published in JAMA in 2024 found that 23 percent of patients transferred to an intensive care unit or who died in the hospital had a missed or delayed diagnosis, and 17 percent of those errors led to temporary or permanent patient harm. [13]

An analysis of 226,718 reports in the National Practitioner Data Bank found that the leading malpractice allegations over a 20-year period were failure to diagnose, delay in diagnosis, wrong or misdiagnosis, and failure to order the appropriate test — many of them linked to cases of disability or death. [14]

Twelve million diagnostic errors annually. Twenty-three percent of critically ill patients with a missed or delayed diagnosis. These are not arguments against physicians. They are arguments against the pretense of perfection — and against a profession that holds AI to a standard of scrutiny it does not uniformly apply to itself.

The same study that documented AI’s struggles at early differential diagnosis also found that all tested models arrived at a correct final diagnosis more than 90 percent of the time when provided with all pertinent information. [11]

The honest picture is not “AI is dangerous,” and it is not “AI is infallible.” The honest picture is that both AI and physicians make mistakes — in different situations, at different rates, for different reasons — and patients deserve a human relationship, and a medical profession, willing to examine both with equal candor.


VI. What Resistance Actually Communicates

There is a difference between caution and dismissal. And most patients — even the ones using AI at midnight, at a restaurant with a friend, or at the bedside of a loved one — can tell which one they are receiving.

  • Caution sounds like: “That is an interesting tool. Let’s look at what you found and talk through it together.”
  • Dismissal sounds like: “You shouldn’t trust that. AI makes mistakes.”

One of those responses builds a relationship.

The other ends one.

And did you realize that one in four Americans say they would not visit a healthcare provider who refuses to embrace AI technology? [15] That is a quarter of the patient population making a pre-appointment decision about a physician’s relevance based on their perceived relationship with the tools patients are already using every day.

This is worth digesting and understanding carefully because the instinct behind physician resistance is not wrong. The concern that patients will make uninformed decisions based on AI output is legitimate and documented. A 2024 KFF survey found that 57 percent of U.S. adults do not feel confident determining whether AI-generated health information is true or false. [8] That is a real clinical problem — not a hypothetical one.

But your response to that problem matters enormously.

For example, a patient who is told not to bring their AI research to the office does not stop doing the research. They just stop sharing it (and how they feel about it) with you! And a patient who stops sharing what they are doing outside the office is a patient whose care just became harder to treat and coordinate — not easier.

The Philips Future Health Index found that patients — regardless of their level of knowledge about AI — prefer to receive information and reassurance from their doctors. Knowledgeable patients are more aware of AI’s potential benefits, but also more conscious of its risks — and they are looking to their physician to help them navigate both. [28] That is not a threat to the physician relationship. That is an invitation to strengthen it.

In fact, seventy percent of patients are now open to AI tools for researching physicians. Eighty-four percent check online reviews before appointments. Negative reviews can deter patients even against personal recommendations. [16] The patient who leaves a physician’s office feeling dismissed about the tools they are using does not typically become a loyal patient. They become a one-star review and a referral that goes somewhere else.

Did you know that a qualitative study published in 2025 found that patients have specific informational needs for trusting relationships with their physicians around AI — including how AI tools are overseen, how they impact care, and how physicians use them? [17] They are not asking physicians to abandon technology. They are asking physicians to engage with it honestly and transparently. That is the same kind of transparency patients have always wanted about how decisions are made on their behalf.

You see, the physician who can have that conversation openly — who treats patient AI use as an entry point into a richer dialogue rather than a problem to correct — is practicing exactly the kind of relationship-driven medicine the best practices in this industry were built to restore, the kind I often see happening in concierge medicine practices today. The physician who cannot is not failing clinically. They are failing relationally. And in a practice model built entirely on the strength of the physician-patient relationship, that is the failure that matters most.


VII. What Some Physician Commentary Gets Right — And Where the Conversation About AI Could Go Further

It is worth acknowledging something honestly about AI: much of the physician commentary that pushes back on AI use comes from doctors who care deeply about their patients. The frustration behind it is real. The values underneath it — presence, relationship, clinical judgment, the irreplaceable weight of a physician who actually knows you — are exactly right.

The argument is not with the values. It is with what happens when those values become a reason to stop engaging with what patients are actually experiencing outside the exam room.

Some physicians share stories about the extraordinary access they provide — showing up for patients at inconvenient hours, going beyond what the system requires, treating the person rather than the chart. Those stories are genuine and they matter. They make a powerful case for relationship-driven care. CMT has been telling versions of that story for nearly two decades.

But there is a quiet tension in those stories worth naming. Exceptional availability, told as a contrast to the tools patients are turning to, can inadvertently communicate something unintended: that the patient who used AI did so because they lacked access to something better. That is almost always true. And it is exactly the access problem this article is trying to address.

The physician who shows up for a patient at an inconvenient hour is not making an argument against AI. They are making an argument for presence. For availability. For relationship.

Case in point: “It’s no longer about being the best Doctor in the world anymore, it’s about being the best Doctor FOR the world, FOR your Patients and FOR your local community.” ~Editor-In-Chief, Concierge Medicine Today

That argument is stronger — not weaker — when it is paired with curiosity about the tools patients are using rather than resistance to them. Because the physician who is both present and curious, both clinically excellent and technologically engaged, is the physician that no app can compete with.

Other commentary in physician circles advises colleagues to stop wasting energy on critics, to leave resistant institutions behind, to focus only on building what they believe in. There is genuine wisdom in that framing. Leaders in every field eventually learn that not every critic deserves a response.

But there is a version of that posture that tips, over time, from discernment into disconnection. The physician who has stopped listening to outside voices — even the voices that are wrong — is also less likely to hear the voices that are right. And some of what patients are communicating through their AI use is exactly that: a form of feedback the profession has not fully received yet.

The future of primary care does not need physicians to choose between relationships and technology. It needs physicians who understand that the two are not in opposition — that the tools, used well, are in service of the relationship. That curiosity about what patients are doing between appointments is an extension of care, not a threat to it.


VIII. The Fear Inside the Group Chat — And the Leadership Move That Answers It

AI keeps coming up in healthcare professional circles.

Not “how do we use this?” More like “do you think we’ll get replaced?”

Nobody says it to your face. But it’s out there.

There’s a quiet acceptance across healthcare at-large that the status-quo mindset of ‘Well, this is how we’ve always done things’ will eventually be disrupted by a force of technology — a metaphorical freight train — that you can’t stop.

It’s more popular than many physicians are. It’s more reachable than they are. It’s easier, cheaper, and more curious about the consumer than you might imagine.

That’s kind of scary.

This observation — making the rounds in entrepreneurial and leadership communities — applies directly to your work today in healthcare. You’re even starting to see how it’s impacting your practice and your patients. The fear of AI replacement is real inside healthcare. A Sermo poll found that 58 percent of physicians believe AI will change the face of healthcare, either diminishing the physician’s role or making doctors obsolete. [5] That fear is understandable. It is also not a strategy.

The leadership move is not to replace people with AI. It is to train the people around you — your team, your colleagues, and maybe even your patients — how to use it. You become the director, not just the reactor or the doer, as some might say.

Think about it: if your personal physician hosted a free ‘how to use AI’ meet-up one month for patients who are curious (and probably already using these tools) — how many people do you think would show up to hear their Doctor talk about how it can help them and how it can harm them?

I know I would show up. In fact, sign me up! That’s exactly the type of conversation and dialogue I want to have with my Physician.

The physicians who are leading through the fear — rather than being paralyzed by it — have figured out something important: the answer is not resistance to AI. It is a course correction that takes you (and your patients) in a different direction. Become the adult in the room who knows how to use AI better than anyone else. Stop being the reactionary doer playing catch-up, annoyed by patients who use AI, and start being the architect of the care experience that AI cannot replicate.

That is not a technology argument. That is a leadership argument. And it is exactly the kind of leadership CMT has always believed the best physicians are capable of.


IX. Where AI Is Already Outperforming — Consistently

It’s worth pointing out that the physician resistance narrative tends to focus on AI’s limitations in early differential diagnosis.

Notably, however, what it frequently omits is the growing body of peer-reviewed evidence documenting where AI is consistently more accurate than human clinicians — not theoretically, but in direct head-to-head comparisons.

AI-based diagnostic systems have demonstrated accuracy of 90 to 95 percent for specific imaging and detection tasks. In 2024, more than half of healthcare providers were actively using AI for at least one medical imaging task, up from just 17 percent in 2018. [18]

For example, radiologists now detect lesions 26 percent faster and identify nearly 30 percent more cases with AI assistance. Trials show faster cardiac imaging and the ability to screen 9 percent more patients. [2]

Impressive, right?

Well, the train doesn’t just stop there.

A systematic review and meta-analysis indexed in PubMed Central examined 30 studies covering 4,762 clinical cases and 19 large language models, comparing diagnostic accuracy between AI and clinical professionals ranging from resident physicians to specialists with over 30 years of experience. The findings documented competitive — and in some cases superior — AI performance in specific diagnostic categories, including ophthalmology and internal medicine. [19]

Nearly a decade ago, a prominent computer scientist predicted hospitals should stop training radiologists because AI would do the job better within five years. Almost ten years later, there are more radiologists than ever. Of the 950 AI and machine learning tools that received FDA approval between 1995 and 2024, 723 were radiology devices. The machines improved. The humans did not leave. [20] That is not a story about AI losing. That is a story about what genuine collaboration between human expertise and emerging technology actually looks like in practice.

The physician who cites AI’s failures without acknowledging AI’s documented advances is not operating from a fully informed position. And in 2026, patients increasingly have access to the same research.


X. The Business Reality Physicians Are Choosing to Ignore Related to AI

Today’s healthcare access and treatment problem is not only a patient problem. It is an existential business problem for primary care, pediatric, and family physicians who have not thought clearly about the next 15 to 30 years. Other specialties will probably feel it too, but these are the ones likely to feel the impact hardest.

One in four Americans would not choose a healthcare provider who refuses to adopt AI technology. The top reasons patients want AI in their care include faster service, reduction of human error, and remote healthcare access. [15] Patients are increasingly using AI, online reviews, and social media to choose healthcare providers — and practices that fail to adapt to these trends face measurable competitive consequences. [16]

Physician burnout is compounding the access crisis from the supply side. Nearly half of physicians — 49 percent — reported burnout in 2024, according to Medscape’s national survey of over 9,200 physicians across 29 specialties. [21] More than one-third of burned-out primary care physicians said they plan to stop seeing patients within one to three years. [22] A Stanford Medicine-led study found that burnout rates, while declining slightly from pandemic peaks, remain stubbornly high and are projected to worsen already critical workforce shortages and access problems across the country. [23]

The physician who spends the next decade attributing their practice struggles to insurance companies, administrative burden, and AI’s unreliability — without examining what they themselves can change — is not making a principled stand. They are making a strategic error and calling it a value.

First principles applied to primary care in 2026: a medical practice is a business that exists to serve patients. Patients are making decisions about where to go, who to trust, and what tools to use based on which physicians demonstrate awareness of the world they are actually living in. A physician who tells a patient their 2 a.m. AI search was reckless — without offering them a better alternative — has not solved the patient’s problem. They have confirmed it.

The concierge and membership medicine model was built, in part, as an answer to exactly this dynamic. Smaller panels. Real access. Longer appointments. Physicians who are genuinely reachable. These models are not anti-technology. They are pro-relationship — and the best physician-patient relationships today, I believe, are increasingly built by physicians who use every available tool, including AI, to serve their patients better and more completely.


XI. What the Next Generation of Primary Care Actually Looks Like

Let’s bring it all back front and center and slow this train down at its intended destination.

First, the data. Even the AMA’s own survey found that 66 percent of physicians reported using health care AI in 2024 — a 78 percent jump from just one year earlier. Physician enthusiasm for the technology is growing even as legitimate concerns remain. [24]

So that tells you and me something else: the share of healthcare organizations that have adopted or explored generative AI rose from 72 percent in the first quarter of 2024 to 85 percent by the end of the year. [2] Pretty great, right? I certainly think so.

And guess what: The profession is not standing still.

The physicians who will define primary care over the next generation, I believe, are the ones learning to ask better questions — not “should I use AI” but “where does AI make me better, and where does it require my irreplaceable clinical judgment?”

The Mass General Brigham study itself was explicit on this point: its findings reinforce the necessity of human physician involvement in medical decision-making — not the obsolescence of it. [11] AI at its best is not a replacement for the physician. It is an extension — of reach, of availability, of pattern recognition — that makes a skilled physician more effective and a well-designed practice more accessible to more people.

That framing requires engagement, not resistance. It requires physicians who are curious about the tools their patients are already using, honest about the limitations of both AI and human medicine, and genuinely committed to closing the access gap that is driving patients to AI in the first place.

A clinical epidemiologist responding to the Mass General Brigham study noted that AI tools may have a role to play particularly in situations or geographies where access to physicians is limited — and that we urgently need research with actual patients from those settings. [11] That is not a concession that AI is ready to practice medicine unsupervised. It is an acknowledgment that access to care is a public health crisis, and that any tool which responsibly extends that access deserves engagement rather than reflexive dismissal.

The physicians who meet that standard will not need to worry about AI replacing them. The physicians who do not may find that the question answers itself.


XII. A Final Word to the Person Reading This

You are not wrong for using the tools available to you. You are not reckless for searching your symptoms at midnight. You are not naive for wanting more access, more information, and more involvement in your own healthcare.

You are also not wrong for wanting a physician who is genuinely present, genuinely informed, and genuinely committed to your care in ways no app can replicate.

Both things are true. The best version of healthcare holds them together. The physicians who understand that are out there — building practices, staying curious, embracing the tools that extend their reach, and showing up in the ways that matter most.

Find them. Stay with them. Tell others.


This article is published for educational and informational purposes only. It does not constitute medical, legal, or professional advice. © 2007–2026 Concierge Medicine Today, LLC. All rights reserved.


Sources & Citations

  • [1] Ohio State University Wexner Medical Center / SSRS Opinion Panel Omnibus. “Public Comfort with AI in Health Care Falls.” National poll of 1,007 adults, January 2026. wexnermedical.osu.edu
  • [2] Vention Teams. “AI in Healthcare 2025 Statistics: Market Size, Adoption, Impact.” ventionteams.com
  • [3] Massachusetts General Hospital and Harvard Medical School. Joint Survey on Public Confidence in Physicians and Hospitals, 2020–2024. As cited in Physicians Weekly. physiciansweekly.com
  • [4] Riedl, Hogeterp, and Reuter. “Do Patients Prefer a Human Doctor, Artificial Intelligence, or a Blend?” Frontiers in Psychology, August 2024. frontiersin.org
  • [5] Sermo. “Will AI Replace Doctors? What Physicians Say About 2026.” sermo.com
  • [6] Dranove and Garthwaite. “Will AI Eventually Replace Doctors?” Kellogg Insight, Northwestern University. insight.kellogg.northwestern.edu
  • [7] American Medical Association. “Doctors Often Hesitate on Tech Changes. Why AI Is Different.” ama-assn.org
  • [8] KFF. Survey on Consumer Confidence in AI-Generated Health Information, 2024. As cited in Built In. builtin.com
  • [9] Built In. “AI Doctors Are Changing Healthcare. Can They Be Trusted?” builtin.com
  • [10] Harvard University. Study on GPT Diagnostic and Triage Accuracy, 2024. As cited in Medscape. medscape.com
  • [11] Rao et al. “Large Language Model Performance and Clinical Reasoning Tasks.” JAMA Network Open, April 2026. DOI: 10.1001/jamanetworkopen.2026.4003. massgeneralbrigham.org
  • [12] National Academy of Medicine / NCBI. “Diagnostic Errors in the Emergency Department.” ncbi.nlm.nih.gov/books/NBK588113
  • [13] JAMA. “Missed or Delayed Diagnosis in Hospitalized Patients,” 2024. jamanetwork.com
  • [14] Patient Safety Journal. “Characteristics and Trends of Medical Diagnostic Errors in the United States, 1999–2018.” 2024. patientsafetyj.com
  • [15] Keragon. “AI in Healthcare Statistics: 62 Findings from 18 Research Reports.” keragon.com
  • [16] Medical Economics. “Patients Turn to AI, Social Media When Choosing Doctors.” 2025. medicaleconomics.com
  • [17] Stroud et al. As cited in Springer Nature. “Trust and Artificial Intelligence in the Doctor-Patient Relationship.” Ethics and Information Technology, 2025. link.springer.com
  • [18] TempDev. “65 Key AI in Healthcare Statistics.” 2025. tempdev.com
  • [19] PMC / National Library of Medicine. “Comparing Diagnostic Accuracy of Clinical Professionals and Large Language Models: Systematic Review and Meta-Analysis.” 2025. pmc.ncbi.nlm.nih.gov/articles/PMC12047852
  • [20] Time Magazine. “Healthcare Is AI’s Hardest Test.” April 2026. time.com/7382493
  • [21] Medscape. “Physician Burnout and Depression Report.” 2024. thedo.osteopathic.org
  • [22] Commonwealth Fund. “A Poor Prognosis: More Than One-Third of Burned-Out U.S. Primary Care Physicians Plan to Stop Seeing Patients.” December 2024. commonwealthfund.org
  • [23] Stanford Medicine. “U.S. Physician Burnout Rates Drop Yet Remain Worryingly High.” April 2025. med.stanford.edu
  • [24] American Medical Association. “2 in 3 Physicians Are Using Health AI — Up 78% from 2023.” 2025. ama-assn.org
  • [25] npj Digital Medicine. “Diverging Trajectories of Trust in Healthcare and Online Information Seeking.” Nature, January 2026. nature.com/articles/s41746-026-02408-9
  • [26] PMC / National Library of Medicine. “Recent Advances in AI-Driven Mobile Health Enhancing Healthcare.” Bioengineering, January 2026. pmc.ncbi.nlm.nih.gov/articles/PMC12837455
  • [27] Journal of Medical Internet Research. “Evolving Health Information-Seeking Behavior in the Context of Google AI Overviews, ChatGPT, and Alexa.” October 2025. jmir.org/2025/1/e79961
  • [28] Philips Future Health Index 2025. “Building Trust in Healthcare AI: Five Key Insights.” Survey of 16,000+ patients across 16 countries, December 2024–April 2025. philips.com
  • [29] Blood Cancer United / Nature Medicine 2024. “How Patients Really Feel About Artificial Intelligence in Healthcare.” January 2025. bloodcancerunited.org
  • [30] Nong et al. “Patients’ Trust in Health Systems to Use Artificial Intelligence.” JAMA Network Open, February 2025. DOI: 10.1001/jamanetworkopen.2024.60628. pmc.ncbi.nlm.nih.gov/articles/PMC11829222
