A Tale of Three Artificial Intelligence (AI) Experiences in Healthcare Interactions

AI tools are increasingly being used in healthcare, particularly for tasks like clinical notetaking during virtual visits. As a patient, I’ve had three recent experiences with AI-powered notetaking tools during appointments with the same clinician. Each time, I consented to its use, but the results were very different across the three encounters. The first two involved similar tools with mostly good results but surprising issues around pronouns and transparency of the consent process. The third was a different tool with a noticeable drop in quality. But what really stands out, when I compare these to a visit without AI, is that human errors happen too, and the healthcare system lacks effective processes for identifying and correcting errors, no matter the source.

Encounter One: Good Notes, Incorrect Pronouns

At the start of my first virtual appointment, my clinician asked for my permission to use an AI-powered tool for notetaking. I consented. After the visit, I reviewed the clinical note, and the summary at the top described me using “he/him” pronouns. I’m female, so they should have been “she/her”.

The rest of the note was detailed, clinically accurate, and useful. But the pronoun error stood out. It seemed like the AI defaulted to male pronouns when gender information wasn’t explicitly mentioned, which made me wonder whether the model was trained with a gender bias or whether this was a design flaw in the tool.

Encounter Two: Clarifying Pronouns, Learning About Chart Access

At the next appointment, my clinician again asked for consent to use an AI-powered notetaker. I agreed and pointed out the pronoun error from the previous visit, clarifying that I am female and use she/her pronouns. My clinician looked at the prior note and was equally puzzled, commenting that this issue had come up with other patients in both directions, sometimes assigning female pronouns to male patients and vice versa. The clinician mentioned that the AI system supposedly had access to patient charts and should be able to pull gender information from existing records. That really surprised me: the consent statement had described the tool as a notetaking aid, but nothing had been said about access to my full chart. I would have given permission either way, but the fact that this hadn’t been disclosed upfront was disappointing. I had understood this to be a passive notetaking tool summarizing the visit in real time, not something actively pulling and using other parts of my health record.

This time, the pronouns in the note were correct (likely because we had discussed the error and I had stated my pronouns), and the overall summary was again accurate and detailed. But the fact that this was a recurring issue, with my provider seeing it in both directions across multiple patients, made it clear that pronoun errors weren’t a one-off glitch.

Encounter Three: A Different AI with Worse Results

By the third appointment, I knew what to expect. The clinician again asked for consent to use an AI notetaker, and I agreed. But after reviewing the note from this visit, two things stood out.

First, the quality of the notetaking was noticeably worse. Several errors were obvious, including situations where the note reflected the exact opposite of what had been discussed. For example, I had said that something did not happen, yet the note recorded that it did.

Second, this time the note disclosed the specific software used for notetaking at the bottom of the document. It was a different tool than the one used in the first two visits. I hadn’t been told that a different AI tool was being used, but based on the change in quality and the naming disclosure, it was clear this was a switch.

This experience reinforced that even when performing the same task — in this case, AI notetaking — the software can vary widely in accuracy and quality. I much preferred the output from the first two visits, even with the initial pronoun error, over the third experience where clinically significant details were recorded incorrectly.

Notably, there doesn’t seem to be a process or method (or, if there is one, it isn’t communicated to patients or easily findable when searching) for giving the health system feedback on the quality and accuracy of these tools. That seems like a major flaw in most health systems’ implementations of AI-related tools: they assess and evaluate only from the healthcare provider perspective, overlooking or outright ignoring the direct impact on patients, which in turn influences patient care, the clinician-patient relationship, and trust in the health system.

A Human-Only Encounter: Still Not Error-Free

To give further context, I want to compare these AI experiences with a separate virtual visit where no AI was involved. This was with a different clinician who took notes manually. The pronouns were correct in this note, but there were still factual inaccuracies.

A small but clear example: I mentioned using Device A, but the note stated I was using Device B. This was not a critical error at the time, but it was still incorrect.

The point here is that human documentation errors are not rare. They happen frequently, even without AI involved. Yet the narrative around AI in healthcare often frames mistakes as uniquely concerning when, in reality, this problem already exists across healthcare.

A Bigger Issue Is the Lack of Processes for Fixing Errors

Across all four encounters — both AI-assisted and human-driven — the most concerning pattern was not the errors themselves but the failure to correct them, even after they were pointed out.

In the first AI note where the pronouns were wrong, the note was never corrected, even after I brought it up at the next appointment. The error remains in my chart.

In the human-driven note, where the wrong device was recorded, I pointed out the error multiple times over the years. Despite that, the error persisted in my chart across multiple visits.

Eventually, it did affect my care. During a prescription renewal, the provider questioned whether I was using the device at all because they referenced the erroneous notes rather than the prescription history. I had to go back, cite old messages where I had originally pointed out the error, and clarify that the device listed in the notes was wrong.

I had stopped trying to correct this error after multiple failed attempts because it hadn’t impacted my care at the time. But years later, it suddenly mattered — and the persistence of that error caused confusion and required extra time, adding friction into what should have been a seamless prescription renewal process.

My point: the lack of effective remediation processes is not unique to either AI or human documentation. Errors get introduced and then they stay. There are no good systems for correcting clinical notes, whether written by a human or AI.

So, What Do We Do About AI in Healthcare?

Critics of AI in healthcare often argue that its potential for errors is a reason to avoid the technology altogether. But as these experiences show, human-driven documentation isn’t error-free either.

The problem isn’t AI.

It’s that healthcare systems as a whole have poor processes for identifying and correcting errors once they occur.

When we evaluate AI tools, we need to ask:

  • What types of errors are we willing to tolerate?
  • How do we ensure transparency about how the tools work and what data they access?
  • Most importantly, what mechanisms exist to correct errors after they’re identified?

This conversation needs to go beyond whether errors happen and instead focus on how we respond when they do. It’s worth thinking about this in the same way I’ve written about errors of commission and omission in diabetes care with automated insulin delivery (AID) systems (DOI: 10.1111/dme.14687; author copy here). Errors of commission happen when something incorrect is recorded. Errors of omission occur when important details are left out. Both types of errors can affect care, and both need to be considered when evaluating the use of AI or human documentation.

In my case, despite the pronoun error in the first AI note, the notetaking quality was generally higher than the third encounter with a different AI tool. And even in the human-only note, factual errors persisted over years with no correction.

AI can be useful for reducing clinician workload and improving documentation efficiency. But like any tool, its impact depends on how it’s implemented, how transparent the process is, and whether there are safeguards to address errors when they occur.

The reality is that both AI and human clinicians make mistakes.

What matters, and what we should work on addressing, is how to fix errors in healthcare documentation and records when they occur.

Right now, this is a weakness of the healthcare system, and not unique to AI.
