Beware “too much” and “too little” advice in Exocrine Pancreatic Insufficiency (EPI / PEI)

If I had a nickel for every time I saw conflicting advice for people with EPI, I could buy (more) pancreatic enzyme replacement therapy. (PERT is expensive, so it’s significant that there’s so much conflicting advice.)

One rule of thumb I find handy is to pause any time I see the words “too much” or “too little”.

This comes up in a lot of categories. For example, someone might say not to eat “too much” fat or fiber, and that a low-fat diet is better. The first part of the sentence should warrant a pause (red flag words – “too much”), and that should make you skeptical of any advice that follows.

Specifically on the “low fat diet” – this is not true. A lot of outdated advice about EPI comes from historical research that no longer reflects modern treatment. In the past, low-fat diets were recommended because early enzyme formulations were not encapsulated or as effective, so people in the 1990s struggled to digest fat because the enzymes weren’t working correctly at the right time in the body. The “bandaid” fix was to eat less fat. Now that enzyme formulations are significantly improved (starting in the early 2000s, enzymes are encapsulated so they get to the right place in our digestive system at the right time to work on the food we eat or drink), medical experts no longer recommend low-fat diets. Instead, people should eat a regular diet and adjust their enzyme intake to match that food intake, rather than the other way around (source: see section 4.6).

Think replacement of enzymes, rather than restriction of dietary intake: the “R” in PERT literally stands for replacement!

If you’re reading advice as a person with EPI (PEI), you need to have math in the back of your mind. (Sorry if you don’t like math, I’ll talk about some tools to help).

Any time people use words to indicate amounts of things, whether that’s amounts of enzymes or amounts of food (fat, protein, carbs, fiber), you need to think of specific numbers to go with these words.

And, you need to remember that everyone’s body is different, which means your body is different.

Turning words into math for pill count and enzymes for EPI

Enzyme intake should not be compared without considering multiple factors.

The first reason is because enzyme pills are not all the same size. Some prescription pancreatic enzyme replacement therapy (PERT) pills can be as small as 3,000 units of lipase or as large as 60,000 units of lipase. (They also contain thousands or hundreds of thousands of units of protease and amylase, to support protein and carbohydrate digestion. For this example I’ll stick to lipase, for fat digestion.)

If a person takes two enzyme pills per meal, that number alone tells us nothing. Or rather, it tells us only half of the equation!

The size of the pills matters. Someone taking two 10,000-lipase pills consumes 20,000 units per meal, while someone taking two 40,000-lipase pills consumes 80,000 units per meal.

That is a big difference! Comparing the two total amounts of enzymes (80,000 versus 20,000 units of lipase) shows a 4x difference.

And I hate to tell you this, but that’s still not the entire equation to consider. Hold on to your hat for a little more math, because…

The amount of fat consumed also matters.

Remember, enzymes are used to digest food. It’s not a magic pill where one (or two) pills will perfectly cover all food. It’s similar to insulin, where different people can need different amounts of insulin for the same amount of carbohydrates. Enzymes work the same way, where different people need different amounts of enzymes for the same amount of fat, protein, or carbohydrates.

And, people consume different amounts and types of food! Breakfast is a good example. Some people will eat cereal with milk – often that’s more carbs, a little bit of protein, and some fat. Some people will eat eggs and bacon – that’s very little carbs, a good amount of protein, and a larger amount of fat.

Let’s say you eat cereal with milk one day, and eggs and bacon the next day. Taking “two pills” might work for your cereal and milk, but not your eggs and bacon, if you’re the person with 10,000 units of lipase in your pill. However, taking “two pills” of 40,000 units of lipase might work for both meals. Or not: you may need more for the meal with higher amounts of fat and protein.

If someone eats the same quantity of fat and protein and carbs across all 3 meals, every day, they may be able to always consume the same number of pills. But for most of us, our food choices vary, and the protein and fat varies meal to meal, so it’s common to need different amounts at different meals. (If you want more details on how to figure out how much you need, given what you eat, check out this blog post with example meals and a lot more detail.)
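To make the math concrete, here’s a minimal sketch in Python with made-up numbers (placeholders for illustration only, not dosing advice) showing how pill size, pill count, and the fat in a meal combine:

```python
# Illustrative only: all numbers here are hypothetical placeholders, not dosing advice.
PILL_SIZE_LIPASE = 10000   # units of lipase per pill (pills vary widely in size)
PILLS_PER_MEAL = 2

total_lipase = PILL_SIZE_LIPASE * PILLS_PER_MEAL
print(f"Total lipase taken with this meal: {total_lipase} units")

# A personal ratio of lipase units per gram of fat is something each person figures
# out over time; 2,000 units/gram is just an example placeholder.
MY_RATIO_UNITS_PER_GRAM_FAT = 2000

meals = {"cereal with milk": 10, "eggs and bacon": 35}  # rough grams of fat per meal
for meal, fat_grams in meals.items():
    needed = fat_grams * MY_RATIO_UNITS_PER_GRAM_FAT
    verdict = "covers it" if total_lipase >= needed else "may not be enough"
    print(f"{meal}: ~{needed} units needed vs {total_lipase} taken -> {verdict}")
```

The specific numbers aren’t the point; the point is that “two pills” only means something once you know the pill size and what’s in the meal.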

You need to understand your baseline before making any comparisons

Everyone’s body is different, and enzyme needs vary widely depending on the amount of fat and protein consumed. What is “too much” for one person might be exactly the right amount for another, even when comparing the same exact food quantity. This variability makes it essential to understand your own baseline rather than following generic guidance. The key is finding what works for your specific needs rather than focusing on an arbitrary notion of “too much”, because “too much” only means something when you can compare specific numbers, apples to apples.

A useful analogy is heart rate. Some people have naturally higher or lower resting heart rates. If someone (who is not a doctor giving you direct medical advice) tells you that your heart rate is too high, it’s like – what can you do about it? It’s not like you can grow your heart two sizes (like the Grinch). While fitness and activity can influence heart rate slightly, individual baseline differences remain significant. If you find yourself saying “duh, of course I’m not going to try to compare my heart rate to my spouse’s, our bodies are different”, that’s a GREAT frame of mind that you should apply to EPI, too.

(Another example is respiratory rate, which also varies person to person. If someone is having trouble breathing, the solution is not as simple as “breathe more” or “breathe less”—it depends on their underlying causes and their normal range, because you can’t tell whether they are breathing more or less than usual without knowing what their usual is.)

If you have EPI, fiber (and anything else) also needs numbers

Fiber also follows this pattern. Some people caution against consuming “too much” fiber, but a baseline level is essential. “Too little” fiber can mimic EPI symptoms, leading to soft, messy stools. Finding the right amount of fiber is just as crucial as balancing fat and protein intake.

If you see or hear comments that you likely consume “too much” fiber – red flag check for “too much”! The same goes for advice about a ‘low fiber’ diet. Low, meaning what number?

You should get an estimate for how much you are consuming and contextualize it against the typical recommendations overall, evaluate whether fiber is contributing to your issues, and only then consider experimenting with it.

(For what it’s worth, you may need to adjust enzyme intake for fat/protein first before you play around with fiber, if you have EPI. Many people are given PERT prescriptions below standard guidelines, so it is common to need to increase dosing.)

For example, if you’re consuming 5 grams of fiber in a day, and the typical guidance is often for 25-30 grams (source; this varies by age, gender, and country, so it’s a ballpark)… you are consuming less than the average person and the average recommendation.

In contrast, if you’re consuming 50+ grams of fiber? You’re consuming more than the average person/recommendation.

Understanding where you are (around the recommendation, quite a bit below, or above?) will then help you determine whether advice for ‘more’ or ‘less’ is actually appropriate in your case. Most people have no idea what you’re eating – and honestly, you may not either – so any advice for “too much”, “too little”, or “more” or “less” is completely unhelpful without these numbers in mind.

You don’t have to tell people these numbers, but you can and should know them if you want to evaluate whether YOU think you need more or less compared to your previous baseline.

How do you get numbers for fiber, fat, protein, and carbohydrates?

Instead of following vague “more” or “less” advice, first track your intake and outcomes.

If you don’t have a good way to estimate the amount of fat, protein, carbohydrates, and/or fiber, here’s a tool you can use – this is a Custom GPT that is designed to give you back estimates of fat, protein, carbohydrates, and fiber.

You can give it a meal, or a day’s worth of meals, or several days, and have it generate estimates for you. (It’s not perfect but it’s probably better than guessing, if you’re not familiar with estimating these macronutrients).

If you don’t like or can’t access ChatGPT (it works with free accounts, if you log in), you can also take this prompt, adjust it how you like, and give it to any free LLM tool you like (Gemini, Claude, etc.):

You are a dietitian with expertise in estimating the grams of fat, protein, carbohydrate, and fiber based on a plain language meal description. For every meal description given by the user, reply with structured text for grams of fat, protein, carbohydrates, and fiber. Your response should be four numbers and their labels. Reply only with this structure: “Fat: X; Protein: Y; Carbohydrates: Z; Fiber: A”. (Replace the X, Y, Z, and A with your estimates for these macronutrients.) If there is a decimal, round to the nearest whole number. If there are no grams of any of the macronutrients, mark them as 0 rather than nil. If the result is 0 for all four variables, please reply to the user: “I am unable to parse this meal description. Please try again.”

If you are asked by the user to then summarize a day’s worth of meals that you have estimated, you are able to do so. (Or a week’s worth). Perform the basic sum calculation needed to do this addition of each macronutrient for the time period requested, based on the estimates you provided for individual meals.
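(If you’re comfortable with a little code, you can also use a prompt like this programmatically instead of through a chat interface. Here’s a minimal sketch using the OpenAI Python library; the model name, and the library itself, are assumptions you’d swap for whatever LLM provider you prefer.)

```python
# A minimal sketch, assuming the OpenAI Python library and an API key in OPENAI_API_KEY.
# The model name is a placeholder; use whichever model/provider you prefer.
from openai import OpenAI

SYSTEM_PROMPT = (
    "You are a dietitian with expertise in estimating the grams of fat, protein, "
    "carbohydrate, and fiber based on a plain language meal description. "
    "Reply only with this structure: 'Fat: X; Protein: Y; Carbohydrates: Z; Fiber: A'."
)

client = OpenAI()
response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": "spaghetti with meatballs and a side salad"},
    ],
)
print(response.choices[0].message.content)
```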

Another option is using an app like PERT Pilot. PERT Pilot is a free app for iOS for people with EPI that requires no login or user account information, and you can put in plain language descriptions of meals (“macaroni and cheese” or “spaghetti with meatballs”) and get back estimates of fat, protein, and carbohydrates, and record how many enzymes you took so you can track your outcomes over time. (Android users – email me at Dana+PERTPilot@OpenAPS.org if you’d like to test the forthcoming Android version!) Note that PERT Pilot doesn’t estimate fiber, but if you want to start with fat/protein estimates, PERT Pilot is another way to get started with seeing what you typically consume. (For people without EPI, you can use Carb Pilot, another free iOS app that similarly gives estimates of macronutrients.)

Beware advice of "more" or "less" that is vague and non-numeric (not a number) unless you know your baseline numbers in exocrine pancreatic insufficiency. A blog by Dana M. Lewis from DIYPS.orgTL;DR: Instead of arbitrarily lowering or increasing fat or fiber in the diet, measure and estimate what you are consuming first. If you have EPI, assess fat digestion first by adjusting enzyme intake to minimize symptoms. (And then protein, especially for low fat / high protein meals, such as chicken or fish.) Only then consider fiber intake—some people may actually need more fiber rather than less than what they were consuming before if they experience mushy stools. Remember the importance of putting “more” or “less” into context with your own baseline numbers. Estimating current consumption is crucial because an already low-fiber diet may be contributing to the problem, and reducing fiber further could make things worse. Understanding your own baseline is the key.

You Can Create Your Own Icons (and animated gifs)

Over the years, I’ve experimented with different tools for making visuals. Some of them are just images but in the last several years I’ve made more animations, too.

But not with any fancy design program or purpose built tool. Instead, I use PowerPoint.

Making animated gifs

I first started using PowerPoint to create gifs around 2018 or 2019. At the time, PowerPoint didn’t have a built-in option to export directly to GIF format, so I had to export animations as a movie file first and then use an online converter to turn them into a GIF. Fortunately, in recent years, PowerPoint has added a direct “Export as GIF” feature.

The process of making an animated GIF in PowerPoint is similar to adding animations or transitions in a slide deck for a presentation. I’ve used this approach for various projects.

Am I especially trained? No. Do I feel like I have design skills? No.

Elbow grease and determination to try is what I have, with the goal of trying to use visuals to convey information as a summary or to illustrate a key point to accompany written text. (I also have a tendency to want to be a perfectionist, and I have to consciously let that go and let “anything is better than nothing” guide my attempts.)

Making icons is possible, too

Beyond animations, I’ve also used PowerPoint to create icons and simple logo designs.

I ended up making the logos for Carb Pilot (a free iOS app that enables you to track the macronutrients of your choice) and PERT Pilot (a free iOS app that enables people with exocrine pancreatic insufficiency, known as EPI or PEI, to track their enzyme intake) using PowerPoint.

This, and ongoing use of LLMs to help me with coding projects like these apps, is what led me to the realization that I can now make icons, too.

I was working to add a widget to Carb Pilot, so that users can have a widget on the home screen to more quickly enter meals without having to open the app and then tap; this saves a click every time. I went from having it be a single button to having 4 buttons to simulate the Carb Pilot home screen. For the “saved meals” button, I wanted a list icon, to indicate the list of previous meals. I went to SF Symbols, Apple’s icon library, and picked out the list icon I wanted to use, and referenced it in XCode. It worked, but it lacked something.

A light purple iOS widget with four buttons: top left is blue and says AI; top right is purple with a white microphone icon; bottom left is periwinkle blue with a white plus sign icon; bottom right is bright green with a custom list icon, where instead of bullets the three list items are apple, cupcake, and banana mini-icons.

It occurred to me that maybe I could tweak it somehow and make the bullets of the list represent food items. I wasn’t sure how, so I asked the LLM if it was possible. Because I’ve done my other ‘design’ work in PowerPoint, I went there and quickly dropped some shapes and lines to simulate the icon, then tested exporting – yes, you can export as SVG! I spent a few more minutes tweaking versions of it and exporting them. It turns out that while you can export as SVG, the way I had designed it wasn’t really suited for SVG use: when I dropped the SVG into XCode, it didn’t show up. I asked the LLM again and it suggested trying PNG format. I exported the icon from PowerPoint as PNG, dropped it into XCode, and it worked!

(That was a good reminder that even when you use the “right” format, you may need to experiment to see what actually works in practice with whatever tools you’re using, and not let the first failure be a sign that it can’t work.)

Use What Works

There’s a theme you’ll be hearing from me: try and see what works. Just try. You don’t know if you don’t try. With LLMs and other types of AI, we have more opportunities to try new and different things that we may not have known how to do before. From coding your own apps to doing data science to designing custom icons, these are all things I didn’t know how to do before but now I can. A good approach is to experiment, try different things (and different prompts), and not be afraid to use “nontraditional” tools for projects, creative or otherwise. If it works, it works!

Facing Uncertainty with AI and Rethinking What If You Could?

If you’re feeling overwhelmed by the rapid development of AI, you’re not alone. It’s moving fast, and for many people the uncertainty of the future (for any number of reasons) can feel scary. One reaction is to ignore it, dismiss it, or assume you don’t need it. Some people try it once, usually on something they’re already good at, and when AI doesn’t perform better than they do, they conclude it’s useless or overhyped, and possibly feel justified in going back to ignoring or rejecting it.

But that approach misses the point.

AI isn’t about replacing what you already do well. It’s about augmenting what you struggle with, unlocking new possibilities, and challenging yourself to think differently, all in the pursuit of enabling YOU to do more than you could yesterday.

One of the ways to navigate the uncertainty around AI is to shift your mindset. Instead of thinking, “That’s hard, and I can’t do that,” ask yourself, “What if I could do that? How could I do that?”

Sometimes I get a head start by asking an LLM just that: “How would I do X? Lay out a plan or outline an approach to doing X.” I don’t always immediately jump to doing that thing, but I think about it, and probably 2 out of 3 times, laying out a possible approach means I do come back to that project or task and attempt it at a later time.

Even if you ultimately decide not to pursue something because of time constraints or competing priorities, at least you’ve explored it and possibly learned something in the initial exploration. But I want to point out that there’s a big difference between legitimately not being able to do something and choosing not to. Increasingly, the latter is what happens: you may choose not to tackle a task or take on a project, which is very different from not being able to do so.

Finding the Right Use Cases for AI

Instead of testing AI on things you’re already an expert in, try applying it to areas where you’re blocked, stuck, overwhelmed, or burdened by the task. Think about a skill you’ve always wanted to learn but assumed was out of reach. Maybe you’ve never coded before, but you’re curious about writing a small script to automate a task. Maybe you’ve wanted to design a 3D-printed tool to solve a real-world problem but didn’t know where to start. AI can be a guide, an assistant, and sometimes even a collaborator in making these things possible.

For example, I once thought data science was beyond my skill set. For the longest time, I couldn’t even get Jupyter Notebooks to run! Even with expert help, I was clearly doing something silly and wrong, and it took a long time – and eventually LLM assistance, going step by step and deeper into sub-steps – to figure out the step I was missing that was never in the documentation or instructions. From there, I learned enough to do a lot of the data science work on my own projects. You can see that represented in several recent projects. The same thing happened with iOS development, which I initially felt imposter syndrome about. And this year, after FOUR failed attempts (even 3 using LLMs), I finally got a working app for Android!

Each time, the challenge felt enormous. But by shifting from “I can’t” to “What if I could?” I found ways to break through. And each time AI became a more capable assistant, I revisited previous roadblocks and made even more progress, even when it was a project (like an Android version of PERT Pilot) I had previously failed at, and in that case, multiple times.

Revisiting Past Challenges

AI is evolving rapidly, and what wasn’t possible yesterday might be feasible today. Literally. (A great example is that I wrote a blog post about how medical literature seems like a game of telephone and was opining on AI-assisted tools to aid with tracking changes to the literature over time. The day I put that blog post in the queue, OpenAI announced their Deep Research tool, which I think can in part address some of the challenges I talked about currently being unsolved!)

One thing I have started to do that I recommend is keeping track of problems or projects that feel out of reach. Write them down. Revisit them every few months, and explore them with the latest LLM and AI tools. You might be surprised at how much has changed, and what is now possible.

Moving Forward with AI

You don’t even have to use AI for everything. (I don’t.) But if you’re not yet in the habit of using AI for certain types of tasks, I challenge you to find a way to use an LLM for *something* that you are working on.

A good place to insert this into your work/projects is to start noting when you find yourself saying or thinking “this is the way we/I do/did things”.

When you catch yourself thinking this, stop and ask:

  • Does it have to be done that way? Why do we think so?
  • What are we trying to achieve with this task/project?
  • Are there other ways we can achieve this?
  • If not, can we automate some or all steps of this process? Can some steps be eliminated?

You can ask yourself these questions, but you can also ask these questions to an LLM. And play around with what and how you ask (the prompt, or what you ask it, makes a difference).

One example for me has been working on a systematic review and meta analysis of a medical topic. I need to extract details about criteria I am analyzing across hundreds of papers. Oooph, big task, very slow. The LLM tools aren’t yet good at extracting non-obvious data from research papers, especially PDFs where the data I am interested in may be tucked into tables, figure captions, or images themselves rather than explicitly stated as part of the results section. So for now, that still has to be done manually, but it’s on my list to revisit periodically with new LLMs.

However, I recognized that the way I was writing down (well, typing into a spreadsheet) the extracted data was burdensome and slow, and I wondered if I could make a quick simple HTML page to guide me through the extraction, with an output of the data in CSV that I could open in spreadsheet form when I’m ready to analyze. The goal is easier input of the data with the same output format (CSV for a spreadsheet). And so I used an LLM to help me quickly build that HTML page, set up a local server, and run it so I can use it for data extraction. This is one of those projects where I felt intimidated – I never quite understood spinning up servers and in fact didn’t quite understand fundamentally that for free I can “run” “a server” locally on my computer in order to do what I wanted to do. So in the process of working on a task I really understood (make an HTML page to capture data input), I was able to learn about spinning up and using local servers! Success, in terms of completing the task and learning something I can take forward into future projects.
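(For anyone curious what “running a server locally” can look like: Python’s built-in http.server module is one way to do it. This is a minimal sketch, not necessarily the exact setup the LLM walked me through, and the filename is a placeholder.)

```python
# A minimal sketch: serve the current folder over HTTP so a page like
# extraction.html (placeholder name) can be opened at http://localhost:8000/extraction.html
import http.server
import socketserver

PORT = 8000

with socketserver.TCPServer(("", PORT), http.server.SimpleHTTPRequestHandler) as httpd:
    print(f"Serving the current folder at http://localhost:{PORT}/")
    httpd.serve_forever()  # stop with Ctrl+C
```

(Even simpler, running `python -m http.server` from the command line in that folder does the same thing.)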

Another smaller recent example is when I wanted to put together a simple case report for my doctor, summarizing symptoms etc, and then also adding in PDF pages of studies I was referencing so she had access to them. I knew from the past that I could copy and paste the thumbnails from Preview into the PDF, but it got challenging to be pasting 15+ pages in as thumbnails and they were inserting and breaking up previous sections, so the order of the pages was wrong and hard to fix. I decided to ask my LLM of choice if it was possible to automate compiling 4 PDF documents via a command line script, and it said yes. It told me what library to install (and I checked this is an existing tool and not a made up or malicious one first), and what command to run. I ran it, it appended the PDFs together into one file the way I wanted, and it didn’t require the tedious hand commands to copy and paste everything together and rearrange when the order was messed up.
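I won’t assume which library it suggested, but as an example of how short this kind of script can be, here’s a sketch using the pypdf library (pip install pypdf); the filenames are placeholders:

```python
# A sketch of merging several PDFs into one file, using the pypdf library.
# Filenames are placeholders for the case summary and the study PDFs.
from pypdf import PdfWriter

files_in_order = ["case-summary.pdf", "study1.pdf", "study2.pdf", "study3.pdf"]

writer = PdfWriter()
for path in files_in_order:
    writer.append(path)  # appends all pages of each file, in the order listed

with open("combined.pdf", "wb") as out:
    writer.write(out)
print("Wrote combined.pdf")
```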

The more I practice, the easier I find it to switch into the habit of asking “would it be possible to do X?” or “Is there a way to do Y more simply, more efficiently, or to automate it?”. That then leads to options I can decide to implement, or not. But it feels a lot better to have those on hand, even if I choose not to take a project on, rather than to feel overwhelmed and out of control and uncertain about what AI can do (or not).

Facing uncertainty with AI and rethinking “What if you could?”, a blog post by Dana M. Lewis on DIYPS.org

If you can shift your mindset from fear and avoidance to curiosity and experimentation, you might discover new skills, solve problems you once thought were impossible, and open up entirely new opportunities.

So, the next time you think, “That’s too hard, I can’t do that,” stop and ask:

“What if I could?”

If you appreciated this post, you might like some of my other posts about AI if you haven’t read them.

The prompt matters when using Large Language Models (LLMs) and AI in healthcare

I see more and more research papers coming out these days about different uses of large language models (LLMs, a type of AI) in healthcare. There are papers evaluating it for supporting clinicians in decision-making, aiding in note-taking and improving clinical documentation, and enhancing patient education. But I see a wide-sweeping trend in the titles and conclusions of these papers, exacerbated by media headlines, making sweeping claims about the performance of one model versus another. I challenge everyone to pause and consider a critical fact that is less obvious: the prompt matters just as much as the model.

As an example of this, I will link to a recent pre-print of a research article I worked on with Liz Salmi (pre-print here).

Liz nerd-sniped me about an idea of a study to have a patient and a neuro-oncologist evaluate LLM responses related to patient-generated queries about a chart note (or visit note or open note or clinical note, whatever you want to call it). I say nerd-sniped because I got very interested in designing the methods of the study, including making sure we used the APIs to model these ‘chat’ sessions so that the prompts were not influenced by custom instructions, ‘memory’ features within the account or chat sessions, etc. I also wanted to test something I’ve observed anecdotally from personal LLM use across other topics, which is that with 2024-era models the prompt matters a lot for what type of output you get. So that’s the study we designed, and wrote with Jennifer Clarke, Zhiyong Dong, Rudy Fischmann, Emily McIntosh, Chethan Sarabu, and Catherine (Cait) DesRoches, and I encourage you to check out the pre-print and enjoy the methods section, which is critical for understanding the point I’m trying to make here.

In this study, the data showed that when LLM outputs were evaluated for a healthcare task, the results varied significantly depending not just on the model but also on how the task was presented (the prompt). Specifically, persona-based prompts—designed to reflect the perspectives of different end users like clinicians and patients—yielded better results, as independently graded by both an oncologist and a patient.

The Myth of the “Best Model for the Job”

Many research papers conclude with simplified takeaways: Model A is better than Model B for healthcare tasks. While performance benchmarking is important, this approach often oversimplifies reality. Healthcare tasks are rarely monolithic. There’s a difference between summarizing patient education materials, drafting clinical notes, or assisting with complex differential diagnosis tasks.

But even within a single task, the way you frame the prompt makes a profound difference.

Consider these three prompts for the same task:

  • “Explain the treatment options for early-stage breast cancer.”
  • “You’re an oncologist. Explain the treatment options for early-stage breast cancer.”
  • “You’re an oncologist. Explain the treatment options for early-stage breast cancer as you would to a newly diagnosed patient with no medical background.”

The second and third prompts likely result in more accessible and tailored responses. If a study only tests general prompts (e.g. prompt one), it may fail to capture how much more effective an LLM can be with task-specific guidance.
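One way to see this for yourself is to run the same underlying question through several prompt framings and compare the outputs side by side. Here’s a minimal sketch using the OpenAI Python library (the model name and setup are placeholders; the same idea works with any provider’s API):

```python
# A minimal sketch: send the same question with three prompt framings and compare outputs.
# Assumes the OpenAI Python library and an API key; the model name is a placeholder.
from openai import OpenAI

client = OpenAI()
prompts = [
    "Explain the treatment options for early-stage breast cancer.",
    "You're an oncologist. Explain the treatment options for early-stage breast cancer.",
    ("You're an oncologist. Explain the treatment options for early-stage breast cancer "
     "as you would to a newly diagnosed patient with no medical background."),
]

for i, prompt in enumerate(prompts, start=1):
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder
        messages=[{"role": "user", "content": prompt}],
    )
    print(f"--- Prompt {i} ---\n{response.choices[0].message.content}\n")
```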

Why Prompting Matters in Healthcare Tasks

Prompting shapes how the model interprets the task and generates its output. Here’s why it matters:

  • Precision and Clarity: A vague prompt may yield vague results. A precise prompt clarifies the goal and the speaker (e.g. in prompt 2), and also often the audience (e.g. in prompt 3).
  • Task Alignment: Complex medical topics often require different approaches depending on the user—whether it’s a clinician, a patient, or a researcher.
  • Bias and Quality Control: Poorly constructed prompts can inadvertently introduce biases.

Selecting a Model for a Task? Test Multiple Prompts

When evaluating LLMs for healthcare tasks—or applying insights from a research paper—consider these principles:

  1. Prompt Variation Matters: If an LLM fails on a task, it may not be the model’s fault. Try adjusting your prompts before concluding the model is ineffective, and avoid broad sweeping claims about a field or topic that aren’t supported by the test you are running.
  2. Multiple Dimensions of Performance: Look beyond binary “good” vs. “bad” evaluations. Consider dimensions like readability, clinical accuracy, and alignment with user needs, as an example when thinking about performance in healthcare. In our paper, we saw some cases where a patient and provider overlapped in ratings, and other places where the ratings were different.
  3. Reproducibility and Transparency: If a study doesn’t disclose how prompts were designed or varied, its conclusions may lack context. Reproducibility in AI studies depends not just on the model, but on the interaction between the task, model, and prompt design. You should be looking for these kinds of details when reading or peer reviewing papers. Take results and conclusions with a grain of salt if these methods are not detailed in the paper.
  4. Involve Stakeholders in Evaluation: As shown in the preprint mentioned earlier, involving both clinical experts and patients in evaluating LLM outputs adds critical perspectives often missing in standard evaluations, especially as we evolve to focus research on supporting patient needs and not simply focusing on clinician and healthcare system usage of AI.

What This Means for Healthcare Providers, Researchers, and Patients

  • For healthcare providers, understand that the way you frame a question can improve the usefulness of AI tools in practice. A carefully constructed prompt, adding a persona or requesting information for a specific audience, can change the output.
  • For researchers, especially those developing or evaluating AI models, it’s essential to test prompts across different task types and end-user needs. Transparent reporting on prompt strategies strengthens the reliability of your findings.
  • For patients, recognize that AI-generated health information is shaped by both the model and the prompt. This can support critical thinking when interpreting AI-driven health advice. Remember that LLMs can be biased, but so can humans in healthcare. The same approach for assessing bias and evaluating experiences in healthcare should be used for LLM output as well as human output. Everyone (humans) and everything (LLMs) are capable of bias or errors in healthcare.

Prompts matter, so consider model type as well as the prompt as a factor in assessing LLMs in healthcare. Blog by Dana M. Lewis

TLDR: Instead of asking “Which model is best?”, a better question might be:

“How do we design and evaluate prompts that lead to the most reliable, useful results for this specific task and audience?”

I’ve observed, and this study adds evidence, that prompt interaction with the model matters.

A Tale of Three Artificial Intelligence (AI) Experiences in Healthcare Interactions

AI tools are being increasingly used in healthcare, particularly for tasks like clinical notetaking during virtual visits. As a patient, I’ve had three recent experiences with AI-powered notetaking tools during appointments with the same clinician. Each time, I consented to its use, but the results were very different across the three encounters. The first two involved similar tools with mostly good results but surprising issues around pronouns and transparency of the consent process. The third was a different tool with a noticeable drop in quality. But what really stands out, when I compare these to a visit without AI, is that human errors happen too — and the healthcare system lacks effective processes for identifying and correcting errors, no matter the source.

Encounter One: Good Notes, Incorrect Pronouns

At the start of my first virtual appointment, my clinician asked for my permission to use an AI-powered tool for notetaking. I consented. After the visit, I reviewed the clinical note, and the summary at the top described me using “he/him” pronouns. I’m female, so they should have been “she/her”.

The rest of the note was detailed and clinically accurate and useful. But the pronoun error stood out. It seemed like the AI defaulted to male pronouns when gender information wasn’t explicitly mentioned, which made me wonder whether the model was trained with gender bias or if this was a design flaw in this tool.

Encounter Two: Clarifying Pronouns, Learning About Chart Access

At the next appointment, my clinician again asked for consent to use an AI-powered notetaker. I agreed and pointed out the pronoun error from the previous visit, clarifying that I am female and use she/her pronouns. My clinician looked at the prior note and was equally puzzled, commenting that this issue had come up with other patients — both directions, sometimes assigning female pronouns to male patients and vice versa. The clinician mentioned that the AI system supposedly had access to patient charts and should be able to pull gender information from existing records. That really surprised me: the consent statement had described the tool as a notetaking aid, but nothing had been said about access to my full chart. I would have given permission either way, but the fact that this hadn’t been disclosed upfront was disappointing. I had understood this to be a passive notetaking tool summarizing the visit in real time, not something actively pulling and using other parts of my health record.

This time, the pronouns in the note were correct (which could be because we talked about it and I declared the pronouns), and the overall summary was again accurate and detailed. But the fact that this was a recurring issue, with my provider seeing it in both directions across multiple patients, made it clear that pronoun errors weren’t a one-off glitch.

Encounter Three: A Different AI with Worse Results

By the third appointment, I knew what to expect. The clinician again asked for consent to use an AI notetaker, and I agreed. But after reviewing the note from this visit, two things stood out.

First, the quality of the notetaking was noticeably worse. Several errors were obvious, including situations where the note reflected the exact opposite of what had been discussed. For example, I had said that something did not happen, yet the note recorded that it did.

Second, this time the note disclosed the specific software used for notetaking at the bottom of the document. It was a different tool than the one used in the first two visits. I hadn’t been told that a different AI tool was being used, but based on the change in quality and the naming disclosure, it was clear this was a switch.

This experience reinforced that even when performing the same task — in this case, AI notetaking — the software can vary widely in accuracy and quality. I much preferred the output from the first two visits, even with the initial pronoun error, over the third experience where clinically significant details were recorded incorrectly.

Notably, there doesn’t seem to be a process or method (or if there is one, it is not communicated to patients or easily findable when searching) to give the health system feedback on the quality and accuracy of these tools. That seems like a major flaw in most health systems’ implementations of AI-related tools: assessing and evaluating only from the healthcare provider perspective, while overlooking or outright ignoring the direct impact on patients (which influences patient care, the clinician-patient relationship, and trust in the health system).

A Human-Only Encounter: Still Not Error-Free

To give further context, I want to compare these AI experiences with a separate virtual visit where no AI was involved. This was with a different clinician who took notes manually. The pronouns were correct in this note, but there were still factual inaccuracies.

A small but clear example: I mentioned using Device A, but the note stated I was using Device B. This was not a critical error at the time, but it was still incorrect.

The point here is that human documentation errors are not rare. They happen frequently, even without AI involved. Yet the narrative around AI in healthcare often frames mistakes as uniquely concerning when, in reality, this problem already exists across healthcare.

A Bigger Issue is Lack of Processes for Fixing Errors

Across all four encounters — both AI-assisted and human-driven — the most concerning pattern was not the errors themselves but the failure to correct them, even after they were pointed out.

In the first AI note where the pronouns were wrong, the note was never corrected, even after I brought it up at the next appointment. The error remains in my chart.

In the human-driven note, where the wrong device was recorded, I pointed out the error multiple times over the years. Despite that, the error persisted in my chart across multiple visits.

Eventually, it did affect my care. During a prescription renewal, the provider questioned whether I was using the device at all because they referenced the erroneous notes rather than the prescription history. I had to go back, cite old messages where I had originally pointed out the error, and clarify that the device listed in the notes was wrong.

I had stopped trying to correct this error after multiple failed attempts because it hadn’t impacted my care at the time. But years later, it suddenly mattered — and the persistence of that error caused confusion and required extra time, adding friction into what should have been a seamless prescription renewal process.

My point: the lack of effective remediation processes is not unique to either AI or human documentation. Errors get introduced and then they stay. There are no good systems for correcting clinical notes, whether written by a human or AI.

So, What Do We Do About AI in Healthcare?

Critics of AI in healthcare often argue that its potential for errors is a reason to avoid the technology altogether. But as these experiences show, human-driven documentation isn’t error-free either.

The problem isn’t AI.

It’s that healthcare systems as a whole have poor processes for identifying and correcting errors once they occur.

When we evaluate AI tools, we need to ask:

  • What types of errors are we willing to tolerate?
  • How do we ensure transparency about how the tools work and what data they access?
  • Most importantly, what mechanisms exist to correct errors after they’re identified?

This conversation needs to go beyond whether errors happen and instead focus on how we respond when they do.  It’s worth thinking about this in the same way I’ve written about errors of commission and omission in diabetes care with automated insulin delivery (AID) systems (DOI: 10.1111/dme.14687; author copy here). Errors of commission happen when something incorrect is recorded. Errors of omission occur when important details are left out. Both types of errors can affect care, and both need to be considered when evaluating the use of AI or human documentation.

In my case, despite the pronoun error in the first AI note, the notetaking quality was generally higher than the third encounter with a different AI tool. And even in the human-only note, factual errors persisted over years with no correction.

Three encounters with AI in healthcare - reflecting on errors of omission and commission that happen both with humans and AI, a blog post by Dana M. Lewis from DIYPS.org

AI can be useful for reducing clinician workload and improving documentation efficiency. But like any tool, its impact depends on how it’s implemented, how transparent the process is, and whether there are safeguards to address errors when they occur.

The reality is both AI and human clinicians make mistakes.

What matters, and what we should work on addressing, is how to fix errors in healthcare documentation and records when they occur.

Right now, this is a weakness of the healthcare system, and not unique to AI.

How I Use LLMs like ChatGPT And Tips For Getting Started

You’ve probably heard about new AI (artificial intelligence) tools like ChatGPT, Bard, Midjourney, DALL-E and others. But, what are they good for?

Last fall I started experimenting with them. I looked at AI art tools and found them to be challenging, at the time, for one of my purposes, which was creating characters and illustrating a storyline with consistent characters for some of my children’s books. I also tested GPT-3 (meaning version 3.0 of GPT). It wasn’t that great, to be honest. But later, GPT-3.5 was released, along with the ChatGPT chat interface to it, which WAS a big improvement for a lot of my use cases. (And now, GPT-4 is out and is an even bigger improvement, although it costs more to use. More on the cost differences below.)

So what am I using these AI tools for? And how might YOU use some of these AI tools? And what are the limitations? This is what I’ve learned:

1. The most frequent way I use these AI tools is for getting started on a project, especially those related to writing.

You know the feeling of staring at a blank page and not knowing where to start? Maybe it’s the blank page of a cold email; the blank page of an essay or paper you need to write; the blank page of the outline for a presentation. Starting is hard!

Even for this blog post, I had a list of bulleted notes of things I wanted to remember to include. But I wasn’t sure how I wanted to start the blog post or incorporate them. I stuck the notes in ChatGPT and asked it to expand the notes.

What did it do? It wrote a few paragraph summary. Which isn’t what I wanted, so I asked it again to use the notes and this time “expand each bullet into a few sentences, rather than summarizing”. With these clear directions, it did, and I was able to look at this content and decide what I wanted to edit, include, or remove.

Sometimes I’m stuck on a particular writing task, and I use ChatGPT to break it down. In addition to kick-starting any type of writing overall, I’ve asked it to:

  • Take an outline of notes and summarize them into an introduction; limitations section; discussion section; conclusion; one paragraph summary; etc.
  • Take a bullet point list of notes and write full, complete sentences.
  • Take a long list of notes I’ve written about data I’ve extracted from a systematic review I was working on, and ask it about recurring themes or outlier concepts. Especially when I had 20 pages (!) of hand-written notes in bullets with some loose organization by section, I could feed in chunks of content and get help getting the big picture from that 20 pages of content I had created. It can highlight themes in the data based on the written narratives around the data.

A lot of times, the best thing it does is it prompts my brain to say “that’s not correct! It should be talking about…” and I’m able to more easily write the content that was in the back of my brain all along. I probably use 5% of what it’s written, and more frequently use it as a springboard for my writing. That might be unique to how I’m using it, though, and other simple use cases such as writing an email to someone or other simplistic content tasks may mean you can keep 90% or more of the content to use.

2. It can also help analyze data (caution alert!) if you understand how the tools work.

Huge learning moment here: these tools are called LLMs (large language models). They are trained on large amounts of language. They’re essentially designed so that, based on all of those words (language) it’s taken in previously, to predict content that “sounds” like what would come after a given prompt. So if you ask it to write a song or a haiku, it “knows” what a song or a haiku “looks” like, and can generate words to match those patterns.

It’s essentially a PATTERN MATCHER on WORDS. Yeah, I’m yelling in all caps here because this is the biggest confusion I see. ChatGPT or most of these LLMs don’t have access to the internet; they’re not looking up in a search engine for an answer. If you ask it a question about a person, it’s going to give you an answer (because it knows what this type of answer “sounds” like), but depending on the amount of information it “remembers”, some may be accurate and some may be 100% made up.

Why am I explaining this? Remember the above section where I highlighted how it can start to sense themes in the data? It’s not answering solely based on the raw data; it’s not doing analysis of the data, but mostly of the words surrounding the data. For example, you can paste in data (from a spreadsheet) and ask it questions. I did that once, pasting in some data from a pivot table and asking it the same question I had asked myself in analyzing the data. It gave me the same sense of the data that I had based on my own analysis, then pointed out it was only qualitative analysis and that I should also do quantitative statistical analysis. So I asked it if it could do quantitative statistical analysis. It said yes, it could, and spit out some numbers and described the methods of quantitative statistical analysis.

But here’s the thing: those numbers were completely made up!

It can’t actually use (in its current design) the methods it was describing verbally, and instead made up numbers that ‘sounded’ right.

So I asked it to describe how to do that statistical method in Google Sheets. It provided the formula and instructions; I did that analysis myself; and confirmed that the numbers it had given me were 100% made up.

The takeaway here is: it outright said it could do a thing (quantitative statistical analysis) that it can’t do. It’s like a human in some regards: some humans will lie or fudge and make stuff up when you talk to them. It’s helpful to be aware and query whether someone has relevant expertise, what their motivations are, etc. in determining whether or not to use their advice/input on something. The same should go for these AI tools! Knowing this is an LLM and it’s going to pattern match on language helps you pinpoint when it’s going to be prone to making stuff up. Humans are especially likely to make something up that sounds plausible in situations where they’re “expected” to know the answer. LLMs are in that situation all the time: sometimes they actually do know an answer, sometimes they have a good guess, and sometimes they’re just pattern matching and coming up with something that sounds plausible.

In short:

  • LLMs can expand general concepts and write language about what is generally well known based on their training data.
  • Try to ask it a particular fact, though, and it’s probably going to make stuff up, whether that’s about a person or a concept – you need to fact check it elsewhere.
  • It can’t do math!

But what it can do is teach you or show you how to do the math, the coding, or whatever thing you wish it would do for you. And this gets into one of my favorite use cases for it.

3. You can get an LLM to teach you how to use new tools, solve problems, and lower the barrier to entry (and friction) of using new tools, languages, and software.

One of the first things I did was ask ChatGPT to help me write a script. In fact, that’s what I did to expedite the process of finding tweets where I had used an image in order to get a screenshot to embed on my blog, rather than embedding the tweet.

It’s now so easy to generate code for scripts, regardless of which language you have previous experience with. I used to write all of my code as bash scripts, because that’s the format I was most familiar with. But ChatGPT likes to do things as Python scripts, so I asked it simple questions like “how do I call a Python script from the command line” after I asked it to write a script and it generated a Python script. Sure, you could search in a search engine or Stack Overflow for similar questions and get the same information. But one nice thing is that if you have it generate a script and then ask it step by step how to run the script, it gives you step by step instructions in the context of what you were doing. So instead of saying “to run a script, type ‘python script.py’” with placeholder names, it’ll say “to run the script, use ‘python actual-name-of-the-script-it-built-you.py’” and you can click the button to copy that, paste it in, and hit enter. It saves a lot of time over figuring out how to replace placeholder information (which is what you get from a traditional search engine result or Stack Overflow, where people are fond of things like saying FOOBAR and you have no idea if that means something or is meant to be a placeholder). Careful observers will notice that the latest scripts I’ve added to my Open Humans Data Tools repository (which is packed with a bunch of scripts to help work with big datasets!) are now in Python rather than bash, such as when I was adding new scripts for fellow researchers looking to check for updates in big datasets (such as the OpenAPS Data Commons). This is because I used GPT to help with those scripts!

It’s really easy now to go from an idea to a script. If you’re able to describe it logically, you can ask it to write a script, tell you how to run it, and help you debug it. Sometimes you can start by asking it a question, such as “Is it possible to do Y?” and it describes a method. You need to test the method or check for it elsewhere, but things like uploading a list of DOIs to Mendeley to save me hundreds of clicks? I didn’t realize Mendeley had an API or that I could write a script that would do that! ChatGPT helped me write the script, figure out how to create a developer account and app access information for Mendeley, and debug along the way, so within an hour and a half I had a tool that easily saved me 3 hours on the very first project I used it with.

I’m gushing about this because there’s probably a lot of ideas you have that you immediately throw out as being too hard, or you don’t know how to do it. It takes time, but I’m learning to remember to think “I should ask the LLM this” and ask it questions such as:

  • Is it possible to do X?
  • Write a script to do X.
  • I have X data. Pretend I am someone who doesn’t know how to use Y software and explain how I should do Z.

Another thing I’ve done frequently is ask it to help me quickly write a complex formula to use in a spreadsheet. Such as “write a formula that can be used in Google Sheets to take an average of the values in M3:M84 if they are greater than zero”.

It gives me the formula, and also describes it, and in some cases, gives alternative options.

Other things I’ve done with spreadsheets include:

  • Ask it to write a conditional formatting custom formula, then give me instructions for expanding the conditional formatting to apply to a certain cell range.
  • Ask it to check whether a cell is filled with a particular value and then repeat that value in a new cell, in order to create new data series to use in particular charts and graphs I wanted to create from my data.
  • Ask it to help me transform my data so I could generate a box and whisker plot.
  • Ask it for other visuals that might be effective ways to illustrate and visualize the same dataset.
  • Ask it to explain the difference between two similar formulas (e.g. COUNT and COUNTA, or when to use IF versus IFS).

This has been incredibly helpful especially with some of my self-tracked datasets (particularly around thyroid-related symptom data) where I’m still trying to figure out the relationship between thyroid levels, thyroid antibody levels, and symptom data (and things like menstrual cycle timing). I’ve used it for creating the formulas and solutions I’ve talked about in projects such as the one where I created a “today” line that dynamically updates in a chart.

It’s also helped me get past the friction of setting up new tools. Case in point, Jupyter notebooks. I’ve used them in the web browser version before, but often had issues running the notebooks people gave me. I debugged and did all kinds of troubleshooting, but for years had not been able to get them successfully installed locally on (multiple of) my computers. I had finally given up on effectively using notebooks, and definitely given up on running them locally on my machine.

However, I decided to see if I could get ChatGPT to coax me through the install process.

I told it:

“I have this table with data. Pretend I am someone who has never used R before. Tell me, step by step, how to use a Jupyter notebook to generate a box and whisker plot using this data”

(and I pasted my data that I had copied from a spreadsheet, then hit enter).

It outlined exactly what I needed to do: install Jupyter Notebook locally if I hadn’t (and gave me the code to do that), install the R kernel (and told me how), then start a notebook, all the way down to what code to put in the notebook, the transformed data I could copy/paste, and all the code that generated the plot.

However, remember I have never been able to successfully get Jupyter Notebooks running! For years! I was stuck on step 2, installing R. I said:

“Step 2, explain to me how I enter those commands in R? Do I do this in Terminal?”

It said “Oh apologies, no, you run those commands elsewhere, preferably in Rstudio. Here is how to download RStudio and run the commands”.

So, like humans often do, it glossed over a crucial step. But it went back and explained it to me and kept giving more detailed instructions and helping me debug various errors. After 5-6 more troubleshooting steps, it worked! And I was able to open Jupyter Notebooks locally and get it working!

All along, most of the tutorials I had been reading had skipped or glossed over that I needed to do something with R, and where that was. Probably because most people writing the tutorials are already data scientists who have worked with R and RStudio etc, so they didn’t know those dependencies were baked in! Using ChatGPT helped me be able to put in every error message or every place I got stuck, and it coached me through each spot (with no judgment or impatience). It was great!

I was then able to continue with the other steps of getting my data transformed, into the notebook, running the code, and generating my first ever box and whisker plot with R!

A box and whisker plot, illustrated simply to show that I used R and Jupyter finally successfully!
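(For what it’s worth, the same kind of plot is only a few lines in Python with matplotlib, too. Here’s a sketch with made-up numbers, just to show the shape of the code; it’s not the R code from my notebook.)

```python
# A sketch of a box and whisker plot in Python with matplotlib (made-up numbers).
import matplotlib.pyplot as plt

data = [
    [4, 5, 5, 6, 7, 8, 9],    # e.g. values for condition A
    [3, 4, 6, 6, 7, 10, 12],  # e.g. values for condition B
]

plt.boxplot(data)
plt.xticks([1, 2], ["Condition A", "Condition B"])
plt.ylabel("Value")
plt.title("Box and whisker plot")
plt.show()
```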

This is where I really saw the power of these tools, reducing the friction of trying something new (a tool, a piece of software, a new method, a new language, etc.) and helping you troubleshoot patiently step by step.

Does it sometimes skip steps or give you solutions that don’t work? Yes. But it’s still a LOT faster than manually debugging, trying to find someone to help, or spending hours in a search engine or Stack Overflow trying to translate generic code/advice/solutions into something that works on your setup. The beauty of these tools is you can simply paste in the error message and it goes “oh, sorry, try this to solve that error”.

Because the barrier to entry is so low (compared to before), I’ve also asked it to help me with other project ideas where I previously didn’t want to spend the time needed to learn new software and languages and all the nuances of getting from start to end of a project.

Such as, building an iOS app by myself.

I have a ton of projects where I want to temporarily track certain types of data for a short period of time. My fallback is usually a spreadsheet on my phone, but it’s not always easy to quickly enter data in a spreadsheet on your phone, even if you set up a template with a drop down menu like I’ve done in the past (for my DIY macronutrient tool, for example). For example, I want to see if there’s a correlation between my blood pressure at different times and patterns of inflammation in my eyelid and heart rate symptoms (which are symptoms, for me, of thyroid antibodies being out of range, due to Graves’ disease). That means I need to track my symptom data, but also now some blood pressure data. I want to be able to put these datasets together easily, which I can, but the hardest part (so to speak) is finding a way that I am willing to record my blood pressure data. I don’t want to use an existing BP tracking app, and I don’t want a connected BP monitor, and I don’t want to use Apple Health. (Yes, I’m picky!)

I decided to ask ChatGPT to help me accomplish this. I told it:

“You’re an AI programming assistant. Help me write a basic iOS app using Swift UI. The goal is a simple blood pressure tracking app. I want the user interface to default to the data entry screen where there should be three boxes to take the systolic, diastolic blood pressure numbers and also the pulse. There should also be selection boxes to indicate whether the BP was taken sitting up or laying down. Also, enable the selection of a section of symptom check boxes that include “HR feeling” and “Eyes”. Once entered on this screen, the data should save to a google spreadsheet.” 

This is a completely custom, DIY, n of 1 app. I don’t care about it working for anyone else, I simply want to be able to enter my blood pressure, pulse, whether I’m sitting or laying down, and the two specific, unique to me symptoms I’m trying to analyze alongside the BP data.

And it helped me build this! It taught me how to set up a new SwiftUI project in Xcode, gave me code for the user interface, showed me how to set up an API with Google Sheets, helped me write code to save the data to Sheets, and got the app running.

(I am still debugging the connection to Google Sheets, so in the interim I changed my mind and had it create another screen that displays the stored data and lets me email myself a CSV file, because it’s so easy to write scripts or formulas to take data from two sources and append it together!)
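That appending step really is the easy part. For example, a few lines of R can join two exported CSVs on a shared date column. A minimal sketch, with made-up file and column names (“bp.csv”, “symptoms.csv”, “date”):

    # Minimal sketch: combine BP entries and symptom logs exported as CSVs.
    # File and column names are placeholders for illustration.
    bp <- read.csv("bp.csv")
    symptoms <- read.csv("symptoms.csv")
    combined <- merge(bp, symptoms, by = "date", all = TRUE)  # full outer join on date
    write.csv(combined, "combined.csv", row.names = FALSE)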

Is it fancy? No. Am I going to try to distribute it? No. It’s meeting a custom need to enable me to collect specific data super easily over a short period of time in a way that my previous tools did not enable.

Here’s a preview of my custom app running in a simulator phone:

An iPhone simulator running a basic iOS app that takes in BP and pulse, has buttons to indicate whether the BP was taken sitting up or laying down, toggles for key symptoms (in my case, HR feeling and eyes), and a purple save button.

I did this in a few hours, rather than days or weeks. And now the barrier to entry for creating more custom iOS apps is lower, because I’m more comfortable working with Xcode, the file structures, and what it takes to build and deploy an app! Sure, again, I could have learned to do this in other ways, but the learning curve is drastically shorter and most of the ‘getting started’ friction is gone.

That’s the theme across all of these projects:

  • Barriers to entry are lower and it’s easier to get started
  • It’s easier to try things, even if they flop
  • There’s a quicker learning curve on new tools, technologies and languages
  • You get customized support and troubleshooting without having to translate through as many generic placeholders

PS – speaking of iOS apps, based on building this one simple app I had the confidence to try building a really complex, novel app that has never existed in the world before! It’s for people with exocrine pancreatic insufficiency like me who want to log pancreatic enzyme replacement therapy (PERT) dosing and improve their outcomes – check out PERT Pilot and how I built it here.

4. Notes about what these tools cost

I found ChatGPT useful for getting started on writing projects, even though the content wasn’t that great (this was on GPT-3.5). Then GPT-4 came out, along with the paid ChatGPT Plus option for $20/month. I didn’t think it was worth it and resisted. Then I finally decided to try it, because some of the more sophisticated use cases I wanted it for required a longer context window, and the subscription gave you that longer context window in addition to the better model. I paid the first $20 assuming I’d want to cancel it by the end of the month.

Nope.

The $20 has been worth it on every single project that I’ve used it for. I’ve easily saved 5x that on most projects in terms of reducing the energy needed to start a project, whether it was writing or developing code. It has saved 10x that in time cost recouped from debugging new code and tools.

GPT-4 does have caps, though: even with the $20/month, you can only send 25 messages every 3 hours. I try to be cognizant of which projects I can default to GPT-3.5 for (which is unlimited) and save the more sophisticated projects for my GPT-4 quota.

For example, I saw a new tool someone had built called “AutoResearcher”, downloaded it, and tried to use it. I ran into a bug and pasted the error into GPT-3.5, which helped me figure out where the problem was. Then I decided I wanted to add a feature to output results to a text file, and it helped me quickly edit the code to do that; I PR’ed the change back to the project, it was accepted (woohoo), and now everyone using that tool can use that feature. That was simple enough that GPT-3.5 handled it.

But sometimes, when I need a larger context window for a more sophisticated or content-heavy project, I start with GPT-4. When I run into the cap, it tells me when my next window opens (3 hours after I started using it), and I usually have an hour or two to wait. I can open a new chat on GPT-3.5 (without the same context) and try to do things there; switch to another project; or come back at the time it says to continue using GPT-4 with that context/setup.

Why the limit? Because GPT-4 is a more expensive model for the company to run, so even paying more comes with a cap on how much you can use it.

—–

TLDR:

Most important note: LLMs don’t “think” or “know” things the way humans do. They output language they predict you want to see, based on their training and the inputs you give them. It’s like the autocomplete of a sentence in your email, but with more words and on a wider range of topics!

Also, LLMs can’t do math. But they can write code. Including code to do math.

(Some, but not all, LLMs have access to the internet to look up or incorporate facts; make sure you know which LLM you are using and whether it has this feature or not.)

Ways to get started:

    1. The most frequent way I use these AI tools is for getting started on a project, especially those related to writing.
      • Ask it to help you expand on notes; write summaries of existing content; or write sections of content based on instructions you give it
    2.  It can also help analyze data (caution alert!) if you understand the limitations of the LLM.
      • The most effective way to work with data is to have it tell you how to run things in analytical software, whether that’s R, a spreadsheet, or other software for data analysis. Remember: the LLM can’t do math, but it can write code so you can then do the math yourself! (See the small sketch after this list.)
    3.  You can get an LLM to teach you how to use new tools and solve problems, lowering the barrier to entry (and the friction) of picking up new tools, languages, and software.
      • Build a new habit of asking it “Can I do X” or “Is it possible to do Y” and when it says it’s possible, give it a try! Tell it to give you step-by-step instructions. Tell it where you get stuck. Give it your error messages or where you get lost and have it coach you through the process. 
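As a tiny example of point 2: instead of asking the LLM what the average of your numbers is, ask it for code that computes the average, and run that code yourself. A minimal sketch in R, with placeholder file and column names:

    # Minimal sketch: let code do the math instead of the LLM.
    # "mydata.csv" and "value" are placeholder names for illustration.
    df <- read.csv("mydata.csv")
    mean(df$value)
    median(df$value)
    sd(df$value)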

What’s been your favorite way to use an LLM? I’d love to know other ways I should be using them, so please drop a comment with your favorite projects/ways of using them!

Personally, the latest project that I built with an LLM has been PERT Pilot!
