The Cost-Effectiveness of Life for a Child – A Deep Dive into DALY Estimates and the 2025 Funding Gap

Life for a Child is an international non-profit organization that supports children with diabetes by providing insulin, test strips, and essential diabetes care to over 60,000 children in low-income countries who would otherwise have little to no access to treatment.

Without access to supplies and skilled medical care, children with type 1 diabetes (T1D) often die quickly, and with only intermittent access may die within a few years of diagnosis. In some countries, limited amounts and types of older insulins may be provided by the health system. In these ‘luckier’ countries, test strips are still not usually provided. Without regular blood glucose testing, children may survive into early adulthood, yet still experience early mortality due to long-term complications such as blindness, kidney failure, or amputations.

Life for a Child (LFAC) offers a lifeline, extending life expectancy and improving the quality of life for children at a remarkably low cost. Life for a Child also does critical work in improving care delivery infrastructure in each of the countries it supports. They work directly with local healthcare providers to co-develop education materials for young people living with diabetes. Further, they provide a support network for local healthcare providers and some governments. All of this helps improve the long-term sustainability of access to services, medications, and support for people with diabetes.

Scott and I have been supporting Life for a Child as our charity of choice for many years. As we wrote in our analysis here in 2017:

“Life for a Child seems like a fairly effective charity, spending about $200-$300/yr for each person they serve (thanks in part to in-kind donations from pharmaceutical firms). If we assume that providing insulin and other diabetes supplies to one individual (and hopefully keeping them alive) for 40 years is approximately the equivalent of preventing a death from malaria, that would mean that Life for a Child might be about half as effective as AMF, which is quite good compared to the far lower effectiveness of most charities, especially those that work in first world countries.”

We used some of GiveWell’s analyses to assess effective giving, especially to compare options like GiveDirectly with more targeted charities like AMF:

“For example, the Against Malaria Foundation, the recommended charity with the most transparent and straightforward impact on people’s lives, can buy and distribute an insecticide-treated bed net for about $5. Distributing about 600-1000 such nets results in one child living who otherwise would have died, and prevents dozens of cases of malaria. As such, donating 10% of a typical American household’s income to AMF will save the lives of 1-2 African kids *every year*.”

(Note: In addition to donations, I have also supported Life for a Child with my time, both at the US level, serving on the US-based Life for a Child board, and as the US representative on the international steering committee for Life for a Child.)

However, in 2025, Life for a Child faces an immediate and unexpected $300,000 funding shortfall, because a previously committed donor is no longer able to provide this donation. This funding was for test strips; without it, the number of strips provided per child will drop from three to two per day.

Further, Life for a Child has additional funding needs to continue expanding to support more children who are otherwise unsupported and going without critical supplies. (The room for funding is several orders of magnitude above this year’s funding gap.)

To assess how we (in a general sense, meaning all of us) might fill this funding gap, and to understand whether this is still a cost-effective way to support people with diabetes, we wanted to revisit our analysis of how cost-effective Life for a Child is.

For background, I asked Graham Ogle, head of LFAC, for some numbers. These include:

  • Life for a Child currently supports 60,000 children (as of 2025)
  • The expansion plan sets a goal of supporting 100,000 or more children by 2030
  • The estimated cost per child is about $150 USD per year (slightly less than what Scott and I had estimated in 2017), or $160 USD if you incorporate indirect costs.

We used these numbers below to estimate the cost-effectiveness of Life for a Child’s interventions.

Estimating Life For A Child’s Cost per Disability-Adjusted Life Year (DALY)

The Disability-Adjusted Life Year (DALY) is the most commonly used metric in global health to capture both the years of life lost (YLL) due to premature death and the years lived with disability (YLD) due to a health condition, such as type 1 diabetes.

The goal of Life for a Child’s work is to reduce both of these by providing insulin and glucose monitoring as well as improved care necessary for improved health outcomes.

  1. Life for a Child support reduces Years of Life Lost (YLL) 

To estimate YLL reduction, we calculate the difference between the expected age at death for a child with T1D who receives no care versus a child receiving LFAC support:

  • Without Life for a Child:
    • In the worst-case scenario, children with T1D may die within 1-2 years due to lack of insulin, meaning an early death by age 10 instead of the typical life expectancy of 60 years in some of these countries. This results in 50 YLLs (60 – 10 = 50).
    • In countries where insulin is available but costly and/or glucose monitoring is not affordable and readily available, children may survive into their late 20s or 30s, but still experience significant complications, reducing life expectancy. In this scenario (minimal access to insulin, glucose monitoring, etc.), we make a rough assumption that children with diabetes may survive into their mid to late 30s, so 25 YLLs is a reasonable estimate (60 – 35 = 25).
  • With Life for a Child:
    • Life for a Child’s program significantly improves both short-term and long-term survival. We assume that children supported by Life for a Child have the potential to live to an average life expectancy of 50-60 years (instead of dying prematurely due to untreated T1D), even when considering that LFAC only supports children into early adulthood (e.g. 25-30 years of age).

If we assume the average life expectancy for children newly diagnosed with T1D increases from 15-35 years to 50-60 years with standard Life for a Child support, that gives a savings of 25-35 YLLs (DALYs) per child, accounting for most of the uncertainty in our lifespan estimates above.

  2. Years Lived with Disability (YLD) Reduction

T1D also causes significant disability when people with T1D don’t have access to insulin, sufficient glucose monitoring, and monitoring for early signs of complications, particularly blindness, kidney failure, and amputations. Each of these conditions brings substantial life impairment.

  • Without Life for a Child:
    • Children with poorly supported T1D face a high likelihood of severe complications as they age. We estimate the disability weight (DW) for this scenario at 0.20, reflecting significant disability as a result of some of those complications.
  • With Life for a Child:
    • Access to insulin, glucose monitoring, and regular healthcare monitoring drastically reduces the risk of complications. We estimate a DW of 0.05, which represents a much lower level of disability, especially in terms of future complications.

With these DWs, two adjustments roughly offset each other: the YLD reduction over the years a child would have lived anyway without support (a 0.20 – 0.05 = 0.15 reduction over 5-30 years, or roughly 1-4 DALYs), and the small discount on the YLL benefit because the added years of life are lived at a DW of 0.05 rather than in full health (5% * 25-35 years = 1-2 DALYs). The net gain of 1-2 DALYs due to YLD reduction is smaller than the uncertainty range on the YLL benefit, so it doesn’t change the end result much.

So for purposes of cost-effectiveness calculations, we’ll ignore YLD in the rest of this post and continue using the 25-35 DALYs per child figure.
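
As a sanity check, here is the arithmetic above as a small code sketch, using the life-expectancy and disability-weight assumptions we made (these are our rough assumptions, not measured values):

```python
# Rough sketch of the DALY arithmetic above (our assumptions, not measured values):
# life expectancy 15-35 years without support vs 50-60 years with support,
# disability weight (DW) 0.20 without support and 0.05 with support, diagnosis around age 10.

def daly_gain(age_at_death_without, age_at_death_with, age_at_diagnosis=10,
              dw_without=0.20, dw_with=0.05):
    yll_averted = age_at_death_with - age_at_death_without          # extra years of life
    yld_averted = (dw_without - dw_with) * (age_at_death_without - age_at_diagnosis)
    yld_added = dw_with * yll_averted                               # added years aren't lived in full health
    return yll_averted + yld_averted - yld_added

print(daly_gain(35, 60))  # ~27.5 DALYs, conservative end of the ranges
print(daly_gain(15, 50))  # ~34.0 DALYs, optimistic end of the ranges
```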

  3. Total DALYs and Cost per DALY

For this section, we’ll assume the total impact of Life for a Child’s intervention per child from the calculations above is 25-35 DALYs.

Life for a Child’s cost per child in 2025 is approximately $150 per year (or $160 including indirect costs). If we estimate that most children receive support for about 10-15 years, the total cost per child is roughly $1,500–$2,250 over that period (or $1,600-$2,400 total with indirect costs).

Thus, the cost per DALY for Life for a Child can be estimated as:

(Cost per child) / (DALYs saved per child)

Here are a variety of estimates for varying cost levels, using the lower bound of 25 DALYs saved per child supported (a short code sketch of this arithmetic follows the list):

  • With $1,500 per child over a lifetime ($150/year for 10 years) and 25 DALYs saved, that works out to $60 per DALY ($64 with indirect costs).
  • With $2,250 per child over a lifetime ($150/year for 15 years) and 25 DALYs saved, that works out to $90 per DALY ($96 with indirect costs).
  • Assuming costs rise over time to $175/year for 15 years, that’s $2,625 per child over a lifetime, or $105 per DALY with 25 DALYs saved.
  • Assuming costs rise over time to $175/year for 20 years, that’s $3,500 per child over a lifetime, or $140 per DALY with 25 DALYs saved.
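
To make the arithmetic explicit, here is a minimal code sketch of these scenarios (the yearly costs, years of support, and 25 DALYs saved are the assumptions listed above):

```python
# Minimal sketch of the cost-per-DALY arithmetic above.
# Assumptions from this post: $150-$175/year, 10-20 years of support, 25 DALYs saved per child.

def cost_per_daly(cost_per_year, years_of_support, dalys_saved=25):
    lifetime_cost = cost_per_year * years_of_support
    return lifetime_cost / dalys_saved

scenarios = [
    (150, 10),  # $1,500 lifetime -> $60/DALY
    (150, 15),  # $2,250 lifetime -> $90/DALY
    (175, 15),  # $2,625 lifetime -> $105/DALY
    (175, 20),  # $3,500 lifetime -> $140/DALY
]

for cost_per_year, years in scenarios:
    print(f"${cost_per_year}/yr x {years} yrs -> ${cost_per_daly(cost_per_year, years):.0f} per DALY")
```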

This places Life for a Child’s cost per DALY at roughly $60–$90 under conservative estimates, a remarkably cost-effective intervention. Even the higher estimates of $105-$140, which assume rising costs and more years of support, compare favorably to the most effective global health programs, including those recommended by GiveWell.

How did we come to this conclusion?

  • GiveWell estimates that cash transfers through GiveDirectly come to roughly $1,000 per DALY-equivalent, based on welfare gains rather than direct health outcomes (so this is an apples-to-oranges comparison), but even so, we can estimate Life for a Child is more cost-effective than cash giving by at least a single-digit (e.g. 1-9x) factor.
  • We know GiveWell’s top charities are around $50-$100/DALY. Given we were estimating $60-$140 across a wide range of scenarios, we can see that Life for a Child aligns with some of GiveWell’s top charities in terms of cost per DALY and thus “compares favorably” in our analysis.

Why You Should Donate to Life for a Child

The point of this post was for Scott and me to reassess the statement we have been making since ~2017: that Life for a Child is a remarkably cost-effective charity overall, and likely one of the most cost-effective charities supporting people living with diabetes around the world who otherwise won’t have access (or regular access) to insulin and blood glucose testing.

Life for a Child has a DALY cost in the range of $60-$140 (reflecting current versus future cost increases), depending on which input variables you use, which makes it one of the best uses of global health funding available today.

Because of this reassessment, we also hope if you’ve read this far that you, too, will consider making a life-saving and life-changing donation for people with diabetes by donating to Life for a Child.

If you’re feeling overwhelmed with world events and want to make a tangible difference in people’s lives in a measurable way, consider donating to Life for a Child.

If you want to support people with diabetes in the most cost-effective way, so that your donation dollars make the biggest impact? Donate to Life for a Child.

Your donation saves – and changes – lives.

Life for a Child is a cost-effective charity supporting people with diabetes that needs your help. (Thank you.)

PS – feel free to reach out to me (Dana@OpenAPS.org) and/or Scott (Scott@OpenAPS.org) if you want to chat through any of the estimates or numbers in more detail and how we consider donations.

Beware “too much” and “too little” advice in Exocrine Pancreatic Insufficiency (EPI / PEI)

If I had a nickel for every time I saw conflicting advice for people with EPI, I could buy (more) pancreatic enzyme replacement therapy. (PERT is expensive, so that tells you how much conflicting advice is out there.)

One rule of thumb I find handy is to pause any time I see the words “too much” or “too little”.

This comes up in a lot of categories. For example, someone might say not to eat “too much” fat or fiber, and that a low-fat diet is better. The first part of that sentence should warrant a pause (red flag words – “too much”), and that should make you skeptical of any advice that follows.

Specifically on the “low fat diet” – this is not true. A lot of outdated advice about EPI comes from historical research that no longer reflects modern treatment. In the past, low-fat diets were recommended because early enzyme formulations were not encapsulated or as effective, so people in the 1990s struggled to digest fat because the enzymes weren’t correctly working at the right time in their body. The “bandaid” fix was to eat less fat. Now that enzyme formulations are significantly improved (starting in the early 2000s, enzymes are now encapsulated so they get to the right place in our digestive system at the right time to work on the food we eat or drink), medical experts no longer recommend low-fat diets. Instead, people should eat a regular diet and adjust their enzyme intake accordingly to match that food intake, rather than the other way around (source: see section 4.6).

Think replacement of enzymes, rather than restriction of dietary intake: the “R” in PERT literally stands for replacement!

If you’re reading advice as a person with EPI (PEI), you need to have math in the back of your mind. (Sorry if you don’t like math, I’ll talk about some tools to help).

Any time people use words to indicate amounts of things, whether that’s amounts of enzymes or amounts of food (fat, protein, carbs, fiber), you need to think of specific numbers to go with these words.

And, you need to remember that everyone’s body is different, which means your body is different.

Turning words into math for pill count and enzymes for EPI

Enzyme intake should not be compared without considering multiple factors.

The first reason is because enzyme pills are not all the same size. Some prescription pancreatic enzyme replacement therapy (PERT) pills can be as small as 3,000 units of lipase or as large as 60,000 units of lipase. (They also contain thousands or hundreds of thousands of units of protease and amylase, to support protein and carbohydrate digestion. For this example I’ll stick to lipase, for fat digestion.)

If a person takes two enzyme pills per meal, that number alone tells us nothing. Or rather, it tells us only half of the equation!

The size of the pills matters. Someone taking two 10,000-lipase pills consumes 20,000 units per meal, while another person taking two 40,000-lipase pills is consuming 80,000 units per meal.

That is a big difference! Comparing the two total amounts of enzymes (80,000 units of lipase or 20,000 units of lipase) is a 4x difference.

And I hate to tell you this, but that’s still not the entire equation to consider. Hold on to your hat for a little more math, because…

The amount of fat consumed also matters.

Remember, enzymes are used to digest food. It’s not a magic pill where one (or two) pills will perfectly cover all food. It’s similar to insulin, where different people can need different amounts of insulin for the same amount of carbohydrates. Enzymes work the same way, where different people need different amounts of enzymes for the same amount of fat, protein, or carbohydrates.

And, people consume different amounts and types of food! Breakfast is a good example. Some people will eat cereal with milk – often that’s more carbs, a little bit of protein, and some fat. Some people will eat eggs and bacon – that’s very little carbs, a good amount of protein, and a larger amount of fat.

Let’s say you eat cereal with milk one day, and eggs and bacon the next day. Taking “two pills” might work for your cereal and milk, but not your eggs and bacon, if you’re the person with 10,000 units of lipase in your pill. However, taking “two pills” of 40,000 units of lipase might work for both meals. Or not: you may need more for the meal with higher amounts of fat and protein.

If someone eats the same quantity of fat and protein and carbs across all 3 meals, every day, they may be able to always consume the same number of pills. But for most of us, our food choices vary, and the protein and fat varies meal to meal, so it’s common to need different amounts at different meals. (If you want more details on how to figure out how much you need, given what you eat, check out this blog post with example meals and a lot more detail.)
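
To make the math concrete, here’s a small sketch (made-up pill sizes and meals, not dosing advice) of turning “two pills” into actual units and relating that to the fat in a meal:

```python
# Minimal sketch with made-up numbers (not medical advice): turning "two pills"
# into actual lipase units and comparing that to the fat in a meal.

def units_per_meal(num_pills, pill_size_lipase):
    # Pill sizes range from roughly 3,000 to 60,000 units of lipase per pill
    return num_pills * pill_size_lipase

def lipase_per_gram_fat(num_pills, grams_fat, pill_size_lipase):
    # A personal "ratio" you can estimate over time by tracking meals and outcomes
    return units_per_meal(num_pills, pill_size_lipase) / grams_fat

# Two 10,000-unit pills vs two 40,000-unit pills for a hypothetical 30 g fat meal:
print(lipase_per_gram_fat(2, 30, 10_000))   # ~667 units of lipase per gram of fat
print(lipase_per_gram_fat(2, 30, 40_000))   # ~2,667 units of lipase per gram of fat
```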

You need to understand your baseline before making any comparisons

Everyone’s body is different, and enzyme needs vary widely depending on the amount of fat and protein consumed. What is “too much” for one person might be exactly the right amount for another, even when comparing the same exact food quantity. This variability makes it essential to understand your own baseline rather than following generic guidance. The key is finding what works for your specific needs rather than focusing on an arbitrary notion of “too much”, because “too much” needs to be compared to specific numbers that can be compared as apples to apples.

A useful analogy is heart rate. Some people have naturally higher or lower resting heart rates. If someone (who’s not a doctor giving you direct medical advice) tells you that your heart rate is too high, it’s like – what can you do about it? It’s not like you can grow your heart two sizes (like the Grinch). While fitness and activity can influence heart rate slightly, individual baseline differences remain significant. If you find yourself saying “duh, of course I’m not going to try to compare my heart rate to my spouse’s, our bodies are different”, that’s a GREAT frame of mind that you should apply to EPI, too.

(Another example is respiratory rate, where it varies person to person. If someone is having trouble breathing, the solution is not as simple as “breathe more” or “breathe less”—it depends on their normal range and underlying causes, and it takes understanding their normal range to figure out if they are breathing more or less than their normal, because their normal is what matters.)

If you have EPI, fiber (and anything else) also needs numbers

Fiber also follows this pattern. Some people caution against consuming “too much” fiber, but a baseline level is essential. “Too little” fiber can mimic EPI symptoms, leading to soft, messy stools. Finding the right amount of fiber is just as crucial as balancing fat and protein intake.

If you find yourself hearing or reading comments that you likely consume “too much” fiber – red flag check for “too much”! The same applies if you hear or see advice about a ‘low fiber’ diet. Low, meaning what number?

You should get an estimate for how much you are consuming and contextualize it against the typical recommendations overall, evaluate whether fiber is contributing to your issues, and only then consider experimenting with it.

(For what it’s worth, you may need to adjust enzyme intake for fat/protein first before you play around with fiber, if you have EPI. Many people are given PERT prescriptions below standard guidelines, so it is common to need to increase dosing.)

For example, if you’re consuming 5 grams of fiber in a day, and the typical guidance is often for 25-30 grams (source; this varies by age, gender, and country, so it’s a ballpark)... you are consuming less than the average person and the average recommendation.

In contrast, if you’re consuming 50+ grams of fiber? You’re consuming more than the average person/recommendation.

Understanding where you are (around the recommendation, quite a bit below, or above?) will then help you determine whether advice for ‘more’ or ‘less’ is actually appropriate in your case. Most people have no idea what you’re eating – and honestly, you may not either – so any advice for “too much”, “too little”, or “more” or “less” is completely unhelpful without these numbers in mind.

You don’t have to tell people these numbers, but you can and should know them if you want to consider evaluating whether YOU think you need more/less compared to your previous baseline.

How do you get numbers for fiber, fat, protein, and carbohydrates?

Instead of following vague “more” or “less” advice, first track your intake and outcomes.

If you don’t have a good way to estimate the amount of fat, protein, carbohydrates, and/or fiber, here’s a tool you can use – this is a Custom GPT that is designed to give you back estimates of fat, protein, carbohydrates, and fiber.

You can give it a meal, or a day’s worth of meals, or several days, and have it generate estimates for you. (It’s not perfect but it’s probably better than guessing, if you’re not familiar with estimating these macronutrients).

If you don’t like or can’t access ChatGPT (it works with free accounts, if you log in), you can also take this prompt, adjust it how you like, and give it to any free LLM tool you like (Gemini, Claude, etc.):

You are a dietitian with expertise in estimating the grams of fat, protein, carbohydrate, and fiber based on a plain language meal description. For every meal description given by the user, reply with structured text for grams of fat, protein, carbohydrates, and fiber. Your response should be four numbers and their labels. Reply only with this structure: “Fat: X; Protein: Y; Carbohydrates: Z; Fiber: A”. (Replace the X, Y, Z, and A with your estimates for these macronutrients.) If there is a decimal, round to the nearest whole number. If there are no grams of any of the macronutrients, mark them as 0 rather than nil. If the result is 0 for all four variables, please reply to the user: “I am unable to parse this meal description. Please try again.”

If you are asked by the user to then summarize a day’s worth of meals that you have estimated, you are able to do so. (Or a week’s worth). Perform the basic sum calculation needed to do this addition of each macronutrient for the time period requested, based on the estimates you provided for individual meals.
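
If you’d rather script this than use a chat interface, here’s a rough sketch of the same idea using the OpenAI Python client (any LLM API that accepts a system prompt works similarly; the model name here is just an example):

```python
# Rough sketch: wrap the macronutrient-estimation prompt above in an API call.
# Assumes the `openai` Python package is installed and OPENAI_API_KEY is set.
from openai import OpenAI

SYSTEM_PROMPT = (
    "You are a dietitian with expertise in estimating the grams of fat, protein, "
    "carbohydrate, and fiber based on a plain language meal description. "
    "Reply only with this structure: 'Fat: X; Protein: Y; Carbohydrates: Z; Fiber: A'."
)

client = OpenAI()  # reads the API key from the environment

def estimate_macros(meal_description: str, model: str = "gpt-4o-mini") -> str:
    response = client.chat.completions.create(
        model=model,
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": meal_description},
        ],
    )
    return response.choices[0].message.content

print(estimate_macros("spaghetti with meatballs and a side salad"))
```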

Another option is using an app like PERT Pilot. PERT Pilot is a free iOS app for people with EPI that requires no login or user account information. You can put in plain language descriptions of meals (“macaroni and cheese” or “spaghetti with meatballs”), get back estimates of fat, protein, and carbohydrates, and record how many enzymes you took so you can track your outcomes over time. (Android users – email me at Dana+PERTPilot@OpenAPS.org if you’d like to test the forthcoming Android version!) Note that PERT Pilot doesn’t estimate fiber, but if you want to start with fat/protein estimates, PERT Pilot is another way to get started with seeing what you typically consume. (For people without EPI, you can use Carb Pilot, another free iOS app that similarly gives estimates of macronutrients.)

Beware advice of "more" or "less" that is vague and non-numeric (not a number) unless you know your baseline numbers in exocrine pancreatic insufficiency. A blog by Dana M. Lewis from DIYPS.orgTL;DR: Instead of arbitrarily lowering or increasing fat or fiber in the diet, measure and estimate what you are consuming first. If you have EPI, assess fat digestion first by adjusting enzyme intake to minimize symptoms. (And then protein, especially for low fat / high protein meals, such as chicken or fish.) Only then consider fiber intake—some people may actually need more fiber rather than less than what they were consuming before if they experience mushy stools. Remember the importance of putting “more” or “less” into context with your own baseline numbers. Estimating current consumption is crucial because an already low-fiber diet may be contributing to the problem, and reducing fiber further could make things worse. Understanding your own baseline is the key.

Facing Uncertainty with AI and Rethinking What If You Could?

If you’re feeling overwhelmed by the rapid development of AI, you’re not alone. It’s moving fast, and for many people the uncertainty of the future (for any number of reasons) can feel scary. One reaction is to ignore it, dismiss it, or assume you don’t need it. Some people try it once, usually on something they’re already good at, and when AI doesn’t perform better than they do, they conclude it’s useless or overhyped, and possibly feel justified in going back to ignoring or rejecting it.

But that approach misses the point.

AI isn’t about replacing what you already do well. It’s about augmenting what you struggle with, unlocking new possibilities, and challenging yourself to think differently, all in the pursuit of enabling YOU to do more than you could yesterday.

One of the ways to navigate the uncertainty around AI is to shift your mindset. Instead of thinking, “That’s hard, and I can’t do that,” ask yourself, “What if I could do that? How could I do that?”

Sometimes I get a head start by asking an LLM just that: “How would I do X? Lay out a plan or outline an approach to doing X.” I don’t always immediately jump to doing that thing, but I think about it, and probably 2 out of 3 times, laying out a possible approach means I do come back to that project or task and attempt it at a later time.

Even if you ultimately decide not to pursue something because of time constraints or competing priorities, at least you’ve explored it and possibly learned something from that initial exploration. But I want to point out that there’s a big difference between legitimately not being able to do something and choosing not to. Increasingly, the latter is what happens: you may choose not to tackle a task or take on a project, and that is very different from not being able to do so.

Finding the Right Use Cases for AI

Instead of testing AI on things you’re already an expert in, try applying it to areas where you’re blocked, stuck, overwhelmed, or burdened by the task. Think about a skill you’ve always wanted to learn but assumed was out of reach. Maybe you’ve never coded before, but you’re curious about writing a small script to automate a task. Maybe you’ve wanted to design a 3D-printed tool to solve a real-world problem but didn’t know where to start. AI can be a guide, an assistant, and sometimes even a collaborator in making these things possible.

For example, I once thought data science was beyond my skill set. For the longest time, I couldn’t even get Jupyter Notebooks to run! Even with expert help, I was clearly doing something silly and wrong, but it took a long time, and finally LLM assistance, to work step by step (and deeper into sub-steps) to find the step that was missing from the documentation and instructions – and I finally figured it out! From there, I learned enough to do a lot of the data science work on my own projects. You can see that represented in several recent projects. The same thing happened with iOS development, which I initially felt imposter syndrome about. And this year, after FOUR failed attempts (3 of which used LLMs), I finally got a working app for Android!

Each time, the challenge felt enormous. But by shifting from “I can’t” to “What if I could?” I found ways to break through. And each time AI became a more capable assistant, I revisited previous roadblocks and made even more progress, even when it was a project (like an Android version of PERT Pilot) I had previously failed at, and in that case, multiple times.

Revisiting Past Challenges

AI is evolving rapidly, and what wasn’t possible yesterday might be feasible today. Literally. (A great example is that I wrote a blog post about how medical literature seems like a game of telephone and was opining on AI-assisted tools to aid with tracking changes to the literature over time. The day I put that blog post in the queue, OpenAI announced their Deep Research tool, which I think can in part address some of the challenges I talked about currently being unsolved!)

One thing I have started to do that I recommend is keeping track of problems or projects that feel out of reach. Write them down. Revisit them every few months, and explore them with the latest LLM and AI tools. You might be surprised at how much has changed, and what is now possible.

Moving Forward with AI

You don’t even have to use AI for everything. (I don’t.) But if you’re not yet in the habit of using AI for certain types of tasks, I challenge you to find a way to use an LLM for *something* that you are working on.

A good place to insert this into your work/projects is to start noting when you find yourself saying or thinking “this is the way we/I do/did things”.

When you catch yourself thinking this, stop and ask:

  • Does it have to be done that way? Why do we think so?
  • What are we trying to achieve with this task/project?
  • Are there other ways we can achieve this?
  • If not, can we automate some or all of the steps in this process? Can some steps be eliminated?

You can ask yourself these questions, but you can also ask these questions to an LLM. And play around with what and how you ask (the prompt, or what you ask it, makes a difference).

One example for me has been working on a systematic review and meta-analysis of a medical topic. I need to extract details about criteria I am analyzing across hundreds of papers. Oooph, big task, very slow. The LLM tools aren’t yet good at extracting non-obvious data from research papers, especially PDFs where the data I am interested in may be tucked into tables, figure captions, or the images themselves rather than explicitly stated in the results section. So for now, that still has to be done manually, but it’s on my list to revisit periodically with new LLMs.

However, I recognized that the way I was writing down (well, typing into a spreadsheet) the extracted data was burdensome and slow, and I wondered if I could make a quick simple HTML page to guide me through the extraction, with an output of the data in CSV that I could open in spreadsheet form when I’m ready to analyze. The goal is easier input of the data with the same output format (CSV for a spreadsheet). And so I used an LLM to help me quickly build that HTML page, set up a local server, and run it so I can use it for data extraction. This is one of those projects where I felt intimidated – I never quite understood spinning up servers and in fact didn’t quite understand fundamentally that for free I can “run” “a server” locally on my computer in order to do what I wanted to do. So in the process of working on a task I really understood (make an HTML page to capture data input), I was able to learn about spinning up and using local servers! Success, in terms of completing the task and learning something I can take forward into future projects.
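
As a rough illustration of the pattern (not the exact page I built), a locally served form whose submissions append rows to a CSV can be done with just Python’s built-in http.server; the field names below are hypothetical:

```python
# Rough sketch of the pattern described above: a local web form for data extraction
# that appends each submission as a row to a CSV file. Field names are made up.
import csv
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import parse_qs

CSV_FILE = "extraction.csv"
FIELDS = ["paper_id", "sample_size", "criteria_met", "notes"]  # hypothetical fields

FORM_HTML = """<html><body>
<h3>Data extraction</h3>
<form method="POST">
  Paper ID: <input name="paper_id"><br>
  Sample size: <input name="sample_size"><br>
  Criteria met: <input name="criteria_met"><br>
  Notes: <input name="notes"><br>
  <button type="submit">Save row</button>
</form>
</body></html>"""

class ExtractionHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Serve the blank extraction form
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.end_headers()
        self.wfile.write(FORM_HTML.encode())

    def do_POST(self):
        # Parse the submitted form fields and append them as one CSV row
        length = int(self.headers.get("Content-Length", 0))
        data = parse_qs(self.rfile.read(length).decode())
        row = [data.get(field, [""])[0] for field in FIELDS]
        with open(CSV_FILE, "a", newline="") as f:
            csv.writer(f).writerow(row)
        self.do_GET()  # re-serve a blank form after saving

if __name__ == "__main__":
    # Visit http://localhost:8000; each submission appends a row to extraction.csv
    HTTPServer(("localhost", 8000), ExtractionHandler).serve_forever()
```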

Another smaller recent example is when I wanted to put together a simple case report for my doctor, summarizing symptoms etc, and then also adding in PDF pages of studies I was referencing so she had access to them. I knew from the past that I could copy and paste the thumbnails from Preview into the PDF, but it got challenging to be pasting 15+ pages in as thumbnails and they were inserting and breaking up previous sections, so the order of the pages was wrong and hard to fix. I decided to ask my LLM of choice if it was possible to automate compiling 4 PDF documents via a command line script, and it said yes. It told me what library to install (and I checked this is an existing tool and not a made up or malicious one first), and what command to run. I ran it, it appended the PDFs together into one file the way I wanted, and it didn’t require the tedious hand commands to copy and paste everything together and rearrange when the order was messed up.
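
For illustration, here’s one way to do this kind of merge with the pypdf Python package (just an example of the sort of short script an LLM might suggest; other PDF libraries work similarly):

```python
# One way to merge several PDFs into a single file from the command line.
# pypdf is used here as an example; install it with: pip install pypdf
import sys
from pypdf import PdfWriter

def merge_pdfs(output_path, input_paths):
    writer = PdfWriter()
    for path in input_paths:
        writer.append(path)          # appends all pages of each input, in order
    with open(output_path, "wb") as f:
        writer.write(f)

if __name__ == "__main__":
    # Usage: python merge_pdfs.py combined.pdf report.pdf study1.pdf study2.pdf
    merge_pdfs(sys.argv[1], sys.argv[2:])
```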

The more I practice, the easier it gets to switch into the habit of asking “would it be possible to do X?” or “Is there a way to do Y more simply/more efficiently/automatically?”. That then leads to options I can decide to implement, or not. But it feels a lot better to have those on hand, even if I choose not to take a project on, rather than to feel overwhelmed, out of control, and uncertain about what AI can do (or not).

If you can shift your mindset from fear and avoidance to curiosity and experimentation, you might discover new skills, solve problems you once thought were impossible, and open up entirely new opportunities.

So, the next time you think, “That’s too hard, I can’t do that,” stop and ask:

“What if I could?”

If you appreciated this post, you might like some of my other posts about AI if you haven’t read them.

How Medical Research Literature Evolves Over Time Like A Game of Telephone

Have you ever searched for or through medical research on a specific topic, only to find different studies saying seemingly contradictory things? Or you find something that doesn’t seem to make sense?

You may experience this, whether you’re a doctor, a researcher, or a patient.

I have found it helpful to consider that medical literature is like a game of telephone, where a fact or statement is passed from one research paper to another, which means that sometimes it is slowly (or quickly!) changing along the way. Sometimes this means an error has been introduced, or replicated.

A Game of Telephone in Research Citations

Imagine a research study from 2016 that makes a statement based on the best available data at the time. Over the next few years, other papers cite that original study, repeating the statement. Some authors might slightly rephrase it, adding their own interpretations. By 2019, newer research has emerged that contradicts the original statement. Some researchers start citing this new, corrected information, while others continue citing the outdated statement because they either haven’t updated their knowledge or are relying on older sources, especially because they see other papers pointing to these older sources and find it easiest to point to them, too. It’s not always made clear in the literature and field of study that the prior statement is now considered incorrect. (And if it is incorrect, it doesn’t become known as incorrect until later – at the time it’s made, it’s considered to be correct.)

By 2022, both the correct and incorrect statements appear in the literature. Eventually, a majority of researchers transition to citing the updated, accurate information—but the outdated statement never fully disappears. A handful of papers continue to reference the original incorrect fact, whether due to oversight, habit (of using older sources and repeating citations for simple statements), or a reluctance to accept new findings.

The gif below illustrates this concept, showing how incorrect and correct statements coexist over time. It also highlights how researchers may rely on citations from previous papers without always checking whether the original information was correct in the first place.

Animated gif illustrating how citations branch off and even if new statements are introduced to the literature, the previous statement can continue to appear over time.

This is not necessarily a criticism of researchers/authors of research publications (of which I am one!), but an acknowledgement of the situation that results from these processes. Once you’ve written a paper and cited a basic fact (let’s imagine you wrote this paper in 2017 and cited the 2016 paper and fact), it’s easy to keep using this citation over time. Imagine it’s 2023 and you’re writing a paper on the same topic area: it’s very easy to drop in the same citation from 2016 for the same basic fact, and you may not think to update the citation or check whether the fact is still the fact.

Why This Matters

Over time, a once-accepted “fact” may be corrected or revised, but older statements can still linger in the literature, continuing to influence new research. Understanding how this process works can help you critically evaluate medical research and recognize when a widely accepted statement might actually be outdated—or even incorrect.

If you’re looking into a medical topic, it’s important to pay attention not just to what different studies say, but also when they were published and how their key claims have evolved over time. If you notice a shift in the literature—where newer papers cite a different fact than older ones—it may indicate that scientific understanding has changed.

One useful strategy is to notice how frequently a particular statement appears in the literature over time.

Whenever I have a new diagnosis or a new topic to research on one of my chronic diseases, I find myself doing this.

I go and read a lot of abstracts and research papers about the topic; I generally observe patterns in terms of key things that everyone says, which establishes what the generally understood “facts” are, and also notice what is missing. (Usually, the question I’m asking is not addressed in the literature! But that’s another topic…)

I pay attention to the dates, observing when something is said in papers in the 1990s and whether it’s still being repeated in the 2020s era papers, or if/how it’s changed. In my head, I’m updating “this is what is generally known” and “this doesn’t seem to be answered in the literature (yet)” and “this is something that has changed over time” lists.

Re-Evaluating the Original ‘Fact’

In some cases, it turns out the original statement was never correct to begin with. This can happen when early research is based on small sample sizes, incomplete data, or incorrect assumptions. Sometimes the statement was correct in context, but it was immediately taken out of context, and that out-of-context use was never corrected.

For example, a widely cited statement in medical literature once claimed that chronic pancreatitis is the most common cause of exocrine pancreatic insufficiency (EPI). This claim was repeated across numerous papers, reinforcing it as accepted knowledge. However, a closer examination of population data shows that while chronic pancreatitis is a known co-condition of EPI, it is far less common than diabetes—a condition that affects a much larger population and is also strongly associated with EPI. Despite this, many papers still repeat the outdated claim without checking the original data behind it.

(For a deeper dive into this example, you can read my previous post here. But TL;DR: even 80% of .03% is a smaller number than 10% of 10% of the overall population…so it is not plausible that CP is the biggest cause of EPI/PEI.)

Stay Curious

This realization can be really frustrating, because if you’re trying to do primary research to help you understand a topic or question, how do you know what the truth is? This is peer-reviewed research, but what this shows us is that the process of peer-review and publishing in a journal is not infallible. There can be errors. The process for updating errors can be messy, and it can be hard to clean up the literature over time. This makes it hard for us humans – whether in the role of patient or researcher or clinician – to sort things out.

But beyond a ‘woe is me, this is hard’ moment of frustration, I do find that this perspective of literature as a process of telephone makes me a better reader of the literature and forces me to think more critically about what I’m reading, and take papers in context of the broader landscape of literature and evolving knowledge base. It helps remove the strength I would otherwise be prone to assigning any one paper (and any one ‘fact’ or finding from a single paper), and encourages me to calibrate this against the broader knowledge base and the timeline of this knowledge base.

That can also be hard to deal with personally as a researcher/author, especially as someone who tends to work in the gaps, establishing new findings and facts and introducing them to the literature. Some of my work also involves correcting errors in the literature, which I find, from my outsider/patient perspective, to be obvious, because I’ve been able to use fresh eyes and evaluate things at a systematic-review, high-level view without being as much in the weeds. That means my work to disseminate new or corrected knowledge is even more challenging. It’s also challenging personally as a patient, when I “just” want answers and for everything to already be studied, vetted, published, and widely known by everyone (including me and my clinician team).

But it’s usually not, and that’s just something I – and we – have to deal with. I’m curious as to whether we will eventually develop tools with AI to address this. Perhaps a mini systematic review tool that scrapes the literature and includes an analysis of how things have changed over time. This is done in systematic review or narrative reviews of the literature, when you read those types of papers, but those papers are based on researcher interests (and time and funding), and I often have so many questions that don’t have systematic reviews/narrative reviews covering them. Some I turn into papers myself (such as my paper on systematically reviewing the dosing guidelines and research on pancreatic enzyme replacement therapy for people with exocrine pancreatic insufficiency, known as EPI or PEI, or a systematic review on the prevalence of EPI in the general population or a systematic review on the prevalence of EPI in people with diabetes (Type 1 and Type 2)), but sometimes it’s just a personal question and it would be great to have a tool to help facilitate the process of seeing how information has changed over time. Maybe someone will eventually build that tool, or it’ll go on my list of things I might want to build, and I’ll build it myself like I have done with other types of research tools in the past, both without and with AI assistance. We’ll see!

TL;DR: be cognizant of the fact that medical literature changes over time, and keep this in mind when reading a single paper. Sometimes there are competing “facts” or beliefs or statements in the literature, and sometimes you can identify how it evolves over time, so that you can better assess the accuracy of research findings and avoid relying on outdated or incorrect information.

Whether you’re a researcher, a clinician, or a patient doing research for yourself, this awareness can help you better navigate the scientific literature.

A screenshot from the animated gif showing how citation strings happen in the literature, branching off over time but often still resulting in a repetition of a fact that is later considered to be incorrect, thus both the correct and incorrect fact occur in the literature at the same time.

The prompt matters when using Large Language Models (LLMs) and AI in healthcare

I see more and more research papers coming out these days about different uses of large language models (LLMs, a type of AI) in healthcare. There are papers evaluating it for supporting clinicians in decision-making, aiding in note-taking and improving clinical documentation, and enhancing patient education. But I see a wide-sweeping trend in the titles and conclusions of these papers, exacerbated by media headlines, making sweeping claims about the performance of one model versus another. I challenge everyone to pause and consider a critical fact that is less obvious: the prompt matters just as much as the model.

As an example of this, I will link to a recent pre-print of a research article I worked on with Liz Salmi (pre-print here).

Liz nerd-sniped me about an idea of a study to have a patient and a neuro-oncologist evaluate LLM responses related to patient-generated queries about a chart note (or visit note or open note or clinical note, whatever you want to call it). I say nerd-sniped because I got very interested in designing the methods of the study, including making sure we used the APIs to model these ‘chat’ sessions so that the prompts were not influenced by custom instructions, ‘memory’ features within the account or chat sessions, etc. I also wanted to test something I’ve observed anecdotally from personal LLM use across other topics, which is that with 2024-era models the prompt matters a lot for what type of output you get. So that’s the study we designed, and wrote with Jennifer Clarke, Zhiyong Dong, Rudy Fischmann, Emily McIntosh, Chethan Sarabu, and Catherine (Cait) DesRoches, and I encourage you to check out the pre-print and enjoy the methods section, which is critical for understanding the point I’m trying to make here.

In this study, the data showed that when LLM outputs were evaluated for a healthcare task, the results varied significantly depending not just on the model but also on how the task was presented (the prompt). Specifically, persona-based prompts—designed to reflect the perspectives of different end users like clinicians and patients—yielded better results, as independently graded by both an oncologist and a patient.

The Myth of the “Best Model for the Job”

Many research papers conclude with simplified takeaways: Model A is better than Model B for healthcare tasks. While performance benchmarking is important, this approach often oversimplifies reality. Healthcare tasks are rarely monolithic. There’s a difference between summarizing patient education materials, drafting clinical notes, or assisting with complex differential diagnosis tasks.

But even within a single task, the way you frame the prompt makes a profound difference.

Consider these three prompts for the same task:

  • “Explain the treatment options for early-stage breast cancer.”
  • “You’re an oncologist. Explain the treatment options for early-stage breast cancer.”
  • “You’re an oncologist. Explain the treatment options for early-stage breast cancer as you would to a newly diagnosed patient with no medical background.”

The second and third prompts likely result in more accessible and tailored responses. If a study only tests general prompts (e.g. prompt one), it may fail to capture how much more effective an LLM can be with task-specific guidance.
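
If you want to see this effect for yourself, one way is to hold the model constant and vary only the prompt via an API call (which, like our study’s methods, avoids influence from account-level custom instructions or memory). A rough sketch, with the model name as just an example:

```python
# Rough sketch: hold the model constant and vary only the prompt.
# Assumes the `openai` package and an API key; the model name is only an example.
from openai import OpenAI

client = OpenAI()

PROMPTS = [
    "Explain the treatment options for early-stage breast cancer.",
    "You're an oncologist. Explain the treatment options for early-stage breast cancer.",
    ("You're an oncologist. Explain the treatment options for early-stage breast cancer "
     "as you would to a newly diagnosed patient with no medical background."),
]

for prompt in PROMPTS:
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": prompt}],
    )
    print("PROMPT:", prompt)
    print(response.choices[0].message.content[:300], "...\n")  # first 300 characters
```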

Why Prompting Matters in Healthcare Tasks

Prompting shapes how the model interprets the task and generates its output. Here’s why it matters:

  • Precision and Clarity: A vague prompt may yield vague results. A precise prompt clarifies the goal and the speaker (e.g. in prompt 2), and also often the audience (e.g. in prompt 3).
  • Task Alignment: Complex medical topics often require different approaches depending on the user—whether it’s a clinician, a patient, or a researcher.
  • Bias and Quality Control: Poorly constructed prompts can inadvertently introduce biases into the output.

Selecting a Model for a Task? Test Multiple Prompts

When evaluating LLMs for healthcare tasks—or applying insights from a research paper—consider these principles:

  1. Prompt Variation Matters: If an LLM fails on a task, it may not be the model’s fault. Try adjusting your prompts before concluding the model is ineffective, and avoid broad sweeping claims about a field or topic that aren’t supported by the test you are running.
  2. Multiple Dimensions of Performance: Look beyond binary “good” vs. “bad” evaluations. Consider dimensions like readability, clinical accuracy, and alignment with user needs, as an example when thinking about performance in healthcare. In our paper, we saw some cases where a patient and provider overlapped in ratings, and other places where the ratings were different.
  3. Reproducibility and Transparency: If a study doesn’t disclose how prompts were designed or varied, its conclusions may lack context. Reproducibility in AI studies depends not just on the model, but on the interaction between the task, model, and prompt design. You should be looking for these kinds of details when reading or peer reviewing papers. Take results and conclusions with a grain of salt if these methods are not detailed in the paper.
  4. Involve Stakeholders in Evaluation: As shown in the preprint mentioned earlier, involving both clinical experts and patients in evaluating LLM outputs adds critical perspectives often missing in standard evaluations, especially as we evolve to focus research on supporting patient needs and not simply focusing on clinician and healthcare system usage of AI.

What This Means for Healthcare Providers, Researchers, and Patients

  • For healthcare providers, understand that the way you frame a question can improve the usefulness of AI tools in practice. A carefully constructed prompt, adding a persona or requesting information for a specific audience, can change the output.
  • For researchers, especially those developing or evaluating AI models, it’s essential to test prompts across different task types and end-user needs. Transparent reporting on prompt strategies strengthens the reliability of your findings.
  • For patients, recognize that AI-generated health information is shaped by both the model and the prompt. This can support critical thinking when interpreting AI-driven health advice. Remember that LLMs can be biased, but so can humans in healthcare. The same approach for assessing bias and evaluating experiences in healthcare should be used for LLM output as well as human output. Everyone (humans) and everything (LLMs) is capable of bias or errors in healthcare.

Prompts matter, so consider the prompt as well as the model type as a factor in assessing LLMs in healthcare.

TL;DR: Instead of asking “Which model is best?”, a better question might be:

“How do we design and evaluate prompts that lead to the most reliable, useful results for this specific task and audience?”

I’ve observed, and this study adds evidence, that prompt interaction with the model matters.

Assessing the Impact of Diabetes on Gastrointestinal Symptom Severity in Exocrine Pancreatic Insufficiency (EPI/PEI): A Diabetes Subgroup Analysis of EPI/PEI-SS Scores – Poster at #ADA2024

Last year, I recognized that there was a need to improve the documentation of symptoms of exocrine pancreatic insufficiency (known as EPI or PEI). There is no standardized way to discuss symptoms with doctors, and this influences whether or not people get the right amount of enzymes (pancreatic enzyme replacement therapy; PERT) to treat EPI and eliminate symptoms completely. It can be done, but like insulin, it requires matching PERT to the amount of food you’re consuming. I also began observing that EPI is underscreened and underdiagnosed, whether that’s in the general population or in people with diabetes. I thought that if we could create a list of common EPI symptoms and a standardized scale to rate them, this might help address some of these challenges.

I developed this scale to address these needs. It is called the “Exocrine Pancreatic Insufficiency Symptom Score” or “EPI/PEI-SS” for short.

I had a handful of people with and without EPI help me test the scale last year, and then I opened up a survey to the entire world and asked people to share their experiences with GI-related symptoms. I specifically sought people with EPI diagnoses as well as people who don’t have EPI, so that we could compare the symptom burden and experiences to people without EPI. (Thank you to everyone who contributed their data to this survey!)

After the first three weeks, I started analyzing the first set of data. While doing that, I realized that (both because of my network of people with diabetes and because I also posted in at least one diabetes-specific group) I had a large sub-group of people with diabetes who had contributed to the survey, and I was able to do a full subgroup analysis to assess whether having diabetes seemed to correlate with a different symptom experience of EPI or not.

Here’s what I found, and what my poster is about (you can view my poster as a PDF here), presented at ADA Scientific Sessions 2024 (#ADA2024):

1985-LB at #ADA2024, “Assessing the Impact of Diabetes on Gastrointestinal Symptom Severity in Exocrine Pancreatic Insufficiency (EPI/PEI): A Diabetes Subgroup Analysis of EPI/PEI-SS Scores”

Exocrine pancreatic insufficiency has a high symptom burden and is present in as many as 3 of 10 people with diabetes. (See my systematic review from last year here). To help improve conversations about symptoms of EPI, which can then be used to improve screening, diagnosis, and treatment success with EPI, I created the Exocrine Pancreatic Insufficiency Symptom Score (EPI/PEI-SS), which consists of 15 individual symptoms. People separately rate the frequency (0-5) and severity (0-3) with which they experience each symptom, if at all. The frequency and severity are multiplied for an individual symptom score (0-15 possible), and these are added up for a total EPI/PEI-SS score (0-225 possible, because 15 symptoms times 15 possible points per symptom is 225).
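
For clarity, here is that scoring arithmetic as a small code sketch (the ratings below are made up for illustration):

```python
# Minimal sketch of the EPI/PEI-SS arithmetic described above, with made-up ratings.
# Each of the 15 symptoms gets a frequency rating (0-5) and a severity rating (0-3);
# the individual symptom score is frequency * severity (0-15), and the total is their sum (0-225).

def epi_pei_ss_total(ratings):
    """ratings: list of (frequency, severity) tuples, one per symptom."""
    return sum(frequency * severity for frequency, severity in ratings)

# Example: 15 symptoms with hypothetical ratings
example_ratings = [(3, 2), (4, 1), (5, 3), (0, 0), (2, 2),
                   (1, 1), (3, 3), (2, 1), (4, 2), (0, 0),
                   (5, 2), (3, 1), (2, 3), (1, 2), (4, 3)]
print(epi_pei_ss_total(example_ratings))  # total score out of a possible 225
```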

I conducted a real-world study of the EPI/PEI-SS in the general population to assess the gastrointestinal symptom burden in individuals with (n=155) and without (n=169) EPI. Because there was a large cohort of people with diabetes (PWD) within these groups, I separately analyzed them to evaluate whether diabetes contributes to a difference in EPI/PEI-SS score.

Methods:

I calculated EPI/PEI-SS scores for all survey participants. Previously, I had analyzed the differences of people with and without EPI overall. For this sub-analysis, I analyzed and compared between PWD (n=118 total), with EPI (T1D: n=14; T2D: n=20) or without EPI (T1D: n=78; T2D: n=6), and people without diabetes (n=206 total) with and without EPI.

I also looked at sub-groups within the non-EPI cohorts and broke them into two groups to see whether other GI conditions contributed to a higher EPI/PEI-SS score and whether we could distinguish EPI from other GI and non-GI conditions.

Results:

People with EPI have a much higher symptom burden than people without EPI. This can be assessed by looking at the statistically significant higher mean EPI/PEI-SS score as well as the average number of symptoms; the average severity score of individual symptoms; and the average frequency score of individual symptoms.

This remains true irrespective of diabetes. In other words, diabetes does not appear to influence any of these metrics.

People with diabetes with EPI had statistically significant higher mean EPI/PEI-SS scores (102.62 out of 225, SD: 52.46) than did people with diabetes without EPI (33.64, SD: 30.38), irrespective of presence of other GI conditions (all group comparisons p<0.001). As you can see below, that is the same pattern we see in people without diabetes. And the stats confirm what you can see: there is no significant difference overall or in any of the subgroups between people with and without diabetes.

Box plot showing EPI/PEI-SS scores for people with and without diabetes, and with and without EPI or other GI conditions. The scores are higher in people with EPI regardless of whether they have diabetes. The plot makes it clear that the scores are distinct between the groups with and without EPI, even when the people without EPI have other GI conditions. This suggests the EPI/PEI-SS can be useful in distinguishing between EPI and other conditions that may cause GI symptoms, and that the EPI/PEI-SS could be a useful screening tool to help identify people who need screening for EPI.

T1D and T2D subgroups were similar (but because the T2D cohort is small, I did not break them out separately in this graph).

For example, people with diabetes with EPI had an average of 12.59 (out of 15) symptoms, with an average frequency score of 3.06, an average severity score of 1.79, and an average individual symptom score of 5.48. This is a pretty clear contrast to people with diabetes without EPI, who had an average of 7.36 symptoms, with an average frequency score of 1.4, an average severity score of 0.8, and an average individual symptom score of 1.12. All comparisons are statistically significant (p<0.001).

A table comparing the average number of symptoms, frequency, severity, and individual symptom scores between people with diabetes with and without exocrine pancreatic insufficiency (EPI). People with EPI have more symptoms and higher frequency and severity than without EPI: regardless of diabetes.

Conclusion 

  • EPI has a high symptom burden, irrespective of diabetes.
  • High scores using the EPI/PEI-SS among people with diabetes can distinguish between EPI and other GI conditions.
  • The EPI/PEI-SS should be further studied as a possible screening method for EPI and assessed as a tool to aid people with EPI in tracking changes to EPI symptoms over time based on PERT titration.

What does this mean if you are a healthcare provider? What actionable information does this give you?

If you’re a healthcare provider, you should be aware that people with diabetes may be more likely to have EPI – rather than celiac or gastroparesis (source) – if they mention having GI symptoms. This means you should incorporate fecal elastase screening into your care plans to help further evaluate GI-related symptoms.

If you want to further improve your pre-test probability before fecal elastase testing, you can use the EPI/PEI-SS with your patients to assess the severity and frequency of their GI-related symptoms. I will explain the cutoff and AUC numbers we calculated, but first understand the caveat that these were calculated in the initial real-world study, which included people with EPI who are already treating with PERT; these numbers might therefore change a little when we repeat this study in people with untreated EPI. (I actually predict the mean score will go up in an undiagnosed population, because scores should go down with treatment.) A study in that different population may change these exact cutoff and sensitivity/specificity numbers, which is why I’m giving this caveat. That being said: the AUC was 0.85, which means the EPI/PEI-SS is pretty good at differentiating between having EPI and not having EPI. In the diabetes sub-population specifically, I calculated a suggested cutoff of 59 (out of 225) with a sensitivity of 0.81 and specificity of 0.75. This means we estimate that if people are bringing up GI symptoms to you and you have them take the EPI/PEI-SS and their score is greater than or equal to 59, then out of 100 people with EPI, 81 would be identified (and 75 of 100 people without EPI would also correctly be identified via scores lower than 59). That doesn’t mean that people with EPI can’t have a lower score, or that people with a higher score definitely have EPI; but it does mean that fecal elastase <=200 ug/g is a lot more likely in those with higher EPI/PEI-SS scores.
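
To make the cutoff arithmetic concrete, here is a minimal sketch of how a cutoff like >=59 translates into sensitivity and specificity; the scores and labels below are invented for illustration and are not the study data.

```python
# Toy illustration of applying a screening cutoff and computing sensitivity/specificity.
# Scores and EPI labels below are made up for illustration only.

def screening_stats(scores, has_epi, cutoff=59):
    tp = sum(1 for s, e in zip(scores, has_epi) if e and s >= cutoff)      # flagged, has EPI
    fn = sum(1 for s, e in zip(scores, has_epi) if e and s < cutoff)       # missed, has EPI
    tn = sum(1 for s, e in zip(scores, has_epi) if not e and s < cutoff)   # below cutoff, no EPI
    fp = sum(1 for s, e in zip(scores, has_epi) if not e and s >= cutoff)  # flagged, no EPI
    sensitivity = tp / (tp + fn)  # share of people with EPI who score at/above the cutoff
    specificity = tn / (tn + fp)  # share of people without EPI who score below the cutoff
    return sensitivity, specificity

scores  = [102, 75, 40, 90, 61, 20, 33, 15]
has_epi = [True, True, True, True, False, False, False, False]
print(screening_stats(scores, has_epi))  # (0.75, 0.75) for this toy example
```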

In addition to the cutoff score, there is a notable difference between people with diabetes and EPI and people with diabetes without EPI in their top individual symptom scores (representing symptom burden based on frequency and severity). For example, the top 3 symptoms of those with EPI and diabetes include avoiding certain food/groups; urgent bowel movements; and avoiding eating large meals. People with diabetes but without EPI also score “Avoid certain food/groups” as their top symptom, but the score is markedly different: a mean score of 8.94 for people with EPI as compared to 3.49 for people without EPI. In fact, the mean score on the lowest individual symptom is higher for people with EPI than the highest individual symptom score for people without EPI.

QR code for EPI/PEI-SS – takes you to https://bit.ly/EPI-PEI-SS-Web

How do you have people take the EPI/PEI-SS? You can pull this link up (https://bit.ly/EPI-PEI-SS-Web), give this link to them and ask them to take it on their phone, or save this QR code and give it to them to take later. The link (and the QR code) go to a free web-based version of the EPI/PEI-SS that will calculate the total EPI/PEI-SS score, and you can use it for shared decision making processes about whether this person would benefit from a fecal elastase test or other follow up screening for EPI. Note that the EPI/PEI-SS does not collect any identifiable information and is fully anonymous.

(Bonus: people who use this tool can opt to contribute their anonymized symptom and score data for an ongoing observational study.)

If you have feedback about whether the EPI/PEI-SS was helpful – or not – in your care of people with diabetes; or if you want to discuss collaborating on some prospective studies to evaluate EPI/PEI-SS in comparison to fecal elastase screening, please reach out anytime to Dana@OpenAPS.org

What does this mean if you are a patient (person with diabetes)? What actionable information does this give you?

If you don’t have GI symptoms that bother you, you don’t necessarily need to take action. (Just put a note in your brain that EPI is more likely than celiac or gastroparesis in people with diabetes, so if you or a friend with diabetes have GI symptoms in the future, you can make sure you are assessed for EPI.) You can also choose to take the EPI/PEI-SS regardless, and opt in to donate your data.

If you do have GI symptoms that are annoying, you may want to take the EPI/PEI-SS to help you evaluate the frequency and severity of your GI symptoms. You can take it for free and anonymously – no identifiable information is needed to access the tool. It will generate the EPI/PEI-SS score for you.

Based on the score, you may want to ask your doctor (which could be the doctor that treats your diabetes, or a primary/general care provider, or a gastroenterologist – whoever you seek routine care from or have an appointment with next) about your symptoms; share the EPI/PEI-SS score; and explain that you think you may warrant screening for EPI.

(You can also choose to contribute your anonymous symptom data to a research dataset, to help us improve the EPI/PEI-SS and figure out how to improve screening, diagnosis, and treatment of EPI. Remember, this tool will not ask you for any identifying information. This is 100% optional, and you can opt out if you prefer not to contribute to research while still using the tool.)

You can see a pre-print version of the diabetes sub-study here or pre-print of the general population data here.

If you’re looking for more personal experiences about living with EPI, check out DIYPS.org/EPI. And for people with EPI looking to improve their dosing with pancreatic enzyme replacement therapy, you may want to check out PERT Pilot (a free iOS app to record enzyme dosing).

Researchers & clinicians, if you’re interested in collaborating on studies in EPI (in diabetes, or more broadly on EPI), whether specifically on EPI/PEI-SS or broader EPI topics, please reach out! My email is Dana@OpenAPS.org

Pain and translation and using AI to improve healthcare at an individual level

I think differently from most people. Sometimes, this is a strength; and sometimes this is a challenge. This is noticeable when I approach healthcare encounters in particular: the way I perceive signals from my body is different from a typical person. I didn’t know this for the longest time, but it’s something I have been becoming more aware of over the years.

The most noticeable incident that brought me to this realization was when I pitched headfirst off a mountain trail in New Zealand over five years ago. I remember yelling – in flight – help, I broke my ankle, help. When I had arrested my fall, clung on, and then the human daisy chain was pulling me back up onto the trail, I yelped and stopped because I could not use my right ankle to help me climb up the trail. I had to reposition my knee to help move me up. When we got up to the trail and had me sitting on a rock, resting, I felt a wave of nausea crest over me. People suggested that it was dehydration and I should drink. I didn’t feel dehydrated, but ok. Then because I was able to gently rest my foot on the ground at a normal perpendicular angle, the trail guides hypothesized that it was not broken, just sprained. It wasn’t swollen enough to look like a fracture, either. I felt like it hurt really bad, worse than I’d ever hurt an ankle before, and it didn’t feel like a sprain, but I had never broken a bone before so maybe it was the trauma of the incident contributing to how I was feeling. We taped it and I tried walking. Nope. Too-strong pain. We made a new goal of having me use poles as crutches to get me to a nearby stream a half mile away, to try to ice my ankle. Nope, could not use poles as crutches; even partial weight bearing was undoable. I ended up doing a mix of hopping, holding on to Scott and one of the guides. That got exhausting on my other leg pretty quickly, so I also got down on all fours (with my right knee on the ground but lifting my foot and ankle in the air behind me) to crawl some. Eventually, we realized I wasn’t going to be able to make it to the stream and the trail guides decided to call for a helicopter evacuation. The medics, too, when they arrived via helicopter, thought it likely wasn’t broken. I got flown to an ER and taken to X-Ray. When the technician came out, I asked her if she saw anything obvious and whether it looked broken or not. She laughed and said oh yes, there’s a break. When the ER doc came in to talk to me he said “you must have a really high pain tolerance” and I said “oh really? So it’s definitely broken?” and he looked at me like I was crazy, saying “it’s broken in 3 different places”. (And then he gave me extra pain meds before setting my ankle and putting the cast on to compensate for the fact that I have high pain tolerance and/or don’t communicate pain levels in quite the typical way.)

A week later, when I was trying not to fall on my broken ankle and broke my toe, I knew instantly that I had broken my toe, both by the pain and the nausea that followed. Years later when I smashed another toe on another chair, I again knew that my toe was broken because of the pain + following wave of nausea. Nausea, for me, is apparently a response to very high level pain. And this is something I’ve carried forward to help me identify and communicate when my pain levels are significant, because otherwise my pain tolerance is such that I don’t feel like I’m taken seriously because my pain scale is so different from other people’s pain scales.

Flash forward to the last few weeks. I have an autoimmune disease causing issues with multiple areas of my body. I have some progressive, slight muscle weakness that began to concern me, especially as it spread to multiple limbs and areas of my body. This was followed by pain in different parts of my spine, which has escalated. Last weekend, riding in the car, I started to get nauseous from the pain and had to take anti-nausea medicine (which thankfully helped) as well as pain medicine (OTC, and thankfully it also helped lower it down to manageable levels). This has happened several other times.

Some of the symptoms are concerning to my healthcare provider and she agreed I should probably have a MRI and a consult from neurology. Sadly, the first available new patient appointment with the neurologist I was assigned to was in late September. Gulp. I was admittedly nervous about my symptom progression, my pain levels (intermittent as they are), and how bad things might get if we are not able to take any action between now and September. I also, admittedly, was not quite sure how I would cope with the level of pain I have been experiencing at those peak moments that cause nausea.

I had last spoken to my provider a week prior, before the spine pain started. I reached out to give her an update, confirm that my specialist appointment was not until September, and express my concern about the progression and timeline. She too was concerned and I ended up going in for imaging sooner.

Over the last week, because I’ve been having these progressive symptoms, I used Katie McCurdy’s free templates from Pictal Health to help visualize and show the progression of symptoms over time. I wasn’t planning on sending my visuals to my doctor, but it helped me concretely articulate my symptoms and confirm that I was including everything that I thought was meaningful for my healthcare providers to know. I also shared them with Scott to confirm he didn’t think I had missed anything. The icons were helpful in some cases, but in others didn’t quite match how I was experiencing pain, so I modified them somewhat to better match what I was feeling.

(PS – check out Katie’s templates here, you can make a copy in Google Drive and use them yourself!)

As I spoke with the nurse who was recording my information at intake for imaging, she asked me to characterize the pain. I did, and explained that it was usually around a 7/10 at that point but periodically gets stronger to the point of causing nausea, which for me is a broken-bone-level pain response. She asked me to characterize the pain – was it burning, tingling…? None of the words she said matched how it feels. It’s strong pain; it sometimes gets worse. But it’s not any of the words she mentioned.

When the nurse asked if it was “sharp”, Scott spoke up and explained the icon that I had used on my visual, saying maybe it was “sharp” pain. I thought about it and agreed that it was probably the closest word (at least, it wasn’t a hard no like the words burning, tingling, etc. were), and the nurse wrote it down. That became the word I was able to use as the closest approximation to how the pain felt, but again with the emphasis of it periodically reaching nausea-inducing levels equivalent to broken bone pain, because I felt saying “sharp” pain alone did not characterize it fully.

This, then, is one of the areas where I feel that artificial intelligence (AI) gives me a huge helping hand. I often will start working with an LLM (a large language model) by describing symptoms. Sometimes I give it a persona to respond as (different healthcare provider roles); sometimes I clarify my own role, as a patient or as a similar provider/expert. I use different words and phrases in different questions and follow-ups; I then study the language it uses in response.

If you’re not familiar with LLMs, you should know they are not human intelligence; there is no brain that “knows things”. An LLM is not an encyclopedia. It’s a tool that’s been trained on a bajillion words, and it learns patterns of words as a result, and records “weights” that are basically cues about how those patterns of words relate to each other. When you ask it a question, it’s basically autocompleting the next word based on the likelihood of it being the next word in a similar pattern. It can therefore be wildly wrong; it can also still be wildly useful in a lot of ways, including this context.
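
If it helps to see that “autocomplete from likelihoods” idea in miniature, here is a toy sketch; the vocabulary and probabilities are invented for illustration, and real LLMs work over tokens and billions of learned weights rather than a lookup table.

```python
import random

# Toy "weights": for a given pair of preceding words, how likely each next word is.
# These numbers are made up purely to illustrate the idea of sampling by likelihood.
next_word_probs = {
    ("the", "pain"): {"is": 0.5, "was": 0.3, "feels": 0.2},
    ("pain", "is"):  {"sharp": 0.4, "burning": 0.35, "dull": 0.25},
}

def next_word(context):
    options = next_word_probs[context]
    words, weights = zip(*options.items())
    return random.choices(words, weights=weights)[0]  # pick the next word by likelihood

print(next_word(("the", "pain")))  # e.g. "is" -- a plausible continuation, not "knowledge"
```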

What I often do in these situations is not look for factual information. Again, it’s not an encyclopedia. Instead, I observe the patterns of words the LLM uses so that I am in turn building my own set of “weights” – meaning, building an understanding of the patterns of words it uses – to figure out a general outline of what is commonly known by doctors and medical literature; the common terminology likely being used by doctors to intake and output recommendations; and basically to build a list of things that do and do not match my scenario or symptoms or words, or whatever it is I am seeking to learn about.

I can then learn (from the LLM as well as in-person clinical encounters) that doctors and other providers typically ask about burning, tingling, etc., and can make it clear that none of those words match at all. I can then accept from them (or Scott, or use a word I learned from an LLM) an alternative suggestion where I’m not quite sure if it’s a perfect match, but it’s not absolutely wrong and therefore is ok to use to describe somewhat of the sensation I am experiencing.

The LLM and AI, basically, have become a translator for me. Again, notice that I’m not asking it to describe my pain for me; it would make up words based on patterns that have nothing to do with me. But when I observe the words it uses I can then use my own experience to rule things in/out and decide what best fits and whether and when to use any of those, if they are appropriate.

Often, I can do this in advance of a live healthcare encounter. And that’s really helpful because it makes me a better historian (to use clinical terms, meaning I’m able to report the symptoms and chronology and characterization more succinctly without them having to play 20 questions to draw it out of me); and it saves me and the clinicians time, so we can move on to other things.

At this imaging appointment, this was incredibly helpful. I had the necessary imaging and had the results at my fingertips and was able to begin exploring and discussing the raw data with my LLM. When I then spoke with the clinician, I was able to better characterize my symptoms in context of the imaging results and ask questions that I felt were more aligned with what I was experiencing, and it was useful for a more efficient but effective conversation with the clinician about what our working hypothesis was; what next short-term and long-term pathways looked like; etc.

This is often how I use LLMs overall. If you ask an LLM if it knows who Dana Lewis is, it “does” know. It’ll tell you things about me that are mostly correct. If you ask it to write a bio about me, it will solidly make up ⅓ of it that is fully inaccurate. Again, remember it is not an encyclopedia and does not “know things”. When you remember that the LLM is autocompleting words based on the likelihood that they match the previous words – and think about how much information is on the internet and how many weights (patterns of words) it’s been able to build about a topic – you can then get a better spidey-sense about when things are slightly more or less accurate at a general level. I have actually used part of an LLM-written bio, but not by asking it to write a bio. That doesn’t work because of made up facts. I have instead asked it to describe my work, and it does a pretty decent job. This is due to the number of articles I have written; the number of articles describing my work; and the number of bios I’ve actually written and posted online for conferences and such. So it has a lot of “weights” probably tied to the types of things I work on, and having it describe the type of work I do or am known for gets pretty accurate results, because it’s writing at a general, high level without enough detail to get anything “wrong” like a fact about an award, etc.

This is how I recommend others use LLMs, too, especially those of us as patients or working in healthcare. LLMs pattern match on words in their training; and they output likely patterns of words. We in turn as humans can observe and learn from the patterns, while recognizing these are PATTERNS of connected words that can in fact be wrong. Systemic bias is baked into human behavior and medical literature, and this then has been pattern-matched by the LLM. (Note I didn’t say “learned”; they’ve created weights based on the patterns they observe over and over again.) You can’t necessarily course-correct the LLM: it’ll pretend to apologize and maybe adjust its word patterns for a short while, but in a new chat it’s prone to make the same mistakes, because the training has not been updated based on your feedback, so it reverts to using the ‘weights’ (patterns) it was trained on. Instead, we need to create more of the correct/right information and have it voluminously available for LLMs to train on in the future. At an individual level then, we can let go of the obvious not-right things it’s saying and focus on what we can benefit from in the patterns of words it gives us.

And for people like me, with a high (or different type of) pain tolerance and a different vocabulary for what my body is feeling like, this has become a critical tool in my toolbox for optimizing my healthcare encounters. Do I have to do this to get adequate care? No. But I’m an optimizer, and I want to give the best inputs to the healthcare system (providers and my medical records) in order to increase my chances of getting the best possible outputs from the healthcare system to help me maintain and improve and save my health when these things are needed.

TLDR: LLMs can be powerful tools in the hands of patients, including for real-time or ahead-of-time translation and creating shared, understandable language for improving communication between patients and providers. Just as you shouldn’t tell a patient not to use Dr. Google, you should similarly avoid falling into the trap of telling a patient not to use LLMs because they’re “wrong”. Being wrong in some cases and some ways does not mean LLMs are useless or should not be used by patients. Each of these tools has limitations but a lot of upside and benefits; restricting patients or trying to limit use of tools is like limiting the use of other accessibility tools. I spotted a quote from Dr. Wes Ely that is relevant: “Maleficence can be created with beneficent intent”. In simple words, he is pointing out that harm can happen even with good intent.

Don’t do harm by restricting or recommending avoiding tools like LLMs.

Effective Pair Programming and Coding and Prompt Engineering and Writing with LLMs like ChatGPT and other AI tools

I’ve been puzzled when I see people online say that LLMs “don’t write good code”. In my experience, they do. But given that most of these LLMs are used in chatbot mode – meaning you chat and give it instructions to generate the code – that might be where the disconnect lies. To get good code, you need effective prompting, and to do so, you need clear thinking and ideas on what you are trying to achieve and how.

My recipe and understanding is:

Clear thinking + clear communication of ideas/request = effective prompting => effective code and other outputs

It also involves understanding what these systems can and can’t do. For example, as I’ve written about before, they can’t “know” things (although they can increasingly look things up) and they can’t do “mental” math. But, they can generally repeat patterns of words to help you see what is known about a topic and they can write code that you can execute (or it can execute, depending on settings) to solve a math problem.

What the system does well is help code small chunks, walk you through processes to link these sections of code up, and help you implement them (if you ask for it). The smaller the task (ask), the more effective it is. It is also easier for you to see when it completes the task and when it hasn’t been able to finish due to limitations like response length limits; information falling out of the context window (what it knows that you’ve told it); unclear prompting; and/or because you’re asking it to do things for which it doesn’t have expertise. Some of the last part – lack of expertise – can be improved with specific prompting techniques, and that’s also true for right-sizing the task it’s focusing on.

Right-size the task by giving a clear ask

If I were to ask an LLM to write me code for an iOS app to do XYZ, it could write me some code, but it certainly wouldn’t (at this point in history, written in February 2024) write all the code and give me a downloadable file that includes it all and the ability to simply run it. What it can do is start writing chunks and snippets of code for bits and pieces of files that I can take and place and build upon.

How do I know this? Because I made that mistake when trying to build my first iOS apps in April and May 2023 (last year). It can’t do that (and still can’t today; I repeated the experiment). I had no idea how to build an iOS app; I had a sense that it involved XCode and pushing to the Apple iOS App Store, and that I needed “Swift” as the programming language. Luckily, though, I had a much stronger sense of how I wanted to structure the app user experience and what the app needed to do.

I followed these steps:

  1. First, I initiated a chat as a complete novice app builder. I told it I was new to building iOS apps and wanted to use XCode. I had XCode downloaded, but that was it. I told it to give me step by step instructions for opening XCode and setting up a project. Success! That was effective.
  2. I opened a different chat window after that, to start a new chat. I told it that it was an expert in iOS programming using Swift and XCode. Then I described the app that I wanted to build, said where I was in the process (e.g. had opened and started a project in XCode but had no code yet), and asked it for code to put on the home screen so I could build and open the app and it would have content on the home screen. Success!
  3. From there, I was able to stay in the same chat window and ask it for pieces at a time. I wanted to have a new user complete an onboarding flow the very first time they opened the app. I explained the number of screens and content I wanted on those screens; the chat was able to generate code, tell me how to create that in a file, and how to write code that would trigger this only for new users. Success!
  4. I was able to then add buttons to the home screen; have those buttons open new screens of the app; add navigation back to the home; etc. Success!
  5. (Rinse and repeat, continuing until all of the functionality was built out a step at a time).

To someone with familiarity building and programming things, this probably follows a logical process of how you might build apps. If you’ve built iOS apps before and are an expert in Swift programming, you’re either not reading this blog post or are thinking I (the human) am dumb and inexperienced.

Inexperienced, yes, I was (in April 2023). But what I am trying to show here is for someone new to a process and language, this is how we need to break down steps and work with LLMs to give it small tasks to help us understand and implement the code it produces before moving forward with a new task (ask). It takes these small building block tasks in order to build up to a complete app with all the functionality that we want. Nowadays, even though I can whip up a prototype project and iOS app and deploy it to my phone within an hour (by working with an LLM as described above, but skipping some of the introductory set-up steps now that I have experience in those), I still follow the same general process to give the LLM the big picture and efficiently ask it to code pieces of the puzzle I want to create.

As the human, you need to be able to keep the big picture – full app purpose and functionality – in mind while subcontracting with the LLM to generate code for specific chunks of code to help achieve new functionality in our project.

In my experience, this is very much like pair programming with a human. In fact, this is exactly what we did when we built DIYPS over ten years ago (wow) and then OpenAPS within the following year. I’ve talked endlessly about how Scott and I would discuss an idea and agree on the big picture task; then I would direct sub-tasks and asks that he, and later Ben and others, would code (at first because I didn’t have as much coding experience and this was 10 years ago, without LLMs; I gradually took on more of those coding steps and roles as well). I was in charge of the big picture project and process and end goal; it didn’t matter who wrote which code or how; we worked together to achieve the intended end result. (And it worked amazingly well; here I am 10 years later still using DIYPS and OpenAPS; and tens of thousands of people globally are all using open source AID systems spun off of the algorithm we built through this process!)

Two purple boxes. The one on the left says "big picture project idea" and has a bunch of smaller boxes within, labeled LLM, attempting to show how an LLM can do small-size tasks within the scope of a bigger project that you direct it to do. On the right, the box simply says "finished project".

Today, I would say the same is true. It doesn’t matter – for my types of projects – if a human or an LLM “wrote” the code. What matters is: does it work as intended? Does it achieve the goal? Does it contribute to the goal of the project?

Coding can be done – often by anyone (human with relevant coding expertise) or anything (LLM with effective prompting) – for any purpose. The critical key is knowing what the purpose is of the project and keeping the coding heading in the direction of serving that purpose.

Tips for right-sizing the ask

  1. Consider using different chat windows for different purposes, rather than trying to do it all in one. Yes, context windows are getting bigger, but you’ll still likely benefit from giving different prompts in different windows (more on effective prompting below). Start with one window for getting started with setting up a project (e.g. how to get XCode on a Mac and start a project; what file structure to use for an app/project that will do XYZ; how to start a Jupyter notebook for doing data science with Python; etc.); brainstorming ideas to scope your project; then separately for starting a series of coding sub-tasks (e.g. write code for the home page screen for your app; add a button that allows voice entry functionality; add in HealthKit permission functionality; etc.) that serves the big picture goal.
  2. Make a list for yourself of the steps needed to build a new piece of functionality for your project. If you know what the steps are, you can specifically ask the LLM for that. Again, use a separate window if you need to. For example, if you want to add in the ability to save data to HealthKit from your app, you may start a new chat window that asks the LLM generally how one adds HealthKit functionality to an app. It’ll describe the process: certain settings that need to be done in XCode for the project; adding code that prompts the user with correct permissions; and then code that actually does the saving/revising to HealthKit.

    Make your list (by yourself or with help), then you can go ask the LLM to do those things in your coding/task window for your specific project. You can go set the settings in XCode yourself, and skip to asking it for the task you need it to do, e.g. “write code to prompt the user with HealthKit permissions when button X is clicked”.

    (Sure, you can do the ask for help in outlining steps in the same window that you’ve been prompting for coding sub-tasks; just be aware that the more you do this, the more quickly you’ll burn through your context window. Sometimes that’s ok, and you’ll get a feel for when to use a separate window as you gain more experience.)

  • Pay attention as you go and see how much code it can generate and when it falls short of an ask. This will help you improve your future asks so that the LLM can fully complete the task. I observe that when I don’t know – due to my lack of expertise – the right size of a task, it’s more prone to give me ½-⅔ of the code and solution but need additional prompting after that. Sometimes I ask it to continue where it cut off; other times I start implementing/working with the bits of code (the first ⅔) it gave me, and keep a mental or written note that this did not completely generate all steps/code for the functionality and to come back. Part of why sometimes it is effective to get started with ⅔ of the code is because you’ll likely need to debug/test the first bit of code, anyway. Sometimes when you paste in code it’s using methods that don’t match the version you’re targeting (e.g. functionality that is outdated as of iOS 15, when you’re targeting iOS 17 and newer) and it’ll flag a warning or block it from working until you fix it.

    Once you’ve debugged/tested as much as you can of the original ⅔ of code it gave you, you can prompt it to say “Ok, I’ve done X and Y. We were trying to (repeat initial instructions/prompt) – what are the remaining next steps? Please code that.” to go back and finish the remaining pieces of that functionality.

    (Note that saying “please code that” isn’t necessarily good prompt technique, see below).

    Again, much of this is paying attention to how the sub-task is getting done in service of the overall big picture goal of your project; or the chunk that you’ve been working on if you’re building new functionality. Keeping track with whatever method you prefer – in your head, a physical written list, a checklist digitally, or notes showing what you’ve done/not done – is helpful.

Most of the above I used for coding examples, but I follow the same general process when writing research papers, blog posts, research protocols, etc. My point is that this works for all types of projects that you’d work on with an LLM, whether the intended output is code or human-focused language that you’d write or speak.

But, coding or writing language, the other thing that makes a difference in addition to right-sizing the task is effective prompting. I’ve noticed intuitively that prompting has made the biggest difference in getting output that matches my expertise in my projects. Conversely, I have actually peer-reviewed papers for medical journals that do a horrifying job with prompting. You’ll hear people talk about “prompt engineering” and this is what it is referring to: how do you engineer (write) a prompt to get the ideal response from the LLM?

Tips for effective prompting with an LLM

    1. Personas and roles can make a difference, both for you and for the LLM. What do I mean by this? Start your prompt by telling the LLM what perspective you want it to take. Without it, you’re going to make it guess what information and style of response you’re looking for. Here’s an example: if you ask it what causes cancer, it’s going to default to safety and give you a general public answer about causes of cancer in very plain, lay language. Which may be fine. But if you’re looking to generate a better understanding of the causal mechanism of cancer; what is known; and what is not known, you will get better results if you prompt it with “You are an experienced medical oncologist” so it speaks from the generated perspective of that role. Similarly, you can tell it your role. Follow it with “Please describe the causal mechanisms of cancer and what is known and not known” and/or “I am also an experienced medical researcher, although not an oncologist” to help contextualize that you want a deeper, technical approach to the answer and not high level plain language in the response.

      Compare and contrast when you prompt the following:

      A. “What causes cancer?”

      B. “You are an experienced medical oncologist. What causes cancer? How would you explain this differently in lay language to a patient, and how would you explain this to another doctor who is not an oncologist?”

      C. “You are an experienced medical oncologist. Please describe the causal mechanisms of cancer and what is known and not known. I am also an experienced medical researcher, although not an oncologist.”

      You’ll likely get different types of answers, with some overlap between A and the first part of answer B. Ditto for a tiny bit of overlap between the latter half of answer B and C.

      I do the same kind of prompting with technical projects where I want code. Often, I will say “You are an expert data scientist with experience writing code in Python for a Jupyter Notebook” or “You are an AI programming assistant with expertise in building iOS apps using XCode and SwiftUI”. Those will then be followed with a brief description of my project (more on why this is brief below) and the first task I’m giving it.

      The same also goes for writing-related tasks; the persona I give it and/or the role I reference for myself makes a sizable difference in getting the quality of the output to match the style and quality I was seeking in a response.

  • Be specific. Saying “please code that” or “please write that” might work sometimes, but more often than not will get a less effective output than if you provide a more specific prompt. I am a literal person, so this is something I think about a lot: I’m always parsing and mentally reviewing what people say to me, because my instinct is to take their words literally, and I have to think through the likelihood that those words were intended literally or whether there is context that should be used to filter those words to be less literal. Sometimes, you’ll be thinking about something and start talking to someone about it, and they have no idea what on earth you’re talking about because the last part of your out-loud conversation with them was about a completely different topic!

    LLMs are the same as the confused conversational partner who doesn’t know what you’re thinking about. LLMs only know what you’ve last/recently told them (and they will ‘forget’ what you told them about a project more quickly than humans do). Remember the above tips about brainstorming and making a list of tasks for a project? Providing a description of the task along with the ask (e.g. we are doing X related to the purpose of achieving Y, please code X) will get you better output more closely matching what you wanted than saying “please code that”, where the LLM might code something else to achieve Y if you didn’t tell it you wanted to focus on X.

    I find this even more necessary with writing related projects. I often find I need to give it the persona “You are an expert medical researcher”, the project “we are writing a research paper for a medical journal”, the task “we need to write the methods section of the paper”, and a clear ask “please review the code and analyses and make an outline of the steps that we have completed in this process, with sufficient detail that we could later write a methods section of a research paper”. A follow up ask is then “please take this list and draft it into the methods section”. That process with all of that specific context gives better results than “write a methods section” or “write the methods” etc.

  • Be willing to start over with a new window/chat. Sometimes the LLM can get itself lost in solving a sub-task and lose sight (via lost context window) of the big picture of a project, and you’ll find yourself having to repeat over and over again what you’re asking it to do. Don’t be afraid to cut your losses and start a new chat for a sub-task that you’ve been stuck on. You may be able to eventually come back to the same window as before, or the new window might become your new ‘home’ for the project…or sometimes a third, fourth, or fifth window will.
  • Try, try again.
    I may hold the record for the longest running bug that I (and the LLM) could. Not. Solve. This was so, so annoying. No users apparently noticed it, but I knew about it and it bugged me for months and months. Every few weeks I would go to an old window and also start a new window, describe the problem, paste the code in, and ask for help to solve it. I asked it to identify problems with the code; I asked it to explain the code and unexpected/unintended functionality from it; I asked it what types of general things would be likely to cause that type of bug. It couldn’t find the problem. I couldn’t find the problem. Finally, one day, I did all of the above, but then also started pasting every single file from my project and asking if it was likely to include code that could be related to the problem. By forcing myself to review all my code files with this problem in mind, even though the files weren’t related at all to the file/bug… I finally spotted the problem myself. I pasted the code in, asked if it was a possibility that it was related to the problem, the LLM said yes, I tried a change and… voila! Bug solved on January 16 after plaguing me since November 8. (And it probably existed before then, but I didn’t have the functionality built until November 8, when I realized it was a problem.) I was beating myself up about it and posted to Twitter about finally solving the bug (but very much with the mindset of feeling very stupid about it). Someone replied and said “congrats! sounds like it was a tough one!”. Which I realized was a very kind framing and one that I liked, because it was a tough one; and also I am doing a tough thing that no one else is doing and would not have been willing to try without an LLM to support me.

    Similarly, just this last week on Tuesday I spent about 3 hours working on a sub-task for a new project. It took 3 hours to do something that on a previous project took me about 40 minutes, so I was hyper aware of the time mismatch and perceiving that 3 hours was a long time to spend on the task. I vented to Scott quite a bit on Tuesday night, and he reminded me that sure, it took “3 hours”, but I did something in 3 hours that would otherwise take 3 years, because no one else would do (or is doing) the project that I’m working on. Then on Wednesday, I spent an hour doing another part of the project, and on Thursday whipped through another hour and a half of huge chunks of work that ended up being highly efficient and much faster than they would have been otherwise, in part because the “three hours” on Tuesday wasn’t just about the code: it was about organizing my thinking, scoping the project and research protocol, etc., so that I could effectively prompt the LLM to do the sub-task (which probably did actually take closer to the ~40 minutes, similar to the prior project).

    All this to say: LLMs have become pair programmers and collaborators and writers that are helping me achieve tasks and projects that no one else in the world is working on yet. (It reminds me very much of my early work with DIYPS and OpenAPS where we did the work, quietly, and people eventually took notice and paid attention, albeit slower than we wished but years faster than had we not done that work. I’m doing the same thing in a new field/project space now.) Sometimes, the first attempt to delegate a sub-task doesn’t work. It may be because I haven’t organized my thinking enough, and the lack of ideal output shows that I have not prompted effectively yet. Sometimes I can quickly fix the prompt to be effective; but sometimes it highlights that my thinking is not yet clear; my ability to communicate the project/task/big picture is not yet sufficient; and the process of achieving the clarity of thinking and translating to the LLM takes time (e.g. “that took 3 hours when it should have taken 40 minutes”) but ultimately still moves me forward to solving the problem or achieving the tasks and sub-tasks that I wanted to do. Remember what I said at the beginning:

    Clear thinking + clear communication of ideas/request = effective prompting => effective code and other outputs

 

  • Try it anyway.
    I am trying to get out of the habit of saying “I can’t do X”, like “I can’t code/program an iOS app”…because now I can. I’ve in fact built and shipped/launched/made available multiple iOS apps (check out Carb Pilot if you’re interested in macronutrient estimates for any reason; you can customize so you only see the one(s) you care about; or if you have EPI, check out PERT Pilot, which is the world’s first and only app for tracking pancreatic enzyme replacement therapy and has the same AI feature for generating macronutrient estimates to aid in adjusting enzyme dosing for EPI.) I’ve also made really cool, 100% custom-to-me niche apps to serve a personal purpose that save me tons of time and energy. I can do those things, because I tried. I flopped a bunch along the way – it took me several hours to solve a simple iOS programming error related to home screen navigation in my first few apps – but in the process I learned how to do those things and now I can build apps. I’ve coded and developed for OpenAPS and other open source projects, including a tool for data conversion that no one else in the world had built. Yet, my brain still tries to tell me I can’t code/program/etc (and to be fair, humans try to tell me that sometimes, too).

    I bring that up to contextualize that I’m working on – and I wish others would work on, too – trying to address the reflexive thoughts of what we can and can’t do, based on prior knowledge. The world is different now and tools like LLMs make it possible to learn new things and build new projects that maybe we didn’t have time/energy to do before (not that we couldn’t). The bar to entry and the bar to starting and trying is so much lower than it was even a year ago. It really comes down to willingness to try and see, which I recognize is hard: I have those thought patterns too of “I can’t do X”, but I’m trying to notice when I have those patterns and shift my thinking to “I used to not be able to do X; I wonder if it is possible to work with an LLM to do part of X or learn how to do Y so that I could try to do X”.

    A recent real example for me is power calculations and sample size estimates for future clinical trials. That’s something I can’t do; it requires a statistician and specialized software and expertise.

    Or…does it?

    I asked my LLM how power calculations are done. It explained. I asked if it was possible to do it using Python code in a Jupyter notebook. I asked what information would be needed to do so. It walked me through the decisions I needed to make about power and significance, and highlighted variables I needed to define/collect to put into the calculation. I had generated the data from a previous study, so I had all the pieces (variables) I needed. I asked it to write code for me to run in a Jupyter notebook, and it did. I tweaked the code, input my variables, ran it… and got the result. I had run a power calculation! (Shocked face here.) (A minimal sketch of what this kind of calculation can look like appears after this list.) But then I got imposter syndrome again, and reached out to a statistician who I had previously worked with on a research project. I shared my code and asked if that was the correct or an acceptable approach and if I was interpreting it correctly. His response? It was correct, and “I couldn’t have done it better myself”.

    (I’m still shocked about this).

    He also kindly took my variables and put them into the specialized software he uses and confirmed that the results output matched what my code did, then pointed out something that taught me a lesson for future projects that might be different (where the data is/isn’t normally distributed), although it didn’t influence the output of my calculation for this project.

    What I learned from this was a) this statistician is amazing (which I already knew from working with him in the past) and kind to support my learning like this; b) I can do pieces of projects that I previously thought were far beyond my expertise; c) the blocker is truly in my head, and the more we break out of or identify the patterns stopping us from trying, the farther we will get.

    “Try it anyway” also refers to trying things over time. The LLMs are improving every few months and often have new capabilities that they didn’t have before. Much of my work is done with GPT-4, and the more nuanced, advanced technical tasks are way more efficient than when using GPT-3.5. That being said, some tasks can absolutely be done with GPT-3.5-level AI. Doing something now and not quite figuring it out could be something that you sort out in a few weeks/months (see above about my 3 month bug); it could be something that is easier to do once you advance your thinking; or it could be more efficiently done with the next model of the LLM you’re working with.

  • Test whether custom instructions help. Be aware, though, that sometimes too many instructions can conflict and also take up some of your context window. Plus, if you forget what instructions you gave it, you might get seemingly unexpected responses in future chats. (You can always change the custom instructions and/or turn them on and off.)
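
As promised above, here is a minimal sketch (not the actual code from my project) of what a power calculation can look like in a Jupyter notebook; the effect size, alpha, and power values are placeholders that you would replace with numbers from your own data and study design.

```python
# Minimal sketch of a two-sample (independent groups) power calculation using statsmodels.
# All input values below are hypothetical placeholders, not numbers from my study.
from statsmodels.stats.power import TTestIndPower

effect_size = 0.5   # hypothetical Cohen's d estimated from prior data
alpha = 0.05        # significance level
power = 0.80        # desired statistical power

analysis = TTestIndPower()
n_per_group = analysis.solve_power(effect_size=effect_size, alpha=alpha, power=power)
print(f"Estimated sample size per group: {n_per_group:.1f}")
```

Note that this sketch assumes an independent two-sample comparison with roughly normally distributed data; as the statistician pointed out to me, a different design or non-normal data would call for a different calculation.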

I’m hoping this helps give people confidence or context to try things with LLMs that they were not willing to try before; or to help get in the habit of remembering to try things with LLMs; and to get the best possible output for the project that they’re working on.

Remember:

  • Right-size the task by making a clear ask.
  • You can use different chat windows for different levels of the same project.
  • Use a list to help you, the human, keep track of all the pieces that contribute to the bigger picture of the project.
  • Try giving the LLM a persona for an ask; and test whether you also need to assign yourself a persona or not for a particular type of request.
  • Be specific, think of the LLM as a conversational partner that can’t read your mind.
  • Don’t be afraid to start over with a new context window/chat.
  • Things that were hard a year ago might be easier with an LLM; you should try again.
  • You can do more, partnering with an LLM, than you can on your own, and likely can do things you didn’t realize were possible for you to do!

Clear thinking + clear communication of ideas/request = effective prompting => effective code and other outputs

Have any tips to help others get more effective output from LLMs? I’d love to hear them, please comment below and share your tips as well!

Tips for prompting LLMs like ChatGPT, written by Dana M. Lewis and available from DIYPS.org

Accepted, Rejected, and Conflict of Interest in Gastroenterology (And Why This Is A Symptom Of A Bigger Problem)

Recently, someone published a new clinical practice update on exocrine pancreatic insufficiency (known as EPI or PEI) in the journal Gastroenterology, from the American Gastroenterological Association (AGA). Those of you who’ve read any of my blog posts in the last year know how much I’ve been working to raise awareness of EPI, which is very under-researched and under-treated clinically despite the prevalence rates in the general population and key sub-populations such as PWD. So when there was a new clinical practice update and another publication on EPI in general, I was jazzed and set out to read it immediately. Then frowned. Because, like so many articles about EPI, it’s not *quite* right about many things and it perpetuates a lot of the existing problems in the literature. So I did what I could, which was to check the journal’s requirements for a letter to the editor (LTE) in response to this article, then draft and submit an LTE about it. To my delight, on October 17, 2023, I got an email indicating that my LTE was accepted.

You can find my LTE as a pre-print here.

See below why this pre-print version is important, and why you should read it, plus what it reminds us about what journal articles can or cannot tell us in healthcare.

Here’s an image of my acceptance email. I’ll call out a key part of the email:

A print of the acceptance email I received on October 17, 2023, indicating my letter would be sent to authors of the original articles for a chance to choose to respond (or not). Then my LTE would be published.

Letters to the Editor are sent to the authors of the original articles discussed in the letter so that they might have a chance to respond. Letters are not sent to the original article authors until the window of submission for letters responding to that article is closed (the last day of the issue month in which the article is published). Should the authors choose to respond to your letter, their response will appear alongside your letter in the journal.

Given the timeline described, I knew I wouldn’t hear more from the journal until the end of November. The article went online ahead of print in September, meaning it was likely officially published in October, so the letters wouldn’t be sent to authors until the end of October.

And then I did indeed hear back from the journal. On December 4, 2023, I got the following email:

A print of the email I received saying the LTE was now rejected
TLDR: just kidding, the committee – members of which published the article you’re responding to – and the editors have decided not to publish your article. 

I was surprised – and confused. The committee members, or at least 3 of them, wrote the article. They should have a chance to decide whether or not to write a response letter, which is standard. But telling the editors not to publish my LTE? That seems odd and in contrast to the initial acceptance email. What was going on?

I decided to write back and ask. “Hi (name redacted), this is very surprising. Could you please provide more detail on the decision making process for rescinding the already accepted LTE?”

The response?

Another email explaining that possible commercial affiliations influenced their choice to reject the article after accepting it originally
In terms of this decision, possible commercial affiliations, as well as other judgments of priority and relevance among other submissions, dampened enthusiasm for this particular manuscript. Ultimately, it was not judged to be competitive for acceptance in the journal.

Huh? I don’t have any commercial affiliations. So I asked again, “Can you clarify what commercial affiliations were perceived? I have none (nor any financial conflict of interest; nor any funding related to my time spent on the article) and I wonder if there was a misunderstanding when reviewing this letter to the editor.”

The response was “There were concerns with the affiliation with OpenAPS; with the use of the term “guidelines,” which are distinct from this Clinical Practice Update; and with the overall focus being more fit for a cystic fibrosis or research audience rather than a GI audience.”

A final email saying the concern was with my affiliation with OpenAPS, which is not a commercial organization nor related to the field of gastroenterology and EPI

Aha, I thought, there WAS a misunderstanding. (And the latter makes no sense in the context of my LTE – the point of it is that most research and clinical literature has too narrow a focus, cystic fibrosis being one example – the very point is that a broad gastroenterology audience should pay attention to EPI.)

I wrote back and explained how I, as a patient/independent researcher, struggle to submit articles to manuscript systems without a Ringgold-verified organization. (You can also listen to me describe the problem in a podcast, here, and I also talked about it in a peer-reviewed journal article about citizen science and health-related journal publishing here.) So I use OpenAPS as an “affiliation” even though OpenAPS isn’t an organization, let alone a commercial organization. I have no financial conflict of interest related to OpenAPS, and zero financial conflict of interest or commercial or any type of funding in gastroenterology at all, related to EPI or not. I actually go to such extremes to describe even perceived conflicts of interest, even non-financial ones, as you can see in my disclosure statement publicly available from the New England Journal of Medicine here on our CREATE trial article (scroll to Supplemental Information and click on Disclosure Forms), where I articulate that I have no financial conflicts of interest but acknowledge openly that I created the algorithm used in the study. Yet, there’s no commercial or financial conflict of interest.

A screenshot from the publicly available disclosure form on NEJM's site, where I am so careful to indicate possible conflicts of interest that are not commercial or financial, such as the fact that I developed the algorithm that was used in that study. Again, that's a diabetes study and a diabetes example; the paper we are discussing here is on exocrine pancreatic insufficiency (EPI) and gastroenterology, which is unrelated. I have no COI in gastroenterology.

I sent this information back to the journal, explaining this and asking if the editors would reconsider the situation, given that the authors (committee members?) had misconstrued my affiliation, and given that the LTE was originally accepted.

Sadly, there was no change. They are still declining to publish this article. And there is no change in my level of disappointment.

Interestingly, here is the article my LTE was in reply to, and the conflict of interest statement by the authors (committee members?) who possibly raised a flag about a supposed concern regarding my (this is not true) commercial affiliation:

The conflict of interest statement for authors from the article "AGA Clinical Practice Update on the Epidemiology, Evaluation, and Management of Exocrine Pancreatic Insufficiency 2023"

The authors disclose the following: David C. Whitcomb: consultant for AbbVie, Nestlé, Regeneron; cofounder, consultant, board member, chief scientific officer, and equity holder for Ariel Precision Medicine. Anna M. Buchner: consultant for Olympus Corporation of America. Chris E. Forsmark: grant support from AbbVie; consultant for Nestlé; chair, National Pancreas Foundation Board of Directors.

As a side note, one of the companies providing consulting fees and/or grant funding to two of the three authors is the biggest manufacturer of pancreatic enzyme replacement therapy (PERT), which is the treatment for EPI. I don't think this conflict of interest makes these clinicians ineligible to write their article, nor do I think commercial interests should preclude anyone from publishing; in my case, though, it is irrelevant, because I have none. But it does seem odd, given the authors' stated COIs, for my (actually not a) COI to then be a reason to reject an LTE, of all things.

Here’s the point, though.

It’s not really about the fact that I had an accepted article rejected (although that is weird, to say the least…).

The point is that the presence of information in medical and research journals does not mean that it is correct. (See this post describing the incorrect facts presented about the prevalence of EPI, for example.)

And similarly, the absence of material from medical and research journals does not mean that something is not true or is not fact!

There is a lot of gatekeeping in scientific and medical research. You can see it illustrated here in this accepted-then-rejected dance over a supposed COI (when there are zero commercial ties, let alone a COI), and alluded to in terms of the prioritization of what gets published.

I see this often.

There is good research that goes unpublished because editors decide not to prioritize it (aka do not allow it to be published). Many factors like this affect what gets published.

There are also systemic barriers.

  • Many journals require fees (called article processing charges, or APCs) if your article is accepted for publication. If you don't have funding, that means you can't publish there unless you want to pay $2500 (or more) out of pocket. Some journals even have submission fees of hundreds of dollars, just to submit! (At least APCs are usually only levied if your article is accepted, but you won't submit to these journals if you know you can't pay the APC.) That means the few journals in your field that don't require APCs or fees are harder to get published in, because many more articles are submitted to those "free" journals, which in turn feeds the "prioritization" problem at the editor level.
  • Journals often require, as previously described, your organization to be part of a verified list (maintained by a third-party organization) in order for your article to be moved through the queue once submitted. Instead of n/a, I started listing "OpenAPS" as my affiliation and proactively writing to admin teams to let them know that my affiliation won't be Ringgold-verified, explaining that it's not an org and I'm not at any institution; after that, my article can (usually) get moved through the queue ok. But as I wrote in this peer-reviewed article, which has a lot of other details about barriers to publishing citizen science and other patient-driven work, it's one of many barriers involved in the publication process. It's a little hard; every journal and submission system is a little different; and it's a lot harder for us than it is for people who have staff/support to help them get articles published in journals.

I’ve seen grant funders say no to funding researchers who haven’t published yet; but editors also won’t prioritize publishing them on a topic in a field where they haven’t been funded yet or aren’t well known. Or they aren’t at a prestigious organization. Or they don’t have the “right” credentials. (Ahem, ahem, ahem.) It can be a vicious cycle even for traditional (aka day job) researchers and clinicians. Now imagine it for people who are not inside those systems of academia or medical organizations.

Yet think about where much of our knowledge is captured, created, translated, and studied: it’s not solely in these organizations.

Thus, the mismatch. What’s in journals isn’t always right, and the process of peer review can’t catch everything. It’s not a perfect system. But what I want you to take away, if you didn’t already have this context, is an understanding that something NOT being in a journal does not mean the information is not fact or does not exist. It may not have been studied yet, or it may have been blocked from publication by the systemic forces in play.

As I said at the end of my LTE:

It is also critical to update the knowledge base of EPI beyond the sub-populations of cystic fibrosis and chronic pancreatitis that are currently over-represented in the EPI-related literature. Building upon this updated research base will enable future guidelines, including those like the AGA Clinical Practice Update on EPI, to be clearer, more evidence-based, and truly patient-centric, ensuring that every individual living with exocrine pancreatic insufficiency receives optimal care.

PS – want to read my LTE that was accepted then rejected, meaning it won’t be present in the journal? Here it is on a preprint server with a DOI, which means it’s still easily citable! Here’s an example citation:

Lewis, D. Navigating Ambiguities in Exocrine Pancreatic Insufficiency. OSF Preprints. 2023. DOI: 10.31219/osf.io/xcnf6

New Survey For Everyone (Including You – Yes, You!) To Help Us Learn More About Exocrine Pancreatic Insufficiency

If you’ve ever wanted to help with some of my research, this is for you. Yes, you! I am asking people in the general public to take a survey (https://bit.ly/GI-Symptom-Survey-All) and share their experiences.

Why?

Many people have stomach or digestion problems occasionally. For some people, these symptoms happen more often. In some cases, the symptoms are related to exocrine pancreatic insufficiency (known as EPI or PEI). But to date, there have been few studies looking at the frequency of symptoms – or the level of their self-rated severity – in people with EPI or what symptoms may distinguish EPI from other GI-related conditions.

That’s where this survey comes in! We want to compare the experiences of people with EPI to people without EPI (like you!).

Will you help by taking this survey?

Your anonymous participation in this survey will help us understand the unique experiences individuals have with GI symptoms, including those with conditions like exocrine pancreatic insufficiency (EPI). In particular, data contributed by people without EPI will help us understand how the EPI experience is different (or not).

A note on privacy:

  • The survey is completely anonymous; no identifying information will be collected.
  • You can stop the survey at any point.

Who designed this survey:

Dana Lewis, an independent researcher, developed the survey and will manage the survey data. The survey design and the choice to run this survey are not influenced by funding from, or affiliations with, any organization.

What happens to the data collected in this survey:

The aggregated data will be analyzed for patterns and shared through blog posts and academic publications. No individual data will be shared. This will help fill some of the documented gaps in the EPI-related medical knowledge and may influence the design of targeted research studies in the future.

Have Questions?
Feel free to reach out to Dana+GISymptomSurvey@OpenAPS.org.

How else can you help?
Remember, ANYONE can take this survey. So, feel free to share the link with your family and friends – they can take it, too!

Here’s a link to the survey that you can share (after taking it yourself, of course!): https://bit.ly/GI-Symptom-Survey-All

You (yes you!) can help us learn about exocrine pancreatic insufficiency by taking the survey linked on this page.