New Systematic Review Showing General Population Prevalence of Exocrine Pancreatic Insufficiency Is Higher Than Rates of Co-Conditions

For those unfamiliar with academic/medical journal publishing, it is slow. Very slow. I did a systematic review on EPI prevalence and submitted it to a journal on May 5, 2023. It underwent peer review and a round of revisions and was accepted on July 13, 2023. (That part is actually relatively quick.) However, it sat, and sat, and sat, and sat, and sat. I was impatient and wrote a blog post last year about the basic premise of the review, which is that despite commonly repeated statements that the prevalence of EPI is so high in co-conditions that those conditions must therefore be the biggest drivers of EPI… this is unlikely to be true, because it is mathematically improbable.

And then this paper still sat several more months until it was published online ahead of print…today! Wahoo! You can read “An Updated Review of Exocrine Pancreatic Insufficiency Prevalence finds EPI to be More Common in General Population than Rates of Co-Conditions” in the Journal of Gastrointestinal and Liver Diseases ahead of print (scheduled for the March 2024 issue).

It’s open access (and I didn’t have to pay for it to be!), so click here to go read it and download your own PDF copy of the article there. (As a reminder, I also save a version of every article including those that are not open access at DIYPS.org/research, in case you’re looking for this in the future or want to read some of my other research.) If you don’t want to read the full article, here’s a summary below and key takeaways for providers and patients (aka people like me with EPI!).

I read and systematically categorized 649 articles related to exocrine pancreatic insufficiency, which is known as EPI or PEI depending on where in the world you are. EPI occurs when the pancreas no longer produces enough enzymes to successfully digest food completely; when this occurs, pancreatic enzyme replacement therapy (PERT) is needed. This means swallowing enzyme pills every time you eat or drink something with fat or protein in it.

Like many of my other EPI-related research articles, this one found that EPI is underdiagnosed; undertreated; treatment costs are high; and prevalence is widely misunderstood, possibly leading to key populations being missed during screening.

  • Underdiagnosis – for a clearer picture and a specific disease-related example of how EPI is likely underdiagnosed in a co-condition, check out my other systematic review specifically assessing EPI in diabetes. I show in that paper that EPI is likely many times more common than gastroparesis and celiac disease in people with diabetes, yet it’s less likely to be screened for.
  • Undertreated – another recent systematic review that I wrote after this paper (but was published sooner) is this systematic review on PERT dosing guidelines and dosing literature, showing how the overwhelming majority of people are not prescribed enough enzymes to meet their needs. Thus, symptoms persist and the literature continues to state that symptoms can’t be managed with PERT, which is not necessarily true: it just hasn’t been studied correctly with sufficient titration protocols.
  • PERT costs are high – I highlight that although PERT costs continue to rise each year, there are studies in different co-condition populations showing PERT treatment is cost-effective and in some cases reduces the overall cost of healthcare. It’s hard to believe when we look at the individual out of pocket costs related to PERT sometimes, but the data more broadly shows that PERT treatment in many populations is cost-effective.
  • Prevalence of EPI is misunderstood. This is the bulk of the paper and goes into a lot of detail showing how the general population estimates of EPI may be as high as 11-21%. In contrast, although prevalence of EPI is much higher within co-conditions, these conditions are such a small fraction of the general population that they therefore are also likely a small fraction of the EPI population.

As I wrote in the paper:

“The overall population prevalence of cystic fibrosis, pancreatitis, cancer, and pancreatic-related surgery combined totals <0.1%, and the lower end of the estimated overall population prevalence of EPI is approximately 10%, which suggests less than 1% of the overall incidence of EPI occurs in such rare co-conditions.

We can therefore conclude that 99% of EPI occurs in those without a rare co-condition.”
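To make the arithmetic behind that conclusion concrete, here’s a minimal sketch (in Python). The numbers are the rounded estimates from the quote above, and the “every person with a co-condition has EPI” assumption is deliberately generous so the result is an upper bound:

```python
# Illustrative upper-bound math behind the quote (rounded values from the paper's estimates).
co_condition_prevalence = 0.001   # CF + pancreatitis + pancreatic cancer + pancreatic surgery combined: <0.1%
epi_general_prevalence  = 0.10    # lower-end estimate of EPI prevalence in the general population (~10%)

# Even if EVERY person with one of those co-conditions had EPI (a generous assumption),
# they could only account for this share of all EPI cases:
max_share_of_epi = co_condition_prevalence / epi_general_prevalence
print(f"At most {max_share_of_epi:.0%} of EPI cases can come from these rare co-conditions")
# => at most 1%, which is why ~99% of EPI must occur in people without these rare co-conditions.
```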

I also pointed out the mismatch of research prioritization and funding to date in EPI. 56-85% of the EPI-related research is focused on those representing less than ~1% of the overall population with EPI.

So what should you take away from this research?

If you are a healthcare provider:

Make sure you are screening people who present with gastrointestinal symptoms with a fecal elastase test to check for EPI. Weight loss and malnutrition do not always occur with EPI (which is a good thing when it means EPI is caught earlier), and similarly, not everyone has diarrhea as their hallmark symptom. Messy, smelly stools are commonly described by people with EPI, among other symptoms such as excess gas and bloating.

Remember that conditions like diabetes have a high prevalence of EPI – it’s not just chronic pancreatitis or cystic fibrosis.

If you do have a patient that you are diagnosing or have diagnosed with EPI, make sure you are aware of the current dosing guidelines (see this systematic review) and 1) prescribe a reasonable minimal starting dose; 2) tell the patient when/how they can adjust their PERT on their own and when to call back for an updated prescription as they figure out what they need; and 3) tell them they will likely need an updated prescription and you are ready to support them when they need to do so.

If you are a person living with EPI:

Most people with EPI are not taking enough enzymes to eliminate their symptoms. Dose timing matters (take it with/throughout meals), and the quantity of PERT matters.

If you’re still having symptoms, you may still need more enzymes.

Don’t compare what you are doing to what other people are taking: it’s not a moral failing to need a different amount of enzymes (or insulin, for that matter, or any other medication) than another person! It also likely varies by what we are eating, and we all eat differently.

If you’re still experiencing symptoms, you may need to experiment with a higher dose. If you still have symptoms or have new symptoms that start after taking PERT, you may need to try a different brand of PERT. Some people do well on one but not another, and there are different kinds you can try – ask your doctor.

How to cite this systematic review:

Lewis D. An Updated Review of Exocrine Pancreatic Insufficiency Prevalence finds EPI to be More Common in General Population than Rates of Co-Conditions. Journal of Gastrointestinal and Liver Diseases. 2024. DOI: 10.15403/jgld-5005

For other posts related to EPI, see DIYPS.org/EPI for more of my personal experiences with EPI and other plain-language research summaries.

For other research articles, see DIYPS.org/research

A systematic review shows EPI prevalence is more common in the general population than in co-conditions


New Systematic Review And Evaluation of Pancreatic Enzyme Replacement Therapy (PERT) Dosing Guidelines and Research for Exocrine Pancreatic Insufficiency (EPI or PEI)

I wrote a new paper evaluating the research behind pancreatic enzyme replacement therapy (aka, PERT) dosing for people with exocrine pancreatic insufficiency (known as EPI or PEI). I decided to do this research and write this paper because in my previous papers on EPI, I saw a lot of inconsistencies in when PERT was studied, how it was studied, and how that research was then used to develop guidelines.

(Big thanks to Julia Blanchette, Jordan Rieke, Claudia Lewis (no relation), Khaleal Almusaylim, and Anuhya Kanchibhatla for collaborating on this research and co-authoring the paper with me!)

You can find an author copy of the paper here, or see it on the journal website here. As a reminder, all my research papers have author copies and you can find them at DIYPS.org/research! I also have several other EPI-related articles.

A note on methods – this is a systematic review, meaning I used keywords to search multiple electronic databases to find articles about exocrine pancreatic insufficiency. I screened articles to make sure they were about EPI in humans and focused on English-language articles. We then reviewed the title and abstract of the 2,530 remaining articles (!) that mentioned EPI, and excluded those that were not focused on EPI or a co-condition and were unlikely to include guidelines or specific dose information related to EPI. That left 820 articles, which we then screened again, retrieving the full text and reviewing them for relevancy. I ended up reading 257 papers that we used as the basis of the research described below!

We found 7 key findings from this body of research:

  1. PERT Titration Protocols Aren’t Very Specific (or useful as typically written). “Most PERT dosing guidelines do not articulate a specific, defined dose range. Instead, PERT is commonly dosed with a general starting dose, such as 50,000 units of lipase per meal and 25,000 units of lipase per snack. If needed, guidelines then recommend increasing (i.e., titrating) the dosage by a factor of two to three (commonly described as increasing by 2x – 3x), and if symptoms persist, adding a proton pump inhibitor (PPI) before exploring other potential diagnoses. As a result, providers are prompted to focus primarily on the starting dose, rather than the full range of recommended doses.”

    I ended up crafting a table (Table 2) for the paper that shows how this dosing process can result in much bigger doses – such as 150,000 units of lipase per meal – in contrast to how prescriptions are often written at very low doses that often are not sufficient. (A small illustrative sketch of this dose math appears after this list.)

    This is a similar version of the table that I had developed for a previous blog post talking about the ranges of PERT dosing:
    Examples of PERT starting doses of 25,000, 40,000, and 50,000 (plus half that for snacks) and what the dose would be if increased according to guidelines to 2x and 3x, plus the sum of the total daily dose needed at those levels.
    Most guidelines, and the underlying studies, do not do a good job of describing what doses people actually took in the studies. This may then influence providers’ understanding of how much PERT is needed.

  2. People are not taking enough PERT. Like I found in my own previous research, there have been numerous studies showing that people are not getting prescribed enough PERT. This is based both on people reporting ongoing symptoms and reduced quality of life, and on studies that show a huge gap between the doses recommended as a starting point in guidelines and the fact that >90% of the time, providers don’t prescribe anywhere near this dose (and therefore are not prescribing enough PERT).
  3. Comparing different PERT studies is challenging. When PERT studies are done, they are typically for safety and efficacy at a specific dose. Very few studies record what dosing people take when they are allowed to take the amount that they need to effectively reduce symptoms.

    As a result, we don’t know how much PERT people need (on average) in order to reduce symptoms.

  4. PERT Dosing Studies and Guidelines Only Focus on Fat (and we need to talk about protein). If you’ve read my previous blog posts about ratios and PERT dosing, you’ll notice I talk about protein dosing. For some people with EPI, protein dosing makes a huge difference in symptom outcomes.

    However, PERT is described based on units of lipase (for fat digestion) and primarily studied for fat, which means that doctors often prescribe it and only talk about changing PERT doses for different sized meals based on fat.

    This is a huge area of need for future studies to determine what role protein malabsorption plays for people with EPI. I suspect, based on personal experience and on talking to others in the EPI community about when they have symptoms, that this influences a lot of PERT dosing efficacy in real life.

  5. PERT Dosing Guidelines Are Very Different Around The World – But Should They Be? There are dozens of PERT dosing guidelines by condition, and in different parts of the world. They don’t always agree!

    My hypothesis is that this is not because of a true geographic variation in PERT dosing needs (meaning your PERT dosing needs aren’t likely different if you live in South America or Europe), but because of the selection of studies used to determine the guidelines. And because most studies have only looked at basic, minimal doses for safety/efficacy, they haven’t studied how much people need to eliminate symptoms. There’s also no data on what people eat in these studies, so the perceived ‘regional’ differences may be a result of different food composition, but we have no evidence for this because the studies are poorly described and/or don’t actually record what people ate.

  6. PERT Dosing Guidelines Are Different By Co-Condition. The majority of the studies on EPI and PERT dosing are in chronic pancreatitis (CP). As I’ve written previously, this is likely a small fraction of the number of people with EPI. But because this body of research on CP and EPI is so big, it has a very loud voice in determining what the guidelines say about PERT dosing. (Cystic fibrosis (CF) is the second-most studied and also plays the second-biggest role in influencing guidelines.)

    If you want to dig into the differences between conditions, note that the guidelines are influenced by the volume of studies: many conditions (such as diabetes) have very few guidelines and very few studies, so most of the ‘guidance’ on dosing is extrapolated from CF and/or chronic pancreatitis. It’s therefore very possible that people with EPI need higher or different dosing than what has been studied in those co-conditions – but we don’t know, because it hasn’t been studied!

    (I have a lot of details in the paper about what has been studied, and you can look at Table 4 for a summary of some of the less-studied conditions or check out the appendix for a narrative description of all of the co-conditions and their bodies of research.)

  7. PERT Dosing Is Determined By Clinicians And They’re Not Following The Guidelines. Most doctors and clinicians are not following PERT guidelines. This means that many people are prescribed a too-low dose of PERT according to the guidelines. This could be because providers are unaware of the guidelines; or don’t agree with the guidelines; or have not seen evidence showing clear effects of PERT on symptom resolution (in part because this hasn’t been studied!).

    More work needs to be done to understand why patients with EPI are under-prescribed and under-dosed when prescribed, and understanding barriers for clinicians may be a key factor to study moving forward.
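As referenced in finding 1 above, here’s a minimal sketch of the titration math the guidelines imply. It assumes a guideline-style starting dose with snacks dosed at half the meal dose, and an illustrative 3-meal, 2-snack day (both assumptions on my part); it’s an illustration of the ranges, not a dosing recommendation:

```python
def titration_range(meal_start_units: int, meals_per_day: int = 3, snacks_per_day: int = 2):
    """Show how guideline-style titration (2x-3x the starting dose) scales the per-meal
    dose and total daily lipase, assuming snacks get half the per-meal dose."""
    snack_start_units = meal_start_units // 2
    for factor in (1, 2, 3):  # starting dose, then the 2x and 3x titration steps
        meal_dose = meal_start_units * factor
        snack_dose = snack_start_units * factor
        daily_total = meal_dose * meals_per_day + snack_dose * snacks_per_day
        print(f"{factor}x: {meal_dose:,} units/meal, {snack_dose:,} units/snack, "
              f"{daily_total:,} units/day")

titration_range(50_000)
# 1x:  50,000 units/meal, 25,000 units/snack, 200,000 units/day
# 2x: 100,000 units/meal, 50,000 units/snack, 400,000 units/day
# 3x: 150,000 units/meal, 75,000 units/snack, 600,000 units/day
```

This is why a prescription that only covers the literal starting dose (or less) runs out quickly for anyone who actually titrates per the guidelines.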

So, what next?

Here’s what I want to see studied next for EPI, based on the findings in this paper:

  1. All PERT studies should clearly document the titration protocol in a way that can be understood and reproduced.
  2. PERT studies should record what dose people take throughout or at the end of the trial.
  3. PERT should be studied for symptom resolution. (PS – take the anonymous EPI symptom survey if you haven’t already!) This should be done outside of conditions such as chronic pancreatitis, because the pain associated with CP confounds the assessment of EPI symptoms. And, CP is a tiny fraction of EPI and should therefore not be used to determine whether PERT is effective at resolving EPI-related symptoms.
  4. We need more awareness of the prevalence of EPI and for clinicians to screen for EPI. When elastase results are low (e.g. less than or equal to 200-ish), providers should initiate a trial of PERT and aid people in increasing their doses to the point that symptoms resolve. We need to study the barriers/factors determining why providers are not screening for EPI and why they are not prescribing PERT.
  5. We need more tools to help doctors and patients increase PERT dosing to achieve symptom resolution.
  6. We need studies on the effect of protein in the diet of people with EPI and PERT dosing to improve protein digestion.

If clinicians are reading this, here is your call to action:

  • Screen for EPI using a fecal elastase test. This includes anyone presenting with GI symptoms, not just in people that you suspect have chronic pancreatitis. You’re probably missing a not-insignificant number of people coming to you with EPI. For example, a previous systematic review shows EPI is likely much more common in people with diabetes than celiac or gastroparesis!
  • If fecal elastase results are around or below 200, prescribe PERT. Yes, even if they’re close to 200 – PERT can help for those with EPI who have symptoms!

    This study was published after our systematic review, so I wasn’t able to cite it in the paper, but it includes evidence that PERT can also help reduce symptoms when elastase is 200-500. Don’t get too hung up on the elastase result; it’s not very precise, but that doesn’t mean you shouldn’t prescribe a trial of PERT.
  • Prescribe PERT at a minimum of 40,000-50,000 units PER MEAL and tell patients specifically to increase dosing as needed, such as when they’re eating larger meals. Many people need much larger doses (evidence here). Give guidance on how to adjust based on meals. If you want tools, consider things like PERT Pilot or other calculators to aid in matching dosing to food intake. This matches the recent AGA Clinical Practice Update on the Epidemiology, Evaluation, and Management of Exocrine Pancreatic Insufficiency (EPI) by Whitcomb et al, which emphasizes that “PERT treats the meal, not the pancreas”, meaning that PERT should match food intake. The level of elastase does NOT determine the dosing need, and the size of your prescriptions shouldn’t be influenced by the elastase result.

    All EPI needs PERT, and PERT needs should be driven by the individual’s symptoms and the dose it takes to reduce or eliminate their symptoms.

Here’s how to cite this paper:

Lewis DM, Rieke JG, Almusaylim K, Kanchibhatla A, Blanchette JE, Lewis C. Exocrine Pancreatic Insufficiency Dosing Guidelines For Pancreatic Enzyme Replacement Therapy Vary Widely Across Disease Types. Digestive Diseases and Sciences. 2023. https://doi.org/10.1007/s10620-023-08184-w

Accepted, Rejected, and Conflict of Interest in Gastroenterology (And Why This Is A Symptom Of A Bigger Problem)

Recently, someone published a new clinical practice update on exocrine pancreatic insufficiency (known as EPI or PEI) in the journal Gastroenterology, from the American Gastroenterological Association (AGA). Those of you who’ve read any of my blog posts in the last year know how much I’ve been working to raise awareness of EPI, which is very under-researched and under-treated clinically despite the prevalence rates in the general population and key sub-populations such as PWD. So when there was a new clinical practice update and another publication on EPI in general, I was jazzed and set out to read it immediately. Then frowned. Because, like so many articles about EPI, it’s not *quite* right about many things and it perpetuates a lot of the existing problems in the literature. So I did what I could: I checked out the journal’s requirements for writing a letter to the editor (LTE) in response to this article, then drafted and submitted an LTE about it. To my delight, on October 17, 2023, I got an email indicating that my LTE was accepted.

You can find my LTE as a pre-print here.

See below why this pre-print version is important, and why you should read it, plus what it reminds us about what journal articles can or cannot tell us in healthcare.

Here’s an image of my acceptance email. I’ll call out a key part of the email:

A print of the acceptance email I received on October 17, 2023, indicating my letter would be sent to authors of the original articles for a chance to choose to respond (or not). Then my LTE would be published.

Letters to the Editor are sent to the authors of the original articles discussed in the letter so that they might have a chance to respond. Letters are not sent to the original article authors until the window of submission for letters responding to that article is closed (the last day of the issue month in which the article is published). Should the authors choose to respond to your letter, their response will appear alongside your letter in the journal.

Given the timeline described, I knew I wouldn’t hear more from the journal until the end of November. The article went online ahead of print in September, meaning likely officially published in October, so the letters wouldn’t be sent to authors until the end of October.

And then I did indeed hear back from the journal. On December 4, 2023, I got the following email:

A print of the email I received saying the LTE was now rejected
TLDR: just kidding, the committee – members of which published the article you’re responding to – and the editors have decided not to publish your article. 

I was surprised – and confused. The committee members, or at least 3 of them, wrote the article. They should have a chance to decide whether or not to write a response letter, which is standard. But telling the editors not to publish my LTE? That seems odd and in contrast to the initial acceptance email. What was going on?

I decided to write back and ask. “Hi (name redacted), this is very surprising. Could you please provide more detail on the decision making process for rescinding the already accepted LTE?”

The response?

Another email explaining that possible commercial affiliations influenced their choice to reject the article after accepting it originally
In terms of this decision, possible commercial affiliations, as well as other judgments of priority and relevance among other submissions, dampened enthusiasm for this particular manuscript. Ultimately, it was not judged to be competitive for acceptance in the journal.

Huh? I don’t have any commercial affiliations. So I asked again, “Can you clarify what commercial affiliations were perceived? I have none (nor any financial conflict of interest; nor any funding related to my time spent on the article) and I wonder if there was a misunderstanding when reviewing this letter to the editor.”

The response was “There were concerns with the affiliation with OpenAPS; with the use of the term “guidelines,” which are distinct from this Clinical Practice Update; and with the overall focus being more fit for a cystic fibrosis or research audience rather than a GI audience.”

A final email saying the concern was with my affiliation with OpenAPS, which is neither a commercial organization nor related to the field of gastroenterology and EPI

Aha, I thought, there WAS a misunderstanding. (And the latter makes no sense in the context of my LTE – the point of it is that most research and clinical literature has too narrow a focus, cystic fibrosis being one example – the very point is that a broad gastroenterology audience should pay attention to EPI.)

I wrote back and explained how I, as a patient/independent researcher, struggle to submit articles to manuscript systems without a Ringgold-verified organization. (You can also listen to me describe the problem in a podcast, here, and I also talked about it in a peer-reviewed journal article about citizen science and health-related journal publishing here). So I use OpenAPS as an “affiliation” even though OpenAPS isn’t an organization. Let alone a commercial organization. I have no financial conflict of interest related to OpenAPS, and zero financial conflict of interest or commercial or any type of funding in gastroenterology at all, related to EPI or not. I actually go to such extremes to describe even perceived conflicts of interest, even non-financial ones, as you can see this in my disclosure statement publicly available from the New England Journal of Medicine here on our CREATE trial article (scroll to Supplemental Information and click on Disclosure Forms) where I articulate that I have no financial conflicts of interest but acknowledge openly that I created the algorithm used in the study. Yet, there’s no commercial or financial conflict of interest.

A screenshot from the publicly available disclosure form on NEJM's site, where I am so careful to indicate possible conflicts of interest that are not commercial or financial, such as the fact that I developed the algorithm that was used in that study. Again, that's a diabetes study and a diabetes example, the paper we are discussing here is on exocrine pancreatic insufficiency (EPI) and gastroenterology, which is unrelated. I have no COI in gastroenterology.

I sent this information back to the journal, explaining this, and asking if the editors would reconsider the situation, given that the authors (committee members?) have misconstrued my affiliation, and given that the LTE was originally accepted.

Sadly, there was no change. They are still declining to publish this article. And there is no change in my level of disappointment.

Interestingly, here is the article that my LTE was in reply to, and the conflict of interest statement by the authors (committee members?) who possibly raised a flag about a supposed concern about my (nonexistent) commercial affiliation:

The conflict of interest statement for authors from the article "AGA Clinical Practice Update on the Epidemiology, Evaluation, and Management of Exocrine Pancreatic Insufficiency 2023"

The authors disclose the following: David C. Whitcomb: consultant for AbbVie, Nestlé, Regeneron; cofounder, consultant, board member, chief scientific officer, and equity holder for Ariel Precision Medicine. Anna M. Buchner: consultant for Olympus Corporation of America. Chris E. Forsmark: grant support from AbbVie; consultant for Nestlé; chair, National Pancreas Foundation Board of Directors.

As a side note, one of the companies providing consulting and/or grant funding to two of the three authors is the biggest manufacturer of pancreatic enzyme replacement therapy (PERT), which is the treatment for EPI. I don’t think this conflict of interest makes these clinicians ineligible to write their article; nor do I think commercial interests should preclude anyone from publishing – but in my case, it is irrelevant, because I have none. Still, given the authors’ stated COIs, it seems odd for my (actually nonexistent) COI to then be a reason to reject an LTE, of all things.

Here’s the point, though.

It’s not really about the fact that I had an accepted article rejected (although that is weird, to say the least…).

The point is that the presence of information in medical and research journals does not mean that they are correct. (See this post describing the incorrect facts presented about prevalence of EPI, for example.)

And similarly, the lack of presence of material in medical and research journals does not mean that something is not true or is not fact! 

There is a lot of gatekeeping in scientific and medical research. You can see it illustrated here in this accepted-rejected dance because of supposed COI (when there are zero commercial ties, let alone COI) and alluded to in terms of the priority of what gets published.

I see this often.

There is good research that goes unpublished because editors decide not to prioritize it (aka do not allow it to get published). There are many such factors in play affecting what gets published.

There are also systemic barriers.

  • Many journals require fees (called article processing charges or “APC”s) if your article is accepted for publication. If you don’t have funding, that means you can’t publish there unless you want to pay $2500 (or more) out of pocket. Some journals even have submission fees of hundreds of dollars, just to submit! (At least APCs are usually only levied if your article is accepted, but you won’t submit to these journals if you know you can’t pay the APC.) That means the few journals in your field that don’t require APCs or fees are harder to get published in, because many more articles are submitted to these “free” journals (thus influencing the “prioritization” problem at the editor level).
  • Journals often require, as previously described, your organization to be part of a verified list (maintained by a third-party org) in order for your article to be moved through the queue once submitted. Instead of n/a, I started listing “OpenAPS” as my affiliation and proactively writing to admin teams to let them know that my affiliation won’t be Ringgold-verified, explaining that it’s not an org/I’m not at any institution, and then my article can (usually) get moved through the queue ok. But as I wrote in this peer-reviewed article with a lot of other details about barriers to publishing citizen science and other patient-driven work, it’s one of many barriers involved in the publication process. It’s a little hard; every journal and submission system is a little different, and it’s a lot harder for us than it is for people who have staff/support to help them get articles published in journals.

I’ve seen grant funders say no to funding researchers who haven’t published yet; but editors also won’t prioritize them to publish on a topic in a field where they haven’t been funded yet or aren’t well known. Or they aren’t at a prestigious organization. Or they don’t have the “right” credentials. (Ahem, ahem, ahem). It can be a vicious cycle for even traditional (aka day job) researchers and clinicians. Now imagine that for people who are not inside those systems of academia or medical organizations.

Yet, think about where much of knowledge is captured, created, translated, studied – it’s not solely in these organizations.

Thus, the mismatch. What’s in journals isn’t always right, and the process of peer review can’t catch everything. It’s not a perfect system. But what I want you to take away, if you didn’t already have this context, is an understanding that what’s NOT in a journal is not because the information is not fact or does not exist. It may have not been studied yet; or it may have been blocked from publication by the systemic forces in play.

As I said at the end of my LTE:

It is also critical to update the knowledge base of EPI beyond the sub-populations of cystic fibrosis and chronic pancreatitis that are currently over-represented in the EPI-related literature. Building upon this updated research base will enable future guidelines, including those like the AGA Clinical Practice Update on EPI, to be clearer, more evidence-based, and truly patient-centric ensuring that every individual living with exocrine pancreatic insufficiency receives optimal care.

PS – want to read my LTE that was accepted then rejected, meaning it won’t be present in the journal? Here it is on a preprint server with a DOI, which means it’s still easily citable! Here’s an example citation:

Lewis, D. Navigating Ambiguities in Exocrine Pancreatic Insufficiency. OSF Preprints. 2023. DOI: 10.31219/osf.io/xcnf6

New Survey For Everyone (Including You – Yes, You!) To Help Us Learn More About Exocrine Pancreatic Insufficiency

If you’ve ever wanted to help with some of my research, this is for you. Yes, you! I am asking people in the general public to take a survey (https://bit.ly/GI-Symptom-Survey-All) and share their experiences.

Why?

Many people have stomach or digestion problems occasionally. For some people, these symptoms happen more often. In some cases, the symptoms are related to exocrine pancreatic insufficiency (known as EPI or PEI). But to date, there have been few studies looking at the frequency of symptoms – or the level of their self-rated severity – in people with EPI or what symptoms may distinguish EPI from other GI-related conditions.

That’s where this survey comes in! We want to compare the experiences of people with EPI to people without EPI (like you!).

Will you help by taking this survey?

Your anonymous participation in this survey will help us understand the unique experiences individuals have with GI symptoms, including those with conditions like exocrine pancreatic insufficiency (EPI). In particular, data contributed by people without EPI will help us understand how the EPI experience is different (or not).

A note on privacy:

  • The survey is completely anonymous; no identifying information will be collected.
  • You can stop the survey at any point.

Who designed this survey:

Dana Lewis, an independent researcher, developed the survey and will manage the survey data. This survey design and the choice to run this survey is not influenced by funding from or affiliations with any organizations.

What happens to the data collected in this survey:

The aggregated data will be analyzed for patterns and shared through blog posts and academic publications. No individual data will be shared. This will help fill some of the documented gaps in the EPI-related medical knowledge and may influence the design of targeted research studies in the future.

Have Questions?
Feel free to reach out to Dana+GISymptomSurvey@OpenAPS.org.

How else can you help?
Remember, ANYONE can take this survey. So, feel free to share the link with your family and friends – they can take it, too!

Here’s a link to the survey that you can share (after taking it yourself, of course!): https://bit.ly/GI-Symptom-Survey-All

You (yes you!) can help us learn about exocrine pancreatic insufficiency by taking the survey linked on this page.

New Research Shows Most People With Exocrine Pancreatic Insufficiency (EPI) Are Not Taking Enough Enzymes

Last year when I was diagnosed with exocrine pancreatic insufficiency (known as EPI or PEI), I quickly noticed that many people in the online social media community I joined didn’t seem to have their pancreatic enzyme replacement therapy (PERT) working effectively for them.

Possibly because I have been counting carbohydrates and dosing insulin using a ratio of insulin to carbohydrates for ~20+ years (for type 1 diabetes), it came intuitively to me to try to develop ratios of the amount of enzymes compared to the amount of macronutrients I was consuming. For me, it worked really well (and you can read more about my methods for titrating enzymes and/or check out PERT Pilot if you have an iOS phone, which helps automate the dosing calculations based on logging what you eat).
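For anyone curious what the ratio approach looks like mechanically, here’s a minimal sketch. It is not the PERT Pilot code; the ratios, pill sizes, and the pert_doses_needed helper are hypothetical placeholders, since everyone’s ratios have to be worked out individually:

```python
import math

def pert_doses_needed(fat_g: float, protein_g: float,
                      lipase_per_fat_g: float, protease_per_protein_g: float,
                      pill_lipase: int, pill_protease: int) -> int:
    """Estimate how many PERT pills cover a meal, given personal ratios of enzyme
    units per gram of fat and per gram of protein (hypothetical illustrative values)."""
    lipase_needed = fat_g * lipase_per_fat_g
    protease_needed = protein_g * protease_per_protein_g
    pills_for_fat = math.ceil(lipase_needed / pill_lipase)
    pills_for_protein = math.ceil(protease_needed / pill_protease)
    return max(pills_for_fat, pills_for_protein)  # cover whichever macronutrient needs more

# Example: a meal with 30 g fat and 40 g protein, using made-up ratios and pill sizes.
print(pert_doses_needed(fat_g=30, protein_g=40,
                        lipase_per_fat_g=2000, protease_per_protein_g=1000,
                        pill_lipase=25_000, pill_protease=62_500))  # -> 3 pills
```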

However, I was surprised at how many people still seemed to share online that their PERT wasn’t working or that they still had symptoms. It made me curious: were these folks all newly diagnosed? How long does it take for most people to titrate their enzymes (e.g. arrive at an ideal dose or dosing strategy)? There seemed to be a mismatch between what I was seeing in real life in these communities versus what was in the medical literature about typical dosing of enzymes and expected outcomes for this community.

And so, I set out to do a survey to learn more. I sought permission from the administrators of the Facebook group, designed the survey, got the administrators’ feedback and incorporated it, had a few people trial the survey, and then shared it in the Facebook group and on Twitter.

I ended up closing the survey after 3 weeks and 111 responses, although I wish I had left it open to collect more data. I was so excited to analyze the data and get it published!

…but I forgot how long and silly the traditional medical literature publishing process is. I just now got this article published, almost a year later! Sigh. Anyway, this post is to share what we learned from the EPI Community survey and what I think people – both people with EPI and clinicians – should do based on this information.

(PS – the full research paper is available here and is open access and free to read anytime! Big thanks to Dr. Arsalan Shahid for collaborating with me on writing up the results and getting this published.)

Below is a plain language summary that I wrote for those who don’t want to read the full paper.

Understanding who took the EPI Community survey

First things first, it’s helpful to understand who ended up taking the survey to help us understand the results.

111 people with EPI filled out the survey. Most (93%) were female, and most happened to be in North America. So, this survey won’t necessarily represent the entire EPI community, given the small sample size and the demographic makeup. (That being said, I found a previous EPI study with a smaller sample size and a majority of male participants where the findings matched pretty similarly, so I don’t think gender played a large role in the results.)

But I was interested to see that the ages ended up being pretty balanced: the largest group was between 55 and 64 years (27%), followed by 65-74 (23%); 45-54 (21%); 34-44 (16%); 25-34 (6%); 75+ (5%); and 18-24 (2%). The duration of how long people had EPI was also fairly distributed: diagnosed within 0-6 months (27%); 1-2 years (25%); 5+ years and 3-5 years (both 18%); or 6 months – 1 year (12%). This was all coincidental, as I did not do any particular recruitment based on age groups or length of EPI.

I was also interested and a little surprised to look at the list of other conditions that people have. 68% of people mentioned at least one other condition. Remember, we had 111 participants – and 26 of them mentioned having diabetes of any type (35% of those who reported another condition). The next most common was celiac (10 people), followed by chronic pancreatitis (8 people) and acute pancreatitis (4 people).

This is compelling additional evidence supporting my recent systematic review that shows a higher prevalence of EPI among people with diabetes, and also adds to my argument that chronic pancreatitis and cystic fibrosis are likely NOT the biggest co-conditions associated with EPI. No, this study is not necessarily a representative sample of EPI, but this is more evidence added to these arguments. People with diabetes, celiac, and other conditions presenting with GI symptoms should be screened for EPI.

Understanding Elastase Results in the EPI Community

The most common diagnosis test for EPI is the fecal elastase test. Most participants in this survey (all but 15 people) had their elastase tested, although not everyone shared the number or remembered what it was. 76 people shared their elastase results, so the sub-analyses related to elastase are based on this group rather than the overall survey participant number (111).

Of those who reported their elastase, the average was 92 (with a standard deviation of 57).

Remember that the diagnostic criteria for EPI say that anything <200 is considered to be EPI, with 100-200 being “mild/moderate” and <100 being “severe”, although the categorization technically doesn’t change anything, including how many enzymes are given to people. (That being said, it shows that the majority of people surveyed do have severe EPI, which helps counter potential pushback on this survey that people with only slightly lowered elastase don’t have EPI. Many of us with elastase in the mild/moderate category, myself included, show a clear symptom response to PERT no matter what the elastase number says, but there seems to be some resistance in the clinical community to prescribing PERT when elastase is 100-200.)
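For reference, those cutoffs can be written as a tiny classification sketch (fecal elastase-1 is reported in micrograms per gram of stool; exact lab reference ranges vary slightly, so treat these thresholds as the commonly cited ones rather than universal values):

```python
def classify_elastase(elastase_ug_per_g: float) -> str:
    """Categorize a fecal elastase-1 result using the commonly cited EPI cutoffs."""
    if elastase_ug_per_g < 100:
        return "severe EPI"
    if elastase_ug_per_g < 200:
        return "mild/moderate EPI"
    return "at or above the 200 cutoff (symptomatic people near 200 may still warrant a PERT trial)"

print(classify_elastase(92))  # the survey's average elastase -> "severe EPI"
```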

I ended up reviewing the elastase data by age group and also by duration of EPI (meaning how long people have had EPI). A statistical test showed that as age increases, elastase levels tend to decrease. That wasn’t surprising to me, as many studies that I have read also show that older adults are more likely to have lowered elastase. I also ran a statistical test that showed that people who have had EPI for longer are more likely to report lower elastase levels, again matching previous studies.

If you look at Table 1 in the paper, you can see the breakdown of enzyme dosing for meals and snacks for each of the duration sub-groups. I chose 0-6 months, 6 months to 1 year, 1 to 2 years, 3-5 years, and 5+ years as the duration groups to ask people about. In the elastase column you can clearly see that elastase decreases across the duration groups, too. You can also see the varied enzyme dosing (with standard deviations) by group. Interestingly, the 0-6 month group takes the highest average enzyme dose, followed by the 5+ year group, with lower amounts in the other groups. I haven’t seen this reported in the literature, as I haven’t found any other studies evaluating enzyme dosing in the real world nor any breakdowns by duration of EPI, so this would be interesting to repeat in a study that better controls for age and duration of EPI.

We did not observe a statistical correlation between enzymes taken for meals or snacks and elastase levels. That didn’t surprise me personally because the enzyme dosing guidelines are not different based on elastase levels (e.g. people with elastase <100 or between 100-200 are given the same dose).

What Enzymes Are People Taking, And What About the Cost of PERT?

I had hypothesized that maybe some people adjust their meals in order to reduce enzyme cost, because PERT can be expensive.

Most people (100, which is 90% of participants) do take enzymes, and 87% are taking prescription enzymes. The brands people report taking by prescription are likely influenced by the order in which the prescription options entered the US market, given that most participants are in North America. 5 people reported taking OTC enzymes only (see my comments about over-the-counter or OTC enzymes here), and 7 people take a combination of prescription and OTC. The biggest reason people reported taking OTCs or a mix was that the enzyme prescription was not written so that they had enough to cover a full month (which means they are not getting enough prescription enzymes from their doctor, and their prescription should be increased). 7 people also indicated that lack of insurance coverage for prescription enzymes was an issue and that even OTC enzymes were expensive for them. Otherwise, for those taking prescription enzymes, 40% have insurance and said the cost was reasonable for them; 32% find the cost of prescription enzymes expensive even with insurance.

Out of curiosity, I had asked people how often cost played a role in choosing what to eat and/or how much enzymes to take: 32% of people said “yes, often”, 20% said sometimes, and 40% said they do not change what they eat in order to change the amount of enzymes they’re taking.

Again, this is primarily in North America where PERT can be very expensive, so the results in other geographic regions with different health plans and coverage options for PERT would likely be very different to those questions about cost and modifying food and PERT intake!

People With EPI Are Not Taking Enough Enzymes

Here’s where I was most surprised by the data:

I knew anecdotally that many people with EPI weren’t taking enough enzymes, but this survey showed that only 1 in 5 people believe that they are always taking enough enzymes! Another 1 in 5 people said they are usually not taking enough, and the remaining 3 in 5 people think they take enough most of the time but not always.

Additionally, the data from this survey shows that a longer duration of EPI was correlated with taking fewer enzymes per meal. It’s possible that people were taking enough at first but their elastase production lowered further over time, and they did not update their dosing (or were not able to, due to lack of healthcare provider support for updating prescriptions), which I think would be another interesting area for future studies.

On average, individuals who reported their elastase levels were taking 64,303 (SD: ±39,980) units of lipase per meal (minimum 0; maximum 180,000). 14 participants reported taking less than or equal to 30,000 units of lipase per meal; 7 reported between 30,000 and 40,000 units; 6 reported between 40,000 and 50,000 units; and 44 reported taking >=50,000 units of lipase per meal. What do these numbers mean? Well, most dosing guidelines recommend a starting dose of 40,000-50,000 units of lipase per meal, so this means that the majority of people are taking at least the recommended starting dose (or higher), whereas about a third are taking well under even the recommended starting dose (more from me here in this blog about starting dose and the ranges people should increase to).

It probably will surprise a lot of clinicians to see that the average intake was around 64,000 units of lipase (with a large standard deviation, which means there was a lot of variance in dose sizes). It’s surprising because this is above the typical starting dose, yet the majority of this population, as described above, is still experiencing symptoms and still not always taking enough enzymes to manage those symptoms.

It’s also worth noting that most people said they still have not arrived at their ideal enzyme dosing: 42% said they weren’t there yet. For those who thought they did have the ideal enzyme dosage, it took anywhere from a few weeks (16%) to a few months (20%); more than 6 months (10%); more than a year (10%); or even up to a few years (3%).

In summary:


People with EPI are not taking enough enzymes; are not arriving at an ideal dose quickly; and it is absolutely worth it for any clinician who sees someone with EPI – even someone who has been diagnosed by another clinician or had EPI for a long time – to check to see whether their prescription is meeting their needs and/or whether they need support in increasing their dose to resolve symptoms!

Recommended Takeaways From This Study

 

Patients (aka, people living with EPI):

  • If you are still experiencing symptoms, you may need to take more enzymes. The starting doses should be around 40-50,000 and it’s common for many people to need even larger doses. Based on this study, some people take up to 180,000 units per meal!
  • Talk to your doctor if you need your script adjusted, and remember PERT pills come in different sizes, so you may be able to get a larger pill size (which holds more enzymes) so that you can take fewer pills per meal.
  • If your doctor seems resistant to adjusting your prescription, I have citations in this blog post that you can share listing out the various guidelines that point to 40-50,000 units of lipase being the starting dose with guidelines to increase up to 2-3x as needed based on the individual’s symptoms – share those guidelines/citations with your clinician if needed.
  • Over time, it is possible you will need to change your enzyme dosing as your body changes.

 

Doctors who treat people living with EPI

  • Other studies show that the majority of people with EPI are undertreated, even when compared to the baseline level of starting doses. This survey shows most people need more than the ‘starting dose’, so don’t be surprised; proactively talk with patients about increasing enzyme doses and how to do so, and be prepared to update PERT prescriptions over time.
  • Treat people with mild/moderate EPI (fecal elastase results 100-200, and not just those <100). The symptom burden of EPI is pretty significant even in those of us with mild/moderate EPI. Yes, PERT can be expensive, but let patients make the choice to treat/manage and don’t make the choice for them by refusing to prescribe PERT for elastase <200.
  • If symptoms aren’t resolved on the initial dose given, follow the guidelines for increasing the dose 2-3x from the starting 40,000-50,000 dose before considering adding a PPI or investigating other causes. But dropping PERT after a short trial of a dose of <40,000 is not an approved nor evidence-based approach to treating EPI. Dose according to the starting guidelines and follow up, or explain to your patients how to follow up on their own in order to increase their prescription as needed. Think of PERT similarly to insulin, where dosing is also self-managed by patients at every meal.
  • Speaking of insulin and diabetes: EPI occurs in more people than you think, and people with diabetes and celiac and other conditions need to be screened for EPI. Chronic pancreatitis is not the leading cause of EPI.

The paper described in this blog post can be accessed here for free – it’s open access!

You can cite it as:

Lewis DM, Shahid A. Survey on Pancreatic Enzyme Replacement Therapy Dosing Experiences of Adults with Exocrine Pancreatic Insufficiency. Healthcare. 2023;11:2316. https://doi.org/10.3390/healthcare11162316


Want to read more about EPI? Check out DIYPS.org/EPI for other posts I have written about my personal experiences with EPI and PERT, plus links to my other EPI-related research papers (with more on the way!)


You can also contribute to another research study – take this anonymous survey to share your experiences with EPI-related symptoms!

A blue square with white text that says "New Research: Most people with EPI (PEI) are not taking enough enzymes", a blog post by Dana M. Lewis

You’d Be Surprised: Common Causes of Exocrine Pancreatic Insufficiency

Academic and medical literature often is like the game of “telephone”. You can find something commonly cited throughout the literature, but if you dig deep, you can watch the key points change throughout the literature going from a solid, evidence-backed statement to a weaker, more vague statement that is not factually correct but is widely propagated as “fact” as people cite and re-cite the new incorrect statements.

The most obvious one I have seen, after reading hundreds of papers on exocrine pancreatic insufficiency (known as EPI or PEI), is that “chronic pancreatitis is the most common cause of exocrine pancreatic insufficiency”. It’s stated here (“Although chronic pancreatitis is the most common cause of EPI“) and here (“The most frequent causes [of exocrine pancreatic insufficiency] are chronic pancreatitis in adults“) and here (“Besides cystic fibrosis and chronic pancreatitis, the most common etiologies of EPI“) and here (“Numerous conditions account for the etiology of EPI, with the most common being diseases of the pancreatic parenchyma including chronic pancreatitis, cystic fibrosis, and a history of extensive necrotizing acute pancreatitis“) and… you get the picture. I find this statement all over the place.

But guess what? This is not true.

First off, no one has done a study on the overall population of EPI and the breakdown of the most common co-conditions.

Secondly, I did research for my latest article on exocrine pancreatic insufficiency in Type 1 diabetes and Type 2 diabetes and was looking to contextualize the size of the populations. For example, I know overall that diabetes has a ~10% population prevalence, and this review found that there is a median prevalence of EPI of 33% in T1D and 29% in T2D. To put that in absolute numbers (roughly 10% of the population with diabetes, times a ~30% EPI prevalence within diabetes), this means that out of 100 people, it’s likely that about 3 people have both diabetes and EPI.

How does this compare to the other “most common” causes of EPI?

First, let’s look at the prevalence of EPI in these other conditions:

  • In people with cystic fibrosis, 80-90% of people are estimated to also have EPI
  • In people with chronic pancreatitis, anywhere from 30-90% of people are estimated to also have EPI
  • In people with pancreatic cancer, anywhere from 20-60% of people are estimated to also have EPI

Now let’s look at how common these conditions are in the general population:

  • People with cystic fibrosis are estimated to be 0.04% of the general population.
    • This is 4 in every 10,000 people
  • People with chronic pancreatitis combined with all other types of pancreatitis are also estimated to be 0.04% of the general population, so another 4 out of 10,000.
  • People with pancreatic cancer are estimated to be 0.005% of the general population, or 1 in 20,000.

What happens if you add all of these up: cystic fibrosis, 0.04%, plus all types of pancreatitis, 0.04%, and pancreatic cancer, 0.005%? You get 0.085%, which is less than 1 in 1000 people.

This is quite a bit less than the 10% prevalence of diabetes (1 in 10 people!), or even the 3 in 100 people (3%) with both diabetes and EPI.
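Here’s that arithmetic written out as a small sketch (Python, using the rounded estimates from the bullets above; real prevalence estimates vary by study):

```python
# Rounded population prevalence estimates from the bullets above (fractions of the general population).
cystic_fibrosis   = 0.0004   # ~4 in 10,000
pancreatitis_all  = 0.0004   # ~4 in 10,000 (all types of pancreatitis combined)
pancreatic_cancer = 0.00005  # ~1 in 20,000

combined = cystic_fibrosis + pancreatitis_all + pancreatic_cancer
print(f"Combined 'classic' co-conditions: {combined:.3%} of the population")   # ~0.085%, i.e. <1 in 1,000

diabetes        = 0.10   # ~10% of the general population
epi_in_diabetes = 0.30   # ~30% median EPI prevalence within diabetes (T1D/T2D)
print(f"People with both diabetes and EPI: {diabetes * epi_in_diabetes:.0%}")  # ~3%, i.e. ~3 in 100
```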

Let’s also look at the estimates for EPI prevalence in the general population:

  • General population prevalence of EPI is estimated to be 10-20%, and if we use 10%, that means that 1 in 10 people may have EPI.

Here’s a visual to illustrate the relative size of the populations of people with cystic fibrosis, chronic pancreatitis (visualized as all types of pancreatitis), and pancreatic cancer, relative to the sizes of the general population and the relative amount of people estimated to have EPI:

Gif showing the relative sizes of populations of people with cystic fibrosis, chronic pancreatitis, pancreatic cancer, and the % of those with EPI, contextualized against the prevalence of these in the general population and those with EPI. It's a small number of people because these conditions aren't common, therefore these conditions are not the most common cause of EPI!

What you should take away from this:

  • Yes, EPI is common within conditions such as cystic fibrosis, chronic pancreatitis (and other forms of pancreatitis), and pancreatic cancer
  • However, these conditions are not common: even combined, they add up to less than 1 in 1000!
  • Therefore, it is incorrect to conclude that any of these conditions, individually or even combined, are the most common causes of EPI.

You could say, as I do in this paper, that EPI is likely more common in people with diabetes than all of these conditions combined. You’ll notice that I don’t go so far as to say it’s the MOST common, because I haven’t seen studies to support such a statement, and as I started the post by pointing out, no one has done studies looking at huge populations of EPI and the breakdown of co-conditions at a population level; instead, studies tend to focus on the population of a co-condition and prevalence of EPI within, which is a very different thing than that co-condition’s EPI population as a percentage of the overall population of people with EPI. However, there are some great studies (and I have another systematic review accepted and forthcoming on this topic!) that support the overall prevalence estimates in the general population being in the ballpark of 10+%, so there might be other ‘more common’ causes of EPI that we are currently unaware of, or it may be that most cases of EPI are uncorrelated with any particular co-condition.

(Need a citation? This logic is found in the introduction paragraph of a systematic review found here, of which the DOI is 10.1089/dia.2023.0157. You can also access a full author copy of it and my other papers here.)


You can also contribute to a research study and help us learn more about EPI/PEI – take this anonymous survey to share your experiences with EPI-related symptoms!

New Systematic Review of Exocrine Pancreatic Insufficiency (EPI) In Type 1 Diabetes and Type 2 Diabetes – Focusing on Prevalence and Treatment

I’m thrilled that the research I did evaluating the prevalence and treatment of EPI in both Type 1 diabetes and Type 2 diabetes (also presented as a poster at #ADA2023 – read a summary of the poster here) has now been published as a full systematic review in Diabetes Technology and Therapeutics.

Here is a pre-edited submitted version of my article that you can access if you don’t have journal access; and as a reminder, copies of ALL of my research articles are available on this page: DIYPS.org/research!

And if you don’t want to read the full paper, this is what I think you should take away from it as a person with diabetes or as a healthcare provider:

    1. What is EPI? 

      Exocrine pancreatic insufficiency (known as EPI in some places, and PEI or PI in other places) occurs when the pancreas no longer produces enough enzymes to digest food. People with EPI take pancreatic enzyme replacement therapy (PERT) whenever they eat (or drink anything with fat/protein).

    2. If I have diabetes, or treat people with diabetes, why should I be reading the rest of this about EPI?

      EPI often occurs in people with cystic fibrosis, pancreatitis, and pancreatic cancer. However, since these diseases are rare (think <0.1% of the general population even when these groups are added up all together), the total number of people with EPI from these causes is quite low. On the other hand, EPI is also common in people with diabetes, but this is less well-studied and understood. The research on other co-conditions is more frequent, and often people confuse the prevalence WITHIN those groups with the % of those conditions occurring overall in the EPI community. This paper reviews every paper that includes data on EPI and people with type 1 diabetes or type 2 diabetes to help us better understand what % of people with diabetes are likely to face EPI in their lifetime.
    3. How many people with type 1 diabetes or type 2 diabetes (or diabetes overall) get EPI?

      TLDR of the paper: EPI prevalence in diabetes varies widely, reported between 5.4% and 77% when the type of diabetes isn’t specified. For Type 1 diabetes, the median EPI prevalence is 33% (range 14-77.5%), and for Type 2 diabetes, the median is 29% (range 16.8-49.2%). In contrast, in non-diabetes control groups, the EPI prevalence ranges from 4.4% to 18% (median 13%). The differences in ranges might be due to geographic variability and different exclusion criteria across studies.

      Diabetes itself is prevalent in about 10% of the general population. As such, I hypothesize that people with diabetes likely constitute one of the largest sub-groups of individuals with EPI, in contrast to what I described above as the more common assumption.
    4. Is pancreatic enzyme replacement therapy (PERT) safe for people with diabetes? 

      Yes. There have been safety and efficacy studies in people with diabetes with EPI, and PERT is effective just like in any other group of people with EPI.

    5. What is the effect of pancreatic enzyme replacement therapy (PERT) on glucose levels in people with diabetes?
      PERT itself does not affect glucose levels, but PERT *does* impact the digestion of food, which then changes glucose levels! So, most PERT labels warn to watch for hypoglycemia or hyperglycemia, but the medicine itself doesn’t directly cause changes in glucose levels. You can read a previous study I did here using CGM data to show the effect of PERT actually improving glucose after meals in someone with Type 1 diabetes. But, in the systematic review, I found only 4 articles that even made note of glucose levels, and only 1 (the paper I linked above) actually included CGM data. Most of the studies are old, so there are no definitive conclusions on whether hypoglycemia or hyperglycemia is more common when a person with diabetes and EPI starts taking PERT. Instead, it’s likely very individual, depending on what they’re eating, their insulin dosing patterns, and whether they’re taking enough PERT to match what they’re eating.

      TLDR here: more studies are needed, because there’s no clear single directional effect on glucose levels from PERT in people with diabetes.

      Note: based on the n=1 study above, and subsequent conversations with other people with diabetes, I hypothesize that high variability and non-optimal post-meal glucose outcomes may be an early ‘symptom’ of EPI in people with diabetes. I’m hoping to eventually generate some studies to evaluate whether we could use this type of data as an input to help increase screening of EPI in people with diabetes.
    6. How common is EPI (PEI / PI) compared to celiac and gastroparesis in Type 1 diabetes and Type 2 diabetes? 

      As a person with (in my case, Type 1) diabetes, I feel like I hear celiac and gastroparesis talked about often in the diabetes community. I had NEVER heard of EPI prior to realizing I had it. Yet, EPI prevalence in Type 1 and Type 2 diabetes is much higher than that of celiac or gastroparesis!

      Celiac disease is more common in people with diabetes (~5%) than in the general public (0.5-1%). Gastroparesis, in which gastric emptying is delayed, is also more common in people with diabetes (5% in PWD). However, the prevalence of EPI is 14-77.5% (median 33%) in Type 1 diabetes and 16.8-49.2% (median 29%) in Type 2 diabetes (and 5.4-77% when the type of diabetes was not specified). This again is higher than the general population prevalence of EPI.

      This data emphasizes that endocrinologists and other diabetes care providers should have a lower threshold to initiate screening (using the non-invasive fecal elastase test) for individuals presenting with gastrointestinal symptoms, as the rates of EPI are much higher in both Type 1 and Type 2 diabetes than the rates of celiac and gastroparesis.

    7. What should I do if I think I have EPI?
      Record your symptoms and talk to your doctor, and ask for a fecal elastase (FE-1) screening test for EPI. It’s non-invasive. If your result is less than or equal to 200 (μg/g), this means you have EPI and should start on PERT. If you or your doctor feel that your sample may have influenced the results of your test, you can always re-do the test. But if you’re dealing with diarrhea, going on PERT may resolve or improve the diarrhea and improve the quality of the sample for the next test result. PERT doesn’t influence the test result, so you can start PERT and re-run the test at any time.

      Symptoms of EPI can vary. Some people experience diarrhea, while others experience constipation. Steatorrhea (smelly, messy stools that stick to the side of the toilet) is also a common EPI symptom, as are bloating, abdominal pain, and generally not feeling well after you eat.

      If you’ve been diagnosed with EPI, you may also want to check out some of my other posts (DIYPS.org/EPI) about my personal experiences with EPI and also this post about the amount of enzymes needed by most people with EPI. You may also want to check out PERT Pilot, a free iOS app, for recording and evaluating your PERT dosing.

If you want to read the full article, you can find copies of all of my research articles at DIYPS.org/research

If you’d like to cite this specific article in your future research, here’s an example citation:

Lewis, D. A Systematic Review of Exocrine Pancreatic Insufficiency Prevalence and Treatment in Type 1 and Type 2 Diabetes. Diabetes Technology & Therapeutics. http://doi.org/10.1089/dia.2023.0157

Why DIY AID in 2023? #ADA2023 Debate

I was asked to participate in a ‘debate’ about AID at #ADA2023 (ADA Scientific Sessions), representing the perspective that DIY systems should be an option for people living with diabetes.

I present this perspective as a person with type 1 diabetes who has been using DIY AID for almost a decade (and as a developer/contributor to the open source AID systems used in DIY) – please note my constant reminder that I am not a medical doctor.

Dr. Gregory P. Forlenza, an Associate Professor from Barbara Davis Center, presented a viewpoint as a medical doctor practicing in the US.

FYI: here are my disclosures and Dr. Forlenza’s disclosures:

On the left is my slide (Dana M. Lewis) showing I have no commercial support or conflicts of interest. My research in the last 3 years has previously been funded by the New Zealand Health Research Council (for the CREATE Trial); JDRF; and DiabetesMine. Dr. Forlenza lists research support from NIH, JDRF, NSF, Helmsley Charitable Trust, Medtronic, Dexcom, Abbott, Insulet, Tandem, Beta Bionics, and Lilly. He also lists Consulting/Speaking/AdBoard: Medtronic, Dexcom, Abbott, Insulet, Tandem, Beta Bionics, and Lilly.

I opened the debate with my initial presentation. I talk about the history of DIY in diabetes going back to the 1970s, when people with diabetes had to “DIY” with blood glucose meters, because initially healthcare providers did not want people to fingerstick at home since they might do something with the information. Similarly, even insulin pumps and CGMs have been used in different “DIY” ways over the years – notably, people with diabetes began dosing insulin using CGM data years before CGMs were approved for that purpose. It’s therefore less of a surprise, in that context, to think about DIY being done for AID. (If you’re reading this you probably also know that DIY AID was done years before commercial AID was even available, and that there are multiple DIY systems with multiple pump and CGM options, algorithms, and phone options.)

And, for people with diabetes, using DIY is very similar to how a lot of doctors recommend or prescribe doing things off-label. Diabetes has a LOT of these types of recommendations, whether it’s insulins used in pumps even though they weren’t approved for pump use; medications for Type 2 being used for Type 1 (and vice versa); or other things that aren’t regulatory-approved at all but are often recommended anyway. For example, GLP-1s that are approved for weight management but not glycemic control are often prescribed for glycemic control reasons. Or things like Vitamin D, which is widely prescribed or recommended as a supplement even though it is not regulatory-approved as a pharmaceutical agent.

I always like to emphasize that although open source AID is not necessarily regulated (but can be: one open source system has received regulatory clearance recently), that’s not a synonym for ‘no evidence’. There’s plenty of high quality scientific evidence on DIY use and non-DIY use of open source AID. There’s even a recent RCT in the New England Journal of Medicine, not to mention several other RCTs (see here and here, plus another forthcoming publication). In addition to those gold-standard RCTs, there are also reviews of large-scale big data datasets from people with diabetes using AID, such as this one where we reviewed 122 people’s glucose data representing 46,070 days’ worth of data; or another forthcoming publication where we analyzed n=75 unique (distinct from the previous dataset) DIY AID users with 36,827 days of data (an average of 491 days per participant) and also found above-goal TIR outcomes (e.g., mean TIR 70-180 mg/dL of 82.08%).

Yet, people often choose to DIY with AID not just for the glucose outcomes. Yes, commercial AID systems (especially now, second-generation) can similarly reach the goal of 70+% TIR on average. DIY helps provide more choices about the type and amount of work that people with diabetes have to put IN to these systems in order to get these above-goal OUTcomes. They can choose, overall or situationally, whether to bolus, count carbs precisely, announce meals at all, or only announce relative meal size, while still achieving >80% TIR, little or no hypoglycemia, and less hyperglycemia. Many people using DIY AID for years have been doing no boluses and/or no meal announcements at all, bringing this closer to a full closed loop, or at least an AID system with very, very little user input required on a daily basis if they so choose. I presented data back in 2018(!) showing how this was being done in DIY AID, and it was recently confirmed in a randomized controlled trial (hello, gold standard!) showing that traditional use (with meal announcements and meal boluses), meal announcement only (no boluses), and no announcement or bolusing all achieved similar outcomes in terms of TIR (all above goal). There was also no difference across those modes in total daily insulin dose (TDD) or amount of carb intake. There was a small difference in time below range, which was slightly higher in the first mode (where people were counting carbs and bolusing) as compared to the other two modes – which suggests that MORE user input may actually be limiting the capabilities of the system!

The TLDR here is that people with diabetes can do less work/provide less input into AID and still achieve the same level of ideal, above-goal outcomes – and ongoing studies are showing the increased QOL and other patient-reported outcomes that also improve as a result.

Again, people may be predisposed to think that the main difference between commercial and DIY is whether or not it is regulatory approved (and therefore prescribable by doctors and able to be supported by a company under warranty); the bigger differences are instead around interoperability across devices, data access, and transparency of how the system works.

There’s even an international consensus statement on open source AID, created by an international group of 48 medical and legal experts, endorsed by 9 national and international diabetes organizations, supporting that open source AID used in DIY AID is a safe and effective treatment option, confirming that the scientific evidence exists and it has the potential to help people with diabetes and reduce the burden of diabetes. They emphasize that doctors should support patient (and caregiver) autonomy and choice of DIY AID, and state that doctors have a responsibility to learn about all options that exist including DIY. The consensus statement is focused on open source AID but also, in my opinion, applies to all AID: they say that AID systems should fully disclose how they operate to enable informed decisions and that all users should have real-time and open access to their own data. Yes, please! (This is true of DIY but not true of all commercial systems.)

The elephant in the room that I always bring up is cost, insurance coverage, and therefore access and accessibility of AID. In many places, government or private insurance won’t cover AID. For example, the proposed NICE guidelines in the UK wouldn’t make AID available to everyone who wants it. In other places, some people can get their pump covered but not CGM, or vice versa, and must pay out of pocket. Therefore, in some cases DIY AID has out-of-pocket costs (because it’s not covered by insurance) but is still cheaper than commercial AID with insurance coverage (if it’s even covered).

I also want to remind everyone that choosing to DIY – or not – is not a once-in-a-lifetime decision. People who use DIY choose every day to use it and continue to use it; at any time, they could and some do choose to switch to a commercial system. Others try commercial, switch back to DIY, and switch back and forth over time for various reasons. It’s not a single or permanent decision to DIY!

The key point is: DIY AID provides safety and efficacy *and* user choice for people with diabetes.

Dr. Forlenza followed my presentation, talking about commercial AID systems and how they’ve moved through development more quickly recently. He points to the RCTs for each approved commercial system that exist, saying commercial AID systems work, and describing different feature sets and variety across commercial systems. He shared his thoughts on advantages of commercial systems including integration between components by the companies; regulatory approval meaning these systems can be prescribed by healthcare providers; company-provided warranties; and company provided training and support of healthcare providers and patients.

He makes a big point about a perceived reporting bias in social media, which is a valid point, and talks about people who cherry pick (my words) data to share online about their TIR.

He puts an observational study and the CREATE Trial RCT data up next to the commercial AID systems RCT data, showing how the second generation commercial AID reach similar TIR outcomes.

He then says “what are you #notwaiting for?”, pointing out that in the US there are 4 commercial systems FDA-approved for type 1 diabetes. He says “Data from the DIY trials themselves demonstrate that DIY users, even with extreme selection bias, do not achieve better glycemic control than is seen with commercial systems.” He concludes that commercial AID offers a wide variety of options; that commercial systems achieve target-level outcomes; that, in his view, both glucose outcomes and QOL are being addressed by the commercial market; and that “we do not need Unapproved DIY solutions in this space”.

After Dr. Forlenza’s presentation, I began my rebuttal, starting with pointing out that he is incorrectly conflating perceived biases/self-reporting of social media posts with gold-standard, rigorously performed scientific trials evaluating DIY. Data from DIY AID trials do not suffer from ‘selection bias’ any more than commercial AID trials do. (In fact, all clinical trials have their own aspects of selection bias, although that isn’t the point here.) I reminded the audience of not one but multiple RCTs available, as well as dozens of other prospective and retrospective clinical trials. Plus, we have 82,000+ days’ worth of data analyzed showing above-goal outcomes, and many studies that evaluate this data and adjust for starting outcomes still show that people with diabetes who use DIY AID benefit from doing so, regardless of their starting A1c/TIR or demographics. This isn’t cherry-picked social media anecdata.

When studies are done rigorously, as they have been done in DIY, we agree that now second-generation commercial AID systems reach (or exceed, depending on the system) ADA standard of care outcomes. For example, Dr. Forlenza cited the OP5 study with 73.9% TIR which is similar to the CREATE Trial 74.5% TIR.

My point is not that commercial systems don’t work; my point is that DIY systems *do* work and that the fact that commercial systems work doesn’t then override the fact that DIY systems have been shown to work, also! It’s a “yes, and”! Yes, commercial AID systems work; and yes, DIY AID systems work.

The bigger point, which Dr. Forlenza does not address, is that the person with diabetes should get to CHOOSE what is best for them, which is not ONLY about glucose outcomes. Yes, a commercial system – like DIY AID – may help someone get to goal TIR (or above goal), but DIY provides more choice in terms of the input behaviors required to achieve those outcomes! There’s also the choice of systems with different pumps or CGMs, different (often lower) costs, increased data access and interoperability of data displays, different mobile device options, and more.

Also, supporting user choice of DIY is in fact A STANDARD OF CARE!

It’s in the ADA’s Standards of Care, in fact, as I wrote about here when observing that it’s in the 2023 Standards of Care…as well as in 2022, 2021, 2020, and 2019!

I wouldn’t be surprised if there are people attending the debate who think they don’t have any – or many – patients using DIY AID. For those who think that (or are reading this thinking the same), I ask a question: how many patients have you asked if they are using DIY AID?

There’s a bunch of reasons why it may not come up, if you haven’t asked:

  • They may use the same consumables (sites, reservoirs) with a different or previous pump in a DIY AID system.
  • Their prescribed pump (particularly in Europe and non-US places that have Bluetooth-enabled pumps) may be usable in a DIY AID.
  • They may not be getting their supplies through insurance, so their prescription doesn’t match what they are currently using.
  • Or, they have more urgent priorities to discuss at appointments, so it doesn’t come up.
  • Or, it’s also possible that it hasn’t come up because they don’t need any assistance or support from their healthcare provider.

Speaking of learning and support, it’s worth noting that in DIY AID, because it is open source and the documentation is freely available, users typically begin learning more about the system prior to initiating their start of closed loop (automated insulin delivery). As a result, the process of understanding and developing trust in the system begins prior to closed loop start as well. In contrast, much of the time there is limited available education prior to receiving the prescription for a commercial AID; it often aligns more closely with the timeline of starting the device. Additionally, because it is a “black box” with fewer available details about exactly how it works (and why), the process of developing trust can be a slower process that occurs only after a user begins to use a commercial device.

This process of learning and developing trust in AID is something that needs more attention in commercial AID moving forward.

I closed my rebuttal section by asking a few questions out loud:

I wonder how healthcare providers feel when patients learn something before they do – which is often what happens with DIY AID. Does it make you uncomfortable, excited, curious, or some other feeling? Why?

I encouraged healthcare providers to consider when they are comfortable with off-label prescriptions (or recommending things that aren’t approved, such as Vitamin D), and reflect on how that differs from understanding patients’ choices to DIY.

I also prompted everyone to consider whether they’ve actually evaluated (all of) the safety and efficacy data, of which many studies exist. And to consider who benefits from each type of system – not only commercial vs. DIY, but individual systems within those buckets. And to consider who gets offered/prescribed AID systems (of any sort) and whether subconscious biases around tech literacy, previous glucose outcomes, and other factors (race, gender, other demographic variables) result in particular groups of people being excluded from accessing AID. I also reminded everyone to think about what financial incentives influence access to and availability of AID education, and where that education comes from.

Although Dr. Forlenza’s rebuttal followed mine, I’ll summarize it here before finishing the recap of my own: he talked about individual selection bias/cherry-picked data, acknowledging it can occur in anecdotes about commercial systems as well; discussed the distinction between regulatory approval and off-label or unapproved use; raised legal concerns for healthcare providers; and closed by pointing out that many PWD see primary care providers, that he doesn’t believe it is reasonable to expect PCPs to become familiar with DIY since there are no paid device representatives to support their learning, and that growth of AID requires industry support.

People probably wanted to walk out of this debate with a black and white, clear answer on what is the ‘right’ type of AID system: DIY or commercial. The answer to that question isn’t straightforward, because it depends.

It depends on whether a system is even AVAILABLE. Not all countries have regulatory-approved systems available, meaning commercial AID is not available everywhere. Some places and people are also limited by ACCESSIBILITY, because their healthcare providers won’t prescribe an AID system to them; or insurance won’t cover it. AFFORDABILITY, even with insurance coverage, also plays a role: commercial AID systems (and even pump and CGM components without AID) are expensive and not everyone can afford them. Finally, ADAPTABILITY matters for some people, and not all systems work well for everyone.

When these factors align – they are available, accessible, affordable, and adaptable – that means for some people in some places in some situations, there are commercial systems that meet those needs. But for other people in other places in other situations, DIY systems instead or also can meet that need.

The point is, though, that we need a bigger overlap of these criteria! We need MORE AID systems to be available, accessible, affordable, and adaptable. Those can either be commercial or DIY AID systems.

The point that Dr. Forlenza and I readily agree on is that we need MORE AID – not less.

This is why I support user choice for people with diabetes and for people who want – for any variety of reasons – to use a DIY system to be able to do so.


PS – I also presented a poster at #ADA2023 about the high prevalence rates of exocrine pancreatic insufficiency (EPI / PEI / PI) in Type 1 and Type 2 diabetes – you can find the poster and a summary of it here.

Exocrine Pancreatic Insufficiency (EPI/PEI) In Type 1 and Type 2 Diabetes – Poster at #ADA2023

When I was invited to contribute to a debate on AID at #ADA2023 (read my debate recap here), I decided to also submit an abstract related to some of my recent work in researching and understanding the prevalence and treatment of exocrine pancreatic insufficiency (known as EPI or PEI or PI) in people with diabetes.

I have a personal interest in this topic, for those who aren’t aware – I was diagnosed with EPI last year (read more about my experience here) and now take pancreatic enzyme replacement therapy (PERT) pills with everything that I eat.

I was surprised that it took personal advocacy to get a diagnosis: despite my having 2+ known risk factors for EPI (diabetes and celiac disease), when I presented to a gastroenterologist with GI symptoms, EPI never came up as a possibility. I looked deeper into the research to try to understand the correlation between diabetes and EPI, and perhaps understand why awareness is so low compared to gastroparesis and celiac.

Here’s what I found, and what my poster (and a forthcoming full publication in a peer-reviewed journal!) is about (you can view my poster as a PDF here):

1304-P at #ADA2023, “Exocrine Pancreatic Insufficiency (EPI / PEI)  Likely Overlooked in Diabetes as Common Cause of Gastrointestinal-Related Symptoms”

Exocrine Pancreatic Insufficiency (EPI / PEI / PI) occurs when the pancreas no longer makes enough enzymes to support digestion, and is treated with pancreatic enzyme replacement therapy (PERT). Awareness among diabetes care providers of EPI does not seem to match the likely rates of prevalence and contributes to underscreening, underdiagnosis, and undertreatment of EPI among people with diabetes.

Methods:

I performed a broader systematic review on EPI, classifying all articles based on co-condition. I then did a second, diabetes-specific EPI search, and de-duplicated and combined the results. (See PRISMA figure.)

A PRISMA diagram showing that I performed two separate literature searches – one broadly on EPI before classifying and filtering for diabetes, and one just on EPI and diabetes. After filtering out irrelevant, animal, and off-topic papers, I ended up with 41 articles.

I ended up with 41 articles specifically about EPI and diabetes, and screened them for diabetes type, prevalence rates (by type of diabetes, if it was segmented), and whether there were any analyses related to glycemic outcomes. I also performed an additional literature review on gastrointestinal conditions in diabetes.

Results:

From the broader systematic review on EPI in general, I found 9.6% of the articles on specific co-conditions to be about diabetes. Most of the articles on diabetes and EPI are simply about prevalence and/or diagnostic methods. Very few (4/41) specified any glycemic metrics or outcomes for people with diabetes and EPI. Only one recent paper (disclosure – I’m a co-author, and you can see the full paper here) evaluated glycemic variability and glycemic outcomes before and after PERT using CGM.

There is a LOT of work to be done in the future: studies that properly record the type of diabetes; use CGM and modern insulin delivery therapies; and evaluate glycemic outcomes and variability, to actually understand the impact of PERT on glucose levels in people with diabetes.

In terms of other gastrointestinal conditions, healthcare providers typically perceive the prevalence of celiac disease and gastroparesis to be high in people with diabetes. Reviewing the data, I found that celiac has around 5% prevalence (range 3-16%) in people with Type 1 diabetes and ~1.6% prevalence in Type 2 diabetes, in contrast to the general population prevalence of 0.5-1%. For gastroparesis, the rates were around 5% in Type 1 diabetes and around 1.3% in Type 2 diabetes, in contrast to the general population prevalence of 0.2-0.9%.

Speaking of contrasts, let’s compare this to the prevalence of EPI in Type 1 and Type 2 diabetes.

  • The prevalence of EPI in Type 1 diabetes in the studies I reviewed had a median of 33% (range 14-77.5%).
  • The prevalence of EPI in Type 2 diabetes in the studies I reviewed had a median of 29% (range 16.8-49.2%).

You can see this relative prevalence difference in this chart I used on my poster:

The prevalence of EPI is much higher in T1 and T2 than the prevalence of celiac and gastroparesis.

Key Findings and Takeaways:

Gastroparesis and celiac are often top of mind for diabetes care providers, yet EPI may be up to 10 times more common among people with diabetes! EPI is likely significantly underdiagnosed in people with diabetes.

Healthcare providers who see people with diabetes should increase the screening of fecal elastase (FE-1/FEL-1) for people with diabetes who mention gastrointestinal symptoms.

With FE-1 testing, results <=200 μg/g are indicative of EPI and people with diabetes should be prescribed PERT. The quality-of-life burden and long-term clinical implications of undiagnosed EPI are significant enough, and the risks are low enough (aside from cost) that PERT should be initiated more frequently for people with diabetes who present with EPI-related symptoms.

EPI symptoms aren’t just diarrhea and/or weight loss: they can include painful bloating, excessive gas, changed stools (“messy”, “oily”, “sticking to the toilet bowl”), or increased bowel movements. People with diabetes may subconsciously adjust their food choices in response to symptoms for years prior to diagnosis.

Many people with diabetes and existing EPI diagnoses may be undertreated, even years after diagnosis. Diabetes providers should periodically discuss PERT dosing and encourage self-adjustment of dosing (similar to insulin, matching food intake) for people with diabetes and EPI who have ongoing GI symptoms. This also means aiding in updating prescriptions as needed. (PERT has been studied and found to be safe and effective for people with diabetes.)

Non-optimal PERT dosing may result in seemingly unpredictable post-meal glucose outcomes. Non-optimal postprandial glycemic excursions may be a ‘symptom’ of EPI because poor digestion of fat/protein may mean carbs are digested more quickly even in a ’mixed meal’ and result in larger post-meal glucose spikes.

As I mentioned, I have a full publication with this systematic review undergoing peer review and I’ll share it once it’s published. In the meantime, if you’re looking for more personal experiences about living with EPI, check out DIYPS.org/EPI, and also for people with EPI looking to improve their dosing with pancreatic enzyme replacement therapy – you may want to check out PERT Pilot (a free iOS app to record enzyme dosing).

Researchers, if you’re interested in collaborating on studies in EPI (in diabetes, or more broadly on EPI), please reach out! My email is Dana@OpenAPS.org

How I Use LLMs like ChatGPT And Tips For Getting Started

You’ve probably heard about new AI (artificial intelligence) tools like ChatGPT, Bard, Midjourney, DALL-E and others. But, what are they good for?

Last fall I started experimenting with them. I looked at AI art tools and found them to be challenging, at the time, for one of my purposes, which was creating characters and illustrating a storyline with consistent characters for some of my children’s books. I also tested GPT-3 (meaning version 3.0 of GPT). It wasn’t that great, to be honest. But later, GPT-3.5 was released, along with the ChatGPT chat interface to it, which WAS a big improvement for a lot of my use cases. (And now, GPT-4 is out and is an even bigger improvement, although it costs more to use. More on the cost differences below)

So what am I using these AI tools for? And how might YOU use some of these AI tools? And what are the limitations? This is what I’ve learned:

  1. The most frequent way I use these AI tools is for getting started on a project, especially those related to writing.

You know the feeling of staring at a blank page and not knowing where to start? Maybe it’s the blank page of a cold email; the blank page of an essay or paper you need to write; the blank page of the outline for a presentation. Starting is hard!

Even for this blog post, I had a list of bulleted notes of things I wanted to remember to include. But I wasn’t sure how I wanted to start the blog post or incorporate them. I stuck the notes in ChatGPT and asked it to expand the notes.

What did it do? It wrote a few paragraph summary. Which isn’t what I wanted, so I asked it again to use the notes and this time “expand each bullet into a few sentences, rather than summarizing”. With these clear directions, it did, and I was able to look at this content and decide what I wanted to edit, include, or remove.

Sometimes I’m stuck on a particular writing task, and I use ChatGPT to break it down. In addition to kick-starting any type of writing overall, I’ve asked it to:

  • Take an outline of notes and summarize them into an introduction; limitations section; discussion section; conclusion; one paragraph summary; etc.
  • Take a bullet point list of notes and write full, complete sentences.
  • Take a long list of notes I’d written about data extracted from a systematic review I was working on, and identify recurring themes or outlier concepts. This was especially useful when I had 20 pages (!) of hand-written notes in bullets with some loose organization by section: I could feed in chunks of content and get help seeing the big picture across all of it. It can highlight themes in the data based on the written narratives around the data.

A lot of times, the best thing it does is prompt my brain to say “that’s not correct! It should be talking about…” and then I’m able to more easily write the content that was in the back of my brain all along. I probably use 5% of what it’s written, and more frequently use it as a springboard for my own writing. That might be unique to how I’m using it, though; for other simple use cases, such as writing an email to someone or other simple content tasks, you may be able to keep 90% or more of the content.

2. It can also help analyze data (caution alert!) if you understand how the tools work.

Huge learning moment here: these tools are called LLMs (large language models). They are trained on large amounts of language. They’re essentially designed to predict, based on all of the words (language) they’ve taken in previously, content that “sounds” like what would come after a given prompt. So if you ask one to write a song or a haiku, it “knows” what a song or a haiku “looks” like, and can generate words to match those patterns.

It’s essentially a PATTERN MATCHER on WORDS. Yeah, I’m yelling in all caps here because this is the biggest confusion I see. ChatGPT or most of these LLMs don’t have access to the internet; they’re not looking up in a search engine for an answer. If you ask it a question about a person, it’s going to give you an answer (because it knows what this type of answer “sounds” like), but depending on the amount of information it “remembers”, some may be accurate and some may be 100% made up.

Why am I explaining this? Remember the above section where I highlighted how it can start to sense themes in the data? It’s not answering solely based on the raw data; it’s not doing analysis of the data, but mostly of the words surrounding the data. For example, you can paste in data (from a spreadsheet) and ask it questions. I did that once, pasting in some data from a pivot table and asking it the same question I had asked myself in analyzing the data. It gave me the same sense of the data that I had based on my own analysis, then pointed out it was only qualitative analysis and that I should also do quantitative statistical analysis. So I asked it if it could do quantitative statistical analysis. It said yes, it could, and spit out some numbers and described the methods of quantitative statistical analysis.

But here’s the thing: those numbers were completely made up!

It can’t actually use (in its current design) the methods it was describing verbally, and instead made up numbers that ‘sounded’ right.

So I asked it to describe how to do that statistical method in Google Sheets. It provided the formula and instructions; I did that analysis myself; and confirmed that the numbers it had given me were 100% made up.
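To make that concrete, here’s a minimal sketch of the “do the math yourself” step I’m describing – ask the LLM for the method, then run the numbers in real software. (This is an illustrative example I’m adding here, not the actual exchange; the file name and column name are hypothetical.)

    # check_stats.py -- hypothetical example: compute summary statistics yourself
    # instead of trusting numbers an LLM generates for you.
    import csv
    import statistics

    values = []
    with open("pivot_table_export.csv", newline="") as f:  # hypothetical file name
        for row in csv.DictReader(f):
            values.append(float(row["value"]))             # hypothetical column name

    # These numbers come from actually running calculations on your data --
    # unlike an LLM's "quantitative analysis," which can be plausible-sounding
    # made-up output.
    print("n =", len(values))
    print("mean =", statistics.mean(values))
    print("standard deviation =", statistics.stdev(values))

Even if the LLM wrote a script like this for you, the results come from code running over your real data, which you can verify – not from the model predicting what an answer should “sound” like.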

The takeaway here is: it outright said it could do a thing (quantitative statistical analysis) that it can’t do. It’s like a human in some regards: some humans will lie or fudge and make stuff up when you talk to them. It’s helpful to be aware and query whether someone has relevant expertise, what their motivations are, etc. in determining whether or not to use their advice/input on something. The same should go for these AI tools! Knowing this is an LLM and it’s going to pattern match on language helps you pinpoint when it’s going to be prone to making stuff up. Humans are especially likely to make something up that sounds plausible in situations where they’re “expected” to know the answer. LLMs are in that situation all the time: sometimes they actually do know an answer, sometimes they have a good guess, and sometimes they’re just pattern matching and coming up with something that sounds plausible.

In short:

  • LLMs can expand general concepts and write language about what is generally well known, based on their training data.
  • Try to ask it a particular fact, though, and it’s probably going to make stuff up, whether that’s about a person or a concept – you need to fact check it elsewhere.
  • It can’t do math!

But what it can do is teach you or show you how to do the math, the coding, or whatever thing you wish it would do for you. And this gets into one of my favorite use cases for it.

3. You can get an LLM to teach you how to use new tools, solve problems, and lower the barrier to entry (and friction) on using new tools, languages, and software.

One of the first things I did was ask ChatGPT to help me write a script. In fact, that’s what I did to expedite the process of finding tweets where I had used an image in order to get a screenshot to embed on my blog, rather than embedding the tweet.

It’s now so easy to generate code for scripts, regardless of which language you have previous experience with. I used to write all of my code as bash scripts, because that’s the format I was most familiar with. But ChatGPT likes to do things as Python scripts, so when it wrote me a Python script, I asked it simple follow-up questions like “how do I call a python script from the command line”. Sure, you could search in a search engine or Stack Overflow for similar questions and get the same information. But one nice thing is that if you have it generate a script and then ask it, step by step, how to run that script, it gives you the instructions in the context of what you were doing. So instead of saying “to run a script, type ‘python script.py’”, using placeholder names, it’ll say “to run the script, use ‘python actual-name-of-the-script-it-built-you.py’”, and you can click the button to copy that, paste it in, and hit enter. That saves a lot of time otherwise spent figuring out how to translate placeholder information into your actual setup (which is what you’d get from a traditional search engine result or Stack Overflow, where people are fond of things like saying FOOBAR and you have no idea if that means something or is meant to be a placeholder). Careful observers will notice that the latest scripts I’ve added to my Open Humans Data Tools repository (which is packed with a bunch of scripts to help work with big datasets!) are now in Python rather than bash – such as the new scripts I added for fellow researchers looking to check for updates in big datasets (such as the OpenAPS Data Commons). This is because I used GPT to help with those scripts!
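To make the placeholder point concrete, here’s a minimal sketch of the kind of script-plus-run-command exchange I mean (the file name and task are hypothetical examples for illustration, not the actual script it wrote for me):

    # count_matches.py -- hypothetical example script
    # Prints every line of a text file that contains a given search term.
    import sys

    def main():
        filename, term = sys.argv[1], sys.argv[2]
        with open(filename, encoding="utf-8") as f:
            for line in f:
                if term in line:
                    print(line.rstrip())

    if __name__ == "__main__":
        main()

Instead of a generic “to run a script, type python script.py”, the instructions come back with the real names already filled in, something like: python count_matches.py my_tweets.txt photo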

It’s really easy now to go from an idea to a script. If you’re able to describe it logically, you can ask it to write a script, tell you how to run it, and help you debug it. Sometimes you can start by asking it a question, such as “Is it possible to do Y?” and it describes a method. You need to test the method or check for it elsewhere, but things like uploading a list of DOIs to Mendeley to save me hundreds of clicks? I didn’t realize Mendeley had an API or that I could write a script that would do that! ChatGPT helped me write the script, figure out how to create a developer account and app access information for Mendeley, and debug along the way, so that within an hour and a half I had a tool that easily saved me 3 hours on the very first project I used it with.

I’m gushing about this because there’s probably a lot of ideas you have that you immediately throw out as being too hard, or you don’t know how to do it. It takes time, but I’m learning to remember to think “I should ask the LLM this” and ask it questions such as:

  • Is it possible to do X?
  • Write a script to do X.
  • I have X data. Pretend I am someone who doesn’t know how to use Y software and explain how I should do Z.

Another thing I’ve done frequently is ask it to help me quickly write a complex formula to use in a spreadsheet. Such as “write a formula that can be used in Google Sheets to take an average of the values in M3:M84 if they are greater than zero”.

It gives me the formula, and also describes it, and in some cases, gives alternative options.
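For reference, the answer to that particular request is typically a formula along these lines – AVERAGEIF takes a range and a criterion, and averages only the cells in the range that meet the criterion:

    =AVERAGEIF(M3:M84, ">0")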

Other things I’ve done with spreadsheets include:

  • Asking it to write a conditional formatting custom formula, then give me instructions for expanding the conditional formatting to apply to a certain cell range.
  • Asking it to check whether a cell is filled with a particular value and then repeat that value in a new cell, in order to create new data series to use in particular charts and graphs I wanted to create from my data.
  • Having it help me transform my data so I could generate a box and whisker plot.
  • Asking it for other visuals that might be effective ways to illustrate and visualize the same dataset.
  • Having it explain the difference between two similar formulas (e.g. COUNT and COUNTA, or when to use IF vs. IFS – see the quick note after this list).
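On that last point, for reference: COUNT only counts cells containing numbers, while COUNTA counts any non-empty cell; and IF evaluates a single condition, while IFS checks multiple condition/value pairs and returns the result for the first condition that is true.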

This has been incredibly helpful especially with some of my self-tracked datasets (particularly around thyroid-related symptom data) where I’m still trying to figure out the relationship between thyroid levels, thyroid antibody levels, and symptom data (and things like menstrual cycle timing). I’ve used it for creating the formulas and solutions I’ve talked about in projects such as the one where I created a “today” line that dynamically updates in a chart.

It’s also helped me get past the friction of setting up new tools. Case in point, Jupyter notebooks. I’ve used them in the web browser version before, but often had issues running the notebooks people gave me. I debugged and did all kinds of troubleshooting, but have not for years been able to get it successfully installed locally on (multiple of) my computers. I had finally given up on effectively using notebooks and definitely given up on running it locally on my machine.

However, I decided to see if I could get ChatGPT to coax me through the install process.

I told it:

“I have this table with data. Pretend I am someone who has never used R before. Tell me, step by step, how to use a Jupyter notebook to generate a box and whisker plot using this data”

(and I pasted my data that I had copied from a spreadsheet, then hit enter).

It outlined exactly what I needed to do: install Jupyter Notebook locally if I hadn’t (and gave me code to do that), install the R kernel (and told me how to do that), then start a notebook – all the way down to what code to put in the notebook, the transformed data I could copy/paste, and all the code that generated the plot.

However, remember I have never been able to successfully get Jupyter Notebooks running! For years! I was stuck on step 2, installing R. I said:

“Step 2, explain to me how I enter those commands in R? Do I do this in Terminal?”

It said “Oh apologies, no, you run those commands elsewhere, preferably in RStudio. Here is how to download RStudio and run the commands”.

So, like humans often do, it glossed over a crucial step. But it went back and explained it to me and kept giving more detailed instructions and helping me debug various errors. After 5-6 more troubleshooting steps, it worked! And I was able to open Jupyter Notebooks locally and get it working!

All along, most of the tutorials I had been reading had skipped or glossed over that I needed to do something with R, and where that was. Probably because most people writing the tutorials are already data scientists who have worked with R and RStudio etc, so they didn’t know those dependencies were baked in! Using ChatGPT helped me be able to put in every error message or every place I got stuck, and it coached me through each spot (with no judgment or impatience). It was great!

I was then able to continue with the other steps of getting my data transformed, into the notebook, running the code, and generating my first ever box and whisker plot with R!

A box and whisker plot, illustrated simply to show that I used R and Jupyter finally successfully!
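For comparison, the same kind of plot can also be generated in a few lines of Python with matplotlib, if you’d rather skip the R/Jupyter setup entirely – here’s a minimal sketch with made-up numbers, not my actual dataset:

    # boxplot_sketch.py -- minimal box-and-whisker plot in Python with matplotlib
    import matplotlib.pyplot as plt

    before = [12, 15, 14, 18, 21, 13, 16]   # made-up example values
    after = [22, 25, 19, 28, 24, 26, 23]

    plt.boxplot([before, after])                 # one box per list
    plt.xticks([1, 2], ["Before", "After"])      # label the two boxes
    plt.ylabel("Value")
    plt.title("Example box and whisker plot")
    plt.show()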

This is where I really saw the power of these tools, reducing the friction of trying something new (a tool, a piece of software, a new method, a new language, etc.) and helping you troubleshoot patiently step by step.

Does it sometimes skip steps or give you solutions that don’t work? Yes. But it’s still a LOT faster than manually debugging, trying to find someone to help, or spending hours in a search engine or Stack Overflow trying to translate generic code/advice/solutions into something that works on your setup. The beauty of these tools is you can simply paste in the error message and it goes “oh, sorry, try this to solve that error”.

Because the barrier to entry is so low (compared to before), I’ve also asked it to help me with other project ideas where I previously didn’t want to spend the time needed to learn new software and languages and all the nuances of getting from start to end of a project.

Such as, building an iOS app by myself.

I have a ton of projects where I want to temporarily track certain types of data for a short period of time. My fallback is usually a spreadsheet on my phone, but it’s not always easy to quickly enter data in a spreadsheet on your phone, even if you set up a template with a drop-down menu like I’ve done in the past (for my DIY macronutrient tool, for example). In this case, I want to see if there’s a correlation between my blood pressure at different times and patterns of eyelid inflammation and heart rate symptoms (which, for me, are symptoms of thyroid antibodies being out of range, due to Graves’ disease). That means I need to track my symptom data, but now also some blood pressure data. I want to be able to put these datasets together easily, which I can, but the hardest part (so to speak) is finding a way that I am willing to record my blood pressure data. I don’t want to use an existing BP tracking app, I don’t want a connected BP monitor, and I don’t want to use Apple Health. (Yes, I’m picky!)

I decided to ask ChatGPT to help me accomplish this. I told it:

“You’re an AI programming assistant. Help me write a basic iOS app using Swift UI. The goal is a simple blood pressure tracking app. I want the user interface to default to the data entry screen where there should be three boxes to take the systolic, diastolic blood pressure numbers and also the pulse. There should also be selection boxes to indicate whether the BP was taken sitting up or laying down. Also, enable the selection of a section of symptom check boxes that include “HR feeling” and “Eyes”. Once entered on this screen, the data should save to a google spreadsheet.” 

This is a completely custom, DIY, n of 1 app. I don’t care about it working for anyone else, I simply want to be able to enter my blood pressure, pulse, whether I’m sitting or laying down, and the two specific, unique to me symptoms I’m trying to analyze alongside the BP data.

And it helped me build this! It taught me how to set up a new SwiftUI project in XCode, gave me code for the user interface, how to set up an API with Google Sheets, write code to save the data to Sheets, and get the app to run.

(I am still debugging the connection to Google Sheets, so in the interim I changed my mind and had it create another screen to display the stored data then enable it to email me a CSV file, because it’s so easy to write scripts or formulas to take data from two sources and append it together!)

Is it fancy? No. Am I going to try to distribute it? No. It’s meeting a custom need to enable me to collect specific data super easily over a short period of time in a way that my previous tools did not enable.

Here’s a preview of my custom app running in a simulator phone:

Simulator iphone with a basic iOS app that intakes BP, pulse, buttons for indicating whether BP was taken sitting or laying down; and toggles for key symptoms (in my case HR feeling or eyes), and a purple save button.

I did this in a few hours, rather than taking days or weeks. And now, the barrier to entry to creating more custom iOS apps is reduced, because I’m more comfortable working with XCode and the file structures and what it takes to build and deploy an app! Sure, again, I could have learned to do this in other ways, but the learning curve is drastically shortened, and it takes away most of the ‘getting started’ friction.

That’s the theme across all of these projects:

  • Barriers to entry are lower and it’s easier to get started
  • It’s easier to try things, even if they flop
  • There’s a quicker learning curve on new tools, technologies and languages
  • You get customized support and troubleshooting without having to translate through as many generic placeholders

PS – speaking of iOS apps, based on building this one simple app I had the confidence to try building a really complex, novel app that has never existed in the world before! It’s for people with exocrine pancreatic insufficiency like me who want to log pancreatic enzyme replacement therapy (PERT) dosing and improve their outcomes – check out PERT Pilot and how I built it here.

4. Notes about what these tools cost

I found ChatGPT useful for writing projects in terms of getting started, even though the content wasn’t that great (this was on GPT-3.5, too). Then they came out with GPT-4 and made a ChatGPT Pro option for $20/month. I didn’t think it was worth it and resisted it. Then I finally decided to try it, because some of the more sophisticated use cases I wanted it for required a longer context window, and the paid option offered both a better model and a longer context window. I paid the first $20 assuming I’d want to cancel it by the end of the month.

Nope.

The $20 has been worth it on every single project that I’ve used it for. I’ve easily saved 5x that on most projects in terms of reducing the energy needed to start a project, whether it was writing or developing code. It has saved 10x that in time cost recouped from debugging new code and tools.

GPT-4 does have caps, though, so even with the $20/month, you can only do 25 messages every 3 hours. I try to be cognizant of which projects I default to using GPT-3.5 on (unlimited) versus saving the more sophisticated projects for my GPT-4 quota.

For example, I saw a new tool someone had built called “AutoResearcher”, downloaded it, and tried to use it. I ran into a bug and pasted the error into GPT-3.5 and got help figuring out where the problem was. Then I decided I wanted to add a feature to output to a text file, and it helped me quickly edit the code to do that, and I PR’ed it back in and it was accepted (woohoo) and now everyone using that tool can use that feature. That was pretty simple and I was able to use GPT-3.5 for that. But sometimes, when I need a larger context window for a more sophisticated or content-heavy project, I start with GPT-4. When I run into the cap, it tells me when my next window opens up (3 hours after I started using it), and I usually have an hour or two until then. I can open a new chat on GPT-3.5 (without the same context) and try to do things there; switch to another project; or come back at the time it says to continue using GPT-4 on that context/setup.

Why the limit? Because it’s a more expensive model. So you have a tradeoff between paying more and having a limit on how much you can use it, because of the cost to the company.

—–

TLDR:

Most important note: LLMs don’t “think” or “know” things the way humans do. They output language they predict you want to see, based on their training and the inputs you give them. It’s like the autocomplete of a sentence in your email, but with more words and on a wider range of topics!

Also, LLMs can’t do math. But they can write code – including code to do math.

(Some, but not all, LLMs have access to the internet to look up or incorporate facts; make sure you know which LLM you are using and whether it has this feature or not.)

Ways to get started:

    1. The most frequent way I use these AI tools is for getting started on a project, especially those related to writing.
      • Ask it to help you expand on notes; write summaries of existing content; or write sections of content based on instructions you give it
    2.  It can also help analyze data (caution alert!) if you understand the limitations of the LLM.
      • The most effective way to work with data is to have it tell you how to run things in analytical software, whether that’s how to use R or a spreadsheet or other software for data analysis. Remember the LLM can’t do math, but it can write code so you can then do the math!
    3.  You can get an LLM to teach you how to use new tools, solve problems, and lower the barrier to entry (and friction) on using new tools, languages, and software.
      • Build a new habit of asking it “Can I do X” or “Is it possible to do Y” and when it says it’s possible, give it a try! Tell it to give you step-by-step instructions. Tell it where you get stuck. Give it your error messages or where you get lost and have it coach you through the process. 

What’s been your favorite way to use an LLM? I’d love to know other ways I should be using them, so please drop a comment with your favorite projects/ways of using them!

Personally, the latest project that I built with an LLM has been PERT Pilot!
