Assessing the Impact of Diabetes on Gastrointestinal Symptom Severity in Exocrine Pancreatic Insufficiency (EPI/PEI): A Diabetes Subgroup Analysis of EPI/PEI-SS Scores – Poster at #ADA2024

Last year, I recognized that there was a need to improve the documentation of symptoms of exocrine pancreatic insufficiency (known as EPI or PEI). There is no standardized way to discuss symptoms with doctors, and this influences whether or not people get the right amount of enzymes (pancreatic enzyme replacement therapy; PERT) to treat EPI and eliminate symptoms completely. It can be done, but like insulin, it requires matching PERT to the amount of food you’re consuming. I also began observing that EPI is underscreened and underdiagnosed, whether that’s in the general population or in people with diabetes. I thought that if we could create a list of common EPI symptoms and a standardized scale to rate them, this might help address some of these challenges.

I developed this scale to address these needs. It is called the “Exocrine Pancreatic Insufficiency Symptom Score” or “EPI/PEI-SS” for short.

I had a handful of people with and without EPI help me test the scale last year, and then I opened up a survey to the entire world and asked people to share their experiences with GI-related symptoms. I specifically sought people with EPI diagnoses as well as people who don’t have EPI, so that we could compare the symptom burden and experiences to people without EPI. (Thank you to everyone who contributed their data to this survey!)

After the first three weeks, I started analyzing the first set of data. While doing so, I realized that (both because of my network of people with diabetes and because I had posted in at least one diabetes-specific group) I had a large sub-group of people with diabetes who had contributed to the survey, and I was able to do a full subgroup analysis to assess whether having diabetes correlated with a different symptom experience of EPI.

Here’s what I found, and what my poster is about (you can view my poster as a PDF here), presented at ADA Scientific Sessions 2024 (#ADA2024):

1985-LB at #ADA2024, “Assessing the Impact of Diabetes on Gastrointestinal Symptom Severity in Exocrine Pancreatic Insufficiency (EPI/PEI): A Diabetes Subgroup Analysis of EPI/PEI-SS Scores”

Exocrine pancreatic insufficiency has a high symptom burden and is present in as many as 3 of 10 people with diabetes. (See my systematic review from last year here). To help improve conversations about symptoms of EPI, which can then be used to improve screening, diagnosis, and treatment success with EPI, I created the Exocrine Pancreatic Insufficiency Symptom Score (EPI/PEI-SS), which consists of 15 individual symptoms for which people separately rate the frequency (0-5) and severity (0-3) of each symptom they experience, if at all. The frequency and severity are multiplied for an individual symptom score (0-15 possible), and these are added up for a total EPI/PEI-SS score (0-225 possible, because 15 symptoms times 15 possible points per symptom is 225).
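To make the arithmetic concrete, here is a minimal sketch of the scoring described above (a hypothetical illustration, not the official scoring tool; the function name and input format are my own):

```python
# Hypothetical sketch of the EPI/PEI-SS arithmetic described above.
# Each of the 15 symptoms gets a frequency rating (0-5) and a severity
# rating (0-3); the per-symptom score is frequency * severity (0-15),
# and the total is the sum across all 15 symptoms (0-225).

def epi_pei_ss_total(ratings):
    """ratings: list of 15 (frequency, severity) tuples."""
    if len(ratings) != 15:
        raise ValueError("EPI/PEI-SS has exactly 15 symptoms")
    for freq, sev in ratings:
        if not (0 <= freq <= 5 and 0 <= sev <= 3):
            raise ValueError("frequency is 0-5, severity is 0-3")
    return sum(freq * sev for freq, sev in ratings)

# Example: one symptom at maximum (5 * 3 = 15), the rest absent.
print(epi_pei_ss_total([(5, 3)] + [(0, 0)] * 14))  # 15

# Maximum possible: all 15 symptoms at frequency 5, severity 3.
print(epi_pei_ss_total([(5, 3)] * 15))  # 225
```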

I conducted a real-world study of the EPI/PEI-SS in the general population to assess the gastrointestinal symptom burden in individuals with (n=155) and without (n=169) EPI. Because there was a large cohort of people with diabetes (PWD) within these groups, I separately analyzed them to evaluate whether diabetes contributes to a difference in EPI/PEI-SS scores.

Methods:

I calculated EPI/PEI-SS scores for all survey participants. Previously, I had analyzed the differences between people with and without EPI overall. For this sub-analysis, I analyzed and compared PWD (n=118 total), with EPI (T1D: n=14; T2D: n=20) or without EPI (T1D: n=78; T2D: n=6), and people without diabetes (n=206 total) with and without EPI.

I also looked at sub-groups within the non-EPI cohorts and broke them into two groups to see whether other GI conditions contributed to a higher EPI/PEI-SS score and whether we could distinguish EPI from other GI and non-GI conditions.

Results:

People with EPI have a much higher symptom burden than people without EPI. This can be assessed by looking at the statistically significant higher mean EPI/PEI-SS score as well as the average number of symptoms; the average severity score of individual symptoms; and the average frequency score of individual symptoms.

This remains true irrespective of diabetes. In other words, diabetes does not appear to influence any of these metrics.

People with diabetes with EPI had statistically significant higher mean EPI/PEI-SS scores (102.62 out of 225, SD: 52.46) than did people with diabetes without EPI (33.64, SD: 30.38), irrespective of presence of other GI conditions (all group comparisons p<0.001). As you can see below, that is the same pattern we see in people without diabetes. And the stats confirm what you can see: there is no significant difference overall or in any of the subgroups between people with and without diabetes.

Box plot showing EPI/PEI-SS scores for people with and without diabetes, and with and without EPI or other GI conditions. The scores are higher in people with EPI regardless of whether they have diabetes. The plot makes it clear that the scores are distinct between the groups with and without EPI, even when the people without EPI have other GI conditions. This suggests the EPI/PEI-SS can be useful in distinguishing between EPI and other conditions that may cause GI symptoms, and that the EPI/PEI-SS could be a useful screening tool to help identify people who need screening for EPI.

T1D and T2D subgroups were similar (but because the T2D cohort is small, I did not break them out separately in this graph).

For example, people with diabetes with EPI had an average of 12.59 (out of 15) symptoms, with an average frequency score of 3.06, an average severity score of 1.79, and an average individual symptom score of 5.48. This is a pretty clear contrast to people with diabetes without EPI, who had an average of 7.36 symptoms, with an average frequency score of 1.4, an average severity score of 0.8, and an average individual symptom score of 1.12. All comparisons are statistically significant (p<0.001).

A table comparing the average number of symptoms, frequency, severity, and individual symptom scores between people with diabetes with and without exocrine pancreatic insufficiency (EPI). People with EPI have more symptoms and higher frequency and severity scores than people without EPI, regardless of diabetes.

Conclusion 

  • EPI has a high symptom burden, irrespective of diabetes.
  • High scores using the EPI/PEI-SS among people with diabetes can distinguish between EPI and other GI conditions.
  • The EPI/PEI-SS should be further studied as a possible screening method for EPI and assessed as a tool to aid people with EPI in tracking changes to EPI symptoms over time based on PERT titration.

What does this mean if you are a healthcare provider? What actionable information does this give you?

If you’re a healthcare provider, you should be aware that people with diabetes may be more likely to have EPI – rather than celiac or gastroparesis (source) – if they mention having GI symptoms. This means you should incorporate fecal elastase screening into your care plans to help further evaluate GI-related symptoms.

If you want to further improve your pre-test probability before elastase testing, you can use the EPI/PEI-SS with your patients to assess the severity and frequency of their GI-related symptoms. I will explain the cutoff and AUC numbers we calculated, but first a caveat: these were calculated in the initial real-world study, which included people with EPI who are already treating with PERT, so these numbers might change a little when we repeat this study in people with untreated EPI. (I actually predict the mean score will go up in an undiagnosed population, because scores should go down with treatment.) A study in that different population may change the exact cutoff, sensitivity, and specificity numbers, which is why I'm giving this caveat.

That being said: the AUC was 0.85, which means a higher EPI/PEI-SS is pretty good at differentiating between having EPI and not having EPI. In the diabetes sub-population specifically, I calculated a suggested cutoff of 59 (out of 225) with a sensitivity of 0.81 and specificity of 0.75. This means that if people bring up GI symptoms, you have them take the EPI/PEI-SS, and their score is greater than or equal to 59, you would expect that out of 100 people with EPI, 81 would be identified (and 75 of 100 people without EPI would also correctly be identified via scores lower than 59). That doesn't mean that people with EPI can't have a lower score, or that people with a higher score definitely have EPI; but it does mean that the chance of having fecal elastase <=200 ug/g is a lot higher in those with higher EPI/PEI-SS scores.
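As a back-of-the-envelope sketch of what that operating point implies (the cutoff, sensitivity, and specificity values are those reported above; the function names are my own, and these numbers may shift when re-derived in an untreated population):

```python
# Hypothetical sketch of applying the suggested diabetes-subgroup cutoff.
# Cutoff (59), sensitivity (0.81), and specificity (0.75) come from the
# study described above and are subject to revision in future studies.

CUTOFF = 59          # out of 225
SENSITIVITY = 0.81   # P(score >= cutoff, given EPI)
SPECIFICITY = 0.75   # P(score < cutoff, given no EPI)

def flag_for_elastase_screening(epi_pei_ss_score, cutoff=CUTOFF):
    """True if the score suggests follow-up fecal elastase testing."""
    return epi_pei_ss_score >= cutoff

def expected_correct(n_with_epi, n_without_epi):
    """Expected correct identifications at this operating point."""
    true_positives = round(SENSITIVITY * n_with_epi)   # flagged, have EPI
    true_negatives = round(SPECIFICITY * n_without_epi)  # not flagged, no EPI
    return true_positives, true_negatives

# Out of 100 people with EPI and 100 without, as described in the text:
print(expected_correct(100, 100))            # (81, 75)
print(flag_for_elastase_screening(102))      # True (mean score with EPI)
print(flag_for_elastase_screening(34))       # False (mean score without EPI)
```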

In addition to the cutoff score, there is a notable difference between people with diabetes and EPI and people with diabetes without EPI in their top individual symptom scores (representing symptom burden based on frequency and severity). For example, the top 3 symptoms of those with diabetes and EPI include avoiding certain food/groups; urgent bowel movements; and avoiding eating large meals. People with diabetes but without EPI also score "avoid certain food/groups" as their top symptom, but the score is markedly different: a mean score of 8.94 for people with EPI as compared to 3.49 for people without EPI. In fact, the mean score on the lowest individual symptom is higher for people with EPI than the highest individual symptom score for people without EPI.

QR code for EPI/PEI-SS - takes you to https://bit.ly/EPI-PEI-SS-Web

How do you have people take the EPI/PEI-SS? You can pull this link up (https://bit.ly/EPI-PEI-SS-Web), give the link to them and ask them to take it on their phone, or save this QR code and give it to them to take later. The link (and the QR code) go to a free web-based version of the EPI/PEI-SS that will calculate the total EPI/PEI-SS score, and you can use it in shared decision-making processes about whether this person would benefit from a fecal elastase test or other follow-up screening for EPI. Note that the EPI/PEI-SS does not collect any identifiable information and is fully anonymous.

(Bonus: people who use this tool can opt to contribute their anonymized symptom and score data for an ongoing observational study.)

If you have feedback about whether the EPI/PEI-SS was helpful – or not – in your care of people with diabetes; or if you want to discuss collaborating on some prospective studies to evaluate EPI/PEI-SS in comparison to fecal elastase screening, please reach out anytime to Dana@OpenAPS.org

What does this mean if you are a patient (person with diabetes)? What actionable information does this give you?

If you don’t have GI symptoms that bother you, you don’t necessarily need to take action. (Just put a note in your brain that EPI is more likely than celiac or gastroparesis in people with diabetes so if you or a friend with diabetes have GI symptoms in the future, you can make sure you are assessed for EPI.) You can also choose to take the EPI/PEI-SS regardless, and also opt in to donate your data.

If you do have GI symptoms that are annoying, you may want to take the EPI/PEI-SS to help you evaluate the frequency and severity of your GI symptoms. You can take it for free and anonymously – no identifiable information is needed to access the tool. It will generate the EPI/PEI-SS score for you.

Based on the score, you may want to ask your doctor (which could be the doctor that treats your diabetes, a primary/general care provider, or a gastroenterologist – whoever you seek routine care from or have an appointment with next) about your symptoms; share the EPI/PEI-SS score; and explain that you think you may warrant screening for EPI.

(You can also choose to contribute your anonymous symptom data to a research dataset, to help us improve the EPI/PEI-SS and figure out how to improve screening, diagnosis, and treatment of EPI. Remember, this tool will not ask you for any identifying information. This is 100% optional, and you can opt out if you prefer not to contribute to research, while still using the tool.)

You can see a pre-print version of the diabetes sub-study here or pre-print of the general population data here.

If you’re looking for more personal experiences about living with EPI, check out DIYPS.org/EPI. And if you have EPI and are looking to improve your dosing with pancreatic enzyme replacement therapy, you may want to check out PERT Pilot (a free iOS app to record enzyme dosing).

Researchers & clinicians, if you’re interested in collaborating on studies in EPI (in diabetes, or more broadly on EPI), whether specifically on EPI/PEI-SS or broader EPI topics, please reach out! My email is Dana@OpenAPS.org

Pain and translation and using AI to improve healthcare at an individual level

I think differently from most people. Sometimes, this is a strength; and sometimes this is a challenge. This is noticeable when I approach healthcare encounters in particular: the way I perceive signals from my body is different from a typical person. I didn’t know this for the longest time, but it’s something I have been becoming more aware of over the years.

The most noticeable incident that brought me to this realization was when I pitched head first off a mountain trail in New Zealand over five years ago. I remember yelling – in flight – help, I broke my ankle, help. When I had arrested my fall, clung on, and the human daisy chain was pulling me back up onto the trail, I yelped and stopped because I could not use my right ankle to help me climb up the trail. I had to reposition my knee to help move me up. When we got up to the trail and had me sitting on a rock, resting, I felt a wave of nausea crest over me. People suggested that it was dehydration and I should drink. I didn’t feel dehydrated, but ok. Then, because I was able to gently rest my foot on the ground at a normal perpendicular angle, the trail guides hypothesized that it was not broken, just sprained. It wasn’t swollen enough to look like a fracture, either. It hurt really badly, worse than I’d ever hurt an ankle before, and it didn’t feel like a sprain, but I had never broken a bone before, so maybe it was the trauma of the incident contributing to how I was feeling. We taped it and I tried walking. Nope. Too-strong pain. We made a new goal of having me use poles as crutches to get to a nearby stream a half mile away, to try to ice my ankle. Nope, could not use poles as crutches; even partial weight bearing was undoable. I ended up doing a mix of hopping, holding on to Scott and one of the guides. That got exhausting on my other leg pretty quickly, so I also got down on all fours (with my right knee on the ground but lifting my foot and ankle in the air behind me) to crawl some. Eventually, we realized I wasn’t going to be able to make it to the stream, and the trail guides decided to call for a helicopter evacuation. The medics, too, when they arrived via helicopter, thought it likely wasn’t broken. I got flown to an ER and taken to X-Ray.
When the technician came out, I asked her if she saw anything obvious and whether it looked broken or not. She laughed and said oh yes, there’s a break. When the ER doc came in to talk to me he said “you must have a really high pain tolerance” and I said “oh really? So it’s definitely broken?” and he looked at me like I was crazy, saying “it’s broken in 3 different places”. (And then he gave me extra pain meds before setting my ankle and putting the cast on to compensate for the fact that I have high pain tolerance and/or don’t communicate pain levels in quite the typical way.)

A week later, when I was trying not to fall on my broken ankle and broke my toe, I knew instantly that I had broken my toe, both by the pain and the nausea that followed. Years later when I smashed another toe on another chair, I again knew that my toe was broken because of the pain + following wave of nausea. Nausea, for me, is apparently a response to very high level pain. And this is something I’ve carried forward to help me identify and communicate when my pain levels are significant, because otherwise my pain tolerance is such that I don’t feel like I’m taken seriously because my pain scale is so different from other people’s pain scales.

Flash forward to the last few weeks. I have an autoimmune disease causing issues with multiple areas of my body. I have some progressive slight muscle weakness that began to concern me, especially as it spread to multiple limbs and areas of my body. This was followed with pain in different parts of my spine which has escalated. Last weekend, riding in the car, I started to get nauseous from the pain and had to take anti-nausea medicine (which thankfully helped) as well as pain medicine (OTC, and thankfully it also helped lower it down to manageable levels). This has happened several other times.

Some of the symptoms are concerning to my healthcare provider and she agreed I should probably have an MRI and a consult from neurology. Sadly, the first available new patient appointment with the neurologist I was assigned to was in late September. Gulp. I was admittedly nervous about my symptom progression, my pain levels (intermittent as they are), and how bad things might get if we were not able to take any action between now and September. I also, admittedly, was not quite sure how I would cope with the level of pain I have been experiencing at those peak moments that cause nausea.

I had last spoken to my provider a week prior, before the spine pain started. I reached out to give her an update, confirm that my specialist appointment was not until September, and express my concern about the progression and timeline. She too was concerned and I ended up going in for imaging sooner.

Over the last week, because I’ve been having these progressive symptoms, I used Katie McCurdy’s free templates from Pictal Health to help visualize and show the progression of symptoms over time. I wasn’t planning on sending my visuals to my doctor, but it helped me concretely articulate my symptoms and confirm that I was including everything that I thought was meaningful for my healthcare providers to know. I also shared them with Scott to confirm he didn’t think I had missed anything. The icons in some cases were helpful, but in other cases didn’t quite match how I was experiencing pain, so I modified them somewhat to better match my experience.

(PS – check out Katie’s templates here, you can make a copy in Google Drive and use them yourself!)

As I spoke with the nurse who was recording my information at intake for imaging, she asked me to characterize the pain. I did, and explained that it was usually around a 7/10 but periodically gets stronger, to the point of causing nausea, which for me is a broken-bone-level pain response. She asked me to characterize the pain – was it burning, tingling…? None of the words she said matched how it feels. It’s strong pain; it sometimes gets worse. But it’s not any of the words she mentioned.

When the nurse asked if it was “sharp”, Scott spoke up and explained the icon that I had used on my visual, saying maybe it was “sharp” pain. I thought about it and agreed that it was probably the closest word (at least, it wasn’t a hard no like the words burning, tingling, etc. were), and the nurse wrote it down. That became the word I was able to use as the closest approximation to how the pain felt, but again with the emphasis of it periodically reaching nausea-inducing levels equivalent to broken bone pain, because I felt saying “sharp” pain alone did not characterize it fully.

This, then, is one of the areas where I feel that artificial intelligence (AI) gives me a huge helping hand. I often will start working with an LLM (a large language model) and describing symptoms. Sometimes I give it a persona to respond as (different healthcare provider roles); sometimes I clarify my role as a patient or sometimes as a similar provider expert role. I use different words and phrases in different questions and follow ups; I then study the language it uses in response.

If you’re not familiar with LLMs, you should know it is not human intelligence; there is no brain that “knows things”. It’s not an encyclopedia. It’s a tool that’s been trained on a bajillion words, and it learns patterns of words as a result, and records “weights” that are basically cues about how those patterns of words relate to each other. When you ask it a question, it’s basically autocompleting the next word based on the likelihood of it being the next word in a similar pattern. It can therefore be wildly wrong; it can also still be wildly useful in a lot of ways, including this context.
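A toy example can make the "autocompleting the next word" idea concrete. This is a deliberately tiny sketch (a bigram counter over a made-up corpus, nothing like a real LLM's scale), but the core mechanic is the same: observed patterns of words become weights, and the next word is picked by likelihood:

```python
# Toy illustration of "autocompleting the next word": count which word
# follows which in a (made-up) corpus, then pick the most likely
# continuation. Real LLMs use vastly larger learned weights, but the
# core idea -- next-word likelihood from observed patterns -- is the same.
from collections import Counter, defaultdict

corpus = ("the pain is sharp . the pain is burning . "
          "the pain is sharp . pain can be sharp .").split()

follows = defaultdict(Counter)
for word, nxt in zip(corpus, corpus[1:]):
    follows[word][nxt] += 1  # build "weights": counts of next-word patterns

def autocomplete(word):
    """Return the most likely next word observed after `word`."""
    return follows[word].most_common(1)[0][0]

print(autocomplete("is"))    # 'sharp' (seen twice, vs 'burning' once)
print(autocomplete("pain"))  # 'is' (seen three times, vs 'can' once)
```

Note that the model can only ever echo patterns it has seen, which is exactly why it can be wildly wrong and yet still useful for surfacing the vocabulary a domain tends to use.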

What I often do in these situations is not look for factual information. Again, it’s not an encyclopedia. Instead, I observe the LLM using a pattern of words so that I am in turn building my own set of “weights” – meaning, an understanding of the patterns of words it uses – to figure out a general outline of what is commonly known by doctors and the medical literature; the common terminology doctors likely use to take in information and output recommendations; and basically build a list of things that do and do not match my scenario or symptoms or words, or whatever it is I am seeking to learn about.

I can then learn (from the LLM as well as in person clinical encounters) that doctors and other providers typically ask about burning, tingling, etc and can make it clear that none of those words match at all. I can then accept from them (or Scott, or use a word I learned from an LLM) an alternative suggestion where I’m not quite sure if it’s a perfect match, but it’s not absolutely wrong and therefore is ok to use to describe somewhat of the sensation I am experiencing.

The LLM and AI, basically, have become a translator for me. Again, notice that I’m not asking it to describe my pain for me; it would make up words based on patterns that have nothing to do with me. But when I observe the words it uses I can then use my own experience to rule things in/out and decide what best fits and whether and when to use any of those, if they are appropriate.

Often, I can do this in advance of a live healthcare encounter. And that’s really helpful because it makes me a better historian (to use the clinical term, meaning I’m able to report the symptoms, chronology, and characterization more succinctly without clinicians having to play 20 questions to draw it out of me); and it saves me and the clinicians time, letting us move on to other things.

At this imaging appointment, this was incredibly helpful. I had the necessary imaging and had the results at my fingertips and was able to begin exploring and discussing the raw data with my LLM. When I then spoke with the clinician, I was able to better characterize my symptoms in context of the imaging results and ask questions that I felt were more aligned with what I was experiencing, and it was useful for a more efficient but effective conversation with the clinician about what our working hypothesis was; what next short-term and long-term pathways looked like; etc.

This is often how I use LLMs overall. If you ask an LLM if it knows who Dana Lewis is, it “does” know. It’ll tell you things about me that are mostly correct. If you ask it to write a bio about me, it will solidly make up ⅓ of it that is fully inaccurate. Again, remember it is not an encyclopedia and does not “know things”. When you remember that the LLM is autocompleting words based on the likelihood that they match the previous words – and think about how much information is on the internet and how many weights (patterns of words) it’s been able to build about a topic – you can then get a better spidey-sense about when things are slightly more or less accurate at a general level. I have actually used part of a LLM-written bio, but not by asking it to write a bio. That doesn’t work because of made up facts. I have instead asked it to describe my work, and it does a pretty decent job. This is due to the number of articles I have written and authored; the number of articles describing my work; and the number of bios I’ve actually written and posted online for conferences and such. So it has a lot of “weights” probably tied to the types of things I work on, and having it describe the type of work I do or am known for gets pretty accurate results, because it’s writing in a general high level without enough detail to get anything “wrong” like a fact about an award, etc.

This is how I recommend others use LLMs, too, especially those of us who are patients or working in healthcare. LLMs pattern match on words in their training, and they output likely patterns of words. We in turn as humans can observe and learn from the patterns, while recognizing these are PATTERNS of connected words that can in fact be wrong. Systemic bias is baked into human behavior and medical literature, and this has then been pattern-matched by the LLM. (Note I didn’t say “learned”; they’ve created weights based on the patterns they observe over and over again.) You can’t necessarily course-correct the LLM: it’ll pretend to apologize and maybe for a short while adjust its word patterns, but in a new chat it’s prone to make the same mistakes, because the training has not been updated based on your feedback, so it reverts to using the ‘weights’ (patterns) it was trained on. Instead, we need to create more of the correct/right information and have it voluminously available for LLMs to train on in the future. At an individual level, then, we can let go of the obviously not-right things it’s saying and focus on what we can benefit from in the patterns of words it gives us.

And for people like me, with a high (or different type of) pain tolerance and a different vocabulary for what my body is feeling like, this has become a critical tool in my toolbox for optimizing my healthcare encounters. Do I have to do this to get adequate care? No. But I’m an optimizer, and I want to give the best inputs to the healthcare system (providers and my medical records) in order to increase my chances of getting the best possible outputs from the healthcare system to help me maintain and improve and save my health when these things are needed.

TLDR: LLMs can be powerful tools in the hands of patients, including for real-time or ahead-of-time translation and creating shared, understandable language for improving communication between patients and providers. Just as you shouldn’t tell a patient not to use Dr. Google, you should similarly avoid falling into the trap of telling a patient not to use LLMs because they’re “wrong”. Being wrong in some cases and some ways does not mean LLMs are useless or should not be used by patients. Each of these tools has limitations but a lot of upside and benefits; restricting patients or trying to limit use of tools is like limiting the use of other accessibility tools. I spotted a quote from Dr. Wes Ely that is relevant: “Maleficence can be created with beneficent intent”. In simple words, he is pointing out that harm can happen even with good intent.

Don’t do harm by restricting or recommending avoiding tools like LLMs.

Being a raccoon and living with chronic disease

Being a raccoon loading a dishwasher is a really useful analogy for figuring out: what you want to spend a lot of effort and precision on, where you can lower your effort and precision and still obtain reasonable outcomes, or where you can allow someone else to step in and help you when you don’t care how as long as the job gets done.

Huh? Raccoons?!

A few years ago Scott and I spotted a meme/joke going around that in every relationship there is a person who loads the dishwasher precisely (usually “stacks the dishwasher like a Scandinavian architect”) and one who loads the dishwasher like a “raccoon on meth” or a “rabid raccoon” or similar.

Our dishwasher-loading personalities aren’t as opposite on the spectrum as that analogy suggests. However, Scott has a strong preference for how the dishwasher should be loaded, along with a high level of precision in achieving it. I have a high level of precision, but a very low preference for how it gets done. Thus, we have evolved our strategy where I put things in and he re-arranges them. If I put things in with a high amount of effort and a high level of precision? He would rearrange them ANYWAY. So there is no point in me spending high levels of effort to apply precision the first time when that work gets undone. It is more efficient for me to put things in, and then he re-organizes as he sees fit.

Thus, I’ve embraced being the ‘raccoon’ that loads the dishwasher in this house. (Not quite as dramatic as some!)

A ChatGPT-created illustration of a cute raccoon happily loading the dishwasher, which looks fine but not precisely loaded.

This came to mind because he went on a work trip, and I stuck things in the dishwasher for 2 days, and jokingly texted him to “come home and do the dishes that the raccoon left”. He came home well after dinner that night, and the next day texted when he opened the dishwasher for the first time that he had “opened the raccoon cage for the first time”. (LOL).

Over the years, we’ve found other household tasks and chores where one of us has strong preferences about the way things should be done and the other person has less strong preferences. Similarly, there are some things that feel high-effort (and not worth it) for one of us but not the other. Over time, we’ve sorted tasks so things that feel high-effort can be done by the person for whom they don’t feel high-effort, and the preference level determines ‘how’ each gets done. But usually, the person who does a task (because it’s low-effort for them) gets to apply their preferences, unless it’s a really weak preference and the preference of the non-doer doesn’t require additional effort.

Here are some examples of tasks and how our effort/preference splits work out. You can look at this and see that Scott ends up doing the dishwasher organization (after I load it like a raccoon) before starting the dishwasher, and also has stronger preferences about laundry than I do. On the flip side, I find it easier to maintain routines for staying on top of household supply management – buying/re-ordering supplies, then acquiring them and putting them away so we have them ready to go – because those tasks aren’t on a clear scheduled cadence. Ditto for managing the cats’ health via flea/tick medication schedules, scheduling and taking them to the vet, signing them up for cat camp when we travel, coordinating with the human involved in their beloved cat camp, etc. We end up doing a mix of the overall work, split between the two of us.

A four-quadrant grid. Across the top it says "Effort" with low effort on the left and high effort on the right. Along the side it says "Preference" with weak preference on the bottom and strong preference at the top. The implication is you can have a mix of preference and how much work certain chores are. Usually the person in the top left quadrant for a particular chore - representing lower effort and stronger preference - ends up doing that chore. For me that's household supply ordering, managing cat vet appointments, etc.; due to Scott's much stronger preferences, his include the dishwasher, laundry, etc.
Could each of us do those tasks? Sure, and sometimes we do. But we don’t have to each do all of them, all the time, and we generally have a split list of who does which type of things as the primary doer.

Raccoons and burnout with chronic diseases

I have now lived with type 1 diabetes for almost 22 years. When I met Scott, I had been living with diabetes for 11 years. When he asked on one of our early dates what he could do to help, my answer was: “…nothing?” I’m an adult, and I’ve successfully managed my diabetes solo for decades.

Obviously, we ended up finding various ways for him to help, starting with iterating together on technological solutions for remote monitoring (DIYPS) to eventually closing the loop with an automated insulin delivery system (OpenAPS). But for the longest time, I still did all the physical tasks of ordering supplies, physically moving them around, opening them, managing them, etc. both at a 3-month-supply order level and also every 3 days with refilling reservoirs and changing pump sites and sensors.

Most of the time, these decades-long routines are literally routine, and I do them without thought, the same way I put on my shoes before I leave the house. Yet when burnout is approaching – often from the combination of having five autoimmune diseases and having a lot of life going on while also juggling the voluminous ‘routine’ tasks – these can start to feel harder than they should.

Should, being the key word here.

Scott would offer to do something for me and I would say no, because I felt like I “should” do it because I normally can/am able to with minimal effort. However, the activation energy required (because of burnout or volume of other tasks) sometimes changes, and these minimal, low-effort tasks suddenly feel high-effort. Thus, it’s a good time to examine whether someone can – even in the short term and as a one-off – help.

It’s hard, though, to eradicate the “should”. I “should” be able to do X, I “should” be able to handle Y. But honestly? I should NOT have to deal with all the stuff and management of living with 5 autoimmune diseases and juggling them day in and day out. But I do have to deal with these and therefore do these things to stay in optimal health. “Should” is something that I catch myself thinking and now use that as a verbal flag to say “hey, just because I CAN do this usually doesn’t mean I HAVE to do it right now, and maybe it’s ok to take a break from always doing X and let Scott do X or help me with Y.”

Some of these “I should do it” tasks have actually become tasks that I’ve handed off long-term to Scott, because they’re super low effort for him but they’re mildly annoying for me because I have roughly 247 other tasks to deal with (no, I didn’t count them: that would make it 248).

For example, one time I asked him to open my shipment with 3 months’ worth of pump supplies and unbox them so I could put them away. He also carried them into the room where we store supplies and put them where they belonged. Tiny, but huge! Only 246 tasks left on my list. Now, I order supplies, and he unboxes and puts them away and manages the inventory rotation: putting the oldest boxes on top (so I draw from them first) and newer ones on the bottom. This goes for pump supplies, CGM supplies, and anything else mail-ordered like that.

A similar four quadrant chart with the same axes as the other graph, with effort on top (low left; high right) and execution preference (weak bottom, strong top). Similar to chores, we look at how our preferences and how much work it feels like, relative to each other, to decide if there are any tasks I can ask Scott to take on related to chronic disease management (like opening boxes and rotating stock of supplies being lower effort for him than me, due to my overall volume of tasks being higher)

This isn’t always as straightforward, but there are a lot of things I have been doing for 20+ years and thus find very low effort once the supplies are in my hand, like changing my pump site and CGM. So I do those. (If I was incapacitated, I have no doubt Scott could do those if needed.) But there’s other stuff that’s low effort and low preference like the opening of boxes and arranging of supplies that I don’t have to do and Scott is happy to take on to lower my task list of 247 things so that I only have about 240 things left to do for routine management.

Can I do them? Sure. Should I do them? Well, again, I can but that doesn’t mean I have to if there’s someone who is volunteering to help.

And sometimes that help is really useful in breaking down tasks that are USUALLY low effort – like changing a pump site – but become high effort for psychological reasons. Sometimes I’ll say out loud that I need to change my pump site, but I don’t want to. Some of that might be burnout; some of that is the mental energy it takes to figure out where to put the next pump site (and remembering the last couple of placements from previous sites, so I rotate them), combined with the physical activation energy to get up from wherever I am and go pull out the supplies to do it. In these cases, divide and conquer works! Scott is often more than happy to go pull out a pump site and reservoir and place it where it’s convenient for me when I do get up to go do something else. For me, I often do pump site changes (putting a new one on, but keeping the old one on for a few extra hours in case the new one doesn’t work) after my shower, so he’ll grab a pump site and reservoir and set it on the bathroom counter. Barrier removed. Then I don’t have to get up now and do it, but I also won’t forget to do it, because it’s there in flow with my other tasks to do after my shower.

A gif showing a similar four quadrant graph (effort across top, execution preference along the side), showing a task going from the top left (low effort usually, strong preference for how it is done) moving to the right (high effort and still high preference), then showing it being split into two halves, one of which becomes a Scott task because it's lower effort sub-tasks and the remaining part is still high preference for me but has lowered the effort it takes.

There are a lot of chronic disease-related tasks like this, and when I’m starting to feel burnout from the sheer number of tasks, I can look for (or sometimes Scott can spot) opportunities to break a task into multiple steps and do them at different times, or to have someone do the task portions they can do, like getting out supplies. That then lowers the overall effort required to do that task, or lowers the activation energy, depending on the task. A lot of these are simple-ish tasks, like opening something; getting something out and moving it across the house to a key action spot (like the bathroom counter for after a shower); or putting things away when they no longer need to be out. The latter is the raccoon-style approach. A lot of times I’ll have the activation energy to start a task, but not complete it (like breaking down supply boxes for recycling). I’ll set them aside to do later, or Scott will spot this ‘raccoon’ stash of tasks and tackle it when he has time/energy, usually faster than I would get around to it, because he’s not burdened with 240 other tasks like that. (He does of course have a larger pile of tasks than he would without this, but the magnitude of his task list is a lot smaller, because 5 autoimmune diseases vs 0.)

Be a friend to your friend who needs to be a raccoon some of the time

I am VERY lucky to have met, fallen in love with, and married someone who is so incredibly able and willing to help. I recognize not everyone is in this situation. But there may be some ways our friends and family who don’t live with us can help, too. I had a really fantastic example of this lately, where someone who isn’t Scott stepped up and made my raccoon-life instantly better before I even got to the stage of being a raccoon about it.

I have a bunch of things going on currently, and my doctor recommended that I have an MRI done. I haven’t had an MRI in years and the last one was pre-pandemic. Nowadays, I am still masking in any indoor spaces including healthcare appointments, and I plan to mask for my MRI. But my go-to n95 mask has metal in the nose bridge, which means I need to find a safe alternative for my upcoming MRI.

I was busy trying to schedule appointments and hadn’t gotten to the stage of figuring out what I would wear as an alternative for my MRI. But I mentioned to a friend that I was going to have an MRI and she asked what mask I was going to wear, because she knows that I mask for healthcare appointments. I told her I hadn’t figured it out yet but needed to eventually figure it out.

She instantly sprang into action. She looked up options for MRI-safe masks and asked a local friend who uses a CAN99 mask without wire whether the friend had a spare for me to try. She also ordered a sample pack of another n95 mask style that uses adhesive to stick to the face (and thus doesn’t have a metal nose bridge piece). She collected the CAN99 from the local friend, told me when the sample pack would arrive (well over a week before I would need it for the MRI), and offered to bring everything by my house so I had it all as soon as possible.

Meanwhile, I was gobsmacked with relief and appreciation, because I would have been a hot mess of a raccoon still trying to get around to sorting that out days or a week after she had already assembled a variety of options for me to try. Instead, she predicted my raccoon-ness – or was simply being a really amazing friend – and stepped up to take something off my plate so I had one less thing to deal with.

Yay for helpers. In this case, she knew exactly what was needed. But a lot of times, we have friends or family who want to help but don’t know how to or aren’t equipped with the knowledge of what would be helpful. Thus, it’s useful – when you have energy – to think through how you could break apart tasks and what you could offer up or ask as a task for someone else to do that would lower the burden for you.

That might be virtual tasks or physical tasks:

  • It might be coming over and taking a bunch of supplies out of boxes (or medicine) and splitting them up and helping put them in piles or all the places those things need to go
  • It could be researching safe places for you to eat, if you have food allergies or restrictions or things like celiac
  • It could be helping divvy up food into individual portions or whatever re-sizing you need for whatever purpose
  • It could be researching and brainstorming and identifying some safe options for group activities, e.g. finding places with outdoor dining or cool places to walk and hike that suits everyone’s abilities and interests

Sometimes it’s the physical burden that it’s helpful to lift; sometimes it’s the mental energy burden that is helpful to lift; sometimes a temporary relief in all the things we feel like we have to do ourselves is more important than the task itself.

If you have a chronic disease, it’s ok to be a raccoon. There is no SHOULD.

Part of the reason I really like the raccoon analogy is because now instead of being annoyed at throwing things in the dishwasher, because whether I exert energy or not Scott is going to re-load it his way anyway, I put the dishes in without much precision and giggle about being a raccoon.

The same goes for chronic disease related-tasks. Even for tasks where Scott is not involved, but I’m starting to feel annoyed at something I need to do, I find ways to raccoon it a little bit. I change my pump site but leave the supplies on the counter because I don’t HAVE to put those away at the same moment. I usually do, but I don’t HAVE to. And so I raccoon it a bit and put the supplies away later, because it doesn’t hurt anyone or anything (including me) for those not to get put away at the same time. And that provides a little bit of comedic relief to me and lightens the task of changing my pump site.

It also helps me move away from the SHOULD weighing heavily in my brain. I should be able to get all my pump site stuff out, change it, and throw away and put away the supplies when done. It shouldn’t be hard. No one else has this challenge occasionally (or so my brain tells me).

But the burden isn’t about that task alone. It’s one task in the list of 247 things I’m doing every day to take care of myself. And sometimes, my list GROWS. On January 18, my list was about 212 things I needed to do. Beginning January 19, my list jumped up to 230. Last week, it grew again. I have noticed a pattern: when my list of things to do grows, some of the existing “easy” tasks that I’ve done for 20 years suddenly feel hard. Because it is hard to split my energy across more tasks and more things to focus on; it takes time to adapt. And so being a raccoon for some of those tasks, some of the time, provides a helpful steam-valve for some of the pressure of juggling all the tasks, because those tasks 100% don’t have to be done the same way I might do them during “calm” static times when my task list hasn’t suddenly expanded.

And it doesn’t matter what anyone else does or what they care about. Thus, remove the should. You should be able to do this, sure – if you weren’t juggling 246 other things. But you do have 246 things and that blows apart the “should”.

Free yourself of the “should” wherever possible, and be a raccoon wherever it helps.

How to Exercise When Exercise Is Harder Than Your Normal

I’ve been spending a lot of time thinking lately about how to optimize exercise and physical activity when your body doesn’t do what it’s supposed to do (or what you want it to do). We don’t always have control over our bodies; whereas we do, sometimes, have control over our actions and what we can try to do and how we do physical activity. A lot of my strategies for optimizing exercise and physical activity have actually been updating my mental models, and I think they might be useful to other people, too.

But first, let me outline a couple of scenarios and how they differ so we have a shared framework for discussing some of the mental strategies for incorporating activity and exercise into life with chronic diseases like autoimmune diseases.

Let’s imagine you’re running and you come to a cliff.

  • In scenario A, there’s a bridge across to the other side at the same level. It’s no big deal to continue running across and continue on your way.
  • In scenario B, there’s no bridge, and you tumble off the cliff, but then you are able to (eventually) work your way back up to the other side at the same level as the person who could just stroll across the bridge.
  • In scenario C, there’s no bridge but the cliff isn’t as steep of a drop off; instead, it’s like a 2% railroad grade trail sloping away and down. You continue down it, but you end up well below the other side where a bridge would’ve connected, and there’s no way up to that level. The longer you go, the farther you are from that level.
  • In scenario D, there is a cliff that you fall off of, and you pick yourself up and keep going but it’s on that 2% railroad grade sloping away and down. Like scenario C, you end up well below – and even farther below – where you would have been if the bridge had been there.

Illustration of a runner crossing a bridge; running up a slope after the trail drops first then returns to the same height (B); running down a slope that takes them below the target height (C); and a combination of a sharp drop then slope down (D), as explained in more words throughout the blog post.

This is basically illustrative of the different types of situations you can find yourself in related to health status.

  • If all is well, you’re in scenario A: no bumps in the road, you just carry on.
  • Scenario B is like when you have a short-term injury or accident (like breaking your ankle or a toe) where you have a sudden drop in ability but you are able to build back up to the level you were at before. It may take longer and feel like a hard slog, but usually you can get there.
  • Scenario C is when you have a chronic disease (or are experiencing aging over time) where there are small changes in the situation or in your ability. Because of these factors, you end up below where you would like to be.
  • Scenario D is when there’s an acute situation that triggers or results in a significant, sudden drop followed by a chronic state that mimics the downward 2% small change slope that adds up significantly over time, meaning you are well below compared to where you would like to be.

My personal experiences and living in Scenario D

I have dealt with scenario B via a broken ankle and a broken toe in past years. Those stink. They’re hard. But they’re a different kind of hard than scenario C and scenario D, where I’ve found myself in the last few years and more acutely, I now am clearly operating in scenario D: I have had an acute drop-off in lung function and have autoimmune diseases that are affecting my ability to exercise, especially as compared to “before”. In fact, I keep having cycles of scenario D where my VO2 max drops off a cliff (losing a full point or more) within 2-3 days, then plateaus at the low level during the length of that round of symptoms, before maybe responding to my efforts to bring it back up. And it doesn’t always go back up or respond to exercise the way it used to do, “before”, because well, my lungs don’t work like they used to.

It’s been pretty frustrating. I want to keep building on the hard work I’ve put into my last 2-3 years of ultrarunning. Last year around this time, I ran a personal best 100k (62 miles) and beat my brother-in-law’s 100k time. I’m pretty proud of that because I’m pretty slow; but in ultras, if you pace well and fuel well, you can beat faster runners (as opposed to much shorter distances, where speed matters more!).

This year, however, I can barely trek out – on the best day – for a 4-mile run. I had originally envisioned that, due to my fitness level and cumulative mileage build-up, I would be able to train for and run a fast marathon (26.2 miles / ~42k) this spring, and that was supposed to be what I was training for. (Fast being “fast for me”.) But instead of running ~30-40 miles a week, I have been running 8-16 miles per week, and I have only clocked half of the total mileage I had done by this point last year. Argh. I didn’t expect to do as much overall, but 210 instead of 420 miles by the beginning of April shows how different it’s been and how limited I have been. I’ve dropped the scheduled plan for marathon training – or any hopes of ultra training this year, unless something changes drastically in a positive way that I’m not expecting.

I finally realized that comparing my abilities to “before” is the crux of a lot of my angst. It is a little hard when you realize over time (scenario C) that you can’t do something that you think you should be able to. For example, me trying to run fast: it just has never worked the way training to run fast seems to work for other people. Eventually, in “before times”, I had settled into a strategy of running far, but doing so more slowly, and that’s turned out to be way more fun for me. But when you have an acute adjustment in ability that isn’t like scenario B (e.g. you can expect to regain strength/function/ability over time), it’s really hard to wrap your brain around. And comparisons to ‘before’ feel inevitable. They’re probably part of the grieving process in recognizing that things have changed. But at some point, it’s helpful to recognize and picture that you ARE in scenario D. This includes grappling with and accepting the fact that something has changed; and you likely do not have control over it.

I have updated my mental model with some strategies, to help me frame and recognize that on bad days, I don’t have to push myself (even if deep down I want to, because I want to rebuild/gain fitness to where I “should” be) – and that I should save that strategy for “good” days.

Here’s what I’ve landed on, for general strategy approach, which applies to whatever activity that I ultimately choose for the day:

Overlapping circles of good days and bad days, showing that regardless of which day it is, I still go out every day. Strategies for 'bad' days include lowering expectations; changing activities; pacing slower; taking breaks; turning around; and not comparing to 'before'. Good/better days can involve a slow start but speed up or add distance if it feels good, as long as I pace/do it in a way that doesn't overdo it such that I can't be active as desired any following day.

The other thing I have struggled with, in addition to comparing distance, time, and pacing to “before” abilities, is not having a training plan or schedule. Because my ‘good’ days (where my lungs do not seem to limit my activity) are unpredictable, I can’t build a training schedule and build up mileage/ability the way I used to. Ultimately, I have had to land on a strategy that I don’t like but accept is the most feasible one for now (suggested by Scott): have a “checklist” of activities for my ‘good days’, and a checklist of activities for my ‘bad days’. This has helped me separate my before-desire for running as my primary activity (and thinking about the running ‘schedule’ I wish I could go back to), and instead be more realistic on the day-of about what activities are ideal for the type of day I’m actually dealing with.

For example, on my worst days, I cannot run for 30 seconds without gasping for breath and any type of intensive activity (anything more than a really slow meandering walk or a few seconds of a really slow run) feels terrible. Walking feels yuck too but it’s tolerable when I go slow enough, even though my lungs still feel physically uncomfortable inside my rib cage. On medium bad days, I maybe can do a slow, easy, short run with 20 seconds run intervals; a walk; an easy super slow hike with lots of stopping; or an e-bike ride; or easy pace cross-country skiing (when it was winter). On good days? I can do anything! Which means I can hike more elevation at clippier paces (and I can actually push myself on pace) or run with some modicum of effort above a snail’s pace or run a snail’s pace that doesn’t hurt for 30 second intervals. Those are my favorite activities, so those are high on my list (depending on whether it’s the weekday or weekend) to try to do when I’m feeling good. On the bad days or less good days, I take whatever activity is available to me however I can get it.

Activity choice check list for really bad days (e.g. walk or easy e-bike) vs less bad days (slow, easy short run or very slow hike or easy ski) versus the better days where I can run, hike longer/faster, and ski any distance I want.

There are tons of activities, so if you’re reading this, know that I’m making this list based on MY favorite types of activities (and the climate I live in). You should make your own list of activities and sort them if that’s helpful, so you know which ones bring joy even on the worst days; those are the ones to prioritize figuring out how to do more of, as the days permit.

Some of this stuff maybe seems “duh” and super intuitive to a lot of people, especially if you’re not living in Scenario D. (Hello to everyone in Scenario A!) But when you’ve been thrust off a metaphorical cliff into Scenario D, and there’s no way to do what you did “before”, figuring out how to pace and push yourself to regain what fitness you can OR preserve basic health functionality as long as you can is all an experiment: balancing the amount of activity that pushes you in a positive way and builds strength, fitness, and health against going past the point where it causes short-term harm (to the point where it impedes your activity the following days) and/or long-term harm (e.g. further hurting your lungs or other body parts in a way that is either irreversible or hard to recover from).

The pep talk I wish I got that I’m giving to you now

Before I lived in Scenario D (lung stuff), I lived a lot in Scenario C: running with type 1 diabetes AND celiac AND Graves’ AND exocrine pancreatic insufficiency (which means I have to juggle glucose management while only eating gluten free, and calculate and take enzymes for any of that gluten free food I eat as fuel while running) was a lot to juggle, in and of itself. I often thought about how much I was juggling while running along, while recognizing that a lot of that juggling was invisible from the outside. Which made me think and observe that even though it felt like every other runner was flying by me without the exact same set of balls to juggle, some of those runners WERE probably juggling their own health stuff and limitations, too (or are parents juggling jobs and kid schedules and running, etc.). Everyone’s got baggage of some sort that they’re carrying, or is juggling things in a different way. So, juggling is hard. Kudos to everyone for getting out there and juggling with what they’ve got.

But especially now in Scenario D, it’s even more important to me that it’s not about being out there running certain paces or hiking certain distances: getting out there AT ALL is the entire point. And I’ve made it my mission to try to compliment people for getting out there, when it feels appropriate to do so.

Last week, I was handed the perfect opportunity, and it turned out to be the best conversation I’ve had in a long time. A woman was coming uphill and commented that I had not forgotten my hiking poles like she had. I said yeah, they make a difference going downhill as well as up! She said something about huffing and puffing because she has asthma. DING DING: opportunity to celebrate her for being out there hiking uphill, even with asthma. (I pretty much said that and complimented her). She and Scott were trading comments about it being the beginning of hiking season and how they had forgotten their hiking poles and we told them we were making a list throughout the hike of everything else we had forgotten. They mentioned that they were 70 (wow!) and 75 (wow!) and so they didn’t think they needed walkie talkies because they would not separate on the trail (one of the things that we forgot to bring in case Scott mountain-goated-ahead of me on the trail at any point). We gave them our sincere compliments for being out there hiking (because, goals! I am aiming hard and working hard to get to the age of 70 and be able to hike like that!). She talked about it being hard because she has asthma and was struggling to breathe at first before she remembered to take her albuterol…and I pointed out that even if she was struggling and had to stop every few minutes, it didn’t matter: she was out there, she was hiking, and it doesn’t matter how long it takes! She thought that was the best thing to hear, but it was really what I try to tell myself because I love to hear it, too, which is celebrating going and not worrying about pace/slow/whatever. I told her I had a lung condition too (she’s the first stranger I’ve ever told) and she asked if I was stopping every 2 minutes and whether I had taken an inhaler. I explained most of my lung condition doesn’t respond to an inhaler but that yes, I too had to stop and catch my breath. 
But it was an awesome, gorgeous day, worth hiking in, and I was glad I had gone up. Ultimately, she said a lot of things that made it seem like my shared experience helped her – but in turn, seeing her and talking to her helped ME just as much, because it reminded me that yes, everyone else is juggling things while hiking too. And it’s really not about speed/pace/time; it’s absolutely about being out there and enjoying it.

So that’s what I’m trying to do: I’m trying to move beyond the comparison from what I did before, and simply compare to “am I going out at all and trying”. Trying = winning; going = winning, and that’s the new mental model that has been working really well for me as I spend more time in Scenario D.

PS – if you read this and are in a similar situation of Scenario B, C, or D and want a virtual high five and to feel “seen” for what you’re working through – feel free to comment here or email any time. I see you going out and trying; which means you’re winning! And I’m happy to give a virtual comment the way I am trying to give comments out on the trails and compliment folks for the process of being out moving through the world in all the ways that we can, however we can. 

New Systematic Review Showing General Population Prevalence of Exocrine Pancreatic Insufficiency Is Higher Than In Co-Conditions

For those unfamiliar with academic/medical journal publishing: it is slow. Very slow. I did a systematic review on EPI prevalence and submitted it to a journal on May 5, 2023. It underwent peer review and a round of revisions and was accepted on July 13, 2023. (That part is actually relatively quick.) However, it sat, and sat, and sat, and sat, and sat. I was impatient and wrote a blog post last year about the basic premise of the review, which is that despite commonly repeated statements about the prevalence of EPI being so high in co-conditions that those conditions must therefore be the biggest drivers of EPI… this is unlikely to be true, because it is mathematically improbable.

And then this paper still sat several more months until it was published online ahead of print…today! Wahoo! You can read “An Updated Review of Exocrine Pancreatic Insufficiency Prevalence finds EPI to be More Common in General Population than Rates of Co-Conditions” in the Journal of Gastrointestinal and Liver Diseases, ahead of print (scheduled for the March 2024 issue).

It’s open access (and I didn’t have to pay for it to be!), so click here to go read it and download your own PDF copy of the article there. (As a reminder, I also save a version of every article including those that are not open access at DIYPS.org/research, in case you’re looking for this in the future or want to read some of my other research.) If you don’t want to read the full article, here’s a summary below and key takeaways for providers and patients (aka people like me with EPI!).

I read and systematically categorized 649 articles related to exocrine pancreatic insufficiency, which is known as EPI or PEI depending on where in the world you are. EPI occurs when the pancreas no longer produces enough enzymes to successfully digest food completely; when this occurs, pancreatic enzyme replacement therapy (PERT) is needed. This means swallowing enzyme pills every time you eat or drink something with fat or protein in it.

Like many of my other EPI-related research articles, this one found that EPI is underdiagnosed; undertreated; treatment costs are high; and prevalence is widely misunderstood, possibly leading to missed screening of key populations.

  • Underdiagnosis – for a clearer picture and a specific disease-related example of how EPI is likely underdiagnosed in a co-condition, check out my other systematic review specifically assessing EPI in diabetes. I show in that paper how EPI is likely many times more common than gastroparesis and celiac disease, yet it’s less likely to be screened for.
  • Undertreated – another recent systematic review that I wrote after this paper (but was published sooner) is this systematic review on PERT dosing guidelines and dosing literature, showing how the overwhelming majority of people are not prescribed enough enzymes to meet their needs. Thus, symptoms persist and the literature continues to state that symptoms can’t be managed with PERT, which is not necessarily true: it just hasn’t been studied correctly with sufficient titration protocols.
  • PERT costs are high – I highlight that although PERT costs continue to rise each year, there are studies in different co-condition populations showing PERT treatment is cost-effective and in some cases reduces the overall cost of healthcare. It’s hard to believe when we look at the individual out of pocket costs related to PERT sometimes, but the data more broadly shows that PERT treatment in many populations is cost-effective.
  • Prevalence of EPI is misunderstood. This is the bulk of the paper and goes into a lot of detail showing how the general population estimate of EPI may be as high as 11-21%. In contrast, although the prevalence of EPI is much higher within certain co-conditions, those conditions make up such a small fraction of the general population that they are also likely a small fraction of the EPI population.

As I wrote in the paper:

“The overall population prevalence of cystic fibrosis, pancreatitis, cancer, and pancreatic-related surgery combined totals <0.1%, and the lower end of the estimated overall population prevalence of EPI is approximately 10%, which suggests less than 1% of the overall incidence of EPI occurs in such rare co-conditions.

We can therefore conclude that 99% of EPI occurs in those without a rare co-condition.”

I also pointed out the mismatch between research prioritization and funding to date in EPI: 56-85% of EPI-related research is focused on co-conditions representing less than ~1% of the overall population with EPI.

So what should you take away from this research?

If you are a healthcare provider:

Make sure you are screening people who present with gastrointestinal symptoms with a fecal elastase test to check for EPI. Weight loss and malnutrition do not always occur with EPI (which is a good thing, because it means EPI can be caught earlier), and similarly, not everyone has diarrhea as their hallmark symptom. Messy, smelly stools are commonly described by people with EPI, among other symptoms such as excess gas and bloating.

Remember that conditions like diabetes have a high prevalence of EPI – it’s not just chronic pancreatitis or cystic fibrosis.

If you do have a patient that you are diagnosing or have diagnosed with EPI, make sure you are aware of the current dosing guidelines (see this systematic review) and 1) prescribe a reasonable minimal starting dose; 2) tell the patient when/how they can adjust their PERT on their own and when to call back for an updated prescription as they figure out what they need; and 3) tell them they will likely need an updated prescription and that you are ready to support them when they do.

If you are a person living with EPI:

Most people with EPI are not taking enough enzymes to eliminate their symptoms. Dose timing matters (take it with/throughout meals), and the quantity of PERT matters.

If you’re still having symptoms, you may still need more enzymes.

Don’t compare what you are doing to what other people are taking: it’s not a moral failing to need a different amount of enzymes (or insulin, for that matter, or any other medication) than another person! It also likely varies by what we are eating, and we all eat differently.

If you’re still experiencing symptoms, you may need to experiment with a higher dose. If you still have symptoms or have new symptoms that start after taking PERT, you may need to try a different brand of PERT. Some people do well on one but not another, and there are different kinds you can try – ask your doctor.

How to cite this systematic review:

Lewis D. An Updated Review of Exocrine Pancreatic Insufficiency Prevalence finds EPI to be More Common in General Population than Rates of Co-Conditions. Journal of Gastrointestinal and Liver Diseases. 2024. DOI: 10.15403/jgld-5005

For other posts related to EPI, see DIYPS.org/EPI for more of my personal experiences with EPI and other plain-language research summaries.

For other research articles, see DIYPS.org/research

A systematic review shows EPI is more common in the general population than rates of co-conditions suggest

A Slackbot for using Slack to access and use a chat-based LLM in public

I’ve been thinking a lot about how to help my family, friends, and colleagues use LLMs to power their work. (As I’ve written about here, and more recently here with lots of tips on prompting and effectively using LLMs for different kinds of projects). 

Scott has been on the same page, especially thinking about how to help colleagues use LLMs effectively, but taking a slightly different approach: he built a Slackbot (a bot for Slack) which uses GPT-3.5 and GPT-4 to answer questions. This uses the GPT API but presents it to the user in Slack instead of requiring ChatGPT as the chat interface. So, it’s an LLM chatbot, different from ChatGPT (because it’s a different chat interface), but it uses the same AI (GPT-3.5 and GPT-4 from OpenAI). You could implement the same idea (a chat-based bot in Slack) using different AIs/LLMs, of course.

Using a Slack-based bot for an LLM achieves several things:

  1. More people can try GPT-4 and compare it to GPT-3.5 to get a taste for prompting and responses, without having to pay $20/month for a ChatGPT Pro account to get access to GPT-4.
  2. If you spend a lot of time in Slack for work, you don’t have to switch windows to access GPT.
  3. If your employer doesn’t want you to use the ChatGPT consumer product for work due to security concerns, but is more comfortable with OpenAI’s confidentiality guarantees for their API, a Slack bot interface provides a more secure alternative.
  4. You can see “in public” how other people in your Slack workspace are using it, and learn from their prompts and interactions new tricks and ideas (in addition to reading my long post here with lots of tips and suggestions).
  5. You can privately (in DM by yourself) talk to the bot and ask it your “dumb” questions (which aren’t dumb), and you can reason through things at your own speed or learn how to do things that you haven’t done before, and do so in private just like if you had your own ChatGPT account. It doesn’t burn social capital or take other people’s time, which you can save for more nuanced questions.

Scott has implemented it for work, but also came up with another fun use for it: we created a family-focused Slack channel for kids in our lives (we have ten (!) nieces and nephews – you can see how I used GPT to help with special custom books for some of them recently) so they and their parents can access GPT in a safe environment, too. ChatGPT (and OpenAI’s GPT API) already have some reasonable safeguards built in to protect kids, but more usefully, we want to be able to provide custom instructions to keep it always at a kid-focused level. Plus, this way we can share the experience with them, too.

I asked the bot to summarize itself for a lay audience not familiar with the project, and this was the output:

### Summary of the SlackAskBot

The SlackAskBot is a smart assistant designed to operate within Slack, a popular communication platform used by teams for messaging and collaboration. Imagine having a helpful robot in your group chats or direct messages that can understand your questions and provide informative responses. That’s what the SlackAskBot does. It’s like adding a knowledgeable team member to your Slack workspace who is always ready to assist.

#### How It Works:

  1. **Integration with Slack:**

– The bot is integrated into Slack, meaning it can read messages in channels it’s added to or when directly messaged. It’s like inviting a friend into your group chat who can listen and respond to conversations.

  2. **Understanding Messages:**

– When you type a message that includes the bot (either by mentioning it with an @ symbol in a channel or by sending it a direct message), the bot pays attention. It’s designed to understand that you’re talking to it and that you’re expecting some form of response.

  3. **Processing Your Requests:**

– Once the bot receives a message, it doesn’t just look at the latest message you’ve sent. It considers the context by looking at the conversation history. This way, it can provide responses that make sense within the ongoing discussion. It’s akin to a human joining a conversation, where understanding the previous discussion points is crucial for making relevant comments.

  4. **Generating Responses:**

– The bot uses a powerful language model (think of it as a very advanced AI that understands and generates human-like text) to come up with responses to your messages. It’s as if the bot thinks about what you’ve asked and then crafts a reply that aims to be helpful and informative.

  5. **Interacting Back:**

– After thinking through your message and generating a response, the bot then sends this response back into the Slack channel or direct message. You’ll see its reply just like you would from any other team member, providing the information or assistance you requested.

  6. **Special Features:**

– The bot is designed with specific settings for different channels or direct messages. This means it can tailor its responses based on where it’s talking. For example, it might respond differently in a casual chat compared to a more formal project discussion channel.

  7. **Respecting Privacy and Permissions:**

– The bot is programmed to respect Slack’s rules and the privacy of your conversations. It only listens and responds to messages where it’s explicitly mentioned or directly messaged, ensuring that it doesn’t intrude on private discussions.

#### In Summary:

The SlackAskBot acts as a virtual assistant within your Slack workspace, ready to provide information and answers to your questions. It’s designed to understand the context of conversations, generate helpful responses, and interact with users in a way that enhances team communication and productivity. Whether you’re asking for quick facts, seeking advice on a project, or just interacting for fun, the SlackAskBot is there to make your Slack experience more informative and engaging.


The SlackAskBot is open source – you can check out the code and instructions for how to implement it in your own Slack environment in the repository here.
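The repository has the full implementation; as a rough sketch of the “considers the conversation history” mechanic described above (the function and variable names here are hypothetical, not taken from the actual SlackAskBot code), turning a Slack thread into the message list a chat-completion API expects might look like:

```python
# Hypothetical sketch: converting Slack thread history into chat-API messages.
# Names (build_messages, bot_user_id) are illustrative, not from the real repo.

def build_messages(thread, bot_user_id, system_prompt):
    """Convert Slack thread messages into a chat-completion message list.

    Each Slack message dict has a "user" id and "text". Messages the bot
    itself sent become "assistant" turns; everything else is a "user" turn.
    """
    messages = [{"role": "system", "content": system_prompt}]
    for msg in thread:
        role = "assistant" if msg["user"] == bot_user_id else "user"
        # Strip the @-mention of the bot so the model sees clean text.
        text = msg["text"].replace(f"<@{bot_user_id}>", "").strip()
        messages.append({"role": role, "content": text})
    return messages

thread = [
    {"user": "U111", "text": "<@UBOT> what is a context window?"},
    {"user": "UBOT", "text": "It's the text the model can 'see' at once."},
    {"user": "U111", "text": "<@UBOT> and why does it matter?"},
]
msgs = build_messages(thread, "UBOT", "You are a helpful assistant.")
```

The resulting list is what would get passed to the chat-completion endpoint; the same pattern works with any chat-style LLM API.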

We are using Slack as a chat interface for LLMs like GPT, making the user experience similar to ChatGPT

Effective Pair Programming and Coding and Prompt Engineering and Writing with LLMs like ChatGPT and other AI tools

I’ve been puzzled when I see people online say that LLMs “don’t write good code”. In my experience, they do. But given that most of these LLMs are used in chatbot mode – meaning you chat and give it instructions to generate the code – that might be where the disconnect lies. To get good code, you need effective prompting, and to do so, you need clear thinking and ideas on what you are trying to achieve and how.

My recipe and understanding is:

Clear thinking + clear communication of ideas/request = effective prompting => effective code and other outputs

It also involves understanding what these systems can and can’t do. For example, as I’ve written about before, they can’t “know” things (although they can increasingly look things up) and they can’t do “mental” math. But, they can generally repeat patterns of words to help you see what is known about a topic and they can write code that you can execute (or it can execute, depending on settings) to solve a math problem.
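As a small, made-up illustration of that last point: instead of trusting an LLM’s “mental” arithmetic, ask it for code and run the code yourself.

```python
# LLMs can't reliably do arithmetic "in their heads," but they can write
# code that computes the answer exactly. The numbers here are arbitrary.
a = 123456789
b = 987654321
product = a * b  # Python evaluates this exactly, no matter how large
```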

What the system does well is help code small chunks, walk you through processes to link these sections of code up, and help you implement them (if you ask for it). The smaller the task (ask), the more effective it is. Or, at least, the easier it is for you to see when it completes the task and when it hasn’t been able to finish due to limitations like response length limits; information falling out of the context window (what it knows from what you’ve told it); unclear prompting; and/or because you’re asking it to do things for which it doesn’t have expertise. Some of the last part – lack of expertise – can be improved with specific prompting techniques, and that’s also true for right-sizing the task it’s focusing on.

Right-size the task by giving a clear ask

If I were to ask an LLM to write me code for an iOS app to do XYZ, it could write me some code, but it certainly wouldn’t (at this point in history, written in February 2024) write all the code and give me a downloadable file with the ability to simply run it. What it can do is start writing chunks and snippets of code for bits and pieces of files that I can take, place, and build upon.

How do I know this? Because I made that mistake when trying to build my first iOS apps in April and May 2023 (last year). It couldn’t do that (and still can’t today; I repeated the experiment). I had zero idea how to build an iOS app; I had a sense that it involved XCode and pushing to the Apple iOS App Store, and that I needed “Swift” as the programming language. Luckily, though, I had a much stronger sense of how I wanted to structure the app user experience and what the app needed to do.

I followed these steps:

  1. First, I initiated chat as a complete novice app builder. I told it I was new to building iOS apps and wanted to use XCode. I had XCode downloaded, but that was it. I told it to give me step by step instructions for opening XCode and setting up a project. Success! That was effective.
  2. I opened a different chat window after that, to start a new chat. I told it that it was an expert in iOS programming using Swift and XCode. Then I described the app that I wanted to build, said where I was in the process (e.g. had opened and started a project in XCode but had no code yet), and asked it for code to put on the home screen so I could build and open the app and it would have content on the home screen. Success!
  3. From there, I was able to stay in the same chat window and ask it for pieces at a time. I wanted to have a new user complete an onboarding flow the very first time they opened the app. I explained the number of screens and content I wanted on those screens; the chat was able to generate code, tell me how to create that in a file, and how to write code that would trigger this only for new users. Success!
  4. I was able to then add buttons to the home screen; have those buttons open new screens of the app; add navigation back to the home; etc. Success!
  5. (Rinse and repeat, continuing until all of the functionality was built out a step at a time).

To someone with familiarity building and programming things, this probably follows a logical process of how you might build apps. If you’ve built iOS apps before and are an expert in Swift programming, you’re either not reading this blog post or are thinking I (the human) am dumb and inexperienced.

Inexperienced, yes, I was (in April 2023). But what I am trying to show here is for someone new to a process and language, this is how we need to break down steps and work with LLMs to give it small tasks to help us understand and implement the code it produces before moving forward with a new task (ask). It takes these small building block tasks in order to build up to a complete app with all the functionality that we want. Nowadays, even though I can now whip up a prototype project and iOS app and deploy it to my phone within an hour (by working with an LLM as described above, but skipping some of the introductory set-up steps now that I have experience in those), I still follow the same general process to give the LLM the big picture and efficiently ask it to code pieces of the puzzle I want to create.

As the human, you need to be able to keep the big picture – full app purpose and functionality – in mind while subcontracting with the LLM to generate specific chunks of code that achieve new functionality in your project.

In my experience, this is very much like pair programming with a human. In fact, this is exactly what we did when we built DIYPS over ten years ago (wow) and then OpenAPS within the following year. I’ve talked endlessly about how Scott and I would discuss an idea and agree on the big picture task; then I would direct sub-tasks and asks that he, then also Ben and others would be coding on (at first, because I didn’t have as much experience coding and this was 10 years ago without LLMs; I gradually took on more of those coding steps and roles as well). I was in charge of the big picture project and process and end goal; it didn’t matter who wrote which code or how; we worked together to achieve the intended end result. (And it worked amazingly well; here I am 10 years later still using DIYPS and OpenAPS; and tens of thousands of people globally are all using open source AID systems spun off of the algorithm we built through this process!)

Two purple boxes. The one on the left says "big picture project idea" and has a bunch of smaller size boxes within labeled LLM, attempting to show how an LLM can do small-size tasks within the scope of a bigger project that you direct it to do. On the right, the box simply says "finished project".

Today, I would say the same is true. It doesn’t matter – for my types of projects – if a human or an LLM “wrote” the code. What matters is: does it work as intended? Does it achieve the goal? Does it contribute to the goal of the project?

Coding can be done – often by anyone (human with relevant coding expertise) or anything (LLM with effective prompting) – for any purpose. The critical key is knowing what the purpose is of the project and keeping the coding heading in the direction of serving that purpose.

Tips for right-sizing the ask

  1. Consider using different chat windows for different purposes, rather than trying to do it all in one. Yes, context windows are getting bigger, but you’ll still likely benefit from giving different prompts in different windows (more on effective prompting below). Start with one window for getting started with setting up a project (e.g. how to get XCode on a Mac and start a project; what file structure to use for an app/project that will do XYZ; how to start a Jupyter notebook for doing data science with Python; etc.) or for brainstorming ideas to scope your project; then use a separate one for starting a series of coding sub-tasks (e.g. write code for the home page screen for your app; add a button that allows voice entry functionality; add in HealthKit permission functionality; etc.) that serve the big picture goal.
  2. Make a list for yourself of the steps needed to build a new piece of functionality for your project. If you know what the steps are, you can specifically ask the LLM for that. Again, use a separate window if you need to. For example, if you want to add the ability to save data to HealthKit from your app, you might start a new chat window that asks the LLM, generally, how one adds HealthKit functionality to an app. It’ll describe the process: certain settings that need to be changed in XCode for the project; adding code that prompts the user for the correct permissions; and then code that actually does the saving/revising to HealthKit.

    Make your list (by yourself or with help), then you can go ask the LLM to do those things in your coding/task window for your specific project. You can go set the settings in XCode yourself, and skip to asking it for the task you need it to do, e.g. “write code to prompt the user with HealthKit permissions when button X is clicked”.

    (Sure, you can do the ask for help in outlining steps in the same window that you’ve been prompting for coding sub-tasks, just be aware that the more you do this, the more quickly you’ll burn through your context window. Sometimes that’s ok, and you’ll get a feel for when to do a separate window with the more experience you get.)

  3. Pay attention as you go and see how much code it can generate and when it falls short of an ask. This will help you improve the rate at which future asks get fully completed. I observe that when I don’t know – due to my lack of expertise – the right size of a task, it’s more prone to give me ½-⅔ of the code and solution but need additional prompting after that. Sometimes I ask it to continue where it cut off; other times I start implementing/working with the bits of code (the first ⅔) it gave me, and keep a mental or written note that this did not completely generate all steps/code for the functionality and to come back. Part of why it is sometimes effective to get started with ⅔ of the code is because you’ll likely need to debug/test the first bit of code anyway. Sometimes when you paste in code, it’s using methods that don’t match the version you’re targeting (e.g. functionality that is outdated as of iOS 15 when you’re targeting iOS 17 and newer) and it’ll flag a warning or block it from working until you fix it.

    Once you’ve debugged/tested as much as you can of the original ⅔ of code it gave you, you can prompt it to say “Ok, I’ve done X and Y. We were trying to (repeat initial instructions/prompt) – what are the remaining next steps? Please code that.” to go back and finish the remaining pieces of that functionality.

    (Note that saying “please code that” isn’t necessarily good prompt technique, see below).

    Again, much of this is paying attention to how the sub-task is getting done in service of the overall big picture goal of your project; or the chunk that you’ve been working on if you’re building new functionality. Keeping track with whatever method you prefer – in your head, a physical written list, a checklist digitally, or notes showing what you’ve done/not done – is helpful.

Most of the above I used for coding examples, but I follow the same general process when writing research papers, blog posts, research protocols, etc. My point is that this works for all types of projects that you’d work on with an LLM, whether the output generation intended is code or human-focused language that you’d write or speak.

But, coding or writing language, the other thing that makes a difference, in addition to right-sizing the task, is effective prompting. I’ve noticed intuitively that it has made the biggest difference in getting output from my projects that matches my expectations and expertise. Conversely, I have peer reviewed papers for medical journals that did a horrifying job with prompting. You’ll hear people talk about “prompt engineering,” and this is what it refers to: how do you engineer (write) a prompt to get the ideal response from the LLM?
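Mechanically, with a chat-style API, a persona is usually just the system message, and your own role becomes part of the user turn. A minimal illustrative sketch (the helper name and wording are mine, not from any particular library):

```python
# Illustrative only: a "persona" prompt is typically sent as the system
# message in a chat-completion payload; your own role/context rides along
# in the user turn. The helper name (make_prompt) is hypothetical.

def make_prompt(persona, my_role, ask):
    """Build a chat-style message list: persona -> system, role + ask -> user."""
    return [
        {"role": "system", "content": persona},
        {"role": "user", "content": (my_role + " " + ask).strip()},
    ]

msgs = make_prompt(
    persona="You are an experienced medical oncologist.",
    my_role="I am an experienced medical researcher, although not an oncologist.",
    ask="Please describe the causal mechanisms of cancer and what is known and not known.",
)
```

The same payload shape works whether the model behind it is GPT-3.5, GPT-4, or another chat-tuned LLM.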

Tips for effective prompting with an LLM

  • Personas and roles can make a difference, both for you and for the LLM. What do I mean by this? Start your prompt by telling the LLM what perspective you want it to take. Without it, you’re making it guess what information and style of response you’re looking for. Here’s an example: if you ask it what causes cancer, it will default to safety and give you a general-public answer about causes of cancer in very plain, lay language. Which may be fine. But if you’re looking to build a better understanding of the causal mechanisms of cancer, what is known, and what is not known, you will get better results if you prompt it with “You are an experienced medical oncologist” so it speaks from the generated perspective of that role. Similarly, you can tell it your role. Follow it with “Please describe the causal mechanisms of cancer and what is known and not known” and/or “I am also an experienced medical researcher, although not an oncologist” to help contextualize that you want a deeper, technical approach to the answer and not high-level plain language in the response.

      Compare and contrast when you prompt the following:

      A. “What causes cancer?”

      B. “You are an experienced medical oncologist. What causes cancer? How would you explain this differently in lay language to a patient, and how would you explain this to another doctor who is not an oncologist?”

      C. “You are an experienced medical oncologist. Please describe the causal mechanisms of cancer and what is known and not known. I am also an experienced medical researcher, although not an oncologist.”

      You’ll likely get different types of answers, with some overlap between A and the first part of answer B, and a tiny bit of overlap between the latter half of answer B and answer C.

      I do the same kind of prompting with technical projects where I want code. Often, I will say “You are an expert data scientist with experience writing code in Python for a Jupyter Notebook” or “You are an AI programming assistant with expertise in building iOS apps using XCode and SwiftUI”. Those will then be followed with a brief description of my project (more on why this is brief below) and the first task I’m giving it.

      The same also goes for writing-related tasks; the persona I give it and/or the role I reference for myself makes a sizable difference in getting the quality of the output to match the style and quality I was seeking in a response.

  • Be specific. Saying “please code that” or “please write that” might work sometimes, but more often than not it will get a less effective output than a more specific prompt. I am a literal person, so this is something I think about a lot: I’m always parsing and mentally reviewing what people say to me, because my instinct is to take their words literally, and I have to think through the likelihood that those words were intended literally or whether there is context that should make them less literal. Sometimes you’ll be thinking about something and start talking to someone about it, and they have no idea what on earth you’re talking about, because the last part of your out-loud conversation with them was about a completely different topic!

    LLMs are the same as that confused conversational partner who doesn’t know what you’re thinking about. LLMs only know what you’ve recently told them (and they will ‘forget’ what you told them about a project more quickly than humans do). Remember the above tips about brainstorming and making a list of tasks for a project? Providing a description of the task along with the ask (e.g. “we are doing X related to the purpose of achieving Y, please code X”) will get you output more closely matching what you wanted than saying “please code that,” where the LLM might code something else to achieve Y if you didn’t tell it you wanted to focus on X.

    I find this even more necessary with writing related projects. I often find I need to give it the persona “You are an expert medical researcher”, the project “we are writing a research paper for a medical journal”, the task “we need to write the methods section of the paper”, and a clear ask “please review the code and analyses and make an outline of the steps that we have completed in this process, with sufficient detail that we could later write a methods section of a research paper”. A follow up ask is then “please take this list and draft it into the methods section”. That process with all of that specific context gives better results than “write a methods section” or “write the methods” etc.

  • Be willing to start over with a new window/chat. Sometimes the LLM can get itself lost in solving a sub-task and lose sight (via lost context window) of the big picture of a project, and you’ll find yourself having to repeat over and over again what you’re asking it to do. Don’t be afraid to cut your losses and start a new chat for a sub-task that you’ve been stuck on. You may be able to eventually come back to the same window as before, or the new window might become your new ‘home’ for the project…or sometimes a third, fourth, or fifth window will.
  • Try, try again.
    I may hold the record for the longest running bug that I (and the LLM) could. Not. Solve. This was so, so annoying. No users apparently noticed it, but I knew about it, and it bugged me for months and months. Every few weeks I would go to an old window and also start a new window, describe the problem, paste the code in, and ask for help to solve it. I asked it to identify problems with the code; I asked it to explain the code and unexpected/unintended functionality from it; I asked it what types of general things would be likely to cause that type of bug. It couldn’t find the problem. I couldn’t find the problem. Finally, one day, I did all of the above, but then also started pasting every single file from my project and asking if it was likely to include code that could be related to the problem. By forcing myself to review all my code files with this problem in mind, even though the files weren’t related at all to the file/bug… I finally spotted the problem myself. I pasted the code in, asked if it was a possibility that it was related to the problem, the LLM said yes, I tried a change and… voila! Bug solved on January 16 after plaguing me since November 8. (And it probably existed before then, but I didn’t have functionality built until November 8 where I realized it was a problem.) I was beating myself up about it and posted to Twitter about finally solving the bug (but very much with the mindset of feeling very stupid about it). Someone replied and said “congrats! sounds like it was a tough one!” I realized that was a very kind framing, and one that I liked, because it was a tough one; and also I am doing a tough thing that no one else is doing, and I would not have been willing to try without an LLM to support me.

    Similarly, just this last week on Tuesday I spent about 3 hours working on a sub-task for a new project. It took 3 hours to do something that on a previous project took me about 40 minutes, so I was hyper aware of the time mismatch and perceiving that 3 hours was a long time to spend on the task. I vented to Scott quite a bit on Tuesday night, and he reminded me that sure it took “3 hours” but I did something in 3 hours that would take 3 years otherwise because no one else would do (or is doing) the project that I’m working on. Then on Wednesday, I spent an hour doing another part of the project and Thursday whipped through another hour and a half of doing huge chunks of work that ended up being highly efficient and much faster than they would have been, in part because the “three hours” it took on Tuesday wasn’t just about the code but about organizing my thinking, scoping the project and research protocol, etc. and doing a huge portion of other work to organize my thinking to be able to effectively prompt the LLM to do the sub-task (that probably did actually take closer to the ~40 minutes, similar to the prior project).

    All this to say: LLMs have become pair programmers and collaborators and writers that are helping me achieve tasks and projects that no one else in the world is working on yet. (It reminds me very much of my early work with DIYPS and OpenAPS where we did the work, quietly, and people eventually took notice and paid attention, albeit slower than we wished but years faster than had we not done that work. I’m doing the same thing in a new field/project space now.) Sometimes, the first attempt to delegate a sub-task doesn’t work. It may be because I haven’t organized my thinking enough, and the lack of ideal output shows that I have not prompted effectively yet. Sometimes I can quickly fix the prompt to be effective; but sometimes it highlights that my thinking is not yet clear; my ability to communicate the project/task/big picture is not yet sufficient; and the process of achieving the clarity of thinking and translating to the LLM takes time (e.g. “that took 3 hours when it should have taken 40 minutes”) but ultimately still moves me forward to solving the problem or achieving the tasks and sub-tasks that I wanted to do. Remember what I said at the beginning:

    Clear thinking + clear communication of ideas/request = effective prompting => effective code and other outputs

 

  • Try it anyway.
    I am trying to get out of the habit of saying “I can’t do X”, like “I can’t code/program an iOS app”…because now I can. I’ve in fact built and shipped/launched/made available multiple iOS apps (check out Carb Pilot if you’re interested in macronutrient estimates for any reason; you can customize so you only see the one(s) you care about; or if you have EPI, check out PERT Pilot, which is the world’s first and only app for tracking pancreatic enzyme replacement therapy and has the same AI feature for generating macronutrient estimates to aid in adjusting enzyme dosing for EPI.) I’ve also made really cool, 100% custom-to-me niche apps to serve a personal purpose that save me tons of time and energy. I can do those things, because I tried. I flopped a bunch along the way – it took me several hours to solve a simple iOS programming error related to home screen navigation in my first few apps – but in the process I learned how to do those things and now I can build apps. I’ve coded and developed for OpenAPS and other open source projects, including a tool for data conversion that no one else in the world had built. Yet, my brain still tries to tell me I can’t code/program/etc (and to be fair, humans try to tell me that sometimes, too).

    I bring that up to contextualize that I’m working on – and I wish others would work on, too – trying to address the reflexive thoughts of what we can and can’t do, based on prior knowledge. The world is different now and tools like LLMs make it possible to learn new things and build new projects that maybe we didn’t have time/energy to do before (not that we couldn’t). The bar to entry and the bar to starting and trying is so much lower than it was even a year ago. It really comes down to willingness to try and see, which I recognize is hard: I have those thought patterns too of “I can’t do X”, but I’m trying to notice when I have those patterns and shift my thinking to “I used to not be able to do X; I wonder if it is possible to work with an LLM to do part of X, or learn how to do Y so that I could try to do X”.

    A recent real example for me is power calculations and sample size estimates for future clinical trials. That’s something I can’t do; it requires a statistician and specialized software and expertise.

    Or…does it?

    I asked my LLM how power calculations are done. It explained. I asked if it was possible to do it using Python code in a Jupyter notebook. I asked what information would be needed to do so. It walked me through the decisions I needed to make about power and significance, and highlighted variables I needed to define/collect to put into the calculation. I had generated the data from a previous study, so I had all the pieces (variables) I needed. I asked it to write code for me to run in a Jupyter notebook, and it did. I tweaked the code, input my variables, ran it…and got the result. I had run a power calculation! (Shocked face here.) But then I got imposter syndrome again, and reached out to a statistician whom I had previously worked with on a research project. I shared my code and asked if that was the correct or an acceptable approach and if I was interpreting it correctly. His response? It was correct, and “I couldn’t have done it better myself”.

    (I’m still shocked about this).

    He also kindly took my variables, put them into the specialized software he uses, and confirmed that the results matched what my code output. He then pointed out a consideration that taught me something for future projects that might be different (whether the data is or isn’t normally distributed), although it didn’t influence the output of my calculation for this project.

    What I learned from this was a) this statistician is amazing (which I already knew from working with him in the past) and kind to support my learning like this; b) I can do pieces of projects that I previously thought were far beyond my expertise; c) the blocker is truly in my head, and the more we break out of or identify the patterns stopping us from trying, the farther we will get.
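For anyone curious what this kind of calculation can look like, here is a minimal sketch in plain Python using the large-sample normal approximation. The effect size, alpha, and power values below are placeholder assumptions for illustration, not the numbers from my study; an exact t-test calculation (e.g. statsmodels’ TTestIndPower) gives a slightly larger n.

```python
import math
from statistics import NormalDist

def sample_size_per_group(effect_size, alpha=0.05, power=0.80):
    """Approximate sample size per group for a two-sample, two-sided
    comparison of means, using the normal-approximation formula."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)  # critical value for two-sided alpha (~1.96)
    z_power = z.inv_cdf(power)          # value corresponding to desired power (~0.84)
    n = 2 * ((z_alpha + z_power) / effect_size) ** 2
    return math.ceil(n)

# Example: detecting a medium effect (Cohen's d = 0.5) with 80% power
print(sample_size_per_group(0.5))  # 63 participants per group
```

The key inputs are exactly the decisions the LLM walked me through: the effect size you expect to detect, the significance level, and the power you want.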

    “Try it anyway” also refers to trying things over time. The LLMs are improving every few months and often have new capabilities that they didn’t have before. Much of my work is done with GPT-4, and the more nuanced, advanced technical tasks are way more efficient than when using GPT-3.5. That being said, some tasks can absolutely be done with GPT-3.5-level AI. Something you can’t quite figure out now could be something that you sort out in a few weeks/months (see above about my 3 month bug); it could be something that is easier to do once you advance your thinking; or it could be more efficiently done with the next model of the LLM you’re working with.

  • Test whether custom instructions help. Be aware though that sometimes too many instructions can conflict and also take up some of your context window. Plus if you forget what instructions you gave it, you might get seemingly unexpected responses in future chats. (You can always change the custom instructions and/or turn it on and off.)

I’m hoping this helps give people confidence or context to try things with LLMs that they were not willing to try before; or to help get in the habit of remembering to try things with LLMs; and to get the best possible output for the project that they’re working on.

Remember:

  • Right-size the task by making a clear ask.
  • You can use different chat windows for different levels of the same project.
  • Use a list to help you, the human, keep track of all the pieces that contribute to the bigger picture of the project.
  • Try giving the LLM a persona for an ask; and test whether you also need to assign yourself a persona or not for a particular type of request.
  • Be specific; think of the LLM as a conversational partner that can’t read your mind.
  • Don’t be afraid to start over with a new context window/chat.
  • Things that were hard a year ago might be easier with an LLM; you should try again.
  • You can do more, partnering with an LLM, than you can on your own, and likely can do things you didn’t realize were possible for you to do!

Clear thinking + clear communication of ideas/request = effective prompting => effective code and other outputs

Have any tips to help others get more effective output from LLMs? I’d love to hear them, please comment below and share your tips as well!

Tips for prompting LLMs like ChatGPT, written by Dana M. Lewis and available from DIYPS.org

Understanding Fecal Elastase Test Results Including Sensitivity And Specificity And What It Means For Exocrine Pancreatic Insufficiency (EPI or PEI)

One of the challenges related to diagnosing exocrine pancreatic insufficiency (known as EPI or PEI) is that there is no perfect test.

With diabetes, we can see in several different ways what glucose is doing: via fasting glucose levels, HbA1c (an average of 3 months glucose), and/or continuous glucose monitoring. We can also test for c-peptide to see if insulin production is ongoing.

Yet for EPI, the tests for assessing whether and how much the pancreas is producing digestive enzymes are much less direct, more invasive, or both.

Some of the tests include a breath test; an invasive secretin pancreatic function test; a 72-hour fecal fat collection test; or a single-sample fecal elastase test.

  • A breath test is an indirect test, which assesses the end-product of digestion rather than digestion itself, and other conditions (like SIBO) can influence the results of this test. It’s also not widely available or widely used.
  • The secretin pancreatic function test is an invasive test involving inserting a tube into the small intestine after giving secretin, which is a hormone that stimulates the pancreas. The tube collects digestive juices produced by the pancreas, which are tested. It’s invasive, costly, and therefore not ideal.
  • For reliability, the 72-hour fecal fat collection test might be ideal, because it’s checking the amount of fat in the stool. It requires stopping enzymes, if someone is taking them already, and consuming a high fat diet. But that includes collecting stool samples for 3 days – ugh. (The “ugh” is my personal opinion, clearly).
  • The fecal elastase test, in contrast, does not require stopping enzymes: it measures human elastase, whereas supplemental digestive enzymes are typically pig-based, so they don’t interfere with the result. It’s also a single stool sample (so you’re not collecting poop for 3 days in a row). The sensitivity and specificity differ based on the diagnostic threshold, which I’ll talk about below, and the accuracy can be influenced by the sample. Diarrhea, meaning watery poop, can make this test even less reliable. But that’s why it’s good that you can take enzymes while doing this test: someone with diarrhea and suspected EPI could go on enzymes, reduce their diarrhea so they could have a formed (non-watery) sample for the elastase test, and get a better answer from the fecal elastase test.

The fecal elastase test is often commonly used for initial screening or diagnosis of EPI. But over the last two years, I’ve observed a series of problems with how it is being used clinically, based on reading hundreds of research and clinical practice articles and reading thousands of posts of people with EPI describing how their doctor is ordering/reviewing/evaluating this test.

Frequent problems include:

  • Doctors refuse to test elastase, because they don’t believe the test indicates EPI due to the sensitivity/specificity results for mild/moderate EPI.
  • Doctors test elastase, but won’t diagnose EPI when test results are <200 (especially if 100-200).
  • Doctors test elastase, but won’t diagnose EPI even when test results are <100!
  • Doctors test elastase, diagnose EPI, but then do not prescribe enzymes because of the level of elastase (even when <200).
  • Doctors test elastase, diagnose EPI, but prescribe a too-low level of enzymes based on the level of elastase, even though there is no evidence indicating elastase should be used to determine dosing of enzymes.

Some of the problems seem to result from the fact that the elastase test has different sensitivity and specificity at different threshold levels of elastase.

When we talk about “levels” of elastase or “levels” or “types” of EPI (PEI), that usually means the following thresholds / ranges:

  • Elastase <= 200 ug/g indicates EPI
  • Elastase 100-200 ug/g indicates “mild” or “mild/moderate” or “moderate” EPI
  • Elastase <100 ug/g often is referred to as “severe” EPI
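As code, those ranges look like this (a sketch of the thresholds above; note that sources vary on whether exactly 100 ug/g counts as “severe”, so the boundary handling here is an assumption):

```python
def classify_elastase(ug_per_g):
    """Map a fecal elastase result (ug/g) to the ranges described above."""
    if ug_per_g < 100:
        return "severe EPI"
    if ug_per_g <= 200:
        return "mild/moderate EPI"
    return "above the EPI threshold"

print(classify_elastase(42))   # severe EPI
print(classify_elastase(168))  # mild/moderate EPI
print(classify_elastase(537))  # above the EPI threshold
```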

You should know that:

  • People with severe EPI (elastase <100) could have no symptoms
  • People with mild/moderate EPI (elastase 100-200) could have a very high level of symptoms and be malnourished
  • People with any level of elastase indicating EPI (elastase <=200) can have EPI even if they don’t have malnourishment (usually meaning blood vitamin levels like A, D, E, or K are below range).

So let’s talk about sensitivity and specificity at these different levels of elastase.

First, let’s grab some sensitivity and specificity numbers for EPI.

  1. One paper that is widely cited, albeit old, reports the sensitivity and specificity of fecal elastase for EPI in people with chronic pancreatitis. You’ll see me talk in other posts about how chronic pancreatitis and cystic fibrosis-related research is over-represented in EPI research, and it may or may not reflect the overarching population of people with EPI. But since it’s widely used, I’ll use it in the below examples, especially because this may be what is driving clinician misunderstanding about this test. With a cut off of <200 ug/g, they found that the sensitivity in detecting moderate/severe EPI is 100%, and 63% sensitivity for detecting mild EPI. At that <200 ug/g threshold, the specificity is 93% (which doesn’t distinguish between severities). With a cut off of <100 ug/g, the sensitivity for detecting mild EPI drops to 50%, but the specificity increases to 98%. This means that:
    1. 63% of people with mild EPI would be correctly diagnosed using an elastase threshold of 200 ug/g (vs. only 50% at 100 ug/g).
    2. 100% of people with moderate/severe EPI would be correctly diagnosed using an elastase threshold of 200 ug/g (compared to only 93% or 96% for moderate/severe at 100 ug/g).
    3. Only 7% of people without EPI would incorrectly test <200 ug/g, and only 2% would incorrectly test <100 ug/g.
  2. For comparison, a systematic review evaluated a bunch of studies (428 people from 14 studies) and found an average sensitivity of 77% (95% CI of 58-89%) and average specificity of 88% (95% CI of 78-93%). This sensitivity is a little higher than the above number, which I’ll discuss at the end for some context.

So what does sensitivity and specificity mean and why do we care?

At an abstract level, I personally find it hard to remember what sensitivity and specificity mean.

  • Sensitivity means: how often does it correctly identify the thing we want to identify?

This means a true positive. (Think about x-ray screening at airport security: how often do they find a weapon that is there?)

  • Specificity means: how often does it correctly identify the absence of the thing we want to identify? In other words, how often does the test correctly come back negative for people who truly don’t have the condition?

(Think about x-ray screening at airport security: how often does it correctly identify that there are no weapons in the bag? Or how often do they accidentally think that your jam-packed bag of granola and snacks might be a weapon?)

Here is how we apply this to fecal elastase testing for EPI.

For those with moderate/severe EPI, the test is 100% sensitive at correctly detecting those cases if you use an elastase cut off of <200 ug/g. For those with mild EPI, the test drops to only being 63% sensitive at correctly detecting those cases. And 93% of the time, the test correctly excludes EPI when it doesn’t exist (at a <200 ug/g cut off, vs. 98% of the time at a <100 ug/g cut off). Conversely, 7% of people who don’t have EPI (100% minus 93%) would still test below 200 ug/g, and 2% (100% minus 98%) would still test below 100 ug/g.

Here’s another way of thinking about it, using a weather forecast analogy. Think about how easy it is to predict rain when a major storm is coming. That’s like trying to detect severe EPI: it’s a lot easier, and forecasters are pretty good at spotting major storms.

But in contrast, what about correctly predicting light rain? In Seattle, that feels close to impossible – it rains a lot, very lightly. It’s hard to predict, so we often carry a light rain jacket just in case!

And for mild EPI, that’s what the sensitivity of 63% means: less than two thirds of the time can it correctly spot mild EPI by looking for <200 ug/g levels, and only half the time by looking for <100 ug/g. The signal isn’t as strong so it’s easier to miss.

The specificity of 93% means that the test is pretty good at identifying not-EPI, just like the forecast is pretty good at identifying not-rainy days: people without EPI usually test above 200 ug/g. But occasionally (around 7/100 times), it’s wrong.

Table comparing the sensitivity for severe and mild EPI alongside specificity, plus comparing to weather forecast ability for rain in major storms.

Why might clinicians be incorrectly using the value of these numbers for the fecal elastase test?

I hypothesize that in many cases, for the elastase levels now considered to indicate mild/moderate EPI (elastase 100-200 ug/g), clinicians might be accidentally swapping the sensitivity (63%) and specificity (93%) numbers in their mind.

What these numbers tell us is that 63% of the time, we’ll catch mild EPI through elastase testing. This means 37/100 people with actual mild EPI might be missed!

In contrast, the specificity of 93% tells us about accidental false positives, and that 7/100 people without EPI might accidentally get flagged as having possible EPI.

Yet, real-world clinical practice seems to swap these numbers, acting as if the accuracy goes the other way and suspecting that elastase 100-200 doesn’t indicate EPI (e.g. assuming 37/100 false positives, which is incorrect; the false positive rate is 7/100).
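One way to make the swap concrete is to apply both numbers to two illustrative groups of 100 people each (toy numbers for illustration; real-world prevalence changes how many people fall in each group, but not the per-group rates):

```python
def confusion_counts(n_with_epi, n_without_epi, sensitivity, specificity):
    """Expected test outcomes given sensitivity and specificity."""
    true_pos = round(sensitivity * n_with_epi)     # cases correctly flagged
    false_neg = n_with_epi - true_pos              # cases missed
    true_neg = round(specificity * n_without_epi)  # non-cases correctly cleared
    false_pos = n_without_epi - true_neg           # non-cases incorrectly flagged
    return true_pos, false_neg, true_neg, false_pos

# Mild EPI at the <200 ug/g cutoff: sensitivity 63%, specificity 93%
tp, fn, tn, fp = confusion_counts(100, 100, 0.63, 0.93)
print(fn)  # 37 people with mild EPI are missed
print(fp)  # only 7 people without EPI are incorrectly flagged
```

The missed cases (37) come from the sensitivity number; the false alarms (7) come from the specificity number. Swapping them in your head makes the test look far less trustworthy than it is.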

There’s plenty of peer-reviewed and published evidence that people with elastase 100-200 have a clear symptom burden. There’s even a more recent paper suggesting that those with symptoms and elastase of 200-500 benefit from enzymes!

Personally, as a person with EPI, I am frustrated when I see/hear cases of people whose clinicians refuse testing, or don’t prescribe PERT when elastase is <=200 ug/g, because they don’t believe elastase 100-200 ug/g is an accurate indicator of EPI. This data shows that’s incorrect. Regardless of which paper you use and which numbers you cite for sensitivity and specificity, they all end up with way more common rates of false negatives (missing people with EPI) than false positives.

And, remember that many people with FE 200-500 benefit from enzymes, too. At a cutoff of 200 ug/g, the number of people with mild/moderate EPI we are likely to miss (false negatives, given the 63% sensitivity) is much higher than the number of false positives who don’t actually have EPI. That puts the risk/benefit calculation – to me – such that it warrants using this test, putting people on enzymes, and evaluating symptom resolution over time following PERT dosing guidelines. If people’s symptom burden does not improve, titrating PERT and re-testing elastase makes sense (and that is what the clinical guidelines say to do), but the cost of missing ~37 people out of 100 with EPI is too high!

Let’s also talk about elastase re-testing and what to make of changed numbers.

I often also observe people with EPI who have their elastase re-tested multiple times. Here are some examples and what they might mean.

  • A) Someone who tests initially with a fecal elastase of 14, later retests as 16, then 42 ug/g.
  • B) Someone who tests initially at 200 and later 168.
  • C) Someone who tests initially at 72 and later 142.
  • D) Someone who tests initially as 112 and later 537.

Remember, the key to interpreting elastase is that <=200 ug/g is generally accepted as indicating EPI. It’s also key to remember that the pancreas is still producing some enzymes, so elastase production will vary slightly. But in scenarios A, B, and C, those changes are not meaningful. In scenario A, someone still has clear indicators of severe (elastase <100) EPI; slight fluctuations don’t change that. Same for scenario B: 200 and 168 are both still in the mild/moderate EPI range (elastase <=200). Even scenario C isn’t very meaningful; even though there is an “increase”, this is still clearly EPI.

In most cases, the fluctuations in test results are likely a combination of natural variation in pancreatic production and/or test reliability. If someone was eating a very low fat diet and taking enzymes effectively, that may influence how much of its own enzymes the pancreas produces – we don’t actually know what causes the pancreas’s natural enzyme levels to fluctuate.

The only case that is meaningful in these examples is scenario D, where someone initially had a result of 112 and later tested clearly above the EPI threshold (e.g. 537). There are a few cases in the literature where people with celiac seem to have temporary EPI and later their elastase production returns to normal. This hasn’t been documented in other conditions, which doesn’t mean that it’s not possible, but we don’t know how common it is. It’s possible the first sample of 112 was due to a watery sample (e.g. during diarrhea) or other testing inaccuracy, too. If a third test result was >500, I’d assume it was a temporary fluctuation or test issue, and that it’s not a case of EPI. (Yay for that person!) If it were me (and I am not a doctor), I’d have them try out a period without enzymes to ensure that symptoms continued to be managed effectively. If the third test was anywhere around 200 or below, I’d suspect something was contributing to fluctuations in pancreatic production and not be surprised if enzymes continued to be needed, unless the cause could be resolved.

But what about scenario C where someone “went from severe to mild/moderate EPI”?!

A lot of people ask that. There’s no evidence in the hundreds (seriously, hundreds) of papers about EPI that clearly indicates enzymes should be dosed based on elastase level, or that there are different needs based on these different categories. The “categories” of EPI originally came from direct measurements of enzyme secretion via invasive tests, combined with quantitative measurements of bicarbonate and fat in stools. Now that fecal elastase is well established as a non-invasive diagnostic method, severities are usually estimated based on the sensitivity of these cutoffs for detecting EPI, and that’s it. The elastase level doesn’t actually indicate the severity of the symptom experience, so enzymes should be dosed and adjusted based on the individual’s symptoms and their diet.

In summary:

  • Elastase <=200 ug/g is very reliable, indicates EPI, and warrants starting PERT.
  • There is one small study suggesting even people with elastase 200-500 might benefit from PERT, if they have symptoms, but this needs to be studied more widely.
  • It’s possible clinicians are conflating the sensitivity and specificity, thus misunderstanding how accurately elastase tests can detect cases of mild/moderate EPI (when elastase is 100-200 ug/g).

Let me know if anyone has questions about elastase testing, sensitivity, and specificity that I haven’t answered here! Remember I’m not a doctor, and you should certainly talk with your doctor if you have questions about your specific levels. But make sure your doctor understands the research, and feel free to recommend this post to them if they aren’t already familiar with it: https://bit.ly/elastase-sensitivity-specificity

Personalized Story Prompts for Kids Books and Early Reader Books

For the holidays this year, I decided to try my hand at creating another set of custom, illustrated stories for my nieces and nephews (and bonus nieces and nephews). I have a few that are very advanced readers and/or too old for this, but I ended up with a list of 8 kids in my life from not-yet-reading to beginning reading to early 2nd grade reading level. I wanted to write stories that would appeal to each kid, include them as the main character, be appropriate for their reading (or read-to) level, and also include some of their interests.

Their interests were varied, which made it quite a challenge! Here’s the list I worked from:

  • 2nd grade reading level, Minecraft
  • early 2nd grade reading level: soccer, stunt biking, parkour, ninja, Minecraft
  • beginning reading level: soccer, stunt biking, ninja, Spiderman
  • beginning reading level: Peppa Pig, moko jumbies
  • (read to younger child): Minnie Mouse, Peppa Pig, Bluey, and tea parties
  • (read to younger child): Bluey, Olaf, Elsa, & Anna
  • (read to younger child): cars/vehicles

I enlisted ChatGPT, an LLM, and ended up creating stories for each kid, matching their grade levels and interests, then illustrating them.

But illustrating them was (still) actually a challenge: creating images of the same character for every page of a story, similar enough throughout that they read as the “same” character.

Illustration challenges and how I got successful prompts:

My first pass on images wasn’t very good. I could get basic details to repeat, but often had images that looked like this – slightly different style and character throughout:

8 different illustrations in slightly different styles and almost different characters of a girl with blonde, shoulder length hair and a purple dress in an enchanted forest

Different styles throughout and that makes it look like a different character, even though it’s the same character in the whole story. This was a book to read to a <3 year old, though, and I thought she wouldn’t mind the different styles and left it as is. I also battled with adding, for personal use, the characters that most interested her: Peppa Pig and Minnie Mouse.

Interestingly, if I described with a prompt to illustrate a scene including a character “inspired by, but distinct from, Peppa Pig”…it essentially drew Peppa Pig or a character from it. No problems.

But if you gave the same prompt “inspired by, but distinct from, Minnie Mouse”? No go. No image at all: ChatGPT would block it for copyright reasons and wouldn’t draw any of the image. I riffed a bunch of times and finally was able to prompt a good enough mouse with round ears and a red dress with white polka dots. I had to ultimately illustrate the mouse character alone with the human character, because if I tried to get a Peppa-inspired character and then separately a mouse character, it wanted to draw the mouse with a pig-style face in the correct dress! I could never work around that effectively for the time I had available (and all the other books I was trying to illustrate!) so I stopped with what I had.

This was true for other characters, too, with copyright issues. It won’t draw anything from or like Bluey – or Frozen, when prompted. But I could get it to draw “an ethereal but warm, tall female adult with icy blonde hair, blue eyes, in an icy blue dress”, which you can see in the fourth image on the top row here:

Another series of illustrations with slightly different characters but closer in style throughout. there's one image showing a Frozen-inspired female character that I got by not prompting with Frozen.

I also managed to get slightly closer matching characters throughout this, but still quite a bit of variability. Again, for a young being-read-to-child, it was good enough for my purposes. (I never could get it to draw a Bluey-like character, even when I stopped referencing Bluey by name and described the shape and character, so I gave up on that.)

I tried a variety of prompts and series of prompts for each book. Sometimes, I would give it the story and prompt it with each page’s text, asking for an illustration and to keep it in the same style and the same character as the previous image. That didn’t work well, even when I told it in every prompt to use the same style and character plus the actual image prompt. I then tried to create a “custom” GPT, with the GPT’s instructions to use the same style throughout. That started to give me slightly better results, but I still had to remind it constantly to use the same style.

I also played around with taking an image that I liked, starting a new chat, and asking it to describe that image. Then I’d use that prompt to create a new prompt, describing the character in the same way. That started to get me slightly better results, especially when I did so using the custom GPT I had designed (you can try using this GPT here). I started to get better, more consistent characters:

A series of images of a young cartoon-drawn boy with wavy blonde hair riding a bike through an enchanted forest.

 

A series of drawings of a cartoon-like character with spiky blonde hair, blue eyes, and various outfits including a ninja costume

Those two had some variability, but a lot improved beyond the first several books. They are for the beginning and second-grade reading levels, too, so they are older kids with more attention to detail so it was worth the extra effort to try to get theirs to be more consistent.

The last one with the ninja and ninja outfits is another one that ran into copyright issues. I tried to have it illustrate a character inspired by, but distinct from, Spiderman – nope, no illustration at all. I asked it to illustrate the first picture in the soccer park with a spider strand looping in the corner of the image, like Spiderman had swung by but was out of sight and not pictured – NOPE. You can’t even get an image if Spiderman is in the prompt at all, even if Spiderman isn’t in the picture! (I gave up and moved on without illustrating spiderwebs, even though Spiderman is described in the story.)

My other favorite and pretty consistent one was two more of the early reader ones:

A series of images showing a young cartoon boy with wavy brown hair at a car fair

The hard part from that book was actually trying to do the cars consistently, rather than the human character. The human character was fairly consistent (although in different outfits, despite clear outfit prompts – argh) throughout, because I had learned from the previous images and prompt processes and used the Custom GPT, but the cars varied more. But, for a younger reader, hopefully that doesn’t matter.

The other, more-consistent character one for an early reader had some variations in style but did a better job matching the character throughout even when the style changed.

Another example with a mostly consistent young cartoon drawn girl with whispy blonde pigtails and big blue eyes, plus moko jumbies and peppa pig

How I wrote each story:

I also found some processes for building better stories. Again, see the above list of very varied interests for each kid. Some prompts were straightforward (Minecraft) and others were about really different characters or activities (moko jumbies and Peppa Pig? Minnie Mouse and Peppa Pig? soccer ninja and Minecraft?).

What I ended up doing for each:

  1. In a new ChatGPT window (not the custom GPT for illustrating): Describe the reading level; the name of the character(s); and the interests. Ask it to brainstorm story ideas based on these interests.
  2. It usually gave 3 story ideas in a few sentences each, including a title. Sometimes, I would pick one and move on. Other times, I would take one of the ideas and tweak it a bit and ask for more ideas based on that. Or, I’d have it try again generally, asking for 3 more ideas.
  3. Once I had an idea that I liked, I would ask it to outline the story, based on the chosen story idea and the grade level we were targeting. Sometimes I would tweak the title and other times I would take the title as-is.
  4. Once it had the outline, I could have it then write the entire story (especially for the younger, beginner reader or read-to levels that are so short), but for the “chapter” books of early 2nd and 2nd grade reading level, I had it give me a chapter at a time, based on the outline. As each chapter was generated, I edited and tweaked it and took the text to where I would build the book. Sometimes, I would re-write the whole chapter myself, then give it back the chapter text and ask it to write the next one. If you didn’t give it back, it wouldn’t know what the chapter ended up as, so this is an important step to do when you’re making more than minor sentence construction changes.
  5. Because I know my audience(s) well, I tweaked it heavily as I went, incorporating their interests. For example, in the second images I showed above, there’s a dancing dog. It’s their actual dog, with the dog named in the story along with them as characters. Or in the chapter book for the character with the bike, it described running up a big mountain on a quest and being tired. I tossed in an Aunt-Dana reference including reminding the character about run-walking as a way to keep moving forward without stopping and cover the distance that needs to be covered. I also tweaked the stories to include character traits (like kindness) that each child has, and/or behaviors that their family prioritizes.

I described the image process first, then the story writing, in this blog post, but I actually did the opposite for each book. I would write (brainstorm, outline, write, edit, write) the entire book, then start a new chat window (eventually solely using my custom GPT) and ask for illustrations. Sometimes, I would give it the page of the story’s text and ask it to illustrate it. That’s helpful when you don’t know what to illustrate, and it did fairly well for some of the images (especially the Minecraft-inspired ones!). Ultimately, though, I would often get an image, ask what the prompt was for that image, tweak the prompt, and give it back to better match the story or what I wanted to illustrate. Once I was regularly asking for the image prompts, I realized that repeating the character details for every image helped with consistency. Then I would use the ad-nauseam details myself for a longer prompt, which resulted in better images throughout, so I spent more energy deciding myself what to illustrate to best match the story.

All in all, I made 7 custom books (and 8 copies – one of the Minecraft books I copied and converted to a different named character for a friend’s child!). Between writing, editing, and illustrating, I probably spent an average of one hour per book! That’s a lot of time, but it did get more efficient as I went, and in some cases the hour included completely starting over and re-working the images in the book for consistency compared to the version I had before. The next books I create will probably take less time, both because I figured out the above processes and because hopefully DALL·E and other illustration tools will get better at illustrating the same character consistently across multiple prompts for a story.

How other people can use this to create stories – and why:

I have been so excited about this project. I love, love, love to read, and I love reading with my nieces and nephews (and bonus kids in my life) and finding books that match their interests and help spark or maintain their love of reading. That’s why I did this project, and I have been bursting for WEEKS waiting to be able to give everyone their books! I wanted it to be a surprise for their parents, too, which meant that I couldn’t tell 2/3 of my closest circles about my cool project.

One of my friends without young kids that I finally told about my project loved the idea: she works as staff at an elementary school, supporting some nonverbal students who are working on their reading skills. She thought it would be cool to make a book for one student in particular, and described some of her interests: violins, drums, raspberries, and unicorns. I was in the car when she told me this, but using the mobile ChatGPT app I was able to follow the same process as above: list the interests, then ask for a brainstorm of story ideas for a beginning-reading-level book with some repetitive text, using the interests to aid in reading. It created a story about a unicorn who gathers other animals in the forest to play in an orchestra (with drums and violins) and eat raspberries. I had it illustrate the story, and it did so (with slightly different unicorns throughout). I only had to have it re-draw one image, because it put text in one of the last images that didn’t need to be there.

Illustrations from a quick story about a unicorn, drums, violin, and an orchestra, plus raspberries

It was quick and easy, and my friend and her student LOVED it, and the other teachers and staff at the school are now working on personalized books for a lot of other students to help them with reading skills!

It really is an efficient and relatively easy way to generate personalized content; it can do so at different reading levels (especially when a teacher or someone who knows the student can tweak it to better match the reading level or sounds and words they are working on next); and you can generate pretty good matching illustrations too.

The hardest part is consistent characters, but when you don’t need consistency throughout a whole book, the time it takes drops to ~5 minutes to write, tweak, and illustrate an entire story.

Illustrations require a paid ChatGPT account, but if you have one and want to try out the custom GPT I built for (slightly more consistent) illustrations of stories, you can check it out here.

Custom stories: prompting and effective illustrating with ChatGPT, a blog post by Dana M. Lewis from DIYPS.org

Accepted, Rejected, and Conflict of Interest in Gastroenterology (And Why This Is A Symptom Of A Bigger Problem)

Recently, someone published a new clinical practice update on exocrine pancreatic insufficiency (known as EPI or PEI) in the journal Gastroenterology, from the American Gastroenterological Association (AGA). Those of you who’ve read any of my blog posts in the last year know how much I’ve been working to raise awareness of EPI, which is very under-researched and under-treated clinically despite the prevalence rates in the general population and key sub-populations such as people with diabetes (PWD). So when there was a new clinical practice update and another publication on EPI in general, I was jazzed and set out to read it immediately. Then frowned. Because, like so many articles about EPI, it’s not *quite* right about many things and it perpetuates a lot of the existing problems in the literature. So I did what I could, which was to check the journal’s requirements for writing a letter to the editor (LTE) in response to this article, then draft and submit an LTE about it. To my delight, on October 17, 2023, I got an email indicating that my LTE was accepted.

You can find my LTE as a pre-print here.

See below why this pre-print version is important, and why you should read it, plus what it reminds us about what journal articles can or cannot tell us in healthcare.

Here’s an image of my acceptance email. I’ll call out a key part of the email:

A screenshot of the acceptance email I received on October 17, 2023, indicating my letter would be sent to the authors of the original article for a chance to choose to respond (or not). Then my LTE would be published.

Letters to the Editor are sent to the authors of the original articles discussed in the letter so that they might have a chance to respond. Letters are not sent to the original article authors until the window of submission for letters responding to that article is closed (the last day of the issue month in which the article is published). Should the authors choose to respond to your letter, their response will appear alongside your letter in the journal.

Given the timeline described, I knew I wouldn’t hear more from the journal until the end of November. The article went online ahead of print in September, meaning likely officially published in October, so the letters wouldn’t be sent to authors until the end of October.

And then I did indeed hear back from the journal. On December 4, 2023, I got the following email:

A screenshot of the email I received saying the LTE was now rejected
TLDR: just kidding, the committee – members of which published the article you’re responding to – and the editors have decided not to publish your article. 

I was surprised – and confused. The committee members, or at least 3 of them, wrote the article. They should have a chance to decide whether or not to write a response letter, which is standard. But telling the editors not to publish my LTE? That seems odd and in contrast to the initial acceptance email. What was going on?

I decided to write back and ask. “Hi (name redacted), this is very surprising. Could you please provide more detail on the decision making process for rescinding the already accepted LTE?”

The response?

Another email explaining that possible commercial affiliations influenced their choice to reject the article after accepting it originally
In terms of this decision, possible commercial affiliations, as well as other judgments of priority and relevance among other submissions, dampened enthusiasm for this particular manuscript. Ultimately, it was not judged to be competitive for acceptance in the journal.

Huh? I don’t have any commercial affiliations. So I asked again, “Can you clarify what commercial affiliations were perceived? I have none (nor any financial conflict of interest; nor any funding related to my time spent on the article) and I wonder if there was a misunderstanding when reviewing this letter to the editor.”

The response was “There were concerns with the affiliation with OpenAPS; with the use of the term “guidelines,” which are distinct from this Clinical Practice Update; and with the overall focus being more fit for a cystic fibrosis or research audience rather than a GI audience.”

A final email citing concern with my affiliation with OpenAPS, which is neither a commercial organization nor related to the field of gastroenterology or EPI

Aha, I thought, there WAS a misunderstanding. (And the latter makes no sense in the context of my LTE – its very point is that most research and clinical literature has too narrow a focus, cystic fibrosis being one example, and that a broad gastroenterology audience should pay attention to EPI.)

I wrote back and explained how I, as a patient/independent researcher, struggle to submit articles to manuscript systems without a Ringgold-verified organization. (You can also listen to me describe the problem in a podcast, here, and I also talked about it in a peer-reviewed journal article about citizen science and health-related journal publishing here.) So I use OpenAPS as an “affiliation” even though OpenAPS isn’t an organization, let alone a commercial organization. I have no financial conflict of interest related to OpenAPS, and zero financial conflict of interest or commercial or any type of funding in gastroenterology at all, related to EPI or not. I actually go to such extremes to describe even perceived conflicts of interest, even non-financial ones, as you can see in my disclosure statement publicly available from the New England Journal of Medicine here on our CREATE trial article (scroll to Supplemental Information and click on Disclosure Forms), where I articulate that I have no financial conflicts of interest but acknowledge openly that I created the algorithm used in the study. Yet there’s no commercial or financial conflict of interest.

A screenshot from the publicly available disclosure form on NEJM's site, where I am so careful to indicate possible conflicts of interest that are not commercial or financial, such as the fact that I developed the algorithm that was used in that study. Again, that's a diabetes study and a diabetes example, the paper we are discussing here is on exocrine pancreatic insufficiency (EPI) and gastroenterology, which is unrelated. I have no COI in gastroenterology.

I sent this information back to the journal, explaining this, and asking if the editors would reconsider the situation, given that the authors (committee members?) have misconstrued my affiliation, and given that the LTE was originally accepted.

Sadly, there was no change. They are still declining to publish this article. And there is no change in my level of disappointment.

Interestingly, here is the article that my LTE was in reply to, along with the conflict of interest statement by the authors (committee members?) who possibly raised a flag about my supposed (this is not true) commercial affiliation:

The conflict of interest statement for authors from the article "AGA Clinical Practice Update on the Epidemiology, Evaluation, and Management of Exocrine Pancreatic Insufficiency 2023"

The authors disclose the following: David C. Whitcomb: consultant for AbbVie, Nestlé, Regeneron; cofounder, consultant, board member, chief scientific officer, and equity holder for Ariel Precision Medicine. Anna M. Buchner: consultant for Olympus Corporation of America. Chris E. Forsmark: grant support from AbbVie; consultant for Nestlé; chair, National Pancreas Foundation Board of Directors.

As a side note, one of the companies providing consulting and/or grant funding to two of the three authors is the biggest manufacturer of pancreatic enzyme replacement therapy (PERT), which is the treatment for EPI. I don’t think this conflict of interest makes these clinicians ineligible to write their article; nor do I think commercial interests should preclude anyone from publishing – but in my case, it is irrelevant, because I have none. It does seem odd, though, given the authors’ stated COIs, for my (actually nonexistent) COI to then be a reason to reject an LTE, of all things.

Here’s the point, though.

It’s not really about the fact that I had an accepted article rejected (although that is weird, to say the least…).

The point is that the presence of information in medical and research journals does not mean that they are correct. (See this post describing the incorrect facts presented about prevalence of EPI, for example.)

And similarly, the absence of material in medical and research journals does not mean that something is not true or is not fact!

There is a lot of gatekeeping in scientific and medical research. You can see it illustrated here in this accepted-rejected dance because of supposed COI (when there are zero commercial ties, let alone COI) and alluded to in terms of the priority of what gets published.

I see this often.

There is good research that goes unpublished because editors decide not to prioritize it (aka do not allow it to get published). There are many such factors in play affecting what gets published.

There are also systemic barriers.

  • Many journals require fees (called article processing charges or “APC”s) if your article is accepted for publication. If you don’t have funding, that means you can’t publish there unless you want to pay $2500 (or more) out of pocket. Some journals even have submission fees of hundreds of dollars, just to submit! (At least APCs are usually only levied if your article is accepted, but you won’t submit to these journals if you know you can’t pay the APC). That means the few journals in your field that don’t require APCs or fees are harder to get published in, because many more articles are submitted (thus, influencing the “prioritization” problem at the editor level) to the “free” journals.
  • Journals often require, as previously described, your organization to be part of a verified list (maintained by a third party org) in order for your article to be moved through the queue once submitted. Instead of n/a, I started listing “OpenAPS” as my affiliation and proactively writing to admin teams to let them know that my affiliation won’t be Ringgold-verified, explaining that it’s not an org/I’m not at any institution, and then my article can (usually) get moved through the queue ok. But as I wrote in this peer-reviewed article with a lot of other details about barriers to publishing citizen science and other patient-driven work, it’s one of many barriers involved in the publication process. It’s a little hard, every journal and submission system is a little different, and it’s a lot harder for us than it is for people who have staff/support to help them get articles published in journals.

I’ve seen grant funders say no to funding researchers who haven’t published yet; but editors also won’t prioritize them to publish on a topic in a field where they haven’t been funded yet or aren’t well known. Or they aren’t at a prestigious organization. Or they don’t have the “right” credentials. (Ahem, ahem, ahem). It can be a vicious cycle for even traditional (aka day job) researchers and clinicians. Now imagine that for people who are not inside those systems of academia or medical organizations.

Yet, think about where much of knowledge is captured, created, translated, studied – it’s not solely in these organizations.

Thus, the mismatch. What’s in journals isn’t always right, and the process of peer review can’t catch everything. It’s not a perfect system. But what I want you to take away, if you didn’t already have this context, is an understanding that what’s NOT in a journal is not because the information is not fact or does not exist. It may have not been studied yet; or it may have been blocked from publication by the systemic forces in play.

As I said at the end of my LTE:

It is also critical to update the knowledge base of EPI beyond the sub-populations of cystic fibrosis and chronic pancreatitis that are currently over-represented in the EPI-related literature. Building upon this updated research base will enable future guidelines, including those like the AGA Clinical Practice Update on EPI, to be clearer, more evidence-based, and truly patient-centric, ensuring that every individual living with exocrine pancreatic insufficiency receives optimal care.

PS – want to read my LTE that was accepted then rejected, meaning it won’t be present in the journal? Here it is on a preprint server with a DOI, which means it’s still easily citable! Here’s an example citation:

Lewis, D. Navigating Ambiguities in Exocrine Pancreatic Insufficiency. OSF Preprints. 2023. DOI: 10.31219/osf.io/xcnf6