Piecing together your priorities when your pieces keep changing

When dealing with chronic illnesses, it sometimes feels like you have less energy or time in the day to work with than someone without them. The “spoon theory” is a helpful analogy to illustrate this. In spoon theory, each person has a certain number of “spoons” representing their daily energy available for tasks, including activities of daily living, recreation or physical activity, work, etc. For example, an average person might have 10 spoons per day, using just one spoon for daily tasks. However, someone with chronic illness may start with only 8 spoons and require 2-3 spoons for the same daily tasks, leaving them with fewer spoons for other activities.

I’ve been thinking about this differently lately. My priorities on a daily basis are mixed between activities of daily living (which includes things like eating and managing diabetes tasks such as changing a pump site or CGM); exercise or physical activity like walking, cross-country skiing in winter, or hiking at other times of the year; and “work”. (“Work” for me is a mix of funded projects and my ongoing history of unfunded projects that move the needle, such as developing the world’s first app for exocrine pancreatic insufficiency, developing and validating a symptom score through research, or OpenAPS, to name a few.)

A raccoon juggles three spoons

As things change in my body (I have several autoimmune diseases and have gained more over the years), my ‘budget’ on any given day has changed, and so have my priorities. During times when I’m struggling to get everything done that I want to prioritize, it sometimes feels like I don’t have enough energy to do it all, compared to other times when I’ve had sufficient energy for the same amount of daily activities, with extra energy left over. (Sometimes I feel like a raccoon juggling three spoons of different weights.)

In my head, I can think about how the relative amounts of energy or time (these are not always identical variables) are shaped differently or take up different amounts of space in a given day, which only has 24 hours. It’s a fixed budget.

I visualize activities of daily living as the smallest amount of time, but it’s not insignificant. It’s less than the amount of time I want to spend on work/projects, and my physical activity/recreation also takes up quite a bit of space. (Note: this isn’t going to be true for everyone, but remember for me I like ultrarunning for context!)

ADLs are green, work/projects are purple, and physical activity is blue:

Example of two blocks stacked on each other (green), four blocks in an l shape (purple), three blocks in a corner shape (blue)

They almost look like Tetris pieces, don’t they? Imagine all the ways they can fit together. But we have a fixed budget, remember – only 24 hours in the day – so to me they become Tangram puzzle pieces and it’s a question every day of how I’m going to construct my day to fit everything in as best as possible.

Preferably, I want to fit EVERYTHING in. I want to use up all available time and perfectly match my energy to it. Luckily, there are a number of ways these pieces fit together. For example, check out these different variations:

8 squares with different color combinations with a double block, an l shaped block, and a corner (three pieces) block. All squares are completely full, but in different combinations/layouts of the blocks

But sometimes even this feels impossible, and I’m left feeling like I can’t quite perfectly line everything up and things are getting dropped.

Example of a square where the blocks don't all fit inside the square

It’s important to remember that even if the total amount of time is “a lot”, it doesn’t have to be done all at once. Historically, a lot of us might work 8-hour days (or longer). For those of us with desk jobs, we sometimes have options to split this up. For example, working a few hours and then taking a lunch break, or going for a walk / hitting the gym, then returning to work. Instead of a static 9-5, it may look like 8-11:30, 1:30-4:30, 8-9:30.

The same is true for other blocks of time, too, such as activities of daily living: they’re usually not all in one block of time, but often at least two (waking up and going to bed) plus sprinkled throughout the day.

In other words, it’s helpful to recognize that these big “blocks” can be broken down into smaller subunits:

Tangram puzzle pieces of different shapes (closeup)

And from there… we have a lot more possibilities for how we might fit “everything” (or our biggest priorities) into a day:

Showing full blocks filled with individual blocks, sometimes linked but in different shapes than the L and corner shapes from before.

For me, these new blocks are more common. Sometimes I have my most typical day, with a solid block of exercise and work just how I’d prefer them (top left). Other times, I have less exercise and several work blocks in a day (top right). Other days, I don’t have energy for physical activity, activities of daily living take more energy (or I have more tasks to do), and I also don’t have quite as much time for longer work sections (bottom left). There are also non-work days where I prioritize getting as much activity as I can in a day (bottom right!). But in general, the point of this is that instead of thinking about the way we USED to do things or thinking we SHOULD do things a certain way, we should think about what needs to be done; the minimum of how it needs to be done; and think creatively about how we CAN accomplish these tasks, goals, and priorities.

A useful trigger phrase to check is if you find yourself saying “I should ______”. Stop and ask yourself: should, according to what/who? Is it actually a requirement? Is the requirement about exactly how you do it, or is it about the end state?

“I should work 8 hours a day” doesn’t mean (in all cases) that you have to do it in 8 straight hours, with only a lunch break.

If you find yourself should-ing, try changing the wording of your sentence, from “I should do X” to “I want to do X because Y”. It helps you figure out what you’re trying to do and why (Y), which may help you realize that there are more ways (X or Z or A) to achieve it, so “X” isn’t the requirement you thought it was.

If you find yourself overwhelmed because it feels like you have a big block of a task that you need to do, it also helps to break it down into steps. Start small, as small as opening a document and writing down what you need to do.

My recent favorite trick, which is working well for me, is putting the item “start writing prompt for (project X)” on my to-do list. I don’t have to run the prompt; I don’t have to read the output then; I don’t have to do the next steps after that… I only have to start writing the prompt. It turns out that writing the prompt for an LLM helps me organize my thoughts in a way that makes the subsequent next steps easier and clearer, and I often then bridge into completing several of those follow-up tasks! (More tips about starting that one small step here.)

The TL;DR, perhaps, is that while we might yearn to fit everything in perfectly and optimize it all, it’s not always going to turn out like that. Our priorities change, our energy availability changes (due to health or kids’ schedules or other life priorities), and if we strive to be more flexible, we will find more options to try to fit it all in.

Sometimes we can’t, but sometimes breaking things down can help us get closer.

Showing how the blocks on the left have fixed shapes and have certain combinations, then an arrow to the right with example blocks using the individual unit blocks rather than the fixed shapes, so the blocks look very different but are all filled, also.

Just Do Something (Permission Granted)

Just do it. And by it, I mean anything. You don’t need permission, but if you want permission, you have it.

If you’ve ever found yourself feeling stuck, overwhelmed (by uncertainty or the status of the world), and not sure what to do, I’ll tell you what to do.

Do something, no matter how small. Just don’t wait, go do it now.

Let’s imagine you have a grand vision for a project, but it’s something you feel like you need funding for, or partners for, or other people to work on, or any number of things that leave you feeling frozen and unable to do anything to get started. Or it’s something you want the world to have but it’s something that requires expertise to build or do, and you don’t have that expertise.

The reality is…you don’t need those things to get started.

You can get started RIGHT NOW.

The trick is to start small. As small as opening up a document and writing one sentence. But first, tell yourself, “I am not going to write an entire plan for Z”. Nope, you’re not going to do that. But what you are going to do is open the document and write down what the document is for. “This document is where I will keep notes about Plan Z”. If you have some ideas so far, write them down. Don’t make them pretty! Typos are great. You can even use voice dictation and verbalize your notes. For example “develop overall strategy, prompt an LLM for an outline of steps, write an email to person A about their interest in project Z”.

Thanks to advances in technology, you now have a helper to get started or tackle the next step, no matter how big or small. You can come back later and say “I’m not going to do all of this, but I am going to write the prompt for my LLM to create an outline of steps to develop the strategy for project Z”. That’s all you have to do: write the prompt. But you may find yourself wanting to go ahead and paste the prompt and hit run on the LLM. You don’t have to read the output yet, but it’s there for next time. Then next time, you can copy and paste the output into your doc and review it. Maybe there will be some steps you feel like taking then, or maybe you generate follow up prompts. Maybe your next step is to ask the LLM to write an email to person A about the project Z, based on the outline it generated. (Some other tips for prompting and getting started with LLMs here, if you want them.)

The beauty of starting small is that once you have something, anything, then you are making forward progress! You need progress to make a snowball, not just a snowflake in the air. Everything you do adds to the snowball. And the more you do, the easier it will get because you will have practice breaking things down into the smallest possible next step. Every time you find yourself procrastinating or saying “I can’t do thing B”, get in the habit of catching yourself and saying: 1) what could I do next? And write that down, even if you don’t do it then, and 2) ask an LLM “is it possible” or “how might I do thing B?” and break it down further and further until there’s steps you think you could take, even if you don’t take them then.

I’ve seen posts suggesting that increasingly funders (such as VCs, but I imagine it applies to other types of funders too) are going to be less likely to take projects seriously that don’t have a working prototype or an MVP or something in the works. It’s now easier than ever to build things, thanks to LLMs, and that means it’s easier for YOU to build things, too.

Yes, you. Even if you’re “not technical”, even if you “don’t know how to code”, or even if you’re “not a computer person”. Your excuses are gone. If you don’t do it, it’s because you don’t WANT to do it. Not knowing how to do it is no longer valid. Sure, maybe you don’t have time or don’t want to prioritize it – fine. But if it’s important to you to get other people involved (with funding or applications for funding or recruiting developers), then you should invest some of your time first and do something, anything, to get it started and figure out how to get things going. It doesn’t have to be perfect, it just has to be started. The more progress you make, the easier it is to share and the more people can discover your vision and jump on board with helping you move faster.

Another trigger you can watch for is finding yourself thinking or saying “I wish someone would do Y” or “I wish someone would make Z”. Stop and ask yourself “what would it take to build Y or Z?” and consider prompting an LLM to lay out what it would take. You might decide not to do it, but information is power, and you can make a more informed decision about whether this is something that’s important enough for you to prioritize doing.

And maybe you don’t have an idea for a project yet, but if you’re stewing with uncertainty these days, you can still make an impact by taking action, no matter how small. Remember, small adds up. Doing something for someone else is better than anything you could do for yourself, and I can say from experience it feels really good to make even small actions, whether it’s at the global level or down to the neighborhood level.

You probably know more what your local community needs, but to start you brainstorming, things you can do include:

  • Go sign up for a session to volunteer at a local food bank
  • Take groceries to the local food bank
  • Ask the local food bank if they have specific needs related to allergies etc, such as whether they need donations of gluten-free food for people with celiac
  • Go take books and deposit them at the free little libraries around your neighborhood
  • Sign up for a shift or get involved at a community garden
  • Paint rocks and go put them out along your local walking trails for people to discover
  • Write a social media post about your favorite charity and why you support it, and post it online or email it to your friends
  • Do a cost-effectiveness analysis for your favorite nonprofit and share it with them (you may need some data from them first), and also post it publicly

Just do something, you have permission - Dana M. Lewis

I’ve learned from experience that waiting rarely creates better outcomes. It only delays impact.

Progress doesn’t require permission: it requires action.

What are you waiting for? Go do something.

Scale yourself

One of the things I wish people would consider more often when thinking about AI is how they can use it to scale themselves. What are some time-consuming things that they currently have to do themselves that AI could do for them to streamline their output and increase their productivity? Productivity here means more time to do the things only they can do, the things they want to do, or the things they love to do. (And it helps stop procrastinating on the things they have to do.)

I have a habit of trying to scale myself. These days, it’s often related to EPI (exocrine pancreatic insufficiency, which some areas of the world know by the acronym PEI). I developed a strong knowledge base first from personal experience, then by doing research – including a systematic review where I read hundreds, plural, of research papers on key topics related to design protocols and guidelines. As a result of both personal and research experience, I have a lot of knowledge. It gets tapped almost daily in the EPI support groups that I’m a part of.

Whenever I notice myself answering the same question repeatedly, I make a mental note of it. Eventually, if a topic comes up often enough, I turn my response into a blog post. This way, I can provide a well-structured, comprehensive answer with more time and context than a quick comment on social media allows – and with the ability to give the same, high quality answer to multiple people (and in some cases, hundreds or thousands of people rather than the few who might see the comment buried in a response thread).

A few examples of this include:

One of my favorite things with this approach is then seeing other people begin to share the links to my longer-form content to help answer common questions. Writing things down in a shareable way also enables other people to scale my work by sharing it easily. This has started to happen more and more with the elastase blog post, in part because there are so few resources that cover this information all in one place.

For me, I lean toward writing, but for other people that could be videos, podcast/audio recording, or other formats that can capture things you know and make them shareable, thus scaling yourself.

For me, this approach of “scaling myself” and thinking about longer form content to post online instead of re-typing similar answers over and over again isn’t unique to EPI.

I have been doing this for over a decade. I developed this pattern early after we developed and shared OpenAPS (the first open source automated insulin delivery algorithm) with the world. Early on, I found myself answering the same technical questions repeatedly in online discussions, with the same answers. Typing out explanations on my phone was inefficient, and if one person had a question, others likely had the same one. Instead of repeating myself, I took the time to document answers. I would often pause, write up the information in the documentation, and share that instead. This made it easier and quicker to find and share a link instead of retyping responses, and because it took less time, I was more willing to do it than if I had to stop what I was doing in real life to type out a long answer to an already-answered question. Over time, I had to do less one-off typing on my phone (and could save that time and energy for truly unique questions) and could share links with a lot more information more easily.

How do I use AI to scale this type of work?

A lot of the above tasks are related to writing. There are different ways you can use AI for writing without having it write something completely. You can give it notes – whether you type or voice dictate them – and have it clean up your notes, so you can focus on thinking and not on typing or fixing typos that break your flow. You can have it convert the notes into full sentences. You can ask it to write a paragraph or an article based on the notes. You can ask it to suggest wording for a particular sentence you want to clarify for your audience.

If you think about the AI as an intern and/or a partner/collaborator who you would ask to review or edit for you, you’ll likely find even more ways to integrate AI into different parts of your writing process, even if it’s not doing the full writing for you.
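As a concrete illustration of the “clean up my notes” pattern described above, here is a minimal, hypothetical sketch using the OpenAI Python SDK. The model name, system prompt wording, and example notes are all assumptions of mine, not anything prescribed here; any LLM tool or API could be used the same way.

```python
# Minimal sketch: ask an LLM to clean up rough, typo-filled notes without
# adding new content. Assumes the openai package is installed and an
# OPENAI_API_KEY is set in the environment; "gpt-4o" is just an example model.
from openai import OpenAI

client = OpenAI()

rough_notes = """
develp overall strategy, prompt LLM for outline of steps,
email person A re: interest in project Z
"""

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {
            "role": "system",
            "content": (
                "Clean up these notes: fix typos and turn fragments into full "
                "sentences, but do not add any new ideas or content."
            ),
        },
        {"role": "user", "content": rough_notes},
    ],
)

print(response.choices[0].message.content)
```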

I have also tried to task the AI with writing for me, with mixed results. This doesn’t mean I don’t use it, but I’ve been practicing and learning where it generates usable content and where it doesn’t.

A lot of it depends on the prompt and the topic, as much as it does on the requested output (style, length, intended audience, etc.).

If it’s a topic that’s “known”, it can write more content that I can take and edit and transform, as opposed to when I am trying to write about a concept that is far from the current knowledge base. (I mean far for both humans and AI – a lot of my work is bleeding edge, pushing fields toward new developments and leading humans there.) Sometimes I ask it to write something and end up using none of the content, but by saying “ugh, no” my brain jumps to “it should really say…”, and I am able to more quickly springboard into manually writing the content I was previously slow on. In other words, it can be a brainstorming tool in the opposite sense, showing me what I do not want to say on a topic! And on some of my frontier/bleeding edge topics, it reflects what is commonly ‘known’, and when what is ‘known’ is now wrong (for example, as always, the common but incorrect claim that chronic pancreatitis is the most common cause of EPI), it helps me more clearly distinguish the new content from the old, wrong, or misinformed.

(Also, it’s worth reminding you of what I have to remind myself: AI is changing constantly, and new tools override what is known about which tasks they do and don’t do well! For example, in between writing this and posting it, OpenAI released GPT-4.5, which is reportedly better at writing-related tasks than GPT-4o and other older models. I’ll have to test it and see if that’s true, and for what kinds of writing tasks!)

This isn’t the only way you can scale yourself with AI, though. Scaling yourself doesn’t have to be limited to writing and documentation style tasks. AI and other tools can help with many tasks (more examples here and here, and a small code sketch of one of these follows the list), such as:

  • Cleaning and transforming data into different formats
  • Converting a CSV file into a more readable table
  • Writing code to automate tedious data processing
  • Drafting plain-language instructions for engineers or programmers
  • Checking whether instructions or explanations are clear and understandable, and identifying any gaps in logic that you missed on your first pass
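
For example, here is a minimal, hypothetical sketch of the second item – turning a raw CSV into a more readable table with a few lines of code. The file name and column names are placeholders I made up; pandas is one common library for this, but any approach works.

```python
# Minimal sketch: turn a raw CSV into a cleaned-up, more readable table.
# "meals.csv" and its column names are hypothetical placeholders.
import pandas as pd

df = pd.read_csv("meals.csv")

# Tidy the column names (e.g. "Fat (g)" -> "fat_g") so they are easier to work with
df.columns = (
    df.columns.str.strip()
    .str.lower()
    .str.replace(r"[^a-z0-9]+", "_", regex=True)
    .str.strip("_")
)

# Print a readable, aligned table instead of raw comma-separated text
print(df.to_string(index=False))
```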

By leveraging AI and other automation tools, you can free up time and energy for higher-value work: the things you are uniquely suited to do in the world, and the things that you want or love to do. And do them more easily!

Pro tip: if you find yourself procrastinating a task, this may be a good sign that you could use AI for some of it. 

I’m trying to use noticing procrastination as a trigger for considering AI for a task.

An example of this is an upcoming post with a bunch of math and meaty cost analysis that I originally did by hand. I needed (wanted) to re-do these estimates with different numbers, but my brain wanted to procrastinate because carefully re-doing all the estimates and replacing them throughout the blog post seemed tedious. So, I took the blog post and dumped it in with a prompt asking it to write Jupyter Notebook code to replicate the analyses explained in the plain-language post, with the ability to adjust all input variables and see the results in a table so I could compare the original and updated numbers. It took less than 1 minute to generate this code and about 5 minutes for me to copy/paste it, update the numbers, run it, evaluate the output, and decide what to update in the post. Doing this by hand would’ve taken 30-60 minutes, because I would have needed to check my work manually and trace it throughout the post. Instead, this automated the tedious bit and will result in this new post coming out next week rather than weeks from now (read about it here – it’s an analysis of how cost-effective Life for a Child is, a charity supporting people living with diabetes in low- and middle-income countries that can use your help to save lives).
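
The notebook itself isn’t shown here, but the pattern is worth sketching. This is a hypothetical, simplified version of the idea (the variable names and every number below are placeholders, not the actual analysis): put all the inputs in one place, compute the derived estimates from them, and print a comparison so that swapping in updated numbers instantly regenerates every result.

```python
# Hypothetical sketch of a parameterized re-analysis: all inputs live in one
# dictionary, so updating the numbers and re-running regenerates every estimate.
def run_analysis(inputs):
    cost_per_person_year = inputs["annual_budget"] / inputs["people_supported"]
    cost_per_daly = cost_per_person_year / inputs["dalys_averted_per_person_year"]
    return {
        "cost_per_person_year": round(cost_per_person_year, 2),
        "cost_per_daly_averted": round(cost_per_daly, 2),
    }

original = {  # placeholder numbers, not real figures
    "annual_budget": 1_000_000,
    "people_supported": 10_000,
    "dalys_averted_per_person_year": 0.5,
}
updated = {**original, "annual_budget": 1_200_000, "people_supported": 13_000}

for label, inputs in [("original", original), ("updated", updated)]:
    print(label, run_analysis(inputs))
```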

Scale yourself: automate more, so you can handle what matters, a blog by Dana M. Lewis from DIYPS.org

I encourage you to think about scaling yourself and identifying a task or series of tasks where you can get in the habit of leveraging these tools to do so. Like most things, the first time or two might take a little more time. But once you figure out what tasks or projects are suited for this, the time savings escalate – just like learning any new software, tool, or approach. A little bit of invested time up front will likely save you a lot of time in the future.

Beware “too much” and “too little” advice in Exocrine Pancreatic Insufficiency (EPI / PEI)

If I had a nickel for every time I saw conflicting advice for people with EPI, I could buy (more) pancreatic enzyme replacement therapy. (PERT is expensive, so it’s significant that there’s so much conflicting advice.)

One rule of thumb I find handy is to pause any time I see the words “too much” or “too little”.

This comes up in a lot of categories. For example, someone saying not to eat “too much” fat or fiber, and that a low-fat diet is better. The first part of the sentence should warrant a pause (red flag words – “too much”), and that should put a lot of skepticism on any advice that follows.

Specifically on the “low fat diet”: this is not true. A lot of outdated advice about EPI comes from historical research that no longer reflects modern treatment. In the past, low-fat diets were recommended because early enzyme formulations were not encapsulated or as effective, so people in the 1990s struggled to digest fat because the enzymes weren’t working correctly at the right time in their body. The “bandaid” fix was to eat less fat. Now that enzyme formulations are significantly improved (starting in the early 2000s, enzymes are encapsulated so they reach the right place in the digestive system at the right time to work on the food we eat or drink), medical experts no longer recommend low-fat diets. Instead, people should eat a regular diet and adjust their enzyme intake to match that food intake, rather than the other way around (source: see section 4.6).

Think replacement of enzymes, rather than restriction of dietary intake: the “R” in PERT literally stands for replacement!

If you’re reading advice as a person with EPI (PEI), you need to have math in the back of your mind. (Sorry if you don’t like math, I’ll talk about some tools to help).

Any time people use words to indicate amounts of things, whether that’s amounts of enzymes or amounts of food (fat, protein, carbs, fiber), you need to think of specific numbers to go with these words.

And, you need to remember that everyone’s body is different, which means your body is different.

Turning words into math for pill count and enzymes for EPI

Enzyme intake should not be compared without considering multiple factors.

The first reason is because enzyme pills are not all the same size. Some prescription pancreatic enzyme replacement therapy (PERT) pills can be as small as 3,000 units of lipase or as large as 60,000 units of lipase. (They also contain thousands or hundreds of thousands of units of protease and amylase, to support protein and carbohydrate digestion. For this example I’ll stick to lipase, for fat digestion.)

If a person takes two enzyme pills per meal, that number alone tells us nothing. Or rather, it tells us only half of the equation!

The size of the pills matters. Someone taking two 10,000-lipase pills consumes 20,000 units per meal, while another person taking two 40,000-lipase pills is consuming 80,000 units per meal.

That is a big difference! Comparing the two total amounts of enzymes (80,000 units of lipase or 20,000 units of lipase) is a 4x difference.

And I hate to tell you this, but that’s still not the entire equation to consider. Hold on to your hat for a little more math, because…

The amount of fat consumed also matters.

Remember, enzymes are used to digest food. It’s not a magic pill where one (or two) pills will perfectly cover all food. It’s similar to insulin, where different people can need different amounts of insulin for the same amount of carbohydrates. Enzymes work the same way, where different people need different amounts of enzymes for the same amount of fat, protein, or carbohydrates.

And, people consume different amounts and types of food! Breakfast is a good example. Some people will eat cereal with milk – often that’s more carbs, a little bit of protein, and some fat. Some people will eat eggs and bacon – that’s very little carbs, a good amount of protein, and a larger amount of fat.

Let’s say you eat cereal with milk one day, and eggs and bacon the next day. Taking “two pills” might work for your cereal and milk, but not your eggs and bacon, if you’re the person with 10,000 units of lipase in your pill. However, taking “two pills” of 40,000 units of lipase might work for both meals. Or not: you may need more for the meal with higher amounts of fat and protein.

If someone eats the same quantity of fat and protein and carbs across all 3 meals, every day, they may be able to always consume the same number of pills. But for most of us, our food choices vary, and the protein and fat varies meal to meal, so it’s common to need different amounts at different meals. (If you want more details on how to figure out how much you need, given what you eat, check out this blog post with example meals and a lot more detail.)
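
To make the arithmetic concrete, here is a small, purely illustrative sketch. This is not dosing advice and not a prescribed method: the personal ratio, pill size, and fat estimates below are all made-up placeholders, and individual needs vary widely. It simply shows how a meal’s fat grams plus a pill size turn “two pills” into an actual number.

```python
# Purely illustrative arithmetic (NOT dosing advice): if you know roughly how many
# units of lipase you personally need per gram of fat, you can translate a meal's
# fat grams into a pill count for a given pill size. All numbers are placeholders.
import math

lipase_per_gram_fat = 2000   # hypothetical personal ratio (units of lipase per gram of fat)
pill_size_lipase = 10_000    # e.g. a 10,000-lipase pill; a 40,000-lipase pill changes the math

meals = {
    "cereal with milk": 10,   # grams of fat (rough placeholder estimates)
    "eggs and bacon": 35,
}

for meal, fat_grams in meals.items():
    lipase_needed = fat_grams * lipase_per_gram_fat
    pills = math.ceil(lipase_needed / pill_size_lipase)
    print(f"{meal}: ~{lipase_needed:,} units of lipase -> {pills} x {pill_size_lipase:,}-unit pills")
```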

You need to understand your baseline before making any comparisons

Everyone’s body is different, and enzyme needs vary widely depending on the amount of fat and protein consumed. What is “too much” for one person might be exactly the right amount for another, even when comparing the same exact food quantity. This variability makes it essential to understand your own baseline rather than following generic guidance. The key is finding what works for your specific needs rather than focusing on an arbitrary notion of “too much”, because “too much” needs to be compared to specific numbers that can be compared as apples to apples.

A useful analogy is heart rate. Some people have naturally higher or lower resting heart rates. If someone who is not a doctor giving you direct medical advice tells you that your heart rate is too high, it’s like – what can you do about it? It’s not like you can grow your heart two sizes (like the Grinch). While fitness and activity can influence heart rate slightly, individual baseline differences remain significant. If you find yourself saying “duh, of course I’m not going to try to compare my heart rate to my spouse’s, our bodies are different”, that’s a GREAT frame of mind that you should apply to EPI, too.

(Another example is respiratory rate, where it varies person to person. If someone is having trouble breathing, the solution is not as simple as “breathe more” or “breathe less”—it depends on their normal range and underlying causes, and it takes understanding their normal range to figure out if they are breathing more or less than their normal, because their normal is what matters.)

If you have EPI, fiber (and anything else) also needs numbers

Fiber also follows this pattern. Some people caution against consuming “too much” fiber, but a baseline level is essential. “Too little” fiber can mimic EPI symptoms, leading to soft, messy stools. Finding the right amount of fiber is just as crucial as balancing fat and protein intake.

If you find yourself observing or hearing comments that you likely consume “too much” fiber – red flag check for “too much”! The same applies if you hear or see ‘low fiber’: low, meaning what number?

You should get an estimate for how much you are consuming and contextualize it against the typical recommendations overall, evaluate whether fiber is contributing to your issues, and only then consider experimenting with it.

(For what it’s worth, you may need to adjust enzyme intake for fat/protein first before you play around with fiber, if you have EPI. Many people are given PERT prescriptions below standard guidelines, so it is common to need to increase dosing.)

For example, if you’re consuming 5 grams of fiber in a day, and the typical guidance is often 25-30 grams (source; this varies by age, gender, and country, so it’s a ballpark)… you are consuming less than the average person and the average recommendation.

In contrast, if you’re consuming 50+ grams of fiber? You’re consuming more than the average person/recommendation.

Understanding where you are (around the recommendation, quite a bit below, or above?) will then help you determine whether advice for ‘more’ or ‘less’ is actually appropriate in your case. Most people have no idea what you’re eating – and honestly, you may not either – so any advice for “too much”, “too little”, or “more” or “less” is completely unhelpful without these numbers in mind.

You don’t have to tell people these numbers, but you can and should know them if you want to consider evaluating whether YOU think you need more/less compared to your previous baseline.

How do you get numbers for fiber, fat, protein, and carbohydrates?

Instead of following vague “more” or “less” advice, first track your intake and outcomes.

If you don’t have a good way to estimate the amount of fat, protein, carbohydrates, and/or fiber, here’s a tool you can use: a Custom GPT designed to give you back estimates of fat, protein, carbohydrates, and fiber.

You can give it a meal, or a day’s worth of meals, or several days, and have it generate estimates for you. (It’s not perfect but it’s probably better than guessing, if you’re not familiar with estimating these macronutrients).

If you don’t like or can’t access ChatGPT (it works with free accounts, if you log in), you can also take this prompt, adjust it how you like, and give it to any free LLM tool you like (Gemini, Claude, etc.):

You are a dietitian with expertise in estimating the grams of fat, protein, carbohydrate, and fiber based on a plain language meal description. For every meal description given by the user, reply with structured text for grams of fat, protein, carbohydrates, and fiber. Your response should be four numbers and their labels. Reply only with this structure: “Fat: X; Protein: Y; Carbohydrates: Z; Fiber: A”. (Replace the X, Y, Z, and A with your estimates for these macronutrients.) If there is a decimal, round to the nearest whole number. If there are no grams of any of the macronutrients, mark them as 0 rather than nil. If the result is 0 for all four variables, please reply to the user: “I am unable to parse this meal description. Please try again.”

If you are asked by the user to then summarize a day’s worth of meals that you have estimated, you are able to do so. (Or a week’s worth). Perform the basic sum calculation needed to do this addition of each macronutrient for the time period requested, based on the estimates you provided for individual meals.
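
If you ever want to work with these estimates programmatically (for example, totaling a day yourself instead of asking the LLM to sum them), the structured reply format above is easy to parse. Here is a minimal, hypothetical sketch; the example reply strings are placeholders, and the parsing assumes the reply follows the structure exactly.

```python
# Minimal sketch: parse replies in the "Fat: X; Protein: Y; Carbohydrates: Z; Fiber: A"
# format and sum them for a day. Example values below are placeholders.
import re

PATTERN = re.compile(
    r"Fat:\s*(\d+);\s*Protein:\s*(\d+);\s*Carbohydrates:\s*(\d+);\s*Fiber:\s*(\d+)"
)

def parse_estimate(reply: str) -> dict:
    match = PATTERN.search(reply)
    if not match:
        raise ValueError(f"Could not parse reply: {reply!r}")
    fat, protein, carbs, fiber = (int(x) for x in match.groups())
    return {"fat": fat, "protein": protein, "carbohydrates": carbs, "fiber": fiber}

day_of_replies = [
    "Fat: 12; Protein: 8; Carbohydrates: 45; Fiber: 3",   # cereal with milk (example)
    "Fat: 35; Protein: 25; Carbohydrates: 2; Fiber: 0",   # eggs and bacon (example)
]

totals = {"fat": 0, "protein": 0, "carbohydrates": 0, "fiber": 0}
for reply in day_of_replies:
    for key, value in parse_estimate(reply).items():
        totals[key] += value

print("Daily totals (grams):", totals)
```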

Another option is using an app like PERT Pilot. PERT Pilot is a free iOS app for people with EPI that requires no login or user account information. You can put in plain language descriptions of meals (“macaroni and cheese” or “spaghetti with meatballs”), get back estimates of fat, protein, and carbohydrates, and record how much enzyme you took so you can track your outcomes over time. (Android users – email me at Dana+PERTPilot@OpenAPS.org if you’d like to test the forthcoming Android version!) Note that PERT Pilot doesn’t estimate fiber, but if you want to start with fat/protein estimates, PERT Pilot is another way to see what you typically consume. (For people without EPI, you can use Carb Pilot, another free iOS app that similarly gives estimates of macronutrients.)

Beware advice of "more" or "less" that is vague and non-numeric (not a number) unless you know your baseline numbers in exocrine pancreatic insufficiency. A blog by Dana M. Lewis from DIYPS.org

TL;DR: Instead of arbitrarily lowering or increasing fat or fiber in the diet, measure and estimate what you are consuming first. If you have EPI, assess fat digestion first by adjusting enzyme intake to minimize symptoms. (And then protein, especially for low fat / high protein meals, such as chicken or fish.) Only then consider fiber intake—some people who experience mushy stools may actually need more fiber than they were consuming before, rather than less. Remember the importance of putting “more” or “less” into context with your own baseline numbers. Estimating current consumption is crucial because an already low-fiber diet may be contributing to the problem, and reducing fiber further could make things worse. Understanding your own baseline is the key.

You Can Create Your Own Icons (and animated gifs)

Over the years, I’ve experimented with different tools for making visuals. Some of them are just images but in the last several years I’ve made more animations, too.

But not with any fancy design program or purpose built tool. Instead, I use PowerPoint.

Making animated gifs

I first started using PowerPoint to create gifs around 2018 or 2019. At the time, PowerPoint didn’t have a built-in option to export directly to GIF format, so I had to export animations as a movie file first and then use an online converter to turn them into a GIF. Fortunately, in recent years, PowerPoint has added a direct “Export as GIF” feature.

The process of making an animated GIF in PowerPoint is similar to adding animations or transitions in a slide deck for a presentation. I’ve used this for various projects, including:

Am I especially trained? No. Do I feel like I have design skills? No.

Elbow grease and determination to try is what I have, with the goal of trying to use visuals to convey information as a summary or to illustrate a key point to accompany written text. (I also have a tendency to want to be a perfectionist, and I have to consciously let that go and let “anything is better than nothing” guide my attempts.)

Making icons is possible, too

Beyond animations, I’ve also used PowerPoint to create icons and simple logo designs.

I ended up making the logos for Carb Pilot (a free iOS app that enables you to track the macronutrients of your choice) and PERT Pilot (a free iOS app that enables people with exocrine pancreatic insufficiency, known as EPI or PEI, to track their enzyme intake) using PowerPoint.

This, and ongoing use of LLMs to help me with coding projects like these apps, is what led me to the realization that I can now make icons, too.

I was working to add a widget to Carb Pilot, so that users can have a widget on the home screen to more quickly enter meals without having to open the app and then tap; this saves a click every time. I went from having it be a single button to having 4 buttons to simulate the Carb Pilot home screen. For the “saved meals” button, I wanted a list icon, to indicate the list of previous meals. I went to SF Symbols, Apple’s icon library, picked out the list icon I wanted to use, and referenced it in Xcode. It worked, but it lacked something.

A light purple iOS widget with four buttons: top left is blue and says AI; top right is purple with a white microphone icon; bottom left is periwinkle blue with a white plus sign icon; bottom right is bright green with a custom list icon, where instead of bullets the three items are an apple, cupcake, and banana mini-icons.

It occurred to me that maybe I could tweak it somehow and make the bullets of the list represent food items. I wasn’t sure how, so I asked the LLM if it was possible. Because I’ve done my other ‘design’ work in PowerPoint, I went there and quickly dropped some shapes and lines to simulate the icon, then tested exporting – yes, you can export as SVG! I spent a few more minutes tweaking versions of it and exporting them. It turns out you can export as SVG, but the way I had designed it wasn’t really suited for SVG use: when I dropped the SVG into Xcode, it didn’t show up. I asked the LLM again and it suggested trying PNG format. I exported the icon from PowerPoint as PNG, dropped it into Xcode, and it worked!

(That was a good reminder that even when you use the “right” format, you may need to experiment to see what actually works in practice with whatever tools you’re using, and not let the first failure be a sign that it can’t work.)

Use What Works

There’s a theme you’ll be hearing from me: try and see what works. Just try. You don’t know if you don’t try. With LLMs and other types of AI, we have more opportunities to try new and different things that we may not have known how to do before. From coding your own apps to doing data science to designing custom icons, these are all things I didn’t know how to do before but now I can. A good approach is to experiment, try different things (and different prompts), and not be afraid to use “nontraditional” tools for projects, creative or otherwise. If it works, it works!

Facing Uncertainty with AI and Rethinking What If You Could?

If you’re feeling overwhelmed by the rapid development of AI, you’re not alone. It’s moving fast, and for many people the uncertainty of the future (for any number of reasons) can feel scary. One reaction is to ignore it, dismiss it, or assume you don’t need it. Some people try it once, usually on something they’re already good at, and when AI doesn’t perform better than they do, they conclude it’s useless or overhyped, and possibly feel justified in going back to ignoring or rejecting it.

But that approach misses the point.

AI isn’t about replacing what you already do well. It’s about augmenting what you struggle with, unlocking new possibilities, and challenging yourself to think differently, all in the pursuit of enabling YOU to do more than you could yesterday.

One of the ways to navigate the uncertainty around AI is to shift your mindset. Instead of thinking, “That’s hard, and I can’t do that,” ask yourself, “What if I could do that? How could I do that?”

Sometimes I get a head start by asking an LLM just that: “How would I do X? Lay out a plan or outline an approach to doing X.” I don’t always immediately jump to doing that thing, but I think about it, and probably 2 out of 3 times, laying out a possible approach means I do come back to that project or task and attempt it later.

Even if you ultimately decide not to pursue something because of time constraints or competing priorities, at least you’ve explored it and possibly learned something even in the initial exploration about it. But, I want to point out that there’s a big difference between legitimately not being able to do something and choosing not to. Increasingly, the latter is what happens, where you may choose not to tackle a task or take on a project: this is very different from not being able to do so.

Finding the Right Use Cases for AI

Instead of testing AI on things you’re already an expert in, try applying it to areas where you’re blocked, stuck, overwhelmed, or burdened by the task. Think about a skill you’ve always wanted to learn but assumed was out of reach. Maybe you’ve never coded before, but you’re curious about writing a small script to automate a task. Maybe you’ve wanted to design a 3D-printed tool to solve a real-world problem but didn’t know where to start. AI can be a guide, an assistant, and sometimes even a collaborator in making these things possible.

For example, I once thought data science was beyond my skill set. For the longest time, I couldn’t even get Jupyter Notebooks to run! Even with expert help, I was clearly doing something silly and wrong, and it took a long time – and finally LLM assistance, going step by step and deeper into sub-steps – to figure out the missing step that was never in the documentation or instructions. I finally figured it out! From there, I learned enough to do a lot of the data science work on my own projects. You can see that represented in several recent projects. The same thing happened with iOS development, which I initially felt imposter syndrome about. And this year, after FOUR failed attempts (even 3 using LLMs), I finally got a working app for Android!

Each time, the challenge felt enormous. But by shifting from “I can’t” to “What if I could?” I found ways to break through. And each time AI became a more capable assistant, I revisited previous roadblocks and made even more progress, even when it was a project (like an Android version of PERT Pilot) I had previously failed at, and in that case, multiple times.

Revisiting Past Challenges

AI is evolving rapidly, and what wasn’t possible yesterday might be feasible today. Literally. (A great example: I wrote a blog post about how medical literature seems like a game of telephone, opining on AI-assisted tools to aid with tracking changes to the literature over time. The day I put that blog post in the queue, OpenAI announced their Deep Research tool, which I think can in part address some of the challenges I described as currently unsolved!)

One thing I have started to do that I recommend is keeping track of problems or projects that feel out of reach. Write them down. Revisit them every few months, and explore them with the latest LLM and AI tools. You might be surprised at how much has changed, and what is now possible.

Moving Forward with AI

You don’t even have to use AI for everything. (I don’t.) But if you’re not yet in the habit of using AI for certain types of tasks, I challenge you to find a way to use an LLM for *something* that you are working on.

A good place to insert this into your work/projects is to start noting when you find yourself saying or thinking “this is the way we/I do/did things”.

When you catch yourself thinking this, stop and ask:

  • Does it have to be done that way? Why do we think so?
  • What are we trying to achieve with this task/project?
  • Are there other ways we can achieve this?
  • If not, can we automate some or more steps of this process? Can some steps be eliminated?

You can ask yourself these questions, but you can also ask these questions to an LLM. And play around with what and how you ask (the prompt, or what you ask it, makes a difference).

One example for me has been working on a systematic review and meta-analysis of a medical topic. I need to extract details about criteria I am analyzing across hundreds of papers. Oooph, big task, very slow. The LLM tools aren’t yet good at extracting non-obvious data from research papers, especially PDFs where the data I am interested in may be tucked into tables, figure captions, or the images themselves rather than explicitly stated in the results section. So for now, that still has to be done manually, but it’s on my list to revisit periodically with new LLMs.

However, I recognized that the way I was writing down (well, typing into a spreadsheet) the extracted data was burdensome and slow, and I wondered if I could make a quick, simple HTML page to guide me through the extraction, with an output of the data in CSV that I could open in spreadsheet form when I’m ready to analyze. The goal is easier input of the data with the same output format (CSV for a spreadsheet). So I used an LLM to help me quickly build that HTML page, set up a local server, and run it so I can use it for data extraction. This is one of those projects where I felt intimidated – I never quite understood spinning up servers, and in fact didn’t quite understand that I can “run” “a server” locally on my computer, for free, to do what I wanted to do. So in the process of working on a task I really understood (making an HTML page to capture data input), I was able to learn about spinning up and using local servers! Success, in terms of completing the task and learning something I can take forward into future projects.
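
To demystify the “local server” piece: here is a minimal, hypothetical sketch of the pattern, not the actual tool I built. It serves a local data-entry page and appends each submission to a CSV. The file name and field names are placeholders, and it assumes an extraction_form.html in the same folder that POSTs its fields as JSON.

```python
# Minimal sketch: serve a local data-entry page and append each submission to a CSV.
# "extraction_form.html", the CSV path, and the field names are hypothetical placeholders.
import csv
import json
import os
from http.server import BaseHTTPRequestHandler, HTTPServer

CSV_PATH = "extracted_data.csv"
FIELDS = ["paper_id", "criterion", "value"]  # hypothetical extraction fields


class ExtractionHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Serve the data-entry page (assumed to exist alongside this script)
        with open("extraction_form.html", "rb") as f:
            body = f.read()
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.end_headers()
        self.wfile.write(body)

    def do_POST(self):
        # Append one submitted record (sent by the page as JSON) to the CSV
        length = int(self.headers.get("Content-Length", 0))
        record = json.loads(self.rfile.read(length))
        write_header = not os.path.exists(CSV_PATH) or os.path.getsize(CSV_PATH) == 0
        with open(CSV_PATH, "a", newline="") as f:
            writer = csv.DictWriter(f, fieldnames=FIELDS)
            if write_header:
                writer.writeheader()
            writer.writerow({k: record.get(k, "") for k in FIELDS})
        self.send_response(204)
        self.end_headers()


if __name__ == "__main__":
    # "Spinning up a local server" is just this: run the script,
    # then open http://localhost:8000 in a browser.
    HTTPServer(("localhost", 8000), ExtractionHandler).serve_forever()
```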

Another smaller recent example: I wanted to put together a simple case report for my doctor, summarizing symptoms etc., and also adding in PDF pages of the studies I was referencing so she had access to them. I knew from past experience that I could copy and paste the thumbnails from Preview into the PDF, but pasting 15+ pages in as thumbnails got challenging: they kept inserting into and breaking up previous sections, so the order of the pages was wrong and hard to fix. I decided to ask my LLM of choice if it was possible to compile 4 PDF documents via a command line script, and it said yes. It told me what library to install (I checked first that this is an existing tool and not a made-up or malicious one) and what command to run. I ran it, it appended the PDFs together into one file the way I wanted, and it didn’t require the tedious hand work of copying and pasting everything together and rearranging pages when the order got messed up.
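
I won’t claim this is the exact library or command I used, but as a hypothetical sketch of the same idea, a few lines with the pypdf library (pip install pypdf) can append PDFs in order; the file names below are placeholders.

```python
# Minimal sketch: merge several PDFs into one file, in order, using pypdf.
# File names are hypothetical placeholders.
from pypdf import PdfWriter

files = ["case_summary.pdf", "study1.pdf", "study2.pdf", "study3.pdf"]

writer = PdfWriter()
for path in files:
    writer.append(path)  # appends all pages of each PDF, preserving order

with open("combined.pdf", "wb") as out:
    writer.write(out)
```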

The more I practice, the more easily I find myself switching into the habit of asking “would it be possible to do X?” or “is there a way to do Y more simply, more efficiently, or to automate it?”. That then leads to options I can decide to implement, or not. But it feels a lot better to have those on hand, even if I choose not to take a project on, rather than to feel overwhelmed, out of control, and uncertain about what AI can do (or not).

Facing uncertainty with AI and rethinking "What if you could?", a blog post by Dana M. Lewis on DIYPS.org

If you can shift your mindset from fear and avoidance to curiosity and experimentation, you might discover new skills, solve problems you once thought were impossible, and open up entirely new opportunities.

So, the next time you think, “That’s too hard, I can’t do that,” stop and ask:

“What if I could?”

If you appreciated this post, you might like some of my other posts about AI if you haven’t read them.

The prompt matters when using Large Language Models (LLMs) and AI in healthcare

I see more and more research papers coming out these days about different uses of large language models (LLMs, a type of AI) in healthcare. There are papers evaluating it for supporting clinicians in decision-making, aiding in note-taking and improving clinical documentation, and enhancing patient education. But I see a wide-sweeping trend in the titles and conclusions of these papers, exacerbated by media headlines, making sweeping claims about the performance of one model versus another. I challenge everyone to pause and consider a critical fact that is less obvious: the prompt matters just as much as the model.

As an example of this, I will link to a recent pre-print of a research article I worked on with Liz Salmi (pre-print here).

Liz nerd-sniped me with the idea of a study to have a patient and a neuro-oncologist evaluate LLM responses to patient-generated queries about a chart note (or visit note, open note, or clinical note, whatever you want to call it). I say nerd-sniped because I got very interested in designing the methods of the study, including making sure we used the APIs to model these ‘chat’ sessions so that the prompts were not influenced by custom instructions, ‘memory’ features within the account or chat sessions, etc. I also wanted to test something I’ve observed anecdotally from personal LLM use across other topics, which is that with 2024-era models the prompt matters a lot for what type of output you get. So that’s the study we designed, and wrote with Jennifer Clarke, Zhiyong Dong, Rudy Fischmann, Emily McIntosh, Chethan Sarabu, and Catherine (Cait) DesRoches, and I encourage you to check out the pre-print and enjoy the methods section, which is critical for understanding the point I’m trying to make here.

In this study, the data showed that when LLM outputs were evaluated for a healthcare task, the results varied significantly depending not just on the model but also on how the task was presented (the prompt). Specifically, persona-based prompts—designed to reflect the perspectives of different end users like clinicians and patients—yielded better results, as independently graded by both an oncologist and a patient.

The Myth of the “Best Model for the Job”

Many research papers conclude with simplified takeaways: Model A is better than Model B for healthcare tasks. While performance benchmarking is important, this approach often oversimplifies reality. Healthcare tasks are rarely monolithic. There’s a difference between summarizing patient education materials, drafting clinical notes, or assisting with complex differential diagnosis tasks.

But even within a single task, the way you frame the prompt makes a profound difference.

Consider these three prompts for the same task:

  • “Explain the treatment options for early-stage breast cancer.”
  • “You’re an oncologist. Explain the treatment options for early-stage breast cancer.”
  • “You’re an oncologist. Explain the treatment options for early-stage breast cancer as you would to a newly diagnosed patient with no medical background.”

The second and third prompts likely result in more accessible and tailored responses. If a study only tests general prompts (e.g. prompt one), it may fail to capture how much more effective an LLM can be with task-specific guidance.
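
A quick way to see this for yourself is to run the same three prompts against the same model under the same settings, so the only thing that varies is the prompt. Here is a hedged sketch using the OpenAI Python SDK (any provider’s API works similarly in principle); the model name is just an example, and this is not the study’s actual code.

```python
# Sketch: compare persona-based prompts via direct API calls, which avoid chat
# memory and custom instructions. Assumes the openai package and an OPENAI_API_KEY
# in the environment; "gpt-4o" is an example model, not a recommendation.
from openai import OpenAI

client = OpenAI()

prompts = [
    "Explain the treatment options for early-stage breast cancer.",
    "You're an oncologist. Explain the treatment options for early-stage breast cancer.",
    ("You're an oncologist. Explain the treatment options for early-stage breast cancer "
     "as you would to a newly diagnosed patient with no medical background."),
]

for prompt in prompts:
    response = client.chat.completions.create(
        model="gpt-4o",      # swap in whichever model you are evaluating
        temperature=0,       # hold sampling steady so differences come from the prompt
        messages=[{"role": "user", "content": prompt}],
    )
    print("PROMPT:", prompt)
    print(response.choices[0].message.content[:300], "...\n")
```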

Why Prompting Matters in Healthcare Tasks

Prompting shapes how the model interprets the task and generates its output. Here’s why it matters:

  • Precision and Clarity: A vague prompt may yield vague results. A precise prompt clarifies the goal and the speaker (e.g. in prompt 2), and also often the audience (e.g. in prompt 3).
  • Task Alignment: Complex medical topics often require different approaches depending on the user—whether it’s a clinician, a patient, or a researcher.
  • Bias and Quality Control: Poorly constructed prompts can inadvertently introduce biases.

Selecting a Model for a Task? Test Multiple Prompts

When evaluating LLMs for healthcare tasks—or applying insights from a research paper—consider these principles:

  1. Prompt Variation Matters: If an LLM fails on a task, it may not be the model’s fault. Try adjusting your prompts before concluding the model is ineffective, and avoid broad sweeping claims about a field or topic that aren’t supported by the test you are running.
  2. Multiple Dimensions of Performance: Look beyond binary “good” vs. “bad” evaluations. Consider dimensions like readability, clinical accuracy, and alignment with user needs when thinking about performance in healthcare, for example. In our paper, we saw some cases where the patient and provider overlapped in ratings, and other places where the ratings were different.
  3. Reproducibility and Transparency: If a study doesn’t disclose how prompts were designed or varied, its conclusions may lack context. Reproducibility in AI studies depends not just on the model, but on the interaction between the task, model, and prompt design. You should be looking for these kinds of details when reading or peer reviewing papers. Take results and conclusions with a grain of salt if these methods are not detailed in the paper.
  4. Involve Stakeholders in Evaluation: As shown in the preprint mentioned earlier, involving both clinical experts and patients in evaluating LLM outputs adds critical perspectives often missing in standard evaluations, especially as we evolve to focus research on supporting patient needs and not simply focusing on clinician and healthcare system usage of AI.

What This Means for Healthcare Providers, Researchers, and Patients

  • For healthcare providers, understand that the way you frame a question can improve the usefulness of AI tools in practice. A carefully constructed prompt, adding a persona or requesting information for a specific audience, can change the output.
  • For researchers, especially those developing or evaluating AI models, it’s essential to test prompts across different task types and end-user needs. Transparent reporting on prompt strategies strengthens the reliability of your findings.
  • For patients, recognize that AI-generated health information is shaped by both the model and the prompt. This can support critical thinking when interpreting AI-driven health advice. Remember that LLMs can be biased, but so can humans in healthcare. The same approach for assessing bias and evaluating experiences in healthcare should be used for LLM output as well as human output. Everyone (humans) and everything (LLMs) are capable of bias or errors in healthcare.

Prompts matter, so consider model type as well as the prompt as a factor in assessing LLMs in healthcare. Blog by Dana M. Lewis

TLDR: Instead of asking “Which model is best?”, a better question might be:

“How do we design and evaluate prompts that lead to the most reliable, useful results for this specific task and audience?”

I’ve observed, and this study adds evidence, that prompt interaction with the model matters.

A Tale of Three Artificial Intelligence (AI) Experiences in Healthcare Interactions

AI tools are being increasingly used in healthcare, particularly for tasks like clinical notetaking during virtual visits. As a patient, I’ve had three recent experiences with AI-powered notetaking tools during appointments with the same clinician. Each time, I consented to its use, but the results were very different across the three encounters. The first two involved similar tools with mostly good results but surprising issues around pronouns and transparency of the consent process. The third was a different tool with a noticeable drop in quality. But what really stands out, when I compare these to a visit without AI, is that human errors happen too — and the healthcare system lacks effective processes for identifying and correcting errors, no matter the source.

Encounter One: Good Notes, Incorrect Pronouns

At the start of my first virtual appointment, my clinician asked for my permission to use an AI-powered tool for notetaking. I consented. After the visit, I reviewed the clinical note, and the summary at the top described me using “he/him” pronouns. I’m female, so they should have been “she/her”.

The rest of the note was detailed and clinically accurate and useful. But the pronoun error stood out. It seemed like the AI defaulted to male pronouns when gender information wasn’t explicitly mentioned, which made me wonder whether the model was trained with gender bias or if this was a design flaw in this tool.

Encounter Two: Clarifying Pronouns, Learning About Chart Access

At the next appointment, my clinician again asked for consent to use an AI-powered notetaker. I agreed and pointed out the pronoun error from the previous visit, clarifying that I am female and use she/her pronouns. My clinician looked at the prior note and was equally puzzled, commenting that this issue had come up with other patients — both directions, sometimes assigning female pronouns to male patients and vice versa. The clinician mentioned that the AI system supposedly had access to patient charts and should be able to pull gender information from existing records. That really surprised me: the consent statement had described the tool as a notetaking aid, but nothing had been said about access to my full chart. I would have given permission either way, but the fact that this hadn’t been disclosed upfront was disappointing. I had understood this to be a passive notetaking tool summarizing the visit in real time, not something actively pulling and using other parts of my health record.

This time, the pronouns in the note were correct (which could be because we talked about it and I declared the pronouns), and the overall summary was again accurate and detailed. But the fact that this was a recurring issue, with my provider seeing it in both directions across multiple patients, made it clear that pronoun errors weren’t a one-off glitch.

Encounter Three: A Different AI with Worse Results

By the third appointment, I knew what to expect. The clinician again asked for consent to use an AI notetaker, and I agreed. But after reviewing the note from this visit, two things stood out.

First, the quality of the notetaking was noticeably worse. Several errors were obvious, including situations where the note reflected the exact opposite of what had been discussed. For example, I had said that something did not happen, yet the note recorded that it did.

Second, this time the note disclosed the specific software used for notetaking at the bottom of the document. It was a different tool than the one used in the first two visits. I hadn’t been told that a different AI tool was being used, but based on the change in quality and the naming disclosure, it was clear this was a switch.

This experience reinforced that even when performing the same task — in this case, AI notetaking — the software can vary widely in accuracy and quality. I much preferred the output from the first two visits, even with the initial pronoun error, over the third experience where clinically significant details were recorded incorrectly.

Notably, there doesn’t seem to be a process or method (or if there is one, it is not communicated to patients or easily findable when searching) to give the health system feedback on the quality and accuracy of these tools. That seems like a major flaw in most health systems’ implementations of AI-related tools: they assess and evaluate only from the healthcare provider perspective, and overlook or outright ignore the direct impact on patients (which influences patient care, the clinician-patient relationship, and trust in the health system).

A Human-Only Encounter: Still Not Error-Free

To give further context, I want to compare these AI experiences with a separate virtual visit where no AI was involved. This was with a different clinician who took notes manually. The pronouns were correct in this note, but there were still factual inaccuracies.

A small but clear example: I mentioned using Device A, but the note stated I was using Device B. This was not a critical error at the time, but it was still incorrect.

The point here is that human documentation errors are not rare. They happen frequently, even without AI involved. Yet the narrative around AI in healthcare often frames mistakes as uniquely concerning when, in reality, this problem already exists across healthcare.

A Bigger Issue is Lack of Processes for Fixing Errors

Across all four encounters — both AI-assisted and human-driven — the most concerning pattern was not the errors themselves but the failure to correct them, even after they were pointed out.

In the first AI note where the pronouns were wrong, the note was never corrected, even after I brought it up at the next appointment. The error remains in my chart.

In the human-driven note, where the wrong device was recorded, I pointed out the error multiple times over the years. Despite that, the error persisted in my chart across multiple visits.

Eventually, it did affect my care. During a prescription renewal, the provider questioned whether I was using the device at all because they referenced the erroneous notes rather than the prescription history. I had to go back, cite old messages where I had originally pointed out the error, and clarify that the device listed in the notes was wrong.

I had stopped trying to correct this error after multiple failed attempts because it hadn’t impacted my care at the time. But years later, it suddenly mattered — and the persistence of that error caused confusion and required extra time, adding friction into what should have been a seamless prescription renewal process.

My point: the lack of effective remediation processes is not unique to either AI or human documentation. Errors get introduced and then they stay. There are no good systems for correcting clinical notes, whether written by a human or AI.

So, What Do We Do About AI in Healthcare?

Critics of AI in healthcare often argue that its potential for errors is a reason to avoid the technology altogether. But as these experiences show, human-driven documentation isn’t error-free either.

The problem isn’t AI.

It’s that healthcare systems as a whole have poor processes for identifying and correcting errors once they occur.

When we evaluate AI tools, we need to ask:

  • What types of errors are we willing to tolerate?
  • How do we ensure transparency about how the tools work and what data they access?
  • Most importantly, what mechanisms exist to correct errors after they’re identified?

This conversation needs to go beyond whether errors happen and instead focus on how we respond when they do.  It’s worth thinking about this in the same way I’ve written about errors of commission and omission in diabetes care with automated insulin delivery (AID) systems (DOI: 10.1111/dme.14687; author copy here). Errors of commission happen when something incorrect is recorded. Errors of omission occur when important details are left out. Both types of errors can affect care, and both need to be considered when evaluating the use of AI or human documentation.

In my case, despite the pronoun error in the first AI note, the notetaking quality was generally higher than in the third encounter with a different AI tool. And even in the human-only note, factual errors persisted over years with no correction.

Three encounters with AI in healthcare - reflecting on errors of omission and commission that happen both with humans and AI.

AI can be useful for reducing clinician workload and improving documentation efficiency. But like any tool, its impact depends on how it’s implemented, how transparent the process is, and whether there are safeguards to address errors when they occur.

The reality is both AI and human clinicians make mistakes.

What matters, and what we should work on addressing, is how to fix errors in healthcare documentation and records when they occur.

Right now, this is a weakness of the healthcare system, and not unique to AI.

Pain and translation and using AI to improve healthcare at an individual level

I think differently from most people. Sometimes, this is a strength; and sometimes this is a challenge. This is noticeable when I approach healthcare encounters in particular: the way I perceive signals from my body is different from a typical person. I didn’t know this for the longest time, but it’s something I have been becoming more aware of over the years.

The most noticeable incident that brought me to this realization was when I pitched headfirst off a mountain trail in New Zealand over five years ago. I remember yelling – in flight – help, I broke my ankle, help. When I had arrested my fall, clung on, and then the human daisy chain was pulling me back up onto the trail, I yelped and stopped because I could not use my right ankle to help me climb up the trail. I had to reposition my knee to help move me up. When we got up to the trail and had me sitting on a rock, resting, I felt a wave of nausea crest over me. People suggested that it was dehydration and I should drink. I didn’t feel dehydrated, but ok. Then, because I was able to gently rest my foot on the ground at a normal perpendicular angle, the trail guides hypothesized that it was not broken, just sprained. It wasn’t swollen enough to look like a fracture, either. I felt like it hurt really badly, worse than I’d ever hurt an ankle before, and it didn’t feel like a sprain, but I had never broken a bone before, so maybe it was the trauma of the incident contributing to how I was feeling. We taped it and I tried walking. Nope. Too-strong pain. We made a new goal of having me use poles as crutches to get me to a nearby stream a half mile away, to try to ice my ankle. Nope: I could not use poles as crutches; even partial weight bearing was undoable. I ended up doing a mix of hopping, holding on to Scott and one of the guides. That got exhausting on my other leg pretty quickly, so I also got down on all fours (with my right knee on the ground but lifting my foot and ankle in the air behind me) to crawl some. Eventually, we realized I wasn’t going to be able to make it to the stream, and the trail guides decided to call for a helicopter evacuation. The medics, too, when they arrived via helicopter, thought it likely wasn’t broken. I got flown to an ER and taken to X-ray. When the technician came out, I asked her if she saw anything obvious and whether it looked broken or not. She laughed and said oh yes, there’s a break. When the ER doc came in to talk to me, he said “you must have a really high pain tolerance” and I said “oh really? So it’s definitely broken?” and he looked at me like I was crazy, saying “it’s broken in 3 different places”. (And then he gave me extra pain meds before setting my ankle and putting the cast on, to compensate for the fact that I have a high pain tolerance and/or don’t communicate pain levels in quite the typical way.)

A week later, when I was trying not to fall on my broken ankle and broke my toe, I knew instantly that I had broken my toe, both by the pain and by the nausea that followed. Years later, when I smashed another toe on another chair, I again knew that my toe was broken because of the pain plus the following wave of nausea. Nausea, for me, is apparently a response to very high-level pain. And this is something I’ve carried forward to help me identify and communicate when my pain levels are significant, because otherwise my pain tolerance means I don’t feel like I’m taken seriously; my pain scale is simply so different from other people’s.

Flash forward to the last few weeks. I have an autoimmune disease causing issues with multiple areas of my body. I have some slight but progressive muscle weakness that began to concern me, especially as it spread to multiple limbs and areas of my body. This was followed by pain in different parts of my spine, which has escalated. Last weekend, riding in the car, I started to get nauseous from the pain and had to take anti-nausea medicine (which thankfully helped) as well as pain medicine (OTC, and thankfully it also helped bring the pain down to manageable levels). This has happened several other times.

Some of the symptoms are concerning to my healthcare provider, and she agreed I should probably have an MRI and a consult from neurology. Sadly, the first available new-patient appointment with the neurologist I was assigned to was in late September. Gulp. I was admittedly nervous about my symptom progression, my pain levels (intermittent as they are), and how bad things might get if we are not able to take any action between now and September. I also, admittedly, was not quite sure how I would cope with the level of pain I have been experiencing at those peak moments that cause nausea.

I had last spoken to my provider a week prior, before the spine pain started. I reached out to give her an update, confirm that my specialist appointment was not until September, and express my concern about the progression and timeline. She too was concerned and I ended up going in for imaging sooner.

Over the last week, because I’ve been having these progressive symptoms, I used Katie McCurdy’s free templates from Pictal Health to help visualize and show the progression of symptoms over time. I wasn’t planning on sending my visuals to my doctor, but the process helped me concretely articulate my symptoms and confirm that I was including everything that I thought was meaningful for my healthcare providers to know. I also shared the visuals with Scott to confirm he didn’t think I had missed anything. The icons were helpful in some cases, but in other cases they didn’t quite match how I was experiencing pain, so I modified them somewhat to better reflect what I was feeling.

(PS – check out Katie’s templates here, you can make a copy in Google Drive and use them yourself!)

As I spoke with the nurse who was recording my information at intake for imaging, she asked me to characterize the pain. I did, and explained that it was usually around a 7/10 but periodically got stronger, to the point of causing nausea, which for me is a broken-bone-level pain response. She then asked me to characterize the pain further – was it burning, tingling…? None of the words she offered matched how it feels. It’s strong pain; it sometimes gets worse. But it’s not any of the words she mentioned.

When the nurse asked if it was “sharp”, Scott spoke up and explained the icon that I had used on my visual, suggesting maybe it was “sharp” pain. I thought about it and agreed that it was probably the closest word (at least, it wasn’t a hard no like burning, tingling, etc. were), and the nurse wrote it down. That became the word I was able to use as the closest approximation to how the pain felt, but again with the emphasis that it periodically reaches nausea-inducing levels equivalent to broken-bone pain, because I felt that saying “sharp” pain alone did not characterize it fully.

This, then, is one of the areas where I feel that artificial intelligence (AI) gives me a huge helping hand. I often will start working with an LLM (a large language model) and describing symptoms. Sometimes I give it a persona to respond as (different healthcare provider roles); sometimes I clarify my role as a patient or sometimes as a similar provider expert role. I use different words and phrases in different questions and follow ups; I then study the language it uses in response.

If you’re not familiar with LLMs, you should know that an LLM is not human intelligence; there is no brain that “knows things”. It’s not an encyclopedia. It’s a tool that’s been trained on a bajillion words; it learns patterns of words as a result and records “weights” that are basically cues about how those patterns of words relate to each other. When you ask it a question, it’s basically autocompleting the next word based on the likelihood of that word being next in a similar pattern. It can therefore be wildly wrong; it can also still be wildly useful in a lot of ways, including this context.

What I often do in these situations is not look for factual information. Again, it’s not an encyclopedia. Instead, I observe the patterns of words the LLM uses so that I am, in turn, building my own set of “weights” – meaning, an understanding of the patterns of words it uses – to figure out a general outline of what is commonly known by doctors and in the medical literature; the terminology doctors are likely using to take in information and give recommendations; and basically a list of things that do and do not match my scenario or symptoms or words, or whatever it is I am seeking to learn about.

I can then learn (from the LLM as well as from in-person clinical encounters) that doctors and other providers typically ask about burning, tingling, etc., and I can make it clear that none of those words match at all. I can then accept an alternative suggestion from them (or from Scott, or use a word I learned from an LLM) where I’m not quite sure it’s a perfect match, but it’s not absolutely wrong and is therefore ok to use to describe something of the sensation I am experiencing.

The LLM and AI, basically, have become a translator for me. Again, notice that I’m not asking it to describe my pain for me; it would make up words based on patterns that have nothing to do with me. But when I observe the words it uses I can then use my own experience to rule things in/out and decide what best fits and whether and when to use any of those, if they are appropriate.
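
If you want to see what this kind of “translation” exploration looks like in practice, here is a minimal sketch, again assuming the OpenAI Python SDK; the persona, wording, and model name are illustrative choices of mine, not a prescription. The point is that the prompt asks the model to surface the vocabulary clinicians commonly use, and the human decides which words fit.

```python
# Minimal sketch: use an LLM to surface common clinical pain vocabulary so the
# patient (not the model) can rule words in or out. Assumes the OpenAI Python SDK;
# the persona, prompt text, and model name are illustrative.
from openai import OpenAI

client = OpenAI()

persona = (
    "You are a clinician taking a pain history. "
    "List the descriptor words you would typically offer a patient "
    "(for example: burning, tingling, sharp, aching, throbbing), "
    "grouped by the kind of pain each usually suggests. "
    "Do not guess at a diagnosis."
)

my_words = (
    "Strong pain in different parts of my spine, usually moderate, "
    "but it periodically spikes to a level that causes nausea."
)

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder model name
    messages=[
        {"role": "system", "content": persona},
        {"role": "user", "content": my_words},
    ],
)

# The output is a vocabulary list to react to ("not burning, not tingling,
# 'sharp' is closest"), not a description of my pain and not medical advice.
print(response.choices[0].message.content)
```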

Often, I can do this in advance of a live healthcare encounter. And that’s really helpful because it makes me a better historian (to use clinical terms, meaning I’m able to report the symptoms, chronology, and characterization more succinctly without them having to play 20 questions to draw it out of me), and it saves me and the clinicians time so we can move on to other things.

At this imaging appointment, this was incredibly helpful. I had the necessary imaging, had the results at my fingertips, and was able to begin exploring and discussing the raw data with my LLM. When I then spoke with the clinician, I was able to better characterize my symptoms in the context of the imaging results and ask questions that I felt were more aligned with what I was experiencing. That made for a more efficient yet still effective conversation with the clinician about what our working hypothesis was, what the short-term and long-term pathways looked like, and so on.

This is often how I use LLMs overall. If you ask an LLM if it knows who Dana Lewis is, it “does” know. It’ll tell you things about me that are mostly correct. If you ask it to write a bio about me, it will solidly make up ⅓ of it that is fully inaccurate. Again, remember it is not an encyclopedia and does not “know things”. When you remember that the LLM is autocompleting words based on the likelihood that they match the previous words – and think about how much information is on the internet and how many weights (patterns of words) it’s been able to build about a topic – you can then get a better spidey-sense about when things are slightly more or less accurate at a general level. I have actually used part of an LLM-written bio, but not by asking it to write a bio. That doesn’t work, because of made-up facts. I have instead asked it to describe my work, and it does a pretty decent job. This is due to the number of articles I have written and authored; the number of articles describing my work; and the number of bios I’ve actually written and posted online for conferences and such. So it probably has a lot of “weights” tied to the types of things I work on, and having it describe the type of work I do or am known for gets pretty accurate results, because it’s writing at a general high level without enough detail to get anything “wrong”, like a fact about an award, etc.

This is how I recommend others use LLMs, too, especially those of us who are patients or working in healthcare. LLMs pattern-match on words in their training, and they output likely patterns of words. We as humans can, in turn, observe and learn from the patterns, while recognizing that these are PATTERNS of connected words that can in fact be wrong. Systemic bias is baked into human behavior and medical literature, and this has then been pattern-matched by the LLM. (Note I didn’t say “learned”; they’ve created weights based on the patterns they observe over and over again.) You can’t necessarily course-correct the LLM (it’ll pretend to apologize and maybe for a short while adjust its word patterns, but in a new chat it’s prone to make the same mistakes because the training has not been updated based on your feedback, so it reverts to using the ‘weights’ (patterns) it was trained on); instead, we need to create more of the correct/right information and have it voluminously available for LLMs to train on in the future. At an individual level, then, we can let go of the obvious not-right things it’s saying and focus on what we can benefit from in the patterns of words it gives us.

And for people like me, with a high (or different type of) pain tolerance and a different vocabulary for what my body is feeling like, this has become a critical tool in my toolbox for optimizing my healthcare encounters. Do I have to do this to get adequate care? No. But I’m an optimizer, and I want to give the best inputs to the healthcare system (providers and my medical records) in order to increase my chances of getting the best possible outputs from the healthcare system to help me maintain and improve and save my health when these things are needed.

TLDR: LLMs can be powerful tools in the hands of patients, including for real-time or ahead-of-time translation and creating shared, understandable language for improving communication between patients and providers. Just as you shouldn’t tell a patient not to use Dr. Google, you should similarly avoid falling into the trap of telling a patient not to use LLMs because they’re “wrong”. Being wrong in some cases and some ways does not mean LLMs are useless or should not be used by patients. Each of these tools has limitations but a lot of upside and benefits; restricting patients or trying to limit use of tools is like limiting the use of other accessibility tools. I spotted a quote from Dr. Wes Ely that is relevant: “Maleficence can be created with beneficent intent”. In simple words, he is pointing out that harm can happen even with good intent.

Don’t do harm by restricting or recommending avoiding tools like LLMs.

A Slackbot for using Slack to access and use a chat-based LLM in public

I’ve been thinking a lot about how to help my family, friends, and colleagues use LLMs to power their work. (As I’ve written about here, and more recently here with lots of tips on prompting and effectively using LLMs for different kinds of projects). 

Scott has been on the same page, especially thinking about how to help colleagues use LLMs effectively, but taking a slightly different approach: he built a Slackbot (a bot for Slack) which uses GPT-3.5 and GPT-4 to answer questions. This uses the GPT API but presents it to the user in Slack instead of requiring ChatGPT as the chat interface. So, it’s an LLM chatbot, different from ChatGPT (because it’s a different chat interface), but it uses the same AI (GPT-3.5 and GPT-4 from OpenAI). You could implement the same idea (a chat-based bot in Slack) using different AIs/LLMs, of course.

Using a Slack-based bot for an LLM achieves several things:

  1. More people can try GPT-4 and compare it to GPT-3.5 to get a taste for prompting and responses, without having to pay $20/month for a ChatGPT Pro account to get access to GPT-4.
  2. If you spend a lot of time in Slack for work, you don’t have to switch windows to access GPT.
  3. If your employer doesn’t want you to use the ChatGPT consumer product for work due to security concerns, but is more comfortable with OpenAI’s confidentiality guarantees for their API, a Slack bot interface provides a more secure alternative.
  4. You can see “in public” how other people in your Slack workspace are using it, and learn new tricks and ideas from their prompts and interactions (in addition to reading my long post here with lots of tips and suggestions).
  5. You can privately (in a DM by yourself) talk to the bot and ask it your “dumb” questions (which aren’t dumb), and you can reason through things at your own speed or learn how to do things that you haven’t done before, in private, just as you could with your own ChatGPT account. It doesn’t burn social capital or take other people’s time, which you can save for more nuanced questions.

Scott has implemented it for work, but also came up with another fun use for it: we created a family-focused Slack channel for the kids in our lives (we have ten (!) nieces and nephews – you can see how I used GPT to help with special custom books for some of them recently) so they and their parents can access GPT in a safe environment, too. ChatGPT (and OpenAI’s GPT API) already has some reasonable safeguards built in to protect kids, but more usefully, we want to be able to provide custom instructions to keep it always at a kid-focused level. Plus, this way we can share the experience with them, too.

I asked the bot to summarize itself for a lay audience not familiar with the project, and this was the output:

### Summary of the SlackAskBot

The SlackAskBot is a smart assistant designed to operate within Slack, a popular communication platform used by teams for messaging and collaboration. Imagine having a helpful robot in your group chats or direct messages that can understand your questions and provide informative responses. That’s what the SlackAskBot does. It’s like adding a knowledgeable team member to your Slack workspace who is always ready to assist.

#### How It Works:

  1. **Integration with Slack:**

– The bot is integrated into Slack, meaning it can read messages in channels it’s added to or when directly messaged. It’s like inviting a friend into your group chat who can listen and respond to conversations.

  2. **Understanding Messages:**

– When you type a message that includes the bot (either by mentioning it with an @ symbol in a channel or by sending it a direct message), the bot pays attention. It’s designed to understand that you’re talking to it and that you’re expecting some form of response.

  3. **Processing Your Requests:**

– Once the bot receives a message, it doesn’t just look at the latest message you’ve sent. It considers the context by looking at the conversation history. This way, it can provide responses that make sense within the ongoing discussion. It’s akin to a human joining a conversation, where understanding the previous discussion points is crucial for making relevant comments.

  4. **Generating Responses:**

– The bot uses a powerful language model (think of it as a very advanced AI that understands and generates human-like text) to come up with responses to your messages. It’s as if the bot thinks about what you’ve asked and then crafts a reply that aims to be helpful and informative.

  5. **Interacting Back:**

– After thinking through your message and generating a response, the bot then sends this response back into the Slack channel or direct message. You’ll see its reply just like you would from any other team member, providing the information or assistance you requested.

  6. **Special Features:**

– The bot is designed with specific settings for different channels or direct messages. This means it can tailor its responses based on where it’s talking. For example, it might respond differently in a casual chat compared to a more formal project discussion channel.

  7. **Respecting Privacy and Permissions:**

– The bot is programmed to respect Slack’s rules and the privacy of your conversations. It only listens and responds to messages where it’s explicitly mentioned or directly messaged, ensuring that it doesn’t intrude on private discussions.

#### In Summary:

The SlackAskBot acts as a virtual assistant within your Slack workspace, ready to provide information and answers to your questions. It’s designed to understand the context of conversations, generate helpful responses, and interact with users in a way that enhances team communication and productivity. Whether you’re asking for quick facts, seeking advice on a project, or just interacting for fun, the SlackAskBot is there to make your Slack experience more informative and engaging.


The SlackAskBot is open source – you can check out the code and instructions for how to implement it in your own Slack environment in the repository here.
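
For anyone curious about the general pattern (this is a simplified sketch of the idea, not the actual SlackAskBot code; see the repository for the real implementation), a Slack bot that relays messages to an LLM API can be fairly small. The sketch assumes the slack_bolt and openai Python packages plus the usual Slack bot and app tokens.

```python
# Simplified sketch of the general pattern: a Slack bot that forwards an
# @-mention to an LLM API and posts the reply back in a thread.
# Not the actual SlackAskBot implementation; assumes the slack_bolt and openai
# packages, plus SLACK_BOT_TOKEN, SLACK_APP_TOKEN, and OPENAI_API_KEY env vars.
import os

from openai import OpenAI
from slack_bolt import App
from slack_bolt.adapter.socket_mode import SocketModeHandler

app = App(token=os.environ["SLACK_BOT_TOKEN"])
llm = OpenAI()

@app.event("app_mention")
def handle_mention(event, say):
    """Send the mention text to the LLM and reply in the same thread."""
    response = llm.chat.completions.create(
        model="gpt-4",  # or gpt-3.5-turbo, depending on cost/quality tradeoffs
        messages=[
            {"role": "system", "content": "You are a helpful assistant in Slack."},
            {"role": "user", "content": event["text"]},
        ],
    )
    say(text=response.choices[0].message.content, thread_ts=event["ts"])

if __name__ == "__main__":
    # Socket Mode keeps the bot behind the firewall (no public HTTP endpoint needed).
    SocketModeHandler(app, os.environ["SLACK_APP_TOKEN"]).start()
```

Channel-specific behavior (like the kid-focused instructions mentioned above) would just mean swapping in a different system message depending on the channel ID.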

We are using Slack as a chat interface for LLMs like GPT, making the user experience similar to ChatGPT