How I Use LLMs like ChatGPT And Tips For Getting Started

You’ve probably heard about new AI (artificial intelligence) tools like ChatGPT, Bard, Midjourney, DALL-E and others. But, what are they good for?

Last fall I started experimenting with them. I looked at AI art tools and found them to be challenging, at the time, for one of my purposes, which was creating characters and illustrating a storyline with consistent characters for some of my children’s books. I also tested GPT-3 (meaning version 3.0 of GPT). It wasn’t that great, to be honest. But later, GPT-3.5 was released, along with the ChatGPT chat interface to it, which WAS a big improvement for a lot of my use cases. (And now, GPT-4 is out and is an even bigger improvement, although it costs more to use. More on the cost differences below.)

So what am I using these AI tools for? And how might YOU use some of these AI tools? And what are the limitations? This is what I’ve learned:

1. The most frequent way I use these AI tools is for getting started on a project, especially those related to writing.

You know the feeling of staring at a blank page and not knowing where to start? Maybe it’s the blank page of a cold email; the blank page of an essay or paper you need to write; the blank page of the outline for a presentation. Starting is hard!

Even for this blog post, I had a list of bulleted notes of things I wanted to remember to include. But I wasn’t sure how I wanted to start the blog post or incorporate them. I stuck the notes in ChatGPT and asked it to expand the notes.

What did it do? It wrote a few-paragraph summary, which wasn’t what I wanted. So I asked it again to use the notes, and this time to “expand each bullet into a few sentences, rather than summarizing”. With those clearer directions, it did, and I was able to look at the content and decide what I wanted to edit, include, or remove.

Sometimes I’m stuck on a particular writing task, and I use ChatGPT to break it down. In addition to kick-starting any type of writing overall, I’ve asked it to:

  • Take an outline of notes and summarize them into an introduction; limitations section; discussion section; conclusion; one paragraph summary; etc.
  • Take a bullet point list of notes and write full, complete sentences.
  • Take a long list of notes I’d written about data extracted for a systematic review I was working on, and ask it about recurring themes or outlier concepts. Especially when I had 20 pages (!) of hand-written notes in bullets with some loose organization by section, I could feed in chunks of content and get help seeing the big picture across those 20 pages. It can highlight themes in the data based on the written narratives around the data.

A lot of times, the best thing it does is prompt my brain to say “that’s not correct! It should be talking about…”, and then I’m able to more easily write the content that was in the back of my brain all along. I probably use 5% of what it’s written, and more frequently use it as a springboard for my own writing. That might be unique to how I’m using it, though; for simpler use cases, such as writing an email to someone or other simple content tasks, you may be able to keep 90% or more of the content it generates.

2. It can also help analyze data (caution alert!) if you understand how the tools work.

Huge learning moment here: these tools are called LLMs (large language models). They are trained on large amounts of language. They’re essentially designed to predict, based on all of the words (language) they’ve taken in previously, content that “sounds” like what would come after a given prompt. So if you ask it to write a song or a haiku, it “knows” what a song or a haiku “looks” like, and can generate words to match those patterns.

It’s essentially a PATTERN MATCHER on WORDS. Yeah, I’m yelling in all caps here because this is the biggest confusion I see. ChatGPT and most of these LLMs (at least by default) don’t have access to the internet; they’re not looking up answers in a search engine. If you ask one a question about a person, it’s going to give you an answer (because it knows what this type of answer “sounds” like), but depending on the amount of information it “remembers”, some of it may be accurate and some may be 100% made up.

Why am I explaining this? Remember the above section where I highlighted how it can start to sense themes in the data? It’s not answering solely based on the raw data; it’s not doing analysis of the data, but mostly of the words surrounding the data. For example, you can paste in data (from a spreadsheet) and ask it questions. I did that once, pasting in some data from a pivot table and asking it the same question I had asked myself in analyzing the data. It gave me the same sense of the data that I had based on my own analysis, then pointed out it was only qualitative analysis and that I should also do quantitative statistical analysis. So I asked it if it could do quantitative statistical analysis. It said yes, it could, and spit out some numbers and described the methods of quantitative statistical analysis.

But here’s the thing: those numbers were completely made up!

It can’t actually use (in its current design) the methods it was describing verbally, and instead made up numbers that ‘sounded’ right.

So I asked it to describe how to do that statistical method in Google Sheets. It provided the formula and instructions; I did that analysis myself; and confirmed that the numbers it had given me were 100% made up.
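(As a purely hypothetical illustration of what that looks like: if the method in question were a simple correlation between two columns, the kind of thing it hands you is a formula like =CORREL(A2:A50, B2:B50) plus instructions for where to put it, and then the spreadsheet, not the LLM, is what actually does the math on your real data.)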

The takeaway here is: it outright said it could do a thing (quantitative statistical analysis) that it can’t do. It’s like a human in some regards: some humans will lie or fudge and make stuff up when you talk to them. It’s helpful to be aware and query whether someone has relevant expertise, what their motivations are, etc. in determining whether or not to use their advice/input on something. The same should go for these AI tools! Knowing this is an LLM and it’s going to pattern match on language helps you pinpoint when it’s going to be prone to making stuff up. Humans are especially likely to make something up that sounds plausible in situations where they’re “expected” to know the answer. LLMs are in that situation all the time: sometimes they actually do know an answer, sometimes they have a good guess, and sometimes they’re just pattern matching and coming up with something that sounds plausible.

In short:

  • LLMs can expand general concepts and write language about what is generally well known, based on their training data.
  • Ask it for a particular fact, though, and it’s probably going to make stuff up, whether that’s about a person or a concept – you need to fact-check it elsewhere.
  • It can’t do math!

But what it can do is teach you or show you how to do the math, the coding, or whatever thing you wish it would do for you. And this gets into one of my favorite use cases for it.

3. You can get an LLM to teach you how to use new tools, solve problems, and lower the barrier to entry (and friction) on using new tools, languages, and software.

One of the first things I did was ask ChatGPT to help me write a script. In fact, that’s what I did to expedite the process of finding tweets where I had used an image, so I could grab a screenshot to embed on my blog rather than embedding the tweet itself.

It’s now so easy to generate code for scripts, regardless of which language you have previous experience with. I used to write all of my code as bash scripts, because that’s the format I was most familiar with. But ChatGPT likes to do things as Python scripts, so after I asked it to write a script and it generated a Python script, I asked it simple questions like “how do I call a Python script from the command line?”

Sure, you could search a search engine or Stack Overflow for similar questions and get the same information. But one nice thing is that if you have it generate a script and then ask it, step by step, how to run the script, it gives you instructions in the context of what you were doing. So instead of saying “to run a script, type ‘python script.py’” with placeholder names, it’ll say “to run the script, use ‘python actual-name-of-the-script-it-built-you.py’”, and you can click the button to copy that, paste it in, and hit enter. That saves a lot of the time otherwise spent figuring out how to swap in real values for placeholder information (which is what you get from a traditional search engine result or Stack Overflow, where people are fond of saying FOOBAR and you have no idea whether that means something or is meant to be a placeholder).

Careful observers will notice that the latest scripts I’ve added to my Open Humans Data Tools repository (which is packed with a bunch of scripts to help work with big datasets!) are now in Python rather than bash, such as the new scripts for fellow researchers looking to check for updates in big datasets (like the OpenAPS Data Commons). That’s because I used GPT to help with those scripts!

It’s really easy now to go from an idea to a script. If you’re able to describe it logically, you can ask it to write a script, tell you how to run it, and help you debug it. Sometimes you can start by asking it a question, such as “Is it possible to do Y?”, and it describes a method. You need to test the method or check it elsewhere, but take the example of uploading a list of DOIs to Mendeley to save me hundreds of clicks: I didn’t realize Mendeley had an API or that I could write a script that would use it! ChatGPT helped me write the script, figure out how to create a developer account and app access information for Mendeley, and debug along the way, so within an hour and a half I had a tool that easily saved me 3 hours on the very first project I used it with.
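To give a flavor of what that kind of script looks like, here’s a minimal sketch in Python. This is not my actual script: the endpoint, media type, and payload format shown are assumptions for illustration (check Mendeley’s API documentation for the real details), and the access token would come from your own Mendeley developer app.

```python
# Minimal sketch (not my actual script) of adding a list of DOIs to a Mendeley library.
# Assumptions for illustration: the endpoint, media type, and payload format below;
# ACCESS_TOKEN would come from your own Mendeley developer app via OAuth.
import requests

ACCESS_TOKEN = "your-oauth-access-token"  # placeholder
DOIS = ["10.1234/example-doi-1", "10.1234/example-doi-2"]  # placeholder DOIs

for doi in DOIS:
    response = requests.post(
        "https://api.mendeley.com/documents",  # assumed endpoint
        headers={
            "Authorization": f"Bearer {ACCESS_TOKEN}",
            "Content-Type": "application/vnd.mendeley-document.1+json",  # assumed media type
        },
        json={"type": "journal", "title": f"DOI {doi}", "identifiers": {"doi": doi}},
    )
    print(doi, response.status_code)
```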

I’m gushing about this because there’s probably a lot of ideas you have that you immediately throw out as being too hard, or you don’t know how to do it. It takes time, but I’m learning to remember to think “I should ask the LLM this” and ask it questions such as:

  • Is it possible to do X?
  • Write a script to do X.
  • I have X data. Pretend I am someone who doesn’t know how to use Y software and explain how I should do Z.

Another thing I’ve done frequently is ask it to help me quickly write a complex formula to use in a spreadsheet. Such as “write a formula that can be used in Google Sheets to take an average of the values in M3:M84 if they are greater than zero”.

It gives me the formula, describes it, and in some cases gives alternative options.
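(For that specific example, the kind of formula you’d end up with in Google Sheets is =AVERAGEIF(M3:M84, ">0"), which averages only the values in M3:M84 that are greater than zero.)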

Other things I’ve done with spreadsheets include:

  • Ask it to write a custom conditional formatting formula, then give me instructions for expanding the conditional formatting to apply to a certain cell range.
  • Ask it to check if a cell is filled with a particular value and then repeat the value in a new cell, in order to create new data series to use in particular charts and graphs I wanted to create from my data.
  • Ask it to help me transform my data so I could generate a box and whisker plot.
  • Ask it for other visuals that might be effective ways to illustrate and visualize the same dataset.
  • Ask it to explain the difference between two similar formulas (e.g. COUNT and COUNTA, or when to use IF versus IFS).
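To make a couple of those concrete (these are standard spreadsheet functions, and the cell references are just for illustration): the “repeat a value if a cell matches” idea is the general shape of a formula like =IF(B2="Eyes", B2, ""), and the difference in that last bullet is that COUNT only counts cells containing numbers, while COUNTA counts any non-empty cell.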

This has been incredibly helpful especially with some of my self-tracked datasets (particularly around thyroid-related symptom data) where I’m still trying to figure out the relationship between thyroid levels, thyroid antibody levels, and symptom data (and things like menstrual cycle timing). I’ve used it for creating the formulas and solutions I’ve talked about in projects such as the one where I created a “today” line that dynamically updates in a chart.

It’s also helped me get past the friction of setting up new tools. Case in point: Jupyter notebooks. I’ve used them in the web browser version before, but often had issues running the notebooks people gave me. I debugged and did all kinds of troubleshooting, but for years I was not able to get Jupyter successfully installed locally on (multiple of) my computers. I had finally given up on effectively using notebooks, and definitely given up on running them locally on my machine.

However, I decided to see if I could get ChatGPT to coax me through the install process.

I told it:

“I have this table with data. Pretend I am someone who has never used R before. Tell me, step by step, how to use a Jupyter notebook to generate a box and whisker plot using this data”

(and I pasted my data that I had copied from a spreadsheet, then hit enter).

It outlined exactly what I needed to do: install Jupyter Notebook locally if I hadn’t already (and gave me code to do that), install the R kernel (and told me how to do that), then start a notebook, all the way down to what code to put in the notebook, the transformed data I could copy/paste, and all the code that generated the plot.

However, remember I have never been able to successfully get Jupyter Notebooks running! For years! I was stuck on step 2, installing R. I said:

“Step 2, explain to me how I enter those commands in R? Do I do this in Terminal?”

It said “Oh apologies, no, you run those commands elsewhere, preferably in RStudio. Here is how to download RStudio and run the commands.”

So, like humans often do, it glossed over a crucial step. But it went back and explained it to me and kept giving more detailed instructions and helping me debug various errors. After 5-6 more troubleshooting steps, it worked! And I was able to open Jupyter Notebooks locally and get it working!

All along, most of the tutorials I had been reading had skipped or glossed over that I needed to do something with R, and where that was. Probably because most people writing the tutorials are already data scientists who have worked with R and RStudio etc, so they didn’t know those dependencies were baked in! Using ChatGPT helped me be able to put in every error message or every place I got stuck, and it coached me through each spot (with no judgment or impatience). It was great!
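(For reference, the general shape of those steps, which may differ depending on your setup: install Jupyter from Terminal with something like ‘pip install notebook’; then, inside R itself (for example in RStudio, not Terminal), run install.packages("IRkernel") followed by IRkernel::installspec() to register the R kernel; then launch ‘jupyter notebook’ from Terminal.)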

I was then able to continue with the other steps of getting my data transformed, into the notebook, running the code, and generating my first ever box and whisker plot with R!

A box and whisker plot, illustrated simply to show that I used R and Jupyter finally successfully!

This is where I really saw the power of these tools, reducing the friction of trying something new (a tool, a piece of software, a new method, a new language, etc.) and helping you troubleshoot patiently step by step.

Does it sometimes skip steps or give you solutions that don’t work? Yes. But it’s still a LOT faster than manually debugging, trying to find someone to help, or spending hours in a search engine or Stack Overflow trying to translate generic code/advice/solutions into something that works on your setup. The beauty of these tools is you can simply paste in the error message and it goes “oh, sorry, try this to solve that error”.

Because the barrier to entry is so low (compared to before), I’ve also asked it to help me with other project ideas where I previously didn’t want to spend the time needed to learn new software and languages and all the nuances of getting from start to end of a project.

Such as, building an iOS app by myself.

I have a ton of projects where I want to temporarily track certain types of data for a short period of time. My fallback is usually a spreadsheet on my phone, but it’s not always easy to quickly enter data in a spreadsheet on your phone, even if you set up a template with a drop-down menu like I’ve done in the past (for my DIY macronutrient tool, for example). Right now, I want to see if there’s a correlation between my blood pressure at different times and patterns of inflammation in my eyelid and heart rate symptoms (which, for me, are symptoms of thyroid antibodies being out of range, due to Graves’ disease). That means I need to track my symptom data, but also now some blood pressure data. I want to be able to put these datasets together easily, which I can, but the hardest part (so to speak) is finding a way that I am willing to record my blood pressure data. I don’t want to use an existing BP tracking app, I don’t want a connected BP monitor, and I don’t want to use Apple Health. (Yes, I’m picky!)

I decided to ask ChatGPT to help me accomplish this. I told it:

“You’re an AI programming assistant. Help me write a basic iOS app using Swift UI. The goal is a simple blood pressure tracking app. I want the user interface to default to the data entry screen where there should be three boxes to take the systolic, diastolic blood pressure numbers and also the pulse. There should also be selection boxes to indicate whether the BP was taken sitting up or laying down. Also, enable the selection of a section of symptom check boxes that include “HR feeling” and “Eyes”. Once entered on this screen, the data should save to a google spreadsheet.” 

This is a completely custom, DIY, n of 1 app. I don’t care about it working for anyone else, I simply want to be able to enter my blood pressure, pulse, whether I’m sitting or laying down, and the two specific, unique to me symptoms I’m trying to analyze alongside the BP data.

And it helped me build this! It taught me how to set up a new SwiftUI project in Xcode, gave me code for the user interface, showed me how to set up API access to Google Sheets, helped me write the code to save the data to Sheets, and walked me through getting the app to run.

(I am still debugging the connection to Google Sheets, so in the interim I changed my mind and had it create another screen to display the stored data then enable it to email me a CSV file, because it’s so easy to write scripts or formulas to take data from two sources and append it together!)
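As an aside, that appending step really is simple. Here’s a minimal sketch of what it can look like in Python with pandas; the filenames are made up and this isn’t the app’s actual code:

```python
# Minimal, illustrative sketch: append two CSV exports (e.g. the emailed BP data
# and a separate symptom log) into one file. Filenames here are placeholders.
import pandas as pd

bp = pd.read_csv("bp_export.csv")
symptoms = pd.read_csv("symptom_log.csv")

combined = pd.concat([bp, symptoms], ignore_index=True, sort=False)
combined.to_csv("combined_data.csv", index=False)
```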

Is it fancy? No. Am I going to try to distribute it? No. It’s meeting a custom need to enable me to collect specific data super easily over a short period of time in a way that my previous tools did not enable.

Here’s a preview of my custom app running in a simulator phone:

Simulator iPhone with a basic iOS app that takes in BP and pulse, with buttons for indicating whether BP was taken sitting or laying down, toggles for key symptoms (in my case HR feeling or eyes), and a purple save button.

I did this in a few hours, rather than taking days or weeks. And now, the barrier to entry for creating more custom iOS apps is reduced, because now I’m more comfortable working with Xcode and the file structures and what it takes to build and deploy an app! Sure, again, I could have learned to do this in other ways, but the learning curve is drastically shortened, and it takes away most of the ‘getting started’ friction.

That’s the theme across all of these projects:

  • Barriers to entry are lower and it’s easier to get started
  • It’s easier to try things, even if they flop
  • There’s a quicker learning curve on new tools, technologies and languages
  • You get customized support and troubleshooting without having to translate through as many generic placeholders

PS – speaking of iOS apps, based on building this one simple app I had the confidence to try building a really complex, novel app that has never existed in the world before! It’s for people with exocrine pancreatic insufficiency like me who want to log pancreatic enzyme replacement therapy (PERT) dosing and improve their outcomes – check out PERT Pilot and how I built it here.

4. Notes about what these tools cost

I found ChatGPT useful for writing projects in terms of getting started, even though the content wasn’t that great (this was on GPT-3.5). Then GPT-4 came out, along with a ChatGPT Plus option for $20/month. I didn’t think it was worth it and resisted it. Then I finally decided to try it, because some of the more sophisticated use cases I wanted it for required a longer context window, and GPT-4 offered both a better model and that longer context window. I paid the first $20 assuming I’d want to cancel it by the end of the month.

Nope.

The $20 has been worth it on every single project that I’ve used it for. I’ve easily saved 5x that on most projects in terms of reducing the energy needed to start a project, whether it was writing or developing code. It has saved 10x that in time cost recouped from debugging new code and tools.

GPT-4 does have caps, though, so even with the $20/month, you can only do 25 messages every 3 hours. I try to be cognizant of which projects I default to using GPT-3.5 on (unlimited) versus saving the more sophisticated projects for my GPT-4 quota.

For example, I saw a new tool someone had built called “AutoResearcher”, downloaded it, and tried to use it. I ran into a bug and pasted the error into GPT-3.5 and got help figuring out where the problem was. Then I decided I wanted to add a feature to output to a text file, and it helped me quickly edit the code to do that, and I PR’ed it back in and it was accepted (woohoo) and now everyone using that tool can use that feature. That was pretty simple and I was able to use GPT-3.5 for that. But sometimes, when I need a larger context window for a more sophisticated or content-heavy project, I start with GPT-4. When I run into the cap, it tells me when my next window opens up (3 hours after I started using it), and I usually have an hour or two until then. I can open a new chat on GPT-3.5 (without the same context) and try to do things there; switch to another project; or come back at the time it says to continue using GPT-4 on that context/setup.

Why the limit? Because it’s a more expensive model. So you have a tradeoff between paying more and having a limit on how much you can use it, because of the cost to the company.

—–

TLDR:

Most important note: LLMs don’t “think” or “know” things the way humans do. They output language they predict you want to see, based on their training and the inputs you give them. It’s like the autocomplete of a sentence in your email, but with more words and on a wider range of topics!

Also, LLMs can’t do math. But they can write code. Including code to do math.

(Some, but not all, LLMs have access to the internet to look up or incorporate facts; make sure you know which LLM you are using and whether it has this feature or not.)

Ways to get started:

    1. The most frequent way I use these AI tools is for getting started on a project, especially those related to writing.
      • Ask it to help you expand on notes; write summaries of existing content; or write sections of content based on instructions you give it
    2.  It can also help analyze data (caution alert!) if you understand the limitations of the LLM.
      • The most effective way to work with data is to have it tell you how to run things in analytical software, whether that’s how to use R or a spreadsheet or other software for data analysis. Remember the LLM can’t do math, but it can write code so you can then do the math!
    3.  You can get an LLM to teach you how to use new tools, solve problems, and lower the barrier to entry (and friction) on using new tools, languages, and software.
      • Build a new habit of asking it “Can I do X” or “Is it possible to do Y” and when it says it’s possible, give it a try! Tell it to give you step-by-step instructions. Tell it where you get stuck. Give it your error messages or where you get lost and have it coach you through the process. 

What’s been your favorite way to use an LLM? I’d love to know other ways I should be using them, so please drop a comment with your favorite projects/ways of using them!

Personally, the latest project that I built with an LLM has been PERT Pilot!

How I use LLMs (like ChatGPT) and tips for getting started

How to Pick Food (Fuel) For Ultramarathon Running

I’ve previously written about ultrarunning preparation and a little bit about how I approach fueling. But it occurred to me there might be others out there wondering exactly HOW to find fuel that works for them, because it’s an iterative process.

The way I approach fueling is based on a couple of variables.

First and foremost, everything has to be gluten free (because I have celiac). So that limits a lot of the common ultrarunning fuel options. Things like bars (some are GF, most are not), Uncrustables, PopTarts, and many other common recommendations in the ultra community just aren’t an option for me. Some, I can find or make alternatives to, but it’s worth noting that being gluten free for celiac (where cross-contamination is also an issue, not just the ingredients) or having a food allergy and being an ultrarunner can make things more challenging.

Then, I also have exocrine pancreatic insufficiency. This doesn’t limit what I eat, but it factors into how I approach ideal fueling options, because I have to match the enzyme amounts to the amount of food I’m eating. So naturally, the pill-size options I have for OTC enzymes influence the portion sizes of what I choose (one is lipase-only and covers ~6g of fat for me; another is a multi-enzyme option that includes protease to cover protein, but only enough lipase to cover ~4g of fat for me; I also have a much larger one that covers ~15g of fat, but I don’t typically use that one while running).

That being said, I probably – despite EPI – still tend toward higher fat options than most people. This is in part because I have had type 1 diabetes for 20+ years. While I by no means consume a low c-a-r-b diet, I typically consume less than the people with insulin-producing pancreases in my life, and lean slightly toward higher fat options because a) my taste buds like them and b) they’ve historically had less impact on my glucose levels. Reason A is probably the main reason now, thanks to automated insulin delivery, but regardless of the reason, 20+ years of higher-than-most-people’s fat consumption means I’m also probably better fat-adapted for exercise than most people.

Plus, ultrarunning paces tend to be slower than those of shorter runs (like marathons and shorter, for most people), which is also more amenable to digesting fat and other nutrients. So ultrarunners in general tend to have more options, beyond just needing “gu” and “gel” and “blocks” and calorie-sugar drinks as fuel (although if that is what you prefer and it works well for you, great!).

All of these reasons lead me toward generally preferring fuel portions that are:

  1. Gluten free with no cross-contamination risk
  2. ~20g of carbs
  3. ~10g of fat or less
  4. ~5-10g of protein or less

Overall, I shoot for consuming ~250 calories per hour. Some people like to measure hourly fuel consumption by calories. Others prefer carb consumption. But given that I have a higher tolerance for fat and protein consumption – thanks to the enzymes I need for EPI plus decades of practice – calories as a metric for hourly consumption makes sense for me. If I went for the level of carb intake many recommend for ultrarunners, I’d find it harder to consistently manage glucose levels while running for a zillion hours. I by no means think any of my above numbers are necessarily what’s best for anyone else, but that’s what I use based on my experiences to date as a rough outline of what to shoot for.

After I’ve thought through my requirements (gluten free, 250 calories per hour, and preferably no single-serving portion size greater than 20ish grams of carbs, 10g of fat, or 5-10g of protein), I can move on to making a list of foods I like and that I think would “work” for ultrarunning.

“Work” by my definition is not too messy to carry or eat (won’t melt easily, won’t require holding in my hands to eat and get them messy).

My initial list has included (everything here gluten free):

  • Oreos or similar sandwich type cookies
  • Yogurt/chocolate covered pretzels
  • PB or other filled pretzel nuggets
  • Chili cheese Fritos
  • Beef sticks
  • PB M&M’s
  • Reese’s Pieces
  • Snickers
  • Mini PayDays
  • Macaroons
  • Muffins
  • Fruit snacks
  • Fruit/date bars
  • GF Honey Stinger Stroopwaffles (only specific flavors are GF, which is why I’m noting this)

I wish I could include more chip/savory options on my lists, and that’s something I’ve been working on. Fritos are easy enough to eat from a snack size baggie without having to touch them with my hands or pull individual chips out to eat; I can just pour portions into my mouth. Most other chips, though, are too big and too ‘sharp’ feeling for my mouth to eat this way, so chili cheese Fritos are my primary savory option, other than beef sticks (that are surprisingly moist and easy to swallow on the run!).

Some of the foods I’ve tried from the above list and have eventually taken OFF my list include:

  • PB pretzel nuggets, because they get stale in baggies pretty fast and then they feel dry and obnoxious to chew and swallow.
  • Muffins – I tried both banana muffin halves and chocolate chip muffin halves. While they’re moist and delicious straight out of the oven, I found they are challenging to swallow while running (probably because they’re more dry).
  • Gluten free Oreos – actual Oreo brand GF Oreos, which I got burnt out on about the time I realized I had EPI, but also they too have a pretty dry mouthfeel. I’ve tried other brand chocolate sandwich cookies and also for some reason find them challenging to swallow. I did try a vanilla sandwich cookie (Glutino brand) recently and that is working better – the cookie is harder but doesn’t taste as dry – so that’s tentatively on my list as a replacement.

Other than “do I like this food” and “does it work for carrying on runs”, I then move on to “optimizing” my intake in terms of macronutrients.  Ideally, each portion size and item has SOME fat, protein, and carbs, but not TOO MUCH fat, protein and carbs.

Most of my snacks are some fat, a little more carb, and a tiny bit of protein. The outlier is my beef sticks, which are the highest protein option out of my shelf-stable running fuel options (7g of fat, 8g of protein). Most of the others are typically 1-3g of protein, 5-10g of fat (perfect, because that is 1-2 enzyme OTC pills), and 10-20g of carb (ideal, because it’s a manageable amount for glucose levels at any one time).

Sometimes, I add things to my list based on the above criteria (gluten free with no cross-contamination risk; I like to eat it; not messy to carry) and work out a possible serving size. For example, the other day I was brainstorming more fuel options and it occurred to me that I like brownies, and a piece of brownie would probably be moist and nice tasting and would be fine in a baggie. I planned to make a batch of brownies and calculated how I would cut them to get consistent portion sizes (so I would know the macronutrients for enzymes).

However, once I made my brownies and started to cut them, I immediately went “nope” and scratched them off my list for using on runs. Mainly because I hate cutting them, and they crumbled. The idea of having to perfect how to cook them so they could be cut without crumbling just seems like too much work. So I scratched them off my list, and am just enjoying eating the brownies as brownies at home, not during runs!

I first started taking these snacks on runs and testing each one, making sure that they tasted good and also worked well for me (digestion-wise) during exercise, not just when I was sitting around. All of them, other than the ones listed above for ‘dry’ reasons or things like brownies (crossed off because of the hassle to prepare), have stayed on the list.

I also started looking at the total amount of calories I was consuming during training runs, to see how close I was to my goal of ~250 calories per hour. It’s not an exact number and a hard and fast “must have”, but given that I’m a slower runner (who run/walks, so I have lower calorie burn than most ultrarunners), I typically burn in the ballpark of ~300-400 calories per hour. I generally assume ~350 calories for a reasonable average. (Note, again, this is much lower than most people’s burn, but it’s roughly my burn rate and I’m trying to show the process itself of how I make decisions about fuel).

Aiming for ~250 calories per hour means that I only have a deficit of 100 calories per hour. Over the course of a ~100 mile race that might take 30 hours, this means I’ll “only” have an estimated deficit of 3,000 calories. Which is a lot less than most people’s estimated deficit, both because I have a lower burn rate (I’m slower) and because, as described above and below, I am trying to be very strategic about fueling for a number of reasons, including not ending up under fueling for energy purposes. For shorter runs, like a 6 hour run, that means I only end up ~600 calories in deficit – which is relatively easy to make up with consumption before and after the run, to make sure that I’m staying on top of my energy needs.

It turns out, some of my preferred snacks have a lot fewer (or more) calories than others! And this can add up.

For example, fruit snacks – super easy to chew (or swallow without much chewing). 20g of carb, 0g of fat or protein, and only 80 calories. Another easy to quickly chew and swallow option: a mini date (fruit) bar. 13g carb, 5g fat, 2g protein. And…90 calories. My beef stick? 7g of fat, 8g of protein, and only 100 calories!

The approach that works for me has been to eat every 30 minutes, which means twice per hour. Those are three of my favorite (because they’re easy to consume) fuel options. If I eat two of those in the same hour, say the fruit snacks and the date bar, that’s only 170 calories. Well below the goal of 250 for the hour! Combining either with my beef stick (so 180 or 190 calories, depending) is still well below goal.

This is why I have my macronutrient fuel library with carbs, fat, protein, *and* calories (and sodium, more on that below) filled out, so I can keep an eye on patterns of what I tend to prefer by default – which is often more of these smaller, fewer calorie options as I get tired at the end of the runs, when it’s even more important to make sure I’m at (or near) my calorie goals.

Tracking this for each training run has been really helpful, so I can see my default tendency to choose “smaller” and “easier to swallow” – but that also means likely fewer calories – options. This is also teaching me that I need to pair larger-calorie options with them, or follow on with a larger-calorie option. For example, I have certain items on my list like Snickers. I get the “share size” bars that are actually 2 individual bars, and open them up and put one in each baggie. ½ of the share-size package (aka 1 bar) is 220 calories! That’s a lot (relative to other options), so if I eat a <100 calorie option like fruit snacks or a date bar, I try to make it in the same hour as that above-average option, like the ½ Snickers. 220+80 is 300 calories, which means it’s above goal for the hour.

And that works well for me. Sometimes I do have hours where I am slightly below goal – say 240 calories. That’s fine! It’s not precise. But 250 calories per hour as a goal seems to work well as a general baseline, and I know that if I have several hours of at or greater than 250 calories, one smaller hour (200-250) is not a big deal. But this tracking and reviewing my data during the run via my tracking spreadsheet helps make sure I don’t get on a slippery slope to not consuming enough fuel to match the demands I’m putting on my body.

And the same goes for sodium. I have read a lot of literature on sodium consumption and/or supplementation in ultrarunning. Most of the science suggests it may not matter in terms of sodium concentration in the blood and/or muscle cramps, which is why a lot of people choose sodium supplementation. But for me, I have a very clear, distinct feeling when I get not enough sodium. It is almost like a chemical feeling in my chest, and is a cousin (but distinct) feeling to feeling ketones. I’ve had it happen before on long hikes where I drank tons to stay hydrated and kept my glucose levels in range but didn’t eat snacks with sodium nor supplement my water. I’ve also had it happen on runs. So for me, I do typically need sodium supplementation because that chemical-like feeling builds up and starts to make me feel like I’m wheezing in my chest (although my lungs are fine and have no issues during this). And what I found works for me is targeting around 500mg/hour of sodium consumption, through a combination of electrolyte pills and food.

(Side note: most ultrarunning blogs I’ve read suggest you’ll be just fine based on food you graze at the aid stations. Well, I do most of my ultras as solo endeavors – no grazing, everything is pre-planned – and even if I did do an organized race, because of celiac I can’t eat 95% of the food (due to ingredients, lack of labeling, and/or cross-contamination)… so relying on aid station food for sodium just doesn’t work for me. Maybe it would work for other people; it just doesn’t for me given the celiac situation.)

I used to just target 500mg/hour of sodium through electrolyte pills. However, as I switched to actually fueling my runs and tracking carbs, fat, protein, and calories (as described above), I realized it’d be just as easy to track sodium intake in the food, and maybe that would enable me to have a different strategy on electrolyte pill consumption – and it did!

I went back to my spreadsheet and re-added information for sodium to all of my food items in my fuel library, and added it to the template that I duplicate for every run. Some of my food items, just like they can be outliers on calories or protein or fat or carbs, are also outliers on sodium. Biggest example? My beef stick, the protein outlier, is also a sodium outlier: 370mg of sodium! Yay! Same for my chili cheese Fritos – 210mg of sodium – which is actually the same amount of sodium that’s in the type of electrolyte pills I’m currently using.

I originally had a timer set, and every 45 minutes I’d take an electrolyte pill. However, in the last year I gradually realized that sometimes that put me over by quite a bit in certain hours and, in some cases, left me WAY under my 500mg sodium goal in others. I actually noticed this in the latter portion of my 82 mile run – I started to feel the low-sodium chest feeling that I get, glanced at my sheet (which I hadn’t been paying close attention to because of So. Much. Rain) and realized – oops – that I had had an hour of 323mg of sodium followed by a 495mg hour. I took another electrolyte pill to catch up and chose some higher sodium snacks for my next few fuels. There were also a couple of hours earlier in the run (hours 4 and 7) where I happened – based on some of my fresh fuel options like mashed potatoes – to end up with over 1000mg of sodium. I probably didn’t need that much, and so in subsequent hours I learned I could skip the electrolyte pill when I had had mashed potatoes in the last hour.

Eventually, after my 82-mile run, when I started training long runs again, I settled on keeping an eye on my rolling sodium tallies and tracking sodium the way I track calories: taking an electrolyte pill when my hourly average dropped below 500mg, rather than on a pre-set timer, and skipping it when I was already above 500mg. That began to work well for me.

And that’s what I’ve been experimenting with for my last half dozen runs, which has worked – all of those runs have ended up with a total average slightly above 500mg of sodium and slightly above 250 calories for all hours of the run!

An example chart that automatically updates (as a pivot table) summarizing each hour's intake of sodium and calories during a run. At the bottom, an average is calculated, showing this 6 hour run example achieved 569 mg/hr of sodium and 262 calories per hour, reaching both goals.
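For anyone who’d rather script this check than use a pivot table, here’s a minimal sketch of the same hourly tally logic in Python. It’s purely illustrative; my actual tracking lives in the spreadsheet, and the log entries below are made up:

```python
# Purely illustrative sketch of the hourly tally logic described above:
# sum sodium and calories per hour of the run and compare against the goals.
# The log format and numbers here are made up for the example.
fuel_log = [
    # (hour of run, sodium in mg, calories)
    (1, 370, 100), (1, 210, 160),
    (2, 80, 220), (2, 210, 0),   # e.g. a snack plus an electrolyte pill
]

SODIUM_GOAL = 500   # mg per hour
CALORIE_GOAL = 250  # calories per hour

for hour in sorted({entry[0] for entry in fuel_log}):
    sodium = sum(s for h, s, c in fuel_log if h == hour)
    calories = sum(c for h, s, c in fuel_log if h == hour)
    flag = ""
    if sodium < SODIUM_GOAL:
        flag += " <-- consider an electrolyte pill"
    if calories < CALORIE_GOAL:
        flag += " <-- pick a higher-calorie next fuel"
    print(f"Hour {hour}: {sodium} mg sodium, {calories} calories{flag}")
```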

Now, you may be wondering – she tracks calories and sodium, what about fat and protein and carbs?

I don’t actually care about or use these in real time for an hourly average; I use them solely for real-time decisions: 1) the carbs, to know how much insulin I might need depending on my glucose levels at the time (because I have type 1 diabetes); and 2) the fat and protein, to make sure I take the right amount of enzymes so I can actually digest the fuel (because I have exocrine pancreatic insufficiency and can’t digest fuel without enzyme pills). I do occasionally look back at these numbers cumulatively, but for the most part, they’re solely there for real-time decision making at the moment I decide what to eat. Which is 95% of the time based on my taste buds, after I’ve decided whether I need to factor in a higher calorie or sodium option!

For me, my higher sodium options are chili cheese Fritos, beef stick, yogurt covered pretzels.

For me, my higher calorie options are the ½ share size Snickers; chili cheese Fritos; Reese’s pieces; yogurt covered pretzels; GF honey stinger stroopwaffle; and 2 mini PayDay bars.

Those are all shelf-stable options that I keep in snack size baggies and ready to throw into my running vest.

Most of my ‘fresh’ food options, that I’d have my husband bring out to the ‘aid station’/turnaround point of my runs for refueling, tend to be higher calorie options. This includes ¼ of a GF PB&J sandwich (which I keep frozen so it lasts longer in my vest without getting squishy); ¼ of a GF ham and cheese quesadilla; a mashed potato cup prepared in the microwave and stuck in another baggie (a jillion, I mean, 690mg of sodium if you consume the whole thing but it’s occasionally hard to eat allll those mashed potatoes out of a baggie in one go when you’re not actually very hungry); sweet potato tots; etc.

So again, my recommendation is to find foods you like in general and then figure out your guiding principles. For example:

  • Do you have any dietary restrictions, food allergies or intolerances, or have already learned foods that your body Does Not Like while running?
  • Are you aiming to do carbs/hr, calories/hr, or something else? What amounts are those?
  • Do you need to track your fuel consumption to help you figure out whether (and why) you’re not hitting your fuel goals? If so, how? Is it by wrappers? Do you want to start with a list of fuel and cross it off or tear it off as you go? Or, like me, use a note on your phone or a drop-down list in your spreadsheet to log it (my blog post here has a template if you’d like to use it)?

My guiding principles are:

  • Gluten free with no cross contamination risk (because celiac)
  • ~250 calories per hour, eating twice per hour to achieve this
  • Each fuel (every 30 min) should be less than ~20g of carb, ~10g of fat, and ~5-10g of protein
  • I also want ~500mg of sodium each hour through the 2x fuel and when needed, electrolyte pills that have 210mg of sodium each
  • Dry food is harder to swallow; mouthfeel (ability to chew and swallow it) is something to factor in.
  • I prefer to eat my food on the go while I’m run/walking, so it should be all foods that can go in a snack or sandwich size baggie in my vest. Other options (like chicken broth, soup, and messy food items) can be on my backup list to be consumed at the aid station but unless I have a craving for them, they are secondary options.
  • Not a hassle to make/prepare/measure out into individual serving sizes.

Find foods that you like, figure out your guiding principles, and keep revising your list as you find what options work well for you in different situations and based on your running needs!

Food (fuel) for ultramarathon running by Dana Lewis at DIYPS.org

Functional Self-Tracking is The Only Self-Tracking I Do

“I could never do that,” you say.

And I’ve heard it before.

Eating gluten free for the rest of your life, because you were diagnosed with celiac disease? Heard that response (I could never do that) for going on 14 years.

Inject yourself with insulin or fingerstick test your blood glucose 14 times a day? Wear an insulin pump on your body 24/7/365? Wear a CGM on your body 24/7/365?

Yeah, I’ve heard you can’t do that, either. (For 20 years and counting.) Which means I and the other people living with the situations that necessitate these behaviors are…doing this for fun?

We’re not.

More recently, I’ve heard this type of comment come up about tracking what I’m eating, and in particular, tracking what I’m eating when I’m running. I definitely don’t do that for fun.

I have a 20+ year strong history of hating tracking things, actually. When I was diagnosed with type 1 diabetes, I was given a physical log book and asked to write down my blood glucose numbers.

“Why?” I asked. They’re stored in the meter.

The answer was because supposedly the medical team was going to review them.

And they did.

And it was useless.

“Why were you high on February 22, 2003?”

Whether we were asking this question in March of 2003 or January of 2023 (almost 20 years later), the answer would be the same: I have no idea.

BG data, by itself, is like a single data point for a pilot. It’s useless without the contextual stream of data as well as other metrics (in the diabetes case, things like what was eaten, what activity happened, what my schedule was before this point, and all insulin dosed potentially in the last 12-24h).

So you wouldn’t be surprised to find out that I stopped tracking. I didn’t stop testing my blood glucose levels – in fact, I tested upwards of 14 times a day when I was in high school, because the real-time information was helpful. Retrospectively? Nope.

I didn’t start “tracking” things again (for diabetes) until late 2013, when we realized that I could get my CGM data off the device and onto the laptop beside my bed, dragging the CGM data into a CSV file in Dropbox and sending it to the cloud so an app called “Pushover” would make a louder and different alarm on my phone to wake me up for overnight hypoglycemia. The only reason I added any manual “tracking” to this system was because we realized we could create an algorithm to USE the information I gave it (about what I was eating and the insulin I was taking), combined with the real-time CGM data, to usefully predict glucose levels in the future. Predictions meant we could make *predictive* alarms, instead of solely having *reactive* alarms, which had been the status quo in diabetes for decades.

So sure, I started tracking what I was eating and dosing, but not really. I was hitting buttons to enter this information into the system because it was useful, again, in real time. I didn’t bother doing much with the data retrospectively. I did occasionally do things like reflect on my changes in sensitivity after I got norovirus, for example, but again this was mostly looking in awe at how the real-time functionality of autosensitivity (an algorithm feature we designed to adjust to real-time changes in insulin sensitivity) handled things throughout the course of being sick.

At the beginning of 2020, my life changed. Not because of the pandemic (although also because of that), but because I began to have serious, very bothersome GI symptoms that dragged on throughout 2020 and 2021. I’ve written here about my experiences in eventually self-diagnosing (and confirming) that I have exocrine pancreatic insufficiency, and began taking pancreatic enzyme replacement therapy in January 2022.

What I haven’t yet done, though, is explain all my failed attempts at tracking things in 2020 and 2021. Or, not failed attempts, but where I started and stopped and why those tracking attempts weren’t useful.

Once I realized I had GI symptoms that weren’t going away, I tried writing down everything I ate. I tried writing in a list on my phone in spring of 2020. I couldn’t see any patterns. So I stopped.

A few months later, in summer of 2020, I tried again, this time using a digital spreadsheet so I could enter data from my phone or my computer. Again, after a few days, I still couldn’t see any patterns. So I stopped.

I made a third attempt to try to look at ingredients, rather than categories of food or individual food items. I came up with a short list of potential contenders, but repeated testing of consuming those ingredients didn’t do me any good. I stopped, again.

When I first went to the GI doctor in fall of 2020, one of the questions he asked was whether there was any pattern between my symptoms and what I was eating. “No,” I breathed out in a frustrated sigh. “I can’t find any patterns in what I’m eating and the symptoms.”

So we didn’t go down that rabbit hole.

At the start of 2021, though, I was sick and tired (of being sick and tired with GI symptoms for going on a year) and tried again. I decided that some of my “worst” symptoms happened after I consumed onions, so I tried removing obvious sources of onion from my diet. That evolved to onion and garlic, but I realized almost everything I ate also had onion powder or garlic powder, so I tried avoiding those. It helped, some. That then led me to research more, learn about the categorization of FODMAPs, and try a low-FODMAP diet in mid/fall 2021. That helped some.

Then I found out I actually had exocrine pancreatic insufficiency and it all made sense: what my symptoms were, why they were happening, and why the numerous previous tracking attempts were not successful.

You wouldn’t think I’d start tracking again, but I did. Although this time, finally, was different.

When I realized I had EPI, I learned that my body was no longer producing enough digestive enzymes to help my body digest fat, protein, and carbs. Because I’m a person with type 1 diabetes and have been correlating my insulin doses to my carbohydrate consumption for 20+ years, it seemed logical to me to track the amount of fat and protein in what I was eating, track my enzyme (PERT) dosing, and see if there were any correlations that indicated my doses needed to be more or less.

My spreadsheet involved recording the outcome of the previous day’s symptoms, and I had a section for entering multiple things that I ate throughout the day and the number of enzymes. I wrote a short description of my meal (“butter chicken” or “frozen pizza” or “chicken nuggets and veggies”), the estimate of fat and protein counts for the meal, and the number of enzymes I took for that meal. I had columns on the left that added up the total amount of fat and protein for the day, and the total number of enzymes.

It became very apparent to me – within two days – that the dose of the enzymes relative to the quantity of fat and protein I was eating mattered. I used this information to titrate (adjust) my enzyme dose and better match the enzymes to the amount of fat or protein I was eating. It was successful.

I kept writing down what I was eating, though.

In part, because it became a quick reference library to find the “counts” of a previous meal that I was duplicating, without having to re-do the burdensome math of adding up all the ingredients and counting them out for a typical portion size.

It also helped me see that within the first month, I was definitely improving, but not all the way – in terms of fully reducing and eliminating all of my symptoms. So I continued to use it to titrate my enzyme doses.

Then it helped me carefully work my way through re-adding food items and ingredients that I had been avoiding (like onions, apples, and pears) and proving to my brain that those were the result of enzyme insufficiency, not food intolerances. Once I had a working system for determining how to dose enzymes, it became a lot easier to see when I had slight symptoms from slightly getting my dosing wrong or majorly mis-estimating the fat and protein in what I was eating.

It provided me with a feedback loop that doesn’t really exist in EPI and GI conditions, and it was a daily, informative, real-time feedback loop.

As I reached the end of my first year of dosing with PERT, though, I was still using my spreadsheet. That surprised me, actually. Did I need to be using it? Not all the time. But the biggest reason I kept using it relates to how I often eat. I often look at an ‘entree’ for protein and then ‘build’ the rest of my meal around that, to help make sure I’m getting enough protein to fuel my ultrarunning endeavors.

So I pick my entree/main thing I’m eating and put it in my spreadsheet under the fat and protein columns (=17 g of fat, =20 g of protein, for example), then decide what I’m going to eat to go with it. Say I add a bag of cheddar popcorn: those cells become (=17+9 g of fat) and (=20+2 g of protein), and when I hit enter, they tell me it’s 26 g of fat and 22 g of protein for the meal, which tells my brain (and I also tell the spreadsheet) that I’ll take 1 PERT pill for that. So I use the spreadsheet functionally to “build” what I’m eating and calculate the total grams of protein and fat, which helps me ‘calculate’ how much PERT to take (based on my previous titration efforts, I know one PERT pill of the size of my prescription covers up to 30g each of fat and protein).

Example in my spreadsheet showing a meal and the in-progress data entry of entering the formula to add up two meal items' worth of fat and protein
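For the curious, the same arithmetic as a minimal Python sketch, using only the simple rule above (one pill of my prescription size covering up to ~30g each of fat and protein). Coverage ratios are individual, so this is illustrative rather than dosing advice:

```python
import math

# Illustrative only: the simple "up to ~30g of fat and ~30g of protein per pill" rule
# described above. Real titration is individual; this is not dosing advice.
FAT_PER_PILL = 30      # grams of fat covered by one pill (my ratio; yours may differ)
PROTEIN_PER_PILL = 30  # grams of protein covered by one pill

def pills_needed(fat_g: float, protein_g: float) -> int:
    """Number of pills needed to cover whichever macronutrient needs more."""
    return max(math.ceil(fat_g / FAT_PER_PILL),
               math.ceil(protein_g / PROTEIN_PER_PILL))

# The meal above: 17+9 g of fat and 20+2 g of protein -> 1 pill
print(pills_needed(17 + 9, 20 + 2))  # prints 1
```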

Essentially, this has become a real-time calculator to add up the numbers every time I eat. Sure, I could do this in my head, but I’m usually multitasking and deciding what I want to eat and writing it down, doing something else, doing yet something else, then going to make my food and eat it. This helps me remember, between the time I decided – sometimes minutes, sometimes hours in advance of when I start eating and need to actually take the enzymes – what the counts are and what the PERT dosing needs to be.

I have done some neat retrospective analysis, of course – last year I estimated that I took thousands of PERT pills (more on that here). I was able to do that not because it’s “fun” to track every pill that I swallow, but because, as a result of functionally self-tracking what I was eating to determine my PERT dosing, I had a record of 99% of the enzyme pills that I took last year.

I do have some things that I’m no longer entering in my spreadsheet, which is why it’s only 99% of what I eat. There are some things like a quick snack where I grab it and the OTC enzymes to match without thought, and swallow the pills and eat the snack and don’t write it down. That maybe happens once a week. Generally, though, if I’m eating multiple things (like for a meal), then it’s incredibly useful in that moment to use my spreadsheet to add up all the counts to get my dosing right. If I don’t do that, my dosing is often off, and even a little bit “off” can cause uncomfortable and annoying symptoms the rest of the day, overnight, and into the next morning.

So, I have quite the incentive to use this spreadsheet to make sure that I get my dosing right. It’s functional: not for the perceived “fun” of writing things down.

It’s the same thing that happens when I run long runs. I need to fuel my runs, and fuel (food) means enzymes. Figuring out how many enzymes to dose as I’m running 6, 9, or 25 hours into a run gets increasingly harder. I found that what works for me is having a pre-built list of the fuel options; and a spreadsheet where I quickly on my phone open it and tap a drop down list to mark what I’m eating, and it pulls in the counts from the library and tells me how many enzymes to take for that fuel (which I’ve already pre-calculated).

It’s useful in real-time for helping me dose the right amount of enzymes for the fuel that I need and am taking every 30 minutes throughout my run. It’s also useful for helping me stay on top of my goal amounts of calories and sodium to make sure I’m fueling enough of the right things (for running in general), which is something that can be hard to do the longer I run. (More about this method and a template for anyone who wants to track similarly here.)

The TL;DR point of this is: I don’t track things for fun. I track things if and when they’re functionally useful, and primarily that is in real-time medical decision making.

These methods may not make sense to you, and don’t have to.

It may not be a method that works for you, or you may not have the situation that I’m in (T1D, Graves, celiac, and EPI – fun!) that necessitates these, or you may not have the goals that I have (ultrarunning). That’s ok!

But don’t say that you “couldn’t” do something. You ‘couldn’t’ track what you consumed when you ran or you ‘couldn’t’ write down what you were eating or you ‘couldn’t’ take that many pills or you ‘couldn’t’ inject insulin or…

You could, if you needed to, and if you decided it was the way that you could and would be able to achieve your goals.

Looking Back Through 2022 (What You May Have Missed)

I ended up writing a post last year recapping 2021, in part because I felt like I had done hardly anything – which wasn’t true. In part, that feeling was based on my body having a number of things going on that I didn’t know about at the time. I figured those out in 2022, which made 2022 hard but also provided me with a sense of accomplishment as I tackled some of these new challenges.

For 2022, I have a very different feeling looking back on the entire year, which makes me so happy because it was night and day (different) compared to this time last year.

One major example? Exocrine Pancreatic Insufficiency.

I started taking enzymes (pancreatic enzyme replacement therapy, known as PERT) in early January. And they clearly worked, hooray!

I quickly realized that like insulin, PERT dosing needed to be based on the contents of my meals. I figured out how to effectively titrate for each meal and within a month or two was reliably dosing effectively with everything I was eating and drinking. And, I was writing and sharing my knowledge with others – you can see many of the posts I wrote collected at DIYPS.org/EPI.

I also designed and built an open source web calculator to help others figure out their lipase-to-fat and protease-to-protein ratios, so they can improve their dosing.
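
To give a sense of the kind of math behind such a calculator (an illustrative sketch, not the calculator’s actual code, and every number in it is a placeholder), the idea is to work backwards from meals you know were dosed well:

    # Estimate lipase-per-gram-of-fat and protease-per-gram-of-protein ratios
    # from meals that were covered well by a known number of pills.
    # The pill contents and meal numbers below are placeholders.
    LIPASE_PER_PILL = 25000   # units of lipase per (hypothetical) pill
    PROTEASE_PER_PILL = 1250  # units of protease per (hypothetical) pill

    meals = [
        # (grams_fat, grams_protein, pills_that_worked)
        (30, 20, 2),
        (45, 35, 3),
        (15, 10, 1),
    ]

    lipase_ratios = [(pills * LIPASE_PER_PILL) / fat for fat, protein, pills in meals]
    protease_ratios = [(pills * PROTEASE_PER_PILL) / protein for fat, protein, pills in meals]

    print(f"~{sum(lipase_ratios) / len(lipase_ratios):.0f} units of lipase per gram of fat")
    print(f"~{sum(protease_ratios) / len(protease_ratios):.0f} units of protease per gram of protein")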

I even published a peer-reviewed journal article about EPI – submitted within 4 months of confirming that I had it! You can read that paper here, with an analysis of glucose data from both before and after starting PERT. It’s a really neat example that I hope will pave the way for answering many questions we all have about how particular medications possibly affect glucose levels (instead of simply being warned that they “may cause hypoglycemia or hyperglycemia”, which is vague and unhelpful).

I also had my eyes opened to having another chronic disease that has very, very expensive medication with no generic medication option available (and OTCs may or may not work well). Here’s some of the math I did on the cost of living with EPI and diabetes (and celiac and Graves) for a year, in case you missed it.

Another challenge+success was running (again), but with a 6 week forced break (ha) because I massively broke a toe in July 2022.

That was physically painful and frustrating for delaying my ultramarathon training.

I had been successfully figuring out how to run and fuel with enzymes for EPI; I even built a DIY macronutrient tracker and shared a template so others can use it. I ran a 50k with a river crossing in early June and was on track to target my 100 mile run in early fall.

However, with the broken toe, I took the time off that I needed and carefully built back up, put a lot of planning into it, and made my attempt in late October instead.

I succeeded in running ~82 miles in ~25 hours, all in one go!

I am immensely proud of that run for so many reasons, some of which are general pride at the accomplishment and others are specific, including:

  • Doing something I didn’t think I could do which is running all day and all night without stopping
  • Doing this as a solo or “DIY” self-organized ultra
  • Eating every 30 minutes like clockwork, consuming enzymes (more than 92 pills!), which means 50 snacks consumed. No GI issues, either, which is remarkable even for an ultrarunner without EPI!
  • Generally figuring out all the plans and logistics needed to be able to handle such a run, especially when dealing with type 1 diabetes, celiac, EPI, and Graves
  • Not causing any injuries, and in fact recovering remarkably fast which shows how effective my training and ‘race’ strategy were.

On top of this all, I achieved my biggest-ever running year, with more than 1,333 miles run this year. This is 300+ more than my previous best from last year which was the first time I crossed 1,000 miles in a year.

Professionally, I did quite a lot of miscellaneous writing, research, and other activities.

I spent a lot of time doing research. I also peer reviewed more than 24 papers for academic journals. I was asked to join an editorial board for a journal. I served on 2 grant review committees/programs.

I also wrote a ton.*

*By “a ton,” I mean way more than the past couple of years combined. Some of that is due to getting some energy back once I fixed the missing enzyme and mis-adjusted hormone levels in my body! I’m up to 40+ blog posts this year.

And personally, the punches felt like they kept coming, because this year we also found out that I have Graves’ disease, taking my chronic disease count up to 4. Argh. (T1D, celiac, EPI, and now Graves’, for those curious about my list.)

My experience with Graves’ has included symptoms of subclinical hyperthyroidism (although my T3 and T4 are in range), and I have chosen to try thyroid medication in order to manage the really bothersome Graves’-related eye symptoms. That’s been an ongoing process and the symptoms of this have been up and down a number of times as I went on medication, reduced medication levels, etc.

What I’ve learned from my experience with both EPI and Graves’ in the same year is that there are some huge gaps in medical knowledge around how these things actually work and how to use real-world data (whether patient-recorded data or wearable-tracked data) to help with diagnosis, treatment (including medication titration), etc. So the upside to this is I have quite a few new projects and articles coming to fruition to help tackle some of the gaps that I fell into or spotted this year.

And that’s why I’m feeling optimistic, and like I accomplished quite a bit more in 2022 than in 2021. Some of it is the satisfaction of knowing the core two reasons why the previous year felt so physically bad; hopefully no more unsolved mysteries or additional chronic diseases will pop up in the next few years. Yet some of it is also the satisfaction of solving problems and creating solutions that I’m uniquely poised, due to my past experiences and skillsets, to solve. That feels good, and it feels good as always to get to channel my experiences and expertise to try to create solutions with words or code or research to help other people.

Replacing Embedded Tweets With Images

If you’re like me, you may have been thrilled when (back in the day) it became possible to embed public social media posts such as tweets on websites and blogs. It enabled people who read here to pop over to related Twitter discussions or see images more easily.

However, with how things have been progressing (PS – you can find me @DanaMLewis@med-mastodon.com as well), it’s increasingly possible that a social media account could get suspended/banned/taken down arbitrarily for things that are retrospectively against new policies. It occurred to me that one of the downsides to this is the impact it would have on embedded post content here on my blog, so I started thinking through how I could replace the live embedded links with screenshots of the content.

There’s no automatic way to do this, but the most efficient method that I’ve decided on has been the following:

1 ) Export an XML file of your blog/site content.

If you use WordPress, there’s an “Export” option under “Tools”. You can export all content; the exact selection doesn’t matter for this purpose.

2 ) Run a script (that I wrote with the help of ChatGPT).

I called my script “embedded-links.sh” and it searches the XML file for URLs wrapped between “[embed]” and “[/embed]” shortcode tags and generates a CSV file (a rough sketch of the same idea is shown below). Opening the CSV with Excel, I can then see the list of every embedded tweet throughout the site.

I originally was going to have the script pair each embedded link (Twitter URL) with the post it was found in, to make it easier to go swap them out with images, but realized I didn’t need this.

(See no. 4 for more on why not and the alternative).
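
My actual script was a shell script, but here’s a minimal Python sketch of the same idea (the file names are placeholders), in case it’s easier to follow:

    import csv
    import re

    # Read the WordPress export (placeholder file name)
    with open("wordpress-export.xml", encoding="utf-8") as f:
        content = f.read()

    # WordPress embeds look like: [embed]https://twitter.com/...[/embed]
    urls = re.findall(r"\[embed\](.*?)\[/embed\]", content, flags=re.DOTALL)

    # Write one URL per row so the list is easy to work through in Excel
    with open("embedded-links.csv", "w", newline="", encoding="utf-8") as out:
        writer = csv.writer(out)
        writer.writerow(["url"])
        for url in urls:
            writer.writerow([url.strip()])

    print(f"Found {len(urls)} embedded links")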

3 ) I created screenshots with the URLs in my file.

I went through and pasted each URL (only about 60, thankfully) into the example HTML code at https://htmlcsstoimage.com/examples/twitter-tweet-screenshot and then clicked “re-generate image” in the top right corner under the image tab. Then, I right-clicked the image, chose “Save As”, and saved it to a folder. I made sure to rename each image file descriptively as I saved it; this is handy for the next step.

I did hit the free demo limit on that tool after about 30 images, and I had 60, so after about 20 minutes I went back and checked and was able to do my second batch of tweets.

(There are several types of these screenshot generators you could use, this one happened to be quick and easy for my use case.)

4 ) I then opened up my blog and grabbed the first link and pasted it into the search box on the Posts page.

It pulled up the list of blog posts that had that URL.

I opened the blog post, scrolled to the embedded tweet, deleted it, and replaced it with the image instead.

(Remember to write alt text for your image during this step!)

Remember to ‘update’/save your post, too, after you input the image.

It took maybe half an hour to do the final step, and maybe 2-3 hours total including all the time I spent working on the script in step 2. So if you have a similar number of links (~60 or so), I would expect this to take ~1-2 focused hours.

Replacing embedded web content with images by Dana M. Lewis

Understanding the Difference Between Open Source and DIY in Diabetes

There’s been a lot of excitement (yay!) about the results of the CREATE trial being published in NEJM, followed by the presentation of the continuation results at EASD. This has generated a lot of blog posts, news articles, and discussion about what was studied and what the implications are.

One area that I’ve noticed is frequently misunderstood is how “open source” and “DIY” are different.

Open source means that the source code is openly available to view. There are different licenses with open source; most allow you to also take, reuse, and modify the code however you like. Some “copy-left” licenses require commercial entities to open-source any software they build using such code. Most companies can and do use open source code, too, although in healthcare most algorithms and other code related to FDA-regulated activity is proprietary. Most open source licenses allow free individual use.

For example, OpenAPS is open source. You can find the core code of the algorithm here, hosted on Github, and read every line of code. You can take it, copy it, use it as-is or modify it however you like, because the MIT license we put on the code says you can!

As an individual, you can choose to use the open source code to “DIY” (do-it-yourself) an automated insulin delivery system. You’re DIY-ing, meaning you’re building it yourself rather than buying it or a service from a company.

In other words, you can DIY with open source. But open source and DIY are not the same thing!

Open source can be, and usually is, used commercially in most industries. In healthcare and in diabetes specifically, there are only a few examples of this. For OpenAPS, as you can read in our plain language reference design, we wanted companies to use our code as well as individuals (who would DIY with it). There’s at least one commercial company now using ideas from the OpenAPS codebase and our safety design as a safety layer around their ML algorithm, to make sure that its insulin dosing decisions are checked against our safety design. How cool!

However, they’re a company, and they have wrapped up their combination of proprietary software and the open source software they have implemented, gotten a CE mark (European equivalent of FDA approval), and commercialized and sold their AID product to people with diabetes in Europe. So, those customers/users/people with diabetes are benefitting from open source, although they are not DIY-ing their AID.

Outside of healthcare, open source is used far more pervasively. Have you ever used Zoom? Zoom uses open source; you then use Zoom, although not in a DIY way. Same with Firefox, the browser. Ever heard of Adobe? They use open source. Facebook. Google. IBM. Intel. LinkedIn. Microsoft. Netflix. Oracle. Samsung. Twitter. Nearly every product or service you use is built with, depends on, or contains open source components. Often, open source is used by companies to then provide products to users – but not always.

So, to more easily understand how to talk about open source vs DIY:

  • The CREATE trial used a version of open source software and algorithm (the OpenAPS algorithm inside a modified version of the AndroidAPS application) in the study.
  • The study was NOT on “DIY” automated insulin delivery; the AID system was handed/provided to participants in the study. There was no DIY component in the study, although the same software is used both in the study and in the real world community by those who do DIY it. Instead, the point of the trial was to study the safety and efficacy of this version of open source AID.
  • Open source is not the same as DIY.
  • OpenAPS is open source and can be used by anyone – companies that want to commercialize, or individuals who want to DIY. For more information about our vision for this, check out the OpenAPS plain language reference design.
Venn diagram showing a small overlap between a bigger open source circle and a smaller DIY circle. An arrow points to the overlapping section, along with text of "OpenAPS". Below it text reads: "OpenAPS is open source and can be used DIY. DIY in diabetes often uses open source, but not always. Not all open source is used DIY."

More Thoughts And Strategies For Managing Wildfire Smoke And Problematic Air Quality

In 2020 we had a bad wildfire smoke year with days of record-high heat and poor air quality. It was especially problematic in the greater Seattle area and Pacific Northwest (PNW) where most people don’t have air conditioning. I previously wrote about some of our strategies here, such as box fans with furnace filters; additional air purifiers; and n95 masks. All of those are strategies we have continued to use in the following years, and while our big HEPA air purifier felt expensive at the time, it was a good investment and has definitely done what it needs to do.

This year, we got to September 2022 before we had bad wildfire smoke. I had been crossing my fingers and hoping we’d skip it entirely, but nope. Thankfully, we didn’t have the record heat and the smoke at the same time, but we did end up having smoke blowing in from other states, and then a local wildfire 30-40 miles away that has made things tricky on and off for several days at a time…several different times.

I’ve been training for an ultramarathon, so it’s been frustrating to have to look not only at the weather but also the air quality to determine how/when to run. I don’t necessarily have a medical condition that puts me at higher risk from poor air quality (that I know of), but I think being allergic to a lot of environmental things (like dust, mold, trees, grass, etc.) correlates with my being more sensitive to poor air quality than most people I know.

Tired of wildfire smoke making it hard to exercise easily outdoors

Everyone’s sensitivity is different, but I’ve been figuring out, thanks to multiple stretches of up-and-down AQI, that my threshold for masking outside is about 50 AQI. If it gets to be around 100 or above, I don’t want to be walking or running outside, even with a mask. And as it gets above 150 outside, it becomes yucky inside for me, too, even with the doors and windows closed, the vents on our windows taped shut, and air purifiers and box fans, etc., running. My throat gets scratchy, my eyes hurt, and my chest starts to feel yucky, too.

It got so bad last week that I took a small, portable mini air purifier that I had bought to help mitigate COVID-19 exposure on planes, and stuck it in front of my face. It noticeably made my throat stop feeling scratchy, so it was clearly cleaning the air to a degree. On the worst days, I’ve been sitting at my desk working with the stream of air blowing in my face, and I’ve also been leaving it turned on and pointed at my face overnight.

This is kind of a subjective, arbitrary “this helps”, but today we ended up being able to quantify how much it helps to have our big air purifier, box fans with furnace filters, smaller air purifier, and the mini air purifier. Scott ordered a small, portable PM2.5 / PM10 monitor to be able to see what the PM2.5 and PM10 levels are in that exact spot, as opposed to relying on IQAir or similar locally reported sensors that only tell us generally how bad things are in our area.

It also turned out to be useful for checking how effective each of our things are.

It turns out that our box fans with furnace filters taped to the back are most effective at fan speed “1” (they all go up to 3), probably because putting them up to 3 tends to stir up dust from the floor (despite robot vacuuming multiple times a day) and increase PM10 levels. A box fan with a 2” MERV 10 filter taped to the back doesn’t affect the already-low indoor PM2.5 levels; on fan speed 1, the PM10 gets reduced to zero as long as the fan isn’t pointed at the carpet and stirring up dust. So while they don’t help with smoke, these fans are good for increasing air circulation (so it feels cooler) and getting rid of the dust and cat hair that I’m allergic to.

The big HEPA air purifier we bought has a connected app that tells us the PM2.5 levels, and our portable PM2.5 monitor confirms that it’s putting out air with a PM2.5 level of 0. Yay! This sits in our kitchen by our front door, so it helps clean the smoky hallway air coming inside.

A cat sticking its face toward the phone camera. Behind the cat, a portable PM 2.5 / PM 10 air monitor sits on the floor by a door to measure incoming air.

The hallway air is TERRIBLE. The hallway opens directly to the parking garage, and is usually about as smoky as the outdoor air: it only has a single A/C duct for the whole building, which isn’t always running. The stairwell leading outside is a little cleaner than the hallway and the outside air. (So I’m glad we have our best air purifier situated to take on the air coming in when we open the hallway door.) We won’t be spending time exercising in the hallways, either; with that level of air quality, you might as well be outside anyway, because we’d need to be masked either way.

The other purifier we have is a smaller purifier. I have it sitting on the counter in our bathroom, because the air exchange to outside is really reduced compared to what it should be (and the building management doesn’t seem very interested in trying to figure out how to fix it). That purifier gets PM2.5 down from 4 to 1 ug/m^3, or about a 4x improvement! Which is pretty good, although not quite as good as the big purifier in our kitchen/entry. Since it’s small enough to sit on a desk or bedside table and blow clean air at me where I’m working or sleeping, we decided to order 2 more of these smaller purifiers for my office and our bedroom, since the box fans take care of PM10 but not the PM2.5.

PM2.5 and PM10 readings from the portable monitor, from on top of the air purifier; next to my office; next to a box fan with filter; in the hallway; in the stairwell; and outside. This is roughly in order of best (inside over the air purifier) to worst (hallway and outside; the stairwell is slightly better than the hallway).

Since the portable air quality monitor would be hard to fit inside his mask or his mouth, and impossible to read there, Scott also held the PM2.5/10 monitor up to the exhaust valve on his n95 mask while outside (note: not all of our n95 masks are valved, but the valved ones are good for wildfire smoke and for managing temperature levels inside your mask when exercising), and the average PM2.5 level there was about half that of the ambient air. Since about half the time he’s breathing in (and the meter is sucking in outside air) and the other half of the time he’s breathing out (so it’s getting the mask-filtered air he inhaled and then exhaled), this suggests that the mask is doing its job of reducing the PM2.5 levels he’s breathing inside the mask to very low levels (probably about the same as our very clean indoor air).
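
Here’s the back-of-the-envelope arithmetic behind that interpretation, using rounded placeholder numbers rather than the exact readings:

    # If the monitor at the exhaust valve samples ambient air about half the
    # time (while inhaling) and mask-filtered exhaled air the other half
    # (while exhaling), then roughly:
    #   measured ~= 0.5 * ambient + 0.5 * inside_mask
    # which rearranges to:
    #   inside_mask ~= 2 * measured - ambient
    ambient = 60.0            # hypothetical outdoor PM2.5 (ug/m^3)
    measured_at_valve = 30.0  # "about half of ambient", per the observation

    inside_mask = 2 * measured_at_valve - ambient
    print(f"Estimated PM2.5 inside the mask: ~{inside_mask:.0f} ug/m^3")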

He also held it over the small air purifier that I’ve been keeping my face over. It, too, reduces PM2.5 down to about 2 – so not as good as the bigger purifiers, but a ~2x improvement over the ~4 in the ambient air that I would otherwise be breathing.

TLDR:

  • Box fans with MERV 10 filters are great for allergens and PM10, but don’t noticeably reduce the PM2.5. Higher MERV filters might do better, but are very expensive, and probably less cost-effective than a purifier with a proper HEPA filter.
  • Small and big air purifiers work well for reducing PM2.5.
  • N95 masks are effective at drastically reducing the PM2.5 you’d be exposed to outside.
  • If you’re like me and are bothered inside when the air quality outside is bad, additional air purifiers (small or big) might help improve your quality of life during these smoky days that we are increasingly getting every year.

Continuation Results On 48 Weeks of Use Of Open Source Automated Insulin Delivery From the CREATE Trial: Safety And Efficacy Data

In addition to the primary endpoint results from the CREATE trial, which you can read more about in detail here or as published in the New England Journal of Medicine, there was also a continuation phase study of the CREATE trial. This meant that all participants from the CREATE trial, including those who were randomized to the automated insulin delivery (AID) arm and those who were randomized to sensor-augmented insulin pump therapy (SAPT, which means just a pump and CGM, no algorithm), had the option to continue for another 24 weeks using the open source AID system.

These results were presented by Dr. Mercedes J. Burnside at #EASD2022, and I’ve summarized her presentation and the results below on behalf of the CREATE study team.

What is the “continuation phase”?

The CREATE trial was a multi-site, open-labeled, randomized, parallel-group, 24-week superiority trial evaluating the efficacy and safety of an open-source AID system using the OpenAPS algorithm in a modified version of AndroidAPS. Our study found that across children and adults, the percentage of time that the glucose level was in the target range of 3.9-10mmol/L [70-180mg/dL] was 14 percentage points higher among those who used the open-source AID system (95% confidence interval [CI], 9.2 to 18.8; P<0.001) compared to those who used sensor augmented pump therapy; a difference that corresponds to 3 hours 21 minutes more time spent in target range per day. The system did not contribute to any additional hypoglycemia. Glycemic improvements were evident within the first week and were maintained over the 24-week trial. This illustrates that all people with T1D, irrespective of their level of engagement with diabetes self-care and/or previous glycemic outcomes, stand to benefit from AID. This initial study concluded that open-source AID using the OpenAPS algorithm within a modified version of AndroidAPS, a widely used open-source AID solution, is efficacious and safe. These results were from the first 24-week phase when the two groups were randomized into SAPT and AID, accordingly.

The second 24-week phase is known as the “continuation phase” of the study.

There were 52 participants who were randomized into the SAPT group that chose to continue in the study and used AID for the 24 week continuation phase. We refer to those as the “SAPT-AID” group. There were 42 participants initially randomized into AID who continued to use AID for another 24 weeks (the AID-AID group).

One slight change to the continuation phase was that those in the SAPT-AID used a different insulin pump than the one used in the primary phase of the study (and 18/42 AID-AID participants also switched to this different pump during the continuation phase), but it was a similar Bluetooth-enabled pump that was interoperable with the AID system (app/algorithm) and CGM used in the primary outcome phase.

All 42 participants in AID-AID completed the continuation phase; 6 participants (out of 52) in the SAPT-AID group withdrew. One withdrew due to infusion site issues, three due to pump issues, and two because they preferred SAPT.

What are the results from the continuation phase?

In the continuation phase, those in the SAPT-AID group saw a change in time in range (TIR) from 55±16% to 69±11% when they used AID. In the SAPT-AID group, the percentage of participants who were able to achieve the target goals of TIR >70% and time below range (TBR) <4% increased from 11% of participants during SAPT use to 49% during the 24 weeks of AID use in the continuation phase. As in the primary phase for AID-AID participants, the SAPT-AID participants saw the greatest treatment effect overnight, with a TIR difference of 20.37% (95% CI, 17.68 to 23.07; p<0.001), and 9.21% during the day (95% CI, 7.44 to 10.98; p<0.001) during the continuation phase with open source AID.

Those in the AID-AID group, meaning those who continued for a second 24 week period using AID, saw similar TIR outcomes. Prior to AID use at the start of the study, TIR for that group was 61±14% and increased to 71±12% at the end of the primary outcome phase; after the next 6 months of the continuation phase, TIR was maintained at 70±12%. In this AID-AID group, the percentage of participants achieving target goals of TIR >70% and TBR <4% was 52% of participants in the first 6 months of AID use and 45% during the continuation phase. Similarly to the primary outcomes phase, in the continuation phase there was also no treatment effect by age interaction (p=0.39).

The TIR outcomes between the two groups (SAPT-AID and AID-AID) were very similar after each group had used AID for 24 weeks (the SAPT-AID group using AID for 24 weeks during the continuation phase, and AID-AID using AID for 24 weeks during the initial RCT phase). The adjusted difference in TIR between these groups was 1% (95% CI, -4 to 6; p=0.67). There were no glycemic outcome differences between those using the two different study pumps (n=69, which was the SAPT-AID user group and the 18 AID-AID participants who switched for continuation; and n=25, from the AID-AID group who elected to continue on the pump they used in the primary outcomes phase).

In the initial primary results (the first 24 weeks of the trial, comparing the AID group to the SAPT group), there was a 14 percentage point difference between the groups. In the continuation phase, everyone used AID, and the adjusted mean difference in TIR between AID use and the initial SAPT results was a similar 12.10 percentage points (SD 8.40; p<0.001).

Similar to the primary phase, there was no DKA or severe hypoglycemia. Long-term use (over 48 weeks, representing 69 person-years) did not detect any rare severe adverse events.

CREATE results from the full 48 weeks on open source AID with both SAPT (control) and AID (intervention) groups plotted on the graph.

Conclusion of the continuation study from the CREATE trial

In conclusion, the continuation study from the CREATE trial found that open-source AID using the OpenAPS algorithm within a modified version of AndroidAPS is efficacious and safe with various hardware (pumps), and demonstrates sustained glycaemic improvements without additional safety concerns.

Key points to takeaway:

  • Over 48 weeks total of the study (6 months or 24 weeks in the primary phase; 6 months/24 weeks in the continuation phase), there were 64 person-years of use of open source AID in the study, compared to 59 person-years of use of sensor-augmented pump therapy.
  • A variety of pump hardware options were used in the primary phase of the study among the SAPT group, due to hardware (pump) availability limitations. Different pumps were also used in the SAPT-AID group during the AID continuation phase, compared to the pumps available in the AID-AID group throughout both phases of the trial. (Also, 18/42 of the AID-AID participants chose to switch to the other pump type during the continuation phase.)
  • The similar TIR results (14 percentage points difference in primary and 12 percentage points difference in continuation phase between AID and SAPT groups) shows durability of the open source AID and algorithm used, regardless of pump hardware.
  • The SAPT-AID group achieved similar TIR results at the end of their first 6 months of use of AID when compared to the AID-AID group at both their initial 6 months use and their total 12 months/48 weeks of use at the end of the continuation phase.
  • The safety data showed no DKA or severe hypoglycemia in either the primary phase or the continuation phases.
  • Glycemic improvements from this version of open source AID (the OpenAPS algorithm in a modified version of AndroidAPS) are not only immediate but also sustained, and do not increase safety concerns.
CREATE Trial Continuation Results were presented at #EASD2022 on 48 weeks of use of open source AID

Reasons to “DIY” or Self-Organize Your Own Solo Ultramarathon or Ultra Run

I’ve now run two ultramarathons (both happened to be 50k races, with a race report for the second race here), and was planning my third ultrarace. I had my eye on the 50 mile (50M) version of the 50k I ran last year. It’s on a course I adore – a 6 foot wide crushed gravel trail that’s slightly uphill (about 1,000 feet) for the first 30 miles and then downhill at 2% grade for the remaining 20 miles. It happens to be close to home (hour and a half drive to the start), which helps for logistics.

I started training for the 50M weeks after my 50k this year, including talking my husband into taking me out to run some of the segments along the first 25 miles of the course. I’ve done the back half of the course several times through training and racing the 50k, and I wanted to check out each of the earlier segments to get a sense of what trail bathrooms existed on the course, make notes about milestones to watch for at various distances, etc.

After the first training run out there, when I started talking goal paces to get through the first and main cutoff at mile 30 (cutoffs got progressively easier from there, and even walking very slowly you could finish if you wanted to), my husband started to suggest that I should just run the course some other time on my own, so I didn’t have to worry about the cutoffs. I told him I didn’t want to do that. The cutoffs are a good incentive to help me push myself, and they’re worth the stress they cause in order to try to perform my best. (My target pace would get me through a comfortable 15 minutes before the cutoff, and I could dial up the effort if needed to make it.) However, he suggested it another time and pointed out that even when running an organized race, I tend to run self-supported, so I don’t really benefit as much from running in a race. I protested and talked again about the camaraderie of running alongside everyone else, the fact that there were aid stations, the excellent search and rescue support, the t-shirt, the medal, the pictures! Then, out loud, I realized that because I would be running at the back of the pack, I would miss the pictures at mile 25, because the photographer heads to the finish before I would get there. And they stop finish line pictures 3 hours before the end of the race. (Why, I don’t know!) And so I’d miss those photos too. And last year, I didn’t partake in the 50k’s staffed aid stations because I couldn’t eat any of their food and didn’t want any extra COVID exposure. Instead, my husband crewed me and refilled my hydration at two points on the course. The un-staffed aid stations didn’t have the plethora of supplies promised, and one race report from someone near the front of the pack said they were low on water! So it was a good thing I didn’t rely on the aid stations. I didn’t wear the t-shirt last year, because it wasn’t a tech tee. Medals aren’t that exciting. So…why was I running the organized race?

My only remaining reasons were good search and rescue (still true) and the motivation of signing up for and committing to running on that date. It’s a commitment device. And my husband then smashed that reason, too, by reminding me that the only commitment device I typically need is a spreadsheet. If I decide I’m going to work toward a goal, I do. Signing up doesn’t make a difference.

And to be fair, he crews me whether it’s an organized race or not! So to him, it makes no difference whether I’m running an organized race or a self-organized long ultra.

And so I decided to give it some thought. Where would I run, if I could run anywhere in an hour’s distance from home? Do the same 50 mile course? Was that course worth it? Or was there somewhere closer to home where I could run that would be easier for my husband to crew?

He suggested running on our “home” trails, which is a network of hundreds of miles of paved trail that’s a short walk away. I immediately scoffed, then took the suggestion seriously. If I ran “from home”, he could crew from home and either drive out or e-bike out or walk out to bring me supplies along my route. If the park trail bathrooms ended up getting locked, I could always use the bathroom at home (although not ideal in terms of motivating myself to move quickly and get back out on the trail). I’d have a bigger variety of fueling options, since he could microwave and bring me out more options than if it had to be shelf-stable.

The list of benefits of potentially doing my own DIY or self-organized ultra grew.

(And then, I broke my toe. Argh. This further solidified my willingness to do a DIY ultra, because I could train up until when I was ready, and then run my distance, without having to choose between a non-refundable signup and not running or risking injury from running before I was ready.)

Eventually, my plans evolved (in part due to my broken toe). I was originally going to DIY a 50M or 100k (62M) over Labor Day weekend, recover, then re-train up and run a DIY 100 mile (100M) in late October or early November. When I broke my toe, I decided to scratch the “test” 50M/100k and just train and run the 100M, since that was my ultimate goal distance for the year.

Here are the pros of running a DIY ultra or a “self-organized” ultra, rather than an organized race with other people:

  • For me specifically, I have better trail options and route options. I can run a 95% flat course on paved, wide, safe trails through my local community.
  • These are so local that they are only a few minutes walk from my door.
  • The location means it’s easy for Scott to reach me at any point. He can walk out and bring me water and fuel and any needed supplies when I complete a loop every 4 or so hours. If needed, he could also e-bike out to bring me anything I need if I ran out or had a more urgent need for supplies. He can also drive out and access the course every half mile or mile for most of my planned route.

    This also means I have more fuel options that I can prepare and have for Scott to bring out. This is awesome because I can have him warm up ¼ of a ham and cheese quesadilla, or a corn dog, or sweet potato tots, or any other fuel options that I wouldn’t be able to use if I had to rely on pre-packed shelf stable options for a 30 hour race.

    (Note that even if I did an organized race, I most likely still wouldn’t benefit from aid station food. In part, that’s because I have celiac and have to have everything gluten free. I also have to watch cross contamination, so a bowl of any kind of food that’s not individually packaged is likely contaminated by gluten. COVID has helped reduce this, but not completely. Plus, I have diabetes, so I need to be roughly aware of the amount of carbs I’m eating to decide whether or not to dose insulin for them, given what is happening to my blood sugar at the time. And, I have exocrine pancreatic insufficiency (EPI), which means I have to dose enzymes for everything I eat. Grazing is hard with EPI; it’s easier to dose and eat the amount that matches my enzymes, so with pre-packaged snacks where I know the carb, fat, and protein counts, I know what insulin and what enzymes I need for each piece of “fuel”. Guessing carb counts or enzyme counts in the middle of the night while running long distances is likely not going to be very effective or fun. So, as a result of all that, pre-planned food is the way to go for me. Related: you can read about my approach for tracking fuel on the go with a spreadsheet and pre-planned fuel library here.)

  • There is regular public bathroom access along my chosen route.
  • I’ve designed out and back laps and loops that have me coming back by my start (remember, only a few minutes walk from home) and that make it so I am passing the bathrooms multiple times on a regular basis in case I need them.

    These laps and loops also make for mentally smaller chunks to tackle. Instead of 100 miles, I’ve got a ~24 mile out and back, a 13 mile loop, a 16 mile out and back, a repeat of the 13 mile loop, another repeat of the 16 mile out and back followed by the 13 mile loop one more time, and then a quick 5 miles total out and back (so 2.5 miles out and 2.5 back), which adds up to 100. These are also all routes I know well, so finding waypoints to focus on and knowing how far I’ve gone is a huge benefit for mentally breaking down the distance into something my brain and body “know”.

  • There are no cutoffs or pace requirements. If I slow down to a 20 minute mile (or slower)…well hey, it’s faster than I was walking with my hands-free knee crutch a few months ago! (I rocked anywhere from a 45 minute mile to a 25 minute mile).

    There’s no pressure to go faster, which means I won’t have pressure to push my effort, especially at the start. Hopefully, that means I can maintain an “easy”, even effort throughout and maybe cause less stress to my body’s hormone systems than I would otherwise.

    The only pressure I have will be the pressure I put on myself to finish (eventually), which could be 26 hours or could be 30 hours or could be 36 hours or even slower… basically I have to finish before my husband gives up on coming out to refuel me!

  • And, once I finish, it’ll be ‘fast’ to get home, shower, refuel, and be done. This is in comparison to a race where I’d have an hour+ drive to get home. I’ll need to walk home, which might take me much longer than usual after I’ve ambulated for 100 miles…but it should hopefully still be shorter than an hour!
  • Finally, the major benefit is flexibility. I can set my race date for a weekend when I’ve trained enough to do it. I can move it around a week or two based on the weather (if it’s too cold or too rainy). I can even decide to move it to the spring (although I’d really love to do it this year).

Here are some of the cons of running a DIY ultra or a “self-organized” ultra, rather than an organized race with other people:

  • Theoretically, it would be easier to stop because I am so close to home. I haven’t committed money or drive time or dragged my husband to far away places to wait for me to finish my run. (However, I’m pretty stubborn so in my case I think this is less of an issue than it might be for others?)
  • Yet, out and back loops and the route I’ve chosen could get monotonous. I chose these loops and the route because I know the distance and almost every tenth mile of the route super well. The first 6 miles of all the laps/loops are the same, so I’ll run those same 6 miles repeated 7 times over the course of the run.
  • I won’t have the camaraderie and knowledge that other people are out here tackling the same distance. I’m a back of the pack runner (and celebrate being places from last the way most people celebrate places from first!) and often don’t see anyone running after the start…yet there’s comfort in knowing I’m one of dozens or hundreds out here covering the same course on the same day with the same goal. I do think I’ll miss this part.
  • There is no one to cheer for me. There’s no aid station volunteers, fellow runners, or anyone (other than my amazing husband who will crew me) to cheer for me and encourage me and tell me I’m moving well.
  • There’s no medal (not a big deal), t-shirt (not a big deal), or official finishing time (also not a big deal for me).
  • There’s no cutoffs or pace requirements to motivate me to keep pushing when things get hard.

All in all, the benefits pretty clearly outweigh the downsides – for me. Again, I’m a back of the pack super slow runner (in fact, I typically run 30 seconds and walk 60 seconds throughout my whole race, consistently) who can’t eat aid station food (because celiac/EPI makes it complicated), coming off of a broken toe injury (which messed up my training and racing plans), so my pros/cons lean pretty heavily toward making a DIY/self-organized solo ultra run an obvious choice. Others might have a different pro/con list based on the above variables and their own situations, but hopefully this helps someone else think through some of the ways they might decide between organized and un-organized ultramarathon efforts!

Reasons to "DIY" or self-organize your own ultramarathon run

NEJM Publishes RCT On Open Source Automated Insulin Delivery (OpenAPS Algorithm in AndroidAPS in the CREATE TRIAL)

First page of NEJM article on Open Source AID in T1D, which contains the text of the abstract.

I’m thrilled to share that the results of the first RCT on open source automated insulin delivery (AID) are now published in a peer-reviewed medical journal (New England Journal of Medicine, known as NEJM). You can find it at NEJM here, or view an author copy here. You can also see a Twitter post here, if you are interested in sharing the study with your networks.

(I previously wrote a plain language summary of the study results after they were presented at ADA Scientific Sessions in June. You can read the plain language summary here, if you haven’t already seen it.)

I wanted to highlight a few key takeaway messages from the study:

  • The CREATE study found that across children and adults, the percentage of time that the glucose level was in the target range of 3.9-10mmol/L [70-180mg/dL] was 14.0 percentage points higher among those who used the open-source AID system compared to those who used sensor augmented pump therapy. This difference reflects 3 hours 21 minutes more time spent in target range per day!
  • For children AID users, they spent 3 hours 1 minute more time in target range daily (95% CI, 1h 22m to 4h 41m).
  • For adult AID users, they spent 3 hours 41 minutes more time in target range daily (95% CI, 2h 4m to 5h 18m).
  • Glycemic improvements were evident within the first week and were maintained over the 24-week trial. Meaning: things got better quickly and stayed so through the entire 24-week time period of the trial!
  • The CREATE study also found that the greatest improvements in time in range (TIR) were seen in participants with the lowest TIR at baseline. This means one major finding of the CREATE study is that all people with T1D, irrespective of their level of engagement with diabetes self-care and/or previous glycemic outcomes, stand to benefit from AID. There was also no age effect observed in the trial, meaning that the results of the CREATE Trial demonstrated that open-source AID is safe and effective in children and adults with type 1 diabetes.

I’d also like to highlight some meta aspects of this trial and the significance of these results being published in NEJM.

The algorithm (open source, from OpenAPS) used in the trial, as well as the open source app (AndroidAPS) used to automate insulin delivery, were built by people with diabetes and their loved ones. The algorithm/initial AID work was made open source so other people with diabetes could use it if they chose to, but also so that researchers and clinicians could research it, learn from it, use it, etc. Speaking on behalf of Scott (Leibrand) who worked with me endlessly to iterate upon the algorithm and then also Ben West whose work was critical in communicating with insulin pumps and putting the pieces together into the first open source “closed loop” automated insulin delivery system: we all wanted this to be open source for many reasons. You’ll see some of those reasons listed at the bottom of the plain language OpenAPS “reference design” we shared with the world in February 2015. And it is exceptionally thrilling to see it go from n=1 (me, as the first user) to thousands worldwide using it and other open source AID systems over the years, and be studied further in the “gold standard” setting of an RCT to validate the real-world outcomes that people with diabetes have experienced with open source AID.

But these results are not new to those of us using these systems. These everyday results are WHY we use these systems and continue to choose them each day. This study highlights just a fraction of the benefits people with diabetes experience with AID. Over the years, I’ve heard all of the following reasons why people have chosen to use open source AID:

  • It’s peaceful and safer sleep with less fear of dying.
  • It’s the ability to imagine a future where they live to see their children grow up.
  • It’s the ability to manage glucose levels more effectively so they can more easily plan for or manage the process of having children.
  • It’s less time spent doing physical diabetes tasks throughout the days, weeks, and years.
  • It’s less time spent thinking about diabetes, diabetes-related short-term tasks, and the long-term aspects of living with diabetes.

All of this would not be possible without hundreds of volunteer contributors and developers who iterated upon the algorithm; adapted the concept into different formats (e.g. Milos Kozak’s work to develop AndroidAPS using the OpenAPS algorithm); wrote documentation; troubleshot and tested with different pumps, CGMs, hardware, phones, software, timezones, etc; helped others interested in using these systems; etc. There are many unsung heroes among this community of people with diabetes (and you can hear more of their stories and other milestones in the open source diabetes community in a previous presentation I gave here).

There are thousands of hours of work behind this open source technology which led to the trial which led to these results and this publication. Both the results and the fact of its publication in the NEJM are meaningful. This is technology developed by people with diabetes (and their loved ones) for people with diabetes, which more people will now learn is an option; it will fuel additional conversations with healthcare providers who support people with diabetes; and it will likely spur additional research and energy in the ongoing development of diabetes technologies.

From developers, to community contributors and community members, to the study team and staff who made this trial happen, to the participants in the trial, and to the peer reviewers and editor(s) who reviewed and recommended accepting the now-published article in the New England Journal of Medicine:

Thank you.