Functional Self-Tracking Is the Only Self-Tracking I Do

“I could never do that,” you say.

And I’ve heard it before.

Eating gluten-free for the rest of your life because you were diagnosed with celiac disease? I’ve heard that response (“I could never do that”) for going on 14 years.

Inject yourself with insulin or fingerstick test your blood glucose 14 times a day? Wear an insulin pump on your body 24/7/365? Wear a CGM on your body 24/7/365?

Yeah, I’ve heard you can’t do that, either. (For 20 years and counting.) Which means I and the other people living with the situations that necessitate these behaviors are…doing this for fun?

We’re not.

More recently, I’ve heard this type of comment come up about tracking what I’m eating, and in particular, tracking what I’m eating when I’m running. I definitely don’t do that for fun.

Actually, I have a 20+ year history of hating tracking things. When I was diagnosed with type 1 diabetes, I was given a physical log book and asked to write down my blood glucose numbers.

“Why?” I asked. “They’re stored in the meter.”

The answer was because supposedly the medical team was going to review them.

And they did.

And it was useless.

“Why were you high on February 22, 2003?”

Whether we were asking this question in March of 2003 or January of 2023 (almost 20 years later), the answer would be the same: I have no idea.

BG data, by itself, is like a single data point for a pilot. It’s useless without the contextual stream of data as well as other metrics (in the diabetes case, things like what was eaten, what activity happened, what my schedule was before this point, and all insulin dosed potentially in the last 12-24h).

So you wouldn’t be surprised to find out that I stopped tracking. I didn’t stop testing my blood glucose levels – in fact, I tested upwards of 14 times a day when I was in high school, because the real-time information was helpful. Retrospectively? Nope.

I didn’t start “tracking” things again (for diabetes) until late 2013, when we realized that I could get my CGM data off the device and into the laptop beside my bed, dragging the CGM data into a CSV file in Dropbox and sending it to the cloud so an app called “Pushover” would make a louder and different alarm on my phone to wake me up to overnight hypoglycemia. The only reason I added any manual “tracking” to this system was because we realized we could create an algorithm to USE the information I gave it (about what I was eating and the insulin I was taking) combined with the real-time CGM data to usefully predict glucose levels in the future. Predictions meant we could make *predictive* alarms, instead of solely having *reactive* alarms, which is what the status quo in diabetes has been for decades.
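The difference between reactive and predictive alarms can be illustrated with a deliberately oversimplified sketch. This is not the actual OpenAPS algorithm (which accounts for insulin on board, carbs entered, and much more); it assumes only a naive linear extrapolation of the CGM trend:

```python
def predict_glucose(readings, minutes_ahead=30):
    """Naively extrapolate the last two CGM readings (5 minutes apart)
    forward in time. Real AID algorithms do far more than this."""
    trend_per_5min = readings[-1] - readings[-2]
    return readings[-1] + trend_per_5min * (minutes_ahead / 5)

def reactive_alarm(readings, low=80):
    # Reactive: only fires once glucose is ALREADY below the threshold.
    return readings[-1] < low

def predictive_alarm(readings, low=80):
    # Predictive: fires when glucose is FORECAST to cross the threshold.
    return predict_glucose(readings) < low

readings = [120, 110, 100]   # falling 10 mg/dL every 5 minutes
reactive_alarm(readings)     # False: not low yet
predictive_alarm(readings)   # True: predicted ~40 mg/dL in 30 minutes
```

Even this toy version shows why prediction matters: the reactive alarm stays silent while glucose is falling fast, but the predictive one gives you half an hour of lead time to act.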

So sure, I started tracking what I was eating and dosing, but not really. I was hitting buttons to enter this information into the system because it was useful, again, in real time. I didn’t bother doing much with the data retrospectively. I did occasionally do things like reflect on my changes in sensitivity after I got the norovirus, for example, but again this was mostly looking in awe at how the real-time functionality of autosensitivity, an algorithm feature we designed to adjust to real-time changes in sensitivity to insulin, performed throughout the course of being sick.

At the beginning of 2020, my life changed. Not because of the pandemic (although also because of that), but because I began to have serious, very bothersome GI symptoms that dragged on throughout 2020 and 2021. I’ve written here about my experiences in eventually self-diagnosing (and confirming) that I have exocrine pancreatic insufficiency, and began taking pancreatic enzyme replacement therapy in January 2022.

What I haven’t yet done, though, is explain all my failed attempts at tracking things in 2020 and 2021. Or, not failed attempts, but where I started and stopped and why those tracking attempts weren’t useful.

Once I realized I had GI symptoms that weren’t going away, I tried writing down everything I ate. I tried writing in a list on my phone in spring of 2020. I couldn’t see any patterns. So I stopped.

A few months later, in summer of 2020, I tried again, this time using a digital spreadsheet so I could enter data from my phone or my computer. Again, after a few days, I still couldn’t see any patterns. So I stopped.

I made a third attempt to try to look at ingredients, rather than categories of food or individual food items. I came up with a short list of potential contenders, but repeated testing of consuming those ingredients didn’t do me any good. I stopped, again.

When I first went to the GI doctor in fall of 2020, one of the questions he asked was whether there was any pattern between my symptoms and what I was eating. “No,” I breathed out in a frustrated sigh. “I can’t find any patterns in what I’m eating and the symptoms.”

So we didn’t go down that rabbit hole.

At the start of 2021, though, I was sick and tired (of being sick and tired with GI symptoms for going on a year) and tried again. I decided that some of my “worst” symptoms happened after I consumed onions, so I tried removing obvious sources of onion from my diet. That evolved to onion and garlic, but I realized almost everything I ate also had onion powder or garlic powder, so I tried avoiding those. It helped, some. That then led me to research more, learn about the categorization of FODMAPs, and try a low-FODMAP diet in mid/fall 2021. That helped some.

Then I found out I actually had exocrine pancreatic insufficiency and it all made sense: what my symptoms were, why they were happening, and why the numerous previous tracking attempts were not successful.

You wouldn’t think I’d start tracking again, but I did. Although this time, finally, was different.

When I realized I had EPI, I learned that my body was no longer producing enough digestive enzymes to help my body digest fat, protein, and carbs. Because I’m a person with type 1 diabetes and have been correlating my insulin doses to my carbohydrate consumption for 20+ years, it seemed logical to me to track the amount of fat and protein in what I was eating, track my enzyme (PERT) dosing, and see if there were any correlations that indicated my doses needed to be more or less.

My spreadsheet involved recording the outcome of the previous day’s symptoms, and I had a section for entering multiple things that I ate throughout the day and the number of enzymes. I wrote a short description of my meal (“butter chicken” or “frozen pizza” or “chicken nuggets and veggies”), the estimate of fat and protein counts for the meal, and the number of enzymes I took for that meal. I had columns on the left that added up the total amount of fat and protein for the day, and the total number of enzymes.

It became very apparent to me – within two days – that the dose of the enzymes relative to the quantity of fat and protein I was eating mattered. I used this information to titrate (adjust) my enzyme dose and better match the enzymes to the amount of fat or protein I was eating. It was successful.

I kept writing down what I was eating, though.

In part, because it became a quick reference library to find the “counts” of a previous meal that I was duplicating, without having to re-do the burdensome math of adding up all the ingredients and counting them out for a typical portion size.

It also helped me see that within the first month, I was definitely improving, but not all the way – in terms of fully reducing and eliminating all of my symptoms. So I continued to use it to titrate my enzyme doses.

Then it helped me carefully work my way through re-adding food items and ingredients that I had been avoiding (like onions, apples, and pears) and proving to my brain that those were the result of enzyme insufficiency, not food intolerances. Once I had a working system for determining how to dose enzymes, it became a lot easier to see when I had slight symptoms from slightly getting my dosing wrong or majorly mis-estimating the fat and protein in what I was eating.

It provided me with a feedback loop that doesn’t really exist in EPI and GI conditions, and it was a daily, informative, real-time feedback loop.

As I reached the end of my first year of dosing with PERT, though, I was still using my spreadsheet. That surprised me, actually. Did I need to be using it? Not all the time. But the biggest reason I kept using it relates to how I often eat. I often pick an ‘entree’ for protein and then ‘build’ the rest of my meal around it, to help make sure I’m getting enough protein to fuel my ultrarunning endeavors.

So I pick my entree/main thing I’m eating and put it in my spreadsheet under the fat and protein columns (=17 g of fat, =20 g of protein, for example), then decide what I’m going to eat to go with it. Say I add a bag of cheddar popcorn: that becomes (=17+9 g of fat) and (=20+2 g of protein), and when I hit enter, those cells tell me it’s 26 g of fat and 22 g of protein for the meal, which tells my brain (and I also tell the spreadsheet) that I’ll take 1 PERT pill for that.

So I use the spreadsheet functionally to “build” what I’m eating and calculate the total grams of protein and fat, which helps me ‘calculate’ how much PERT to take (from my previous titration efforts, I know one PERT pill of the size of my prescription can cover up to 30 g each of fat and protein).

(Image: an example from my spreadsheet showing a meal and the in-progress data entry of the formula adding up two meal items’ worth of fat and protein.)

Essentially, this has become a real-time calculator to add up the numbers every time I eat. Sure, I could do this in my head, but I’m usually multitasking and deciding what I want to eat and writing it down, doing something else, doing yet something else, then going to make my food and eat it. This helps me remember, between the time I decided – sometimes minutes, sometimes hours in advance of when I start eating and need to actually take the enzymes – what the counts are and what the PERT dosing needs to be.
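The arithmetic my spreadsheet does can be sketched in a few lines of code. This is a minimal sketch of the logic described above, assuming the 30 g of fat and 30 g of protein per pill that my own titration arrived at (your coverage per pill will differ):

```python
import math

# Per-pill coverage from my titration: one PERT pill of my prescription
# size covers up to 30 g of fat AND up to 30 g of protein.
FAT_PER_PILL = 30
PROTEIN_PER_PILL = 30

def pert_pills(meal_items):
    """meal_items: list of (fat_g, protein_g) tuples, one per food item."""
    total_fat = sum(fat for fat, _ in meal_items)
    total_protein = sum(protein for _, protein in meal_items)
    # Dose to the limiting macronutrient: whichever needs more pills wins.
    pills = max(math.ceil(total_fat / FAT_PER_PILL),
                math.ceil(total_protein / PROTEIN_PER_PILL))
    return total_fat, total_protein, pills

# The example from the text: an entree (17 g fat, 20 g protein)
# plus cheddar popcorn (9 g fat, 2 g protein).
fat, protein, pills = pert_pills([(17, 20), (9, 2)])
print(fat, protein, pills)  # 26 g fat, 22 g protein -> 1 pill
```

The spreadsheet version is just this same running sum and round-up, done live as each item is added to the meal.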

I have done some neat retrospective analysis, of course – last year I estimated that I took thousands of PERT pills (more on that here). I was able to do that not because it’s “fun” to track every pill that I swallow, but because, as a result of functionally self-tracking what I was eating to determine my PERT dosing, I had a record of 99% of the enzyme pills I took last year.

I do have some things that I’m no longer entering in my spreadsheet, which is why it’s only 99% of what I eat. There are some things, like a quick snack, where I grab the food and the OTC enzymes to match without thought, swallow the pills, eat the snack, and don’t write it down. That maybe happens once a week. Generally, though, if I’m eating multiple things (like for a meal), then it’s incredibly useful in that moment to use my spreadsheet to add up all the counts to get my dosing right. If I don’t do that, my dosing is often off, and even a little bit “off” can cause uncomfortable and annoying symptoms for the rest of the day, overnight, and into the next morning.

So, I have quite the incentive to use this spreadsheet to make sure that I get my dosing right. It’s functional: not for the perceived “fun” of writing things down.

It’s the same thing that happens when I run long runs. I need to fuel my runs, and fuel (food) means enzymes. Figuring out how many enzymes to dose as I’m running 6, 9, or 25 hours into a run gets increasingly harder. What works for me is having a pre-built library of fuel options and a spreadsheet I can quickly open on my phone: I tap a drop-down list to mark what I’m eating, and it pulls in the counts from the library and tells me how many enzymes to take for that fuel (which I’ve already pre-calculated).

It’s useful in real-time for helping me dose the right amount of enzymes for the fuel that I need and am taking every 30 minutes throughout my run. It’s also useful for helping me stay on top of my goal amounts of calories and sodium to make sure I’m fueling enough of the right things (for running in general), which is something that can be hard to do the longer I run. (More about this method and a template for anyone who wants to track similarly here.)
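The mechanics of that fuel spreadsheet can be sketched roughly like this. The fuel names and counts below are illustrative placeholders, not my actual library:

```python
# Illustrative fuel library: names and counts are placeholders, not my real
# data. Enzymes per fuel item are pre-calculated once, ahead of the run.
FUEL_LIBRARY = {
    "fruit chews":     {"calories": 100, "sodium_mg": 40,  "enzymes": 0},
    "cheese crackers": {"calories": 200, "sodium_mg": 250, "enzymes": 1},
}

def log_fuel(log, item_name):
    """Record one fuel entry; return enzymes to take now plus running totals."""
    fuel = FUEL_LIBRARY[item_name]
    log.append(fuel)
    totals = {
        "calories": sum(f["calories"] for f in log),
        "sodium_mg": sum(f["sodium_mg"] for f in log),
    }
    return fuel["enzymes"], totals

run_log = []
enzymes, totals = log_fuel(run_log, "cheese crackers")  # take 1 enzyme now
enzymes, totals = log_fuel(run_log, "fruit chews")      # take 0 enzymes
# totals now shows 300 calories and 290 mg sodium consumed so far
```

The point of the design is that all the hard thinking (counting fat and protein, pre-calculating enzymes per item) happens before the run; mid-run, it’s reduced to a single tap plus a pill count.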

The TL;DR point of this is: I don’t track things for fun. I track things if and when they’re functionally useful, and primarily that is in real-time medical decision making.

These methods may not make sense to you, and don’t have to.

It may not be a method that works for you, or you may not have the situation that I’m in (T1D, Graves, celiac, and EPI – fun!) that necessitates these, or you may not have the goals that I have (ultrarunning). That’s ok!

But don’t say that you “couldn’t” do something. You ‘couldn’t’ track what you consumed when you ran or you ‘couldn’t’ write down what you were eating or you ‘couldn’t’ take that many pills or you ‘couldn’t’ inject insulin or…

You could, if you needed to, and if you decided it was the way that you could and would be able to achieve your goals.

Looking Back Through 2022 (What You May Have Missed)

I ended up writing a post last year recapping 2021, in part because I felt like I did hardly anything – which wasn’t true. In part, that was based on my body having a number of things going on that I didn’t know at the time. I figured those out in 2022 which made 2022 hard and also provided me with a sense of accomplishment as I tackled some of these new challenges.

For 2022, I have a very different feeling looking back on the entire year, which makes me so happy because it was night and day (different) compared to this time last year.

One major example? Exocrine Pancreatic Insufficiency.

I started taking enzymes (pancreatic enzyme replacement therapy, known as PERT) in early January. And they clearly worked, hooray!

I quickly realized that like insulin, PERT dosing needed to be based on the contents of my meals. I figured out how to effectively titrate for each meal and within a month or two was reliably dosing effectively with everything I was eating and drinking. And, I was writing and sharing my knowledge with others – you can see many of the posts I wrote collected at DIYPS.org/EPI.

I also designed and built an open source web calculator to help others figure out their ratios of lipase and fat and protease and protein to help them improve their dosing.
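The kind of ratio arithmetic such a calculator performs can be sketched as follows. The 25,000-unit lipase pill size and the sample meals are illustrative assumptions for the sketch, not anyone’s actual prescription or data:

```python
import math

def units_per_gram(units_per_pill, pills_taken, grams_covered):
    """Derive a working ratio (enzyme units per gram of macronutrient)
    from a meal that was successfully covered by a known dose."""
    return (units_per_pill * pills_taken) / grams_covered

def pills_for_meal(grams, ratio, units_per_pill):
    """Round up: you can't take a fraction of a pill."""
    return math.ceil(grams * ratio / units_per_pill)

# Illustrative: if one 25,000-unit lipase pill covered a 30 g fat meal,
# the working ratio is ~833 lipase units per gram of fat.
lipase_ratio = units_per_gram(25000, 1, 30)
pills_for_meal(55, lipase_ratio, 25000)  # a 55 g fat meal -> 2 pills
```

The same calculation applies to protease and protein; once the ratios are known, dosing any new meal is a multiplication and a round-up.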

I even published a peer-reviewed journal article about EPI – submitted within 4 months of confirming that I had it! You can read that paper here with an analysis of glucose data from both before and after starting PERT. It’s a really neat example that I hope will pave the way for answering many questions we all have about how particular medications possibly affect glucose levels (instead of simply being warned that they “may cause hypoglycemia or hyperglycemia” which is vague and unhelpful.)

I also had my eyes opened to having another chronic disease that has very, very expensive medication with no generic medication option available (and OTCs may or may not work well). Here’s some of the math I did on the cost of living with EPI and diabetes (and celiac and Graves) for a year, in case you missed it.

Another challenge (and success) was running (again), but with a 6-week forced break (ha) because I massively broke a toe in July 2022.

That was physically painful and frustrating for delaying my ultramarathon training.

I had been successfully figuring out how to run and fuel with enzymes for EPI; I even built a DIY macronutrient tracker and shared a template so others can use it. I ran a 50k with a river crossing in early June and was on track to target my 100 mile run in early fall.

However with the broken toe, I took the time off needed and carefully built back up, put a lot of planning into it, and made my attempt in late October instead.

I succeeded in running ~82 miles in ~25 hours, all in one go!

I am immensely proud of that run for so many reasons, some of which are general pride at the accomplishment and others are specific, including:

  • Doing something I didn’t think I could do which is running all day and all night without stopping
  • Doing this as a solo or “DIY” self-organized ultra
  • Eating every 30 minutes like clockwork, consuming enzymes (more than 92 pills!), which means 50 snacks consumed. No GI issues, either, which is remarkable even for an ultrarunner without EPI!
  • Generally figuring out all the plans and logistics needed to be able to handle such a run, especially when dealing with type 1 diabetes, celiac, EPI, and Graves
  • Not causing any injuries, and in fact recovering remarkably fast which shows how effective my training and ‘race’ strategy were.

On top of this all, I achieved my biggest-ever running year, with more than 1,333 miles run this year. This is 300+ more than my previous best from last year which was the first time I crossed 1,000 miles in a year.

Professionally, I did quite a lot of miscellaneous writing, research, and other activities.

I spent a lot of time doing research. I also peer reviewed more than 24 papers for academic journals. I was asked to join an editorial board for a journal. I served on 2 grant review committees/programs.

I also wrote a ton*.

*by ton, I mean way more than the past couple of years combined. Some of that has been due to getting some energy back once I’ve fixed missing enzyme and mis-adjusted hormone levels in my body! I’m up to 40+ blog posts this year.

And personally, the punches felt like they kept coming, because this year we also found out that I have Graves’ disease, taking my chronic disease count up to 4. Argh. (T1D, celiac, EPI, and now Graves’, for those curious about my list.)

My experience with Graves’ has included symptoms of subclinical hyperthyroidism (although my T3 and T4 are in range), and I have chosen to try thyroid medication in order to manage the really bothersome Graves’-related eye symptoms. That’s been an ongoing process and the symptoms of this have been up and down a number of times as I went on medication, reduced medication levels, etc.

What I’ve learned from my experience with both EPI and Graves’ in the same year is that there are some huge gaps in medical knowledge around how these things actually work and how to use real-world data (whether patient-recorded data or wearable-tracked data) to help with diagnosis, treatment (including medication titration), etc. So the upside to this is I have quite a few new projects and articles coming to fruition to help tackle some of the gaps that I fell into or spotted this year.

And that’s why I’m feeling optimistic, and like I accomplished quite a bit more in 2022 than in 2021. Some of it is the satisfaction of knowing the core two reasons why the previous year felt so physically bad; hopefully no more unsolved mysteries or additional chronic diseases will pop up in the next few years. Yet some of it is also the satisfaction of solving problems and creating solutions that I’m uniquely poised, due to my past experiences and skillsets, to solve. That feels good, and it feels good as always to get to channel my experiences and expertise to try to create solutions with words or code or research to help other people.

Dealing With And Avoiding Chronic Disease Management Burnout

I’ve been thinking about juggling lately, especially as this year I’ve had to add a series of new habits and behaviors and medications to manage not one but two new chronic diseases. Getting one new chronic disease is hard; getting another is hard; and the challenges aren’t necessarily linear or exponential, and they’re not necessarily obvious up front.

But sometimes the challenges do compound over time.

In January when I started taking pancreatic enzyme replacement therapy (PERT) for exocrine pancreatic insufficiency (EPI or PEI), I had to teach myself to remember to take enzymes at every meal. Not just some time around the meal, but 100% of the time, either before (by only a few minutes) or right at the start of the meal. With PERT, the timing matters for efficacy.

I have a fast/short feedback loop – if I mis-time my enzymes or don’t take them, I get varying symptoms within a few hours that then bother me for the rest of the day, overnight, and into the next morning. So I’m very incentivized to take the enzymes and time them effectively when I eat.

However, as I started to travel (my first trip out of the country since the pandemic started), I was nervous about trying to adapt to travel and being out of my routine at home, where I’ve placed enzymes within eyesight of every location where I might consume food. Thankfully, that all went well and I managed not to forget to take enzymes when I ate. But I know I’m still building the habit of pairing enzymes with eating, and that involves both always having enzymes with me and remembering to get them out and take them. It sounds like a trivial amount of things to remember, but this is added on top of everything else I’m doing to manage my health and well-being.

This includes other “simple” things like taking my allergy medications – because I’m allergic to cats (and we have them!), trees, dust, etc. And vitamins (I’m vitamin D deficient when I don’t take vitamin D).

And brushing my teeth and flossing.

You do that too, right? Or maybe you’re one of those people who struggle to remember to floss. It’s normal.

The list of well-being management gets kind of long when you think about all the every day activities and habits you have to help you stay at your best possible health.

Eat healthy! (You do that, right? 😉 )

Hydrate!

Exercise!

Etc.

I’ve also got the background habits of 20 years of living with diabetes: keeping my pump sites on my body; refilling the reservoir and changing the pump site every few days; making sure the insulin doesn’t get too hot or cold; making sure my CGM data isn’t too noisy; changing my CGM sensor when needed; estimating ballpark carbs and entering them and/or temporary targets to indicate exercise into my open source AID; keeping my AID powered; keeping my pump powered; keeping my phone – which has my CGM visibility on it – powered and nearby. Ordering supplies – batteries and pump sites and reservoirs and CGM transmitters and CGM sensors and insulin and glucagon.

Some of these are daily or every few days tasks; others are once or twice a month or every three months.

Those stack up sometimes where I need to refill a reservoir and oops, get another bottle of insulin out of the fridge which reminds me to make a note to check on my shipment of insulin which hasn’t arrived yet. I also need to change my pump site and my CGM sensor is expiring at bedtime so I need to also go ahead and change it so the CGM warmup period will be done by the time I go to sleep. I want to refill my reservoir and change the pump site after dinner since the dinner insulin is more effective on the existing site; I think of this as I pull my enzymes out to swallow as I start eating. I’ll do the CGM insertion when I do my pump site change. But the CGM warmup period is then in the after-dinner timeframe so I then have to keep an eye on things manually because my AID can’t function without CGM data so 2 hours (or more) of warmup means extra manual diabetes attention. While I’m doing that, I also need to remember to take my allergy medication and vitamin D, plus remembering to take my new thyroid medication at bedtime.

Any given day, that set of overlapping scenarios may be totally fine, and I don’t think anything of them.

On other days, where I might be stressed or overwhelmed by something else – even if it’s not health-related – that can make the above scenario feel overwhelmingly difficult.

One of the strategies I discussed in a previous post relative to planning travel or busy periods like holidays is trying to separate tasks in advance (like pre-filling a reservoir), so the action tasks (inserting a pump site and hooking it up to a new reservoir) don’t take as long. That works well, if you know the busy period is coming.

But sometimes you don’t have awareness of a forthcoming busy period and life happens. Or it’s not necessarily busy, per se, but you start to get overwhelmed and stressed and that leaks over into the necessary care and feeding of medical stuff, like managing pump sites and reservoirs and sensors and medication.

You might start negotiating with yourself: “Do I really need to change that pump site today? It can wait until tomorrow.” Or you might wait until your reservoir actually hits the ‘0’ level (which isn’t fully 0; there are a few units left, plus or minus some bubbles) to refill it. Or other things like that, whether it’s not entering carbs into your pump or AID, or not bolusing. Depending on your system/setup, those things may not be a big deal. And for a day or two, they’re likely not a big deal overall.

But falling into the rut of these becoming the new normal is not optimal – that’s burnout, and I try to avoid getting there.

When I start to have some of those thought patterns and recognize that I have begun negotiating with myself, I try to voice how I’m feeling to myself and my spouse or family or friends. I tell them I’m starting to feel “crispy” (around the edges) – indicating I’m not fully burnt out, but I could get all the way to burnout if I don’t temporarily change some things. (Or permanently, but often for me temporary shifts are effective.)

One of the first things I do is think through what is the bare minimum necessary care I need to take. I go above and beyond and optimize a LOT of things to get above-target outcomes in most areas. While I like to do those things, they’re not necessary. So I think through the list of necessary things, like: keeping a working pump site on my body; keeping insulin in a reservoir attached to my pump; keeping my CGM sensor working; and keeping my AID powered and nearby.

That then leaves a pile of tasks to consider:

  1. Not doing at all for ___ period of time
  2. Not doing myself but asking someone else to do for ____ period of time

And then I either ask or accept the offers of help I get to do some of those things.

When I was in high school and college, I would have weekends where I would ask my parents to help. They would take on the task of carb counting (or estimating) so I didn’t have to. (They also did HEAPS of work for years while I was on their insurance to order and keep supplies in the house and wrangle with insurance so I didn’t have to – that was huge background help that I greatly appreciated.)

Nowadays, there are still things I can and do get other people to help with. Sometimes it’s listening to me vent (with a clear warning that I’m just venting and don’t need suggestions); my parents often still fill that role for me! Since I’m now married and no longer living alone, Scott offers a lot of support especially during those times. Sometimes he fills reservoirs for me, or more often will bring me supplies from the cabinet or fridge to wherever I’m sitting (or even in bed so I don’t have to get up to go change my site). Or he’ll help evaluate and determine that something can wait until a later time to do (e.g. change pump site at another time). Sometimes I get him to open boxes for me and we re-organize how my supplies are to make them easier to grab and go.

Those are diabetes-specific examples, but I’ve also written about how helpful additional help can be sometimes for EPI too, especially with weighing and estimating macronutrient counts so I can figure out my PERT dosing. Or making food once I’ve decided what I want to eat, again so I can separate deciding what to eat and what the counts/dosing is from the action tasks of preparing or cooking the food.

For celiac, one of the biggest changes that has helped was Scott asking family members to load the “Find Me Gluten Free” app on their phone. That way, if we were going out to eat or finding a takeout option, instead of everyone ALWAYS turning to me and saying “what are the gluten free options?”, they could occasionally also skim the app to see what some of the obvious choices were, so I wasn’t always having to drive the family decision making on where to eat.

If you don’t have a chronic illness (or multiple chronic illnesses), these might not sound like a big deal. If you do (even if you have a different set of chronic disease(s)), maybe you recognize some of this.

There are estimates that people with diabetes make hundreds of decisions and actions a day for managing living with diabetes. Multiply that times 20 years. Ditto for celiac, for identifying and preparing and guarding against cross-contamination of said gluten-free food – multiply that work every day times 14 years. And now a year’s worth of *every* time I consider eating anything to estimate (with reading nutrition labels or calculating combinations based on food labels or weighing and googling and estimating compared to other nutrition labels) how much enzymes to take and remembering to swallow the right number of pills at the optimal times. Plus the moral and financial weight of deciding how to balance efficacy with cost of these enzymes. Plus several months now of an additional life-critical medication.

It’s so much work.

It’s easy to get outright burnt out, and common to start to feel a little “crispy” around the edges at times.

If you find yourself in this position, know that it’s normal.

You’re doing a lot, and you’re doing a great job to keep yourself alive.

You can’t do 110% all the time, though, so it is ok to figure out what is the bare minimum and some days throughout the year, just do that, so you can go back to 110%-ing it (or 100%-ing) the other days.

With practice, you will increasingly be able to spot patterns of scenarios or times of the year when you typically get crispy, and maybe you can eventually figure out strategies to adapt in advance (see me over here pre-filling reservoirs ahead of Thanksgiving last week and planning when I’d change my pump site and planning exactly what I would eat for 3 days).

TLDR:

  • Living with chronic disease is hard. And the more diseases you have, the harder it can be.
  • If you live with or love someone with chronic disease(s), ask them if you can help. If they’re venting, ask if they want you to listen (valuable!) or to let you know if at any point they want help brainstorming or for you to provide suggestions (helpful *if* desired and requested).
  • If you’re the one living with chronic disease(s), consider asking for help, even with small things. Don’t let your own judgment (“I should be able to do this!”) get in your way of asking for help. Try it for a day or for a weekend.
Dealing with and avoiding chronic disease burnout by Dana M. Lewis

Regulatory Approval Is A Red Herring

One of the most common questions I have been asked over the last 8 years is whether or not we are submitting OpenAPS to the FDA for regulatory approval.

This question is a big red herring.

Regulatory approval is often seen and discussed as the one path for authenticating and validating safety and efficacy.

It’s not the only way.

It’s only one way.

As background, you need to understand what OpenAPS is. We took an already-approved insulin pump that I already had, a continuous glucose monitor (CGM) that I already had, and found a way to read data from those devices and also to use the already-built commands in the pump to send back instructions to automate insulin delivery via the decision-making algorithm that we created. The OpenAPS algorithm was the core innovation, along with the realization that this already-approved pump had those capabilities built in. We used various off the shelf hardware (mini-computers and radio communication boards) to interoperate with my already approved medical devices. There was novelty in how we put all the pieces together, though the innovation was the algorithm itself.

The caveat, though, is that although the pump I was using was regulatory-approved and on the market (which is how I already had it), it had later been recalled after researchers, the manufacturer, and the FDA realized that third parties could use the already-built commands in the pump’s infrastructure. So these pumps, while never having been recorded as causing harm to anyone, were no longer being sold. It wasn’t a big deal to the company; it was a voluntary recall, and people like me often chose to keep our pumps if we weren’t concerned about this potential risk.

We had figured out how to interoperate with these other devices. We could have taken our system to the FDA. But because we were using already-off-the-market pumps, there was no way the FDA would approve it. And at the time (circa 2014), there was no vision or pathway for interoperable devices, so they didn’t have the infrastructure to approve “just” an automated insulin delivery algorithm. (That changed many years later, and they now have infrastructure for reviewing interoperable pumps, CGMs, and algorithms, which they call controllers.)

The other relevant fact is that the FDA has jurisdiction based on the commerce clause in the US Constitution: Congress used its authority to authorize the FDA to regulate interstate commerce in food, drugs, and medical devices. So if you’re intending to be a commercial entity and sell products, you must submit for regulatory approval.

But if you’re not going to sell products…

This is the other aspect that many people don’t seem to understand. Not all roads lead to regulatory approval, because not everyone wants to create a company and spend 5+ years dedicating all their time to it. That’s what we would have had to do in order to have a company to pursue regulatory approval.

And the key point is: given such a strict regulatory environment, we (speaking for Dana and Scott) did not want to commercialize anything. Therefore there was no point in submitting for regulatory approval. Regardless of whether or not the FDA was likely to approve given the situation at the time, we did not want to create a company, spend years of our life dealing with regulatory and compliance issues full time, and maybe eventually get permission to sell a thing (that we didn’t care about selling).

Regulatory approval is a red herring in the story of OpenAPS and the impact it is having and could have.

Yes, we could have created a company. But then we would not have been able to spend the thousands of hours that we spent improving the system we made open source and helping thousands of individuals who were able to use the algorithm and subsequent systems with a variety of pumps, CGMs, and mobile devices as an open source automated insulin delivery system. We intentionally chose this path to not commercialize and thus not to pursue regulatory approval.

As a result of our work (and others from the community), the ecosystem has now changed.

Time has also passed: it’s been 8 years since I first automated insulin delivery for myself!

The commercial players have brought multiple commercial AIDs to market now, too.

We created OpenAPS when there was NO commercial option. Now there are a few commercial options.

But it is also important to note that I, and many thousands of other people, are still choosing to use open source AID systems.

Why?

This is another aspect of the red herring of regulatory approval.

Just because something is approved does not mean it’s available to order.

If it’s available to order (and not all countries have approved AID systems!), it doesn’t mean it’s accessible or affordable.

Insurance companies are still fighting against covering pumps and CGMs as standalone devices. New commercial AID systems are even more expensive, and the insurance companies are fighting against coverage for them, too. So just because someone wants an AID and has one approved in their country doesn’t mean that they will be able to access and/or afford it. Many people with diabetes struggle with the cost of insulin, or the cost of CGM and/or their insulin pump.

Sometimes providers refuse to prescribe devices, based on preconceived notions (and biases) about who might do “well” with new therapies based on past outcomes with different therapies.

For some, open source AID is still the most accessible and affordable option.

And in some places, it is still the ONLY option available to automate insulin delivery.

(And in most places, open source AID is still the most advanced, flexible, and customizable option.)

Understanding the many reasons why someone might choose to use open source automated insulin delivery folds back into understanding how someone chooses to use it.

It is tied to the understanding that manual insulin delivery – where someone makes all the decisions themselves and injects or presses buttons manually to deliver insulin – is inherently risky.

Automated insulin delivery reduces risk compared to manual insulin delivery. While some new risk is introduced (as is true of any additional device), the overall net risk reduction compared to manual insulin delivery is significant.

This net risk reduction is important to contextualize.

Without automated insulin delivery, people overdose or underdose on insulin multiple times a day, causing adverse effects and bad outcomes and decreasing their quality of life. Even when they’re doing everything right, this is inevitable because the timing of insulin is so challenging to manage alongside dozens of other variables that at every decision point play a role in influencing the glucose outcomes.

With open source automated insulin delivery, it is not a single point-in-time decision to use the system.

Every moment, every day, people are actively choosing to use their open source automated insulin delivery system because it is better than the alternative of managing diabetes manually without automated insulin delivery.

It is a conscious choice that people make every single day. They could otherwise choose to not use the automated components and “fall back” to manual diabetes care at any moment of the day or night if they so choose. But most don’t, because it is safer and the outcomes are better with automated insulin delivery.

Each individual’s actions to use open source AID on an ongoing basis are data points on the increased safety and efficacy.

However, this paradigm of patient-generated data and patient choice as data contributing toward safety and efficacy is new. There are not many, if any, other examples of patient-developed technology that does not go down the commercial path, so there are not a lot of comparisons for open source AID systems.

As a result, when there were questions about the safety and efficacy of the system (e.g., “how do you know it works for someone else other than you, Dana?”), we began to research as a community to address the questions. We published data at the world’s biggest scientific conference and were peer-reviewed by scientists and accepted to present a poster. We did so. We were cited in a piece in Nature as a result. We then were invited to submit a letter to the editor of a traditional diabetes journal to summarize our findings; we did so and were published.

I then waited for the rest of the research community to pick up this lead and build on the work…but they didn’t. I picked it up again and began facilitating research directly with the community, coordinating efforts to make anonymized pools of data for individuals with open source AID to submit their data to and for years have facilitated access to dozens of researchers to use this data for additional research. This has led to dozens of publications further documenting the efficacy of these solutions.

Yet still, there was concern about safety, because the healthcare world didn’t know how to assess these patient-generated data points: people choosing, every single day, to use this system because it was better than the alternative.

So finally, as a direct result of presenting this community-based research again at the world’s largest diabetes scientific conference, we were able to collaborate and design a grant proposal that received grant funding from New Zealand’s Health Research Council (the equivalent of the NIH in the US) for a randomized control trial of the OpenAPS algorithm in an open source AID system.

An RCT is often seen as the gold standard in science, so the fact that we received funding for such a study alone was a big milestone.

And this year, in 2022, the RCT was completed and our findings were published in one of the world’s largest medical journals, the New England Journal of Medicine, establishing that the use of the OpenAPS algorithm in an open source AID was found to be safe and effective in children and adults.

No surprises here, though. I’ve been using this system for more than 8 years, and have seen thousands of others choose the OpenAPS algorithm on an ongoing, daily basis for similar reasons.

So today, it is possible that someone could take an open source AID system using the OpenAPS algorithm to the FDA for regulatory approval. It won’t likely be me, though.

Why not? The same reasons apply as 8 years ago: I am not a company, and I don’t want to create one just to be able to sell things to end users. The path to regulatory approval primarily matters for those who want to sell commercial products to end users.

Also, regulatory approval (whether of the OpenAPS algorithm or a different algorithm in an open source AID) does not mean the system will be commercially available, even if it is approved.

It requires a company that has pumps and CGMs it can sell alongside the AID system OR commercial partnerships ready to go that are able to sell all of the interoperable, approved components to interoperate with the AID system.

So regulatory approval of an AID system (algorithm/mobile controller design) without a commercial partnership plan ready to go is not very meaningful to people with diabetes in and of itself. It sounds cool, but will it actually do anything? In and of itself, no.

Thus, the red herring.

Might it be meaningful eventually? Yes, possibly – especially if we can collectively get insurers to get over themselves and provide coverage for AID systems, given that AID systems massively improve short-term and long-term outcomes for people with diabetes.

But as I said earlier, regulatory approval does not guarantee access or affordability, so an approved system that’s not available and affordable to people is not a system that can be used by many.

We have a long way to go before commercial AID systems are widely accessible and affordable, let alone available in every single country for people with diabetes worldwide.

Therefore, regulatory approval is only one piece of this puzzle.

And it is not the only way to assess safety and efficacy.

The bigger picture this has shown me over the years is that while these systems were created to reduce harm to people – which is valid and good – there is a tendency to assume that the systems are therefore the only way to achieve harm reduction or to assess safety and efficacy.

They aren’t the only way.

As explained above, FDA approval is one method of creating a rubber stamp as a shorthand for “is this considered to be safe and effective”.

That’s also legally necessary for companies to use if they want to sell products. For situations that aren’t selling products, it’s not the only way to assess safety and efficacy, which we have shown with OpenAPS.

With open source automated insulin delivery systems, individuals have access to every line of code and can test and choose for themselves, not just once, but every single day, whether they consider it to be safer and more effective for them than manual insulin dosing. Instead of blindly trusting a company, they get the choice to evaluate what they’re using in a different way – if they so choose.

So any questions around seeking regulatory approval are red herrings.

A different question might be: What’s the future of the OpenAPS algorithm?

The answer is written in our OpenAPS plain language reference design that we posted in February of 2015. We detailed our vision for individuals like us, researchers, and companies to be able to use it in the future.

And that’s how it’s being used today: 1) by people like me; 2) in research, to improve what we can learn about diabetes itself and improve AID; and 3) by companies, one of which has already incorporated parts of our safety design as a safety layer in their ML-based AID system, which has CE mark approval and is being sold and used by thousands of people in Europe.

It’s possible that someone will take it for regulatory approval; but that’s not necessary for the thousands of people already using it. That may or may not make it more available for thousands more (see earlier caveats about needing commercial partnerships to be able to interoperate with pumps and CGMs).

And regardless, it is still being used to change the world for thousands of people and help us learn and understand new things about the physiology of diabetes because of the way it was designed.

That’s how it’s been used and that’s the future of how it will continue to be used.

No rubber stamps required.

Regulatory Approval: A Red Herring

Understanding the Difference Between Open Source and DIY in Diabetes

There’s been a lot of excitement (yay!) about the results of the CREATE trial being published in NEJM, followed by the presentation of the continuation results at EASD. This has generated a lot of blog posts, news articles, and discussion about what was studied and what the implications are.

One area that I’ve noticed is frequently misunderstood is how “open source” and “DIY” are different.

Open source means that the source code is openly available to view. There are different licenses for open source code; most allow you to take, reuse, and modify the code however you like. Some “copyleft” licenses require commercial entities to open-source any software they build using such code. Most companies can and do use open source code, too, although in healthcare most algorithms and other code related to FDA-regulated activity are proprietary. Most open source licenses allow free individual use.

For example, OpenAPS is open source. You can find the core code of the algorithm here, hosted on Github, and read every line of code. You can take it, copy it, use it as-is or modify it however you like, because the MIT license we put on the code says you can!

As an individual, you can choose to use the open source code to “DIY” (do-it-yourself) an automated insulin delivery system. You’re DIY-ing, meaning you’re building it yourself rather than buying it or a service from a company.

In other words, you can DIY with open source. But open source and DIY are not the same thing!

Open source can be, and in most industries usually is, used commercially. In healthcare, and in diabetes specifically, there are only a few examples of this. For OpenAPS, as you can read in our plain language reference design, we wanted companies to use our code as well as individuals (who would DIY with it). There’s at least one commercial company now using ideas from the OpenAPS codebase and our safety design as a safety layer around their ML algorithm, to make sure that its insulin dosing decisions are checked against our safety design. How cool!

However, they’re a company, and they have wrapped up their combination of proprietary software and the open source software they have implemented, gotten a CE mark (European equivalent of FDA approval), and commercialized and sold their AID product to people with diabetes in Europe. So, those customers/users/people with diabetes are benefitting from open source, although they are not DIY-ing their AID.

Outside of healthcare, open source is used far more pervasively. Have you ever used Zoom? Zoom uses open source; you then use Zoom, although not in a DIY way. Same with Firefox, the browser. Ever heard of Adobe? They use open source. Facebook. Google. IBM. Intel. LinkedIn. Microsoft. Netflix. Oracle. Samsung. Twitter. Nearly every product or service you use is built with, depends on, or contains open source components. Oftentimes, open source is used by companies to provide products to users – but not always.

So, to more easily understand how to talk about open source vs DIY:

  • The CREATE trial used a version of open source software and algorithm (the OpenAPS algorithm inside a modified version of the AndroidAPS application) in the study.
  • The study was NOT on “DIY” automated insulin delivery; the AID system was handed/provided to participants in the study. There was no DIY component in the study, although the same software is used both in the study and in the real world community by those who do DIY it. Instead, the point of the trial was to study the safety and efficacy of this version of open source AID.
  • Open source is not the same as DIY.
  • OpenAPS is open source and can be used by anyone – companies that want to commercialize, or individuals who want to DIY. For more information about our vision for this, check out the OpenAPS plain language reference design.
Venn diagram showing a small overlap between a bigger open source circle and a smaller DIY circle. An arrow points to the overlapping section, along with text of "OpenAPS". Below it text reads: "OpenAPS is open source and can be used DIY. DIY in diabetes often uses open source, but not always. Not all open source is used DIY."

Continuation Results On 48 Weeks of Use Of Open Source Automated Insulin Delivery From the CREATE Trial: Safety And Efficacy Data

In addition to the primary endpoint results from the CREATE trial, which you can read more about in detail here or as published in the New England Journal of Medicine, there was also a continuation phase study of the CREATE trial. This meant that all participants from the CREATE trial, including those who were randomized to the automated insulin delivery (AID) arm and those who were randomized to sensor-augmented insulin pump therapy (SAPT, which means just a pump and CGM, no algorithm), had the option to continue for another 24 weeks using the open source AID system.

These results were presented by Dr. Mercedes J. Burnside at #EASD2022, and I’ve summarized her presentation and the results below on behalf of the CREATE study team.

What is the “continuation phase”?

The CREATE trial was a multi-site, open-label, randomized, parallel-group, 24-week superiority trial evaluating the efficacy and safety of an open-source AID system using the OpenAPS algorithm in a modified version of AndroidAPS. Our study found that, across children and adults, the percentage of time that the glucose level was in the target range of 3.9-10 mmol/L [70-180 mg/dL] was 14 percentage points higher among those who used the open-source AID system (95% confidence interval [CI], 9.2 to 18.8; P<0.001) compared to those who used sensor-augmented pump therapy; a difference that corresponds to 3 hours 21 minutes more time spent in target range per day. The system did not contribute to any additional hypoglycemia. Glycemic improvements were evident within the first week and were maintained over the 24-week trial. This illustrates that all people with T1D, irrespective of their level of engagement with diabetes self-care and/or previous glycemic outcomes, stand to benefit from AID. The initial study concluded that open-source AID using the OpenAPS algorithm within a modified version of AndroidAPS, a widely used open-source AID solution, is efficacious and safe. These results were from the first 24-week phase, when the two groups were randomized into SAPT and AID respectively.
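The headline conversion above (14 percentage points of time in range corresponding to about 3 hours 21 minutes per day) is straightforward arithmetic over a 24-hour day; a minimal sketch, using only the values from the text:

```python
# Back-of-the-envelope arithmetic for the headline CREATE result:
# a time-in-range (TIR) difference in percentage points, applied to a
# 24-hour day, yields the extra time spent in range per day.

def tir_points_to_minutes_per_day(percentage_points: float) -> int:
    """Convert a TIR difference (in percentage points) to whole minutes per day."""
    minutes_per_day = 24 * 60  # 1440 minutes in a day
    return int(percentage_points / 100 * minutes_per_day)

extra_minutes = tir_points_to_minutes_per_day(14)
hours, minutes = divmod(extra_minutes, 60)
print(f"{hours}h {minutes}m more time in range per day")  # → 3h 21m more time in range per day
```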

The second 24-week phase is known as the “continuation phase” of the study.

There were 52 participants who were randomized into the SAPT group that chose to continue in the study and used AID for the 24 week continuation phase. We refer to those as the “SAPT-AID” group. There were 42 participants initially randomized into AID who continued to use AID for another 24 weeks (the AID-AID group).

One slight change in the continuation phase was that those in the SAPT-AID group used a different insulin pump than the one used in the primary phase of the study (and 18/42 AID-AID participants also switched to this different pump during the continuation phase), but it was a similar Bluetooth-enabled pump that was interoperable with the AID system (app/algorithm) and CGM used in the primary outcome phase.

All 42 participants in AID-AID completed the continuation phase; 6 of the 52 participants in the SAPT-AID group withdrew: one due to infusion site issues, three due to pump issues, and two who preferred SAPT.

What are the results from the continuation phase?

In the continuation phase, those in the SAPT-AID group saw a change in time in range (TIR) from 55±16% to 69±11% when they used AID. In the SAPT-AID group, the percentage of participants who were able to achieve the target goals of TIR >70% and time below range (TBR) <4% increased from 11% of participants during SAPT use to 49% during the 24-week AID use in the continuation phase. As in the primary phase for AID-AID participants, the SAPT-AID participants saw the greatest treatment effect overnight, with a TIR difference of 20.37% (95% CI, 17.68 to 23.07; p<0.001), and 9.21% during the day (95% CI, 7.44 to 10.98; p<0.001) during the continuation phase with open source AID.

Those in the AID-AID group, meaning those who continued for a second 24 week period using AID, saw similar TIR outcomes. Prior to AID use at the start of the study, TIR for that group was 61±14% and increased to 71±12% at the end of the primary outcome phase; after the next 6 months of the continuation phase, TIR was maintained at 70±12%. In this AID-AID group, the percentage of participants achieving target goals of TIR >70% and TBR <4% was 52% of participants in the first 6 months of AID use and 45% during the continuation phase. Similarly to the primary outcomes phase, in the continuation phase there was also no treatment effect by age interaction (p=0.39).

The TIR outcomes between the two groups (SAPT-AID and AID-AID) were very similar after each group had used AID for 24 weeks (the SAPT-AID group using AID during the continuation phase, and the AID-AID group using AID during the initial RCT phase). The adjusted difference in TIR between these groups was 1% (95% CI, -4 to 6; p=0.67). There were no glycemic outcome differences between those using the two different study pumps (n=69, comprising the SAPT-AID group and the 18 AID-AID participants who switched pumps for the continuation phase; and n=25, from the AID-AID group who elected to continue on the pump they used in the primary outcomes phase).

In the initial primary results (the first 24 weeks of the trial, comparing the AID group to the SAPT group), there was a 14-percentage-point difference between the groups. In the continuation phase, everyone used AID, and the adjusted mean difference in TIR between AID use and the initial SAPT results was a similar 12.10 percentage points (95% CI, p<0.001, SD 8.40).
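To make the continuation-phase percentages above more concrete, the same TIR figures can be re-expressed as hours per day in range (a sketch using only the SAPT-AID group’s values from the text, 55% on SAPT vs 69% on AID):

```python
# Expressing the continuation-phase TIR figures from the text as hours per day
# (SAPT-AID group: 55% TIR on SAPT vs 69% TIR after switching to open source AID).

def tir_hours_per_day(tir_percent: float) -> float:
    """Hours per day spent in the 3.9-10 mmol/L target range at a given TIR percentage."""
    return tir_percent / 100 * 24

on_sapt = tir_hours_per_day(55)  # ≈ 13.2 hours/day in range
on_aid = tir_hours_per_day(69)   # ≈ 16.6 hours/day in range
print(f"Gain: {on_aid - on_sapt:.1f} hours/day in range")  # → Gain: 3.4 hours/day in range
```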

Similar to the primary phase, there was no DKA or severe hypoglycemia. Long-term use (over 48 weeks, representing 69 person-years) did not detect any rare severe adverse events.

CREATE results from the full 48 weeks on open source AID with both SAPT (control) and AID (intervention) groups plotted on the graph.

Conclusion of the continuation study from the CREATE trial

In conclusion, the continuation study from the CREATE trial found that open-source AID using the OpenAPS algorithm within a modified version of AndroidAPS is efficacious and safe with various hardware (pumps), and demonstrates sustained glycaemic improvements without additional safety concerns.

Key takeaways:

  • Over 48 weeks total of the study (6 months or 24 weeks in the primary phase; 6 months/24 weeks in the continuation phase), there were 64 person-years of use of open source AID in the study, compared to 59 person-years of use of sensor-augmented pump therapy.
  • A variety of pump hardware options were used in the primary phase of the study among the SAPT group, due to hardware (pump) availability limitations. Different pumps were also used in the SAPT-AID group during the AID continuation phase, compared to the pumps available in the AID-AID group throughout both phases of trial. (Also, 18/42 of AID-AID participants chose to switch to the other pump type during the continuation phase).
  • The similar TIR results (a 14-percentage-point difference in the primary phase and a 12-percentage-point difference in the continuation phase between AID and SAPT) show the durability of the open source AID system and algorithm used, regardless of pump hardware.
  • The SAPT-AID group achieved similar TIR results at the end of their first 6 months of use of AID when compared to the AID-AID group at both their initial 6 months use and their total 12 months/48 weeks of use at the end of the continuation phase.
  • The safety data showed no DKA or severe hypoglycemia in either the primary phase or the continuation phases.
  • Glycemic improvements from this version of open source AID (the OpenAPS algorithm in a modified version of AndroidAPS) are not only immediate but also sustained, and do not increase safety concerns.
CREATE Trial Continuation Results were presented at #EASD2022 on 48 weeks of use of open source AID

What is in my running pack for running ultramarathons or training for a marathon

After three years of using a multi-purpose activity backpack as my running pack, the strap connector broke, and I had to find and stock a new running pack. I use a running pack when I’m doing long runs for marathon or ultramarathon training. I ended up pulling everything out of my old backpack and evaluating whether I still wanted to carry each item on every long run. For the most part, everything got moved over to the new pack. There were a few cases where I had excessive duplicates (more on that, and why, below), and I ended up reducing the quantity. But everything else made the list for what I carry with me on long runs every single time.

  1. Hydration – via a Camelbak or other bladder with a hose (example). I prefer straight water in my hydration pack, and to manage electrolytes and fuel separately. The bonus of just having water is that it’s easier to clean the hydration pack after each run! Tips: put ice cubes in your bladder and fill it with cold water. Cold water is awesome for long, hot runs in the sun. Also, my old hydration pack had an insulated compartment that kept the ice water cold for hours. My new running vest does not, and in fact has holes in the back for air flow, which also means the heat from my back melts my ice pretty fast. My workaround in the new vest is to slide the filled hydration bladder into a padded mailing envelope that’s open at the top. It’s not quite as good as true insulation, but it protects the bladder from some of the heat coming off of your back, and the water probably stays cool 60% as long as before instead of 20%, which is a huge improvement. Extra tip: use a Q-tip or similar to clean out the mouthpiece of your hose every few runs!
  2. Diabetes backups – this means things like a backup insulin pump site. On long unsupported runs, it can also mean my blood glucose meter. (I wear a CGM, so I don’t always take a meter along on runs unless it’s in an unsupported area where I don’t have easy crew access or support within a few miles.) I’ve had several runs where my pump site has stopped working or ripped out, so having a backup pump site is just as necessary as having bandaids. The other backup is extra low carbs, e.g. sugar, in case my blood sugar goes low. I usually keep a stash of carbs in my shorts pocket, but I also keep extra in my backpack in case I run through everything in my pocket. This is in addition to regular food/fuel for ultra fueling; it has to be faster-acting glucose/sugar that can more quickly fix a dropping or already-low blood sugar level. (This is one of the places I mentioned where I had excessive duplicates. I had continued to add extra to my backup stashes and ended up with well over 100 grams of “backup” carbs just in case. I ended up cutting the total amount of carbs down to closer to ~50 grams instead.)

    Emergency backup carbs maybe don't need to be 100g worth

    You can read some more about my strategy for running with diabetes here.

  3. Baggie with extra socks – I always carry a pair of extra socks. Although I’ve never needed them on a normal training long run, I did end up using them in my 50k that involved crossing a river up to my knees five times.
  4. Bandaids – Just like when hiking, I carry bandaids in case of bleeding cuts or scratches or, worse, blisters on my heels, feet, or toes. I carry some that are blister-style and some regular style, smaller ones and larger ones, all the way up to large multi-inch squares that can cover the backs of my heels if I don’t already have them covered. More recently, I also started carrying small squares and strips of kinesiology tape for the same purpose. I originally cut kinesio tape strips in case my knee needed some extra support, but I’ve found the kinesio tape also works well to cover my toes or the backs of my heels in lieu of bandaids for blister prevention. For fixing blisters, I have to dry my feet really well or the kinesio tape doesn’t stick well or easily rubs off; so I tend to cover the toes that blister frequently, as well as my heels, prior to my runs so they’re less likely to develop blisters and require fixing mid-run. I get a large roll of kinesiology tape (example) and cut it into smaller pieces as needed for all of these use cases. I also keep at least one mini individual packet of antibiotic ointment (example) in the baggie as well.
  5. Lubrication – I carry a lubrication stick (Squirrel Nut Butter, because it works for me and is easy to reapply) to make sure my thighs and other areas don’t chafe. When I sweat a lot, I often have to reapply to my thighs every few hours. While this can also be accomplished by carrying dabs of vaseline or your preferred lubrication in a baggie, the SNB stick is lightweight and I don’t mind carrying it, so it’s easy to reapply and the hassle doesn’t get in the way of preventing chafing.
  6. Stuff to fix GI problems – it’s common to have GI issues when running, but I also had a two-year stretch of known GI issues that ultimately turned out to be undiagnosed exocrine pancreatic insufficiency. During this time, I always carried individual packets of Imodium and Gas-X in case I needed them.
  7. Electrolyte pills – I prefer to measure and track electrolytes separately from my hydration, so I use electrolyte pills (example) that I swallow on a schedule to keep my electrolyte levels topped off. I’ve tried the chewable kinds (but they make me burp), so I stick with a baggie full of electrolyte pills. I bring extra just in case I drop some, but I generally eyeball and count them out to make sure I have enough for each super long run.
  8. Any medication you need during the run – For me, that includes enzymes for fuel, because I have exocrine pancreatic insufficiency and need enzymes to help me digest any of my fuel. I have expensive, larger-dose prescription pills that I usually use for meals, but it would make running even more expensive if I had to use a $9 pill every 30 minutes for a fuel snack. Luckily, there are over-the-counter versions of enzyme pills (more about that here) that are single-enzyme or multi-enzyme and are more in the ballpark of $0.35 per pill, and I carry a baggie of both kinds that I use to cover each snack.
  9. Fuel or snacks – A lot of ultra runners use gels, but I have been experimenting with ‘real’ foods. Basically, anything that’s around ~20g of carbs with less than ~10g of fat and 5-10g of protein that I like to eat. So far, that list includes Chili Cheese Fritos, yogurt covered pretzels, peanut butter pretzel nuggets, beef sticks, Honey Stinger Stroopwaffles (the gluten free kinds – beware that only some of their flavors are GF!), mini date or fruit bars, fruit snacks, sweet potato tots, ¼ of a ham and cheese quesadilla, ¼ of a PBJ sandwich, a waffle, mini PayDay bars…. Note that all of these are gluten free versions or are naturally gluten free, because I have celiac disease. I do a lot of work in advance to test these snacks carefully on training runs before I commit to using them repeatedly throughout longer runs, so I know my body likes them during runs as well as other times. I only take the fresh/hot snacks (sweet potato tots, quesadilla, etc.) to eat at the start or when my husband re-fills my pack for me mid-run, so I don’t have to worry about them spoiling. Everything else is shelf stable, so if I pack a few more than I need per run and leave some in my pack, it’s not an issue for them to sit there for weeks until I eat them in my rotation of snacks on a future run.
  10. Miscellaneous other supplies – car keys, house keys, hand sanitizer, a mask for going into trail bathrooms, and a battery and cord for charging my phone.

Phew. That’s a lot of stuff. And yes, it does end up being more supplies and more weight than most people carry. But… I use pretty much everything in my pack every few runs. Stuff happens: pump sites fall out, blisters happen, chafing happens, GI stuff happens… and I’ve found that training and running with a little extra weight in my pack is worth having the proper supplies when I need them, rather than having to end runs early for lack of preparation or the minor supplies that would have let me keep running.

Every time I go out for a run, I add the requisite amount of snacks, enzymes, electrolyte pills, and hydration for the run. Any time I come back from a run having depleted a supply from the above list – such as using my backup pump site – I immediately go refill that supply so I don’t have to remember to do it before the next run. Keeping the above supplies topped off and always ready to go in my backpack means they’re always there when I need them, and the peace of mind of knowing how I can handle – and that I can handle – these situations while running is priceless.

Note: previously I was using a backpack, because it was $30 and for my running it was good enough. However, when the strap broke, I looked to buy the same backpack again and it was $60. It was fine for $30, but if I was going to double the cost, I decided to research alternative running packs and vests. Vests seem to be more common among ultrarunners, so I looked for those, although they’re a lot more expensive (often $125-200). I was disappointed with how small a volume some of them held, or they were just ugly. I liked the look of a purple one I found that came with a 1.5L bladder… but ugh. I fit a 3L bladder in my previous backpack and typically fill it 2-2.5L full as a baseline, and all the way up for a longer (6h+) unsupported run. I decided to risk getting this vest even though it was smaller, and try putting my larger 3L capacity bladder in the new vest. (Luckily it was on sale for $90 at the time, which made it a little less annoying to buy compared to a $150 one.) The bladder does fit, but it sticks out the top and hits the back of my neck if it’s all the way full (3L). So for the most part, I’m filling the 3L capacity bladder about 2L full (and, as noted earlier in this post, putting it inside an insulated envelope to help retain the cold for longer), and that works for me.

One thing I do like a lot about my new running vest is the front pockets. With my old backpack, I had to partially take it off and twist it around me in order to get snacks out. With two large front pockets, I can fit several hours of fuel in there, so there is no twisting involved to get my fuel out, which helps with my goal of fueling every 30 minutes. I do wish there was a separate smaller pouch – my old backpack had a small, old-school-flip-phone-sized “cell phone” pocket that I used to keep my baggies of enzymes and electrolytes in. Right now, I just have those baggies floating around the top of those pockets, and it’s fairly easy to grab and pull out the right baggie, but I’m toying with adding some kind of small strap-on holster/pouch to the shoulder just for enzymes so I don’t have to worry as much about them jostling out when my pockets are completely full of snacks. But otherwise, these front pockets are overall a nice improvement.

A purple running vest on the left; supplies described in blog post in the middle laid out on the ground, and my old purple backpack used for running on the right.
A cat in mid air jumping over the purple running vest in the left of the picture; another cat sitting to the right of the old purple backpack used for running.
Outtake! Mint jumping over my new running vest and running supplies while Mo looks on from the right next to my old running backpack.
A cat sitting on and sniffing the new smells of a new, purple running vest
Mint helpfully inspected my new running vest as soon as I set it on the ground.

A DIY Fuel Enzyme Macronutrient Tracker for Running Ultras (Ultramarathons)

It takes a lot of energy to run ultramarathons (ultras).

To ensure they have enough fuel to complete the run, people usually want to eat X-Y calories per hour, or A-B carbs per hour, while running ultramarathons. It can be hard to know if you’re staying on top of fueling, especially as the hours drag on and your brain gets tired; plus, you can be throwing away your trash as you go so you may not have a pile of wrappers to tell you what you ate.

During training, it may be useful to have a written record of what you did for each run, so you can establish a baseline and work on improving your fueling if that’s something you want to focus on.

For me specifically, I also find it helpful to record what enzyme dosing I am taking, as I have EPI (exocrine pancreatic insufficiency, which you can read more about here) and if I have symptoms it can help me identify where my dosing might have been off from the previous day. It’s not only the amount of enzymes but also the timing that matters, alongside the timing of carbs and insulin, because I have type 1 diabetes, celiac, and EPI to juggle during runs.

Previously, I’ve relied on carb entries to Nightscout (an open source CGM remote monitoring platform which I use for visualizing diabetes data, including OpenAPS data) as a record of what I ate, because I know 15g of carbs maps to a single serving of Chili Cheese Fritos (10g of fat, 2g of protein), for which I take one lipase-only and one pancrelipase (multi-enzyme) pill; and 21g of carbs is a Honey Stinger Gluten Free Stroopwaffle (6g of fat, 1g of protein), for which I typically take one lipase-only pill. You can see from my most recent ultra (a 50k) where I manually took those carb entries and mapped them onto my blood sugar (CGM) graph to visualize what happened in terms of fuel and blood sugar over the course of my ultra.

However, that was “just” a 50k and I’m working toward bigger runs: a 50 mile, maybe a 100k (62 miles), and/or a 100 mile, which means instead of running for 7-8 hours I’ll be running for 12-14 and 24-30(ish) hours! That’s a lot of fuel to need to eat, and to keep track of, and I know from experience my brain starts to get tired of thinking about and eating food around 7 hours. So, I’ll need something better to help me keep track of fuel, enzymes, and electrolytes over the course of longer runs.

I also am planning on being well supported by my “crew” – my husband Scott, who will e-bike around the course of my ultra or my DIY ultra loops and refill my pack with water and fuel. In some cases, with a DIY ultra, he’ll be bringing food from home that I pre-made and he warms up in the microwave.

One of the strategies I want to test is for him to actually hand me the enzymes for the food he’s bringing me. For example, hand me a baggie of mashed potatoes and also hand me the one multi-enzyme (pancrelipase, OTC) pill I need to go with it. That reduces the mental effort of looking up or remembering what enzyme amount I take for mashed potatoes; it also saves me from digging out my baggie of enzymes, getting the pill out and swallowing it, and putting the baggie away without dropping it, all while juggling the snack in my hands.

He doesn’t necessarily know the enzyme counts for each fuel (although he could reproduce them), so it’s better if I pre-make a spreadsheet library of my fuel options; that helps me just pick it off a dropdown and gives him an easy reference to glance at. He won’t be running 50-100 miles, but he will be waking up every 2-3 hours overnight, and that does a number on his brain, too, so it’s easier all around if he can just reference the math I’ve already done!

So, for my purposes – 1) easy real-time tracking of fuel counts (“am I eating according to plan?”), 2) retrospective review (“how did I do overall, and should I do something differently next time?”), and 3) EPI and BG analysis (“what should I do differently if I didn’t get the ideal outcome?”) – it’s ideal to have a tracking spreadsheet to log my fuel intake.

Here’s what I did to build my ultimate fuel self-tracking self-populating spreadsheet:

First, I created a tab in my spreadsheet as a “Fuel Library”, where I listed out all of my fuel. This ranges from snacks (chili cheese Fritos; Honey Stinger Gluten Free Stroopwaffle; yogurt-covered pretzels, etc.); to fast-acting carbs (e.g. Airhead Minis, Circus Peanuts) that I take for fixing blood sugars; to other snack/treats like chocolate candy bars or cookies; as well as small meals and warm food, such as tomato soup or part of a ham and cheese quesadilla. (All gluten free, since I have celiac. Everything I ever write about is always gluten free!)

After I input the list of snacks, I made columns to input the sodium, calories, fat, protein, and carb counts. I don’t usually care about calories but a lot of recommendations for ultras are calories/hr and carbs/hr. I tend to be lower on the carb side in my regular daily consumption and higher on fat than most people without T1D, so I’m using the calories for ultrarunning comparison to see overall where I’m landing nutrient-wise without fixating on carbs, since I have T1D and what I personally prefer for BG management is likely different than those without T1D.

I also input the goal amount of enzymes. I have three different types of pills: a prescription pancrelipase (which I call PERT, short for pancreatic enzyme replacement therapy – when I say PERT, I’m referring to the expensive, prescription pancrelipase that’s been tested and studied for safety and efficacy in EPI); an over-the-counter (OTC) lipase-only pill; and an OTC multi-enzyme pancrelipase pill that contains much smaller amounts of all three enzymes (lipase, protease, amylase) than my PERT, but hasn’t been tested for safety and efficacy for EPI. So, I have three enzyme columns: Lipase, OTC Pancrelipase, and PERT. For each fuel I calculate which I need (usually one lipase, or a lipase plus an OTC pancrelipase, because these single servings are usually fairly low in fat and protein; but for bigger meal-type foods with more protein I may ‘round up’ and choose to take a full PERT, especially if I eat more of it), and input the number in the appropriate column.
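Conceptually, the Fuel Library tab is just a table of foods with macro and enzyme counts that the tracker looks entries up in. A minimal sketch in plain JavaScript (not Apps Script – just to illustrate the data shape), using the two snack examples and enzyme counts mentioned elsewhere in this post; any other field values would be filled in from your own labels:

```javascript
// Minimal sketch of the "Fuel Library" tab as plain data. The macro and
// enzyme counts for these two snacks are the ones described in this post.
const fuelLibrary = [
  { food: "Chili Cheese Fritos", carbs: 15, fat: 10, protein: 2,
    lipase: 1, otcPancrelipase: 1, pert: 0 },
  { food: "Honey Stinger GF Stroopwaffle", carbs: 21, fat: 6, protein: 1,
    lipase: 1, otcPancrelipase: 0, pert: 0 },
];

// Look up a fuel entry by name, mirroring what the tracker tab's
// VLOOKUP does against the library range.
function lookupFuel(name) {
  return fuelLibrary.find((f) => f.food === name) || null;
}
```

Each row of the tracker tab is then just a lookup by the dropdown’s selected name, which is exactly what the VLOOKUP formulas later in this post do.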

Then, I opened another tab on my spreadsheet. I created a row of headers for what I ate (the fuel); time; and then all the macronutrients again. I moved this down to row 3, because I also want to include at the top of the spreadsheet a total of everything for the day.

Example-DIY-Fuel-Enzyme-Tracker-ByDanaMLewis

In Column A, I selected the first cell (A4 for me), then went to Data > Data Validation and clicked on it. It opens this screen, where I input the following: A4 is the cell I’m in for “cell range”; the criteria is “list from a range”; and then I popped over to the tab with my ‘fuel library’ and highlighted the relevant data that I wanted to be in the menu: Food. For my list of food, that was C2-C52. Make sure “show dropdown list in cell” is checked, because that’s what creates the dropdown in the cell. Click save.

Use the data validation section to choose to show a dropbox in each cell

You’ll want to drag that down to apply the drop-down to all the cells you want. Mine now looked like this, and you can see clicking the dropdown shows the menu to tap on.

Clicking a dropbox in the cell brings up the "menu" of food options from my Fuel Library tab

After I selected from my menu, I wanted column B to automatically put in the time. This gets obnoxious: Google Sheets has NOW() to put in the current time, but DO NOT USE THIS, as the formula updates to the latest time any time you touch the spreadsheet.

I ended up having to use a google script (go to “Extensions” > Apps Script, here’s a tutorial with more detail) to create a function called onEdit() that I could reference in my spreadsheet. I use the old style legacy script editor in my screenshot below.

Older style app script editor for adding scripts to spreadsheet, showing the onEdit() function (see text below in post for what the script is)

Code I used, if you need to copy/paste:

function onEdit(e) {
  var rr = e.range;
  var ss = e.range.getSheet();
  var headerRows = 2; // # header rows to ignore
  if (rr.getRow() <= headerRows) return;
  var row = e.range.getRow();
  var col = e.range.getColumn();
  if (col == 1) {
    // when column A (the fuel dropdown) changes, timestamp column B
    e.source.getActiveSheet().getRange(row, 2).setValue(new Date());
  }
}

After saving that script (File > Save), I went back to my spreadsheet and put this formula into the B column cells: =IFERROR(onEdit(),""). It fills in the current date/time (because onEdit tells it to if the A cell has been updated), and otherwise sits blank until it’s been changed.

Note: if you test your sheet, you’ll have to go back and paste in the formula to overwrite the date/time that gets updated by the script. I keep the formula without the “=” in a cell in the top right of my spreadsheet so I can copy/paste it when I’m testing and updating my sheet. You can also find it in a cell below and copy/paste from there as well.

Next, I wanted to populate my macronutrients on the tracker spreadsheet. For each cell in row 4, I used a VLOOKUP with the fuel name from A4 to look at the sheet with my library, and then a column number to reference which data element I want from the fuel library sheet. I actually have things in a different order in my fuel library and my tracking sheet; so if you use my template later on or are recreating your own, pay attention to matching the headers from your tracker sheet and what’s in your library. The formula for this cell ended up being =IFERROR(VLOOKUP(A4,'Fuel Library'!C:K,4,FALSE),""), again designed to leave the cell blank if column A doesn’t have a value; if it does have a value, it prefills this cell with the number from the 4th column of that library range matching the fuel entry. Columns C-J on my tracker spreadsheet all use that formula, each with the column number updated to pull from the correct matching column, to pre-populate my counts in the tracker spreadsheet.

Finally, the last thing I wanted was to easily track when I last ate. I could look at column B, but with a tired brain I want something more obvious that tracks how long it’s been. This is also, again, to help Scott, who will be tasked with helping me stay on top of things, check whether I’m eating regularly and encourage me gently (or less gently) to eat more as the hours wear on in my ultras.

I ended up creating a cell in the header that would track the last entry from column B. To do this, the formula I found is =INDEX(B4:B,MATCH(143^143,B4:B)), which checks for the last number in column B from B4 onward. (143^143 is just an arbitrarily huge number: MATCH, given a value larger than anything in the column, returns the position of the last numeric entry.) It correctly pulls in the latest timestamp on the list.
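In plain JavaScript terms, that INDEX/MATCH pair is just “return the last non-empty value in the timestamp column.” A sketch of the equivalent logic (illustrative only; the sheet does this with the formula above):

```javascript
// Equivalent of INDEX(B4:B, MATCH(143^143, B4:B)): return the last
// non-empty value in a column of timestamps. `column` is an array
// where empty cells are represented as null or undefined.
function lastTimestamp(column) {
  for (let i = column.length - 1; i >= 0; i--) {
    if (column[i] != null) return column[i];
  }
  return null; // no entries logged yet
}
```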

Then, in another cell, I created =NOW()-B2, which is a good use of the NOW() formula I warned about: because it constantly updates every time the sheet gets touched, any time I go to update the sheet it will tell me how long it’s been between now and the last time I ate.

But that only updates every time I update the sheet, so if I just glance at the sheet, it will only be current as of the last time I updated it… which is not what I want. To fix it, I need to change the recalculation settings. Go to File > Settings, click the “Calculation” tab, and change the first dropdown to “On change and every minute”.

Under File > Settings there is a "Calculate" tab with a dropdown menu to choose to update on change plus every minute

Now it does what I want, updating that cell that uses the NOW() formula every minute, so this calculation is up to date even when the sheet hasn’t been changed!

However, I also decided I want to log electrolytes in my same spreadsheet, but not include them in my top “when did I last eat” calculator. So, I created column K and inserted the formula =IF(A4="Electrolytes","",B4), which checks whether the dropdown menu selection was Electrolytes. If so, it doesn’t do anything; if it’s not electrolytes, it repeats the B4 value, which is my formula that puts in the date and time. Then, I changed B2 to index and match on column K instead of B. My B2 formula is now =INDEX(K4:K,MATCH(143^143,K4:K)), because K now has the food-only list of date and time stamps that I want in my “when did I last eat” tracker. (If you don’t log electrolytes or don’t have anything else to exclude, you can keep B2 indexing and matching on column B. But if you want to exclude anything, you can follow my example of using an additional column (my K) to check for the things you do want to include and exclude the ones you don’t. Also, you can hide columns you don’t want to see, so column K – or your ‘check for exclusions’ column, wherever it ends up – could be hidden from view so it doesn’t distract your brain.)
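The electrolyte exclusion plus the last-entry lookup amount to “find the timestamp of the most recent row that was actual food.” A plain-JavaScript sketch of that combined logic (illustrative only; the sheet does this with the column-K formula plus INDEX/MATCH):

```javascript
// Combines the column-K exclusion (skip "Electrolytes" rows) with the
// B2 last-entry lookup: return the time of the most recent food entry.
// `rows` is an array of { food, time } objects in log order.
function lastFoodTime(rows) {
  for (let i = rows.length - 1; i >= 0; i--) {
    if (rows[i].food !== "Electrolytes") return rows[i].time;
  }
  return null; // nothing but electrolytes (or nothing at all) logged
}
```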

I also added conditional formatting to my tracker. Anytime A2, the time since eaten cell, is between 0-30 minutes, it’s green: indicating I’m on top of my fueling. 30-45 minutes it turns yellow as a warning that it’s time to eat. After 45 minutes, it’ll turn light red as a strong reminder that I’m off schedule.
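Those conditional formatting rules are effectively a three-way threshold on the “minutes since eaten” cell. A plain-JavaScript sketch of the same logic (the exact behavior at the 30- and 45-minute boundaries is my assumption; the sheet defines it via its formatting rules):

```javascript
// Mirrors the conditional formatting on A2, the "time since eaten" cell:
// 0-30 min = green (on top of fueling), 30-45 min = yellow (time to eat),
// beyond 45 min = red (off schedule).
function fuelStatusColor(minutesSinceEaten) {
  if (minutesSinceEaten <= 30) return "green";
  if (minutesSinceEaten <= 45) return "yellow";
  return "red";
}
```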

I kept adding features, such as totaling my sodium consumption per hour, so I could track electrolytes+fuel sodium totals. Column L gets the formula =IF(((ABS((NOW()-B4))*1440)<60),F4,"") to take the difference between the current time and the fuel entry, multiply it by 1440 to convert from days to minutes, and check that it’s less than 60 minutes. If it is, it prints the sodium value, and otherwise leaves the cell blank. (You could skip the ABS part; I was testing current, past, and future values and wanted it to stop throwing errors for future times, which made the first argument negative.) Then, in C2, I calculate the total sodium for that hour by summing those values with =SUM(L4:L).
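The column-L-plus-SUM pattern boils down to “sum the sodium of entries logged within the last 60 minutes.” A plain-JavaScript sketch of that calculation, working directly in minutes rather than the sheet’s day-fractions multiplied by 1440 (field names here are illustrative):

```javascript
// Equivalent of column L plus the SUM in C2: total the sodium from
// entries within the last hour. `entries` is an array of
// { timeMinutes, sodiumMg } objects; `nowMinutes` is the current time.
// ABS mirrors the sheet formula's guard against future-dated test rows.
function sodiumLastHour(entries, nowMinutes) {
  return entries
    .filter((e) => Math.abs(nowMinutes - e.timeMinutes) < 60)
    .reduce((sum, e) => sum + e.sodiumMg, 0);
}
```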

(I thought about tracking the past sodium-per-hour values to average and see how I did throughout the run on an hourly basis… but so far, on the 3 long runs where I’ve used the spreadsheet, the very fact that I am using the tracker and glancing at the hourly total has kept me well on top of sodium, so I haven’t needed that yet. However, if I eventually have long enough runs where this is an issue, I’ll probably go back and have it calculate the absolute hourly sodium totals for retrospective analysis.)

This works great in the Google Sheets app on my phone, which is how I’ll be updating it during my ultras, although Scott can have it open on a browser tab when he’s at home working at his laptop. Every time I go for a long training run, I duplicate the template tab and label it with the date of the run and use it for logging my fueling.

(PS – if you didn’t know, you can rearrange the order of tabs in your sheet, so you can drag the one you want to be actively using to the left. This is useful in case the app closes on your phone and you’re re-opening the sheet fresh, so you don’t have to scroll to re-find the correct tab you want to be using for that run. In a browser, you can either drag and drop the tabs, or click the arrow next to the tab name and select “move left” or “move right”.)

Clicking the arrow to the right of a tab name in google sheets brings up a menu that includes the option to move the tab left or right

Click here to make a copy of my spreadsheet.

If you click to make a copy of a google spreadsheet, it pops up a link confirming you want to make a copy, and also might bring the app script functionality with it. If so, you can click the button to view the script (earlier in the blog post). If it doesn't include the warning about the script, remember to add the script yourself after you make a copy.

Take a look at my spreadsheet after you make a copy (click here to generate a copy if you didn’t use the previous mentioned link), and you’ll note in the README tab a few reminders to modify the fuel library and make sure you follow the steps to ensure that the script is associated with the sheet and validation is updated.

Obviously, you may not need lipase/pancrelipase/PERT and enzyme counts; if you do, your counts, types, and quantities of enzymes will need updating. You may not need or want all of these macronutrients, and you’ll definitely be eating different fuel than I am, so update it however you like with what you’re eating and what you want to track.

This spreadsheet and the methods for building it can also be used for other purposes, such as tracking wait times or how long it took you to do something, etc.

(If you do find this blog post and use this spreadsheet concept, let me know – I’d love to hear if this is useful for you!)

2022 Strawberry Fields Forever Ultramarathon Race Report Recap

I recently ran my second-ever 50k ultramarathon. This is my attempt to provide a race recap or “race report”, which in part is to help people in the future considering this race and this course. (I couldn’t find a lot of race reports investigating this race!)

It’s also an effort to provide an example of how I executed fueling, enzyme dosing (because I have exocrine pancreatic insufficiency, known as EPI), and blood sugar management (because I have type 1 diabetes), because there’s also not a lot of practical guidance or examples of how people do this. A lot of it is individual, and what works for me won’t necessarily work for anyone, but if anything hopefully it will help other people feel not alone as they work to figure out what works for them!

Context of my running and training in preparation

I wrote quite a bit in this previous post about my training last year for a marathon and my first 50k. Basically, I’m slow, and I also choose to run/walk for my training and racing. This year I’ve been doing 30:60 intervals, meaning I run 30 seconds and walk 60 seconds.

Due to a combination of improved training (and having a year of training last year), as well as now having recognized that I was not getting sufficient pancreatic enzymes (so I was not digesting and using the food I was eating effectively), this year has been going really well. I ended up training as far as a practice 50k about 5 weeks out from my race. I did several more mid- to high-20 mile runs as well. I also did a next-day run following my long runs, starting around 3-4 miles and eventually increasing to 8 miles the day after my 50k. The goal of these next-day runs was to practice running on tired legs.

Overall, I think this training was very effective for me. My training runs were easy paced, and I always felt like I could run more after I was done. I recovered well, and the next-day runs weren’t painful and I did not have to truncate or skip any of those planned runs. (Previous years, running always felt hard and I didn’t know what it was like to recover “well” until this year.) My paces also increased to about a minute/mile faster than last year’s easy pace. Again, that’s probably a combination of increased running overall and better digestion and recovery.

Last year I chose to run a marathon and then do a 50k while I was “trained up” for my marathon. This year, I wanted to do a 50k as a fitness assessment on the path to a 50 mile race this fall. I looked for local-ish 50k options that did not have much elevation, and found the Strawberry Fields Forever Ultra.

Why I chose this race, and the basics about this race

The Strawberry Fields Forever Ultra met most of my goal criteria, including that it was around the time that I wanted to run a 50k, so that I had almost 6 months to train and also before it got to be too hot and risked being during wildfire smoke season. (Sadly, that’s a season that now overlaps significantly with the summers here.) It’s local-ish, meaning we could drive to it, although we did spend the night before the race in the area just to save some stress the morning of the race. The race nicely started at 9am, and we drove home in the evening after the race.

The race is on a 10k (6.2 miles) looped course in North Bonneville, Washington, and hosted a 10k event (1 lap), a 50k event (5 laps), and also had 100k (10 laps) or (almost) 100 miles (16 laps). It does have a little bit of elevation – or “little” by ultramarathon standards. The site and all reports describe one hill and net 200 feet of elevation gain and loss. I didn’t love the idea of a 200 foot hill, but thought I could make do. It also describes the course as “grass and dirt” trails. You’ll see a map later where I’ve described some key points on the course, and it’s also worth noting that this course is very “crew-able”. Most people hang out at the start/finish, since it’s “just” a 10k loop and people are looping through pretty frequently. However, if you want to, either for moral or practical support, crew could walk over to various points, or my husband brought his e-bike and biked around between points on the course very easily using a mix of the other trails and actual roads nearby.

The course is well marked. Any turn had a white sign with a black arrow on it and also white arrows drawn on the ground, and there were dozens of little red/pink fluorescent flags marking the course. Any time there was a fork in the path, these flags (usually 2-3 for emphasis, which was excellent for tired brains) would guide you to the correct direction.

The nice thing about this race is it includes the 100 mile option and that has a course limit of 30 hours, which means all the other distances also have this course limit of 30 hours. That’s fantastic when a lot of 50k or 50 mile (or 100k, which is 62 miles) courses might have 12 hour or similar tighter course limits. If you wanted to have a nice long opportunity to cover the distance, with the ability to stop and rest (or nap/sleep), this is a great option for that.

With the 50k, I was aiming to match or ideally beat my time from my first 50k, recognizing that this course is harder given the terrain and hill. However, I think my fitness is higher, so beating that time even with the elevation gain seemed reasonable.

Special conditions and challenges of the 2022 Strawberry Fields Forever Ultramarathon

It’s worth noting that in 2021 there was a record abnormal heat wave due to a “heat dome” that made it 100+ degrees (F) during the race. Yikes. I read about that and I am not willing to run a race when I have not trained for that type of heat (or any heat), so I actually waited until the week before the race to officially sign up after I saw the forecast for the race. The forecast originally was 80 F, then bounced around mid 60s to mid 70s, all of which seemed doable. I wouldn’t mind some rain during the race, either, as rainy 50s and 60s is what I’ve been training in for months.

But just to make things interesting, for the 2022 event the Pacific Northwest got an “atmospheric river” that dumped inches of rain on Thursday… and Friday. Gulp. Scott and I drove down to spend the night Friday night before the race, and it was dumping hard rain. I began to worry about the mud that would be on the course before we even started the race. However, the rain finished overnight and we woke up to everything being wet, but not actively raining. It was actually fairly warm (60s), so even if it drizzled during the race it wouldn’t be chilly.

During the start of the race, the race director said we would get wet and joked (I thought) about practicing our backstroke. Then the race started, and we took off.

My race recap / race report the 2022 Strawberry Fields Forever Ultramarathon

I’ve included a picture below that I was sent a month or so before the race when I asked for a course map, and a second picture because I also asked for the elevation profile. I’ve marked with letters (A-I) points on the course that I’ll describe below for reference, and we ran counterclockwise this year so the elevation map I’ve marked with matching letters where “A” is on the right and “I” is on the left, matching how I experienced the course.

The course is slightly different in the start/finish area, but otherwise is 95% matching what we actually ran, so I didn’t bother grabbing my actual course map from my run since this one was handy and a lot cleaner than my Runkeeper-derived map of the race.

Annotated course map with points A-I
StrawberryFieldsForever-Ultra-Elevation-Profile

My Runkeeper elevation profile of the 50k (5 repeated laps) looked like this:
Runkeeper elevation profile of 5 loops on the Strawberry Fields Forever 50k course

I’ll describe my first experience through the course (Lap 1) in more detail, then a couple of thoughts about the experiences of the subsequent laps, in part to describe fueling and other choices I made.

Lap 1:

We left the start by running across the soccer field and getting on a paved path that hooked around the ballfield and then headed out a gate and up The Hill. This was the one hill I thought was on the course. I ran a little bit and passed a few people who walked on a shallower slope, then I also converted to a walk for the rest of the hill. It was the most crowded race start I’ve done, because there were so many people (150 across the 10k, 50k, 100k, and 100 miler) and such a short distance between the start and this hill. The Hill, as I thought of it, is point A on the course map.

Luckily, heading up the hill there are gorgeous purple wildflowers along the path and mountain views. At the top of the hill there are some benches at the point where we took a left turn and headed down the hill, going down the same elevation in about half a mile so it was longer than the uphill section. This downhill slope (B) was very runnable and gravel covered, whereas going up the hill was more dirt and mud.

At the bottom of the hill, there was a hairpin turn and we turned and headed back up the hill, although not all the way up, running more along a plateau in the side of the hill. The “plateau” is point C on the map. I thought it would be runnable once I got back up the initial hill, but it was mud pit after mud pit, and I would have two steps of running in between mud pits to carefully walk through. It was really frustrating. I ended up texting my parents and Scott that it was about 1.7 miles of mud (from the uphill, and the plateau) before I got to some gravel that was more easily runnable. Woohoo for gravel! This was a nice, short downhill slope (D) before we flattened out and switched back to dirt and more mud pits.

This was the E area, although it did feel more runnable than the plateau because there were longer stretches between muddy sections.

Eventually, we saw the river and came out from the trail into a parking lot, then jogged over onto the trail that parallels the river for a while. This trail, which I thought of as “River Road” (starting around point F), is just mowed grass and runs along a sharp bluff drop-off, with openings where people would be down at the river fishing; in some cases we were running *underneath* fishing lines strung from the parking spots down to the river! There were a few people walking back and forth from cars to the river, but in general they were all very courteous and there was no obstruction of the trail. Despite the mowed grass, this stretch physically and psychologically felt easier because there were no mud pits for 90% of it. Near the end there were a few muddy areas, right about the point we hopped back over onto the road to connect to a gravel road for a short spurt.

This year, the race actually put a bonus aid station out here. I didn’t partake, but they had a tent up with two volunteers who were cheerful and kind to passing runners, and it looked like they had giant jugs of Gatorade or water, bottled water, and some sugared soda. They probably had other stuff, but that’s just what I saw when passing.

After that short gravel road bit, we turned back onto a dirt trail that led us to the river. Not the big river we had been running next to, but the place where the Columbia River overflowed the trail and we had to cross it. This is what the race director meant by practicing our backstroke.

You can see a video in this tweet of how deep the water was and how far across you had to go in this river crossing (around point G, but hopefully in future years this isn’t a point of interest on the map!!)

Showing a text on my watch of my BIL warning me about a river crossing

Coming out of the river, my feet were like blocks of ice. I cheered up at the thought that I had finished the wet feet portion of the course and I’d dry off before I looped back around and hit the muddy hill and plateau again. But, sadly, just around the next curve, came a mud POND. Not a pit, a pond.

Showing how bad the mud was

Again, ankle deep water and mud, not just once but in three different ponds all within 30 seconds or so of each other. It was really frustrating, and obviously you can’t run through them, so it slowed you down.

Then finally after the river crossing and the mud ponds, we hooked a right into a nice, forest trail that we spent about a mile and a half in (point H). It had a few muddy spots like you would normally expect to get muddy on a trail, but it wasn’t ankle deep or water filled or anything else. It was a nice relief!

Then we turned out of the forest and crossed a road and headed up one more (tiny, but it felt annoying despite how small it looks on the elevation profile) hill (point I), ran down the other side of that slope, stepped across another mud pond onto a pleasing gravel path, and took that path about .3 miles back to complete the first full lap.

Phew.

I actually made pretty good time the first loop despite not knowing about all the mud or river crossing challenges. I was pleased with my time which was on track with my plan. Scott took my pack about .1 miles before I entered the start/finish area and brought it back to me refilled as I exited the start/finish area.

Lap 2:

The second lap was pretty similar. The Hill (A) felt remarkably harder after having experienced the first loop. I did try to run more of the downhill (B), knowing I’d make up some time from the walking climb, and knowing I couldn’t run the plateau or through some of the mud pits along it (C) as well as I had expected. I also decided running through the mud pits didn’t work, and went with the safer approach of stepping through them and then running the two steps in between. I was a little slower this time, but still a reasonable pace for my goals.

The rest of the loop was roughly the same as the first, the mud was obnoxious, the river crossing freezing, the mud obnoxious again, and relief at running through the forest.

Scott met me at the end of the river road and biked along the short gravel section with me, then went ahead so he could park his bike and take video of my second river crossing, which is the video above. I was thrilled to have video of that, because the static pictures of the river crossing didn’t do the depth and breadth of the water justice!

At the end of lap 2, Scott grabbed my pack again at the end of the loop and said he’d figured out where to meet me to give it back to me after the hill…if I wanted that. Yes, please! The bottom of the hill where you hairpin turn to go back up the plateau is the 1 mile marker point, so that means I ran the first mile of the third lap without my pack, and not having the weight of my full pack (almost 3L of water and lots of snacks and supplies: more on that pack below) was really helpful for my third time up the hill. He met me as planned at the bottom of the downhill (B) and I took my pack back which made a much nicer start to lap 3.

Lap 3:

On lap 3, for some reason, I came out of the river crossing and the mud ponds feeling like I had gotten extra mud in my right shoe. It felt gritty around the right side of my right foot, and I was worried about having been running for so many hours with soaked feet. I decided to stop at a bench in the forest section and swap for dry socks. In retrospect, I wish I had stopped somewhere else, because I got swarmed by these moth/gnat/mosquito things that looked gross (dozens on my leg within a minute of sitting there) and that I couldn’t brush off effectively while I was trying to remove my gaiters, untie my shoes, take my shoes off, peel my socks and bandaids and lambs wool off, put lubrication back on my toes, put more lambs wool on my toes, put the socks and shoes back on, and re-do my gaiters. Sadly, it took me 6 minutes despite moving as fast as I could to do all of those things. (This was a high, weirdly designed bench in a shack that looked like a bus stop in the middle of the woods, so it wasn’t the best place to sit, but I thought it was better than sitting on the ground.)

(The bugs didn’t hurt me at the time, but two days later my dozens of bites all over my leg are red and swollen, though thankfully they only itch when they have something chafing against them.)

Anyway, I stood up and took off again, frustrated knowing that the stop had taken 6 minutes and basically eaten the margin of time I had against my previous 50k time. I saw Scott about a quarter of a mile later, right as I realized I had also somewhere lost my baggie of electrolyte pills. Argh! I didn’t have backups for those (although I had given Scott backups of everything else), so that spiked my stress levels, as I was due for some electrolytes and wasn’t sure how I’d do with 3 or so more hours without them.

I gave Scott my pack and tasked him with checking my brother-in-law’s setup to see if he had spare electrolytes, while he was refilling my pack to give me in lap 4.

Lap 4:

I was pretty grumpy given the sock timing and the electrolyte mishap as I headed into lap 4. The hill still sucked, but I told myself “only one more hill after this!” and that thought cheered me up.

Scott had found two electrolyte options from my brother-in-law and brought them to me at the end of mile 1 (again, at the bottom of the B slope) along with my pack: two chewable tabs and two pills to swallow, so I had options for electrolytes. I chewed the first electrolyte tab as I headed up the plateau, and again talked myself through the mud pits with “only one more time through the mud pits after this!”.

I also tried overall to bounce back from the end of lap 3, where I had let myself get frustrated, and to take more advantage of the runnable parts of the course. I ran more of the downhill (B) than on previous laps, mostly ignoring the audio cues of my 30:60 intervals and probably running more like 45:30 or so. Similarly, I ran most of the downhill gravel after the mud pits (D) without paying attention to the audio run cues.

This time Scott also met me at the start of the river road section, and I gave him my pack again and asked him to take some things out that he had put in. He had put in a bag with two pairs of replacement socks instead of just one, and also an extra beef stick that I didn’t ask for. I asked him to remove it, and he did, but he explained he had put it in just in case he didn’t find the electrolytes, because it had 375mg of sodium. (Sodium is the electrolyte I am primarily sensitive to and care most about.) So this was actually a smart thing, although because I haven’t practiced eating larger amounts of protein (and the enzyme dosing to match) on the run, I would be pretty nervous about eating it in a race, so it made me a bit unnecessarily grumpy. Overall though, it was great to see him extra times on the course at this point, and I don’t know if he noticed how grumpy I was, but if he did he ignored it, and I cheered up again knowing I only had “one more” of everything after this lap!

The other thing that helped was that he biked my pack down the road to just before the river crossing, so I ran the river road section without a pack, like I had done the hill on laps 3 and 4. This gave me more energy, and I found myself adding 5-10 seconds to the start of my run intervals to extend them.

The 4th river crossing was no less obnoxious and cold, but this time it and the mud ponds didn’t seem to embed grit inside my shoes, so I knew I would finish with the same pair of socks and not need another change to finish the race.

Lap 5:

I was so glad I was only running the 50k so that I only had 5 laps to do!

For the last lap, I was determined to finish strong. I thought I had a chance of making up a tiny bit of the sock change time that I had lost. I walked up the hill, but again ran more than my scheduled intervals downhill, grabbed my bag from Scott, picked my way across the mud pits for the final time (woohoo!), ran the downhill and ran a little long and more efficiently on the single track to the river road.

Scott took my pack again at the river road, and I swapped my intervals to be 30:45, since I was already running closer to that and I knew I only had 3.5 or so miles to go. I took my pack back at the end of river road and did my last-ever ice cold river crossing and mud pond extravaganza. After I left the last mud pond and turned into the forest, I switched my intervals to 30:30. I managed to keep my 30:30 intervals and stayed pretty quick – my last mile and a half was the fastest of the entire race!

I came into the finish line strong, as I had hoped to finish. Woohoo!

Overall strengths and positives from the race

Overall, running-wise I performed fairly well. I had a strong first lap and decent second lap, and I got more efficient on the laps as I went, staying focused and taking advantage of the more runnable parts of the course. I finished strong, with 30:45 intervals for over a mile and 30:30 intervals for over a mile to the finish.

Also, I didn’t quit after experiencing the river crossing and the mud ponds and the mud pits of the first lap. This wasn’t an “A” race for me or my first time at the distance, so it would’ve been really easy to quit. I probably didn’t partly because we had paid to spend the night before nearby and driven all that way, and I didn’t want to have “wasted” Scott’s time by quitting when I was very capable of continuing and wasn’t injured. But mostly I’m proud of the way I handled the challenges of the course, and of how I readjusted from the mental low and frustration after realizing how long my sock change took in lap 3. I’m also pleased that I didn’t get injured, given the terrain (mud, river crossing, and uneven grass to run on for most of the course). And I’m pleased and amazed I didn’t hurt my feet, cause major blisters, or have anything really happen to them after hours of wet, muddy, never-drying-off feet.

The huge positive was my fueling, electrolytes, and blood glucose management.

I started taking my electrolyte pills, which have 200+mg of sodium, at about 45 minutes into the race, on schedule. My snack choices also have 100-150mg of sodium each, which allowed me to not take electrolyte pills as often as I would otherwise need to (especially compared to a hotter day with more sweat – it was a damp, mid-60s day and I didn’t sweat as much as I usually do). Even after losing my electrolyte baggie, I used two chewable 100mg sodium electrolytes instead and still ended up with sufficient electrolytes. Even with ideal supplementation, I’m very sensitive to sodium losses and am a salty sweater, and I have a distinct feeling when my electrolytes are insufficient, so not having that feeling during or after the race was a big positive for me.
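As a rough sketch, the sodium math works out something like this. The per-item amounts are the ones mentioned above; the counts in the example are illustrative assumptions, not an exact log of what I took:

```python
# Back-of-the-envelope sodium tally for a race day.
# Per-item sodium values are from the text; counts are illustrative.
ELECTROLYTE_PILL_MG = 200     # full-size pill: 200+ mg sodium
CHEWABLE_TAB_MG = 100         # borrowed chewable tab: 100 mg sodium
SNACK_SODIUM_MG = (100, 150)  # each snack contributes roughly 100-150 mg

def total_sodium(pills, chewables, snacks):
    """Return a (low, high) estimate of sodium intake in mg."""
    fixed = pills * ELECTROLYTE_PILL_MG + chewables * CHEWABLE_TAB_MG
    return (fixed + snacks * SNACK_SODIUM_MG[0],
            fixed + snacks * SNACK_SODIUM_MG[1])

# e.g. 2 pills (before losing the baggie), 2 chewables, 7 snacks:
low, high = total_sodium(pills=2, chewables=2, snacks=7)
```

On a cooler, low-sweat day like this one, the snacks do a surprising amount of the electrolyte work, which is why losing the pill baggie wasn’t a disaster.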

So was my fueling overall. The race started at 9am, and I woke up at 6am to eat my usual pre-race breakfast (a handful of pecans, plus my enzyme supplementation) so that it would both digest effectively and also be done hitting my blood sugar by the time the race started. My BGs were flat in the 120s or 130s when I started, which is how I like them.

I took my first snack about an hour and 10 minutes into the race: about 15g carb (10g fat, 2g protein) of chili cheese flavored Fritos. I didn’t dose any insulin for it, as I was in range, and I took one lipase-only enzyme (which covers about 8g of fat for me) and one multi-enzyme (which covers about 6g of fat and probably over a dozen grams of protein). My second snack was an hour later: a gluten free salted caramel Honey Stinger stroopwaffle (21g carb, 6g fat, 1g protein). For the stroopwaffle I only took a lipase-only pill to cover the fat, even though there’s 1g of protein. I seem to be ok (or have no symptoms) with 2-3g of uncovered fat and 1-2g of uncovered protein; anything more than that I like to dose enzymes for, although it depends on the situation. Throughout the day, I always did 1 lipase-only and 1 multi-enzyme for the Fritos, and 1 lipase-only for the stroopwaffle, and that seemed to work fine for me.

I think I did a 0.3u bolus (less than a third of the total insulin I would normally need) for my stroopwaffle, because I was around 150 mg/dL at the time, having risen following my uncovered Frito snack, and I thought I would need a tiny bit of insulin. This was perfect, and I came back down and flattened out. An hour and 20 minutes after that, I did another round of Fritos. An hour or so after that, a second stroopwaffle – but this time I didn’t dose any insulin for it, as my BG was on a downward slope. An hour later, more Fritos.

A little bit after that, I did my one and only sugar-only correction (an 8g carb Airhead mini), as I was still sliding down toward 90 mg/dL; while that’s nowhere near low, I thought my Fritos might hit a little late and I wanted to be sure I didn’t experience the feeling of a low. This was during the latter half of loop 4, when I was starting to increase my intensity, so I also knew I’d likely burn a little more glucose and it would balance out – and it did! I did one last round of Fritos during lap 5.
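My per-snack enzyme logic can be sketched roughly as a rule of thumb. The coverage numbers below are my personal estimates from above (using the conservative end of my tolerance ranges so the examples match what I actually took); this is purely illustrative, not dosing guidance for anyone else:

```python
# Rough sketch of a per-snack enzyme heuristic, using personal estimates.
# These numbers are individual and illustrative only.
LIPASE_FAT_G = 8         # one lipase-only pill covers ~8 g fat (for me)
MULTI_FAT_G = 6          # one multi-enzyme covers ~6 g fat...
MULTI_PROTEIN_G = 12     # ...and roughly a dozen grams of protein
FAT_TOLERANCE_G = 3      # ~2-3 g fat can go uncovered without symptoms
PROTEIN_TOLERANCE_G = 1  # conservative end of my ~1-2 g protein tolerance

def enzymes_for_snack(fat_g, protein_g):
    """Return (lipase_only, multi_enzyme) pill counts for a snack."""
    # Take a multi-enzyme if the protein exceeds what I tolerate uncovered.
    multi = 1 if protein_g > PROTEIN_TOLERANCE_G else 0
    # The multi-enzyme also covers some fat; top up the rest with lipase.
    uncovered_fat = fat_g - multi * MULTI_FAT_G
    lipase = 0
    while uncovered_fat > FAT_TOLERANCE_G:
        lipase += 1
        uncovered_fat -= LIPASE_FAT_G
    return lipase, multi
```

For the Fritos (10g fat, 2g protein) this yields 1 lipase-only plus 1 multi-enzyme, and for the stroopwaffle (6g fat, 1g protein) just 1 lipase-only, matching what I actually took on race day.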
CGM graph during 50k ultramarathon

This all worked perfectly. I had 100% time in range between 90 and 150 mg/dL, even with 102g of “real food” carbs (15g x 4 servings of Fritos, 21g x 2 waffles) plus one 8g Airhead mini – 110 grams of carbs in total across ~7+ hours. This perfectly matched my needs for my run/walk moderate effort.
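As a quick sanity check, the totals work out as described (treating the race as roughly 7 hours):

```python
# Sanity-check the race-day carb totals described above.
fritos_carbs = 15 * 4   # 4 servings of Fritos at 15 g carb each
waffle_carbs = 21 * 2   # 2 stroopwaffles at 21 g carb each
airhead_carbs = 8       # 1 Airhead mini correction

real_food = fritos_carbs + waffle_carbs    # 102 g of "real food" carbs
total = real_food + airhead_carbs          # 110 g total

hours = 7                                  # approximate race duration
carbs_per_hour = total / hours             # ~15.7 g/hour
```

That works out to roughly 15-16 g of carbs per hour, which is modest compared to typical ultrarunning fueling targets but matched my moderate run/walk effort.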

BG and carb intake plotted along CGM graph during 50k ultramarathon

I also nailed the enzymes: I didn’t have any GI-related symptoms during the race, and none after the race or the next day (which is the ultimate verdict for me with EPI).

So it seems like my practice and testing with low carbs, Fritos, and waffles worked out well! I had a few other snacks in my pack (yogurt-covered pretzels, peanut butter pretzel nuggets), but I never thought of wanting them or wanting something different. I did plan to try to do 2 snacks per hour, but I ended up doing about 1 per hour. I probably could have tolerated more, but I wasn’t hungry, my BGs were great, and so although it wasn’t quite according to my original plan I think this was ideal for me and my effort level on race day.

The final thing I think went well was deciding on the fly after loop 2 to have Scott take my pack until after the hill (so I ran the up/downhill mile without it), and then for additional stretches along river road in laps 4 and 5. I had the pocket of my shorts packed with dozens of Airheads and mints, so I was fine in terms of blood sugar management and definitely didn’t need things for a mile at a time. I’m usually concerned about staying hydrated and having water whenever I want to sip, plus for swallowing electrolyte and enzyme pills to go with my snacks, but given the number of points on this course where Scott could meet me (after B, at F all through G, and from I to the finish), I could have gotten away with not having my pack the whole time, carrying WAY less water (I definitely didn’t need to haul 3L the whole time; that was for if I might not see Scott for 2-3 laps), and carrying only one of each snack at a time.

Areas for improvement from my race

I trained primarily on gravel or paved trails and roads, but despite the “easy” elevation profile and terrain, this was essentially my first trail ultra. I coped really well with the terrain, but the cognitive burden of all the challenges (Mud pits! River crossing! Mud ponds!) added up. I’d probably do a little more trail running and hills (although I did some) in the final weeks before the race to help condition my brain a little more.

I’ll also continue to practice fueling so I can eat more regularly than every hour to an hour and a half. Even though this was the most I’ve ever eaten during a run, I did well with the quantities, and my enzyme and BG management were also A+. But I didn’t eat as much as I planned for, and I think eating more might’ve helped with the cognitive fatigue, too, by at least 5-10%.

I also now have the experience of a “stop” during a race, in this case to swap my socks. I had only run one ultra before and had never stopped mid-race for gear changes, so that experience was probably good prep for future stops, although I do want to be mentally stronger and less frustrated by unanticipated problem-solving stops.

Specific to this course, as mentioned above, I could’ve gotten away with less supplies – food and water – in my pack. I actually ran a Ragnar relay race with a group of fellow T1s a few years back where I finished my run segment and…no one was there to meet me. They went for Starbucks and took too long to get there, so I had to stand in the finishing chute waiting for 10-15 minutes until someone showed up to start the next run leg. Oh, and that happened in two of the three legs I ran that day. Ooof. Standing there tired, hot, with nothing to eat or drink, likely added to my already life-with-type-1-diabetes-driven-experiences of always carrying more than enough stuff. But I could’ve gotten away very comfortably with carrying 1L of water and one set of each type of snacks at a time, given that Scott could meet me at 1 mile (end of B), start (F) and end of river road (before G), and at the finish, so I would never have been more than 2-2.5 miles without a refill, and honestly he could’ve gotten to every spot on the trail barring the river crossing bit to meet me if I was really in need of something. Less weight would’ve made it easier to push a little harder along the way. Basically, I carried gear like I was running a solo 30 mile effort at a time, which was safe but not necessary given the course. If I re-ran this race, I’d feel a lot more comfortable with minimal supplies.

Surprises from my race

I crossed the finish line, stopped to get my medal, then waited for my brother-in-law to finish another lap (he ran the 100k: 62 miles) before Scott and I left. I sat down for 30 minutes and then walked to the car, but despite sitting for a while, I was not as stiff and sore as I expected. And getting home after a 3.5 hour car ride…again, I was shocked at how minimally stiff I was walking into the house. The next morning? More surprise at how little stiffness and soreness I had. By day 3, I felt like I had run a normal week the week prior. So in general, I think this is reinforcement that I trained really well for the distance; my long runs up to 50k, and the short-to-medium next-day runs, also likely helped. I physically recovered well, which is again partly training but also probably better fueling during the race – and, of course, now digesting everything that I ate during and after the race with enzyme supplementation for EPI!

However, the interesting (almost negative, but mostly interesting) thing for me has been what I perceived to be adrenal-type or stress-hormone fatigue. I think it’s because I was unused to focusing on challenging trail conditions for so many hours, compared to running the same number of hours on “easy” paved or gravel trails. I actually didn’t listen to an audiobook, music, or podcast for about half of the race, because I was so stimulated by the course itself. What I feel is adrenal fatigue isn’t just being physically or mentally tired, but something different that I haven’t experienced before. I’m listening to my body and resting a lot, and I waited until day 4 to do my first easy, slow run with much longer walk intervals (30s run, 90s walk instead of my usual 30:60). Days 1 and 2 had a lot of fatigue and I didn’t feel like doing much; day 3 had notable improvement in fatigue, and my legs and body physically felt back to normal for me. Day 4 I ran slowly; day 5 I stuck with walking and felt more fatigue but no physical issues; day 6 I again chose to walk because I didn’t feel like my energy had fully returned. I’ll probably stick with easy, longer-walk-interval runs for the next week or two, with fewer days running, until I feel like my fatigue is gone.

General thoughts about ultramarathon training and effective ultra race preparation

I think preparation makes a difference in ultramarathon running. Or maybe that’s just my personality? But a lot of my goal for this race was to learn what I could about the course and the race setup, imagine and plan for the experience I wanted, plan for problem solving (blisters, fuel, enzymes, BGs, etc), and be ready and able to adapt while being aware that I’d likely be tired and mentally fatigued. Generally, any preparation I could do in terms of deciding and making plans, preparing supplies, etc would be beneficial.

Some of the preparation included making lists in the weeks prior about the supplies I’d need in my pack, what Scott should have to refill my pack, what I’d need the night and morning before since we would not be at home, and after-race supplies for the 3.5h drive home.

From the lists, the week before the race I began grouping things. I had my running pack filled and ready to go. I packed my race outfit in a gallon bag and a full set of backup clothes in another gallon bag and labeled them, along with a separate post-run outfit and flip flops for the drive home. I also included a washcloth for wiping sweat or mud off after the run, and I certainly ended up needing that! I packed an extra pair of shoes and about 4 extra pairs of socks. I also had separate baggies with bandaids of different sizes, pre-cut strips of kinesio tape for my leg and smaller patches for blisters, extra Squirrel’s Nut Butter sticks for anti-chafing purposes, as well as extra lambs wool (which I lay across the top of my toes to prevent socks from rubbing when they get wet from sweat or…river crossings; plus I can use it for padding between my toes or other blister-developing spots). I had sunscreen, bug spray, sunglasses, a rain hat, and my sunny-weather running visor that wicks away sweat. I had low BG carbs for me to put in my pockets, a backup bag for Scott to refill, and a backup to the backup. The same for my fuel stash: my backpack was packed, and I packed a small baggie for Scott as well as a larger bag with 5-7 of everything I thought I might want, plus an emergency backup baggie of enzymes.

*The only thing I didn’t have was a backup baggie of electrolyte pills. Next time, I’ll add this to my list and treat them like enzymes to make sure I have a separate backup stash.

I even made a list for Scott that mapped out where key things were for during and after the race. I don’t think he had to use it, because he was only digging through the snack bag for waffles and Fritos, but I did it so I didn’t have to remember where I had put my extra socks or my spare bandaids, etc. He basically had a map of what was in each larger bag. All of this was to reduce the decision-making and communication burden, because I knew I’d have decision fatigue.

This also went for post-race planning. I told Scott to encourage me to change clothes, and it was worth the energy to change so I didn’t sit in cold, wet clothes for the long drive home. I pre-made a gluten free ham and cheese quesadilla (take two tortillas, fill with shredded cheese and slices of ham, microwave, cut into quarters, stick in baggies, mark with fat/protein/carb counts, and refrigerate) so we could warm that up in the car (this is what I use) so I had something to eat on the way home that wasn’t more Fritos or waffles. I didn’t end up wanting it, but I also brought a can of beef stew with carrots and potatoes, that I generally like as a post-race or post-run meal, and a plastic container and a spoon so I could warm up the stew if I wanted it. Again, all of this pre-planned and put on the list weeks prior to the race so I didn’t forget things like the container or the spoon.

The other thing I think about a lot is practicing everything I want to do for a race during a training run. People talk about eating the same foods, wearing the same clothes, etc. I think for those of us with type 1 diabetes (or celiac, EPI, or anything else), it’s even more important. With T1D, it’s so helpful to have the experience adjusting to changing BG levels and knowing what to do when you’re dropping or low and having a snack, vs in range and having a fueling snack, or high and having a fueling snack. I had 100% TIR during this run, but I didn’t have that during all of my training runs. Sometimes I’d plateau around 180 mg/dL and be over-cautious and not bring my BGs down effectively; other times I’d overshoot and cause a drop that required extra carbs to prevent or minimize a low. Lots of practice went into making this 100% TIR day happen, and some of it was probably a bit of luck mixed in with all the practice!

But generally, practice makes it a lot easier to know what to do on the fly during a race when you’re tired, stressed, and maybe crossing an icy cold river that wasn’t supposed to be part of your course experience. All that helps you make the best possible decisions in the weirdest of situations. That’s the best you can hope for with ultrarunning!

Findings from the world’s first RCT on open source AID (the CREATE trial) presented at #ADA2022

September 7, 2022 UPDATE: I’m thrilled to share that the paper with the primary outcomes from the CREATE trial is now published. You can find it on the journal site here, or view an author copy here. You can also see a Twitter thread here, if you are interested in sharing the study with your networks.

Example citation:

Burnside, M; Lewis, D; Crocket, H; et al. Open-Source Automated Insulin Delivery in Type 1 Diabetes. N Engl J Med 2022;387:869-81. DOI:10.1056/NEJMoa2203913


(You can also see a previous Twitter thread here summarizing the study results, if you are interested in sharing the study with your networks.)

TLDR: The CREATE Trial was a multi-site, open-label, randomized, parallel-group, 24-week superiority trial evaluating the efficacy and safety of an open-source AID system using the OpenAPS algorithm in a modified version of AndroidAPS. Our study found that, across children and adults, the percentage of time that the glucose level was in the target range of 3.9-10mmol/L [70-180mg/dL] was 14 percentage points higher among those who used the open-source AID system (95% confidence interval [CI], 9.2 to 18.8; P<0.001) than among those who used sensor-augmented pump therapy; a difference that corresponds to 3 hours 21 minutes more time spent in target range per day. The system did not contribute to any additional hypoglycemia. Glycemic improvements were evident within the first week and were maintained over the 24-week trial. This illustrates that all people with T1D, irrespective of their level of engagement with diabetes self-care and/or previous glycemic outcomes, stand to benefit from AID. The study concluded that open-source AID using the OpenAPS algorithm within a modified version of AndroidAPS, a widely used open-source AID solution, is efficacious and safe.
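For reference, the “3 hours 21 minutes” figure is simply the 14-percentage-point time-in-range difference converted into time per day:

```python
# Convert a time-in-range difference (percentage points) into time per day.
pp_difference = 14.0       # adjusted TIR difference, percentage points
minutes_per_day = 24 * 60  # 1440 minutes in a day

extra_minutes = pp_difference / 100 * minutes_per_day  # 201.6 minutes
hours, minutes = divmod(int(extra_minutes), 60)        # 3 h 21 min
```

The same conversion is handy for interpreting any AID study: each percentage point of time in range is about 14.4 minutes per day.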

The backstory on this study

We developed the first open source AID in late 2014 and shared it with the world as OpenAPS in February 2015. It went from n=1 to (n=1)*2 and up from there. Over time, there were requests for data to help answer the question “how do you know it works (for anybody else)?”. This led to the first survey in the OpenAPS community (published here), followed by additional retrospective studies such as this one analyzing data donated by the community, prospective studies, and even an in silico study of the algorithm. Thousands of users chose open source AID, first because there was no commercial AID, and later because open source AID such as the OpenAPS algorithm was more advanced, had interoperability features, or offered other benefits such as quality of life improvements that they could not find in commercial AID (or because they were still restricted from being able to access or afford commercial AID options). The pile of evidence kept growing, and each study showed safety and efficacy matching or surpassing commercial AID systems (such as in this study), yet still there was always the “but there’s no RCT showing safety!” response.

After Martin de Bock saw me present about OpenAPS and open source AID at ADA Scientific Sessions in 2018, we literally spent an evening at the dinner table drawing the OpenAPS algorithm on a napkin at the table to illustrate how OpenAPS works in fine grained detail (as much as one can do on napkin drawings!) and dreamed up the idea of an RCT in New Zealand to study the open source AID system so many were using. We sought and were granted funding by New Zealand’s Health Research Council, published our protocol, and commenced the study.

This is my high level summary of the study and some significant aspects of it.

Study Design:

This study was a 24-week, multi-centre randomized controlled trial in children (7–15 years) and adults (16–70 years) with type 1 diabetes, comparing open-source AID (using the OpenAPS algorithm within a version of AndroidAPS implemented in a smartphone, with the DANA-i™ insulin pump and Dexcom G6® CGM) to sensor-augmented pump therapy. The primary outcome was change in the percent of time in target sensor glucose range (3.9-10mmol/L [70-180mg/dL]) from run-in to the last two weeks of the randomized controlled trial.

  • This is a LONG study, designed to look for rare adverse events.
  • This study used the OpenAPS algorithm within a modified version of AndroidAPS, meaning the learning objectives were adapted for the purpose of the study. Participants spent at least 72 hours in “predictive low glucose suspend mode” (known as PLGM), which corrects for hypoglycemia but not hyperglycemia, before proceeding to the next stage of closed loop which also then corrected for hyperglycemia.
  • The full feature set of OpenAPS and AndroidAPS, including “supermicroboluses” (SMB) were able to be used by participants throughout the study.

Results:

Ninety-seven participants (48 children and 49 adults) were randomized.

Among adults, mean time in range (±SD) at study end was 74.5±11.9% using AID (a change of +9.6±11.8% from run-in; P<0.001), with 68% achieving a time in range of >70%.

Among children, mean time in range at study end was 67.5±11.5% (a change of +9.9±14.9% from run-in; P<0.001), with 50% achieving a time in range of >70%.

Mean time in range at study end for the control arm was 56.5±14.2% and 52.5±17.5% for adults and children respectively, with no improvement from run-in. No severe hypoglycemic or DKA events occurred in either arm. Two participants (one adult and one child) withdrew from AID due to frustrations with hardware issues.

  • The pump used in the study initially had an issue with the battery, and there were lots of pumps that needed refurbishment at the start of the study.
  • Aside from these pump issues, and standard pump site/cannula issues throughout the study (that are not unique to AID), there were no adverse events reported related to the algorithm or automated insulin delivery.
  • Only two participants withdrew from AID, due to frustration with pump hardware.
  • No severe hypoglycemia or DKA events occurred in either study arm!
  • In fact, use of open source AID improved time in range without causing additional hypoglycemia, which has long been a concern of critics of open source (and all types of) AID.
  • Time spent in ‘level 1’ and ‘level 2’ hyperglycemia was significantly lower in the AID group as well compared to the control group.

In the primary analysis, the mean (±SD) percentage of time that the glucose level was in the target range (3.9-10mmol/L [70-180mg/dL]) increased from 61.2±12.3% during run-in to 71.2±12.1% during the final 2 weeks of the trial in the AID group, and decreased from 57.7±14.3% to 54±16% in the control group, with a mean adjusted difference (AID minus control at end of study) of 14.0 percentage points (95% confidence interval [CI], 9.2 to 18.8; P<0.001). No age interaction was detected, which suggests that adults and children benefited from AID similarly.

  • The CREATE study found that across children and adults, the percentage of time that the glucose level was in the target range of 3.9-10mmol/L [70-180mg/dL] was 14.0 percentage points higher among those who used the open-source AID system compared to those who used sensor augmented pump therapy.
  • This difference reflects 3 hours 21 minutes more time spent in target range per day!
  • Children using AID spent 3 hours 1 minute more time in target range daily (95% CI, 1h 22m to 4h 41m).
  • Adults using AID spent 3 hours 41 minutes more time in target range daily (95% CI, 2h 4m to 5h 18m).
  • Glycemic improvements were evident within the first week and were maintained over the 24-week trial. Meaning: things got better quickly and stayed so through the entire 24-week time period of the trial!
  • AID was most effective at night.
Difference in TIR between the control and AID arms overall, and during day and night separately, for all participants, adults, and kids
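The arithmetic behind the “3 hours 21 minutes” figure above is easy to verify: a percentage-point difference in TIR converts directly into time per day. A quick sketch (the function name is mine, not from the study):

```python
def pp_to_daily_time(percentage_points):
    """Convert a TIR difference in percentage points to (hours, minutes) per day."""
    minutes = percentage_points / 100 * 24 * 60  # fraction of a 1440-minute day
    return divmod(int(minutes), 60)

# 14.0 percentage points more time in range is about 3h 21m more per day
print(pp_to_daily_time(14.0))  # (3, 21)
```

The same conversion underlies the per-group figures: each arm’s percentage-point improvement maps to its reported hours and minutes of additional time in range per day.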

One thing worth noting is that a common criticism of previous studies of open source AID is the possibility of a self-selection effect: the theory that people do better with open source AID because they are self-selected and self-motivated. However, the CREATE study recruited a diverse cohort of participants, and the study findings (as described above) match all previous reports of safety and efficacy outcomes. The CREATE study also found that the greatest improvements in TIR were seen in participants with the lowest TIR at baseline. This means one major finding of the CREATE study is that all people with T1D, irrespective of their level of engagement with diabetes self-care and/or previous glycemic outcomes, stand to benefit from AID.

This therefore means there should be NO gatekeeping by healthcare providers or the healthcare system to restrict AID technology from people with insulin-requiring diabetes, regardless of their outcomes or experiences with previous diabetes treatment modalities.

There was also no age effect observed in the trial, meaning that the results of the CREATE Trial demonstrated that open-source AID is safe and effective in children and adults with type 1 diabetes. If someone wants to use open source AID, they would likely benefit, regardless of age or past diabetes experiences. If they don’t want to use open source AID or commercial AID…they don’t have to! But the choice should 100% be theirs.

In summary:

  • The CREATE trial was the first RCT to look at open source AID, after years of interest in such a study to complement the dozens of other studies evaluating open source AID.
  • The conclusion of the CREATE trial is that open-source AID using the OpenAPS algorithm within a version of AndroidAPS, a widely used open-source AID solution, appears safe and effective.
  • The CREATE trial found that across children and adults, the percentage of time that the glucose level was in the target range of 3.9-10mmol/L [70-180mg/dL] was 14.0 percentage points higher among those who used the open-source AID system compared to those who used sensor augmented pump therapy; a difference that reflects 3 hours 21 minutes more time spent in target range per day.
  • The study recruited a diverse cohort, yet still produced glycemic outcomes consistent with existing open-source AID literature, and that compare favorably to commercially available AID systems. Therefore, the CREATE Trial indicates that a range of people with type 1 diabetes might benefit from open-source AID solutions.

Huge thanks to each and every participant and their families for their contributions to this study! And ditto, big thanks to the amazing, multidisciplinary CREATE study team for their work on this study.


September 7, 2022 UPDATE – I’m thrilled to share that the paper with the primary outcomes from the CREATE trial is now published. You can find it on the journal site here, or like all of the research I contribute to, access an author copy on my research page.

Example citation:

Burnside, M; Lewis, D; Crocket, H; et al. Open-Source Automated Insulin Delivery in Type 1 Diabetes. N Engl J Med 2022;387:869-81. DOI: 10.1056/NEJMoa2203913

Note that the continuation phase study results are slated to be presented this fall at another conference!

Findings from the RCT on open source AID, the CREATE Trial, presented at #ADA2022