Running a Multi-Day Ultramarathon (Aiming for 200 Miles)

I used to make a lot of statements about things I thought I couldn’t do. I thought I couldn’t run overnight, so I couldn’t attempt to run 100 miles. I could never run 200 mile races the way other people did. Etc. Yet last year I found myself training for and attempting 100 miles (I chose to stop at 82, but successfully ran overnight and for 25 hours), and this year I found myself working through the mental logistics and puzzling out whether I could train for and attempt to run 200 miles, or as many miles as I could across 3-4 days.

As with my 100 mile attempt, I found some useful blog recaps and race reports from people’s official 200-ish mile races. However, as with the 100 mile reports, I found myself wanting more information about the mental training and logistical preparation people put into them. While my 200 mile training and prep anchored heavily on what I did before, this post describes in more detail my training, prep, and ‘race’ experience for a multi-day or 200 mile ultra attempt.

DIY-ing a 200

For context, I have a previous post describing the myriad reasons why I often choose to run DIY ultras, meaning I’m not signing up for an official race. Most of those reasons hold true for why I chose to DIY my 200. Like my 100 (82) miles, I mapped a route based on my home paved trail that takes me out and around the trails I’m familiar with. It has its downsides, but also upsides: really good trail bathrooms, and I feel safe running these routes. Plus, it’s easy and convenient for my husband to crew me. Since I expected this adventure to take 3-4 days (more on that below), that’s a heavy ask of my husband’s time and energy, so sticking with the easy routes that work for him is optimal, too. So while I sought to run 200 miles just like any other 200-mile ultra runner, my course happens to have minimal elevation. Not all 200 mile ultramarathon races have a ton of elevation – some, like the Cowboy 200, are pretty flat – so my experience is closer to that than to the experience of those running mountain-based ultras with 30,000 feet (or more) of elevation gain. And I’m ok with that!

Sleep

One of the puzzles I had to figure out to decide I could even attempt a 200 miler is sleep. With a 100 mile race, most people don’t sleep at all (nor did I); we just run through the night. With 200 miles, that’s impossible, because it takes 3, 4, or 5 days to finish, and biologically you need sleep. Plus, I need more sleep than the average person. I’m a champion sleeper; I typically sleep much longer than everyone else; and I know I couldn’t function on an hour here or there like many people do at traditional races. So I designed my 200 mile ultra with this in mind: how could I cover 200 miles AND get sleep? Because I’m running to/from home, I have access to my kitchen, shower, and bed, so I decided to structure the attempt so that I would run each day, then come home to eat dinner, shower, and sleep a short night in my own bed.

I then decided that instead of winging it and running until I dropped before eating, showering, and sleeping, I would aim to run 50 miles each day. Then I’d come in, eat, shower, and sleep, and get up the next morning and go again. 4 days, 3 nights, 50 miles each day: that would have me finishing around 87-90ish hours total (with the clock running from my initial start), including ~25 hours or more of total downtime for eating, showering, sleeping, and getting ready. That breakdown of 3.67 days is well within the typical finish times of many 200 mile ultras (yes, comparing to those with elevation gain), so it felt like a stretch for me but also doable, in a sensible way that works for me and my needs. I mapped it all out in my spreadsheet: the number of laps, my routes, and the pacing to finish 50 miles per day; the two times per day I would need my husband to come out and crew me at ‘aid station stops’ in between laps; and what time I would finish each night. I then factored in time to eat, shower, and get ready for bed; sleep; and time to get up in the morning. Given that I expected to run slower each day, the sleep windows went from 8 hours down to less than 6 hours by night 3. That said, if I managed to sleep 5 hours per night and 15 hours total, that’s probably almost twice as much as most people get during traditional races!
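(For the spreadsheet-curious: the schedule math is simple enough to sketch in a few lines of Python. The paces, stop times, routine times, and start date below are illustrative assumptions to show the shape of the calculation, not my actual plan numbers.)

```python
from datetime import datetime, timedelta

# Illustrative assumptions (not my actual plan): ~50 miles/day,
# pace slowing each day, 6am starts, two crewed stops per day.
DAILY_MILES = 50
PACES_MIN_PER_MILE = [15.5, 16.5, 17.5, 18.5]  # assumed per-day slowdown
AID_STOP_MINUTES = 2 * 10   # two ~10-minute 'aid station' stops per day
ROUTINE_HOURS = 2.0 + 1.5   # assumed evening routine + morning routine

start = datetime(2023, 10, 5, 6, 0)  # hypothetical day 1, 6am start
for day, pace in enumerate(PACES_MIN_PER_MILE, start=1):
    day_start = start + timedelta(days=day - 1)
    run_minutes = DAILY_MILES * pace + AID_STOP_MINUTES
    finish = day_start + timedelta(minutes=run_minutes)
    if day < len(PACES_MIN_PER_MILE):
        next_start = start + timedelta(days=day)
        sleep = (next_start - finish).total_seconds() / 3600 - ROUTINE_HOURS
        print(f"Day {day}: finish {finish:%H:%M}, sleep window ~{sleep:.1f}h")
    else:
        total = (finish - start).total_seconds() / 3600
        print(f"Day {day}: finish {finish:%H:%M}, total elapsed ~{total:.0f}h")
# With these assumed paces, the sleep windows shrink from ~7h toward
# <6h as the days slow down, and the total lands in the high 80s of hours.
```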

Like sleep, I was also very cognizant of the fact that a 200 probably comes down to mental fortitude and willpower to keep going; meticulous fueling; and excellent foot care. Plus reasonable training, of course.

Meticulous fueling

I have previously written about building and using a spreadsheet to track my fuel intake during ultras. This method works really well for me because after each training run I can see how much I consumed and any trends. I started to spot that as I got tired, I would tend to choose certain snacks that happened to be slightly lower calorie. Not by much, but the snack selections went from those that are 150-180 calories to 120-140 calories, in part because I perceived them to be both ‘smaller’ (less volume) and ‘easier to swallow’ when I was tired. Doubled up in the same hour, this meant that I started to have hours of 240 calories instead of more than 250. That doesn’t sound like much, but I need every calorie I can get.
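(If you like the spreadsheet approach, this trend is the kind of thing a few lines of Python over a fuel log will surface. The log rows and calorie values below are made up for illustration:)

```python
from statistics import mean

# Hypothetical fuel log rows, like my spreadsheet: (hour, snack, calories)
log = [
    (1, "trail mix", 170), (2, "pretzels", 160), (3, "candy", 165),
    (6, "fruit snacks", 120), (7, "crackers", 115), (8, "candy", 125),
]

early = [cal for hour, _, cal in log if hour <= 4]  # fresh hours
late = [cal for hour, _, cal in log if hour > 4]    # tired hours
print(f"early snacks: ~{mean(early):.0f} cal each -> {2 * mean(early):.0f}/hr")
print(f"late snacks:  ~{mean(late):.0f} cal each -> {2 * mean(late):.0f}/hr")
# Doubled up at two snacks per hour, the tired-hour choices here dip to
# 240 cal/hour - under a 250 cal/hour goal - even though each swap looks tiny.
```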

I mapped out my estimated energy expenditure based on the 50 miles per day, and even consuming 250 calories per hour, I would end up with several thousand calories of deficit each day! I spent a lot of time testing foods that I thought I could eat for dinner on the 3 nights, to ensure that I could get a good 1000 calories or more in before going to bed, to help address and reduce the growing energy deficit. But I also ended up optimizing my race fuel. Because I ran so many long runs in training where I fueled every 30 minutes, and because I had been mapping out my snack list for each lap for 50 miles a day for 4 days, I had been aware for months that I would probably get food fatigue if I didn’t expand my fuel list. I worked really hard to test a bunch of new snacks and add them to the rotation. That really helped, even in training. Across all 12 laps (3 laps a day to get 50 miles, times 4 days), I carefully made sure I wouldn’t have too many repeats and get sick of one food or one group of things I planned to eat. I also recently realized that some of the smaller items (e.g. 120 calorie servings) could be increased. I’m already portioning out servings from a big bag into small baggies; in some cases, adding one more pretzel or one more piece of candy (or more) would drive up the calories by 10-20 per serving. The small tweaks I made to 5 of my ~18 possible snacks added about 200 calories on top of what was already represented in those snacks. If I happen to choose those 5 snacks as part of my list for any one lap, that means I have a bonus 200 calories I’ve convinced myself to consume without it being a big deal, because it’s simply one more pretzel or one more piece of candy in a snack that I’m already used to consuming. (Again, because I’m DIYing my race and have specific needs relative to running with celiac, diabetes, and exocrine pancreatic insufficiency, pre-planning my fuel and having it laid out in advance for every run – or in the race, every single lap – is what works for me personally.)
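(Roughly, the deficit math looks like this sketch. The burn rates and rest-of-day numbers are assumed values for illustration, not my actual data:)

```python
# Back-of-envelope daily-deficit estimate with assumed values.
HOURS_MOVING = 13          # ~50 miles at a 15-16 min/mi run/walk
BURN_PER_HOUR = 420        # assumed energy expenditure while moving, cal/hr
INTAKE_PER_HOUR = 250      # minimum fueling goal while moving
REST_OF_DAY_BURN = 1800    # assumed non-running burn across the day
DINNER = 1000              # target evening meal

burned = HOURS_MOVING * BURN_PER_HOUR + REST_OF_DAY_BURN
consumed = HOURS_MOVING * INTAKE_PER_HOUR + DINNER
print(f"burned ~{burned}, consumed ~{consumed}, deficit ~{burned - consumed}")
# Even fueling at goal every hour AND eating a 1000 calorie dinner leaves
# roughly a 3,000-calorie hole per day, which compounds across 4 days.
```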

Here’s a view of how I laid out my fuel. I had worked on a list of what I wanted for each lap, checking against repeats across the same day and making sure I wasn’t too heavily relying on any one snack throughout all the days. I then bagged up all snacks individually, then followed my list to lay them out by each lap and day accordingly. I also have a bag per day each for enzymes and electrolytes, which you’ll see on the left. Previously, I’ve done one bag per lap, but to reduce the number of things I’m pulling in and out of my vest each time, I decided I could do one big bag each per day (and that did end up working out well).

[Image: Two pictures side by side, with papers on the floor showing laps 1-3 left to right along the top and days 1-4 along the left side, creating a grid to lay out my snacks. In the left picture, I have my enzymes and electrolytes per day, and then a pile of snacks grouped for each lap. In the right picture, all the snacks and enzymes and electrolytes have been put into gallon bags, one for each lap.]

Contingency planning

Like I did for my 100, I was (clearly) planning for as many possibilities as I could. I knew that during the run – and each evening after the run – I would have limited excess mental capacity for new ideas and brainstorming solutions when problems came up. The more I prepared for things that I knew were likely to happen – fatigue, sore body, blisters, chafing, dropping things, getting tired of eating, etc – the more likely they would stay small things, and not become big things that could contribute to ending a race attempt. This includes learning from my past 100 attempt and how I dealt with the rain. First of all, I planned to move my race if it looked like we’d get 6 months of rain in a single 24 hour period! But I also scheduled my race so that if I did have a few hours of really hard rain, I could choose to take a break – come in and eat/shower/change/rest and go back out later – or extend and finish a lap on the last day or the day after that. I was not running a race that would yank me from the course, but I did have a hard limit after day 5, based on a pre-planned doctor’s appointment that would be a hassle to reschedule, so I needed to finish by the night after day 5. This gave me the flexibility to take breaks (which I wasn’t really planning to take, but was prepared to if I needed to due to weather conditions).

Training for a 200 mile ultramarathon

Like training plans for marathons and 100 milers, the training plans I’ve read about for 200 mile ultramarathons intimidate me. So much mileage! So much time for a slow run/walker like me. I did try to look at sample 200 mile ultra plans and get a sense of what they’re trying to achieve – e.g. when they peak their mileage before the race, and how many back to back runs they include, of what general length in terms of time – and then loosely kept that in mind.

But basically, I trained for this 200 mile ultra just like I trained for my marathon, 50k, 100k, and 82 miler. I like to end up doing long runs (which for me are run/walks of 30 seconds run, 60 seconds walk, just like my shorter runs) of up to around 50k distance. This time, I did two total training runs that were each around 29 miles, based on the length of the trail I had to run. I could have run longer, but I had the mental confidence that another ~45 minutes per run wasn’t going to change my ability to attempt 50 miles a day for 4 days. If I didn’t have 3 years of this training style under my belt, I might feel differently about it. That’s longer than many people run in training, but I find the experience of 7-8 hours of time on my feet fueling, run/walking, and problem solving (including building up my willpower to spend that much time moving) to be what works for me.

The main difference for my 200 is probably that it’s my 3rd year of ultrarunning. I was able to increase my long runs a little more at a time: historically I added 2 miles at a time to a long run, but this cycle I jumped up 4 miles at a time – again, run/walking, so very easy on my legs – when building up my long runs. That let me end up with 2 different 29 mile runs, two weeks apart, even though I only kicked off training specifically for this 8 weeks prior to the run (10 weeks including taper). In between, I also did a weekend of back to back to back runs (meaning 3 days in a row) where I ran 16 miles, another 16 miles, and 13 miles, to practice getting up and running on tired legs. In past cycles I had done a lot more back to back (2-day) weekends with a long and a medium run, but this time I did less of the 2-day and did the one big 3-day, since I was targeting a 4-day experience. If I were to do this again, given how well my body held up with all this training, I might do more back to backs, but I took things very cautiously and wanted to not overtrain and cause injury from ramping up too quickly.

As part of that (trying not to overdo it), instead of doing several little runs throughout the week I focused on more medium-long runs with my vest and fueling. So I would do something like a long run (starting at 10 miles, building up to 29 miles), a medium-long run (8 miles up to 13 or 16 miles), and another medium-ish run (usually 8 miles). Three runs a week, and that was it. Earlier in the 8 weeks, I was still doing a lot of hiking at that point in the season, so I had plenty of other time-on-feet experiences. Later in the season I sometimes squeezed in a 4th short run of the week if we wouldn’t be hiking, and ran without my vest and tried to do some ‘speed work’ (aka running a little faster than my easy long run pace). Nothing fancy. Again, this is based on my slow running style (which is actually a fixed interval of short run and short walk, usually 30 seconds run and 60 seconds walk), my schedule, my personality, and more. If you read this, don’t think my mileage or training style is the answer. But I did want to share what I did and that it generally worked for me.

I did struggle with wondering if I was training “enough”. But I never train “enough” compared to others’ marathon, 50k, 100k, or 100 mile plans, either. I’m a low-ish mileage trainer overall, even though I throw in a few longer runs than most people do. My peak training for marathon, 50k, and 100k is usually around the low 50s (miles per week). Surprisingly, this 200 cycle did get me to some mid-60 mile weeks! One thing that also helped me mentally was adding a rolling 7-day calculation of my miles, rather than just looking at miles per calendar week. That helped when I shifted some runs around due to scheduling, because I could see that I was still keeping a reasonable 55 to low-60s mileage over any 7 days, even though the calendar week total dropped to the low 40s because of the way the runs happened to land in the calendar weeks.
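(A rolling 7-day total is trivial to compute if you track daily miles; here’s a minimal sketch with made-up mileage, where a 16-mile run slides from the end of one calendar week into the next:)

```python
# Rolling 7-day mileage vs calendar-week totals, with made-up daily miles.
daily_miles = [0, 8, 0, 0, 16, 0, 29,   # week 1: 53 miles
               0, 8, 0, 0, 0, 0, 29,    # week 2: 37 (16-miler slid later)
               16, 8, 0, 0, 16, 0, 29]  # week 3: 69 miles

for i in range(6, len(daily_miles)):
    rolling = sum(daily_miles[i - 6:i + 1])
    print(f"day {i + 1:2d}: rolling 7-day total = {rolling} miles")

weeks = [sum(daily_miles[i:i + 7]) for i in range(0, len(daily_miles), 7)]
print("calendar week totals:", weeks)
# The calendar split makes week 2 look like a full down week, but the
# rolling totals dip only briefly until the shifted run lands - a fairer
# picture of actual training load.
```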

Generally, though, looking back: my training was more than I had accomplished for previous races; I felt better than ever (good fueling really helps!); and I didn’t have any accidents, overtraining injuries, or niggles. So a few weeks before peak, I decided that I was training enough and that it was the right amount for me.

Another factor that was slightly different was how much hiking I had done this year. I ran my 100k in March and then took some time off, promising my husband that we would hike “more” this year. That also coincided with me not really bouncing back from my 100k recovery period: I didn’t feel like doing much running, so we kept planning hiking adventures. Eventually I realized that this coincided with my TSH going too high for my body’s happiness, and that my disinterest in long runs was actually a symptom (for me) of slightly too-high TSH. (Because I was diagnosed with Graves’ disease last year, I’m having my thyroid, antibody, and other related blood work done every 3 months while we work on getting everything into range.) I changed my thyroid medication and within two weeks felt HUGELY more interested in long running, which is what reinvigorated my interest in a fall ultra, training, and ultimately deciding to go for the 200. In the meantime, we kept hiking a lot – to the tune of over 225 miles hiked and over 53,000 feet of elevation gain! I never tracked elevation gain for hiking before, and I’m not sure I retrospectively tracked all of last year’s hikes, but last year was closer to 100 miles – so this is likely more than a 2x increase over my previous biggest hiking year, just given the sheer number of hikes we went out on. Overall, the strengthening of my muscles from hiking helped, as did the time on feet. Before I kicked off my 8 week cycle, we were easily spending 3-4 hours a hike and usually doing at least two hikes a weekend, so almost every hike gave me time on feet equivalent to 12 or more miles of running. That really helped when I reintroduced long runs, and aided my ability to jump my long run distance by 4 miles at a time instead of progressing more gently by 2 miles a week as I had done in the past.

How my 200 mile attempt actually went

Spoiler alert: I DNF’d (did not finish) 200 miles. Instead, I stopped – happily – at 100 miles. But it wasn’t for a lack of training.

Day 1 – 51 miles – All as planned

I set out on lap 1 on Day 1 as planned and on time, starting in the dark with a waist lamp at 6am. It was dark and just faintly cool, but warm enough (51F) that I didn’t bother with long sleeves because I knew I would warm up. (Instead, for all days, I was happy in shorts and a short sleeve shirt while the temps ranged from 49F to 76F and back down again.) I only had to run for about an hour in the dark before the sky gradually brightened. It ended up being a cloudy, overcast, nice weather day, so it didn’t get super bright first thing, but because it wasn’t wet and cold, it wasn’t annoying at all. I tried to start and stay at an easy pace, and was running slow enough (~30s/mile slower than my training paces) that I didn’t have to alter my planned intervals to slow myself down any more. All went fairly well and as planned on the first lap. I stopped to use the bathroom at mile 3.5 and, as planned, at my 8 mile turnaround point, and also stopped to stuff a little more wool in a spot in my shoe a mile later. That added 2 minutes of time, but I didn’t let it bother me and still managed to finish lap 1 at about a 15:08 min/mi average pace, which was definitely faster than I had predicted. I used the bathroom again at the turnaround while my husband re-filled my hydration pack, then stuffed the next round of snacks in my vest and took off. The bathroom and re-fueling “aid station” stop only took 5 minutes. Not bad! And on I went.

[Image: A background-less shot of me in my ultrarunning gear. I'm wearing a grey moisture-wicking visor; sunglasses; a purple ultrarunning vest packed with snacks in front and the blue tube of my hydration pack looped in front; a bright fluorescent pink short sleeve shirt; grey shorts with pockets bulging on the sides with my phone (left pocket) and skittles, headphones, and keys (right pocket); and, on this lap, bright pink shoes.]

Lap 2 was also pretty reasonable, although I was surprised by how often I wanted a bathroom. My period had started that morning (fun timing), and while I didn’t have a lot of flow, the signals my abdomen was giving my brain were telling me that I needed to go to the bathroom more often than I would have otherwise. That started to stress me out slightly, because I found myself wishing for a bathroom in the longest stretch without trail bathrooms – about 5.5 miles, in a very populated area. I tried to drink less, but was also wary of under-hydrating or imbalancing my electrolytes. I always get a little dehydrated during my period, and here I was running a multi-day ultra where I needed a lot of hydration and more sodium than usual; this situation didn’t add up well! But I made it without any embarrassing moments on the trail. The second aid station again only took 5 minutes. (It really makes a world of difference to not have to dry off my feet, Desitin them up, and re-do socks and shoes at every single aid station like I did last year!) I could have moved faster, but I was trying to not let small minutes of time frazzle me, and I was succeeding at being efficient but not rushed and continuing on my way. I had slowed down some during lap 2, however – dropping from a 15:08 to a 15:20ish min/mi pace. Not much, but noticeable.

[Image: At sunset, light blue sky fading to yellow at the horizon behind a row of tall, skinny bush-like trees with gaps, and a hot air balloon a hundred or so feet off the ground seen between the trees.]

Lap 3, I did feel more tired. I talked my husband into bringing me my headlamp toward the end of the last lap, instead of me having to carry it for 4+ hours before the sun went down. (Originally, I thought I would need it 2-3 hours into this last lap, but because I was moving so well it was now looking like 4 hours, and it would be a 2-3 mile e-bike ride for him to bring me the lamp when I wanted it. That was a mental win, to not have to run with the lamp when I wasn’t using it!) I was still run/walking the same duration of intervals, but slowed down to about a 16:01 pace for this lap. Overall, I would end at a 15:40 average for the whole day, but the fatigue and my tired feet started to kick in on the third lap, between miles 34-51. Plus, I stopped to take a LOT more pictures, because there was a hot air balloon growing in the distance as it flew right toward me – and then by me, next to the trail! It ended up landing next to the soccer fields a mile behind me after it passed me in this picture. I actually made it home right as the sun set and didn’t have to wear my lamp at all that evening.

Day 1 recovery was better and worse than I expected. I sat down and used my foot massager on my still-socked feet, which felt very good. I took a shower after I peeled my socks off and took a look at my feet for the first time. One blister that I didn’t even know was growing had popped about an hour before I finished, but it was under some of my pre-taped area. I decided to leave the tape and see how it looked and felt in the morning. I had 2-3 other tiny, not-a-big-deal blisters that I would tape in the morning, but they didn’t need any attention that night.

I had planned to eat a reasonably sized dinner – preferably around 1000 calories – each night, to help me address my calorie deficit. And I had a big deficit: I had burned 5,447 calories and consumed 3,051 calories in my 13 hours and 13 minutes of running. But I could only eat ¼ of the pizza I planned for dinner, and that took a lot of work to force myself to eat. So I gave up, and went to bed with a 3,846 calorie deficit, which was bigger than I wanted.

And going to bed hurt. I was stiff, which I could deal with, but my feet, which hadn’t hurt much while running, started SCREAMING at me. All over. They hurt so badly. Not blisters, just intense aches. Ouch! I started to doubt my ability to run the next day, but this is where my pre-planning kicked in (aided by my husband, who had agreed to the rules we had decided upon): no matter what, I would get up in the morning, get dressed, and go out and start my first lap. If I decided to quit, I could, but I could not quit at night in bed, or in the morning in bed or in the house. I had to get up and go. So I went to sleep, less optimistic about my ability to finish 50 miles again on day 2, but willing to see what would happen.

Day 2: 34 instead of 50 miles, and walking my first ever lap

I actually woke up before my alarm went off on day 2. Because I had finished so efficiently the day before, I was able to again get a good night’s sleep, even with the early alarm and waking up at 4:30am with plans to be going by 6am. The extra time was helpful, because I didn’t feel rushed as I got ready to go. I spent some extra time taping my new blisters. Because they hadn’t popped, I put small torn pieces of Kleenex against them and used cut strips of kinesio tape to protect the area. (Read “Fixing Your Feet” for other great ultra-related foot care tips; I learned about Kleenex from that book.) I also use lamb’s wool for areas that rub or might be getting hot spots, so I put wool back in my usual places (between big and second toes, and on the side of the foot), plus on another toe that was rubbing but not blistered and could use some cushion. This year I have also been trying Tom’s blister powder in my socks, which seems to help since my feet are extra sweat-prone, and I had pre-powdered a stack of socks so I could simply slip them on and get going once I had done the Kleenex/tape and wool setup. The one blister that had popped under my tape wasn’t hurting when I pressed on it, so I left it alone and just added loose wool for a little padding.

[Image: A pretty view of the trail with bright blue sky after sunrise, green bushes (and the river out of sight) to the left, and the trail running parallel to a high concrete road wall with cheery red and yellow leaved trees leaning over the trail.]

And off I went. I managed to run/walk from the start, faster than I had originally projected on my spreadsheets and definitely faster than I thought possible the night before, or even before I started that morning. Sure, I was slower than the day before, but a 15:40 min/mi pace was nothing to sneeze at, and I was feeling good. I was really surprised that my legs, hips, and body did not hurt at all! My multi-day and back-to-back training seemed to pay off here. All was well for most of the first lap (17 miles again), but in the last 2 or so miles my pace started dipping unexpectedly, so I was doing 16+ min/mi without changing my easy effort. I was disappointed, and tired, when I came into my aid station turnaround. I again didn’t need foot care and spent less than 5 minutes there, but I told Scott (my husband) as I left that I was going to walk for a while, because my feet had been hurting and were getting worse. Not blisters: the balls of my feet were excruciating.

[Image: A close-up of a yellow-shelled snail on the paved trail, which I saw while walking the world's slowest 17-mile lap on day 2.]

I headed out, and within a few minutes he had re-packed and biked up to ride alongside me for a few minutes and chat. I told him I was probably going to need to walk this entire lap. We agreed this was fine and to be expected – it was in fact built into my schedule that I would slow down. I’ve never walked a full lap in an ultra before, so this would be novel for me. But then my feet got louder and louder, and I told him I didn’t think I could even walk the full lap. We decided that I should take some Tylenol: I wasn’t limping, so it wouldn’t be masking any pain that was an important cue for my body, but simply muting the “ow this is a lot” screams from the bones in the balls of my feet. He biked home, grabbed some, and came back out. I took the Tylenol and sent him home again, walking on. Luckily, the Tylenol did kick in, and the pain went from almost unbearable to manageable super-discomfort, so I continued walking. And walking. And walking. It felt like it took FOREVER, having gone from a 15-16 min/mi pace with 30 seconds of running, 60 seconds of walking, to 19-20 minute miles of pure walking. It was boring. I had podcasts, music, audiobooks galore, and I was still bored and uncomfortable and not loving this experience. On the way back, I was also thinking about how I did not want to do a 3rd lap that day (to get me to my planned 50 miles) walking again.

Scott biked out early to meet me and bring me extra ice, because it was getting hot and I was an hour slower than the day before and risking running out of water that lap if he didn’t. After he refilled my hydration pack and brought it back to me while I walked on, I told him I wanted to be done for the day. He pointed out that when I finished this lap, I would be at 34 miles for the day, and combined with the day before (51), that put me at 85 miles, which would be a new distance PR for me since last year I had stopped at 82. That was true, and that would be a nice place to stop for the day. He reminded me of our ‘rules’ that I could go out the next day and do another lap to get me to 100, and decide during that lap what else I wanted to do. I was pretty sure I didn’t want to do more, but agreed I would decide the next day. So I walked home, completing lap 2 and 34 miles for the day, bringing me to 85 miles overall across 2 days.

Day 2 recovery went a little better, in part because I didn’t do 51 miles (only 34), I had walked rather than run the second lap, and I had stopped earlier in the day (4pm instead of 7pm). I had more time to shower and bring myself to finally eat an entire 1000 calories before going to bed, again with my feet screaming at me. I had more blisters this time, mostly again on my right foot, but the balls of my feet and the bones of my feet ached in a way they never had before. This time, though, instead of setting my alarm to get up and go by 6am, I decided to sleep longer and go out a little later to start my first lap. This was a deviation from my plan, but another deviation I felt was the right one: I needed the sleep to help my body recover enough to even attempt another lap.

Day 3: Only 16 miles, but hitting 100 for the first time ever

Instead of 6am, I set out on Day 3 around 8:30am. I would have taken even longer to go, but the forecast was for a warm day (we ended up hitting 81F) and I wanted to be done with the lap before the worst of the heat. I thought there was maybe a 10% chance I’d keep going after this lap – a pretty small chance. Still, I set out for the planned 16 mile lap and was pleasantly surprised that I was run/walking at about a 15:40 pace! Again, better than I had projected (although yes, I had deviated from my mileage plan the day before), and it felt like a good affirmation that stopping the day before, instead of slogging out another walking lap, had been the right thing to do.

After the first few miles, I toyed with the idea of continuing on. But I knew with the heat I probably couldn’t stand more than one more lap, which would get me to 116. Even if I went out again the fourth day and did 1-2 laps, that would MAYBE get me to 150, and I doubted I could do that without starting to cause some serious damage. And it honestly wasn’t feeling fun. I had enjoyed the first day: running in the dark, the fog, the daylight, and the twilight, seeing changing fall leaves and running through piles of them. The second day was also fun for the first lap, but the second lap of walking was probably what a lot of ultramarathoners call the “death march”, and just not fun. I didn’t want to keep going if it wasn’t fun, and I didn’t want to run myself into the ground (meaning so worn down that it would take weeks to months to recover) or into injury, especially when the specific milestones didn’t really mean anything. Sure, I wanted to be a 200 mile ultramarathoner, something that only a few thousand people have ever done – but I didn’t want to do it at the expense of my well-being.

I spent a lot of time thinking about it, especially during miles 4-8. The day before I had started, I had gone to a doctor’s appointment and received an official diagnosis confirming my fifth autoimmune disease – and then proceeded to run 100 miles. Despite all the fun challenges of running with autoimmune conditions, I’m in really good health and fitness. My training this year went so well, and I really enjoyed it. Most of this ultra had gone so well physically, and my legs and body weren’t hurting at all: the weakness was my feet. I didn’t think I could have trained any differently to address that, nor do I think I could change it moving forward. It’s honestly just hard to run that many hours or that many miles, as most ultramarathoners know, and your feet take a beating. Given that I was running on pavement for all of those hours, it can be even harder – or a different kind of hard – than kicking roots and rocks on a dirt trail. I figured I would metaphorically kick myself if I tried for 116 or 134 and injured myself in a way that would take 6-8 weeks to recover, whereas I felt pretty confident that if I stopped after this lap (at 100), I would have a relatively short and easy recovery, no major issues, and bounce back better than I ever have, despite it being my longest ever ultramarathon. Yes, I was doing it as a multi-day with sleep in between, but both in time on feet and in mileage, it was still the most I’d ever done in 2 or 3 days.

And, I was tired of eating. I had been fueling SO well. Per my plans, I set out to do >500 mg of sodium per hour and >250 calories per hour, and I had been nailing it every lap and every day! Day 1, I averaged 809 mg of sodium per hour and 290 calories per hour. Day 2 increased even from that, averaging 934 mg of sodium per hour and 303 calories per hour! Given the decreased caloric burn of day 2 (because I walked the second lap), my caloric deficit for day 2 was a mere ~882 calories (given that I also managed to eat a full dinner that night), even though I skipped the last hour of fueling as I finished the walking lap. Day 3, I was also fueling above my goals, but I was tired of it. Sooooo tired of it. Remember, I have to take a pill every time I eat, because I have exocrine pancreatic insufficiency (EPI or PEI). I was eating every 30 minutes as I ran or walked, so that meant swallowing at least one pill every 30 minutes. Between my enzymes and electrolyte pills, I had swallowed 57 pills on Day 1 and 48 pills on Day 2. SO MANY PILLS. The idea of continuing to eat constantly, every 30 minutes for another lap of ~5 or more hours, was also not appealing. And I knew if I didn’t eat, I couldn’t continue.

[Image: A chart with an hourly breakdown of sodium, calories, and carbs consumed per hour, plus totals of caloric consumption, burn, and calculated deficit across ~27 hours of move time to accomplish the 100 miles run.]

And so, I decided to stop after one more lap on day 3, even though I was holding up a respectable 15:41 min/mi pace throughout. I hit 100 miles and finished the lap at home, happy with my decision.

[Image: Two pictures of me leaning over after my run, holding a sign (one reading 50 miles, one reading 100 miles) for each of my cats to sniff.]

(You can see from these two pictures that I smelled VERY interesting – sweaty and salty and exhausted – at the end of day 1 and day 3, when I hit 50 miles and 100 miles, respectively. We have twin kittens (now 3 years old), and one came out to sniff me first on the first day, while the other came out as I came home on the third day!)

Because I had only run one final lap (16 miles) on day 3, and had so many bonus hours in the rest of the day afterward once I was done and home, I was able to eat more and end up with only an 803 calorie deficit for the day. So overall, day 1 had the biggest deficit and probably influenced my fatigue and perception of pain on day 2; but because I had shortened day 2 and then day 3, my very high calorie intake every hour did a pretty good job of matching my calorie expenditure, which is probably why I felt very little muscle fatigue in my body and had no significant sore areas other than the bottoms of my feet. I ended up averaging 821 mg/hr of sodium and 279 calories per hour. (That takes into account the fact that I skipped the two final snacks at the end of day 2 when I was walking it out; ignoring that completely skipped hour, the average caloric intake in the hours I ate anything at all was closer to 290 calories/hr!)
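(The difference between the 279 and ~290 figures is just which hours you divide by; a quick sketch with made-up hourly values:)

```python
# Average cal/hr two ways: over all move-time hours, or only hours I ate.
hourly_cals = [300, 290, 310, 280, None, 295]  # hypothetical; None = skipped

eaten = [c for c in hourly_cals if c is not None]
print(f"over all hours:   {sum(eaten) / len(hourly_cals):.0f} cal/hr")
print(f"over hours eaten: {sum(eaten) / len(eaten):.0f} cal/hr")
# A single skipped hour drags the all-hours average down even when
# every fed hour was at or above goal.
```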

In total, I ended up consuming 124 pills in approximately 27 hours of move time across my 100 miles. (This doesn’t include enzyme pills for my breakfast or dinners each of those days, either – just the electrolyte and enzyme pills consumed while running!)

Aftermath

Recovery after day 3 was pretty similar to day 2, with me being able to eat more and limit my calorie deficit. I’ve had long ~30 mile training runs where I wasn’t very hungry afterward, but it surprised me that even two days after my ultra, I still haven’t really regained my appetite. I would have figured my almost 4,000 calorie deficit from day 1 would drive a lot of hunger, but it hasn’t.

So too has my physical state surprised me: 48 hours after completing my 100 miles, I am in *fantastic* shape compared to other multi-day back to back series of runs I’ve done, ultramarathons or not. The few blisters I got, mainly on my right foot, have already flattened out and mostly vanished. I think I get more blisters on my right foot because of breaking my toe last year: my right foot now splays wider in my shoe, so it tends to get more blisters and cause more trouble than my left foot. I got only one blister on my left foot, which is still fluid-filled but not painful, and it is starting to visibly deflate now that I’m not rubbing it against a shoe constantly any more. And my legs don’t feel like I ran at all, let alone 51+34+16 miles!

I am tired, though. I don’t have brain fog, probably because of my excellent fueling, but I am fatigued in terms of overall energy, and I have lacked motivation to get a lot done yesterday and today (other than writing this blog post!). That’s probably pretty on par with the effort expended and matches what I expected, but it’s nice to be able to move around without hurting (other than my feet).

In terms of general aches and ows, my feet came out the worst from my run. On day 2, what hurt was the bottom of the balls of my feet. Each night, though, I was getting aches all over, in all of the bones of my feet. The night after day 3, the foot aches were particularly strong, and I took some Tylenol to help with that. Yesterday evening and today, though, the ache has settled down to very minor and only occasionally noticeable. The tendon running from the top of my left foot up my ankle is sore and gets cranky when I wear my sneakers (although it didn’t bother me at all while running any of the days), so after tying and re-tying my shoelaces 18 times yesterday to try to find the perfect fit for my left foot, today I went on my recovery walk in flip flops and was much happier.

What I’m taking away from this 200 mile attempt that was only 100 miles:

I feel a little disappointed that I didn’t get anywhere near 200 miles, but obviously, I was not willing to hurt long enough or hard enough to get there. My husband called it a stretch goal. I am very happy with my choice to stop at 100 and end up in the fantastic physical shape that I am in, and I recognize that I made a rational tradeoff between ending in good shape (and health) and the mainly ego-driven benefits (for me) of possibly achieving 200 miles.

Would I do anything differently? I can’t think of anything; even if I somehow had an alternate do-over, I can’t think of anything I would change. I’d like to reduce my risk of blisters, but I’m already doing all I can there, and I’m dealing with changes in my right foot shape post-broken toe that I have no control over. And I’m not sure how to train more or better to reduce the ball-of-foot pain that I got: I already trained multiple days, back to back, with long hours of feet on pavement. It’s possible that having my doctor’s appointment the day before I started influenced my mental risk/benefit calculation about continuing more miles; without it, I might have calculated differently and done another lap or two, or gone out on the 4th day (which I did not). But I don’t have a do-over, and I’ll never know, and I’m not too upset about that, because I was able to control what I could control and am again pretty happy with the outcomes. 100 or 150 miles felt about the same to me, psychologically, in terms of satisfaction.

What I would tell other people about attempting multiple day ultramarathons or 200 mile ultramarathons:

Training back to back days is one option, as are long spurts of time on feet walking/hiking/running. I don’t think “just running” has to be the only way to train for these things. I’m also a big proponent of short intervals: if you hear people recommend taking walk breaks, it doesn’t have to be 1 minute every 10 minutes or every mile. It can be as short as: every 30 seconds of running, take a walk break! There’s no wrong way to do it – whatever makes your body and brain happy. I get bored running longer (and don’t like it); other people get bored running the short intervals that I do. So find what works for you and what you’re actually willing to do.

Having plans for how you’ll rest X hours and then go out and try to make it another lap, or to the next aid station, works really well – especially if you have crew/pacers/support (for me, my husband) who will stick to those rules and help you get back out there to try the next lap/section. Speaking of sleep/rest: lying down for a while helps as much as sleeping, so even if you can’t sleep, committing to resting for X hours is also good for your feet and everything else. I found that the hour lying down before I fell asleep helped my body process the noise of the “ouch” from my feet, and it was a lot easier to sleep after that. Plan on having some down/up time before and after your sleep/rest time, and figure that into your time plans accordingly.

The cheesy “know your why” and “know what you want” recommendations do help. I didn’t want 200 miles badly enough to hurt more for longer and risk months of recovery (or the inability to recover). Maybe you’d be lucky enough to achieve 200 without hurting that badly, that long, or risking injury – or maybe you’ll have to make that choice, and you might make it differently than I did. (Maybe you’re lucky enough to not have 5 autoimmune things to juggle! I hope you don’t!) I kind of knew going in that I was only going to hit 200 if everything went perfectly.

Diabetes and this 200 mile ultramarathon that was a 100 mile ultra:

I just realized that I managed to write an ENTIRE race report without talking about diabetes and glucose management…because I had zero diabetes-related thoughts or issues during these several days of my run! Sweet! (Pun fully intended.)

Remember, I have type 1 diabetes and use an open source automated insulin delivery (AID) system (in my case, still using OpenAPS after alllllll these years), and I’ve talked previously about how I fuel while ultrarunning while juggling blood glucose management. Unlike previous ultras, I had zero pump site malfunctions (phew), and my glucose stayed nicely in range throughout. I think I had one small drift above range for 2 hours, due to an hour of higher-carb activity right when I shifted to walking the second lap on day 2, but otherwise I was nicely in range all days and all nights without any extra thought or energy expended. I didn’t have to take a single “low carb”/hypoglycemia treatment! I think there was one snack I took a few minutes early when I saw I was drifting down slightly, but that was mostly a convenience thing, and I probably would not have gone low (below target) even if I had waited for my planned fuel interval. Out of 46 snacks, only one taken 5-10 minutes early is impressive to me.

I had no issues after each day’s run, either: OpenAPS seamlessly adjusted to the increasing insulin sensitivity (using “autosensitivity” or “autosens”) so I didn’t have to do manual profile shifts or overrides or any manual interference. I did decide each night whether I wanted to let it SMB (supermicrobolus) as usual or stick to temp basal only to reduce the risk of hypoglycemia, but I had no post-dinner or overnight lows at all.

The most “work” I had to do was deciding to wear a second CGM sensor (staggered, started 5 days after my other one) so that I had a sensor session going with good quality data to fall back on if my other sensor started to get jumpy, because that first sensor session was supposed to end the night of day 4 of my planned run. I obviously didn’t run day 4, but even so, I was glad to have another sensor going (worth the cost of overlapping my sensors) for the reassurance of constant data: if the first one died or fell out, I could seamlessly switch to an already-warmed-up sensor with good data. I didn’t need it, but I was glad to have done that in prep.

(Because I didn’t talk about diabetes a lot in this post – because it was not very relevant to my experiences here – you might want to check out my previous race recaps and posts about ultrarunning, like this one where I talk in more detail about balancing fueling, insulin, and glucose management while running for zillions of hours.)

TLDR: I ran 100 miles, and I did it my DIY way: my own course, my own (slow) pace, with sleep breaks, a lot of fueling, and a lot of satisfaction from setting big goals and attempting to achieve them. I think, for me, the process goals of figuring out how to even safely attempt ultramarathons are even more rewarding than the mileage milestones of ultrarunning.

Running a multi-day ultramarathon by Dana M. Lewis from DIYPS.org

Why DIY AID in 2023? #ADA2023 Debate

I was asked to participate in a ‘debate’ about AID at #ADA2023 (ADA Scientific Sessions), representing the perspective that DIY systems should be an option for people living with diabetes.

I present this perspective as a person with type 1 diabetes who has been using DIY AID for almost a decade (and as a developer/contributor to the open source AID systems used in DIY) – please note my constant reminder that I am not a medical doctor.

Dr. Gregory P. Forlenza, an Associate Professor from Barbara Davis Center, presented a viewpoint as a medical doctor practicing in the US.

FYI: here are my disclosures and Dr. Forlenza’s disclosures:

On the left is my slide (Dana M. Lewis) showing I have no commercial support or conflicts of interest. My research in the last 3 years has previously been funded by the New Zealand Health Research Council (for the CREATE Trial); JDRF; and DiabetesMine. Dr. Forlenza lists research support from NIH, JDRF, NSF, Helmsley Charitable Trust, Medtronic, Dexcom, Abbott, Insulet, Tandem, Beta Bionics, and Lilly. He also lists Consulting/Speaking/AdBoard: Medtronic, Dexcom, Abbott, Insulet, Tandem, Beta Bionics, and Lilly.

I opened the debate with my initial presentation. I talk about the history of DIY in diabetes going back to the 1970s, when people with diabetes had to “DIY” with blood glucose meters, because initially healthcare providers did not want people to fingerstick at home, lest they do something with the information. Similarly, insulin pumps and CGMs have been used in different “DIY” ways over the years – notably, people with diabetes began dosing insulin using CGM data for years before CGMs were approved for that purpose. It’s therefore less of a surprise, in that context, to think about DIY being done for AID. (If you’re reading this, you probably also know that DIY AID was done years before commercial AID was even available, and that there are multiple DIY systems with multiple pump and CGM options, algorithms, and phone options.)

And, for people with diabetes, using DIY is very similar to the many things doctors recommend or prescribe off-label. Diabetes has a LOT of these types of recommendations: insulins used in pumps that weren’t approved for that type of insulin; medications for Type 2 being used for Type 1 (and vice versa); and other things that aren’t regulatory approved at all but are often recommended anyway. For example, GLP-1s that are approved for weight management but not glycemic control are often prescribed for glycemic control reasons. Or things like Vitamin D, which is widely prescribed or recommended as a supplement even though it is not regulatory-approved as a pharmaceutical agent.

I always like to emphasize that although open source AID is not necessarily regulated (but can be: one open source system recently received regulatory clearance), that’s not a synonym for ‘no evidence’. There’s plenty of high quality scientific evidence on DIY and non-DIY use of open source AID. There’s even a recent RCT in the New England Journal of Medicine, not to mention several other RCTs (see here and here, plus another publication forthcoming). In addition to those gold-standard RCTs, there are also reviews of large-scale big data datasets from people with diabetes using AID, such as this one where we reviewed 122 people’s glucose data representing 46,070 days’ worth of data; or another forthcoming publication where we analyzed n=75 unique (distinct from the previous dataset) DIY AID users with 36,827 days of data (an average of 491 days per participant) and also found above-goal TIR outcomes (e.g. mean TIR 70-180 mg/dL of 82.08%).

Yet people often choose to DIY with AID not just for the glucose outcomes. Yes, commercial AID systems (especially the second-generation ones) can similarly reach the goal of 70+% TIR on average. DIY provides more choice about the type and amount of work people with diabetes have to put IN to these systems in order to get those above-goal OUTcomes. They can choose, overall or situationally, whether to bolus, count carbs precisely, announce meals at all, or only announce relative meal size – while still achieving >80% TIR, no or little hypoglycemia, and less hyperglycemia. Many people using DIY AID for years have been doing no boluses and/or no meal announcements at all, bringing this closer to a full closed loop – or at least, an AID system with very, very little user input required on a daily basis, if they so choose. I presented data back in 2018(!) showing how this was being done in DIY AID, and it was recently confirmed in a randomized controlled trial (hello, gold standard!) showing that traditional use (with meal announcements and meal boluses), meal announcement only (no boluses), and no announcement nor bolusing all got similar outcomes in terms of TIR (all above-goal). There was also no difference across those modes in total daily insulin dose (TDD) or amount of carb intake. There was a small difference: time below range was slightly higher in the first mode (where people were counting carbs and bolusing) compared to the other two modes – which suggests that MORE user input may actually be limiting the capabilities of the system!

The TLDR here is that people with diabetes can do less work/provide less input into AID and still achieve the same level of ideal, above-goal outcomes – and ongoing studies are showing the increased QOL and other patient-reported outcomes that also improve as a result.

Again, people may be predisposed to think that the main difference between commercial and DIY is whether or not it is regulatory approved (and therefore prescribable by doctors and able to be supported by a company under warranty); the bigger differences are instead around interoperability across devices, data access, and transparency of how the system works.

There’s even an international consensus statement on open source AID, created by an international group of 48 medical and legal experts, endorsed by 9 national and international diabetes organizations, supporting that open source AID used in DIY AID is a safe and effective treatment option, confirming that the scientific evidence exists and it has the potential to help people with diabetes and reduce the burden of diabetes. They emphasize that doctors should support patient (and caregiver) autonomy and choice of DIY AID, and state that doctors have a responsibility to learn about all options that exist including DIY. The consensus statement is focused on open source AID but also, in my opinion, applies to all AID: they say that AID systems should fully disclose how they operate to enable informed decisions and that all users should have real-time and open access to their own data. Yes, please! (This is true of DIY but not true of all commercial systems.)

The elephant in the room that I always bring up is cost, insurance coverage, and therefore access and accessibility of AID. In many places, government or private insurance won’t cover AID. For example, the proposed NICE guidelines in the UK wouldn’t provide AID to everyone who wants one. In other places, some people can get their pump covered but not their CGM, or vice versa, and must pay out of pocket. So in some cases, DIY has out of pocket costs (because it’s not covered by insurance) but is still cheaper than commercial AID with insurance coverage (if it’s even covered).

I also want to remind everyone that choosing to DIY – or not – is not a once-in-a-lifetime decision. People who use DIY choose every day to use it and continue to use it; at any time, they could and some do choose to switch to a commercial system. Others try commercial, switch back to DIY, and switch back and forth over time for various reasons. It’s not a single or permanent decision to DIY!

The key point is: DIY AID provides safety and efficacy *and* user choice for people with diabetes.

Dr. Forlenza followed my presentation, talking about commercial AID systems and how they’ve moved through development more quickly recently. He points to the RCTs for each approved commercial system, saying commercial AID systems work, and describing different feature sets and variety across commercial systems. He shared his thoughts on the advantages of commercial systems, including integration between components by the companies; regulatory approval, meaning these systems can be prescribed by healthcare providers; company-provided warranties; and company-provided training and support for healthcare providers and patients.

He makes a big point about a perceived reporting bias in social media, which is a valid point, and talks about people who cherry-pick (my words) data to share online about their TIR.

He puts an observational study and the CREATE Trial RCT data up next to the commercial AID systems’ RCT data, showing how second-generation commercial AID systems reach similar TIR outcomes.

He then says “what are you #notwaiting for?”, pointing out that in the US there are 4 commercial systems FDA approved for type 1 diabetes. He says, “Data from the DIY trials themselves demonstrate that DIY users, even with extreme selection bias, do not achieve better glycemic control than is seen with commercial systems.” He concludes that commercial AID offers a wide variety of options; that commercial systems achieve target-level outcomes; that, in his perception, both glucose outcomes and QOL are being addressed by the commercial market; and that “we do not need Unapproved DIY solutions in this space”.

After Dr. Forlenza’s presentation, I began my rebuttal, starting with pointing out that he is incorrectly conflating perceived biases/self-reporting of social media posts with gold-standard, rigorously performed scientific trials evaluating DIY. Data from DIY AID trials do not suffer from ‘selection bias’ any more than commercial AID trials do. (In fact, all clinical trials have their own aspects of selection bias, although that isn’t the point here.) I reminded the audience of the not one but multiple RCTs available as well as dozens of other prospective and retrospective clinical trials. Plus, we have 82,000+ data points analyzed showing above-goal outcomes, and many studies that evaluate this data and adjust for starting outcomes still show that people with diabetes who use DIY AID benefit from doing so, regardless of their starting A1c/TIR or demographics. This isn’t cherry-picked social media anecdata.

When studies are done rigorously, as they have been done in DIY, we agree that now second-generation commercial AID systems reach (or exceed, depending on the system) ADA standard of care outcomes. For example, Dr. Forlenza cited the OP5 study with 73.9% TIR which is similar to the CREATE Trial 74.5% TIR.

My point is not that commercial systems don’t work; my point is that DIY systems *do* work and that the fact that commercial systems work doesn’t then override the fact that DIY systems have been shown to work, also! It’s a “yes, and”! Yes, commercial AID systems work; and yes, DIY AID systems work.

The bigger point, which Dr. Forlenza does not address, is that the person with diabetes should get to CHOOSE what is best for them, which is not ONLY about glucose outcomes. Yes, a commercial system- like DIY AID – may help someone get to goal TIR (or above goal), but DIY provides more choice in terms of the input behaviors required to achieve those outcomes! There’s also possible choice of systems with different pumps or CGMs, different (often lower) cost, increased data access and interoperability of data displays, different mobile device options, and more.

Also, supporting user choice of DIY is in fact A STANDARD OF CARE!

It’s in the ADA’s Standards of Care, in fact, as I wrote about here when observing that it’s in the 2023 Standards of Care…as well as in 2022, 2021, 2020, and 2019!

I wouldn’t be surprised if there are people attending the debate who think they don’t have any – or many – patients using DIY AID. For those who think that (or are reading this thinking the same), I ask a question: how many patients have you asked if they are using DIY AID?

There are a bunch of reasons why it may not have come up, if you haven’t asked:

  • They may use the same consumables (sites, reservoirs) with a different or previous pump in a DIY AID system.
  • Their prescribed pump (particularly in Europe and non-US places that have Bluetooth-enabled pumps) may be usable in a DIY AID.
  • They may not be getting their supplies through insurance, so their prescription doesn’t match what they are currently using.
  • Or, they have more urgent priorities to discuss at appointments, so it doesn’t come up.
  • Or, it’s also possible that it hasn’t come up because they don’t need any assistance or support from their healthcare provider.

Speaking of learning and support, it’s worth noting that in DIY AID, because it is open source and the documentation is freely available, users typically begin learning about the system before starting closed loop (automated insulin delivery). As a result, the process of understanding and developing trust in the system begins before the closed loop start as well. In contrast, much of the time there is limited education available prior to receiving the prescription for a commercial AID; it tends to align with the timeline of starting the device. Additionally, because a commercial system is a “black box” with fewer available details about exactly how it works (and why), developing trust can be a slower process that occurs only after a user begins to use the device.

This learning-and-trust timeline is something that needs more attention in commercial AID moving forward.

I closed my rebuttal section by asking a few questions out loud:

I wonder how healthcare providers feel when patients learn something before they do – which is often what happens with DIY AID. Does it make you uncomfortable, excited, curious, or some other feeling? Why?

I encouraged healthcare providers to consider when they are comfortable with off-label prescriptions (or recommending things that aren’t approved, such as Vitamin D), and reflect on how that differs from understanding patients’ choices to DIY.

I also prompted everyone to consider whether they’ve actually evaluated (all of) the safety and efficacy data, of which many studies exist. And to consider who benefits from each type of system, not only commercial/DIY but individual systems within those buckets. And to consider who gets offered/prescribed AID systems (of any sort) and whether subconscious biases around tech literacy, previous glucose outcomes, and other factors (race, gender, other demographic variables) result in particular groups of people being excluded from accessing AID. I also reminded everyone to think about what financial incentives influence access to and availability of AID education, and where that education comes from.

Although Dr. Forlenza’s rebuttal followed mine, I’ll summarize it here before finishing the recap of my own: he talked about individual selection bias and cherry-picked data, acknowledging that these can occur in anecdotes about commercial systems as well; the distinction between regulatory approval, off-label use, and unapproved devices; and legal concerns for healthcare providers. He closed by pointing out that many PWD see primary care providers, that he doesn’t believe it is reasonable to expect PCPs to become familiar with DIY since there are no paid device representatives to support their learning, and that growth of AID requires industry support.

People probably wanted to walk out of this debate with a black and white, clear answer on what is the ‘right’ type of AID system: DIY or commercial. The answer to that question isn’t straightforward, because it depends.

It depends on whether a system is even AVAILABLE. Not all countries have regulatory-approved systems available, meaning commercial AID is not available everywhere. Some places and people are also limited by ACCESSIBILITY, because their healthcare providers won’t prescribe an AID system to them; or insurance won’t cover it. AFFORDABILITY, even with insurance coverage, also plays a role: commercial AID systems (and even pump and CGM components without AID) are expensive and not everyone can afford them. Finally, ADAPTABILITY matters for some people, and not all systems work well for everyone.

When these factors align – available, accessible, affordable, and adaptable – there are commercial systems that meet those needs for some people, in some places, in some situations. But for other people, in other places, in other situations, DIY systems can instead (or also) meet that need.

The point is, though, that we need a bigger overlap of these criteria! We need MORE AID systems to be available, accessible, affordable, and adaptable. Those can either be commercial or DIY AID systems.

The point that Dr. Forlenza and I readily agree on is that we need MORE AID – not less.

This is why I support user choice for people with diabetes and for people who want – for any variety of reasons – to use a DIY system to be able to do so.


PS – I also presented a poster at #ADA2023 about the high prevalence rates of exocrine pancreatic insufficiency (EPI / PEI / PI) in Type 1 and Type 2 diabetes – you can find the poster and a summary of it here.

How I Use LLMs like ChatGPT And Tips For Getting Started

You’ve probably heard about new AI (artificial intelligence) tools like ChatGPT, Bard, Midjourney, DALL-E and others. But, what are they good for?

Last fall I started experimenting with them. I looked at AI art tools and found them to be challenging, at the time, for one of my purposes, which was creating characters and illustrating a storyline with consistent characters for some of my children’s books. I also tested GPT-3 (meaning version 3.0 of GPT). It wasn’t that great, to be honest. But later, GPT-3.5 was released, along with the ChatGPT chat interface to it, which WAS a big improvement for a lot of my use cases. (And now, GPT-4 is out and is an even bigger improvement, although it costs more to use. More on the cost differences below.)

So what am I using these AI tools for? And how might YOU use some of these AI tools? And what are the limitations? This is what I’ve learned:

1. The most frequent way I use these AI tools is for getting started on a project, especially those related to writing.

You know the feeling of staring at a blank page and not knowing where to start? Maybe it’s the blank page of a cold email; the blank page of an essay or paper you need to write; the blank page of the outline for a presentation. Starting is hard!

Even for this blog post, I had a list of bulleted notes of things I wanted to remember to include. But I wasn’t sure how I wanted to start the blog post or incorporate them. I stuck the notes in ChatGPT and asked it to expand the notes.

What did it do? It wrote a few-paragraph summary. Which isn’t what I wanted, so I asked again, telling it to use the notes and this time “expand each bullet into a few sentences, rather than summarizing”. With these clear directions, it did, and I was able to look at the content and decide what I wanted to edit, include, or remove.

Sometimes I’m stuck on a particular writing task, and I use ChatGPT to break it down. In addition to kick-starting any type of writing overall, I’ve asked it to:

  • Take an outline of notes and summarize them into an introduction; limitations section; discussion section; conclusion; one paragraph summary; etc.
  • Take a bullet point list of notes and write full, complete sentences.
  • Take a long list of notes I’d written about data extracted from a systematic review I was working on, and ask it about recurring themes or outlier concepts. Especially when I had 20 pages (!) of hand-written notes in bullets with some loose organization by section, I could feed in chunks of content and get help seeing the big picture across all 20 pages. It can highlight themes in the data based on the written narratives around the data. (A toy sketch of the chunking step follows this list.)
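Here’s a toy sketch of that chunking step in Python. To be clear, this is a hypothetical helper written for illustration, not a script from the actual project, and the 8,000-character limit is an assumption you’d adjust for your model’s context window:

    # chunk_notes.py - split a long notes file into chat-sized pieces,
    # keeping paragraphs intact so themes don't get cut mid-thought.
    def chunk_text(text: str, max_chars: int = 8000) -> list[str]:
        chunks, current = [], ""
        for para in text.split("\n\n"):
            # Start a new chunk if adding this paragraph would overflow.
            # (A single paragraph longer than max_chars still becomes
            # its own oversized chunk - fine for a sketch.)
            if current and len(current) + len(para) > max_chars:
                chunks.append(current)
                current = ""
            current += para + "\n\n"
        if current:
            chunks.append(current)
        return chunks

    if __name__ == "__main__":
        with open("notes.txt", encoding="utf-8") as f:
            for i, chunk in enumerate(chunk_text(f.read()), start=1):
                with open(f"notes_chunk_{i}.txt", "w", encoding="utf-8") as out:
                    out.write(chunk)

Then you paste each chunk into the chat with a consistent prompt, something like “here is part N of my notes; track recurring themes across the parts.”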

A lot of times, the best thing it does is it prompts my brain to say “that’s not correct! It should be talking about…” and I’m able to more easily write the content that was in the back of my brain all along. I probably use 5% of what it’s written, and more frequently use it as a springboard for my writing. That might be unique to how I’m using it, though, and other simple use cases such as writing an email to someone or other simplistic content tasks may mean you can keep 90% or more of the content to use.

2. It can also help analyze data (caution alert!) if you understand how the tools work.

Huge learning moment here: these tools are called LLMs (large language models). They are trained on large amounts of language. They’re essentially designed to predict what content “sounds” like it would come after a given prompt, based on all of the words (language) they’ve taken in previously. So if you ask one to write a song or a haiku, it “knows” what a song or a haiku “looks” like, and can generate words to match those patterns.

It’s essentially a PATTERN MATCHER on WORDS. Yeah, I’m yelling in all caps here because this is the biggest confusion I see. ChatGPT and most of these LLMs don’t have access to the internet; they’re not looking up an answer in a search engine. If you ask it a question about a person, it’s going to give you an answer (because it knows what this type of answer “sounds” like), but depending on the amount of information it “remembers”, some may be accurate and some may be 100% made up.
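If “pattern matcher on words” feels abstract, here’s a cartoon version in a few lines of Python. This is my own toy illustration, not how ChatGPT actually works (real LLMs use neural networks over tokens trained on enormous corpora), but it captures the core move: given what came before, emit something statistically likely to come next.

    # toy_bigram.py - a cartoon 'LLM': predict the next word from pair counts.
    import random
    from collections import Counter, defaultdict

    training_text = "the cat sat on the mat and the cat ate the food"
    words = training_text.split()

    # Count which word tends to follow which (bigram counts).
    following = defaultdict(Counter)
    for w1, w2 in zip(words, words[1:]):
        following[w1][w2] += 1

    def predict_next(word: str) -> str:
        options = following.get(word)
        if not options:
            # Never seen this word: guess anyway (i.e., 'make something up').
            return random.choice(words)
        # Sample proportionally to how often each continuation was seen.
        candidates, counts = zip(*options.items())
        return random.choices(candidates, weights=counts)[0]

    print(predict_next("the"))  # usually 'cat', sometimes 'mat' or 'food'

Notice the toy model answers confidently even for a word it has never seen. Scaled up billions of times, that’s essentially why a real LLM will confidently “answer” questions it has no facts for.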

Why am I explaining this? Remember the above section where I highlighted how it can start to sense themes in the data? It’s not answering solely based on the raw data; it’s not doing analysis of the data, but mostly of the words surrounding the data. For example, you can paste in data (from a spreadsheet) and ask it questions. I did that once, pasting in some data from a pivot table and asking it the same question I had asked myself in analyzing the data. It gave me the same sense of the data that I had based on my own analysis, then pointed out it was only qualitative analysis and that I should also do quantitative statistical analysis. So I asked it if it could do quantitative statistical analysis. It said yes, it could, and spit out some numbers and described the methods of quantitative statistical analysis.

But here’s the thing: those numbers were completely made up!

It can’t actually use (in its current design) the methods it was describing verbally, and instead made up numbers that ‘sounded’ right.

So I asked it to describe how to do that statistical method in Google Sheets. It provided the formula and instructions; I did that analysis myself; and confirmed that the numbers it had given me were 100% made up.

The takeaway here is: it outright said it could do a thing (quantitative statistical analysis) that it can’t do. It’s like a human in some regards: some humans will lie or fudge and make stuff up when you talk to them. It’s helpful to be aware and query whether someone has relevant expertise, what their motivations are, etc. in determining whether or not to use their advice/input on something. The same should go for these AI tools! Knowing this is an LLM and it’s going to pattern match on language helps you pinpoint when it’s going to be prone to making stuff up. Humans are especially likely to make something up that sounds plausible in situations where they’re “expected” to know the answer. LLMs are in that situation all the time: sometimes they actually do know an answer, sometimes they have a good guess, and sometimes they’re just pattern matching and coming up with something that sounds plausible.

In short:

  • LLMs can expand general concepts and write language about what is generally well known, based on their training data.
  • Try to ask it a particular fact, though, and it’s probably going to make stuff up, whether that’s about a person or a concept – you need to fact check it elsewhere.
  • It can’t do math!

But what it can do is teach you or show you how to do the math, the coding, or whatever thing you wish it would do for you. And this gets into one of my favorite use cases for it.

3. You can get an LLM to teach you how to use new tools, solve problems, and lower the barrier to entry (and friction) on using new tools, languages, and software.

One of the first things I did was ask ChatGPT to help me write a script. In fact, that’s what I did to expedite the process of finding tweets where I had used an image in order to get a screenshot to embed on my blog, rather than embedding the tweet.

It’s now so easy to generate code for scripts, regardless of which language you have previous experience with. I used to write all of my code as bash scripts, because that’s the format I was most familiar with. But ChatGPT likes to do things as Python scripts, so after it generated a Python script, I asked it simple questions like “how do I call a python script from the command line”. Sure, you could search in a search engine or Stack Overflow for similar questions and get the same information. But one nice thing is that if you have it generate a script and then ask it step by step how to run it, it gives you instructions in the context of what you were doing. So instead of saying “to run a script, type ‘python script.py’” with placeholder names, it says “to run the script, use ‘python actual-name-of-the-script-it-built-you.py’”, and you can click to copy that, paste it in, and hit enter. That saves a lot of time otherwise spent figuring out how to adapt placeholder information (which is what you get from a traditional search engine result or Stack Overflow, where people are fond of things like saying FOOBAR and you have no idea whether that means something or is meant to be a placeholder). Careful observers will notice that the latest scripts I’ve added to my Open Humans Data Tools repository (which is packed with a bunch of scripts to help work with big datasets!) are now in Python rather than bash, such as the new scripts for fellow researchers looking to check for updates in big datasets (like the OpenAPS Data Commons). This is because I used GPT to help with those scripts!
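To make that concrete, here’s the flavor of the exchange (a made-up miniature for illustration, not one of my actual scripts): ask for “a script that counts the rows in a CSV file” and you’ll get something like this, followed by the exact command to run it with the real filename.

    # count_rows.py - the kind of small starter script these tools produce.
    import csv
    import sys

    def count_rows(path: str) -> int:
        # Count every row in the CSV, header included.
        with open(path, newline="", encoding="utf-8") as f:
            return sum(1 for _ in csv.reader(f))

    if __name__ == "__main__":
        print(count_rows(sys.argv[1]))

And then, instead of a placeholder, it tells you: run python count_rows.py your-actual-file.csv.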

It’s really easy now to go from an idea to a script. If you’re able to describe it logically, you can ask it to write a script, tell you how to run it, and help you debug it. Sometimes you can start by asking it a question, such as “Is it possible to do Y?”, and it describes a method. You need to test the method or check it elsewhere, but things like uploading a list of DOIs to Mendeley to save me hundreds of clicks? I didn’t realize Mendeley had an API, or that I could write a script to use it! ChatGPT helped me write the script, figure out how to create a developer account and app access information for Mendeley, and debug along the way, so that within an hour and a half I had a tool that easily saved me 3 hours on the very first project I used it with.
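For the curious, the heart of that Mendeley script was a loop like the one below. Caveat: I’m sketching this from memory of Mendeley’s REST API docs rather than pasting the actual script, so treat the endpoint, content type, and payload shape as assumptions to verify at dev.mendeley.com, and note you’d need your own OAuth access token from a registered Mendeley app.

    # add_dois.py - hedged sketch: add a list of DOIs to a Mendeley library.
    # ASSUMPTIONS to verify against Mendeley's API docs: endpoint URL,
    # content type header, and payload fields.
    import requests

    ACCESS_TOKEN = "your-oauth-token-here"  # placeholder
    DOIS = ["10.1000/example-1", "10.1000/example-2"]  # placeholder DOIs

    for doi in DOIS:
        response = requests.post(
            "https://api.mendeley.com/documents",
            headers={
                "Authorization": f"Bearer {ACCESS_TOKEN}",
                "Content-Type": "application/vnd.mendeley-document.1+json",
            },
            # Mendeley documents need a title; using the DOI as a stand-in.
            json={"type": "journal", "title": doi, "identifiers": {"doi": doi}},
        )
        print(doi, response.status_code)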

I’m gushing about this because there’s probably a lot of ideas you have that you immediately throw out as being too hard, or you don’t know how to do it. It takes time, but I’m learning to remember to think “I should ask the LLM this” and ask it questions such as:

  • Is it possible to do X?
  • Write a script to do X.
  • I have X data. Pretend I am someone who doesn’t know how to use Y software and explain how I should do Z.

Another thing I’ve done frequently is ask it to help me quickly write a complex formula to use in a spreadsheet. Such as “write a formula that can be used in Google Sheets to take an average of the values in M3:M84 if they are greater than zero”.

It gives me the formula, and also describes it, and in some cases, gives alternative options.
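(For that particular prompt, the formula is a one-liner: =AVERAGEIF(M3:M84, ">0"). And if you have multiple conditions, the variant to ask about is AVERAGEIFS.)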

Other things I’ve done with spreadsheets include:

  • Ask it to write a conditional formatting custom formula, then give me instructions for expanding the conditional formatting to apply to a certain cell range.
  • Ask it to check if a cell is filled with a particular value and then repeat that value in a new cell, in order to create new data series to use in particular charts and graphs I wanted to create from my data.
  • Ask it to help me transform my data so I could generate a box and whisker plot.
  • Ask it for other visuals that might be effective ways to illustrate and visualize the same dataset.
  • Ask it to explain the difference between two similar formulas (e.g. COUNT and COUNTA, or when to use IF vs. IFS) – see the quick reference after this list.
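A quick reference on that last example, since it trips a lot of people up: =COUNT(A1:A10) counts only cells containing numbers, while =COUNTA(A1:A10) counts every non-empty cell, text included; and IFS lets you list multiple condition/value pairs without nesting IF inside IF.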

This has been incredibly helpful especially with some of my self-tracked datasets (particularly around thyroid-related symptom data) where I’m still trying to figure out the relationship between thyroid levels, thyroid antibody levels, and symptom data (and things like menstrual cycle timing). I’ve used it for creating the formulas and solutions I’ve talked about in projects such as the one where I created a “today” line that dynamically updates in a chart.

It’s also helped me get past the friction of setting up new tools. Case in point: Jupyter notebooks. I’ve used them in the web browser version before, but often had issues running the notebooks people gave me. I debugged and did all kinds of troubleshooting, but for years was never able to get Jupyter successfully installed locally on (multiple of) my computers. I had finally given up on effectively using notebooks, and definitely on running them locally on my machine.

However, I decided to see if I could get ChatGPT to coax me through the install process.

I told it:

“I have this table with data. Pretend I am someone who has never used R before. Tell me, step by step, how to use a Jupyter notebook to generate a box and whisker plot using this data”

(and I pasted my data that I had copied from a spreadsheet, then hit enter).

It outlined exactly what I needed to do: install Jupyter Notebook locally if I hadn’t already (with the code to do that), install the R kernel (and how to do that), then start a notebook, all the way down to what code to put in the notebook, the transformed data I could copy/paste, and all the code that generated the plot.

However, remember I have never been able to successfully get Jupyter Notebooks running! For years! I was stuck on step 2, installing R. I said:

“Step 2, explain to me how I enter those commands in R? Do I do this in Terminal?”

It said “Oh apologies, no, you run those commands elsewhere, preferably in Rstudio. Here is how to download RStudio and run the commands”.
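(For anyone stuck at that same step: the standard IRkernel setup – which I believe is essentially what it walked me through – is two commands run inside R or RStudio, not in Terminal: install.packages("IRkernel") and then IRkernel::installspec(), which registers R as a kernel that Jupyter can find.)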

So, like humans often do, it glossed over a crucial step. But it went back and explained it to me and kept giving more detailed instructions and helping me debug various errors. After 5-6 more troubleshooting steps, it worked! And I was able to open Jupyter Notebooks locally and get it working!

All along, most of the tutorials I had been reading had skipped or glossed over that I needed to do something with R, and where that was. Probably because most people writing the tutorials are already data scientists who have worked with R and RStudio etc, so they didn’t know those dependencies were baked in! Using ChatGPT helped me be able to put in every error message or every place I got stuck, and it coached me through each spot (with no judgment or impatience). It was great!

I was then able to continue with the other steps of getting my data transformed, into the notebook, running the code, and generating my first ever box and whisker plot with R!

A box and whisker plot, illustrated simply to show that I used R and Jupyter finally successfully!

This is where I really saw the power of these tools, reducing the friction of trying something new (a tool, a piece of software, a new method, a new language, etc.) and helping you troubleshoot patiently step by step.

Does it sometimes skip steps or give you solutions that don’t work? Yes. But it’s still a LOT faster than manually debugging, trying to find someone to help, or spending hours in a search engine or Stack Overflow trying to translate generic code/advice/solutions into something that works on your setup. The beauty of these tools is you can simply paste in the error message and it goes “oh, sorry, try this to solve that error”.

Because the barrier to entry is so low (compared to before), I’ve also asked it to help me with other project ideas where I previously didn’t want to spend the time needed to learn new software and languages and all the nuances of getting from start to end of a project.

Such as, building an iOS app by myself.

I have a ton of projects where I want to temporarily track certain types of data for a short period of time. My fallback is usually a spreadsheet on my phone, but it’s not always easy to quickly enter data in a phone spreadsheet, even if you set up a template with a drop-down menu like I’ve done in the past (for my DIY macronutrient tool, for example). Right now, I want to see if there’s a correlation between my blood pressure at different times and patterns of eyelid inflammation and heart rate symptoms (which, for me, are symptoms of thyroid antibodies being out of range, due to Graves’ disease). That means I need to track my symptom data, but also now some blood pressure data. I want to be able to put these datasets together easily, which I can, but the hardest part (so to speak) is finding a way that I am willing to record my blood pressure data. I don’t want to use an existing BP tracking app, I don’t want a connected BP monitor, and I don’t want to use Apple Health. (Yes, I’m picky!)

I decided to ask ChatGPT to help me accomplish this. I told it:

“You’re an AI programming assistant. Help me write a basic iOS app using Swift UI. The goal is a simple blood pressure tracking app. I want the user interface to default to the data entry screen where there should be three boxes to take the systolic, diastolic blood pressure numbers and also the pulse. There should also be selection boxes to indicate whether the BP was taken sitting up or laying down. Also, enable the selection of a section of symptom check boxes that include “HR feeling” and “Eyes”. Once entered on this screen, the data should save to a google spreadsheet.” 

This is a completely custom, DIY, n of 1 app. I don’t care about it working for anyone else, I simply want to be able to enter my blood pressure, pulse, whether I’m sitting or laying down, and the two specific, unique to me symptoms I’m trying to analyze alongside the BP data.

And it helped me build this! It taught me how to set up a new SwiftUI project in Xcode, gave me code for the user interface, showed me how to set up API access to Google Sheets, helped me write the code to save the data to Sheets, and got the app running.

(I am still debugging the connection to Google Sheets, so in the interim I changed my mind and had it create another screen to display the stored data then enable it to email me a CSV file, because it’s so easy to write scripts or formulas to take data from two sources and append it together!)

Is it fancy? No. Am I going to try to distribute it? No. It’s meeting a custom need to enable me to collect specific data super easily over a short period of time in a way that my previous tools did not enable.

Here’s a preview of my custom app running in a simulator phone:

Simulator iphone with a basic iOS app that intakes BP, pulse, buttons for indicating whether BP was taken sitting or laying down; and toggles for key symptoms (in my case HR feeling or eyes), and a purple save button.

I did this in a few hours, rather than taking days or weeks. And now the barrier to entry for creating more custom iOS apps is reduced, because I’m more comfortable working with Xcode and the file structures and what it takes to build and deploy an app! Sure, I could have learned to do this in other ways, but the learning curve is drastically shortened, and it takes away most of the ‘getting started’ friction.

That’s the theme across all of these projects:

  • Barriers to entry are lower and it’s easier to get started
  • It’s easier to try things, even if they flop
  • There’s a quicker learning curve on new tools, technologies and languages
  • You get customized support and troubleshooting without having to translate through as many generic placeholders

PS – speaking of iOS apps, based on building this one simple app I had the confidence to try building a really complex, novel app that has never existed in the world before! It’s for people with exocrine pancreatic insufficiency like me who want to log pancreatic enzyme replacement therapy (PERT) dosing and improve their outcomes – check out PERT Pilot and how I built it here.

4. Notes about what these tools cost

I found ChatGPT useful for getting started on writing projects, even though the content wasn’t that great (that was on GPT-3.5, too). Then they came out with GPT-4 and the paid ChatGPT subscription option for $20/month. I didn’t think it was worth it and resisted it. Then I finally decided to try it, because some of the more sophisticated use cases I wanted it for required a longer context window, and in addition to a better model the subscription also gave you that longer context window. I paid the first $20 assuming I’d want to cancel it by the end of the month.

Nope.

The $20 has been worth it on every single project that I’ve used it for. I’ve easily saved 5x that on most projects in terms of reducing the energy needed to start a project, whether it was writing or developing code. It has saved 10x that in time cost recouped from debugging new code and tools.

GPT-4 does have caps, though, so even with the $20/month, you can only do 25 messages every 3 hours. I try to be cognizant of which projects I default to using GPT-3.5 on (unlimited) versus saving the more sophisticated projects for my GPT-4 quota.

For example, I saw a new tool someone had built called “AutoResearcher”, downloaded it, and tried to use it. I ran into a bug and pasted the error into GPT-3.5 and got help figuring out where the problem was. Then I decided I wanted to add a feature to output to a text file, and it helped me quickly edit the code to do that, and I PR’ed it back in and it was accepted (woohoo) and now everyone using that tool can use that feature. That was pretty simple and I was able to use GPT-3.5 for that. But sometimes, when I need a larger context window for a more sophisticated or content-heavy project, I start with GPT-4. When I run into the cap, it tells me when my next window opens up (3 hours after I started using it), and I usually have an hour or two until then. I can open a new chat on GPT-3.5 (without the same context) and try to do things there; switch to another project; or come back at the time it says to continue using GPT-4 on that context/setup.

Why the limit? Because it’s a more expensive model. So you have a tradeoff between paying more and having a limit on how much you can use it, because of the cost to the company.

—–

TLDR:

Most important note: LLMs don’t “think” or “know” things the way humans do. They output language they predict you want to see, based on their training and the inputs you give them. It’s like the autocomplete of a sentence in your email, but with more words on a wider range of topics!

Also, LLMs can’t do math. But they can write code. Including code to do math.

(Some, but not all, LLMs have access to the internet to look up or incorporate facts; make sure you know which LLM you are using and whether it has this feature or not.)

Ways to get started:

    1. The most frequent way I use these AI tools is for getting started on a project, especially those related to writing.
      • Ask it to help you expand on notes; write summaries of existing content; or write sections of content based on instructions you give it
    2.  It can also help analyze data (caution alert!) if you understand the limitations of the LLM.
      • The most effective way to work with data is to have it tell you how to run things in analytical software, whether that’s how to use R or a spreadsheet or other software for data analysis. Remember the LLM can’t do math, but it can write code so you can then do the math!
    3.  You can get an LLM to teach you how to use new tools, solve problems, and lower the barrier to entry (and friction) on using new tools, languages, and software.
      • Build a new habit of asking it “Can I do X” or “Is it possible to do Y” and when it says it’s possible, give it a try! Tell it to give you step-by-step instructions. Tell it where you get stuck. Give it your error messages or where you get lost and have it coach you through the process. 

What’s been your favorite way to use an LLM? I’d love to know other ways I should be using them, so please drop a comment with your favorite projects/ways of using them!

Personally, the latest project that I built with an LLM has been PERT Pilot!


How I PRed My 100k Time

I’ve been training for a big goal of mine: running a 100k in a specific amount of time. Yes, I’ve run farther than that before: last year I ran ~82 miles. However, I had someone in my family network who ran 100k last year, and I realized their time made a reasonable goal for me. I’m competitive, so the extra motivation of striving for a certain time is helpful for channeling my “racing”, even if I’m “racing” someone virtually (who ran a year ago!).

Like last year, I decided I would run my 100k (which is 62+ miles) as a solo or DIY ultramarathon. I originally plotted five laps of various lengths, then figured out I could slightly alter my longest route by almost a mile, making it so I would do 2 laps of the same length, a third lap of my original longest length, and then a fourth lap of a shorter length that’s also one of my preferred running routes. Only four laps would be mentally easier than doing five laps, even though it would end up being exactly the same distance. Like last year, I leveraged extensive planning (most of it done last year) to plan my electrolytes, enzymes, and fueling in advance. I had a lot less work to do this year, because I simply refreshed the list of gear and prep work from last year, shortened of course to match the length of my expected race (less than 18 hours vs ~24+ hours). The main thing I changed in terms of preparation is that while I set out a few “just in case” supplies, most of them I left in their places, figuring they’d be easy enough to find in the house by Scott (my husband) if I needed to ask him to bring out anything in particular. The few things I laid out were emergency medical supplies like inhaled insulin, inhaled glucagon, a backup pump site, etc. And my usual piles of supplies – clothes, fuel to refill my vest, etc – for each lap.

My 100k run supplies set out on the floor. I have a bag of OTC enzymes (for exocrine pancreatic insufficiency), 8-10 individually packaged snacks ranging from Fritos to yogurt pretzels to sandwich cookies, cashews, and beef sticks, a bag of electrolyte pills, and eye drops and disposable tooth brushes. Each lap (4 total) has a set of each of these.

One thing that was different for my 100k was my training. Last year, I was coming back from a broken toe and focused on rebuilding my feet. I found that I needed to stick with three runs per week. This year, I was back up to 4-5 runs per week and building up my long runs beginning in January, but in early February I felt like my left shin was getting niggle-y and I backed down to 3 runs a week. Plus, I was also more active on the weekends, including most weekends where we were cross-country skiing twice, often covering 10-15 miles between two days of skiing, so I was getting 3+ extra hours of “time on legs”, albeit differently than running. Instead of just keeping one longer run, a medium run, and two shorter runs (my original plan), I shifted to one long run, one medium long run (originally 8 and then jumping to 13 miles because it matched my favorite route), and the big difference was making my third run about 8 miles, too. This meant that I carried my vest and fueled for all three runs, rather than just one or two runs per week. I think the extra time training with the weight of my vest paid off, and the miles I didn’t do or the days I didn’t run didn’t seem to make a difference in regard to recovering during the weeks of training or for the big run itself. Plus, I practiced fueling every week for every run.

I also tapered differently. Once I switched to three runs a week, my shin felt a lot better. However, in addition to cross country skiing, Scott and I also have access now to an outdoor rock climbing wall (so fun!) and have been doing that. It’s a different type of workout and also helps with full body and upper body strength, while being fun and not feeling like a workout. I bring it up mostly because three weeks ago, I think I hurt the inside of my hip socket somehow by pressing off a foothold at a weird angle, and my hip started to be painful. It was mostly ok running, but I backed off my running schedule and did fewer miles for a week. The following week I was supposed to do my last longest long run – but I felt like it wouldn’t be ideal to do with my hip still feeling intermittently sore. Sometimes it felt uncomfortable running, other times it didn’t, but it didn’t feel fully back to normal. I decided to skip the last long run and stick with a week of my medium run length (I did 13, 13, and 8). That felt mostly good, and it occurred to me that two shorter weeks in a row were essentially a taper. If I didn’t feel like one more super long run (originally somewhere just under a 50k) was necessary to prepare, then I might as well consider moving my ‘race’ up. This is a big benefit of DIY’ing it, being able to adjust to injury or schedule – or the weather! The weather was also forecasted to be REALLY nice – no rain, high 50s F, and so I tentatively aimed to do a few short runs the following week with my 100k on the best weather day of the weekend. Or if the weather didn’t work out, I could push it out another week and stick with my original plan.

My taper continued to evolve, with me running 4 easy miles on Monday (without my vest) to see how my hip felt. Mostly better, but it still occasionally niggled when walking or running, which made me nervous. I discussed this endlessly with Scott, who as usual thought I was overcomplicating it and that I didn’t need to run more that week before my 100k. I didn’t like the idea of running Monday, then not running again until (Friday-Sunday, whenever it ended up being), but a friend unexpectedly was in town and free on Wednesday morning, so I went for a walk outside with her and that made it easy to choose not to run! It was going to be what it was going to be, and my hip would either let me run 100k or it would let me know to make it a regular long run day and I could stop at any time.

So – my training wasn’t ideal (shifting down to 3 runs a week) and my taper was very unexpected and evolved differently than it usually does, but listening to my body avoided major injury and I woke up feeling excited and with a good weather forecast for Friday morning, so I set off at 6am for my 100k.

(Why 6am start, if I was DIYing? My goal was to finish by 11:45pm, to beat the goal time of 11:46pm, which would have been 17 hours and 46 minutes. I could start later but that would involve more hours of running at night and keeping Scott awake longer, so I traded for an hour of running before it got light and finishing around midnight for a closer to normal bedtime for us both.)

One other major thing I did to prep: as soon as I decided to shift my race up a week, I went in and started scheduling my bedtimes, working backward from the night before the race. If I started at 6am from home, I would wake up at 5am to get ready, so I wanted to be asleep by 9pm at the latest to get close to a normal night of sleep; ideally it would be closer to 8-8:30pm. I set that race-eve bedtime, then marked each night prior 15 minutes later, so that starting out I was trying to push my bedtime from ~11pm to 10:45pm, then 10:30pm the next night, and so on. It wasn’t always super precise – I’ve done a better job hitting goal bedtimes before, but given that I did an early morning cross-country ski race on the morning of daylight saving time the week prior (ouch), it went pretty ok, and I woke up at 5am on race morning feeling rested and better than I usually do on race days. 7 hours and 45 minutes of sleep is an hour to an hour and a half less than usual, but it’s a LOT better than the 4-5 hours I might have otherwise gotten without shifting my schedule.

THE START (MILES 0-17)

My ultra running experience checklist, to highlight the good and the less good as I run. This shows that I saw stars, bunnies, and a loon and a pheasant, but did not see my usual eagles or heron, or hear any ducks splashing in the river at night.

I set out at 6am. It was 33 degrees (F), so I wore shorts and a short sleeve shirt, with a pair of fleece lined pants over my shorts and a long sleeve shirt, rain jacket, ear cover, and gloves. It was dry, which helped. I was the only one out on the trail in the dark, and I had a really bright waist lamp and was running on a paved trail, so I didn’t have issues seeing or running. I felt a bit chilly but within 3 minutes could tell I would be fine temperature-wise. As I got on the trail, I glanced up and grinned – the stars were out! That meant I could “check” something off my experience list at the very start. (I make a list of positive and less-great experiences to ‘check off’ mentally, everything from seeing the stars or bunnies or other wildlife to things like blisters, chafing, or being cold or tired or having out-of-whack glucose levels – to help me process and ‘check them off’ my list and move on after problem solving, rather than dwelling on them and getting into a negative mood.) The other thing I chuckled about at the start was passing the point where, about a half mile into my 82 miles, I had popped the bite valve off of my hydration hose, gotten water everywhere, and couldn’t find the bite valve for 3 minutes. That didn’t happen this time, phew! So this run was already off to a great start, just by nothing wild like that happening within the first few minutes. I peeled off my ear cover at 0.75 miles and my gloves at a mile. My jacket came off to tie around my waist by the second mile, and I was surprised when my alarm went off at 6:30am reminding me to take in my first fuel. My plan calls for fuel every 30 minutes, which is why I like starting at the top of the hour (e.g. 6:00am), so I can use the alarm function on my phone to pre-set alarms for the clock times when I need to fuel.

Morning sunrise during the 100k.

As I continued my run/walk, just like in all my training runs, I pulled my enzymes out of my left pocket, swallowed them, put them away, grabbed my fuel out of my right pocket (starting with chili cheese Fritos), then entered it into my fuel tracking spreadsheet so I could keep an eye on rolling calorie and sodium consumption throughout my run. (Plus, Scott can see the spreadsheet and keep an eye on it as an extra data point that I’m doing well and following all planned activities, alongside live GPS tracking and glucose tracking.) I carried on, and as the sky began to lighten, I could see frost covering the ground beside the trail – brrr! It actually felt a little colder as the sun rose, and I could see wafts of fog rolling along the river. I started to see more people out for early morning runs, and I checked my usual irritation at people who were likely only out for (3? 5? 10? Psh!) short morning runs while I was just beginning an all-day slog.

Pheasant.

I was running well and a little ahead of my expected pace, closer to my usual long run/walk paces (which have been around 14:30-14:50 min/mi lately). I was concerned it was too fast and that I would burn out as so many people do, but I did have wiggle room in my paces and had planned for an eventual slowdown regardless. I made it to the first turnaround, used the trail bathroom there, and continued on, noting that even with the bathroom stop factored in, I was still on or ahead of schedule. I texted Scott to let him know to check my paces earlier than he might otherwise, and also stopped in my tracks to take a picture of a quail-like bird (which Scott thinks was a pheasant) that I’d never seen before. Lap 1 continued well, and I was feeling good and maintaining an overall sub-15 pace while I had been planning for a ~15:10 average pace. Although Scott told me he didn’t need me to warn him about being particular miles away from aid station stops, I saw he was still at home when I was less than a mile out, so I texted him. He was finishing a work call and had to rush to finish packing and come meet me. It wouldn’t have been a big deal if he had “missed” me at the expected turnaround spot, because there are other benches and places where we could have met after that, but I think he was still stressed out (sorry!) about it, although I wasn’t. He biked up to me right at the turnaround spot, grabbed my vest, and headed back to our normal table for refueling, while I used the bathroom and then headed out to meet him.

The other thing that might have stressed him out a little – and did stress me out a little – was my glucose levels. They were at normal levels for me during a run, around ~150mg/dL, in the first 2-3 hours. This is higher than I normally like to be at non-running times, but is reasonable for long runs. I usually run a bit higher at the start and then settle in around 120-130mg/dL, because having too much insulin on board at the start from breakfast is prone to causing lows in the first hour; therefore I reduce insulin prior to the run so that the first hour or so runs higher. However, instead of coming down as usual, my glucose began a steady rise from 150 to 180. That was weird, but maybe it was a physiological response to the stress? I issued a correction, but it kept rising. I crossed 200 when I should have been beginning to flatten, and it kept going. What on earth? I idly passed my hand over my abdomen to check my pump site, and couldn’t feel it. It had come unclipped!!! This was super frustrating, because it meant I didn’t know how much insulin was in my body or when it had come unclipped. (Noteworthy: in 20+ years of using an insulin pump, this had NEVER happened before until this month, and it has now happened twice, so I need to record the batch/lot numbers and report it – this batch of sites is easily coming unclipped with a tug on the tubing, which is clearly dangerous because you can’t feel it come unclipped and don’t know until you see rising glucose levels.) “Luckily”, though, this happened when I was within 30 minutes or so of being back to Scott, so I texted him and told him to grab the inhaled insulin baggie I had set out, and said I would use that at the aid station to more quickly get my body back into a good state, both in terms of feeling the insulin action and normalizing glucose levels. (For those who don’t know: injected/pump insulin takes ~45 minutes to peak activity in the body, whereas inhaled insulin is much faster, in the ballpark of ~15-20 minutes to peak action, so in situations like this I prefer, when possible, to use inhaled insulin to normalize how my body is feeling while fixing the pump site so normal insulin resumes from then on.)

As planned, at every aid station stop he brought water and ice to refill my camelback, which he did while I was at the bathroom. When I came up to the table where he was, I quickly did some inhaled insulin. Then I sat down and took off my socks and shoes and inspected my feet. My right foot felt like it had been rubbing on the outside slightly, so I added a piece of kinesiology tape to the outer edge of my foot. I already had pieces on the bottom of my feet to help prevent blisters like I got during my 82, and those seemed to be working, and it was quick and easy to add a straight piece of tape, re-stick pieces of lamb’s wool next to each big toe (to prevent blisters there), put fresh socks on, and put a fresh pair of shoes on. I also changed my shirts. It was now 44 F and it was supposed to warm up to 61 F by the end of this next lap. I stood up to put my pack on again and realized I had forgotten to peel off my pants! Argh. I had to unlace my shoes again, which was the most annoying part of my stop. I peeled off the pants (still wearing my shorts under), put my shoes back on and laced them again, then put my vest back on. I removed the remaining trash from my vest pockets, pulled out the old enzyme and electrolyte baggies, and began to put the new fuel supply and enzyme and electrolyte supply in the front vest pockets. Last time for my 82, I had Scott do the refilling of my vest, but this time I just had him set out my gallon bag that contained all of these, so that I could place the snacks how I like best and also have an idea of what I had for that lap. I would need to double check that I had enzymes and electrolytes, anyway, so it ended up being easier for me to do this and I think I’ll keep doing this moving forward. Oh, and at each aid station stop we popped my (non-ultra) Apple Watch on a watch charger to top off the charge, too. I also swapped in a new mini battery to my pack to help keep my phone battery up, and then took off. All this, including the bathroom time, took about 15 minutes! I had budgeted 20 minutes for each stop, and I was pleased that this first stop was ahead of schedule in addition to my running slightly ahead of schedule, because that gave me extra buffer if I slowed down later.

A 24 hour view of my CGM graph to show my glucose levels before (overnight) and during the run, including marks where my pump site likely unclipped, where I reclipped it, and how my glucose was in range for the remainder of the run.

LAP 2 (MILES 18-34)

The next lap was the same route as the first, and felt like a normal long run day. It was mid-40s and gradually warmed up to 63 F, and it actually felt hot for the second half! It hadn’t been 60+ degrees in Seattle since October (!), so my body wasn’t used to the “heat”. I was still feeling good physically and running well – in fact, I was running only ~10s slower than my average pace from lap 1! If I kept this up and didn’t fall off the pace much, I would have a very nice buffer for the end of the race. I focused on this lap and thought only about these 16-17 miles. I did begin to squirt water from my camelback onto the ‘cooling’ visor I have, which evaporates and helps your head feel cooler – especially since I wasn’t used to the heat and was sweating more, that felt good. Toward the end of the second lap, I started to feel like I was slightly under my ideal sodium levels. I’m pretty sensitive to sodium; I also drink a lot (I was carrying 3-3.5L for every 17 mile lap!); and I’m a salty sweater. Add increased heat, and even though I was right on track with my goal of ~500mg/hour of sodium intake between my fuel and additional electrolyte pills, I felt a bit under. So for the next while I added an extra electrolyte pill to increase my sodium intake, and the feeling went away as expected.

(My glucose levels had come back down nicely within the first few miles of this lap; they dipped down, but since I was fueling every 30 minutes, they came nicely into range and stayed 100% in range with no issues for the next ~12 hours of the run!)

This time, Scott was aware that I was ahead of expected paces and had been mapping my paces. He told me that if I stayed at that pace for the lap, I would be able to slow down to a 16 min/mi pace for lap 3 (16 miles) and down further to a 17 min/mi pace for the last (almost 13 miles) lap and still beat my goal time. That sounded good to me! He ended up biking out early to meet me so he could start charging my watch a few minutes early, and I ended up taking one of my next snacks – a warmed up frozen waffle – for my ‘last’ snack of the lap because it was time for a snack and there was no reason to wait even though it was part of the ‘next’ lap’s fuel plan. So I got to eat a warm waffle, which was nice!

Once I was almost there, Scott took my vest and biked ahead to begin the camelback refill process. I hit the turnaround, made another quick bathroom stop, and ran over to the table. This time, since it was in the 60s and I would finish my next lap while it was still above 50 degrees and light out, I left my clothing layers as-is, other than a quick shirt switch to get rid of my sweaty shirt. I decided not to undo my shoes and check my feet for blisters; they felt fine. Because I didn’t need a shoe change or have anything to troubleshoot, I was in and out in 5 minutes! Hooray – that gave me another 10 minute buffer (in addition to the 5 from before, plus all my running ahead of schedule). I took off for lap 3, but warned Scott I would probably be slowing down.

LAP 3 (MILES 35-50)

The third lap was almost the same route, but shorter by a little less than a mile. I had originally been concerned, depending on how much I slowed down, that I would finish either right around sunset or after sunset, so that Scott might need to bring me out a long sleeve shirt and my waist lamp. However, I was ahead of schedule, so I didn’t worry about it, and again set out trying not to fall off my paces too much. I slowed down only a tiny bit on the way out, and was surprised at the turnaround point that I was now only slightly above a 15 min/mi pace! The last few miles I felt like slowing down more, but I was motivated by two thoughts. One was that I would finish this lap at essentially 50 miles, which meant, given my excellent pacing, that I would be “PR”ing my 50 mile time. I’ve never run a standalone 50 miles, just passed that mark during my 82-mile run when I wasn’t paying attention to pace at all (and ran 2-3 min/mi slower as a result), so I was focused on holding my effort level close to the same. The other: after this lap, I “only” had a ~13 mile single lap left. That was my usual route, so it would be mentally easier, and since it was the last lap, I knew I would get a mental boost from it. Psychologically, having the 50 mile mark to PR really helped me hold my pace! I ended up slowing only ~13s in average pace, compared to the ~10s deterioration between laps 1 and 2. I was pretty pleased with that, especially with hitting 50 miles then!

At this aid station stop, I was pretty cheerful, even though I kept telling Scott I would be slowing down. I took ~10 minutes at this stop because I had to tie my jacket back around my waist and put on my double headlamp (which I wear around my waist) for when it got dark, plus do the normal refueling. I changed into another dry short sleeve shirt, and debated but went ahead and put my fresh long sleeve shirt on with the sleeves rolled up. I figured I’d be putting it on as soon as it got dark anyway, and I didn’t want to hassle with getting my vest on and off (while moving) to get the shirt on, especially because I’d also have to do that with my jacket later. I had originally planned to put my long pants back on over my shorts, but it was still 63 degrees, the forecast only called for 45 degrees by midnight, and I seemed ahead of schedule and should finish by then. If I did get really cold, Scott could always bike out early and bring me more layers, but even 45 degrees in the dark with long sleeves, jacket, ear cover, and two pairs of gloves should be fine, so I went without the pants.

Speaking of ahead of schedule, I was! I had 5 minutes from the first aid station, 15 minutes from the second aid station, 5 minutes from this last aid station…plus another ~15 minutes ahead of what I thought my running time would have been at this point. Woohoo!

LAP 4 (MILES 51-63)

However, as soon as I walked off with my restocked vest, I immediately felt incredibly sore thighs. Ouch! My feet also started complaining suddenly. I did an extra walk interval and resumed my run/walking, and my first mile out of the aid station stop was possibly my slowest mile (barring any with a bathroom stop) of the entire race – which is funny, because it was only about a 16:30 pace. But I figured it would be downhill from there and I’d be lucky to hold a sub-17 pace for these last 13 miles, especially because most of them would be in the dark, and I naturally move a bit slower in the dark. Luckily, I was so far ahead that I knew even a 17 min/mi average pace (or slower) would be fine. I had joked to Scott coming into the end of lap 3 that I was tempted to just walk lap 4 (because I was finally starting to be tired), but then I’d have to eat more snacks, because I’d be out there longer. Sounds funny, but it was true – I was eating ok, but occasionally I was having trouble swallowing my enzyme pills. Which is completely reasonable: I had been swallowing dozens of those (and electrolyte pills) all day and putting food down my throat for ~12+ hours consistently. It wasn’t the action of swallowing that was a problem; I just seemed to occasionally mistime getting the pills washed to the back of my mouth to swallow them down. Once or twice I had to take in some extra water, so it really wasn’t a big deal, but it was a slight concern: if I stopped being able to take enzymes, I couldn’t fuel (because I have EPI), and I’d either have to tough it out without fueling (bad idea) or stop (not a fun idea). So I had that little extra motivation to keep run/walking!

Luckily, that first mile of the last lap was the worst. My thighs were still sore but less so, and my feet stopped yelling at me and went back to normal. I resumed a reasonable run/walk pace, albeit closer to a 15:30+ pace, which was a bigger jump from my previous lap’s average pace. I didn’t let it stress me out, but I wished I felt like fighting harder. I didn’t, though, and focused on holding that effort level. I texted Scott at miles 4 and 5, telling him I was (barely) averaging a sub-16 pace, then asked him to check my assumption that, if I didn’t completely walk it in, I could maybe finish an hour ahead of schedule. He confirmed that I “only” needed a 16:53 average pace for the lap to come in at 10:30pm (75 minutes ahead of goal), and that if I kept sub-16 I could come in around 10:19pm. Hmmm, that was nice to hear! I didn’t think I would keep sub-16, because it was getting dark and I was tired, ~55 miles into the run, but I was pretty sure I’d be able to stay sub-17 and likely sub-16:53! I carried on, turning my light on as it got dark. I was happily distracted by checking happy experiences off my mental list, mostly seeing bunnies beside the trail and darting across it in the dark!

I hit the almost-halfway mileage point of the last lap, and even though it wasn’t halfway in mileage, it felt like the last big milestone – it was the last mini-hill I had to climb to cross a bridge and loop around back to finish the lap. Hooray! I texted Scott and told him I couldn’t believe that, with ~7 miles left, I would be done in <2 hours. It was starting to sink in that I’d probably beat my goal of 11:45pm, and by more than a few minutes, and I stopped doubting that it was real. I then couldn’t resist – and was also worried Scott wouldn’t realize how well I was moving and might head out too late – and texted him again when I was <5 miles out, and then 4 miles out. By the time I was at 3 miles, he replied to ask if I needed anything other than the bag I had planned for him to bring to the finish. Nope, I said.

At that point, I was back on my home turf: the last 2-3 miles are ones I run or walk most days of the week. I had already run these miles 3 times that day (in each direction, too), but it was pretty joyful getting to the stretch where I know not only every half mile marker but every tenth of a mile. And when I came up under the last bridge and saw a bright light biking toward me, it was Scott! He made it out to the 1.75 mile mark and rode in with me, which was fun. I was still holding sub-16 pace, too. I naturally pick up the pace when he’s biking with me – even when I’ve run 60+ miles! – and I was thinking that I’d be close to, but a few minutes short of, finishing an hour and a half ahead of schedule. It didn’t really matter exactly, but I like even numbers; still, I didn’t feel like I had tons of energy to push hard to the end – I was pleased enough to still be moving at a reasonable speed at this point!

Finally, about a half mile out, Scott biked ahead to set up the finish for me. (Purple painter’s tape and a sign I had made!) I glanced at my watch as I rounded the last corner, about .1 mile away, and thought “oh, I’m so close to beating the goal by a full hour and a half – too bad I didn’t push harder a few minutes ago so I could come in by 10:15 and be an hour and a half ahead.” I ran a tiny bit more but didn’t have much speed, walked a few last steps, then ran the rest of the way so Scott could video me coming into the finish. I could see the glow from his bike light on the trail, and as I turned the corner to the finish I was almost blinded by his waist light and his headlamp. I ran through the finish tape and grinned. I did it! He stopped videoing and told me to stop my trackers. I did, but told him the exact time didn’t matter, because I was somewhere just under an hour and a half ahead. We took a still picture, then picked up my tape and got ready to head home. I had done it! I had run 100k, beat my goal time…and it turns out I DID beat it by nearly an hour and a half! We checked the timestamp on the video Scott took of the finish, and it has me crossing at 10:16pm, which makes it a 16 hour and 16 minute finish – woohoo!

A picture at night in the dark with me running, light at my waist, toward the purple painter tape stretched out as my finish line.

My last lap ended up being ~37 seconds per mile slower on average, so I had :10, :13, and :37 differences between the laps. Not too bad for that distance! I think I could’ve pushed a little harder, but I honestly didn’t feel like it psychologically, since I was already exceeding all of my goals, and I was enjoying focusing on the process meta-goals of keeping steady efforts and paces. Overall, my average pace was 15:36 min/mi, which included ~30 min of aid station stops; my average moving pace (excluding those 30 minutes of aid station time, but still including probably another ~8-10 min of bathroom stops) was 15:17 min/mi. I’m pleased with that!

FUN STATS

A pivot table with conditional formatting showing when my sodium, calories, and carbs per hour met my hourly goal amounts.

One of the things I do for all training runs and also races is input my fueling as I go, because it helps me make sure I’m actually fueling and lets me spot any problems as they start to develop. As I mentioned, at one point I felt a tiny bit low on sodium, and sure enough, I had dipped slightly below 500mg/hr in the two hottest hours of the day, when I had also been sweating more and drinking more than previously. Plus, it means I have cool post-run data to see how much I consumed and to figure out if I want to adjust my strategy. This time, though? I wouldn’t change a thing. I nailed it! I averaged 585 mg/hour of sodium across all ~16 hours of my run. I also averaged ~264 calories/hour, which is above my ~250/hr goal. I did skip – intentionally – the very last snack at the top of the 16th hour, and I was still above goal in all my metrics. I don’t set goals for carb intake, but in case you were wondering, I ended up averaging 29.9 grams of carbs/hour (min 12, max 50; the average snack is 15.4 carbs) – totally coincidental. Overall, I consumed 3,663 calories: 419 g of carbs, 195 g of fat, and 69 g of protein.
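If you’re curious what that hourly goal check looks like in practice, here’s a minimal sketch in Python of the same idea – flag any hour that dips below goal as it happens. This is not my actual spreadsheet (which uses a pivot table and conditional formatting), and the per-hour numbers below are hypothetical:

```python
# A toy version of the hourly goal check my conditional formatting does for me.
SODIUM_GOAL_MG = 500   # my per-hour sodium goal
CALORIE_GOAL = 250     # my per-hour calorie goal

# Hypothetical per-hour totals logged during a run: (hour, sodium_mg, calories)
hourly_totals = [(1, 585, 270), (2, 480, 260), (3, 610, 255)]

for hour, sodium, calories in hourly_totals:
    flags = []
    if sodium < SODIUM_GOAL_MG:
        flags.append(f"sodium {sodium}mg below {SODIUM_GOAL_MG}mg goal")
    if calories < CALORIE_GOAL:
        flags.append(f"calories {calories} below {CALORIE_GOAL} goal")
    print(f"Hour {hour}: " + ("; ".join(flags) if flags else "on track"))
```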

With EPI, as I mentioned, that means I have to swallow enzyme pills with every snack – which was every 30 minutes. I swallowed 71 OTC enzyme pills (!) to match all that fuel, plus 26 electrolyte pills…meaning I swallowed 97 pills in 16 hours. You can see why I get tired of swallowing!

A graph showing sodium/hr for each of the 16 hours of the run (averaging above 500mg/hr), calories per hour (averaging above 250/hour), and carbs per hour.

Here’s a visual where you can see my consumption of calories, sodium, and carbs over the course of my race. The dip at the end is because I intentionally skipped the second snack of hour 16, since I was almost done. Through 15 hours (excluding the last hour), I had a slight rolling increase in sodium/hr, a very slight decrease in calories/hr, and a slight increase in carbs/hr. Including the 16th hour (with the intentionally skipped snack), the sodium/hr trend flipped to a slight rolling decrease; the slight decrease in calories/hr continued; and the carbs/hr trend line flattened to neutral.

In contrast to my 82-mile run, where I had more significant fluctuations in sodium (and really felt it), I’m glad I was able to keep my sodium consumption at goal levels and respond more easily when conditions changed (hotter weather causing more sweat and more water intake than in previous hours) so I could keep myself from getting into a hole sodium-wise. Overall, I feel like I get an A+ for executing my fueling and sodium strategy as planned. GI-wise, I get an A+++ because I had ZERO GI symptoms during and after the run! That’s really rare for any ultrarunner, let alone those of us with GI conditions (in my case, exocrine pancreatic insufficiency). Plus, despite the unclipped pump site and the BG rise that resulted, I returned to typical running glucose levels and achieved 100% TIR 70-180 after that, and likely 100% TIR for a narrower range like 70-140, too, although I haven’t bothered to run those stats because I don’t care exactly what the numbers are. More importantly, I never went low, I never had any big drops or rises, and other than the brief 30 minutes of annoyance due to an unclipped pump site, diabetes did not factor any more into my thinking than blister management or EPI pill swallowing or sodium did – which is great!

Here’s a view of what I had left over after my run. I had intentionally planned an extra snack for every lap, plus I ran faster than expected, so I needed fewer overall. I had also packed extra enzymes and electrolytes for every lap, hoping I would never need to stress about running out on any individual lap – and I didn’t, so those amounts worked well.

A view of the enzymes and electrolyte baggies after my run, with a few left in each baggie as I planned for extras. I also had some snacks I didn't eat, both because I planned one extra per lap but I also ran faster than I expected, so I needed fewer overall

POST-RUN RECOVERY

As soon as I stopped running and took a picture at the finish line, we got ready to head home. My muscles froze up as soon as I stopped, just like always, so I moved like a tin person for a few steps before I loosened back up and was able to walk normally. I got home and was able to climb into the shower (and out!) without too much hardship. I climbed into bed, hydrated, and was able to go to sleep pretty normally for about 5 hours. I woke up at 5am pretty awake – possibly because I had been shifting my sleep schedule – and also felt really stiff, so I used the opportunity to point and flex my ankles. I slept off and on in 20-30 minute stretches for another few hours before I finally got up at 8am and THEN felt really sore and stiff! My right lower shin had felt a tiny bit sore in the last few miles of my run, so it wasn’t surprising that it was sore now. My right hip, which is the one I had been watching prior to the race, was sore again. I hobbled around the house and started to loosen up, enough that I decided to put shoes on and try a short easy walk. Usually, I can’t psychologically fathom putting shoes on my feet after an ultra, but my feet felt really decent! I had some blisters, sure, but I hadn’t even noticed them while running, and they didn’t hurt to walk on. My hip and ankle were more noticeable. I skipped the stairs and used the elevator, then began hobbling down the sidewalk. Ouch. My hip was hurting so much that I stopped at the first bench and laid down on it to stretch my hip out. Then I walked .3 miles to the next bench and stretched my hip again. A little better, so we went out a bit farther with the plan to turn around, but after a half mile my hip finally loosened up to where I could mostly walk normally! Hooray. In total, I managed a walk of 1.5 miles or so, which is pretty big for me the day after an ultra.

Meaningfully, I still had 100% time in range (ideal glucose levels) overnight. I did not have to do any extra work, thanks to OpenAPS and autosensitivity, which automatically adjusted to the increased insulin sensitivity from so much activity and, later, to the return to my normal sensitivity!

A 12 hour view of glucose levels after my 100k. This was 100% TIR between 70-180 and probably a tighter range, although I did not bother to calculate what the tighter range is.

The next night, I slept even better and didn’t notice any in-bed stiffness. On the second morning I again felt stiff getting out of bed, but I was able to do my full 5k+ walk route, with my hip loosening up completely within a mile so that I didn’t even think about it!

On day 3, I feel 90% back to normal physically. I’m mostly fatigued, which Scott keeps reminding me is “as one should be” after running 100k! The nice change is that with previous ultras or long runs, I’ve felt brain fog for days or sometimes weeks – likely due to not fueling enough. But with my A+ fueling, my brain feels great – good enough that it’s annoyed with my body still being a little bit tired. Interestingly, my body is both tired and also itching for more activity and new adventures. My friend compared it to “sea legs,” where the brain has learned that the body should always be in motion, which is a decent analogy.

WHAT I HAVE LEARNED

I wouldn’t change anything in terms of my race pacing, execution, aid station stops, fueling, etc. for this run.

What I want to make sure I do next time includes continuing to adapt my training by listening to my body, rather than sticking to a pre-decided plan of how much to run. I feel like I can do that because I now have 3,000+ lifetime miles of running on my body (which I didn’t have for my first ultra), and I now have two ultras (last year’s 82 miles post-broken toe and this year’s 100k, with minor hiccups like a sore shin and hip at different times) where I was forced to or chose to adapt my training, and it turned out just as well as I could have expected. For my 100k, I think the adaptation to 3 runs per week, all with my vest, ended up working well. This is the first run where I didn’t have noticeable shoulder soreness from my pack!

Same goes for the taper: I don’t think, at my speed/skill level, that the exact taper strategy makes a difference, and this experience confirmed it. Doing DIY ultras, and being able to flex a week forward or back based on how I’m physically feeling and when the best weather will be, is now my preferred strategy for sure.

—-

If you’re new to ultras and haven’t read any of my other posts, consider reading some of the following, which I’ve alluded to in this post and which directly contributed to the above situation being so positive:

Feel free to leave questions if you have any, either about slow ultra running in general or any other aspects of ultra running! I’m a places-from-last kind of ultra runner, but I’m happy to share my thinking process if it helps anyone else plan their own adventures.

Functional Self-Tracking is The Only Self-Tracking I Do

“I could never do that,” you say.

And I’ve heard it before.

Eating gluten free for the rest of your life, because you were diagnosed with celiac disease? Heard that response (I could never do that) for going on 14 years.

Inject yourself with insulin or fingerstick test your blood glucose 14 times a day? Wear an insulin pump on your body 24/7/365? Wear a CGM on your body 24/7/365?

Yeah, I’ve heard you can’t do that, either. (For 20 years and counting.) Which means I and the other people living with the situations that necessitate these behaviors are…doing this for fun?

We’re not.

More recently, I’ve heard this type of comment come up about tracking what I’m eating, and in particular, tracking what I’m eating when I’m running. I definitely don’t do that for fun.

I have a 20+ year strong history of hating tracking things, actually. When I was diagnosed with type 1 diabetes, I was given a physical log book and asked to write down my blood glucose numbers.

“Why?” I asked. They’re stored in the meter.

The answer was because supposedly the medical team was going to review them.

And they did.

And it was useless.

“Why were you high on February 22, 2003?”

Whether we were asking this question in March of 2003 or January of 2023 (almost 20 years later), the answer would be the same: I have no idea.

BG data, by itself, is like a single data point for a pilot. It’s useless without the contextual stream of data as well as other metrics (in the diabetes case, things like what was eaten, what activity happened, what my schedule was before this point, and all insulin dosed potentially in the last 12-24h).

So you wouldn’t be surprised to find out that I stopped tracking. I didn’t stop testing my blood glucose levels – in fact, I tested upwards of 14 times a day when I was in high school, because the real-time information was helpful. Retrospectively? Nope.

I didn’t start “tracking” things again (for diabetes) until late 2013, when we realized that I could get my CGM data off the device and into the laptop beside my bed, dragging the CGM data into a CSV file in Dropbox and sending it to the cloud so an app called “Pushover” would make a louder and different alarm on my phone to wake me up to overnight hypoglycemia. The only reason I added any manual “tracking” to this system was because we realized we could create an algorithm to USE the information I gave it (about what I was eating and the insulin I was taking) combined with the real-time CGM data to usefully predict glucose levels in the future. Predictions meant we could make *predictive* alarms, instead of solely having *reactive* alarms, which is what the status quo in diabetes has been for decades.
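To make the reactive-versus-predictive distinction concrete, here’s a minimal sketch – not our actual code; the threshold, trend math, and credentials are all illustrative assumptions – of the difference between alarming on the current CGM reading and alarming on a naive projection of where glucose is heading, using Pushover’s message API for the louder phone alert:

```python
# Toy illustration only: a reactive alarm fires when glucose is ALREADY low;
# a predictive alarm fires when a simple trend projection says it WILL be low.
import requests

def predicted_glucose(bg_now, trend_per_5min, minutes_ahead=30):
    # Naive linear extrapolation of the CGM trend (real algorithms use far more).
    return bg_now + trend_per_5min * (minutes_ahead / 5)

def check_alarms(bg_now, trend_per_5min, low_threshold=80):
    if bg_now < low_threshold:
        notify(f"LOW NOW: {bg_now} mg/dL")  # reactive alarm
    elif predicted_glucose(bg_now, trend_per_5min) < low_threshold:
        notify(f"Predicted low within 30 min (currently {bg_now} mg/dL)")  # predictive

def notify(message):
    # Pushover's API takes an app token, user key, and message; priority
    # controls how loud/insistent the phone alert is.
    requests.post("https://api.pushover.net/1/messages.json", data={
        "token": "YOUR_APP_TOKEN",  # hypothetical credentials
        "user": "YOUR_USER_KEY",
        "message": message,
        "priority": 1,
    })
```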

So sure, I started tracking what I was eating and dosing, but not really. I was hitting buttons to enter this information into the system because it was useful, again, in real time. I didn’t bother doing much with the data retrospectively. I did occasionally do things like reflect on my changes in sensitivity after I got norovirus, for example, but even that was mostly looking in awe at how autosensitivity – an algorithm feature we designed to adjust to real-time changes in insulin sensitivity – handled the course of the illness in real time.

At the beginning of 2020, my life changed. Not because of the pandemic (although also because of that), but because I began to have serious, very bothersome GI symptoms that dragged on throughout 2020 and 2021. I’ve written here about my experiences in eventually self-diagnosing (and confirming) that I have exocrine pancreatic insufficiency, and began taking pancreatic enzyme replacement therapy in January 2022.

What I haven’t yet done, though, is explain all my failed attempts at tracking things in 2020 and 2021. Or, not failed attempts, but where I started and stopped and why those tracking attempts weren’t useful.

Once I realized I had GI symptoms that weren’t going away, I tried writing down everything I ate. I tried writing in a list on my phone in spring of 2020. I couldn’t see any patterns. So I stopped.

A few months later, in summer of 2020, I tried again, this time using a digital spreadsheet so I could enter data from my phone or my computer. Again, after a few days, I still couldn’t see any patterns. So I stopped.

I made a third attempt to try to look at ingredients, rather than categories of food or individual food items. I came up with a short list of potential contenders, but repeated testing of consuming those ingredients didn’t do me any good. I stopped, again.

When I first went to the GI doctor in fall of 2020, one of the questions he asked was whether there was any pattern between my symptoms and what I was eating. “No,” I breathed out in a frustrated sigh. “I can’t find any patterns in what I’m eating and the symptoms.”

So we didn’t go down that rabbit hole.

At the start of 2021, though, I was sick and tired (of being sick and tired with GI symptoms for going on a year) and tried again. I decided that some of my “worst” symptoms happened after I consumed onions, so I tried removing obvious sources of onion from my diet. That evolved to onion and garlic, but I realized almost everything I ate also had onion powder or garlic powder, so I tried avoiding those. It helped, some. That then led me to research more, learn about the categorization of FODMAPs, and try a low-FODMAP diet in mid/fall 2021. That helped some.

Then I found out I actually had exocrine pancreatic insufficiency and it all made sense: what my symptoms were, why they were happening, and why the numerous previous tracking attempts were not successful.

You wouldn’t think I’d start tracking again, but I did. Although this time, finally, was different.

When I realized I had EPI, I learned that my body was no longer producing enough digestive enzymes to help my body digest fat, protein, and carbs. Because I’m a person with type 1 diabetes and have been correlating my insulin doses to my carbohydrate consumption for 20+ years, it seemed logical to me to track the amount of fat and protein in what I was eating, track my enzyme (PERT) dosing, and see if there were any correlations that indicated my doses needed to be more or less.

My spreadsheet involved recording the previous day’s symptoms, and it had a section for entering the multiple things I ate throughout the day and the number of enzymes I took. I wrote a short description of my meal (“butter chicken” or “frozen pizza” or “chicken nuggets and veggies”), the estimated fat and protein counts for the meal, and the number of enzymes I took for it. I had columns on the left that added up the total amount of fat and protein for the day, and the total number of enzymes.

It became very apparent to me – within two days – that the dose of the enzymes relative to the quantity of fat and protein I was eating mattered. I used this information to titrate (adjust) my enzyme dose and better match the enzymes to the amount of fat or protein I was eating. It was successful.

I kept writing down what I was eating, though.

In part, because it became a quick reference library to find the “counts” of a previous meal that I was duplicating, without having to re-do the burdensome math of adding up all the ingredients and counting them out for a typical portion size.

It also helped me see that within the first month, I was definitely improving, but not all the way – in terms of fully reducing and eliminating all of my symptoms. So I continued to use it to titrate my enzyme doses.

Then it helped me carefully work my way through re-adding food items and ingredients that I had been avoiding (like onions, apples, and pears), proving to my brain that those symptoms had been the result of enzyme insufficiency, not food intolerances. Once I had a working system for determining how to dose enzymes, it became a lot easier to see when I had slight symptoms from getting my dosing slightly wrong or majorly mis-estimating the fat and protein in what I was eating.

It provided me with a feedback loop that doesn’t really exist in EPI and GI conditions, and it was a daily, informative, real-time feedback loop.

As I reached the end of my first year of dosing with PERT, though, I was still using my spreadsheet. That surprised me, actually. Did I need to be using it? Not all the time. But the biggest reason I kept using it relates to how I often eat. I often pick an ‘entree’ for protein and then ‘build’ the rest of my meal around it, to help make sure I’m getting enough protein to fuel my ultrarunning endeavors. So I pick the main thing I’m eating and put it in my spreadsheet under the fat and protein columns (=17 g of fat, =20 g of protein, for example), then decide what I’m going to eat with it. Say I add a bag of cheddar popcorn: that becomes (=17+9 g of fat) and (=20+2 g of protein), and when I hit enter, those cells tell me it’s 26 g of fat and 22 g of protein for the meal, which tells my brain (and I also tell the spreadsheet) that I’ll take 1 PERT pill for it. So I use the spreadsheet functionally to “build” what I’m eating and calculate the total grams of protein and fat, which helps me ‘calculate’ how much PERT to take (from my previous titration efforts, I know one PERT pill of my prescription’s size can cover up to 30 g each of fat and protein).

Example in my spreadsheet showing a meal and the in-progress data entry of entering the formula to add up two meal items' worth of fat and protein

Essentially, this has become a real-time calculator that adds up the numbers every time I eat. Sure, I could do this in my head, but I’m usually multitasking: deciding what I want to eat and writing it down, doing something else, doing yet something else, then going to make my food and eat it. The spreadsheet remembers for me – in the sometimes minutes, sometimes hours between deciding what to eat and actually eating and needing to take the enzymes – what the counts are and what the PERT dosing needs to be.
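For anyone who prefers code to spreadsheet cells, here’s a minimal sketch of that same build-a-meal-then-dose logic in Python. It assumes my personal titrated ratio of up to 30 g each of fat and protein per pill – yours would differ – and the function name is just illustrative:

```python
import math

def pert_pills(items, fat_per_pill=30, protein_per_pill=30):
    """items: list of (fat_g, protein_g) tuples, one per meal component."""
    total_fat = sum(fat for fat, _ in items)
    total_protein = sum(protein for _, protein in items)
    # Dose to cover whichever macronutrient needs more pills.
    pills = max(math.ceil(total_fat / fat_per_pill),
                math.ceil(total_protein / protein_per_pill))
    return pills, total_fat, total_protein

# The example from above: an entree (17 g fat, 20 g protein) plus cheddar
# popcorn (9 g fat, 2 g protein) -> 26 g fat, 22 g protein -> 1 pill.
pills, fat, protein = pert_pills([(17, 20), (9, 2)])
print(f"{fat} g fat, {protein} g protein -> {pills} PERT pill(s)")
```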

I have done some neat retrospective analysis, of course – last year I estimated that I took thousands of PERT pills (more on that here). I was able to do that not because it’s “fun” to track every pill that I swallow, but because my functional self-tracking of what I was eating, to determine my PERT dosing for everything I ate, gave me a record of 99% of the enzyme pills I took last year.

I do have some things that I’m no longer entering in my spreadsheet, which is why it’s only 99% of what I eat. There are some things like a quick snack where I grab it and the OTC enzymes to match without thought, and swallow the pills and eat the snack and don’t write it down. That maybe happens once a week. Generally, though, if I’m eating multiple things (like for a meal), then it’s incredibly useful in that moment to use my spreadsheet to add up all the counts to get my dosing right. If I don’t do that, my dosing is often off, and even a little bit “off” can cause uncomfortable and annoying symptoms the rest of the day, overnight, and into the next morning.

So, I have quite the incentive to use this spreadsheet to make sure that I get my dosing right. It’s functional: not for the perceived “fun” of writing things down.

It’s the same thing that happens when I run long runs. I need to fuel my runs, and fuel (food) means enzymes. Figuring out how many enzymes to dose as I’m running 6, 9, or 25 hours into a run gets increasingly harder. What works for me is having a pre-built list of fuel options, and a spreadsheet I can quickly open on my phone and tap a drop-down list to mark what I’m eating; it pulls in the counts from the library and tells me how many enzymes to take for that fuel (which I’ve already pre-calculated).

It’s useful in real-time for helping me dose the right amount of enzymes for the fuel that I need and am taking every 30 minutes throughout my run. It’s also useful for helping me stay on top of my goal amounts of calories and sodium to make sure I’m fueling enough of the right things (for running in general), which is something that can be hard to do the longer I run. (More about this method and a template for anyone who wants to track similarly here.)
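The same library idea is easy to sketch outside a spreadsheet, too. Here’s a minimal illustration – the fuel names, counts, and doses below are hypothetical placeholders, not my actual library – of a lookup that returns the pre-calculated enzyme dose and keeps running totals for the hourly goals:

```python
# Hypothetical fuel library: name -> (calories, sodium_mg, carbs_g, enzyme_pills)
FUEL_LIBRARY = {
    "salted chips":    (190, 220, 16, 1),
    "fruit gummies":   (80,  40,  19, 0),
    "nut butter pack": (200, 90,  8,  1),
}

def log_snack(name, log):
    """Look up a snack, append it to the run log, and report the enzyme dose."""
    calories, sodium, carbs, pills = FUEL_LIBRARY[name]
    log.append((name, calories, sodium, carbs, pills))
    return f"{name}: take {pills} enzyme pill(s)"

run_log = []
print(log_snack("salted chips", run_log))
print(log_snack("fruit gummies", run_log))
print("Sodium so far:", sum(entry[2] for entry in run_log), "mg")
```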

The TL;DR point of this is: I don’t track things for fun. I track things if and when they’re functionally useful, and primarily that is in real-time medical decision making.

These methods may not make sense to you, and don’t have to.

It may not be a method that works for you, or you may not have the situation that I’m in (T1D, Graves, celiac, and EPI – fun!) that necessitates these, or you may not have the goals that I have (ultrarunning). That’s ok!

But don’t say that you “couldn’t” do something. You ‘couldn’t’ track what you consumed when you ran or you ‘couldn’t’ write down what you were eating or you ‘couldn’t’ take that many pills or you ‘couldn’t’ inject insulin or…

You could, if you needed to, and if you decided it was the way that you could and would be able to achieve your goals.

Looking Back Through 2022 (What You May Have Missed)

I ended up writing a post last year recapping 2021, in part because I felt like I had done hardly anything – which wasn’t true. In part, that feeling was based on my body having a number of things going on that I didn’t know about at the time. I figured those out in 2022, which made the year hard but also provided me with a sense of accomplishment as I tackled these new challenges.

For 2022, I have a very different feeling looking back on the entire year, which makes me so happy because it is night and day compared to this time last year.

One major example? Exocrine Pancreatic Insufficiency.

I started taking enzymes (pancreatic enzyme replacement therapy, known as PERT) in early January. And they clearly worked, hooray!

I quickly realized that like insulin, PERT dosing needed to be based on the contents of my meals. I figured out how to effectively titrate for each meal and within a month or two was reliably dosing effectively with everything I was eating and drinking. And, I was writing and sharing my knowledge with others – you can see many of the posts I wrote collected at DIYPS.org/EPI.

I also designed and built an open source web calculator to help others figure out their ratios of lipase and fat and protease and protein to help them improve their dosing.

I even published a peer-reviewed journal article about EPI – submitted within 4 months of confirming that I had it! You can read that paper here with an analysis of glucose data from both before and after starting PERT. It’s a really neat example that I hope will pave the way for answering many questions we all have about how particular medications possibly affect glucose levels (instead of simply being warned that they “may cause hypoglycemia or hyperglycemia” which is vague and unhelpful.)

I also had my eyes opened to having another chronic disease that has very, very expensive medication with no generic medication option available (and OTCs may or may not work well). Here’s some of the math I did on the cost of living with EPI and diabetes (and celiac and Graves) for a year, in case you missed it.

Another challenge+success was running (again), but with a 6 week forced break (ha) because I massively broke a toe in July 2022.

That was physically painful and frustrating for delaying my ultramarathon training.

I had been successfully figuring out how to run and fuel with enzymes for EPI; I even built a DIY macronutrient tracker and shared a template so others can use it. I ran a 50k with a river crossing in early June and was on track to target my 100 mile run in early fall.

However, with the broken toe, I took the time off that I needed, carefully built back up, put a lot of planning into it, and made my attempt in late October instead.

I succeeded in running ~82 miles in ~25 hours, all in one go!

I am immensely proud of that run for so many reasons, some of which are general pride at the accomplishment and others are specific, including:

  • Doing something I didn’t think I could do which is running all day and all night without stopping
  • Doing this as a solo or “DIY” self-organized ultra
  • Eating every 30 minutes like clockwork, consuming enzymes (more than 92 pills!), which means 50 snacks consumed. No GI issues, either, which is remarkable even for an ultrarunner without EPI!
  • Generally figuring out all the plans and logistics needed to be able to handle such a run, especially when dealing with type 1 diabetes, celiac, EPI, and Graves
  • Not causing any injuries, and in fact recovering remarkably fast, which shows how effective my training and ‘race’ strategy were.

On top of this all, I achieved my biggest-ever running year, with more than 1,333 miles run this year. This is 300+ more than my previous best from last year which was the first time I crossed 1,000 miles in a year.

Professionally, I did quite a lot of miscellaneous writing, research, and other activities.

I spent a lot of time doing research. I also peer reviewed more than 24 papers for academic journals. I was asked to join an editorial board for a journal. I served on 2 grant review committees/programs.

I also wrote a ton*.

*By “ton,” I mean way more than the past couple of years combined – I’m up to 40+ blog posts this year. Some of that has been due to getting some energy back once I fixed the missing enzyme and mis-adjusted hormone levels in my body!

And personally, the punches felt like they kept coming, because this year we also found out that I have Graves’ disease, taking my chronic disease count up to 4. Argh. (T1D, celiac, EPI, and now Graves’, for those curious about my list.)

My experience with Graves’ has included symptoms of subclinical hyperthyroidism (although my T3 and T4 are in range), and I have chosen to try thyroid medication in order to manage the really bothersome Graves’-related eye symptoms. That’s been an ongoing process and the symptoms of this have been up and down a number of times as I went on medication, reduced medication levels, etc.

What I’ve learned from my experience with both EPI and Graves’ in the same year is that there are some huge gaps in medical knowledge around how these things actually work and how to use real-world data (whether patient-recorded data or wearable-tracked data) to help with diagnosis, treatment (including medication titration), etc. So the upside to this is I have quite a few new projects and articles coming to fruition to help tackle some of the gaps that I fell into or spotted this year.

And that’s why I’m feeling optimistic, and like I accomplished quite a bit more in 2022 than in 2021. Some of it is the satisfaction of knowing the two core reasons why the previous year felt so physically bad; hopefully no more unsolved mysteries or additional chronic diseases will pop up in the next few years. Yet some of it is also the satisfaction of solving problems and creating solutions that I’m uniquely poised, due to my past experiences and skillsets, to solve. That feels good, and it feels good as always to get to channel my experiences and expertise into creating solutions – with words or code or research – to help other people.

More Tools To Help Diabetes Researchers and Other Researchers

A few years ago I made a big deal about a tool I had created by converting someone’s web tool into a command line tool, to be able to take complex JSON data and convert it to CSV. Years later, I’m still using this tool (as are thousands of others – it’s been downloaded 1,600+ times!), because I’ve found nothing better for data where you don’t know the data structure in advance or the structure varies across files.
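If you haven’t dealt with this problem before, the core idea is to flatten arbitrarily nested JSON into dot-separated column names, then take the union of columns across records. Here’s a minimal sketch of that general approach in Python – illustrative only, not the actual tool’s code, and `data.json` is a hypothetical input file containing an array of records:

```python
import csv
import json

def flatten(obj, prefix=""):
    """Recursively flatten nested dicts/lists into one {column: value} dict."""
    flat = {}
    if isinstance(obj, dict):
        for key, value in obj.items():
            flat.update(flatten(value, f"{prefix}{key}."))
    elif isinstance(obj, list):
        for i, value in enumerate(obj):
            flat.update(flatten(value, f"{prefix}{i}."))
    else:
        flat[prefix.rstrip(".")] = obj
    return flat

with open("data.json") as f:
    records = [flatten(record) for record in json.load(f)]

# Union of all column names, since records may have different structures.
columns = sorted({column for record in records for column in record})
with open("data.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=columns)
    writer.writeheader()
    writer.writerows(records)
```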

I ended up creating a repository on Github to store it with details on running it, and have expanded it over the last (almost) six years as I and others have added additional tools. For example, it’s where Arsalan, one of my frequent collaborators, and I store open source code from some of our recent papers.

Recently, I added two more small scripts. These were motivated by wanting to help researchers who have been successfully using the OpenAPS Data Commons and want to update their dataset with a later version of the data. Chances are, they have cleaned and worked with a previous version of the dataset, and instead of having to re-clean all of the data all over again, this set of scripts should help narrow down what the “new” data is that needs to be pulled out, cleaned, and appended to the previously cleaned dataset.

You can check out the full tool repository here (it has several other scripts in addition to the ones mentioned above). The latest additions are two Python scripts that check the contents of an existing folder and list out the memberID and filenames for each. The first is useful to run on an existing, already-cleaned dataset to see what you currently have; it can also be run on the latest/newer/larger dataset available. The second script can then be run to compare the memberIDs and filenames in the newer/larger dataset against the previously cleaned/smaller/older dataset. Those that “match” already exist in the version of the dataset you have and don’t need to be pulled again. The rest don’t exist in the current dataset, and can be popped into a script to pull out just those data files to be cleaned and appended to the existing dataset.
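In other words, it’s a set difference on filenames between the two folders. Here’s a minimal sketch of the approach – not the exact scripts in the repository, and the folder names are hypothetical:

```python
from pathlib import Path

def list_filenames(folder):
    """Return the set of data filenames (which encode memberIDs) in a folder."""
    return {p.name for p in Path(folder).iterdir() if p.is_file()}

already_cleaned = list_filenames("dataset_n122")  # previously cleaned version
newer_release = list_filenames("dataset_n183")    # newer, larger version

new_files = sorted(newer_release - already_cleaned)
print(f"{len(new_files)} new files to pull, clean, and append:")
for name in new_files:
    print(" ", name)
```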

As a heads up specifically for those working with the OpenAPS Data Commons, it is best practice to name/describe the version of the dataset via the size. For example, you might be working with the n=88 or n=122 version of the dataset. If you used the above method, you would then describe it along the lines of taking and cleaning the n=122 version; selecting new files available from the n=183 version and appending them to the n=122 version; and the resulting dataset is n=(122+number of new files used).

Folks who access the n=183 version of the dataset and haven’t previously used a smaller version can simply reference the n=183 version and clarify how many files they ended up using – e.g., describing that they followed X method to clean the data starting from the n=183 version and that their resulting dataset is n=166.

It is important to clarify which version and size of the dataset is being used.

PS – this method works on other data file types, too! You’d change the variable/column header names in the script to update this for other cases.

We Have Changed the Standards of Care for People With Diabetes

We’ve helped change the standard of care for people with diabetes, with open source automated insulin delivery.

I get citation alerts sometimes when my previous research papers or articles are cited. For the last few years, I’ve gotten notifications when new consensus guidelines or research comes out that reference or mention open source automated insulin delivery (AID). The ADA Standards of Care for the following year are released at this time of year, and I usually find out via these citation alerts.

Why?

This year, in 2023, there’s a section on open source automated insulin delivery!

A screenshot of the 2023 ADA Standards of Care section under Diabetes Technology (7) that lists DIY closed looping, meaning open source automated insulin delivery

But did you know, that’s not really new? Here’s what the 2022 version said:

A screenshot of the 2022 ADA Standards of Care section under Diabetes Technology (7) that lists DIY closed looping, meaning open source automated insulin delivery

And 2021 also included…

A screenshot of the 2021 ADA Standards of Care section under Diabetes Technology (7) that lists DIY closed looping, meaning open source automated insulin delivery

And 2020? Yup, it was there, too.

A screenshot of the 2020 ADA Standards of Care section under Diabetes Technology (7) that lists DIY closed looping, meaning open source automated insulin delivery

All the way back to 2019!

A screenshot of the 2019 ADA Standards of Care under Diabetes Technology (7) that lists DIY closed looping, meaning open source automated insulin delivery

If you read them in chronological order, you can see quite a shift.

In 2019, it was a single sentence noting their existence under a sub-heading of “Future Systems” under AID. In 2020, the content graduated to a full paragraph at the end of the AID section (that year just called “sensor-augmented pumps”). In 2021, it was the same paragraph under the AID section heading. 2022 was the year it graduated to having its own heading calling it out, with a specific evidence based recommendation! 2023 is basically the same as 2022.

So what does it say?

It points out that patients are using open source AID (which they describe as do-it-yourself closed loop systems). It sort of incorrectly suggests healthcare professionals can’t prescribe these systems. (They can, actually – providers can prescribe all kinds of things off-label – there’s just not much point to a “prescription” unless it’s needed for, say, a person’s elementary school that has a policy of only supporting “prescribed” devices.)

And then, most importantly, it points out that regardless, healthcare providers should assist in diabetes management and support patient choice to ensure the safety of people with diabetes. YAY!

“…it is crucial to keep people with diabetes safe if they are using these methods for automated insulin delivery. Part of this entails ensuring people have a backup plan in case of pump failure. Additionally, in most DIY systems, insulin doses are adjusted based on the pump settings for basal rates, carbohydrate ratios, correction doses, and insulin activity. Therefore, these settings can be evaluated and modified based on the individual’s insulin requirements.”

You’ll notice they call out having a backup plan in case of pump failure.

Well, yeah.

That should be true of *any* AID system or standalone insulin pump. This highlights that the needs of people using open source AID in terms of healthcare support are not that different from people choosing other types of diabetes therapies and technologies.

It is really meaningful that they specifically call out supporting people living with diabetes. Regardless of technology choices, people with diabetes should be supported by their healthcare providers. Full stop. This is highlighted and increasingly emphasized thanks to the movement of individuals using open source automated insulin delivery. But the benefits of this are not limited to those of us using open source AID; they spill over to people using different types of BG meters, CGMs, insulin pumps, insulin pens, syringes, etc.

No matter their choice of tools or technologies, people with diabetes SHOULD be supported in THEIR choices. Not choices limited by healthcare providers, who might only suggest specific tools that they (healthcare providers) have been trained on or are familiar with – but the choices of the patient.

In future years, I expect the ADA Standards of Care for 2024 and beyond to evolve with respect to the section on open source automated insulin delivery.

The evidence grading should increase from “E” (which stands for “Expert consensus or clinical experience”), because there is now a full randomized controlled trial of open source automated insulin delivery in the New England Journal of Medicine, in addition to the continuation results (24 weeks following the RCT; 48 full weeks of data) accepted for publication (presented at EASD 2022), and a myriad of other studies ranging from retrospective to prospective trials. The evidence is out there, so I expect the evidence grading and the text of the recommendation to evolve accordingly, catching up to the evidence that exists. (The Standards of Care are based on literature available up to the middle of the previous year; much of what I’ve cited above came out in late 2022, so it matches their methodology for it not to be included until the following year. These newest articles should be scooped up by searches through July 2023 for the 2024 edition.)

In the meantime, I wish more people with diabetes were aware of the Standards of Care and could use them in discussion with providers who may not be as happy with their choices. (That’s part of the reason I wrote this post!)

I also wish we patients didn’t have to be aware of this, and didn’t have to argue our case for support of our choices with healthcare providers.

But hopefully over time, this paradigm of supporting patient choice will continue to grow in the culture of healthcare providers and truly become the standard of care for everyone, without any personal advocacy required.

Note added in December 2024 – the 2025 Standards of Care now have evidence grade “B” and include the specific recommendation to “Support and provide diabetes management advice to people with diabetes who choose to use an open-source closed-loop system.”

You can find the 2025 Standard of Care section here.

Did you know? We helped change the standards of care for people living with diabetes. By Dana M. Lewis from DIYPS.org

New Chapter: Personalizing Research: Involving, Inviting, and Engaging Patient Researchers

TLDR: A new chapter I wrote, invited for a book on Personal Health Informatics, is out! You can read a summary below describing my chapter. You can also find a link to a full pre-print (a copy of my submitted, unedited version) of the article (as well as author copies of all of my articles) on my research page.

In November 2020 I was invited to submit a proposal for a chapter of a pending book on personal health informatics. (Like journal articles, book chapters can be invited contributions as part of a larger book topic.)

Knowing that book chapters take a long time to come out, I carefully thought about the topic of my article and whether I could write something that would be relevant approximately a year after I wrote it.

The context of the book was:

“high-quality scholarly work that seeks to provide clarity, consistency, and reproducibility, with a shared view of the status-quo of consumer and pervasive health informatics and its relevance to precision medicine and healthcare applications and system design. The book will offer a snapshot of this emerging field, supported by the methodological, practical, and ethical perspectives from researchers and practitioners in the field. In addition to being a research reader, this book will provide pragmatic insights for practitioners in designing, implementing, and evaluating personal health informatics in the healthcare settings.”

They also wanted to include patient perspectives, which is part of the reason I was invited to submit a proposal for a chapter, and asked if I could write about citizen science from the patient perspective.

I decided to write more broadly about patient perspectives in research. Since the audience of this book is likely to be academic researchers and practitioners already in the field, I sought to provide some ideas and input as to how they could practically invite and engage patient partners in research, as well as support the burgeoning field of patients who lead their own research.

I submitted my draft article in April 2021; received feedback and submitted the revision in August 2021; and the book was due to be published in “spring 2022”.

::crickets::

The book is now out in November 2022, hooray! It is called Personal Health Informatics and you can find it online here.

Abstract from my chapter:

There are many benefits to engaging and involving patients in traditional, researcher-led research, ranging from improved recruitment and increased enrollment to accelerating and facilitating the implementation of research outcomes. Researchers, however, may not be aware of when and where they can involve patients (people with lived healthcare experience) in research or what the benefits may be of improving patient engagement in the research process or of expanding patient involvement to other research stages. This chapter seeks to highlight the benefits and opportunities of engaging patients in traditional research and provide practical suggestions for inviting or recruiting patients for participation in research, whether or not there is an established patient and public involvement (PPI) program. This includes tips for developing a productive working relationship and culture between researchers and the patients involved in research. There are also many patients themselves conducting research, and often without the benefits, resources, and opportunities made available to traditional researchers. Traditional researchers should identify and recognize researchers who have emerged from non-traditional paths who are driving and engaging in their own research, and provide support and resources where appropriate to foster further patient-driven research. This investment can lead to collaboration opportunities for additional highly relevant and effective research studies with traditional researchers in the future. This chapter provides examples of patient researchers and offers tools to support traditional researchers who want to support patient-led research efforts and improve their ability to successfully engage patient stakeholders in their own research.

Here are some of the highlights and recommendations from my chapter:

  • Invite patients to participate in research, and do it early.
  • Ask patients how they’d like to be involved in research.
  • Relationship building and culture setting is important. Address the power dynamics within your project and team.
  • Set expectations for everyone involved on the team.
  • Consider training and skill-building opportunities for patients who are partnering in research.
  • If you’re looking to support a patient who is already initiating or performing research, first ask: “How can I help?”. This article includes a list of suggestions of how you can help them.

This article also highlights many exceptional researchers who are patients and their work, including:

Note the chapter discusses explicitly how not everyone has a PhD or an MD; this is not a requisite to doing high-quality research!

The chapter concludes with “clinical pearls”, which are four suggested tips to use in daily practice, and includes some suggested resources like the Opening Pathways Readiness Quiz. It also includes a suggestion of making a “To Don’t” list in collaboration with patient research partners.

The chapter also contains two review questions:

  1. Imagine that you have a research project where you would like to apply for funding, and the funder mandates that you have a patient involved in your research project. At what stage do you involve a patient in your project, and how do you do so?
  2. You are at a scientific conference and observe a patient giving a presentation about their own research or project. They’re not a traditional researcher – they don’t have a PhD or have a day job as a researcher. You want to approach them and offer your help with their research. What do you offer when you approach them?

To see the answers to these review questions, check out the article in full! :)


If you’d like to cite this in one of your articles, note that the DOI for the article is https://doi.org/10.1007/978-3-031-07696-1_17 and an example citation is:

Lewis, D. (2022). Personalizing Research: Involving, Inviting, and Engaging Patient Researchers. In: Hsueh, PY.S., Wetter, T., Zhu, X. (eds) Personal Health Informatics. Cognitive Informatics in Biomedicine and Healthcare. Springer, Cham. https://doi.org/10.1007/978-3-031-07696-1_17

Excerpted tips from the book chapter "Personalizing Research: Involving, Inviting, and Engaging Patient Researchers" by Dana Lewis

Regulatory Approval Is A Red Herring

One of the most common questions I have been asked over the last 8 years is whether or not we are submitting OpenAPS to the FDA for regulatory approval.

This question is a big red herring.

Regulatory approval is often seen and discussed as the one path for authenticating and validating safety and efficacy.

It’s not the only way.

It’s only one way.

As background, you need to understand what OpenAPS is. We took an already-approved insulin pump that I already had and a continuous glucose monitor (CGM) that I already had, and found a way to read data from those devices and to use the already-built commands in the pump to send back instructions, automating insulin delivery via the decision-making algorithm that we created. The OpenAPS algorithm was the core innovation, along with the realization that this already-approved pump had those command capabilities built in. We used various off-the-shelf hardware (mini-computers and radio communication boards) to interoperate with my already-approved medical devices. There was novelty in how we put all the pieces together, but the innovation was the algorithm itself.
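For readers who haven’t seen a closed loop before, here’s a deliberately oversimplified toy sketch of the concept. To be clear, this is NOT the OpenAPS algorithm – which accounts for insulin on board, carbohydrates, glucose predictions, safety limits, and much more – just an illustration of reading glucose and nudging insulin delivery via a temporary basal rate:

```python
def recommend_temp_basal(bg, target=100, isf=50, scheduled_basal=1.0):
    """Toy proportional controller, illustration only.

    bg: current glucose (mg/dL); isf: insulin sensitivity factor
    (mg/dL drop per unit of insulin); scheduled_basal: U/hr.
    """
    correction_units_per_hr = (bg - target) / isf
    temp_basal = scheduled_basal + correction_units_per_hr
    return max(0.0, temp_basal)  # a pump can't deliver negative insulin

# Example: glucose of 150 mg/dL nudges basal from 1.0 up to 2.0 U/hr;
# glucose of 75 mg/dL reduces it to 0.5 U/hr.
print(recommend_temp_basal(150), recommend_temp_basal(75))
```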

The caveat, though, is that although the pump I was using was regulatory-approved and on the market when I got it, it had later been voluntarily recalled after researchers, the manufacturer, and the FDA realized that the already-built commands in the pump’s infrastructure could be used to remotely command the pump. So these pumps – while never recorded as causing harm to anyone – were no longer being sold. It wasn’t a big deal to the company; it was a voluntary recall, and people like me often chose to keep our pumps if we were not concerned about this potential risk.

We had figured out how to interoperate with these other devices. We could have taken our system to the FDA, but because we were using already-off-the-market pumps, there was no way the FDA would approve it. And at the time (circa 2014), there was no vision or pathway for interoperable devices, so the FDA didn’t have the infrastructure to approve “just” an automated insulin delivery algorithm. (That changed many years later; they now have infrastructure for reviewing interoperable pumps, CGMs, and algorithms, which they call controllers.)

The other relevant fact is that the FDA has jurisdiction based on the commerce clause of the US Constitution: Congress used its authority to authorize the FDA to regulate interstate commerce in food, drugs, and medical devices. So if you intend to be a commercial entity and sell products, you must submit for regulatory approval.

But if you’re not going to sell products…

This is the other aspect that many people don’t seem to understand. All roads do not lead to regulatory approval because not everyone wants to create a company and spend 5+ years dedicating all their time to it. That’s what we would have had to do in order to have a company to try to pursue regulatory approval.

And the key point is: given such a strict regulatory environment, we (speaking for myself and Scott) did not want to commercialize anything. Therefore there was no point in submitting for regulatory approval. Regardless of whether the FDA was likely to approve given the situation at the time, we did not want to create a company, spend years of our lives dealing with regulatory and compliance issues full time, and maybe eventually get permission to sell a thing (that we didn’t care about selling).

The aspect of regulatory approval is a red herring in the story of the understanding of OpenAPS and the impact it is having and could have.

Yes, we could have created a company. But then we would not have been able to spend the thousands of hours that we spent improving the system we made open source and helping thousands of individuals who were able to use the algorithm and subsequent systems with a variety of pumps, CGMs, and mobile devices as an open source automated insulin delivery system. We intentionally chose this path to not commercialize and thus not to pursue regulatory approval.

As a result of our work (and others from the community), the ecosystem has now changed.

Time has also passed: it’s been 8 years since I first automated insulin delivery for myself!

The commercial players have brought multiple commercial AIDs to market now, too.

We created OpenAPS when there was NO commercial option at the time. Now there are a few commercial options.

But it is also important to note that I, and many thousands of other people, are still choosing to use open source AID systems.

Why?

This is another aspect of the red herring of regulatory approval.

Just because something is approved does not mean it’s available to order.

If it’s available to order (and not all countries have approved AID systems!), it doesn’t mean it’s accessible or affordable.

Insurance companies are still fighting against covering pumps and CGMs as standalone devices. New commercial AID systems are even more expensive, and the insurance companies are fighting against coverage for them, too. So just because someone wants an AID and has one approved in their country doesn’t mean that they will be able to access and/or afford it. Many people with diabetes struggle with the cost of insulin, or the cost of CGM and/or their insulin pump.

Sometimes providers refuse to prescribe devices, based on preconceived notions (and biases) about who might do “well” with new therapies based on past outcomes with different therapies.

For some, open source AID is still the most accessible and affordable option.

And in some places, it is still the ONLY option available to automate insulin delivery.

(And in most places, open source AID is still the most advanced, flexible, and customizable option.)

Understanding the many reasons why someone might choose to use open source automated insulin delivery folds back into the understanding of how someone chooses to use open source automated insulin delivery.

It is tied to the understanding that manual insulin delivery – where someone makes all the decisions themselves and injects or presses buttons manually to deliver insulin – is inherently risky.

Automated insulin delivery reduces risk compared to manual insulin delivery. While some new risk is introduced (as is true of any additional devices), the net risk reduction overall is significantly large compared to manual insulin delivery.

This net risk reduction is important to contextualize.

Without automated insulin delivery, people overdose or underdose on insulin multiple times a day, causing adverse effects and bad outcomes and decreasing their quality of life. Even when they’re doing everything right, this is inevitable, because the timing of insulin is so challenging to manage alongside the dozens of other variables that influence glucose outcomes at every decision point.

With open source automated insulin delivery, it is not a single point-in-time decision to use the system.

Every moment, every day, people are actively choosing to use their open source automated insulin delivery system because it is better than the alternative of managing diabetes manually without automated insulin delivery.

It is a conscious choice that people make every single day. They could otherwise choose to not use the automated components and “fall back” to manual diabetes care at any moment of the day or night if they so choose. But most don’t, because it is safer and the outcomes are better with automated insulin delivery.

Each individual’s actions to use open source AID on an ongoing basis are data points on the increased safety and efficacy.

However, this paradigm – patient-generated data and patient choice as evidence of safety and efficacy – is new. There are few, if any, other examples of patient-developed technology that has not gone down the commercial path, so there are not many comparisons for open source AID systems.

As a result, when there were questions about the safety and efficacy of the system (e.g., “how do you know it works for someone else other than you, Dana?”), we began to research as a community to address them. We submitted data to the world’s biggest diabetes scientific conference, were peer-reviewed by scientists, and were accepted to present a poster, which we did. We were cited in a piece in Nature as a result. We were then invited to submit a letter to the editor of a traditional diabetes journal summarizing our findings; we did so and were published.

I then waited for the rest of the research community to pick up this lead and build on the work… but they didn’t. So I picked it up again and began facilitating research directly with the community, coordinating anonymized pools of data that individuals using open source AID could submit their data to; for years I have facilitated access for dozens of researchers to use this data in additional research. This has led to dozens of publications further documenting the efficacy of these solutions.
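
Much of that efficacy research hinges on a simple, widely used outcome measure: time in range (the percentage of CGM readings between 70 and 180 mg/dL). As a hedged illustration – the helper and data format here are hypothetical, not from any specific study’s code – computing it is straightforward:

```python
# Illustrative sketch: computing time in range (TIR) from CGM data.
# 70-180 mg/dL is the consensus range used in much of this research;
# the function and data format here are hypothetical examples.

def time_in_range(cgm_readings: list[float],
                  low: float = 70.0,
                  high: float = 180.0) -> float:
    """Percent of CGM readings within [low, high] mg/dL."""
    if not cgm_readings:
        raise ValueError("no CGM readings provided")
    in_range = sum(low <= bg <= high for bg in cgm_readings)
    return 100.0 * in_range / len(cgm_readings)

# Example with a handful of hypothetical 5-minute readings:
readings = [95, 110, 142, 185, 205, 168, 130, 88, 72, 65, 101, 150]
print(f"TIR: {time_in_range(readings):.1f}%")  # 75.0% for this sample
```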

Yet there was still concern about safety, because the healthcare world didn’t know how to assess these patient-generated data points: people’s choice, every single day, to use the system because it was better than the alternative.

So finally, as a direct result of presenting this community-based research again at the world’s largest diabetes scientific conference, we were able to collaborate on a grant proposal that received funding from New Zealand’s Health Research Council (the equivalent of the NIH in the US) for a randomized controlled trial of the OpenAPS algorithm in an open source AID system.

An RCT is often seen as the gold standard in science, so receiving funding for such a study was itself a big milestone.

And this year, in 2022, the RCT was completed and our findings were published in one of the world’s largest medical journals, the New England Journal of Medicine, establishing that the OpenAPS algorithm in an open source AID system is safe and effective in children and adults.

No surprises here, though. I’ve been using this system for more than 8 years, and I’ve seen thousands of others choose the OpenAPS algorithm on an ongoing, daily basis for similar reasons.

So today, it is possible that someone could take an open source AID system using the OpenAPS algorithm to the FDA for regulatory approval. It won’t likely be me, though.

Why not? The same reasons apply as 8 years ago: I am not a company, and I don’t want to create one just to be able to sell things to end users. The path to regulatory approval primarily matters to those who want to sell commercial products to end users.

Also, regulatory approval – whether of the OpenAPS algorithm or a different algorithm in an open source AID – would not mean the system becomes commercially available.

That requires a company that has pumps and CGMs it can sell alongside the AID system, OR commercial partnerships ready to go so that all of the interoperable, approved components can be sold to work with the AID system.

So regulatory approval of an AID system (algorithm/mobile controller design) without a commercial partnership plan ready to go is not very meaningful to people with diabetes in and of itself. It sounds cool, but will it actually do anything? In and of itself, no.

Thus, the red herring.

Might it be meaningful eventually? Yes, possibly – especially if we can collectively get insurers to get over themselves and provide coverage for AID systems, given that AID systems massively improve short-term and long-term outcomes for people with diabetes.

But as I said earlier, regulatory approval does not guarantee access or affordability, so an approved system that’s not available and affordable is not a system that can be used by many.

We have a long way to go before commercial AID systems are widely accessible and affordable, let alone available in every single country for people with diabetes worldwide.

Therefore, regulatory approval is only one piece of this puzzle.

And it is not the only way to assess safety and efficacy.

The bigger picture this has shown me over the years is that while these systems were created to reduce harm to people – which is valid and good – there is a tendency to assume that the systems are therefore the only way to achieve harm reduction, or the only way to assess safety and efficacy.

They aren’t the only way.

As explained above, FDA approval is one method of creating a rubber stamp that serves as shorthand for “this is considered safe and effective.”

That stamp is also legally necessary for companies that want to sell products. For situations where nothing is being sold, it’s not the only way to assess safety and efficacy – which we have shown with OpenAPS.

With open source automated insulin delivery systems, individuals have access to every line of code and can test and choose for themselves, not just once, but every single day, whether they consider it to be safer and more effective for them than manual insulin dosing. Instead of blindly trusting a company, they get the choice to evaluate what they’re using in a different way – if they so choose.

So any questions around seeking regulatory approval are red herrings.

A different question might be: What’s the future of the OpenAPS algorithm?

The answer is written in our OpenAPS plain language reference design that we posted in February of 2015. We detailed our vision for individuals like us, researchers, and companies to be able to use it in the future.

And that’s how it’s being used today: 1) by people like me; 2) in research, to improve what we can learn about diabetes itself and to improve AID; and 3) by companies, one of which has already incorporated parts of our safety design as a safety layer in their ML-based AID system, which has CE mark approval and is being sold and used by thousands of people in Europe.

It’s possible that someone will take it for regulatory approval, but that’s not necessary for the thousands of people already using it. That may or may not make it more available to thousands more (see the earlier caveats about needing commercial partnerships to interoperate with pumps and CGMs).

And regardless, it is still being used to change the world for thousands of people and help us learn and understand new things about the physiology of diabetes because of the way it was designed.

That’s how it’s been used and that’s the future of how it will continue to be used.

No rubber stamps required.
