Running a Multi-Day Ultramarathon (Aiming for 200 Miles)

I used to make a lot of statements about things I thought I couldn't do. I thought I couldn't run overnight, so I couldn't attempt to run 100 miles. I could never run 200 mile races the way other people did. Etc. Yet last year I found myself training for and attempting 100 miles (I chose to stop at 82, but successfully ran overnight and for 25 hours), and this year I found myself working through the mental logistics and puzzling out whether I could train for and attempt to run 200 miles, or as many miles as I could cover across 3-4 days.

Like my 100 mile attempt, I found some useful blog recaps and race reports from people who ran official 200-ish mile races. However, as with the 100, I found myself wanting more information about the mental training and logistical preparation people put into it. While my 200 mile training and prep anchored heavily on what I did before, this post describes in more detail my training, prep, and 'race' experience for a multi-day or 200 mile ultra attempt.

DIY-ing a 200

For context, I have a previous post describing the myriad reasons why I often choose to run DIY ultras, meaning I'm not signing up for an official race. Most of those reasons hold true for why I chose to DIY my 200. Like my 100 (82) miles, I mapped a route based on my home paved trail that takes me out and around the trails I'm familiar with. It has its downsides, but also upsides: really good trail bathrooms, and I feel safe running there. Plus, it's easy and convenient for my husband to crew me. Since I expected this adventure to take 3-4 days (more on that below), that's a heavy ask of my husband's time and energy, so sticking with the easy routes that work for him is optimal, too. So while I sought to run 200 miles just like any other 200-mile ultra runner, my course happens to have minimal elevation. Not all 200 mile ultramarathon races have a ton of elevation – some, like the Cowboy 200, are pretty flat – so my experience is closer to that than to the experience of those running mountain-based ultras with 30,000 feet (or more) of elevation gain. And I'm ok with that!

Sleep

One of the puzzles I had to figure out to decide I could even attempt a 200 miler is sleep. With a 100 mile race, most people don't sleep at all (nor did I); we just run through the night. With 200 miles, that's impossible, because it takes 3, 4, or 5 days to finish, and biologically you need sleep. Plus, I need more sleep than the average person. I'm a champion sleeper; I typically sleep much longer than everyone else; and I know I couldn't function with an hour here or there like many people do at traditional races. So I actually designed my 200 mile ultra with this in mind: how could I cover 200 miles AND get sleep? Because I'm running to/from home, I have access to my kitchen, shower, and bed, so I decided I would structure my run so that I ran each day, then came home each night to eat dinner, shower, and sleep a short night in my own bed.

I then decided that instead of winging it and running until I dropped before eating, showering, and sleeping, I would aim to run 50 miles each day. Then I'd come in, eat, shower, and sleep, and get up the next morning and go again. 4 days, 3 nights, 50 miles each day: that would have me finishing around 87-90ish hours total (with the clock running from my initial start), including ~25 hours or more of total downtime for eating/showering/sleeping/getting ready. That breakdown of roughly 3.67 days is well within the typical finish times of many 200 mile ultras (yes, comparing to those with elevation gain), so it felt like both a stretch for me and doable, in a sensible way that works for me and my needs. I mapped it all out in my spreadsheet: the number of laps, my routes, and the pacing to finish 50 miles per day; the two times per day I would need my husband to come out and crew me at 'aid station stops' in between laps; and what time I would finish each night. I then factored in time to eat, shower, and get ready for bed, sleep, and time to get up in the morning. Because I expected to run slower each day, the sleep windows went from 8 hours down to less than 6 hours by night 3. That being said, if I managed to sleep 5 hours per night and 15 hours total, that's probably almost twice as much as most people get during traditional races!
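
To make that day-by-day math concrete, here is a rough sketch of the kind of calculation my spreadsheet handled. The paces and routine durations below are illustrative placeholders, not my actual plan values.

```python
# Rough sketch of the day-by-day schedule math the spreadsheet handled.
# The paces and downtime numbers below are illustrative assumptions,
# not the actual plan values.

daily_miles = 50
pace_min_per_mile = [15.5, 16.5, 17.5, 18.5]  # assumed slowdown each day
evening_routine_hr = 1.5   # eat, shower, foot care (assumed)
morning_routine_hr = 1.0   # wake, tape feet, dress, fuel up (assumed)
wakeup_to_start = 6.0      # planned 6am start each morning

clock_hr = 0.0             # elapsed race clock, starting at 6am on day 1
start_of_day = 6.0         # time of day the run starts

for day, pace in enumerate(pace_min_per_mile, start=1):
    run_hr = daily_miles * pace / 60
    finish_time = start_of_day + run_hr
    clock_hr += run_hr
    print(f"Day {day}: ~{run_hr:.1f} h moving, finishing around {finish_time:.1f}h (24h clock)")
    if day < len(pace_min_per_mile):
        # sleep window = end of evening routine until next morning's wakeup
        sleep_hr = (24 + wakeup_to_start - morning_routine_hr) - (finish_time + evening_routine_hr)
        clock_hr += evening_routine_hr + sleep_hr + morning_routine_hr
        print(f"         sleep window of roughly {sleep_hr:.1f} h")

print(f"Total elapsed clock: ~{clock_hr:.0f} h")
```

With those assumed paces, the sleep windows shrink each night and the total elapsed clock lands in the high 80s of hours, which is the same shape as my actual plan.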

Beyond sleep, I was also very cognizant of the fact that a 200 probably comes down to mental fortitude and willpower to keep going; meticulous fueling; and excellent foot care. Plus reasonable training, of course.

Meticulous fueling

I have previously written about building and using a spreadsheet to track my fuel intake during ultras. This method works really well for me because after each training run I can see how much I consumed and any trends. I started to spot that as I got tired, I would tend to choose certain snacks that happened to be slightly lower calorie. Not by much, but the snack selections went from those that are 150-180 calories to 120-140 calories, in part because I perceived them to be both ‘smaller’ (less volume) and ‘easier to swallow’ when I was tired. Doubled up in the same hour, this meant that I started to have hours of 240 calories instead of more than 250. That doesn’t sound like much, but I need every calorie I can get.

I mapped out my estimated energy expenditure based on the 50 miles per day, and even consuming 250 calories per hour, I would end up with several thousand calories of deficit each day! I spent a lot of time testing foods that I thought I could eat for dinner on the 3 nights, to ensure that I would get a good 1000 calories or more in before going to bed, to help reduce the growing energy deficit. But I also ended up optimizing my race fuel, too. Because I ran so many long runs in training where I fueled every 30 minutes, and because I had been mapping out my snack list for each lap for 50 miles a day for 4 days, I had been aware for months that I would probably get food fatigue if I didn't expand my fuel list. I worked really hard to test a bunch of new snacks and add them to the rotation. That really helped, even in training. Across all 12 laps (3 laps a day to get 50 miles, times 4 days), I carefully made sure I wouldn't have too many repeats and get sick of any one food or group of things I planned to eat. I also recently realized that some of the smaller items (e.g. 120 calorie servings) could be increased. I'm already portioning out servings from a big bag into small baggies; in some cases, adding one more pretzel or one more piece of candy (or more) would drive up the calories by 10-20 per serving. The small tweaks I made to 5 of my ~18 possible snacks added about 200 calories on top of what was already represented in those snacks. If I happen to choose those 5 snacks as part of my list for any one lap, that means I have a bonus 200 calories I've convinced myself to consume without it being a big deal, because it's simply one more pretzel or one more piece of candy in a snack I'm already used to consuming. (Again, because I'm DIYing my race and have specific needs related to running with celiac, diabetes, and exocrine pancreatic insufficiency, pre-planning my fuel and having it laid out in advance for every run – or, in the race, for every single lap – is what works for me personally.)
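
For a rough sense of how that deficit math works, here is a back-of-the-envelope sketch; the burn rate is an assumed placeholder, since my real numbers came from my own tracking and varied day to day.

```python
# Back-of-the-envelope deficit projection for one planned 50-mile day.
# The burn rate here is an assumption for illustration; the real numbers
# came from my own tracking and varied day to day.

miles_per_day = 50
burn_per_mile = 105        # assumed kcal burned per mile while run/walking
hours_moving = 13          # roughly 50 miles at ~15-16 min/mi
fuel_per_hour = 250        # my minimum per-hour intake goal while running
dinner = 1000              # target dinner calories before bed

burned_running = miles_per_day * burn_per_mile
eaten = hours_moving * fuel_per_hour + dinner
print(f"running burn ~{burned_running} kcal, intake ~{eaten} kcal, gap ~{burned_running - eaten} kcal")
# ~5250 burned vs ~4250 eaten leaves a ~1000 kcal gap before even counting
# baseline (non-running) metabolism, which is how the projected daily
# deficit ends up at several thousand calories.
```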

Here’s a view of how I laid out my fuel. I had worked on a list of what I wanted for each lap, checking against repeats across the same day and making sure I wasn’t too heavily relying on any one snack throughout all the days. I then bagged up all snacks individually, then followed my list to lay them out by each lap and day accordingly. I also have a bag per day each for enzymes and electrolytes, which you’ll see on the left. Previously, I’ve done one bag per lap, but to reduce the number of things I’m pulling in and out of my vest each time, I decided I could do one big bag each per day (and that did end up working out well).

Two pictures side by side, with papers on the floor showing left to right laps 1-3 on the top and along the left side days 1-4, to create a grid to lay out my snacks. On the left picture, I have my enzymes, electrolytes per day and then a pile of snacks grouped for each lap. On the right, all the snacks and enzymes and electrolytes have been put into gallon bags, one for each lap.

Contingency planning

Like I did for my 100, I was (clearly) planning for as many possibilities as I could. I knew that during the run – and each evening after the run – I would have limited excess mental capacity for new ideas and brainstorming solutions when problems came up. The more I prepared for the things I knew were likely to happen – fatigue, sore body, blisters, chafing, dropping things, getting tired of eating, etc. – the more likely they would stay small things, rather than big things that could contribute to ending a race attempt. This included learning from my past 100 attempt and how I dealt with the rain. First of all, I planned to move my race if it looked like we'd get 6 months of rain in a single 24 hour period! But I also scheduled my race so that if I did have a few hours of really hard rain, I could choose to take a break – come in and eat/shower/change/rest and go back out later, or extend and finish a lap on the last day or the day after that. I was not running a race that would yank me from the course, but I did have a hard limit: a pre-planned doctor's appointment that would be a hassle to reschedule, so I needed to finish by the night after day 5. Still, this gave me the flexibility to take breaks (which I wasn't really planning to take, but was prepared to take if I needed to due to weather conditions).

Training for a 200 mile ultramarathon

Like training plans for marathons and 100 milers, the training plans I've read about for 200 mile ultramarathons intimidate me. So much mileage! So much time for a slow run/walker like me. I did try to look at sample 200 mile ultra plans and get a sense of what they're trying to achieve – e.g. when they peak their mileage before the race, how many back to back runs they include and of what general length in terms of time, etc. – and then loosely kept that in mind.

But basically, I trained for this 200 mile ultra just like I trained for my marathon, 50k, 100k, and 82 miler. I like to end up doing long runs (which for me are run/walks of 30 seconds run, 60 seconds walk, just like my shorter runs) of up to around 50k distance. This time, I did two total training runs that were each around 29 miles, based on the length of the trail I had to run. I could have run longer, but mentally I had the confidence that another ~45 minutes per run wasn't going to change my ability to attempt 50 miles a day for 4 days. If I didn't have 3 years of this training style under my belt, I might feel differently about it. That's longer than many people run in training, but I find the experience of 7-8 hours of time on my feet – fueling, run/walking, and problem solving (including building up my willpower to spend that much time moving) – to be what works for me.
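
If you're curious how a 30 seconds run / 60 seconds walk interval nets out to an overall pace, here is a small sketch; the individual run and walk paces in it are assumptions, not my actual splits.

```python
# Blended pace for a fixed run/walk interval. The 30s run / 60s walk split
# matches what I describe above; the run and walk paces are assumptions
# purely to show how the interval averages out to roughly a 15 min/mi pace.

run_sec, walk_sec = 30, 60          # interval structure
run_pace, walk_pace = 11.0, 18.0    # assumed min/mi while running and while walking

run_speed = 1 / run_pace            # miles per minute
walk_speed = 1 / walk_pace

cycle_min = (run_sec + walk_sec) / 60
miles_per_cycle = (run_sec / 60) * run_speed + (walk_sec / 60) * walk_speed
blended_pace = cycle_min / miles_per_cycle

print(f"blended pace of about {blended_pace:.1f} min/mi")   # ~14.9 min/mi with these assumptions
```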

The main difference for my 200 is probably that it's my 3rd year of ultrarunning. I was able to increase my long runs a little bit more at a time; historically I used to add 2 miles at a time to a long run. This time I jumped up 4 miles at a time – again, run/walking, so very easy on my legs – when building up my long runs, so I was able to end up with 2 different 29 mile runs, two weeks apart, even though I really only kicked off training specifically for this 8 weeks prior to the run (10 weeks including taper). In between, I also did a weekend of back to back to back runs (meaning 3 days in a row) where I ran 16 miles, another 16 miles, and 13 miles to practice getting up and running on tired legs. In past cycles I had done a lot more back to back (2-day) weekends with a long and a medium run, but this time I did less of the 2-day and did the one big 3-day, since I was targeting a 4-day experience. If I were to do this again, given how well my body held up with all this training, I might do more back to backs, but I took things very cautiously and wanted to avoid overtraining or causing injury by ramping up too quickly.

As part of that (trying not to overdo it), instead of doing several little runs throughout the week I focused on more medium-long runs with my vest and fueling. So I would do something like a long run (starting at 10 miles and building up to 29 miles), a medium-long run (8 miles up to 13 or 16 miles), and another medium-ish run (usually 8 miles). Three runs a week, and that was it. Earlier in the 8 weeks, I was still doing a lot of hiking as the hiking season wound down, so I had plenty of other time-on-feet experiences. Later in the season I sometimes squeezed in a 4th short run for the week if we weren't hiking, ran without my vest, and tried to do some 'speed work' (aka running a little faster than my easy long run pace). Nothing fancy. Again, this is based on my slow running style (which is actually a fixed interval of short run and short walk, usually 30 seconds run and 60 seconds walk), my schedule, my personality, and more. If you read this, don't think my mileage or training style is the answer. But I did want to share what I did and that it generally worked for me.

I did struggle with wondering if I was training "enough". But I never train "enough" compared to others' marathon, 50k, 100k, or 100 mile plans, either. I'm a low-ish mileage trainer overall, even though I do throw in a few longer runs than most people do. My peak training for the marathon, 50k, and 100k is usually around the low 50s (miles per week). Surprisingly, this 200 cycle did get me to some mid-60-mile weeks! One thing that also helped me mentally was adding a rolling 7 day calculation of my miles, rather than just looking at miles per calendar week. That helped when I shifted some runs around due to scheduling, because I could see that I was still keeping a reasonable 55 to low-60s mileage over any 7 days, even though a calendar week total dropped to the low 40s because of the way the runs happened to land in the calendar weeks.
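
If you want to do the same rolling 7 day math, here is a minimal sketch; the daily mileages in it are made up purely to show how a long run sliding across a week boundary changes the calendar-week total but not the rolling total.

```python
# Rolling 7-day mileage vs. calendar-week totals. The daily miles below are
# made up purely to show the effect of a long run sliding across a week
# boundary; they are not my actual training log.
from datetime import date

runs = {
    date(2023, 9, 4): 8,  date(2023, 9, 6): 16, date(2023, 9, 10): 29,  # calendar week 1: 53 mi
    date(2023, 9, 13): 8, date(2023, 9, 15): 13,                        # calendar week 2: only 21 mi
    date(2023, 9, 18): 29, date(2023, 9, 20): 16,                       # long run slid into week 3
}

def rolling_7day_total(end_day):
    """Total miles in the 7-day window ending on end_day (inclusive)."""
    return sum(miles for d, miles in runs.items() if 0 <= (end_day - d).days < 7)

for d in sorted(runs):
    print(d, "rolling 7-day total:", rolling_7day_total(d), "miles")
# The calendar week of Sep 11-17 totals only 21 miles, but the rolling 7-day
# totals around it stay between 37 and 58 miles: same training, different framing.
```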

Generally, though, looking back: my training was more than I had accomplished for previous races; I felt better than ever (good fueling really helps!); and I didn't have any accidents, overtraining injuries, or niggles. So a few weeks before peak, I decided that I was training enough and that it was the right amount for me.

Another factor that was slightly different was how much hiking I had done this year. I ran my 100k in March then took some time off, promising my husband that we would hike "more" this year. That also coincided with me not really bouncing back from my 100k recovery period: I didn't feel like doing much running, so we kept planning hiking adventures. Eventually I realized that this coincided with my TSH going too high for my body's happiness, and that my disinterest in long runs was actually a symptom (for me) of slightly too-high TSH. (Because I was diagnosed with Graves' disease last year, I'm having my thyroid, antibody, and other related blood work done every 3 months while we work on getting everything into range.) I changed my thyroid medication and within two weeks felt HUGELY more interested in long running, which is what reinvigorated my interest in a fall ultra, training, and ultimately deciding to go for the 200. But in the meantime, we kept hiking a lot – to the tune of over 225 miles hiked and over 53,000 feet of elevation gain! I never tracked elevation gain for hiking before (and I'm not sure I retrospectively tracked all of last year's hikes, but it was closer to 100 miles), so this is likely more than a 2x increase over my previous biggest hiking year, just given the sheer number of hikes we went out on. Overall, the strengthening of my muscles from hiking helped, as did the time on feet. Before I kicked off my 8 week cycle, we were easily spending 3-4 hours a hike and usually doing at least two hikes a weekend, so almost every hike gave me time on feet equivalent to 12 or more miles of running. That really helped when I reintroduced long runs, and it aided my ability to jump my long run distance by 4 miles at a time instead of progressing more gently by 2 miles a week as I had done in the past.

How my 200 mile attempt actually went

Spoiler alert: I DNF'd (did not finish) 200 miles. Instead, I stopped – happily – at 100 miles. But it wasn't for a lack of training.

Day 1 – 51 miles – All as planned

I set out on lap 1 on Day 1 as planned and on time, starting in the dark with a waist lamp at 6am. It was dark and just faintly cool, but warm enough (51F) that I didn't bother with long sleeves because I knew I would warm up. (Instead, for all days, I was happy in shorts and a short sleeve shirt as the temps ranged from 49F up to 76F and back down again.) I only had to run for about an hour in the dark before the sky gradually brightened. It ended up being a cloudy, overcast, nice-weather day, so it didn't get super bright first thing, but because it wasn't wet and cold, it wasn't annoying at all. I tried to start and stay at an easy pace, and was running slow enough (~30s/mile slower than my training paces) that I didn't have to alter my planned intervals to slow myself down any more. All was fairly well and as planned in the first lap. I stopped to use the bathroom at mile 3.5 and, as planned, at my 8 mile turnaround point, and also stopped to stuff a little more wool into a spot in my shoe a mile later. That added 2 minutes of time, but I didn't let it bother me and still managed to finish lap 1 at about a 15:08 min/mi average pace, which was definitely faster than I had predicted. I used the bathroom again at the turnaround while my husband re-filled my hydration pack, then I stuffed the next round of snacks in my vest and took off. The bathroom and re-fueling "aid station" stop only took 5 minutes. Not bad! And on I went.

A background-less shot of me in my ultrarunning gear. I'm wearing a grey moisture-wicking visor; sunglasses; a purple ultrarunning vest packed with snacks in front and the blue tube of my hydration pack looped in front; a bright fluorescent pink short sleeve shirt; grey shorts with pockets bulging on the side with my phone (left pocket) and skittles and headphones and keys (right pocket); and, in this lap, bright pink shoes.

Lap 2 was also pretty reasonable, although I was surprised by how often I wanted a bathroom. My period had started that morning (fun timing), and while I didn't have a lot of flow, the signals my abdomen was giving my brain were telling me that I needed to go to the bathroom more often than I would have otherwise. That started to stress me out slightly, because I found myself wishing for a bathroom in the longest stretch without trail bathrooms – about 5.5 miles, in a very populated area. I tried to drink less, but I was also wary of under-hydrating or imbalancing my electrolytes. I always get a little dehydrated during my period, and I was running a multi-day ultra where I needed a lot of hydration and more sodium than usual; this situation didn't add up well! But I made it without any embarrassing moments on the trail. The second aid station again only took 5 minutes. (It really makes a world of difference to not have to dry off my feet, Desitin them up, and re-do socks and shoes at every single aid station like I did last year!) I could have moved faster, but I was trying not to let small minutes of time frazzle me, and I was succeeding at being efficient but not rushed and continuing on my way. I had slowed down some during lap 2, however – dropping from a 15:08 to a 15:20ish min/mi pace. Not much, but noticeable.

At sunset, with light blue sky fading to yellow at the horizon behind a row of tall, skinny bush-like trees with gaps, and a hot air balloon a hundred or so feet off the ground seen between the trees.

Lap 3, I did feel more tired. I talked my husband into bringing me my headlamp toward the end of the last lap, instead of me having to carry it for 4+ hours before the sun went down. (Originally, I thought I would need it 2-3 hours into this last lap, but because I was moving so well it was now looking like 4 hours, and it would only be a 2-3 mile e-bike ride for him to bring me the lamp when I wanted it. That was a mental win: not having to run with the lamp when I wasn't using it!) I was still run/walking the same intervals, but slowed down to about a 16:01 pace for this lap. Overall, I would end up at a 15:40 average for the whole day, but the fatigue and my tired feet started to kick in on the third lap, between miles 34-51. Plus, I stopped to take a LOT more pictures, because there was a hot air balloon growing in the distance as it flew right toward me – and then right by me, next to the trail! It ended up landing next to the soccer fields a mile behind me after it passed me in this picture. I actually made it home right as the sun set and didn't have to wear my lamp at all that evening.

Day 1 recovery was better and worse than I expected. I sat down and used my foot massager on my still-socked feet, which felt very good. I took a shower after I peeled my socks off and took a look at my feet for the first time. One blister that I hadn't even known was growing had popped about an hour before I finished, but it was under part of my pre-taped area, so I decided to leave the tape and see how it looked and felt in the morning. I had 2-3 other tiny, not-a-big-deal blisters that I would tape in the morning but that didn't need any attention that night.

I had planned to eat a reasonably sized dinner – preferably around 1000 calories – each night, to help me address my calorie deficit. And I had a big deficit: I had burned 5,447 calories and consumed 3,051 calories in my 13 hours and 13 minutes of running. But I could only eat ¼ of the pizza I planned for dinner, and that took a lot of work to force myself to eat. So I gave up, and went to bed with a 3,846 calorie deficit, which was bigger than I wanted.

And going to bed hurt. I was stiff, which I could deal with, but my feet – which hadn't hurt much while running – started SCREAMING at me. All over. They hurt so bad. Not blisters, just intense aches. Ouch! I started to doubt my ability to run the next day, but this is where my pre-planning kicked in (aided by my husband, who had agreed to the rules we had decided upon): no matter what, I would get up in the morning, get dressed, and go out and start my first lap. If I decided to quit, I could, but I could not quit at night in bed, or in the morning in bed or in the house. I had to get up and go. So I went to sleep, less optimistic about my ability to finish 50 miles again on day 2, but willing to see what would happen.

Day 2: 34 instead of 50 miles, and walking my first ever lap

I actually woke up before my alarm went off on day 2. Because I had finished so efficiently the day before, I was able to again get a good night's sleep, even with the 4:30am alarm and plans to be going by 6am. The extra time was helpful, because I didn't feel rushed as I got ready to go. I spent some extra time taping my new blisters. Because they hadn't popped, I put small torn pieces of Kleenex against them and used cut strips of kinesio tape to protect the area. (Read "Fixing Your Feet" for other great ultra-related foot care tips; I learned the Kleenex trick from that book.) I also use lamb's wool for areas that rub or might be getting hot spots, so I put wool back in my usual places (between the big and second toes, and on the side of the foot), plus another toe that was rubbing but not blistered and could use some cushion. This year I have also been trying Tom's blister powder in my socks, which seems to help since my feet are extra sweat-prone, and I had pre-powdered a stack of socks so I could simply slip them on and get going once I had done the Kleenex/tape and wool setup. The one blister that had popped under my tape wasn't hurting when I pressed on it, so I left it alone and just added loose wool for a little padding.

A pretty view of the trail with bright blue sky after the sun rose, with green bushes (and the river out of sight) to the left, the trail running parallel to the high concrete wall of a road, and cheery red and yellow leaved trees leaning over the trail.

And off I went. I managed to run/walk from the start, faster than I had originally projected in my spreadsheets and definitely faster than I thought was possible the night before, or even before I started that morning. Sure, I was slower than the day before, but a 15:40 min/mi pace was nothing to sneeze at, and I was feeling good. I was really surprised that my legs, hips, and body did not hurt at all! My multi-day and back-to-back training seemed to pay off here. All was well for most of the first lap (17 miles again), but in the last 2 or so miles, my pace started dipping unexpectedly, so I was doing 16+ min/mi without changing my easy effort. I was disappointed, and tired, when I came into my aid station turnaround. I again didn't need foot care and spent less than 5 minutes there, but I told Scott (my husband) as I left that I was going to walk for a while, because my feet had been hurting and were getting worse. Not blisters: the balls of my feet were excruciating.

A close-up of a yellow-shelled snail on the paved trail, which I saw while walking the world's slowest 17-mile lap on day 2.

I headed out, and within a few minutes he had re-packed up and biked up to ride alongside me for a few minutes and chat. I told him I was probably going to need to walk this entire lap. We agreed this was fine and to be expected – it was in fact built into my schedule that I would slow down. I've never walked a full lap in an ultra before, so this would be novel for me. But then my feet got louder and louder, and I told him I didn't think I could even walk the full lap. We decided that I should take some Tylenol: I wasn't limping, so it wouldn't be masking any pain that was an important cue from my body that I would be overriding; it would simply mute the "ow, this is a lot" screaming from the bones in the balls of my feet. He biked home, grabbed some, and came back out. I took the Tylenol and sent him home again, walking on. Luckily, the Tylenol did kick in and the pain went from almost unbearable to manageable super-discomfort, so I continued walking. And walking. And walking. It felt like it took FOREVER, having gone from a 15-16 min/mi pace with 30 seconds of running, 60 seconds of walking, to 19-20 minute miles of pure walking. It was boring. I had podcasts, music, and audiobooks galore, and I was still bored and uncomfortable and not loving this experience. On the way back, I was also thinking about how I did not want to do a 3rd lap that day (to get me to my planned 50 miles) walking again.

Scott biked out early to meet me and bring me extra ice, because it was getting hot and I was an hour slower than the day before and risking running out of water that lap if he didn’t. After he refilled my hydration pack and brought it back to me while I walked on, I told him I wanted to be done for the day. He pointed out that when I finished this lap, I would be at 34 miles for the day, and combined with the day before (51), that put me at 85 miles, which would be a new distance PR for me since last year I had stopped at 82. That was true, and that would be a nice place to stop for the day. He reminded me of our ‘rules’ that I could go out the next day and do another lap to get me to 100, and decide during that lap what else I wanted to do. I was pretty sure I didn’t want to do more, but agreed I would decide the next day. So I walked home, completing lap 2 and 34 miles for the day, bringing me to 85 miles overall across 2 days.

Day 2 recovery went a little better, in part because I didn't do 51 miles (only 34), I had walked rather than run the second lap, and I had stopped earlier in the day (4pm instead of 7pm). I had more time to shower and bring myself to finally eat an entire 1000 calories before going to bed, again with my feet screaming at me. I had more blisters this time, mostly again on my right foot, but the balls of my feet and the bones of my feet ached in a way they never had before. This time, though, instead of setting my alarm to get up and go by 6am, I decided to sleep longer and go out a little later to start my first lap. This was a deviation from my plan, but another deviation I felt was the right one: I needed the sleep to help my body recover enough to even attempt another lap.

Day 3: Only 16 miles, but hitting 100 for the first time ever

Instead of 6am, I set out on Day 3 around 8:30am. I would have taken even longer to go, but the forecast was for a warm day (we ended up hitting 81F) and I wanted to be done with the lap before the worst of the heat. I thought there was a 10% chance I’d keep going after this lap, but it was a pretty small chance. However, I set out for the planned 16 mile lap and was pleasantly surprised that I was run/walking at about a 15:40 pace! Again, better than I had projected (although yes, I had deviated from my mileage plan the day before), and it felt like a good affirmation that stopping the day before instead of slogging out another walking lap was the right thing to do.

After the first few miles, I toyed with the idea of continuing on. But I knew that with the heat I probably couldn't stand more than one more lap, which would get me to 116. Even if I went out again the fourth day and did 1-2 laps, that would MAYBE get me to 150, but I doubted I could do that without starting to cause some serious damage. And it honestly wasn't feeling fun. I had enjoyed the first day: running in the dark, the fog, the daylight, and the twilight, seeing changing fall leaves and running through piles of them. The second day was also fun for the first lap, but the second lap of walking was probably what a lot of ultramarathoners call the "death march", and just not fun. I didn't want to keep going if it wasn't fun, and I didn't want to run myself into the ground (meaning so worn down that it would take weeks to months to recover) or into injury, especially when the specific milestones didn't really mean anything. Sure, I wanted to be a 200 mile ultramarathoner, something that only a few thousand people have ever done – but I didn't want to do it at the expense of my well-being. I spent a lot of time thinking about it, especially miles 4-8, including the fact that the day before I had started, I had gone to a doctor's appointment and received an official diagnosis confirming my fifth autoimmune disease – and then proceeded to run 100 miles. Despite all the fun challenges of running with autoimmune conditions, I'm in really good health and fitness. My training this year went so well and I really enjoyed it. Most of this ultra had gone so well physically, and my legs and body weren't hurting at all: the weakness was my feet. I didn't think I could have trained any differently to address that, nor do I think I could change it moving forward. It's honestly just hard to run that many hours or that many miles, as most ultramarathoners know, and your feet take a beating. Given that I was running on pavement for all of those hours, it can be even harder – or a different kind of hard – than kicking roots and rocks on a dirt trail. I figured I would metaphorically kick myself if I tried for 116 or 134 and injured myself in a way that would take 6-8 weeks to recover, whereas I felt pretty confident that if I stopped after this lap (at 100), I would have a relatively short and easy recovery, no major issues, and would bounce back better than I ever have, despite it being my longest ever ultramarathon. Yes, I was doing it as a multi-day with sleep in between, but both in time on feet and in mileage, it was still the most I'd ever done in 2 or 3 days.

And, I was tired of eating. I was fueling SO well. Per my plans, I set out to consume >500 mg of sodium per hour and >250 calories per hour. I had been nailing it every lap and every day! Day 1 I averaged 809 mg of sodium per hour and 290 calories per hour. Day 2 was even higher, averaging 934 mg of sodium per hour and 303 calories per hour! Given the decreased caloric burn on day 2 because I walked the second lap, my caloric deficit for day 2 was a mere ~882 calories (helped by the fact that I also managed to eat a full dinner that night), even though I skipped the last hour of fueling as I finished the walking lap. Day 3 I was also fueling above my goals, but I was tired of it. Sooooo tired of it. Remember, I have to take a pill every time I eat, because I have exocrine pancreatic insufficiency (EPI or PEI). I was eating every 30 minutes as I ran or walked, so that meant swallowing at least one pill every 30 minutes. Between my enzymes and electrolyte pills, I had swallowed 57 pills on Day 1 and 48 pills on Day 2. SO MANY PILLS. The idea of continuing to eat constantly, every 30 minutes, for another lap of ~5 or more hours was also not appealing. And I knew that if I didn't eat, I couldn't continue.

A chart with an hourly breakdown of sodium, calories, and carbs consumed per hour, plus totals of caloric consumption, burn, and calculated deficit across ~27 hours of move time to accomplish the 100 mile run.

And so, I decided to stop after one more lap on day 3, even though I was holding a respectable 15:41 min/mi pace throughout. I hit 100 miles, finished the lap at home, and was happy with my decision.

Two pictures of me leaning over after my run, holding a sign (one reading 50 miles, one reading 100 miles) for each of my cats to sniff.

(You can see from these two pictures that I smelled VERY interesting – sweaty and salty and exhausted – at the end of day 1 and day 3, when I hit 50 miles and 100 miles, respectively. We have twin kittens (now 3 years old); one came out to sniff me on the first day, and the other came out as I came home on the third day!)

Because I only ran one final lap (16 miles) on day 3, and had so many bonus hours in the rest of the day once I was done and home, I was able to eat more and ended up with only an 803 calorie deficit for the day. So overall, day 1 had the biggest deficit, and it probably influenced my fatigue and perception of pain on day 2; but because I shortened day 2 and then day 3, my very high calorie intake every hour did a pretty good job of matching my calorie expenditure, which is probably why I felt very little muscle fatigue in my body and had no significantly sore areas other than the bottoms of my feet. I ended up averaging 821 mg/hr of sodium and 279 calories per hour (taking into account the fact that I skipped the two final snacks at the end of day 2 when I was walking it out; ignoring that completely skipped hour, the average caloric intake during hours when I ate anything at all was closer to 290 calories/hr!).
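
As a small aside on how those averages are computed, here is a tiny sketch showing why the "average over all hours" and "average over hours I actually ate" numbers diverge; the hourly values are invented for illustration.

```python
# How the per-hour fueling averages differ with vs. without a skipped hour.
# The hourly values below are invented for illustration; the real log is the
# spreadsheet shown in the chart above.

hourly_calories = [290, 300, 280, 310, 0, 295]   # one hour with no intake
hourly_sodium_mg = [820, 900, 780, 850, 0, 830]

hours_total = len(hourly_calories)
hours_fueled = sum(1 for c in hourly_calories if c > 0)

print("cal/hr over all hours:     ", round(sum(hourly_calories) / hours_total))
print("cal/hr over fueled hours:  ", round(sum(hourly_calories) / hours_fueled))
print("sodium mg/hr over all hours:", round(sum(hourly_sodium_mg) / hours_total))
```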

In total, I ended up consuming 124 pills in approximately 27 hours of move time across my 100 miles. (This doesn’t include enzyme pills for my breakfast or dinners each of those days, either – just the electrolyte and enzyme pills consumed while running!)

Aftermath

Recovery after day 3 was pretty similar to day 2, with me being able to eat more and limit my calorie deficit. I've had long ~30 mile training runs where I wasn't very hungry afterward, but even two days after my ultra, I still haven't really regained my appetite. I would have figured my almost 4000 calorie deficit from day 1 would drive a lot of hunger, so this has surprised me.

So too has my physical state: 48 hours after completing my 100 miles, I am in *fantastic* shape compared to other multi-day, back-to-back series of runs I've done, ultramarathons or not. The few blisters I got, mainly on my right foot, have already flattened out and mostly vanished. I think I get more blisters on my right foot because of breaking my toe last year: my right foot now splays wider in my shoe, so it tends to get more blisters and cause more trouble than my left foot. I got only one blister on my left foot, which is still fluid-filled but not painful, and it's starting to visibly deflate now that I'm not rubbing it against a shoe constantly any more. And my legs don't feel like I ran at all, let alone ran 51+34+16 miles!

I am tired, though. I don't have brain fog, probably because of my excellent fueling, but I am fatigued in terms of overall energy, and I've lacked the motivation to get a lot done yesterday and today (other than writing this blog post!). That's probably pretty on par with the effort expended and matches what I expected, but it's nice to be able to move around without hurting (other than my feet).

My feet, in terms of general aches and ows, are what came out the worst from my run. On day 2, what hurt was the bottom of the balls of my feet. Each night, though, I was getting aches all over, in all of the bones of my feet. The night after day 3, the foot aches were particularly strong, and I took some Tylenol to help with that. Yesterday evening and today, though, the ache has settled down to very minor and only occasionally noticeable. The tendon running from the top of my left foot up my ankle is sore and gets cranky when I wear my sneakers (although it didn't bother me at all while running any of the days), so after tying and re-tying my shoelaces 18 times yesterday to try to find the perfect fit for my left foot, today I went on my recovery walk in flip flops and was much happier.

What I’m taking away from this 200 mile attempt that was only 100 miles:

I feel a little disappointed that I didn’t get anywhere near 200 miles, but obviously, I was not willing to hurt long enough or hard enough to get there. My husband called it a stretch goal. Rationally, I am very happy with my choices to stop at 100 and end up in the fantastic physical shape that I am in, and I recognize that I made a very rational choice and tradeoff between ending in good shape (and health) and the mainly ego-driven benefits of possibly achieving 200 miles (for me).

Would I do anything differently? I can't think of anything. If I somehow had a do-over, I can't think of anything I would change. I'd like to reduce my risk of blisters, but I'm already doing all I can there, and I'm dealing with changes in my right foot shape post-broken toe that I have no control over. And I'm not sure how to train more or better to reduce the ball-of-foot pain that I got: I already trained multiple days, back to back, with long hours of feet on pavement. It's possible that having my doctor's appointment the day before I started influenced my mental calculation of the future risk/benefit tradeoff of continuing for more miles; without it, I might have calculated differently and done another lap or two, or gone out on the 4th day (which I did not). But I don't get a do-over, and I'll never know, and I'm not too upset about that, because I was able to control what I could control and am again pretty happy with the outcomes. 100 or 150 miles felt about the same to me, psychologically, in terms of satisfaction.

What I would tell other people about attempting multiple day ultramarathons or 200 mile ultramarathons:

Training back to back days is one option, as is getting in long spurts of time on feet walking/hiking/running. I don't think "just running" has to be the only way to train for these things. I'm also a big proponent of short intervals: if you hear people recommend taking walk breaks, it doesn't have to be 1 minute every 10 minutes or every mile. It can be as short as taking a walk break after every 30 seconds of running! There's no wrong way to do it; do whatever makes your body and brain happy. I get bored running longer (and don't like it); other people get bored running the short intervals that I do – so find what works for you and what you're actually willing to do.

Having plans for how you'll rest X hours and then go out and try to make it another lap, or to the next aid station, works really well – especially if you have crew/pacers/support (for me, my husband) who will stick to those rules and help you get back out there to try the next lap/section. Speaking of sleep/rest: lying down for a while helps as much as sleeping, so even if you can't sleep, committing to resting for those X hours is also good for your feet and everything else. I found that the hour lying down before I fell asleep helped my body process the noise of the "ouch" from my feet, and it was a lot easier to sleep after that. Plan to have some down/up time before and after your sleep/rest window, and figure that into your time plans accordingly.

The cheesy "know your why" and "know what you want" recommendations do help. I didn't want 200 miles badly enough to hurt more, for longer, and risk months of recovery (or the inability to recover). Maybe you'd be lucky enough to achieve 200 without hurting that badly, for that long, or risking injury – or maybe you'll have to make that choice, and you might make it differently than I did. (Maybe you're lucky enough to not have 5 autoimmune things to juggle! I hope you don't!) I kind of knew going in that I was only going to hit 200 if everything went perfectly.

Diabetes and this 200 mile ultramarathon that was a 100 mile ultra:

I just realized that I managed to write an ENTIRE race report without talking about diabetes and glucose management…because I had zero diabetes-related thoughts or issues during these several days of my run! Sweet! (Pun fully intended.)

Remember, I have type 1 diabetes and use an open source automated insulin delivery (AID) system (in my case, still using OpenAPS after alllllll these years), and I've talked previously about how I fuel while ultrarunning and juggle blood glucose management. Unlike previous ultras, I had zero pump site malfunctions (phew) and my glucose stayed nicely in range throughout. I think I had one small drift above range for 2 hours, due to an hour of higher-carb fueling right when I shifted to walking the second lap on day 2, but otherwise I was nicely in range all days and all nights without any extra thought or energy expended. I didn't have to take a single carb correction for a low (hypoglycemia treatment)! I think there was one snack I took a few minutes early when I saw I was drifting down slightly, but that was mostly a convenience thing, and I probably would not have gone low (below target) even if I had waited for my planned fuel interval. Out of 46 snacks, only one taken 5-10 minutes early is impressive to me.

I had no issues after each day’s run, either: OpenAPS seamlessly adjusted to the increasing insulin sensitivity (using “autosensitivity” or “autosens”) so I didn’t have to do manual profile shifts or overrides or any manual interference. I did decide each night whether I wanted to let it SMB (supermicrobolus) as usual or stick to temp basal only to reduce the risk of hypoglycemia, but I had no post-dinner or overnight lows at all.

The most "work" I had to do was deciding to wear a second CGM sensor (staggered, started 5 days after my other one) so that I had a sensor session going with good quality data that I could fall back on if my other sensor started to get jumpy, because that first sensor session was due to end the night of day 4 of my planned run. I obviously didn't run day 4, but even so, it was worth the cost of overlapping my sensors to have the reassurance of constant data: if the first one died or fell out, I could seamlessly switch to an already-warmed-up sensor with good data. I didn't need it, but I was glad to have done that in prep.

(Because I didn't talk about diabetes much in this post – it was not very relevant to my experience here – you might want to check out my previous race recaps and posts about ultrarunning, like this one, where I talk in more detail about balancing fueling, insulin, and glucose management while running for zillions of hours.)

TLDR: I ran 100 miles, and I did it my DIY way: my own course, my own (slow) pace, with sleep breaks, a lot of fueling, and a lot of satisfaction from setting big goals and attempting to achieve them. I think for me, the process goals of figuring out how to even safely attempt ultramarathons are even more rewarding than the mileage milestones of ultrarunning.

Running a multi-day ultramarathon by Dana M. Lewis from DIYPS.org

Findings from the world’s first RCT on open source AID (the CREATE trial) presented at #ADA2022

September 7, 2022 UPDATE: I'm thrilled to share that the paper with the primary outcomes from the CREATE trial is now published. You can find it on the journal site here, or view an author copy here. You can also see a Twitter thread here, if you are interested in sharing the study with your networks.

Example citation:

Burnside, M; Lewis, D; Crocket, H; et al. Open-Source Automated Insulin Delivery in Type 1 Diabetes. N Engl J Med 2022;387:869-81. DOI:10.1056/NEJMoa2203913


(You can also see a previous Twitter thread here summarizing the study results, if you are interested in sharing the study with your networks.)

TLDR: The CREATE Trial was a multi-site, open-labeled, randomized, parallel-group, 24-week superiority trial evaluating the efficacy and safety of an open-source AID system using the OpenAPS algorithm in a modified version of AndroidAPS. Our study found that across children and adults, the percentage of time that the glucose level was in the target range of 3.9-10mmol/L [70-180mg/dL] was 14 percentage points higher among those who used the open-source AID system (95% confidence interval [CI], 9.2 to 18.8; P<0.001) compared to those who used sensor augmented pump therapy; a difference that corresponds to 3 hours 21 minutes more time spent in target range per day. The system did not contribute to any additional hypoglycemia. Glycemic improvements were evident within the first week and were maintained over the 24-week trial. This illustrates that all people with T1D, irrespective of their level of engagement with diabetes self-care and/or previous glycemic outcomes, stand to benefit from AID. This study concluded that open-source AID using the OpenAPS algorithm within a modified version of AndroidAPS, a widely used open-source AID solution, is efficacious and safe.

The backstory on this study

We developed the first open source AID in late 2014 and shared it with the world as OpenAPS in February 2015. It went from n=1 to (n=1)*2 and up from there. Over time, there were requests for data to help answer the question “how do you know it works (for anybody else)?”. This led to the first survey in the OpenAPS community (published here), followed by additional retrospective studies such as this one analyzing data donated by the community,  prospective studies, and even an in silico study of the algorithm. Thousands of users chose open source AID, first because there was no commercial AID, and later because open source AID such as the OpenAPS algorithm was more advanced or had interoperability features or other benefits such as quality of life improvements that they could not find in commercial AID (or because they were still restricted from being able to access or afford commercial AID options). The pile of evidence kept growing, and each study has shown safety and efficacy matching or surpassing commercial AID systems (such as in this study), yet still, there was always the “but there’s no RCT showing safety!” response.

After Martin de Bock saw me present about OpenAPS and open source AID at ADA Scientific Sessions in 2018, we literally spent an evening at the dinner table drawing the OpenAPS algorithm on a napkin, to illustrate in fine-grained detail how OpenAPS works (as much as one can with napkin drawings!), and dreamed up the idea of an RCT in New Zealand to study the open source AID system so many were using. We sought and were granted funding from New Zealand's Health Research Council, published our protocol, and commenced the study.

This is my high level summary of the study and some significant aspects of it.

Study Design:

This study was a 24-week, multi-centre randomized controlled trial in children (7–15 years) and adults (16–70 years) with type 1 diabetes comparing open-source AID (using the OpenAPS algorithm within a version of AndroidAPS implemented in a smartphone with the DANA-i™ insulin pump and Dexcom G6® CGM), to sensor augmented pump therapy. The primary outcome was change in the percent of time in target sensor glucose range (3.9-10mmol/L [70-180mg/dL]) from run-in to the last two weeks of the randomized controlled trial.

  • This is a LONG study, designed to look for rare adverse events.
  • This study used the OpenAPS algorithm within a modified version of AndroidAPS, meaning the learning objectives were adapted for the purpose of the study. Participants spent at least 72 hours in "predictive low glucose suspend mode" (known as PLGM), which corrects for hypoglycemia but not hyperglycemia, before proceeding to the next stage of closed loop, which then also corrects for hyperglycemia.
  • The full feature set of OpenAPS and AndroidAPS, including "supermicroboluses" (SMB), was available for participants to use throughout the study.

Results:

Ninety-seven participants (48 children and 49 adults) were randomized.

Among adults, mean time in range (±SD) at study end was 74.5±11.9% using AID (Δ+ 9.6±11.8% from run-in; P<0.001) with 68% achieving a time in range of >70%.

Among children, mean time in range at study end was 67.5±11.5% (Δ+ 9.9±14.9% from run-in; P<0.001) with 50% achieving a time in range of >70%.

Mean time in range at study end for the control arm was 56.5±14.2% and 52.5±17.5% for adults and children respectively, with no improvement from run-in. No severe hypoglycemic or DKA events occurred in either arm. Two participants (one adult and one child) withdrew from AID due to frustrations with hardware issues.

  • The pump used in the study initially had an issue with the battery, and there were lots of pumps that needed refurbishment at the start of the study.
  • Aside from these pump issues, and standard pump site/cannula issues throughout the study (that are not unique to AID), there were no adverse events reported related to the algorithm or automated insulin delivery.
  • Only two participants withdrew from AID, due to frustration with pump hardware.
  • No severe hypoglycemia or DKA events occurred in either study arm!
  • In fact, use of open source AID improved time in range without causing additional hypoglycemia, which has long been a concern of critics of open source (and all types of) AID.
  • Time spent in ‘level 1’ and ‘level 2’ hyperglycemia was significantly lower in the AID group as well compared to the control group.

In the primary analysis, the mean (±SD) percentage of time that the glucose level was in the target range (3.9-10mmol/L [70-180mg/dL]) increased from 61.2±12.3% during run-in to 71.2±12.1% during the final 2 weeks of the trial in the AID group, and decreased from 57.7±14.3% to 54±16% in the control group, with a mean adjusted difference (AID minus control at end of study) of 14.0 percentage points (95% confidence interval [CI], 9.2 to 18.8; P<0.001). No age interaction was detected, which suggests that adults and children benefited from AID similarly.
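
(For anyone checking the arithmetic: 14.0 percentage points of a 24-hour day is 0.140 × 24 h = 3.36 h, i.e. about 3 hours 21 minutes of additional time in range per day.)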

  • The CREATE study found that across children and adults, the percentage of time that the glucose level was in the target range of 3.9-10mmol/L [70-180mg/dL] was 14.0 percentage points higher among those who used the open-source AID system compared to those who used sensor augmented pump therapy.
  • This difference reflects 3 hours 21 minutes more time spent in target range per day!
  • For children AID users, they spent 3 hours 1 minute more time in target range daily (95% CI, 1h 22m to 4h 41m).
  • For adult AID users, they spent 3 hours 41 minutes more time in target range daily (95% CI, 2h 4m to 5h 18m).
  • Glycemic improvements were evident within the first week and were maintained over the 24-week trial. Meaning: things got better quickly and stayed so through the entire 24-week time period of the trial!
  • AID was most effective at night.
A chart showing the difference in TIR between the control and AID arms, overall and during day and night separately, for all participants, adults, and kids.

One thing I think is worth noting is that one criticism of previous studies of open source AID concerns the self-selection effect: the theory that people do better with open source AID because of self-selection and self-motivation. However, the CREATE study recruited a diverse cohort of participants, and the study findings (as described above) match the safety and efficacy outcomes reported in all previous studies. The CREATE study also found that the greatest improvements in TIR were seen in participants with the lowest TIR at baseline. This means one major finding of the CREATE study is that all people with T1D, irrespective of their level of engagement with diabetes self-care and/or previous glycemic outcomes, stand to benefit from AID.

This therefore means there should be NO gatekeeping by healthcare providers or the healthcare system to restrict AID technology from people with insulin-requiring diabetes, regardless of their outcomes or experiences with previous diabetes treatment modalities.

There was also no age effect observed in the trial, meaning that the results of the CREATE Trial demonstrate that open-source AID is safe and effective in children and adults with type 1 diabetes. If someone wants to use open source AID, they would likely benefit, regardless of age or past diabetes experiences. If they don't want to use open source AID or commercial AID…they don't have to! But the choice should 100% be theirs.

In summary:

  • The CREATE trial was the first RCT to look at open source AID, after years of interest in such a study to complement the dozens of other studies evaluating open source AID.
  • The conclusion of the CREATE trial is that open-source AID using the OpenAPS algorithm within a version of AndroidAPS, a widely used open-source AID solution, appears safe and effective.
  • The CREATE trial found that across children and adults, the percentage of time that the glucose level was in the target range of 3.9-10mmol/L [70-180mg/dL] was 14.0 percentage points higher among those who used the open-source AID system compared to those who used sensor augmented pump therapy; a difference that reflects 3 hours 21 minutes more time spent in target range per day.
  • The study recruited a diverse cohort, yet still produced glycemic outcomes consistent with the existing open-source AID literature – outcomes that also compare favorably to commercially available AID systems. Therefore, the CREATE Trial indicates that a range of people with type 1 diabetes might benefit from open-source AID solutions.

Huge thanks to each and every participant and their families for their contributions to this study! And ditto, big thanks to the amazing, multidisciplinary CREATE study team for their work on this study.


September 7, 2022 UPDATE – I'm thrilled to share that the paper with the primary outcomes from the CREATE trial is now published. You can find it on the journal site here, or, like all of the research I contribute to, access an author copy via my research page.

Example citation:

Burnside, M; Lewis, D; Crocket, H; et al. Open-Source Automated Insulin Delivery in Type 1 Diabetes. N Engl J Med 2022;387:869-81. DOI:10.1056/NEJMoa2203913

Note that the continuation phase study results are slated to be presented this fall at another conference!

Findings from the RCT on open source AID, the CREATE Trial, presented at #ADA2022

Missing metrics in diabetes measurement by @DanaMLewis

“May I ask what your A1c is?”

This is a polite, and seemingly innocuous question. However, it’s one of my least favorite questions taken at face value. Why?

Well, this question is often a proxy for some of the following questions:

  • How well are *you* doing with DIY closed loop technology?
  • How well could *I* possibly do with DIY closed loop technology?
  • What’s possible to achieve in real-world life with type 1 diabetes?

But if I answered this question directly with “X.x%”, it leaves out so much crucial information. Such as:

  • What my BG targets are
    • Because with DIY closed loop tech like OpenAPS, you can choose and set your own target.
    • (That's also one of the reasons why the 2018 OpenAPS Outcomes Study is fascinating to me: people usually set high, conservative targets to start and then gradually lower them as they get comfortable. However, we didn't have a way to retrospectively sleuth out targets, so those results hold even with the amalgamation of people's targets being whatever they wanted at any given time.)
  • What type of lifestyle I live
    • I don’t consider myself to eat a particularly “high” or “low” carb diet. (And don’t start at me about why you choose to eat X amount of carbs – you do you! and YDMV) Someone who *is* eating a much higher or lower carb diet than mine, though, may have a different experience than me.
    • Someone who is not doing exercise or activity may also have a different experience than me with variability in BGs. Sometimes I’m super active, climbing mountains (and falling off of them..more detail about that here) and running marathons and swimming or scuba diving, and sometimes I’m not. I mention that activity not so much as a point about “being healthy”, but because exercise and activity can actually make it a lot harder to manage BGs, both due to activity’s effects on insulin sensitivity and because of going on/off of insulin for periods of time (because my pump is not waterproof).
  • What settings I have enabled in OpenAPS
    • I use most of the advanced settings, such as “superMicroBoluses” (aka SMB – read more about how it works here) with a higher-than-default “maxSMBBasalMinutes”, and I also use all of the advanced exercise settings so that temp targets nudge sensitivity, in addition to autosensitivity picking up any changes after exercise and other sensitivity-changing activities or events. I also get Pushover alerts that tell me if I need any carbs (and how many) when I’m dropping fast and predicted to go below my target even with a zero temp basal all the way down. (A rough sketch of what these kinds of preferences look like follows after this list.)
  • What my behavioral choices are
    • Timing of insulin matters. As I learned almost 5 years ago (wow), the impact of insulin timing relative to food *really* matters. Some people are still able to manage well with “pre-bolusing”. I don’t (as explained there in the previous link). But “eating soon” mode does help a lot for managing post-meal spikes (see here a quick and easy visual for how to do “eating soon”). However, I don’t do “eating soon” regularly like I used to. In part, that’s because I’m now on a slightly-faster insulin that peaks in 45 minutes. I still get better outcomes when I do an eating-soon, sure, but behaviorally it’s less necessary.
    • The other reason is because I’ve also switched to not bolusing for meals.
      • (The exceptions being if I’m not looping for some reason, such as I’m in the middle of switching CGM sensors and don’t have CGM data to loop off of.)
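To make the settings bullet above a bit more concrete, here is a rough sketch of the kinds of oref0 preferences involved, written as a Python dict that mirrors a few real preferences.json keys. The values are illustrative placeholders, not my actual settings:

```python
import json

# Illustrative only: a handful of oref0 preference keys related to SMB and the
# advanced exercise settings described above, with placeholder values.
example_preferences = {
    "enableSMB_with_COB": True,          # allow superMicroBoluses while carbs are on board
    "enableSMB_with_temptarget": True,   # allow SMB when an eating-soon/low temp target is set
    "maxSMBBasalMinutes": 90,            # cap on SMB size, higher than the default (placeholder)
    "exercise_mode": True,               # let high temp targets nudge sensitivity for activity
    "high_temptarget_raises_sensitivity": True,
    "low_temptarget_lowers_sensitivity": True,
    "autosens_max": 1.2,                 # bounds on autosensitivity adjustments
    "autosens_min": 0.7,
}

# preferences.json itself is plain JSON that looks much like this:
print(json.dumps(example_preferences, indent=2))
```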

These settings and choices are all crucial context for understanding the X.x% of an A1c.

Diabetes isn’t just the average blood glucose value. It’s not just the standard deviation or coefficient of variation or % time in range or how much BG fluctuates.

Diabetes impacts so much of our daily life and requires so much cognitive burden for us, and our loved ones. That’s part of the reason I so appreciate Sulka & his family being candid about how their A1C didn’t change, but the amount of work required to achieve it did (way fewer manual corrections). And ditto for Jason & the Wittmer family for sharing about the change in the number of school nurse visits before/after using OpenAPS. (See both of their stories in this post)

For me, my quality of life metric has always been first about sleep: can I sleep safely and with peace of mind at night? Yes. Then – how long can I safely sleep? (The answer: a lot. Yay!)  But over time, my metrics have also evolved to consider how I can cut down (like Sulka) on the amount of work it takes to achieve my ideal outcomes, and find a happy balance there.

As I mentioned in this podcast recently, other than changing my pump site (here’s how I change mine) and soaking and swapping my CGM sensors (psst – soak your sensor!), I usually only take a few diabetes-related actions a day. They’re usually on my watch: pressing a button to either enable a temp target or enter carbs when I sit down to eat.

That’s a huge reduction in physical work, as well as amount of time spent thinking/planning/doing diabetes-related things. And when life happens – because I get the flu or the norovirus or I fall off a mountain and break my ankle – I don’t worry about diabetes any more.

So when I’m asked about A1c, my answer is not a simple “X.x%”. (And not just because I’m annoyed by how much judging and shaming goes on around A1c, although that influences it, too.) I usually remind people that I first started with an “open loop” for a year, and that dropped my A1c by X%. And then I closed the loop, which reduced my A1c further. And we made OpenAPS even better over the last four years, which reduced it further. And then I completely stopped bolusing! And got fewer lows…and kept the same A1c.

And then I ask them what they’d really like to know. :) If it’s a fellow person with diabetes or a loved one, we talk about what problems they might be having or what areas they’d like to improve or what behaviors they’d like to change, if any. That’s usually way more effective than hearing “X.x%” of an A1c, and them wondering silently how to get there or what to do differently if someone wants to change things. (Or for clinicians who ask me, it turns into a discussion about choices and behaviors and tradeoffs that patients may choose to make.)

Remember, your diabetes may (and will) vary (aka, YDMV). Your lifestyle, the phase of life you’re in, your priorities, your body and health, and your choices will ALL be different than mine. That’s not bad in any way: that’s just the way it is. The behaviors I choose and the work I’m willing to do (or not do) to achieve *my* goals (and what my goals are), will be different than what you choose for yours.

And that’s therefore why A1c is not “enough” to me as a metric and something that we should compare people on, even though A1c is the “same” for everyone: because the work, time spent, behavioral tradeoffs, and goals related to it will all vary.


Broken bones (trimalleolar ankle fracture), type 1 diabetes, and #OpenAPS

In January, Scott and I planned and went on a three day hiking trip in New Zealand. NZ is famous for “tramping” and “trekking”, and since we were in the country for a conference (you can see my talk at LinuxConfAU here!), we decided to give it a try. This was my first true “backpacking” type trip where you carry all your stuff on your back, and my first multi-day hiking experience. You could either rent a cot in a hut and carry all your food and cooking utensils and bedding on your back; or you could pay to hike with a company that has a lodge you can stay at (with hot showers and amazing food) and also has guides who hike with your pack. They had me at “gluten free food” and “hot showers”, so I convinced Scott that was the way we should do our Routeburn Track hike!

I planned ahead well for the hike; they gave us a packing list of recommended things to carry and bring. I also learned from a friend in NZ, Martin, who had gone trekking a few weeks prior: his pack went over a cliff and was lost – yikes! So I packed duplicate sets of supplies in baggies and put one set in Scott’s pack and one in mine, so that if something happened to either of our packs, we’d still be completely covered.

Day 1 of the hike was awesome – it was overcast and felt like hiking in Seattle, but the scenery and wildlife were still great to experience. Since it was raining off and on, the waterfalls were spectacular.

Day 2 also started awesome – it was a breathtakingly clear morning with blue skies and sunshine as we hiked up above the tree line and over a mountain ridge, along the valley, and onward toward the lunch spot. I was feeling great and enjoying my hike – this was one of my all-time favorite places to hike in terms of the view of the valley and lake that we hiked from; and the mountain views on the other side of the ridge once we topped the mountain and crossed over.

However, about 30 min from the lunch shelter (and about 300 feet of elevation to go), I noticed the lady hiking in front of us decided to sit and slide down a particularly large and angled rock on the trail. I approached the rock planning to stop and assess my plan before continuing on. Before I even decided what to do, I somehow slipped and vaulted (for lack of a better word) left and off the trail…and down the slope. I flipped over multiple times and knew I had to grab something to stop my flight and be able to save myself from going all the way off the mountain slope. I amazingly only ended up about 10 feet off the trail, clinging to a giant bush/fern-like plant.

I had to be pulled back up to the trail by Scott and another hiker who came running after hearing my yell for help as I went down the mountain. (Scott came down off the trail a few feet, and had to hold onto the hand of another hiker with one hand while pulling me up with the other, just like in the movies. It’s not a lot of fun to be at the end of the human chain, though!) At that point, I knew I had injured my right ankle and could only use my left foot/leg and right knee to try to climb back up to the trail while they pulled on my backpack. We got me back onto the trail and over to a rock to rest. After a few minutes, the back-of-the-pack guides showed up and taped around my ankle and boot to see if I could walk on it – they thought it was sprained. I could flex, but couldn’t really put weight on it without excruciatingly sharp pain on the right side. I’d never sprained my foot before or broken any bones in my life, so I was frustrated by how painful the ‘sprain’ was. I had an overwhelming wave of nausea that I knew was in response to the pain, too, so at one point I had to sit there and lean back with my eyes closed while everyone else talked around me.

The guides wanted to see if we could get to a nearby river where I could ice my leg. I used my poles as pseudo-crutches in front of me, with my arms bent at 90 degree angles, and with Scott behind me to check my balance, I would crutch and hop on one leg. It wasn’t like regular crutching, though, where you can press your weight down on your arms and hands. It was really an act of placing the poles slightly forward for balance and then hopping up and forward, pressing off my left leg. My left leg was quickly exhausted and cramping from the effort of hopping forward with my entire body weight. It was also complicated by the rain making things more slippery; and of course, this is a mountain trail with rocks and boulders of different sizes. Things I didn’t even notice walking normally on two feet became incredibly frustrating puzzles of when and how to jump up onto a small rock, or around to the side, etc.

“Lucky” for me (eye roll), we happened to be in an ascending section of the trail with quite a few large rocky sections, and there was no way I could hop up the uneven rocks on one foot. So instead, I chose to crawl up and over those sections on my hands and knees. Then I would get up at the top and hop again through the “flatter” gravel and rock sections, then crawl again. It was slow and exhausting, and painful every time I got up on one leg and started hopping again. I was in the most physical pain of my life.

After about a very slow and painful quarter of a mile, and as rain was dripping down more steadily, the guides decided I wouldn’t make it the remaining 300ft of elevation/30 minute (normal) hike to the lunch spot. They radioed for a medevac helicopter to come pick me up. I was incredibly upset and disappointed that I had ruined our hike… but also knew I absolutely wouldn’t even make it to the lunch shelter. I remember saying “I feel so stupid!” to Scott.

The helicopter came in a surprisingly quick amount of time, dropped off one of the EMTs nearby, and then flew over to a hill across from the trail. The EMT saw that I was decently clothed and covered (I had 3/4 length running pants on; a rain jacket and hood; and had a second rain jacket to cover my legs against the rain and wind) and did a verbal status check to confirm I was in good enough shape for them to lift me off the mountain. They weren’t able to land safely anywhere nearby on the trail because it was so steep and narrow, so they put me in a “sack” that went around my back and looped over my arms and between my legs, and was hooked on to the EMT’s harness. Scott and the guide stood back while the helicopter came back and lowered the winch, and I was winched up from there. The EMT had told me that once we got up to the helicopter, the team inside would pull me straight in. That didn’t happen, which was slightly more terrifying because we started flying away from the mountain while still *outside* the helicopter. It turns out the helicopter had unloaded a stretcher and supplies on the nearby hill, so we were lowered down – with me and the EMT still perched outside on the skids – to the hillside there, so the team could gather the supplies & then load me in so I could sit on the stretcher.

The other terrifying factor about being evacuated off the mountain was that due to the weather that was blowing in hours ahead of schedule, and the “we have to winch you off the mountain” aspect: they couldn’t take Scott with us. So I had to start making plans & preparing myself for going to the hospital by myself in a foreign country. I was terrified about my BGs & diabetes & how I know hospitals don’t always know what to do with people with T1D, let alone someone on a (DIY) closed loop. I tried to tamp down on my worries & make some plans while we waited for the helicopter, so Scott would know I was okay-ish and worry slightly less about me. But at that point, we knew he would have to finish the day’s hike (another 3-4 hours); spend the night; and hike down the next day as planned in order to meet up with me at the hospital.

As we lifted off in the helicopter, I handed the EMT my phone, where I had made a note with my name, age, medical information (T1D & celiac), and the situation about my ankle. He loved it, because he could just write down my information on the accident forms without yelling over the headset. Once he gave me my phone back, a few minutes later we passed back into an area with signal, and I was able to send text messages for the first time in 2 days.

I sent one to my mom, as carefully worded as I could possibly do:

“Slipped off the trail. Hurt ankle. BGs ok. In a helicopter to the hospital in Queenstown. Just got signal in helicopter. Don’t freak out. Will text or call later. Love you”

It had all the key information – something happened; here’s where I’m heading; BGs are fine; pleeeeeeeease don’t freak out.

I also sent a text to Scott’s dad, Howard, who’s an ER doc, with a tad different description:

“Slipped and flipped off the trail. Possible ankle fracture or serious sprain. Being medevac’d off in a helicopter. BGs are fine. But please stand by for any calls in case I need medical advice. Just got signal in the chopper. Scott is still on the trail until tomorrow so I am solo.”

I was quite nervous when we arrived at the hospital. I hadn’t been in an ER since high school (when I was dehydrated from a virus), and I’ve heard horror stories about T1D & hospitals. However, most of my fears related to T1D were completely unfounded. When I arrived, the EMT did some more paperwork, I talked briefly to a nurse, and then was left alone for quite a while (maybe an hour). Other than mentioning T1D (and that my BGs were fine) and celiac to the nurse, no one ever asked about my BGs throughout the rest of the time in the ER. Which was fine with me. What my BGs had actually done was rise steadily from about 120 up to 160, then stay there flat. That’s a bit high, but given I was trying to manage pain and sort out my situation, I was comfortable being slightly elevated in case I crashed/dropped later when the adrenaline wore off. I just let OpenAPS keep plugging away.

The first thing that was done in the ER, about an hour after I arrived, was wheeling me to go get an x-ray. It was quick and not too painful. I remember vividly that the radiologist came back out and said “yes, your ankle is definitely broken. In two places.” I stared at her and thought an expletive or two. But for some reason, that made me feel a lot better: my pain and the experience I had on the mountain were not disproportionate to the injury. I relaxed a lot then, and could feel a lot of the stress ebbing away. My BGs started a slow sloping drop almost immediately, and ended up going from 160 down to 90, where I leveled out nicely and stayed for the next few hours.

After I was wheeled back to my area of the ER, the ER doc showed up. He started asking, “So I heard you hopped and climbed off the mountain?” and then followed up by saying yes, my ankle was broken…in three places.

Me: “WHAT? Did you say ::three::?”

The ER doc said he had already consulted ortho, who confirmed I would need surgery. However, it didn’t have to be that night (hallelujah), and they usually waited ’til swelling went down to operate, so I had a choice of doing it in NZ or going home and doing it there. He asked when I was planning to leave: this was Sunday evening now, and we planned to fly out Wednesday morning. I asked if there were any downsides to waiting to do surgery at home; any risk to my long-term health? He said no, because they usually wait ~10 days for the swelling to go down to operate. So I could wait in NZ (me: uhhh, no) or fly home and see someone locally. I was absolutely thrilled I wouldn’t need to have surgery then and there, and without Scott. I asked for more details so I could get my FIL’s opinion (he concurred, coming home was reasonable), and then confirmed that I liked the plan: cast me, send me on my way, and let me get surgery at home.

It took them another 2 hours to get me to the procedure room and start my cast. This was a small, 6-bed ER. When they finally started my cast, the ER doc had his hands on my ankle holding it up…and another nurse rushed in warning that a critical patient was en route, 5 minutes out. The ER doc and the other nurse looked at each other, said “we can do this by then”, and literally casted me in 2 minutes and were wheeling me out in the third minute! It was a tad amusing. I was taken back to x-ray where they confirmed that the cast had my ankle in a good position. After that, I just needed my cast to be split so I could accommodate swelling for the long plane rides home; get my prescriptions for pain meds; get crutches; and go home.

All that sounds fast, but due to the critical patient that had come in, it took another two hours. They finally came and split my cast (which is done by using the cast cutter to cut a line, then another line, then pulling out the strip in between), sold me my crutches, and wrote my prescriptions. The ER doc handed me my script, and I asked if the first rx had acetaminophen (because it would mess up my G4). He said it did, so he scribbled that out and prescribed ibuprofen instead. The nurse then came over & apologized for “having to sell me” crutches. New Zealand has a public health policy where they cover everything in an accident, even for foreigners: I didn’t have to pay for the medevac (!!), the ER visit (!!), the x-rays (!!), the cast…nothing. Just the crutches (which they normally lend for free to NZ residents, but obviously I was taking these home). Then I was on my way.

Thankfully, the company we hiked with had of course radioed into Queenstown, and the operations manager had called the ER and left a message for me with his phone number. A few hours prior, when I found out I’d be casted & released that night, I had been texting my mom & had her call the hotel Scott & I were staying at the next (Monday) night to see if they had a room that (Sunday) night that I could check into. The hiking company guy offered to drive me wherever, so he came to pick me up. I had texted him to keep him posted on my progress/timeline of release (nice and vague and unhelpful for the most part). But I also asked, as soon as we got in contact, if he could radio a message to the lodge & tell Scott that: a) my ankle was broken; b) I was ok; c) I’d be at the hotel when he got in the next day and not to rush, I was ok. The guy said he could do me one better: when he came to pick me up, he’d bring the phone so I could ::call:: and talk to Scott directly. (I almost cried with relief, there, at the idea of getting to talk to Scott so he wouldn’t be beside himself worrying for 22 hours.) I did get to talk to Scott for about a minute and tell him everything directly, and convinced him not to hike out himself in the morning, but to stick with the group and the normal transport method back to Queenstown, and just come meet me at the hotel when he got back around 4pm the next day. He agreed.

(What I didn’t find out until later is that Scott had considered completing the rest of the hike that night. Two things ended up dissuading him: one was the fact that a guide would have had to go with him and then hike all the way back to the lodge that night. The other was the fact that he had talked to me and knew I would be out of the hospital by the time he arrived; so since I said I was fine alone at the hotel, he’d wait until the next day.)

So, I was taken to the hotel and got help getting up to the hotel room, and had ice delivered along with extra pillows to prop up my leg, and our bags brought in. Thankfully, on the mountain, the EMT had agreed to winch my backpack up with me. This was huge because, as I noted earlier, I had a full set of supplies in my backpack, and all we had to do on the mountain was grab an extra international adapter and my charger cords out of Scott’s bag and toss them into mine. That made it easy to pull what I needed that night (my rig; charger cords & adapter; a snack) out of the top of my bag from my perch on the bed. I plugged in my rig, made sure I was looping, took my pain meds, and went to sleep.

Amazingly, although you’re probably not any more surprised than I am at this point, my BGs stayed perfectly in range all night. Seriously: after that lowering from 160 once I relaxed and let some of the stress go? No lows. No highs. Perfectly in range. The pain/inflammation and my lack of eating didn’t throw me out of range at all. The day of the fall, all I ate was breakfast (8am); I didn’t eat lunch and didn’t bother with anything else until 11pm, when I had a beef jerky stick for some protein and half a granola bar (10g carbs). In the two and a half weeks since, I’ve had no lows, and very few highs.

The one other high BG I really had was on Sunday after we got home (we got back on Wednesday). It happened after my crutch hit the door coming back to my bedroom from the bathroom, and I did such a good job hopping on my left foot and protecting my casted right foot that I managed to break the smallest toe on my left foot. I pretty immediately knew that it was broken based on the pain; then my BG slowly rose from 110 up to 160; and then I started to have the same “shadow” bruising spread around my foot from the base of the toe. Scott wasn’t sure it was broken; when I had fallen off the trail I had yelled “help!” and “I think I broke my foot!”, but I didn’t say it out loud this time; I just thought it. Again, after some ibuprofen and icing and resting, within an hour my BG started coming back down slowly to below 100 mg/dL.

On Tuesday, I went to the orthopedic surgeon and confirmed: my left toe is definitely broken. My right ankle is definitely broken: the trimalleolar fracture diagnosis from NZ was confirmed. However, given that none of the ligaments were damaged, and the ankle was in a decent position, the ortho said there’s a good chance I can avoid surgery and heal in place inside a cast. The plan was to take off the split, plaster-based cast they did in NZ and give me a proper cast. We’d follow up in 10 days and confirm via x-ray that everything was going well. I asked how likely surgery would still be with this plan, and he said 20%. Well, given that I was assuming 100% before, that was huge progress! He also told me I shouldn’t travel within 4 weeks of the injury, which unfortunately means I had to cancel my trip to Berlin for ATTD later in February. It may or may not mean I have to cancel another trip; I’ll have to wait and see after the next follow up appointment, depending on whether or not I need surgery.

Up until this point, I had been fairly quiet (for me) on social media. I hadn’t posted the pictures of our hike; I didn’t talk about my fall or the trip home. One friend had texted and said “I wondered if you fell off the face of the earth!” to which I responded “uhhh…well…about that…I ::only:: fell off a mountain! Not earth!” Ha. Part of the reason was not knowing whether or not I would be able to travel as planned, and wanting to be courteous and inform the conferences that had invited me to speak about the situation & whether I’d be able to attend. Once I had done that, I was able to start posting & sharing with everyone what had happened.

To be perfectly honest, it’s one thing to have a broken limb and a cast and have to use crutches. It’s an entirely other ball of wax to have a broken toe on the foot that’s supposed to be your source of strength & balance. The ortho gave me a post-op surgical shoe to wear on my left foot to try to help, but it hurt so badly that I couldn’t use my knee scooter to move around easily without my left foot burning from the pain. Thankfully, Scott’s parents’ neighbor had a motorized sit-scooter that we borrowed. However, due to the snowpocalypse that hit Seattle, I’ve not been able to leave the house since Thursday. We haven’t been able to drive anywhere, or walk/scooter anywhere, in days. I’m not quite stir-crazy yet, but I’ll be really looking forward to the sidewalks being snow-free and hopefully lake-free (from all the melting snow) later in the week so I can get out again. I also picked up a cold somewhere, so I have mostly been stationary in bed for the last week, propping up my feet and using endless boxes of Kleenex.

OpenAPS, as you can see, has done an excellent job responding to the changes in my insulin needs from being 100% sedentary. (Really – think trips to the bathroom and that’s it.) In addition to the increased resistance from my cold and being sedentary, there’s one other new factor I’ve been dealing with. I asked my ortho about nutrition, and he wants me to get 1g of protein per kg of body weight, plus 1000mg/day of calcium. He suggested getting the extra protein via a powder, instead of via extra calories (e.g. eating extra food). I found a zero-carb, gluten free powder that’s 25g of protein per scoop, and have been trying it with chocolate milk (which is 13g of carb and 10g of protein).

I’ve been drinking that 2x a day. Interestingly, prior to my injury, unless I was eating a 100% no-carb meal (such as eggs and bacon for breakfast), I didn’t need to bolus for or otherwise account for protein. However, even though I was entering carbs for the chocolate milk (15), I was seeing a spike up to 150 mg/dL after drinking it. I tried entering 30g the next time (13g for the milk, plus about 50% of the 25+10g worth of protein), and that worked better and only resulted in a 10 mg/dL rise in response to it. But even a handful of nuts’ worth of protein, especially on days where I’m hardly eating anything, has a much stronger effect on my BGs than it used to. This could be because my body is adjusting to me eating a lot less (I don’t have much appetite); adjusting to the much-higher-protein diet overall; and/or responding to the 100% sedentary pattern of my body now.
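Here is the rough rule of thumb from the paragraph above as a tiny sketch: count the actual carbs plus about half of the protein grams as the carb entry. The 50% factor is just what happened to work for me in this situation, not a general recommendation:

```python
def carb_entry_estimate(carbs_g, protein_g, protein_factor=0.5):
    """Rough carb-equivalent entry: actual carbs plus a fraction of the protein grams.
    The 0.5 factor is what worked for me here; your diabetes may vary (YDMV)."""
    return carbs_g + protein_factor * protein_g

# Chocolate milk (13g carb, 10g protein) plus one scoop of protein powder (25g protein):
print(carb_entry_estimate(13, 25 + 10))  # 30.5 -> I rounded to a 30g carb entry
```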

Thankfully, it’s not been a big deal, and OpenAPS does such a good job tamping down on the other noise-based factors: it’s nice that my biggest problems are brief rises to 160 or 170 mg/dL (that then come back down on their own). My 7-day and 30-day BG averages have stayed the same; and my % time in range for 80-160 has stayed the same, even with what feels like a few extra protein-related blips, and even when some days I eat hardly anything and some days I manage 2-3 meals.

So to summarize a ridiculously long post:

  • When I break bones, my BGs rise (due to inflammation and/or the stress/other hormonal reaction) up to around 160 mg/dL until I relax, when they’ll come back down. Otherwise, broken bones don’t really faze OpenAPS.
  • Ditto for lack of movement and changes in activity patterns not fazing OpenAPS.
  • The biggest “challenge” has been adjusting to the 3x amount of protein I’m getting as a dietary change.
  • I have a trimalleolar fracture, which accounts for about 7% of ankle fractures. I read a lot of blog posts about people needing surgery & the recovery from it taking a long time. I’m not sure I won’t need surgery, but I’m hoping I won’t. If I manage to avoid it, here’s one data point for a trimalleolar fracture being treated non-surgically – I’ll update more later with full recovery timelines & details. Also, here is a Twitter thread where I’m tracking some of the most helpful things for life with crutches.
  • Don’t break your littlest toe – it can hurt more than larger fractures if you have to walk on it!

A huge thank you goes to my parents and Scott’s parents; our siblings on both sides for being incredibly supportive and helpful as well; and Scott himself who has been waiting on me (literally hand and foot) and taking most excellent care of me.

And thank you as well to anyone who read this & for everyone who’s been sending positive thoughts and love and support. Thank you!

4 years DIY closed looping with #OpenAPS – what changed and what hasn’t

It’s hard to express the magnitude of how much closed looping can improve a person with diabetes’ life, especially to someone who doesn’t have diabetes or live closely with someone who does. There are so many benefits – and so many go way beyond the typically studied “A1c improvement” and “increased time in range”. Sure, those happen (and in case you haven’t seen it, see some of the outcomes from various international studies looking at DIY closed loop outcomes). But everything else…it’s hard to explain all of the magic that happens in real life, that’s made so much richer by having technology that for the most part keeps diabetes out of the way, and more importantly: off the top of your mind.

Personally, my first and most obvious benefit, and the whole reason I started DIYing in the first place, was to have the peace of mind to sleep safely at night. Objective achieved, immediately. Then over time, I got the improvements in A1c and time in range, plus reduction in time spent doing diabetes ‘stuff’ and time spent thinking about my own diabetes. The artificial pancreas ‘rigs’ got smaller. We improved the algorithm, to the point where it can handle the chaos that is everything from menstrual cycle to having the flu or norovirus.

More recently, in the past ~17 months, I’ve achieved an ultimate level of not doing much diabetes work that I never thought was possible: with the help of faster insulin and things like SMBs (improved algorithm enhancements in OpenAPS), I’ve been able to do a simple meal announcement by pressing a button on my watch or phone…and not have to bolus. Not worrying about precise carb counts. Not worrying about specific timing of insulin activity. Not worrying about post-meal lows. Not worrying about lots of exercise. And the results are pretty incredible to me:

We should be measuring and reducing user burden with AID in addition to improving TIR and A1c

But I remember early on when we had announced that we had figured out how to close the loop. We got a lot of push back saying, well, that’s good for you – but will it work for anyone else? And I remember thinking about how if it helped one other person sleep safely at night..it would be worth the amount of work it would take to open source it. Even if we didn’t know how well it would work for other people, we had a feeling it might work for some people. And that for even a few people who it might work for, it was worth doing. Would DIY end up working for everyone, or being something that everyone would want to do? Maybe not, and definitely not. We wouldn’t necessarily change the world for everyone by open sourcing an APS, but that could help change the world for someone else, and we thought that was (and still is) worth doing. After all, the ripple effect may help ultimately change the world for everyone else in ways we couldn’t predict or expect.

This has become true in more ways than one.

That ‘one other person’ turned into a few..then dozen..hundreds..and now probably thousand(s) around the world using various DIY closed loop systems.

And in addition to more people being able to choose to access different DIY systems with more pumps of choice, CGMs of choice, and algorithms of choice, we’ve also seen the ripple effect in the way the world works, too. There is now, thankfully, at least one company that is evaluating open source code, running simulations with it, and, where it out-performs their original algorithm or code components, utilizing that knowledge to improve their system. They’re also giving back to the open source diabetes community, too. Hopefully more companies will take this approach & bring better products more quickly to the market. When they are ready to submit said products, we know at least U.S. regulators at the FDA are ready to quickly review and work with companies to get better tools on the market. That’s a huge change from years ago, when there was a lot of finger pointing and what felt like a lot of delay preventing newer technology from reaching the market. The other change I’m seeing is in diabetes research, where researchers are increasingly working directly with patients from the start and designing better studies around the things that actually matter to people with diabetes, including analyzing the impact and outcomes of open source technology.

After five years of open source diabetes work, and specifically four years of DIY closed looping, it finally feels like the ripples are ultimately helping achieve the vision we had at the start of OpenAPS, articulated in the conclusion of the OpenAPS Reference Design:

Is there still more work to do? Absolutely.

Even as more commercial APS roll out, it takes too long for these to reach many countries. And in most parts of the world, it’s still insanely hard and/or expensive to get insulin (which is one of the reasons Scott and I support Life For A Child to help get insulin, supplies, and education to as many children as possible in countries where otherwise they wouldn’t be able to access it – more on that here). And even when APS are “approved” commercially, that doesn’t mean they’ll be affordable or accessible, even with health insurance. So I expect our work to continue: not only supporting ongoing improvements to DIY systems directly, but also encouraging and running studies to generalize knowledge from DIY systems; hopefully seeing DIY systems approved to work with existing interoperable devices; helping any company that will listen to improve their systems, both in terms of algorithms and in terms of usability; and helping regulators see both what’s possible as well as what’s needed to successfully use these types of systems in the real world. I don’t see this work ending for years to come – not until the day when every person with diabetes in every country has access to basic diabetes supplies, and the ability to choose to use – or not – the best technology that we know is possible.

But even so, after four years of DIY closed looping, I’m incredibly thankful for the quality of life that has been made possible by OpenAPS and the community around it. And I’m thankful for the community for sharing their stories of what they’ve accomplished or done while using DIY closed loop systems. It’s incredible to see people sharing stories of how they are achieving their best outcomes after 45 years of diabetes; or people posting from Antarctica; or after running marathons; or after a successful and healthy pregnancy where they used their DIY closed loop throughout; or after they’ve seen the swelling in their eyes go down; etc.

The stories of the real-life impacts of this type of technology are some of the best ripple effects that I never want to forget.

Presentations and poster content from @DanaMLewis at #2018ADA

As I mentioned, I am honored to have two presentations and a co-authored poster being presented at #2018ADA. As per my usual, I plan to post all content and make it fully available online as the embargo lifts. There will be three sets of content:

  • Poster 79-LB in Category 12-A, ‘Detecting Insulin Sensitivity Changes for Individuals with Type 1 Diabetes using “Autosensitivity” from OpenAPS’, co-authored by Dana Lewis, Tim Street, Scott Leibrand, and Sayali Phatak.
  • Content from my presentation Saturday, ‘The Data behind DIY Diabetes—Opportunities for Collaboration and Ongoing Research’, which is part of the “The Diabetes Do-It-Yourself (DIY) Revolution” Symposium!
  • Content from my presentation Monday, ‘Improvements in A1c and Time-in-Range in DIY Closed-Loop (OpenAPS) Users’, co-authored by Dana Lewis, Scott Swain, and Tom Donner.

First up: the autosensitivity poster!

You can find the full write up and content of the autosensitivity poster in a post over on OpenAPS.org. There’s also a twitter thread if you’d like to share this poster with others on Twitter or elsewhere.

Summary: we ran autosensitivity retrospectively on the command line to assess patterns of sensitivity changes for 16 individuals who had donated data in the OpenAPS Data Commons. Many had normal distributions of sensitivity, but we found a few people who trended sensitive or resistant, indicating underlying pump settings could likely benefit from a change.
2018 ADA poster on Autosensitivity from OpenAPS by DanaMLewis
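For anyone curious what “trending sensitive or resistant” means in practice, here is a simplified, conceptual sketch of how a detected sensitivity ratio gets applied in oref0-style logic (a ratio above 1 means resistant, below 1 means sensitive). It is an illustration of the idea, not the actual autosens code:

```python
def apply_autosens(ratio, isf, basal, autosens_min=0.7, autosens_max=1.2):
    """Conceptual sketch: clamp the detected sensitivity ratio, then adjust ISF and
    basal in opposite directions (resistant -> lower ISF number, higher basal)."""
    ratio = max(autosens_min, min(autosens_max, ratio))
    adjusted_isf = isf / ratio      # ratio > 1 (resistant): each unit lowers BG less
    adjusted_basal = basal * ratio  # ratio > 1 (resistant): more background insulin
    return adjusted_isf, adjusted_basal

# Example: someone trending resistant (ratio 1.15), ISF 50 mg/dL/U, basal 1.0 U/hr:
print(apply_autosens(1.15, isf=50, basal=1.0))  # -> (~43.5 mg/dL/U, 1.15 U/hr)
```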

 

Presentation:
The Data behind DIY Diabetes—Opportunities for Collaboration and Ongoing Research

This presentation was a big deal to me, as it was flanked by 3 other excellent presentations on the topic of DIY and diabetes. Jason Wittmer gave a great overview and context setting of DIY diabetes, ranging from DIY remote monitoring and CGM tools all the way to DIY closed loops like OpenAPS. Jason is a dad who created OpenAPS rigs for his son with T1D. Lorenzo Sandini spoke about the clinician’s perspective for when patients come into the office with DIY tools. He knows it from both sides – he’s using OpenAPS rigs, and also has patients who use OpenAPS. And after my presentation, Joyce Lee also spoke about the overarching landscape of diabetes and the role DIY plays in this emerging technology space.

Why did I present as part of this group today? One of the roles I’ve taken on in the last few years in the OpenAPS community (among others) is a collaborator and facilitator of research with and about the community. I put together the first outcomes study (see here in JDST or here in a blog post form on OpenAPS.org) in 2016. We presented a poster on Autotune last year at ADA (see here in a blog post form on OpenAPS.org). I’ve also worked to create and manage the OpenAPS Data Commons, as well as build tools for researchers to use this data, so individuals can easily and anonymously donate their DIY closed loop data for other research projects, lowering the friction and barriers for both patients and researchers. And, I’ve co-led or led several research projects with the community’s data as a result.

My presentation was therefore about setting the stage with background on OpenAPS & how we ended up creating the OpenAPS Data Commons; presenting a selection of research projects that have utilized data from the community; highlighting other research projects working with the OpenAPS community; announcing a new international collaboration (OPEN – more coming on that in the future!) for research with the DIY community; and hopefully encouraging other diabetes researchers to think about sharing their work, data, methods, tools, and insights as openly as possible to help us all move forward with improving the lives of people with diabetes.

That is, of course, quite an abbreviated summary! I’ve shared a thread on Twitter that goes into detail on each of the key points as part of the presentation, or there’s a version of this Twitter/presentation content also written below.

If you’re someone who wants to do research with retrospective data from the OpenAPS Data Commons, you can find out more about it here (including instructions on how to request data). And if you’re interested in prospective research, please do reach out as well!

Full content for those who don’t want to read Twitter:

Patients are often seen as passive recipients of care, but many of us PWDs have discovered that problems are opportunities to change things. My journey to DIY began after I was frustrated by my inability to hear CGM alarms at night. 4 years ago, there was no way for me to access my own device data in real time OR retrospectively. Thanks to John Costik for sharing his code, I was able to get my CGM data & send it to the cloud and down to my phone, creating a louder alarm. Scott and I created an algorithm to push notifications to me to take action. This was an ‘open loop’ system we called #DIYPS. With Ben West’s help, we realized we could combine our algorithm with small, off-the-shelf hardware & a radio stick to automate insulin delivery. #OpenAPS was thus created, open sourcing all components of a DIY closed loop system so others could close the loop, too. An #OpenAPS rig consists of a small computer, radio chip, & battery. The hardware is constantly evolving. Many of us also use Nightscout to visualize our closed loop data, and share with loved ones.


I closed the loop in December of 2015. As people learned about it, I got pushback: “It works for you, but how do you know it’s going to work for others?” I didn’t, and I said so. But that didn’t mean I shouldn’t share what was working for me.

Once we had dozens of users of #OpenAPS, we presented a research study at #2016ADA, with 18 individuals sharing outcomes data on A1c, TIR, and QOL improvements. (See that publication here: https://twitter.com/danamlewis/status/763782789070192640 ). I was often asked to share my data for people to analyze, but I’m not representative of entire #OpenAPS community. Plus, the community has kept growing: we estimate there are more than (n=1)*710+ (as of June 2018) people worldwide using different kinds of DIY APs. (Note: if you’d like to keep track of the growing #OpenAPS community, the count of loopers worldwide is updated periodically at  https://openaps.org/outcomes ).  I began to work with Open Humans to build the #OpenAPS Data Commons, enabling individuals to anonymously upload their data and consent to share it with the Data Commons.


Criteria for using the #OpenAPS Data Commons:

  • 1) share insights back with the community, especially if you find something about an individual’s data set where we should notify them
  • 2) publish in an accessible (and preferably open) manner

I’ve learned that not many researchers are prepared to take advantage of the rich (and complex) data available from #OpenAPS users, and many researchers have varying backgrounds and skillsets. To aid them, I created a series of open source tools (described here: http://bit.ly/2l5ypxq, and available at https://github.com/danamlewis/OpenHumansDataTools ) to help researchers & patients working with the data.
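As a flavor of what working with the donated data looks like, here is a minimal sketch that computes time in range from a Nightscout-style entries export (a list of records with an ‘sgv’ glucose value in mg/dL). The field name follows the common Nightscout convention and the file name is hypothetical; this is not one of the actual OpenHumansDataTools scripts:

```python
import json

def time_in_range(entries, low=70, high=180):
    """Fraction of CGM readings within [low, high] mg/dL.
    Assumes Nightscout-style entries, each with an 'sgv' field."""
    sgvs = [e["sgv"] for e in entries if "sgv" in e]
    if not sgvs:
        return None
    return sum(1 for v in sgvs if low <= v <= high) / len(sgvs)

with open("entries.json") as f:  # hypothetical export file from a data donation
    entries = json.load(f)

print(f"TIR 70-180: {time_in_range(entries):.1%}")
```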


We have a variety of research projects that have leveraged the anonymously donated, DIY closed loop data from the #OpenAPS Data Commons.

  • One research project, in collaboration with a Stanford team, evaluated published machine learning model predictions & #OpenAPS predictions. Some models (particularly linear regression) gave accurate predictions in the short term, but were less accurate longer term when insulin peaks. This study is pending publication, but I’d like to note the challenge of more traditional research keeping pace with DIY innovation: the code (and data) studied was from January 2017, and the #OpenAPS prediction code has been updated twice since then.
  • In response to the feedback from the #2016ADA #OpenAPS Outcomes study we presented, a follow up study on #OpenAPS outcomes was created in partnership with a team at Johns Hopkins. That study will be presented on Monday, 6-6:15pm (352-OR).
  • Many people publicly share their outcomes with DIY closed loops online. Sulka Haro has shared his script to evaluate the reduction in daily manual diabetes interventions after they began using #OpenAPS. Before: 4.5 manual corrections per day; now they treat <1 per day.
  • #OpenAPS features such as autosensitivity automatically detect sensitivity changes and insulin needs, improving outcomes. (See above at the top of this post for the full poster content).
  • If you missed it at #2017ADA (see here: http://bit.ly/2rMBFmn) , Autotune is a tool for assessing changes to basal rates, ISF, and carb ratio. Developed for #OpenAPS users but can also be used by traditional pumpers (and some MDI users also utilize it).

I’m also thrilled to share a new tool we’ve created: an #OpenAPS simulator to allow us to more easily back-test and compare settings changes & feature changes in #OpenAPS code.

  • We pulled a recent week of data for an n=1 adult PWD who does no-bolus, rough-carb-entry meal announcements, and ran the simulator to predict what the outcomes would be with no bolus and no meal announcement.

 

  • We also ran the simulator on an n=1 teen PWD who does no-bolus and no-meal-announcement in real life. The simulator tracked closely to his actual outcomes (validated this week with a lab A1c of 6.1).

The new #OpenAPS simulator will allow us to better test future algorithm changes and features across a diverse data set donated by DIY closed loop users.

There are many other studies & collaborations ongoing with the DIY community.

  • Michelle Litchman, Perry Gee, Lesly Kelly, and I have a paper pending review analyzing social-media-reported outcomes & themes from the DIY community.
  • There are also multiple other posters about DIY outcomes here at #2018ADA.
  • There are many topics of interest in the DIY community that we’d like to see studies on, and have data for. These include: “eating soon” (optimal insulin dosing for smaller post-prandial spikes); and variability in sensitivity across various ages, pregnancy, and the menstrual cycle.
  • I’m also thrilled to announce that funding will be awarded to OPEN (a new collaboration on Outcomes of Patients’ Evidence, with Novel, DIY-AP tech), a 36-month international collaboration assessing outcomes, QOL, further development, and access to real-world AP tech, etc. (More to come on this soon!)

In summary: we don’t have a choice about living with diabetes. We *do* have a choice to DIY, and also to do research to learn more and improve the knowledge and availability of tools for us PWDs more quickly. We would love to partner and collaborate with anyone interested in working with the DIY community, whether that is utilizing the #OpenAPS Data Commons for retrospective studies or designing prospective studies. If you take away one thing today: let it be the request for us all to openly share our tools, data, and insights so we can all make life with type 1 diabetes better, faster.


A huge thank you as always to the community: those who have donated and shared data; those who have helped develop, test, troubleshoot, and otherwise help power the #OpenAPS and other DIY diabetes communities.


Presentation:
Improvements in A1c and Time-in-Range in DIY Closed-Loop (OpenAPS) Users

(full tweet thread available here; or a description of this presentation below)

#OpenAPS is an open and transparent effort to make safe and effective Artificial Pancreas System (APS) technology widely available to reduce the burden of Type 1 diabetes. #OpenAPS evolved from my first DIY closed loop system and our desire to openly share what we’ve learned living with DIY closed loops. It takes a small, off-the-shelf computer; a radio; and a battery to communicate with existing insulin pumps and CGMs. As a PWD, I care a lot about safety: the safety reference design is the first thing in #OpenAPS that was shared, in order to help set expectations around what a DIY closed loop can (and cannot) do.

As I shared about my own DIY experience, people questioned whether it would work for others, or just me. At #2016ADA, we presented an outcomes study with data from 18 of the first 40 DIY closed loop users. Feedback on that study included requests to evaluate CGM data, given concerns around accuracy of self-reported outcomes.

This 2018 #OpenAPS outcomes study was the result. We performed a retrospective cross-over analysis of continuous BG readings recorded during 2-week segments 4-6 weeks before and after initiation of OpenAPS.

For this study, n=20 based on the availability of data that met the stringent protocol requirements (and the limited number of people who had both recorded that data and donated it to the #OpenAPS Data Commons in early 2017). Demographics show that, like the 2016 study, the people choosing to use #OpenAPS typically have a lower A1c than the average T1D population; have had diabetes for over a decade; and are long-time pump and CGM users. Like the 2016 study, this 2018 study found mean BG and TIR improved across all time categories (overall, day, and nighttime).


Overall, mean BG (mg/dl) improved (135.7 to 128.3) and mean estimated HbA1c improved (6.4 to 6.1%). TIR (70-180) increased from 75.8 to 82.2%. Time spent high and time spent low were both reduced, in addition to the eAG and estimated A1c reductions. Overnight (11pm-7am) showed smaller improvements than daytime across these categories.
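The “mean estimated HbA1c” figures above are consistent with the standard ADAG relationship between average glucose and A1c (eAG in mg/dL = 28.7 * A1c - 46.7). A quick check of the numbers (my own arithmetic, not the study’s analysis code):

```python
def estimated_a1c(mean_bg_mgdl):
    """Estimate A1c (%) from mean glucose via the ADAG formula:
    eAG (mg/dL) = 28.7 * A1c - 46.7, solved for A1c."""
    return (mean_bg_mgdl + 46.7) / 28.7

print(round(estimated_a1c(135.7), 1))  # pre-looping mean BG  -> 6.4
print(round(estimated_a1c(128.3), 1))  # post-looping mean BG -> 6.1
```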

Notably: although this study primarily focused on a 4-6 week time frame pre-looping vs. 4-6 weeks post-looping, the improvements in all categories are sustained over time by #OpenAPS users.


Conclusion: Even with tight initial control, persons with T1D saw meaningful improvements in estimated A1c, TIR, and a reduction in time spent high and low, during the day and at night, after initiating #OpenAPS. Although this study focused on BG data from CGM, do not overlook additional QOL benefits when analyzing the benefits of hybrid closed loop therapy or designing future studies! See the examples shared by Sulka Haro and Jason Wittmer for the quality of life impacts of #OpenAPS.

A huge thank you to the community: those who have donated and shared data; those who have helped develop, test, troubleshoot, and otherwise help power the #OpenAPS and other DIY diabetes communities.

And, special thank you to my co-authors, Scott Swain & Tom Donner, for the collaboration on this study.

Not bolusing for meals (Fiasp, 0.6.0 algorithm in oref0 dev branch, and more)

I tweeted last week+, “I just realized I’ve now gone about 3 weeks without meal bolusing.” That means just a meal announcement (i.e. carb entry estimate, a la 30 carbs or 60 carbs or whatever, based on my IFTTT buttons). No manual bolus.

Highlighting 3 weeks without meal bolusing, and just doing a carb announcement, with good outcomes thanks to OpenAPS

I kind of keep waiting for the other shoe to drop, because it sounds too good to be true. I’m sure you’re skeptical reading this.

I bet she’s doing SOME bolus.

Well, she must not be eating any carbs.

She must be having worse outcomes, bad post-meal BGs, etc.

Nope, nope, and nope.

  • While I started testing this new set of features with partial boluses and worked my way down (see more below on the testing topic), I’m now literally doing no manual meal bolus. I start eating, and press one button on my watch for a carb estimate entry that goes via IFTTT to Nightscout and my rig (a rough sketch of what that carb entry looks like is below).
  • I eat carbs. I’ve eaten 120 grams of carbs of gluten free biscuits and gravy; 60-90 grams of pasta; dinner followed by a few gluten free cookies, etc.
  • More nuanced details below, but:
    • My 70-180 time in range has stayed the same (93+%) compared to the versions I was testing before with manual meal boluses.
    • My 70-150 and 80-160 time in ranges have decreased slightly compared to manual meal boluses, but…
    • My average blood sugar has actually dropped down (as has my a1c to match).
    • (So this means I’m having a few more spikes above 160, usually topping out in the 160-170 range, whereas before, my manual meal boluses would have me top out around 150 when all was well.)

Also note – no eating soon required. No early bolus or pre-bolus. Just a single button press as I stick food in my mouth.
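For anyone wondering what that one button press actually does under the hood: it ends up creating a carb-only treatment in Nightscout, which the rig then picks up. Here is a rough sketch of what such a carb announcement could look like if you posted it directly to a Nightscout instance; the URL, secret handling, and field values are illustrative assumptions about a typical Nightscout setup, not my exact IFTTT configuration:

```python
import hashlib
import requests  # third-party: pip install requests

NIGHTSCOUT_URL = "https://example-nightscout.example.com"  # hypothetical site URL
API_SECRET = "my-api-secret"                               # placeholder secret

def announce_carbs(carbs_grams):
    """Post a carb-only treatment (no bolus) to Nightscout's treatments endpoint."""
    headers = {
        # Nightscout commonly accepts the SHA-1 hash of the API secret in this header.
        "api-secret": hashlib.sha1(API_SECRET.encode()).hexdigest(),
        "Content-Type": "application/json",
    }
    payload = {
        "eventType": "Carb Correction",  # a standard careportal event type
        "carbs": carbs_grams,
        "enteredBy": "watch-button",     # illustrative label
    }
    response = requests.post(f"{NIGHTSCOUT_URL}/api/v1/treatments",
                             json=payload, headers=headers)
    response.raise_for_status()

announce_carbs(60)  # e.g. a rough "60 carbs" meal announcement
```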

Wow.

(See where I said, waiting for the other shoe to drop?)

That’s why I waited a while to even tweet about it. Maybe it’s a fluke. Maybe it won’t work for other people. Maybe, maybe, maybe. Who knows. It’s still fairly early to tell, but as other people begin to test the current dev branch of oref0 with 0.6.0-related features, they are starting to see improvements as well. (And that could be due to some of the many other features we are adding to 0.6.0, ranging from exponential curves for insulin activity, to allowing SMBs to do more, to carb-ratio-tuned autosensitivity, to huge autotune improvements, etc.)

So while I don’t want to over-hype – and never do, what works for me will not work for everyone – I do want to share my cautious excitement over continuing to be able to push the envelope on algorithms and what might be possible outcome-wise for this kind of technology.

Suggesting no meal bolus means we can quit arguing about the name "artificial pancreas"

Here’s what is enabling me to be in the no-bolus zone for now well over a month, with still (to me) great outcomes worth the tradeoffs described above:

  1. Faster insulin. Thanks to our lovely looping friends in Germany/Austria, we came back from Europe with a few vials of Fiasp to try. I was HIGHLY skeptical about this. Some of our European friends saw great results right away; others didn’t. I didn’t get great results on it at first. Some of that may be due to natural changes between insulin types and not knowing exactly how to adjust my manual bolus strategy to the faster insulin action, but until we made some code changes to allow SMBs to do more and added some other features to what’s now 0.6.0, I wasn’t thrilled, and in fact after about two weeks I was about to switch off of it. So that brings me to #2.
  2. More improvements to the algorithm, which will become the 0.6.0 release of oref0. There’s a whole lot of stuff packed in there. Exponential insulin activity curves (a rough sketch of that idea is below). Different carb absorption decay calculations. Allowing SMB to do more. Additional safety guards since we ramped SMB up.
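As a rough illustration of what “exponential insulin activity curves” means, here is a sketch of the commonly discussed exponential insulin-activity model, parameterized by a peak time and a duration. The parameter values below are illustrative (a ~45 minute peak, like the faster insulin mentioned earlier, and a 5 hour duration), and this is a simplification rather than oref0’s exact implementation:

```python
import math

def insulin_activity(t, peak=45, duration=300):
    """Exponential insulin activity curve: fraction of a bolus acting per minute,
    peaking at `peak` minutes and tapering to zero at `duration` minutes.
    (Illustrative parameters; not oref0's exact code.)"""
    if t <= 0 or t >= duration:
        return 0.0
    tau = peak * (1 - peak / duration) / (1 - 2 * peak / duration)
    a = 2 * tau / duration
    s = 1 / (1 - a + (1 + a) * math.exp(-duration / tau))
    return (s / tau ** 2) * t * (1 - t / duration) * math.exp(-t / tau)

def insulin_on_board(t, peak=45, duration=300):
    """IOB fraction remaining at minute t: 1 minus the activity delivered so far
    (simple numerical integration, good enough for an illustration)."""
    used = sum(insulin_activity(m, peak, duration) for m in range(int(t)))
    return max(0.0, 1.0 - used)

for minute in (30, 60, 120, 240):
    print(minute, round(insulin_activity(minute), 5), round(insulin_on_board(minute), 3))
```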

How we started testing no-bolus approach:

  • I have always known that about 6u of insulin (thanks to testing dating back to my early DIYPS days, many many many moons ago) is about as much as I should bolus at any time. So, even if I ate 120 carbs, I usually did about a 6u bolus up front, and let the rig pick up the rest as needed over more hours. I started doing ~75% or something like that of boluses, based on wherever I felt like rounding to with my easy bolus buttons.
  • Whether I did 75% or 100%, I didn’t see a ton of difference at first…
  • ..so I took a leap and tried no-bolus with some SMB adjustments to allow it to ramp up faster with carb entry. Behaviorally, I find it a lot easier to do nothing 😀 vs. figure out the right amount of up front bolus. And outcomes wise (see above) it was very similar.

It definitely was an interesting approach to test. Between the Fiasp and the no-bolus up front, some meals matched really well and I had practically no rise. Due to incoming netIOB, food type, etc., sometimes I did have a rise – but while it spiked slightly higher (usually 160-170 vs. my earlier 150s with a manual bolus), it was only up there for 2-3 data points and then came sharply down, leveling out smoothly in my preferred post-meal range. So an important lesson I learned was not to over-react to the BG curve going up without looking at the predictions to see where I was going to come back down. (And as I had more than one meal where the spike and drop back to normal happened, it became very easy to trust the BG graph and not feel that emotional tug to “do more” with a quick, short rise like that.)

Obviously, starting BG makes a difference. I’m usually starting <130 mg/dL when I see these spikes cap out at 170 or lower. I’ve started higher, and seen higher rises, too. They’re not all perfect: with occasional pump site issues, carb underestimates, unplanned carb stacking, and all the randomness of diabetes and a non-structured lifestyle (including live-testing bleeding edge algorithm changes), I’ve spent 12% of the last month >160 mg/dL, which is about the same as the 3 months before that. But in most cases (I’d say 95%), the no-bolus approach has actually yielded better outcomes than I expected AND has avoided post-meal lows better than I would have achieved with a manual bolus.

This is huge when you think about the QOL aspect of not having to do as much math at a meal, and when you think about all the complicating factors related to food: timing (do you bolus when you order, when the food arrives, or earlier than that?), and the gluten factor. I have celiac disease, so if I’m eating out (which we do a lot, especially since I travel frequently), bolusing before I’ve laid eyes on the food (and confirmed they didn’t plate it with bread, which would mean sending it back to start all over again) just isn’t smart. That’s why “eating soon” historically worked so well for me vs. traditional pre-boluses: I could set the target entering the restaurant, bolus when I laid eyes on my hopefully safe food, and get reasonable (topping out around 150) meal outcomes.

It also worked really well in the case where a restaurant cooked my gluten free pasta in the same pasta cooker and water as regular pasta, but didn’t inform me until after I found stray gluten noodles in the bottom of my pasta dish and started asking how that was possible, since they (used to) do gluten free well. (Now, I pick up heaps of pasta and sort the noodles one by one to make sure they all match before ever eating gluten free pasta. It makes waiters look at you very worriedly as you wave pasta around in the air, but better safe than glutened (again).) So, I was majorly glutened, and my digestive system was all out of sorts (isn’t that a nice, polite way to describe getting glutened?) for many days, which of course impacted BG and insulin right then and for the days afterward. But because I had done carb entry and no bolus, I was able to edit the carb entry down; I didn’t have that much insulin stacked, and I didn’t end up low after glutening, which is usually what happens.

Is that a super common situation for most people? No. But it was super nice. It also helped me face pasta again last night: I could put in a (very low, in case of gluten) carb estimate, match my noodles, eat pasta, and let the SMBs ramp up to match absorption. It works very well for me.

Example BG graph from only announcing, not bolusing for, a meal with OpenAPS

Whether you have celiac or not, for many reasons (insert yours here), it’s nice not to have to commit to the bolus up front. It’s closer to what I think non-PWDs do at mealtimes: just eat.

(I haven’t done much testing (yet? TBD) of the no-carb-entry, no-meal-bolus scenario. I expect I would see higher spikes, but it would be interesting to see whether BG would still come down reasonably fast. It probably wouldn’t be my go-to strategy, because I don’t mind a one-button, general meal-size estimate, but it would be nice to know what that curve shape looks like. If I test it, I’ll start with small snacks and ramp my way up.)

The questions I always get:

  1. Q: HOW DO I GET THIS?
    A: Caution: like all things OpenAPS, but especially true for the development branch, 0.6.0 is NOT yet released to master and is still highly experimental. I wouldn’t install dev unless you want to pay lots of close attention to it and are willing to update multiple times over the course of the week, because Scott and I are merging features and tweaks into it almost daily.

    Got the disclaimers down? Ok. It’s in the dev branch of oref0. You should read this PR, which has notes on some of what’s included, but you should also review the code diff to see everything that’s changed, because it’s not all documented yet. Also, follow the instructions at the bottom to install it without git. Hop into Gitter if you have questions about it!

    (Big huge thanks to folks like Tim and Matthias for early testing of 0.6.0; to Tim for writing up the initial rounds of 0.6.0-dev here (note that we’ve made further changes since this post); and to others who’ve been testing & providing feedback and input into the dev branch!)

  2. Q: When will this get “released” to master?
    A: It depends. This is still a highly active dev branch, and we’re making a lot of changes, tweaking, and testing things. The more people who test now and provide feedback, the sooner we can get to the final “prepare for release” testing stage. It takes lots and lots of testing, and timing depends on how much of the existing code needs tweaking and what else we decide should go into this release. So, there’s never any specific release date.
  3. Q: What is Fiasp?
    A: A faster-acting insulin that was only approved in Europe and Canada…until today. Convenient timing. I asked a PR person who messaged me about it, and they said it’s estimated to be available in U.S. pharmacies by late December/early Q1. As previously stated, it’s already available in other parts of the world.

    Fiasp peaks sooner (say, ~45 minutes) with the same tail as everything else. It’s not instantaneous. For your million and one questions about whether it’s approved for your use in a tree, on a plane, at the zoo, and all other extrapolations – please ask Google/your doctor/the manufacturer, and not me. I don’t know. :)

  4. Q: Will any of this work for people NOT on Fiasp?
    A: Nothing is guaranteed (even for other people on Fiasp), but the folks who’ve started testing 0.6.0 even without Fiasp (on Humalog or Novolog/Novorapid, etc.) have been happier on it vs. earlier versions, too.

    I don’t expect Fiasp to work super well forever for me, given what I’ve heard from other people with months of experience on it…and given my first two weeks of Fiasp not being spectacular, I don’t want people to expect miracles. (Sorry, this blog post does not promise miracles, so sorry if you got super excited at the above. No miracles! This is not a cure! We still have diabetes!) Like all things artificial pancreas, I think it’s better to be cautiously hopeful, with realistic expectations that things *might* be a little bit better than before, but as always, YDMV (your diabetes may/will always vary), your body will vary, life happens, etc., so who knows.

Just 4 months ago, we published a blog post pointing out that the new features had allowed us to achieve 4 out of 5 of: no bolus; no carb counting; medium/high-carb meals; 80%+ time in range; and no hypoglycemia. With Fiasp and 0.6.0 (currently what’s in the dev branch), we’ve now achieved all 5 simultaneously: I can eat large high-carb meals, enter very vague guesstimates of 60 or 90 carbs (no need for actual carb counting, just a general size-based meal announcement), and still achieve 80%+ time in range (70-150 mg/dL) without ever going <55 mg/dL. Does that mean that OpenAPS with Fiasp finally meets the definition of a “real” Artificial Pancreas (step 5 on JDRF’s 6-step AP development pathway)? We think it does.

So, tl;dr (because long post is long): with Fiasp and the 0.6.0-dev branch, I’m able to not bolus for meals and just enter a very general meal-size estimate. It’s working well for me, and like all things, we’re working to make it available via OpenAPS for others who want to try similar features/approaches. It may not work well for everyone. If it helps one other person, though, like everything else, it’ll be worth it. Big thanks to Scott for LOTS of development in 0.6.0 and partnership in designing these features; to too many people to name for testing, providing feedback, and helping iterate on these features; and to the entire community for being awesome and helping us continue to push the envelope on what might be possible for those of us with type 1 diabetes. :)

This. Matters. (Why I continue to work on #OpenAPS, for myself and for others)

If you give a mouse a cookie or give a patient their data, great things will happen.

First, it was louder CGM alarms and predictive alerts (#DIYPS).

Next, it was a basic hybrid closed loop artificial pancreas that we open sourced so other people could build one if they wanted to (#OpenAPS, with the oref0 basic algorithm).

Then, it was all kinds of nifty lessons learned about timing insulin activity optimally (doing “eating soon” mode around an hour before a meal) and how to use things like IFTTT integration to squash even the tiniest (like from 100 mg/dL to 140 mg/dL) predictable rises.

It was also things like displays, buttons, and widgets on the devices of my choice – ranging from being able to “text” my pancreas, to a swipe and button tap on my phone, to a button press on my watch – not to mention tinier pancreases that fit in, or clip easily to, a pocket.

Then it was autosensitivity that enabled the system to adjust to my changing circumstances (like getting a norovirus), plus autotune to make sure my baseline pump settings were where they needed to be.

And now, it’s oref1 features that enable me to make different choices at every meal depending on the social situation and what I feel like doing, while still getting good outcomes. Actually, not good outcomes. GREAT outcomes.

With oref0 and OpenAPS, I’d been getting good or really good outcomes for 2 years. But it wasn’t perfect – I wasn’t routinely getting 100% time in range with a 24-hour average BG at the lower end of that range. ~90% time in range was more common. (Note: this time in range is generally calculated against 80-160 mg/dL. I could easily “get” a higher time in range with an 80-180 mg/dL target, or a lot higher with a 70-170 mg/dL target, but 80-160 mg/dL was what I was actually shooting for, so that’s what I calculate for me personally.) I was fairly happy with my average BGs, but they could have been slightly better.
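
To illustrate how much the chosen range changes the headline number, here’s a small Python sketch that computes time in range for the same set of readings against different ranges. The helper name and the sample readings are made up for illustration.

```python
def time_in_range(bgs, low, high):
    """Percent of CGM readings within [low, high] mg/dL (illustrative helper)."""
    in_range = sum(1 for bg in bgs if low <= bg <= high)
    return 100.0 * in_range / len(bgs)

# Hypothetical readings (a real day at 5-minute intervals would have 288 points)
readings = [92, 105, 118, 131, 144, 157, 163, 149, 138, 122, 110, 98, 174, 86]

for low, high in [(80, 160), (80, 180), (70, 170)]:
    print(f"{low}-{high} mg/dL: {time_in_range(readings, low, high):.0f}%")
# 80-160 mg/dL: 86%   80-180 mg/dL: 93%   70-170 mg/dL: 93%
```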

I wrote from a general perspective this week about being able to “choose one” thing to give up. And oref1 is a definite game changer for this.

  • It’s being able to put in a carb estimate and do a single, partial bolus, and see your BG go from 90 to peaking out at 130 mg/dL despite a large-carb (and purely ballpark-estimated) meal. And no later rise or drop, either.
  • It’s now seeing multiple days a week with 24-hour average BGs a full ~10 or so points lower than I’m used to regularly seeing – and multiple days in a week with a full 100% time in range (for 80-160 mg/dL), and otherwise being really darn close to 100% way more often than I’ve been before.

But I have to tell you – seeing is believing, even more than the numbers show.

I remember in the early days of #DIYPS and #OpenAPS, there were a lot of people saying “well, that’s you”. But it’s not just me. See Tim’s take on “changing the habits of a lifetime”. See Katie’s parent perspective on how much her interactions/interventions have lessened on a daily basis when testing SMB.

See this quote from Matthias, an early tester of oref1:

I was pretty happy with my 5.8% from a couple months of SMB, which has included the 2 worst months of eating habits in years.  It almost feels like a break from diabetes, even though I’m still checking hourly to make sure everything is connected and working etc and periodically glancing to see if I need to do anything.  So much of the burden of tight control has been lifted, and I can’t even do a decent job explaining the feeling to family.

And another note from Katie, who started testing SMB and oref1:

We used to battle 220s at this time of day (showing a picture flat at 109). Four basal rates in morning. Extra bolus while leaving house. Several text messages before second class of day would be over. Crazy amount of work [in the morning]. Now I just have to brush my teeth.

And this, too:

I don’t know if I’ve ever gone 24 hours without ANY mention of something that was because of diabetes to (my child).

Y’all. This stuff matters. Diabetes is SO much more than the math – it’s the countless seconds that add up and subtract from our focus on school/work/life. And diabetes is taking away this time not just from a person with diabetes, but from our parents/spouses/siblings/children/loved ones. It’s a burden, it’s stressful…and everything we can do to improve quality of life matters. It brings me to tears every time someone posts about these types of transformative experiences, because it’s yet another reminder that this work makes a real difference in the real lives of real people. (And it’s helpful for Scott to hear this type of feedback, too – since he doesn’t have diabetes himself, it’s powerful for him to see how his code contributions and the features we’re designing and building are making a difference, not just to BG outcomes but to people’s daily lives.)

Thank you to everyone who keeps paying it forward to help others, and to all of you who share your stories and feedback to help and encourage us to keep making things better for everyone.

 

Why guess when you don’t have to? (#OpenAPS logs & why they’re handy)

One of the biggest benefits (in my very biased opinion) of a DIY closed loop is this: it’s designed to be understandable to the person using it.

You don’t have to guess “what did it do at 2am?” or “why did it do a temp basal and not an SMB?”

Well, you COULD guess – but you don’t have to. Guessing is a choice ;).

Because we’ve been designing a system that a person has to decide to trust, it provides information about everything it’s doing and all the data it has. That’s what “the logs” are for, and you can get at them from a couple of places:

  • The OpenAPS “pill” in Nightscout
  • Secondary logging sources like Papertrail
  • Information that shows up on your Pebble watch
  • The full logs from SSH’ing into a rig (usually what we mean when we ask, “what do your logs say?”)

Here’s an example of the information the OpenAPS pill provides me in Nightscout:

Example OpenAPS pill info in Nightscout

This tells me that at 11:03 am, my BG was 121; I had no carbs on board; I was dropping a tiny bit, as expected, and was likely going to end up slightly below my target; and the temporary basal rate currently running was about equivalent to what OpenAPS thought I needed at the time. I had 0.47 netIOB, all from basal adjustments. It also spells out the eventual BG numbers that drive the “purple line” predictions displayed in Nightscout, so if you can’t tell where the line is (90 or 100?), you can use the pill information to figure that out more easily.

(Here are the instructions for setting up Nightscout for OpenAPS)

Here’s an example of a log from Papertrail and what it tells us:

Example papertrail usage for OpenAPS

This example is from Katie, who described her daughter’s patterns in the morning: when Anna left her rig in the bedroom and went to take a shower, you can see the tuning change at around 6:55, meaning she’s out of range of the rig. After the shower, getting dressed, and getting back near the rig around 7:25, it goes back to “normal” tuning (which means it is reading and writing to the pump as usual).

Papertrail is handy for remotely figuring out whether a rig is working and, at a high level, why it might not be, especially if it’s a communication or power problem. But I generally find it most helpful when you already know what kind of problem it is and want to drill down on a particular thing. However, it’s not going to give you absolutely all the details needed for every problem – so make sure to read about how to access the traditional logs, too, and be able to do that on the go.

(Here are the instructions for getting Papertrail going for OpenAPS)

Here’s what the logs ported to my Pebble tell me:

OpenAPS logs on Pebble watch @DanaMLewis example

There are several helpful things displayed on my watch (using the excellent “Urchin” watchface designed by Mark Wilson, which you can customize to suit your personal preferences): BGs, basal activity, and then some detailed text similar to what’s in the OpenAPS pill (current BG, the change in BG, timestamp of the BG, my netIOB, my eventual BGs, and any temp basal activity). In this case, it’s easy for me to glance and see that I was a bit low for a while; I’m now flat but have negative netIOB, so it’s been high-temping a bit to level that out.

(I’ve always preferred a data-rich watchface – even back in the days of “open looping” with #DIYPS:)

https://twitter.com/danamlewis/status/652566409537433600/photo/1

(Here’s more about the Urchin watchface)

Here’s what the full logs from the rig tell me:

Example OpenAPS logs from the rig

This has a LOT of information in it (which is why it’s so awesome). There are messages from each step of the loop – listening for “silence” to make sure it can talk successfully to the pump; refreshing pump history; checking the clocks on devices and for fresh BGs; and then processing through the math on what the BG is, where it’s headed, and what needs to happen. You can also see from this example where autosensitivity is kicking in, adjusting basals slightly up, targets down, and sensitivity down, etc. (And for those who aren’t testing oref1 features like SMB and UAM yet, you’ll get a glimpse of how there’s now additional information in the logs about whether those features are currently enabled.)

(Here are some other logs you can watch, and how to run them)

Pro tip for OpenAPS users: if you’re logged into your rig, you just have to type l (the letter “L” but lower case) for it to bring up your logs!

So if you find yourself wondering: what did OpenAPS do/why did it do <thing>? Instead of wondering, start by looking at the logs.

And remember: if you don’t know what the problem is, the full logs are the best source of information for spotting it. You can then use the information from the logs to ask how to resolve a particular problem (Gitter is great for this!) – but part of troubleshooting well/finding out more is taking the first step of pulling up your logs, because anyone who is going to help you troubleshoot will need that information to figure out a solution.

And if you ever see someone say “RTFL”, instead of “read the manual” or “read the docs”, it means “read the logs”. 😉 :)

Introducing oref1 and super-microboluses (SMB) (and what it means compared to oref0, the original #OpenAPS algorithm)

For a while, I’ve been mentioning “next-generation” algorithms in passing when talking about some of the work that Scott and I have been doing as it relates to OpenAPS development. After we created autotune to help people (even non-loopers) tune underlying pump basal rates, ISF, and CSF, we revisited one of our regular threads of conversations about how it might be possible to further reduce the burden of life with diabetes with algorithm improvements related to meal-time insulin dosing.

This is why we first created meal-assist and then “advanced meal-assist” (AMA), because we learned that most people have trouble with estimating carbs and figuring out optimal timing of meal-related insulin dosing. AMA, if enabled and informed about the number of carbs, is a stronger aid for OpenAPS users who want extra help during and following mealtimes.

Since creating AMA, Scott and I had another idea of a way that we could do even more for meal-time outcomes. Given the time constraints and reality of currently available mealtime insulins (that peak in 60-90 minutes; they’re not instantaneous), we started talking about how to leverage the idea of a “super bolus” for closed loopers.

A super bolus is an approach you can take to give more insulin up front at a meal, beyond what the carb count would call for, by “borrowing” from basal insulin that would be delivered over the next few hours. By adding insulin to the bolus and then low temping for a few hours after that, it essentially “front shifts” some of the insulin activity.

Like a lot of things done manually, it’s hard to do safely and achieve optimal outcomes. But, like a lot of things, we’ve learned that by letting computers do more precise math than we humans are wont to do, OpenAPS can actually do really well with this concept.
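
For a concrete feel of the arithmetic, here’s a minimal Python sketch of the manual “super bolus” idea described above (my own illustration, not OpenAPS code): add a few hours’ worth of basal to the meal bolus up front, then withhold that same basal afterward with a zero temp. The carb ratio, basal rate, and “borrow” window are hypothetical.

```python
def super_bolus(carbs_g, carb_ratio_g_per_u, basal_u_per_hr, borrow_hours=2.0):
    """Illustrative arithmetic for a manual 'super bolus' (not OpenAPS code):
    front-shift insulin by adding borrowed basal to the meal bolus, then
    withhold that same basal with a zero temp afterward."""
    meal_bolus_u = carbs_g / carb_ratio_g_per_u
    borrowed_u = basal_u_per_hr * borrow_hours
    return {
        "bolus_u": round(meal_bolus_u + borrowed_u, 1),
        "zero_temp_minutes": int(borrow_hours * 60),
    }

# e.g. 60 g of carbs, a 1:10 carb ratio, 1 U/hr basal, borrowing 2 hours of basal
print(super_bolus(60, 10, 1.0))  # {'bolus_u': 8.0, 'zero_temp_minutes': 120}
```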

Introducing oref1

Those of you who are familiar with the original OpenAPS reference design know that ONLY setting temporary basal rates was a big safety constraint. Why? Because it’s less of an issue if a temporary basal rate is issued over and over again, and if the system stops communicating, the temp basal eventually expires and the pump resumes normal activity. That was a core part of oref0. So, to distinguish this new set of algorithm features that departs from that aspect of the oref0 approach, we are introducing it as “oref1”. Most OpenAPS users will only use oref0, like they have been doing. oref1 features should only be enabled specifically by advanced users who want to test or use them.

The notable difference between the oref0 and oref1 algorithms is that, when enabled, oref1 makes use of small “supermicroboluses” (SMB) of insulin at mealtimes to more quickly (but safely) administer the insulin required to respond to blood sugar rises due to carb absorption.

Introducing SuperMicroBoluses (or “SMB”)

The microboluses administered by oref1 are called “super” because they use a miniature version of the “super bolus” technique described above.  They allow oref1 to safely dose mealtime insulin more rapidly, while at the same time setting a temp basal rate of zero of sufficient duration to ensure that BG levels will return to a safe range with no further action even if carb absorption slows suddenly (for example, due to post-meal activity or GI upset) or stops completely (for example due to an interrupted meal or a carb estimate that turns out to be too high). Where oref0 AMA might decide that 1 U of extra insulin is likely to be required, and will set a 2U/hr higher-than-normal temporary basal rate to deliver that insulin over 30 minutes, oref1 with SMB might deliver that same 1U of insulin as 0.4U, 0.3U, 0.2U, and 0.1U boluses, at 5 minute intervals, along with a 60 minute zero temp (from a normal basal of 1U/hr) in case the extra insulin proves unnecessary.

As with oref0, the oref1 algorithm continuously recalculates the insulin required every 5 minutes based on CGM data and previous dosing, which means that oref1 will continually issue new SMBs every 5 minutes, increasing or reducing their size as needed as long as CGM data indicates that blood glucose levels are rising (or not falling) relative to what would be expected from insulin alone.  If BG levels start falling, there is generally already a long zero temp basal running, which means that excess IOB is quickly reduced as needed, until BG levels stabilize and more insulin is warranted.
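
Here’s a simplified Python sketch of that dosing pattern (not the actual oref1 code, and with made-up parameter names): each 5-minute cycle delivers at most a fraction of the insulin that still appears to be required, rounded down to the pump’s bolus increment, while a long zero temp runs in the background in case the insulin proves unnecessary.

```python
def smb_schedule(insulin_required_u, max_fraction=1/3, increment_u=0.1, cycles=8):
    """Simplified illustration (not actual oref1 logic) of delivering a required
    dose as a series of super-microboluses: each 5-minute cycle gives at most a
    fraction of what still appears to be needed, rounded down to the pump's
    bolus increment. In reality the requirement is recalculated every cycle
    from fresh CGM and pump data rather than by simple subtraction."""
    remaining = insulin_required_u
    schedule = []
    for _ in range(cycles):
        smb = int((remaining * max_fraction) / increment_u) * increment_u
        if smb <= 0:
            break
        schedule.append(round(smb, 2))
        remaining -= smb
    return schedule

# 1 U required: delivered as successively smaller microboluses over ~25 minutes
print(smb_schedule(1.0))  # [0.3, 0.2, 0.1, 0.1, 0.1]
```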

Safety constraints and safety design for SMB and oref1

Automatically administering boluses safely is of course the key challenge with such an algorithm, as we must find another way to avoid the issues highlighted in the oref0 design constraints.  In oref1, this is accomplished by using several new safety checks (as outlined here), and verifying all output, before the system can administer an SMB.

At the core of the oref1 SMB safety checks is the concept that OpenAPS must verify, via multiple redundant methods, that it knows about all insulin that has been delivered by the pump, and that the pump is not currently in the process of delivering a bolus, before it can safely do so.  In addition, it must calculate the length of zero temp required to eventually bring BG levels back in range even with no further carb absorption, set that temporary basal rate if needed, and verify that the correct temporary basal rate is running for the proper duration before administering a SMB.

To verify that it knows about all recent insulin dosing and that no bolus is currently being administered, oref1 first checks the pump’s reservoir level, then performs a full query of the pump’s treatment history, calculates the required insulin dose (noting the reservoir level the pump should be at when the dose is administered) and then checks the pump’s bolusing status and reservoir level again immediately before dosing.  These checks guard against dosing based on a stale recommendation that might otherwise be administered more than once, or the possibility that one OpenAPS rig might administer a bolus just as another rig is about to do so.  In addition, all SMBs are limited to 1/3 of the insulin known to be required based on current information, such that even in the race condition where two rigs nearly simultaneously issue boluses, no more than 2/3 of the required insulin is delivered, and future SMBs can be adjusted to ensure that oref1 never delivers more insulin than it can safely withhold via a zero temp basal.
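
As a rough sketch of that sequencing (using hypothetical pump-interface method names; this is not the oref0/oref1 implementation), the pre-dose checks look something like this:

```python
def maybe_deliver_smb(pump, insulin_required_u):
    """Sketch of the safety-check sequence described above, using hypothetical
    pump-interface methods (read_reservoir, is_bolusing, deliver_bolus);
    NOT the actual oref0/oref1 code."""
    reservoir_at_calculation = pump.read_reservoir()

    # Cap each SMB at 1/3 of the insulin currently known to be required, so even
    # if two rigs race each other, at most 2/3 of the required insulin is given.
    smb_u = round(insulin_required_u / 3.0, 1)
    if smb_u <= 0:
        return 0.0

    # Re-verify immediately before dosing: no bolus currently in progress, and
    # the reservoir still matches what it was when the dose was calculated.
    if pump.is_bolusing():
        return 0.0
    if pump.read_reservoir() != reservoir_at_calculation:
        return 0.0  # stale recommendation; recalculate on the next loop cycle

    pump.deliver_bolus(smb_u)
    return smb_u
```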

In some situations, a lack of BG or intermittent pump communications can prevent SMBs from being delivered promptly.  In such cases, oref1 attempts to fall back to oref0 + AMA behavior and set an appropriate high temp basal.  However, if it is unable to do so, manual boluses are sometimes required to finish dosing for the recently consumed meal and prevent BG from rising too high.  As a result, oref1’s SMB features are only enabled as long as carb impact is still present: after a few hours (after carbs all decay), all such features are disabled, and oref1-enabled OpenAPS instances return to oref0 behavior while the user is asleep or otherwise not engaging with the system.

In addition to these safety status checks, the oref1 algorithm’s design helps ensure safety.  As already noted, setting a long-duration temporary basal rate of zero while super-microbolusing provides good protection against hypoglycemia, and very strong protection against severe hypoglycemia, by ensuring that insulin delivery is zero when BG levels start to drop, even if the OpenAPS rig loses communication with the pump, and that such a suspension is long enough to eventually bring BG levels back up to the target range, even if no manual corrective action is taken (for example, during sleep).  Because of these design features, oref1 may even represent an improvement over oref0 w/ AMA in terms of avoiding post-meal hypoglycemia.
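
A back-of-the-envelope version of “long enough to eventually bring BG levels back to target” is sketched below (illustrative names and numbers; not the oref1 formula): the zero temp must run long enough that the basal insulin being withheld offsets the projected shortfall below target.

```python
def zero_temp_minutes(worst_case_bg, target_bg, isf_mgdl_per_u, basal_u_per_hr):
    """Back-of-the-envelope sketch (not the oref1 formula): how long a zero temp
    must run so the withheld basal insulin raises the worst-case predicted BG
    back to target, assuming no further carb absorption and no manual action."""
    shortfall_mgdl = max(0.0, target_bg - worst_case_bg)
    insulin_to_withhold_u = shortfall_mgdl / isf_mgdl_per_u
    return int(round(insulin_to_withhold_u / basal_u_per_hr * 60))

# e.g. worst-case prediction 70 mg/dL, target 100, ISF 40 mg/dL per U, basal 1 U/hr
print(zero_temp_minutes(70, 100, 40, 1.0))  # 45 minutes of zero temp
```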

In real world testing, oref1 has thus far proven at least as safe as oref0 w/ AMA with regard to hypoglycemia, and better able to prevent post-meal hyperglycemia when SMB is ongoing.

What does SMB “look” like?

Here is what SMB activity currently looks like when displayed on Nightscout and on my Pebble watch:

First oref1 SMB OpenAPS test by @DanaMLewis
First oref1 SMB OpenAPS test as seen on @DanaMLewis Pebble watch

How do features like this get developed and tested?

SMB, like any other advanced feature, goes through extensive testing. First, we talk about it. Then it gets written up in plain language as an issue for us to track discussion and development. Then we begin to develop the feature, and Scott and I test it on a spare pump and rig. When it gets to the point of being ready to test in the real world, I test it during a time period when I can focus on observing and monitoring what it is doing. Throughout all of this, we continue to make tweaks and changes to improve what we’re developing.

After several days (or, for something this different, weeks) of Dana-testing, we then have a few other volunteers begin to test it on spare rigs. They follow the same process of monitoring it on spare rigs, giving feedback, and helping us develop it before choosing to run it on a rig and a pump connected to their body. More feedback, discussion, and observation.

Eventually, it gets to a point where it is ready to go to the “dev” branch of OpenAPS code, which is where this code is now heading. Several people will review the code and approve it to be added to the “dev” branch. We will then have others test the “dev” branch with this and any other features or code changes – both people who want to enable this feature, and people who don’t (to make sure we don’t break existing setups). Eventually, after numerous thumbs up from multiple members of the community who have helped us test different use cases, that code from the “dev” branch will be “approved” and will go to the “master” branch of code, where it is available to a more typical user of OpenAPS.

However, not everyone automatically gets this code or will use it. People already running on the master branch won’t get this code or be able to use it until they update their rig. Even then, unless they were to specifically enable this feature (or any other advanced feature), they would not have this particular segment of code drive any of their rig’s behavior.

Where to find out more about oref1, SMB, etc.:

  • We have updated the OpenAPS Reference Design to reflect the differences between oref0 and the oref1 features.
  • OpenAPS documentation about oref1, which as of July 13, 2017 is now part of the master branch of oref0 code.
  • Ask questions! Like all things developed in the OpenAPS community, SMB and oref1-related features will evolve over time. We encourage you to hop into Gitter and ask questions about these features & whether they’re right for you (if you’re DIY closed looping).

Special note of thanks to several people who have contributed to ongoing discussions about SMB, plus the very early testers who have been running this on spare rigs and pumps. Plus always, ongoing thanks to everyone who is contributing and has contributed to OpenAPS development!