2022 Strawberry Fields Forever Ultramarathon Race Report Recap

I recently ran my second-ever 50k ultramarathon. This is my attempt at a race recap or “race report”, in part to help people in the future who are considering this race and this course. (I couldn’t find many race reports about this race!)

It’s also an effort to provide an example of how I executed fueling, enzyme dosing (because I have exocrine pancreatic insufficiency, known as EPI), and blood sugar management (because I have type 1 diabetes), because there’s also not a lot of practical guidance or examples of how people do this. A lot of it is individual, and what works for me won’t necessarily work for anyone else, but if anything, hopefully it will help other people feel less alone as they work to figure out what works for them!

Context of my running and training in preparation

I wrote quite a bit in this previous post about my training last year for a marathon and my first 50k. Basically, I’m slow, and I also choose to run/walk for my training and racing. This year I’ve been doing 30:60 intervals, meaning I run 30 seconds and walk 60 seconds.

Due to a combination of improved training (and having a year of training behind me from last year), as well as now recognizing that I was not getting sufficient pancreatic enzymes and therefore was not digesting and using the food I was eating effectively, this year has been going really well. I ended up training as far as a practice 50k about 5 weeks out from my race. I did several more mid- to high-20-mile runs as well. I also did a next-day run following my long runs, starting around 3-4 miles and eventually increasing to 8 miles the day after my practice 50k. The goal of these next-day runs was to practice running on tired legs.

Overall, I think this training was very effective for me. My training runs were easy paced, and I always felt like I could run more after I was done. I recovered well, and the next-day runs weren’t painful; I did not have to truncate or skip any of those planned runs. (In previous years, running always felt hard, and I didn’t know what it was like to recover “well” until this year.) My paces also improved to about a minute per mile faster than last year’s easy pace. Again, that’s probably a combination of increased running overall and better digestion and recovery.

Last year I chose to run a marathon and then do a 50k while I was “trained up” for my marathon. This year, I wanted to do a 50k as a fitness assessment on the path to a 50 mile race this fall. I looked for local-ish 50k options that did not have much elevation, and found the Strawberry Fields Forever Ultra.

Why I chose this race, and the basics about this race

The Strawberry Fields Forever Ultra met most of my goal criteria, including being around the time I wanted to run a 50k, giving me almost 6 months to train, and before it got too hot or risked falling during wildfire smoke season. (Sadly, that’s a season that now overlaps significantly with the summers here.) It’s local-ish, meaning we could drive to it, although we did spend the night before the race in the area just to save some stress the morning of the race. The race nicely started at 9am, and we drove home in the evening after the race.

The race is on a 10k (6.2 mile) looped course in North Bonneville, Washington, and hosted a 10k event (1 lap), a 50k (5 laps), a 100k (10 laps), and an (almost) 100 mile event (16 laps). It does have a little bit of elevation – or “little” by ultramarathon standards. The site and all reports describe one hill and about 200 feet of elevation gain and loss. I didn’t love the idea of a 200 foot hill, but thought I could make do. The site also describes the course as “grass and dirt” trails. You’ll see a map later where I’ve described some key points on the course, and it’s also worth noting that this course is very “crew-able”. Most people hang out at the start/finish, since it’s “just” a 10k loop and runners come through pretty frequently. However, crew could also walk over to various points for moral or practical support; my husband brought his e-bike and easily biked between points on the course using a mix of the other trails and actual roads nearby.

The course is well marked. Any turn had a white sign with a black arrow on it, as well as white arrows drawn on the ground, and there were dozens of little red/pink fluorescent flags marking the course. Any time there was a fork in the path, these flags (usually 2-3 for emphasis, which was excellent for tired brains) would guide you in the correct direction.

The nice thing about this race is that it includes a 100 mile option with a 30-hour course limit, which means all the other distances also get that 30-hour limit. That’s fantastic, when a lot of 50k or 50 mile (or 100k, which is 62 miles) courses might have 12-hour or similarly tight course limits. If you want a nice long opportunity to cover the distance, with the ability to stop and rest (or nap/sleep), this is a great option.

With the 50k, I was aiming to match or ideally beat my time from my first 50k, recognizing that this course is harder given the terrain and hill. However, I think my fitness is higher, so beating that time even with the elevation gain seemed reasonable.

Special conditions and challenges of the 2022 Strawberry Fields Forever Ultramarathon

It’s worth noting that in 2021 there was a record abnormal heat wave due to a “heat dome” that made it 100+ degrees (F) during the race. Yikes. I read about that, and I am not willing to run a race in heat I have not trained for (or any heat), so I actually waited until the week before the race to officially sign up, after I saw the forecast. The forecast was originally 80 F, then bounced around from the mid 60s to mid 70s, all of which seemed doable. I wouldn’t have minded some rain during the race, either, as rainy 50s and 60s is what I’d been training in for months.

But just to make things interesting, for the 2022 event the Pacific Northwest got an “atmospheric river” that dumped inches of rain on Thursday… and Friday. Gulp. Scott and I drove down Friday to spend the night before the race, and it was pouring. I began to worry about the mud that would be on the course before we even started the race. However, the rain finished overnight, and we woke up to everything being wet but not actively raining. It was actually fairly warm (60s), so even if it drizzled during the race it wouldn’t be chilly.

During the start of the race, the race director said we would get wet and joked (I thought) about practicing our backstroke. Then the race started, and we took off.

My race recap / race report of the 2022 Strawberry Fields Forever Ultramarathon

I’ve included a picture below that I was sent a month or so before the race when I asked for a course map, and a second picture of the elevation profile. I’ve marked points on the course with letters (A-I) that I’ll describe below for reference. We ran counterclockwise this year, so I’ve marked the elevation profile with matching letters, with “A” on the right and “I” on the left, matching how I experienced the course.

The course is slightly different in the start/finish area, but otherwise matches about 95% of what we actually ran, so I didn’t bother grabbing my actual course map from my run since this one was handy and a lot cleaner than my Runkeeper-derived map of the race.

Annotated course map with points A-I
Elevation profile of the Strawberry Fields Forever Ultra course

My Runkeeper elevation profile of the 50k (5 repeated laps) looked like this:
Runkeeper elevation profile of 5 loops on the Strawberry Fields Forever 50k course

I’ll describe my first experience through the course (Lap 1) in more detail, then a couple of thoughts about the experiences of the subsequent laps, in part to describe fueling and other choices I made.

Lap 1:

We left the start by running across the soccer field and getting on a paved path that hooked around the ballfield and then headed out a gate and up The Hill. This was the one hill I thought was on the course. I ran a little bit and passed a few people who were walking on a shallower slope, then I also switched to a walk for the rest of the hill. It was the most crowded race start I’ve done, because there were so many people (150 across the 10k, 50k, 100k, and 100 miler) and such a short distance between the start and this hill. The Hill, as I thought of it, is point A on the course map.

Luckily, heading up the hill there are gorgeous purple wildflowers along the path and mountain views. At the top of the hill there are some benches at the point where we took a left turn and headed down the other side, descending the same elevation over about half a mile, so the downhill was longer (and shallower) than the uphill section. This downhill slope (B) was very runnable and gravel covered, whereas going up the hill was more dirt and mud.

At the bottom of the hill, there was a hairpin turn and we headed back up the hill, although not all the way up, and more along a plateau on the side of the hill. The “plateau” is point C on the map. I thought it would be runnable once I got back up the initial climb, but it was mud pit after mud pit, and I would get two steps of running in between mud pits that I had to carefully walk through. It was really frustrating. I ended up texting my parents and Scott that it was about 1.7 miles of mud (from the uphill and the plateau) before I got to some gravel that was more easily runnable. Woohoo for gravel! This was a nice, short downhill slope (D) before we flattened out and switched back to dirt and more mud pits.

This was the E area, although it did feel more runnable than the plateau because there were longer stretches between muddy sections.

Eventually, we saw the river and came out from the trail into a parking lot, then jogged over onto the trail that parallels the river for a while. This trail, which I thought of as “River Road” (starting around point F), is just mowed grass, running between a sharp bluff drop-off and openings where people were down at the river fishing; in some cases we were running *underneath* fishing lines strung from the parking spots down to the river! There were a few people walking back and forth from their cars to the river, but in general they were all very courteous and there was no obstruction of the trail. Despite the mowed-grass surface, this stretch physically and psychologically felt easier because there were no mud pits for 90% of it. Near the end there were a few muddy areas, right about the point where we hopped back over onto the road to connect to a gravel road for a short spurt.

This year, the race actually put a bonus aid station out here. I didn’t partake, but they had a tent up with two volunteers who were cheerful and kind to passing runners, and it looked like they had giant jugs of Gatorade or water, bottled water, and some sugared soda. They probably had other stuff, but that’s just what I saw when passing.

After that short gravel road bit, we turned back onto a dirt trail that led us to the river. Not the big river we had been running next to, but the place where the Columbia River overflowed the trail and we had to cross it. This is what the race director meant by practicing our backstroke.

You can see a video in this tweet of how deep and how far across this river crossing was (around point G, but hopefully in future years this isn’t a point of interest on the map!!)

Coming out of the river, my feet were like blocks of ice. I cheered up at the thought that I had finished the wet feet portion of the course and I’d dry off before I looped back around and hit the muddy hill and plateau again. But, sadly, just around the next curve, came a mud POND. Not a pit, a pond.

Again, ankle-deep water and mud, not just once but in three different ponds, all within 30 seconds or so of each other. It was really frustrating, and obviously you can’t run through them, so they slowed you down.

Then finally, after the river crossing and the mud ponds, we hooked a right into a nice forest trail that we spent about a mile and a half on (point H). It had a few muddy spots, like you would normally expect on a trail, but nothing ankle deep or water filled. It was a nice relief!

Then we turned out of the forest, crossed a road, and headed up one more hill (point I) – tiny, but it felt annoying despite how small it looks on the elevation profile. We ran down the other side of that slope, stepped across another mud pond onto a pleasing gravel path, and followed the gravel path about 0.3 miles back to complete the first full lap.

Phew.

I actually made pretty good time the first loop despite not knowing about all the mud or river crossing challenges. I was pleased with my time, which was on track with my plan. Scott took my pack about 0.1 miles before I entered the start/finish area and brought it back to me, refilled, as I exited the start/finish area.

Lap 2:

The second lap was pretty similar. The Hill (A) felt remarkably harder after having experienced the first loop. I did try to run more of the downhill (B), recognizing I’d need to make up some time from the walking climb, and knowing I couldn’t run the plateau or some of the mud pits along the plateau (C) as well as I had expected. I also decided that running in the mud pits didn’t work, and went with the safer approach of stepping through them and then running the two steps in between. I was a little slower this time, but still a reasonable pace for my goals.

The rest of the loop was roughly the same as the first: the mud was obnoxious, the river crossing freezing, the mud obnoxious again, and the forest a relief to run through.

Scott met me at the end of the river road and biked along the short gravel section with me, then went ahead so he could park his bike and take video of my second river crossing, which is the video above. I was thrilled to have video of that, because the static pictures of the river crossing didn’t do the depth and breadth of the water justice!

At the end of lap 2, Scott grabbed my pack again and said he’d figured out where to meet me to give it back after the hill…if I wanted that. Yes, please! The bottom of the hill, where you hairpin turn to go back up to the plateau, is the 1 mile marker, which means I ran the first mile of the third lap without my pack. Not having the weight of my full pack (almost 3L of water and lots of snacks and supplies: more on that pack below) was really helpful for my third time up the hill. He met me as planned at the bottom of the downhill (B) and I took my pack back, which made for a much nicer start to lap 3.

Lap 3:

For some reason, on lap 3 I came out of the river crossing and the mud ponds feeling like I had gotten extra mud in my right shoe. It felt gritty around the right side of my right foot, and I was worried about having been running for so many hours with soaked feet. I decided to stop at a bench in the forest section and swap for dry socks. In retrospect, I wish I had stopped somewhere else, because I got swarmed by gross-looking moth/gnat/mosquito things (dozens on my leg within a minute of sitting there) that I couldn’t brush off effectively while I was trying to remove my gaiters, untie my shoes, take my shoes off, peel my socks and bandaids and lambs wool off, put lubrication back on my toes, put more lambs wool on my toes, put the socks and shoes back on, and re-do my gaiters. Sadly, it took me 6 minutes despite moving as fast as I could through all of those steps. (This was a high, weirdly designed bench in a shack that looked like a bus stop in the middle of the woods, so it wasn’t the best place to sit, but I thought it was better than sitting on the ground.)

(The bugs didn’t hurt me at the time, but two days later my dozens of bites all over my leg are red and swollen, though thankfully they only itch when they have something chafing against them.)

Anyway, I stood up and took off again, frustrated knowing that it had taken 6 minutes and basically eaten the margin of time I had against my previous 50k time. I saw Scott about a quarter of a mile later, right as I realized I had also somewhere lost my baggie of electrolyte pills. Argh! I didn’t have backups for those (although I had given Scott backups of everything else), so that spiked my stress levels, as I was due for some electrolytes and wasn’t sure how I’d do with 3 or so more hours without them.

I gave Scott my pack and tasked him with checking my brother-in-law’s setup to see if he had spare electrolytes, while he was refilling my pack to give me in lap 4.

Lap 4:

I was pretty grumpy given the sock timing and the electrolyte mishap as I headed into lap 4. The hill still sucked, but I told myself “only one more hill after this!” and that thought cheered me up.

Scott had found two electrolyte options from my brother-in-law and brought those to me at the end of mile 1 (again, at the bottom of the B slope) with my pack. He had found two chewable tablets and two pills to swallow, so I had options for electrolytes. I chewed the first electrolyte tab as I headed up the plateau, and again talked myself through the mud pits with “only one more time through the mud pits after this!”.

I also tried to bounce back from the end of lap 3, where I had let myself get frustrated, and to take more advantage of the runnable parts of the course. I ran more of the downhill (B) than on the previous laps, mostly ignoring the audio cues of my 30:60 intervals and probably running more like 45:30 or so. Similarly, I ran most of the downhill gravel after the mud pits (D) without paying attention to the audio run cues.

This time Scott also met me at the start of the river road section, and I gave him my pack again and asked him to take out some things he had put in. He had put in a bag with two pairs of replacement socks instead of just one pair, and had also added an extra beef stick even though I didn’t ask for it. I asked him to remove it, and he did, but he explained he had put it in just in case he didn’t find the electrolytes, because it had 375mg of sodium. (Sodium is the electrolyte I am primarily sensitive to and care most about.) So this was actually a smart thing, although because I haven’t practiced eating larger amounts of protein, and dosing enzymes for it, on the run, I would be pretty nervous about eating it in a race, so it made me a bit unnecessarily grumpy. Overall, though, it was great to see him extra times on the course at this point, and I don’t know if he noticed how grumpy I was, but if he did he ignored it, and I cheered up again knowing I only had “one more” of everything after this lap!

The other thing that helped was that he biked my pack down the road to just before the river crossing, so I ran the river road section without a pack, like I had run the hill on laps 3 and 4. This gave me more energy, and I found myself adding 5-10 seconds to the start of my run intervals to extend them.

The 4th river crossing was no less obnoxious and cold, but this time it and the mud ponds didn’t seem to embed grit inside my shoes, so I knew I could finish the race with the same pair of socks and not need another change.

Lap 5:

I was so glad I was only running the 50k so that I only had 5 laps to do!

For the last lap, I was determined to finish strong. I thought I had a chance of making up a tiny bit of the sock-change time I had lost. I walked up the hill, but again ran more than my scheduled intervals on the downhill, grabbed my bag from Scott, picked my way across the mud pits for the final time (woohoo!), ran the downhill, and ran a little longer and more efficiently on the single track to the river road.

Scott took my pack again at the river road, and I swapped my intervals to 30:45, since I was already running closer to that and I knew I only had 3.5 or so miles to go. I took my pack back at the end of river road and did my last ice-cold river crossing and mud pond extravaganza. After I left the last mud pond and turned into the forest, I switched my intervals to 30:30. I managed to keep my 30:30 intervals and stayed pretty quick – my last mile and a half was the fastest of the entire race!

I came into the finish line strong, as I had hoped to finish. Woohoo!

Overall strengths and positives from the race

Overall, running-wise I performed fairly well. I had a strong first lap and decent second lap, and I got more efficient on the laps as I went, staying focused and taking advantage of the more runnable parts of the course. I finished strong, with 30:45 intervals for over a mile and 30:30 intervals for over a mile to the finish.

Also, I didn’t quit after experiencing the river crossing and the mud ponds and the mud pits of the first lap. This wasn’t an “A” race for me or my first time at the distance, so it would’ve been really easy to quit. I probably didn’t in part because we had paid to spend the night before and driven all that way, and I didn’t want to have “wasted” Scott’s time by quitting when I was very capable of continuing and wasn’t injured. But mostly I’m proud of the way I handled the challenges of the course, and of how I readjusted after the mental low and frustration of realizing how long my sock change took in lap 3. I’m also pleased that I didn’t get injured, given the terrain (mud, river crossing, and uneven grass to run on for most of the course). And I’m pleased and amazed that I didn’t hurt my feet, cause major blisters, or have anything else really happen to them after hours of wet, muddy, never-drying-off conditions.

The huge positive was my fueling, electrolytes, and blood glucose management.

I started taking my electrolyte pills, which have 200+mg of sodium each, about 45 minutes into the race, on schedule. My snack choices also have 100-150mg of sodium, which allowed me to not take electrolyte pills as often as I would otherwise need to (or as often as I’d need on a hotter day with more sweat – it was a damp, mid-60s day and I didn’t sweat as much as I usually do). Even with losing my electrolyte baggie, I used two chewable 100mg sodium electrolytes instead and ended up with sufficient electrolytes overall. Even with ideal electrolyte supplementation, I’m very sensitive to sodium losses and am a salty sweater, and I have a distinct feeling when my electrolytes are insufficient, so not having that feeling during or after the race was a big positive for me.

So was my fueling overall. The race started at 9am, and I woke up at 6am to eat my usual pre-race breakfast (a handful of pecans, plus my enzyme supplementation) so that it would both digest effectively and also be done hitting my blood sugar by the time the race started. My BGs were flat in the 120s or 130s when I started, which is how I like them.

I took my first snack about an hour and 10 minutes into the race: about 15g carb (10g fat, 2g protein) of chili cheese flavored Fritos. For this, I didn’t dose any insulin as I was in range, and I took one lipase-only enzyme (which covers about 8g of fat for me) and one multi-enzyme (which covers about 6g of fat and probably over a dozen grams of protein). My second snack was an hour later, when I had a gluten free salted caramel Honey Stinger stroopwaffle (21g carb, 6g fat, 1g protein). For the stroopwaffle I ended up only taking a lipase-only pill to cover the fat, even though there’s 1g of protein. I seem to be ok (or have no symptoms) with 2-3g of uncovered fat and 1-2g of uncovered protein; anything more than that I like to dose enzymes for, although it depends on the situation. Throughout the day, I always did 1 lipase-only and 1 multi-enzyme for the Fritos, and 1 lipase-only for the stroopwaffle, and that seemed to work fine for me. I think I did a 0.3u bolus (less than a third of the total insulin I would normally need) for my stroopwaffle because I was around 150 mg/dL at the time, having risen after the Frito snack I hadn’t dosed insulin for, and I thought I would need a tiny bit of insulin. This was perfect, and I came back down and flattened out.

An hour and 20 minutes after that, I did another round of Fritos. An hour or so after that, a second stroopwaffle – but this time I didn’t dose any insulin for it as my BG was on a downward slope. An hour later, more Fritos. A little bit after that, I did my one single sugar-only correction (an 8g carb Airhead mini) as I was still sliding down toward 90 mg/dL, and while that’s nowhere near low, I thought my Fritos might hit a little late and I wanted to be sure I didn’t experience the feeling of a low. This was during the latter half of loop 4, when I was starting to increase my intensity, so I also knew I’d likely burn a little more glucose and it would balance out – and it did! I did one last round of Fritos during lap 5.
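To make the enzyme math above a little more concrete, here’s a minimal sketch (in Python, purely for illustration) of how I think about checking whether a snack-plus-pills combination stays within the uncovered fat and protein amounts I mentioned. The coverage and tolerance numbers are my personal estimates quoted above; the function and variable names are made up for this example, and none of this is dosing guidance for anyone else.

```python
# A rough sketch of the per-snack coverage check described above.
# The constants are my personal estimates from the text; illustrative only,
# not dosing guidance -- everyone's enzyme needs are different.

LIPASE_ONLY_FAT = 8        # grams of fat one lipase-only pill covers for me ("about 8g")
MULTI_FAT = 6              # grams of fat one multi-enzyme pill covers for me ("about 6g")
MULTI_PROTEIN = 12         # grams of protein one multi-enzyme pill covers for me ("over a dozen")
OK_UNCOVERED_FAT = 3       # grams of fat I can leave uncovered without symptoms ("2-3g")
OK_UNCOVERED_PROTEIN = 2   # grams of protein I can leave uncovered ("1-2g")

def uncovered(fat_g, protein_g, lipase_pills, multi_pills):
    """Return (uncovered fat, uncovered protein, within my tolerance?) for one snack."""
    fat_left = max(0, fat_g - lipase_pills * LIPASE_ONLY_FAT - multi_pills * MULTI_FAT)
    protein_left = max(0, protein_g - multi_pills * MULTI_PROTEIN)
    ok = fat_left <= OK_UNCOVERED_FAT and protein_left <= OK_UNCOVERED_PROTEIN
    return fat_left, protein_left, ok

# Fritos serving (15g carb, 10g fat, 2g protein) with 1 lipase-only + 1 multi-enzyme:
print(uncovered(10, 2, lipase_pills=1, multi_pills=1))  # (0, 0, True)

# Stroopwaffle (21g carb, 6g fat, 1g protein) with just 1 lipase-only pill:
print(uncovered(6, 1, lipase_pills=1, multi_pills=0))   # (0, 1, True) -- 1g protein uncovered, within tolerance
```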
CGM graph during 50k ultramarathon

This all worked perfectly. I had 100% time in range between 90 and 150 mg/dL, even with 102g of “real food” carbs (15g x 4 servings of Fritos, 21g x 2 waffles) plus one 8g Airhead mini, so in total I had 110 grams of carbs across 7+ hours. This perfectly matched my needs for my run/walk moderate efforts.

BG and carb intake plotted along CGM graph during 50k ultramarathon

I also nailed the enzymes: I didn’t have any GI-related symptoms during the race, and no symptoms after the race or the next day (which is the ultimate verdict for me with EPI).

So it seems like my practice and testing with low carbs, Fritos, and waffles worked out well! I had a few other snacks in my pack (yogurt-covered pretzels, peanut butter pretzel nuggets), but I never found myself wanting them or wanting something different. I did plan to try to eat 2 snacks per hour, but I ended up doing about 1 per hour. I probably could have tolerated more, but I wasn’t hungry and my BGs were great, so although it wasn’t quite my original plan, I think this was ideal for me and my effort level on race day.

The final thing I think went well was deciding on the fly, after loop 2, to have Scott take my pack until after the hill (so I ran the up/downhill mile without it), and then for additional stretches along river road in laps 4 and 5. I had the pocket of my shorts packed with dozens of Airheads and mints, so I was fine in terms of blood sugar management and definitely didn’t need supplies for a mile at a time. I’m usually concerned about staying hydrated and having water whenever I want to sip, plus for swallowing electrolytes and enzyme pills to go with my snacks. But on this course, with the number of points where Scott could meet me (after B, at F all through G, and from I to the finish), I could have gotten away with not having my pack the whole time, carrying WAY less water (I definitely didn’t need to haul 3L the whole time; that was packed assuming I might only see Scott every 2-3 laps), and carrying only one of each snack at a time.

Areas for improvement from my race

I trained primarily on gravel or paved trails and roads, but despite the “easy” elevation profile and terrain, this was essentially my first trail ultra. I coped really well with the terrain, but the cognitive burden of all the challenges (Mud pits! River crossing! Mud ponds!) added up. I’d probably do a little more trail running and hills (although I did some) in the final weeks before the race to help condition my brain a little more.

I’ll also continue to practice fueling so I can eat more regularly than every hour to hour and a half. This was the most I’ve ever eaten during a run, I did well with the quantities, and my enzyme and BG management were also A+. But I didn’t eat as much as I had planned, and I think eating more might’ve helped with the cognitive fatigue, too, by at least 5-10%.

I also now have the experience of a “stop” during a race, in this case to swap my socks. I had only run one ultra before and had never stopped to do gear changes, so that experience was probably useful prep for future stops, although I do want to be mentally stronger and less frustrated by unanticipated problem-solving stops.

Specific to this course, as mentioned above, I could’ve gotten away with fewer supplies – food and water – in my pack. I actually ran a Ragnar relay race with a group of fellow T1s a few years back where I finished my run segment and…no one was there to meet me. They had gone for Starbucks and took too long to get back, so I had to stand in the finishing chute waiting for 10-15 minutes until someone showed up to start the next run leg. Oh, and that happened in two of the three legs I ran that day. Ooof. Standing there tired, hot, with nothing to eat or drink, likely added to my already life-with-type-1-diabetes-driven habit of always carrying more than enough stuff. But I could’ve gotten away very comfortably with carrying 1L of water and one of each type of snack at a time, given that Scott could meet me at mile 1 (end of B), at the start (F) and end of river road (before G), and at the finish. I would never have been more than 2-2.5 miles from a refill, and honestly he could’ve gotten to every spot on the trail, barring the river crossing bit, to meet me if I was really in need of something. Less weight would’ve made it easier to push a little harder along the way. Basically, I carried gear like I was running a solo 30 mile effort, which was safe but not necessary given the course. If I re-ran this race, I’d feel a lot more comfortable with minimal supplies.

Surprises from my race

I crossed the finish line, stopped to get my medal, then waited for my brother-in-law to finish another lap (he ran the 100k: 62 miles) before Scott and I left. I sat down for 30 minutes and then walked to the car, and despite sitting for a while, I was not as stiff and sore as I expected. Getting home after a 3.5 hour car ride…again, I was shocked at how minimally stiff I was walking into the house. The next morning? More surprise at how little stiffness and soreness I had. By day 3, I felt like I had run a normal week the week prior. So in general, I think this reinforces that I trained really well for the distance, and that my long runs up to 50k and the short-to-medium next-day runs also likely helped. I physically recovered well, which is again partly training, but also probably better fueling during the race, and of course now actually digesting everything I ate during and after the race thanks to enzyme supplementation for EPI!

However, the interesting (almost negative, but mostly interesting) thing for me has been what I perceive to be adrenal-type or stress-hormone fatigue. I think it’s because I was unused to focusing on challenging trail conditions for so many hours, compared to running the same number of hours on “easy” paved or gravel trails. I actually didn’t listen to an audiobook, music, or a podcast for about half of the race, because I was so stimulated by the course itself. What feels like adrenal fatigue isn’t just being physically or mentally tired but something different that I haven’t experienced before. I’m listening to my body and resting a lot, and I waited until day 4 to do my first easy, slow run with much longer walk intervals (30s run, 90s walk instead of my usual 30:60). Days 1 and 2 brought a lot of fatigue and I didn’t feel like doing much; day 3 had notable improvement in fatigue and my legs and body physically felt back to normal for me. Day 4 I ran slowly; day 5 I stuck with walking and felt more fatigue but no physical issues; day 6 I again chose to walk because I didn’t feel like my energy had fully returned. I’ll probably stick with easy, longer-walk-interval runs for the next week or two, with fewer days running, until I feel like my fatigue is gone.

General thoughts about ultramarathon training and effective ultra race preparation

I think preparation makes a difference in ultramarathon running. Or maybe that’s just my personality? But a lot of my goal for this race was to learn what I could about the course and the race setup, imagine and plan for the experience I wanted, plan for problem solving (blisters, fuel, enzymes, BGs, etc), and be ready and able to adapt while being aware that I’d likely be tired and mentally fatigued. Generally, any preparation I could do in terms of deciding and making plans, preparing supplies, etc would be beneficial.

Some of the preparation included making lists in the weeks prior about the supplies I’d need in my pack, what Scott should have to refill my pack, what I’d need the night and morning before since we would not be at home, and after-race supplies for the 3.5h drive home.

From the lists, the week before the race I began grouping things. I had my running pack filled and ready to go. I packed my race outfit in a gallon bag and a full set of backup clothes in another gallon bag and labeled them, along with a separate post-run outfit and flip flops for the drive home. I also included a washcloth for wiping sweat or mud off after the run, and I certainly ended up needing that! I packed an extra pair of shoes and about 4 extra pairs of socks. I also had separate baggies with bandaids of different sizes, pre-cut strips of kinesio tape for my leg and smaller patches for blisters, extra Squirrel’s Nut Butter sticks for anti-chafing purposes, as well as extra lambs wool (which I lay across the top of my toes to prevent socks from rubbing when they get wet from sweat or…river crossings, plus I can use it for padding between my toes or other blister-developing spots). I had sunscreen, bug spray, sunglasses, a rain hat, and my sunny-weather running visor that wicks away sweat. I had low BG carbs for me to put in my pockets, a backup bag for Scott to refill, and a backup to the backup. The same for my fuel stash: my backpack was packed, I packed a small baggie for Scott as well as a larger bag with 5-7 of everything I thought I might want, and also an emergency backup baggie of enzymes.

*The only thing I didn’t have was a backup baggie of electrolyte pills. Next time, I’ll add this to my list and treat them like enzymes to make sure I have a separate backup stash.

I even made a list for Scott that mapped out where key things were for during and after the race. I don’t think he had to use it, because he was only digging through the snack bag for waffles and Fritos, but I did it so I didn’t have to remember where I had put my extra socks or my spare bandaids, etc. He basically had a map of what was in each larger bag. All of this was to reduce decision-making and communication overhead, because I knew I’d have decision fatigue.

This also went for post-race planning. I told Scott to encourage me to change clothes, and it was worth the energy to change so I didn’t sit in cold, wet clothes for the long drive home. I pre-made a gluten free ham and cheese quesadilla (take two tortillas, fill with shredded cheese and slices of ham, microwave, cut into quarters, stick in baggies, mark with fat/protein/carb counts, and refrigerate) so we could warm it up in the car (this is what I use), giving me something to eat on the way home that wasn’t more Fritos or waffles. I didn’t end up wanting it, but I also brought a can of beef stew with carrots and potatoes, which I generally like as a post-race or post-run meal, plus a plastic container and a spoon so I could warm up the stew if I wanted it. Again, all of this was pre-planned and put on the list weeks before the race so I didn’t forget things like the container or the spoon.

The other thing I think about a lot is practicing everything I want to do for a race during a training run. People talk about eating the same foods, wearing the same clothes, etc. I think for those of us with type 1 diabetes (or celiac, EPI, or anything else), it’s even more important. With T1D, it’s so helpful to have the experience adjusting to changing BG levels and knowing what to do when you’re dropping or low and having a snack, vs in range and having a fueling snack, or high and having a fueling snack. I had 100% TIR during this run, but I didn’t have that during all of my training runs. Sometimes I’d plateau around 180 mg/dL and be over-cautious and not bring my BGs down effectively; other times I’d overshoot and cause a drop that required extra carbs to prevent or minimize a low. Lots of practice went into making this 100% TIR day happen, and some of it was probably a bit of luck mixed in with all the practice!

But generally, practice makes it a lot easier to know what to do on the fly during a race when you’re tired, stressed, and maybe crossing an icy cold river that wasn’t supposed to be part of your course experience. All that helps you make the best possible decisions in the weirdest of situations. That’s the best you can hope for with ultrarunning!

Findings from the world’s first RCT on open source AID (the CREATE trial) presented at #ADA2022

(You can also see a Twitter thread here summarizing the study results, if you are interested in sharing the study with your networks.)

TLDR: The CREATE Trial was a multi-site, open-label, randomized, parallel-group, 24-week superiority trial evaluating the efficacy and safety of an open-source AID system using the OpenAPS algorithm in a modified version of AndroidAPS. Our study found that across children and adults, the percentage of time that the glucose level was in the target range of 3.9-10mmol/L [70-180mg/dL] was 14 percentage points higher among those who used the open-source AID system (95% confidence interval [CI], 9.2 to 18.8; P<0.001) compared to those who used sensor augmented pump therapy; a difference that corresponds to 3 hours 21 minutes more time spent in target range per day. The system did not contribute to any additional hypoglycemia. Glycemic improvements were evident within the first week and were maintained over the 24-week trial. This illustrates that all people with T1D, irrespective of their level of engagement with diabetes self-care and/or previous glycemic outcomes, stand to benefit from AID. This study concluded that open-source AID using the OpenAPS algorithm within a modified version of AndroidAPS, a widely used open-source AID solution, is efficacious and safe.
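If you want to double-check how a time-in-range difference expressed in percentage points converts to time per day (the study values above are reported this way), here is a small illustrative calculation; the helper function name below is just made up for this example.

```python
# Convert a time-in-range difference in percentage points into time per day.
# 14.0 percentage points of a 24-hour day = 0.14 * 1440 minutes = 201.6 minutes.

def pp_to_hours_minutes(percentage_points):
    """Return (hours, minutes) per day corresponding to a TIR difference."""
    total_minutes = percentage_points / 100 * 24 * 60
    return int(total_minutes // 60), int(total_minutes % 60)

print(pp_to_hours_minutes(14.0))  # (3, 21) -> roughly 3 hours 21 minutes more per day
```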

The backstory on this study

We developed the first open source AID in late 2014 and shared it with the world as OpenAPS in February 2015. It went from n=1 to (n=1)*2 and up from there. Over time, there were requests for data to help answer the question “how do you know it works (for anybody else)?”. This led to the first survey in the OpenAPS community (published here), followed by additional retrospective studies such as this one analyzing data donated by the community, prospective studies, and even an in silico study of the algorithm. Thousands of users chose open source AID, first because there was no commercial AID, and later because open source AID such as the OpenAPS algorithm was more advanced or had interoperability features or other benefits such as quality of life improvements that they could not find in commercial AID (or because they were still restricted from being able to access or afford commercial AID options). The pile of evidence kept growing, and each study has shown safety and efficacy matching or surpassing commercial AID systems (such as in this study), yet still, there was always the “but there’s no RCT showing safety!” response.

After Martin de Bock saw me present about OpenAPS and open source AID at ADA Scientific Sessions in 2018, we literally spent an evening at the dinner table drawing the OpenAPS algorithm on a napkin to illustrate how OpenAPS works in fine-grained detail (as much as one can do in napkin drawings!) and dreamed up the idea of an RCT in New Zealand to study the open source AID system so many were using. We sought and were granted funding from New Zealand’s Health Research Council, published our protocol, and commenced the study.

This is my high level summary of the study and some significant aspects of it.

Study Design:

This study was a 24-week, multi-centre randomized controlled trial in children (7–15 years) and adults (16–70 years) with type 1 diabetes comparing open-source AID (using the OpenAPS algorithm within a version of AndroidAPS implemented in a smartphone with the DANA-i™ insulin pump and Dexcom G6® CGM), to sensor augmented pump therapy. The primary outcome was change in the percent of time in target sensor glucose range (3.9-10mmol/L [70-180mg/dL]) from run-in to the last two weeks of the randomized controlled trial.

  • This is a LONG study, designed to look for rare adverse events.
  • This study used the OpenAPS algorithm within a modified version of AndroidAPS, meaning the learning objectives were adapted for the purpose of the study. Participants spent at least 72 hours in “predictive low glucose suspend mode” (known as PLGM), which corrects for hypoglycemia but not hyperglycemia, before proceeding to the next stage of closed loop which also then corrected for hyperglycemia.
  • The full feature set of OpenAPS and AndroidAPS, including “supermicroboluses” (SMB) were able to be used by participants throughout the study.

Results:

Ninety-seven participants (48 children and 49 adults) were randomized.

Among adults, mean time in range (±SD) at study end was 74.5±11.9% using AID (Δ+ 9.6±11.8% from run-in; P<0.001) with 68% achieving a time in range of >70%.

Among children, mean time in range at study end was 67.5±11.5% (Δ+ 9.9±14.9% from run-in; P<0.001) with 50% achieving a time in range of >70%.

Mean time in range at study end for the control arm was 56.5±14.2% and 52.5±17.5% for adults and children respectively, with no improvement from run-in. No severe hypoglycemic or DKA events occurred in either arm. Two participants (one adult and one child) withdrew from AID due to frustrations with hardware issues.

  • The pump used in the study initially had an issue with the battery, and there were lots of pumps that needed refurbishment at the start of the study.
  • Aside from these pump issues, and standard pump site/cannula issues throughout the study (that are not unique to AID), there were no adverse events reported related to the algorithm or automated insulin delivery.
  • Only two participants withdrew from AID, due to frustration with pump hardware.
  • No severe hypoglycemia or DKA events occurred in either study arm!
  • In fact, use of open source AID improved time in range without causing additional hypoglycemia, which has long been a concern of critics of open source (and all types of) AID.
  • Time spent in ‘level 1’ and ‘level 2’ hyperglycemia was significantly lower in the AID group as well compared to the control group.

In the primary analysis, the mean (±SD) percentage of time that the glucose level was in the target range (3.9 – 10mmol/L [70-180mg/dL]) increased from 61.2±12.3% during run-in to 71.2±12.1% during the final 2-weeks of the trial in the AID group and decreased from 57.7±14.3% to 54±16% in the control group, with a mean adjusted difference (AID minus control at end of study) of 14.0 percentage points (95% confidence interval [CI], 9.2 to 18.8; P<0.001). No age interaction was detected, which suggests that adults and children benefited from AID similarly.

  • The CREATE study found that across children and adults, the percentage of time that the glucose level was in the target range of 3.9-10mmol/L [70-180mg/dL] was 14.0 percentage points higher among those who used the open-source AID system compared to those who used sensor augmented pump therapy.
  • This difference reflects 3 hours 21 minutes more time spent in target range per day!
  • For children AID users, they spent 3 hours 1 minute more time in target range daily (95% CI, 1h 22m to 4h 41m).
  • For adult AID users, they spent 3 hours 41 minutes more time in target range daily (95% CI, 2h 4m to 5h 18m).
  • Glycemic improvements were evident within the first week and were maintained over the 24-week trial. Meaning: things got better quickly and stayed so through the entire 24-week time period of the trial!
  • AID was most effective at night.
Difference between control and AID arms overall, and during day and night separately, of TIR for overall, adults, and kids

One thing I think is worth making note of is that one criticism of previous studies with open source AID is regarding the self-selection effect. There is the theory that people do better with open source AID because of self-selection and self-motivation. However, the CREATE study recruited a diverse cohort of participants, and the study findings (as described above) match all previous reports of safety and efficacy outcomes from previous studies. The CREATE study also found that the greatest improvements in TIR were seen in participants with lowest TIR at baseline. This means one major finding of the CREATE study is that all people with T1D, irrespective of their level of engagement with diabetes self-care and/or previous glycemic outcomes, stand to benefit from AID.

This therefore means there should be NO gatekeeping by healthcare providers or the healthcare system to restrict AID technology from people with insulin-requiring diabetes, regardless of their outcomes or experiences with previous diabetes treatment modalities.

There was also no age effect observed in the trial, meaning that the results of the CREATE Trial demonstrated that open-source AID is safe and effective in children and adults with type 1 diabetes. If someone wants to use open source AID, they would likely benefit, regardless of age or past diabetes experiences. If they don’t want to use open source AID or commercial AID…they don’t have to! But the choice should 100% be theirs.

In summary:

  • The CREATE trial was the first RCT to look at open source AID, after years of interest in such a study to complement the dozens of other studies evaluating open source AID.
  • The conclusion of the CREATE trial is that open-source AID using the OpenAPS algorithm within a version of AndroidAPS, a widely used open-source AID solution, appears safe and effective.
  • The CREATE trial found that across children and adults, the percentage of time that the glucose level was in the target range of 3.9-10mmol/L [70-180mg/dL] was 14.0 percentage points higher among those who used the open-source AID system compared to those who used sensor augmented pump therapy; a difference that reflects 3 hours 21 minutes more time spent in target range per day.
  • The study recruited a diverse cohort, yet still produced glycemic outcomes consistent with existing open-source AID literature, and that compare favorably to commercially available AID systems. Therefore, the CREATE Trial indicates that a range of people with type 1 diabetes might benefit from open-source AID solutions.

Huge thanks to each and every participant and their families for their contributions to this study! And ditto, big thanks to the amazing, multidisciplinary CREATE study team for their work on this study.

Note that the continuation phase study results are slated to be presented this fall at another conference!

Findings from the RCT on open source AID, the CREATE Trial, presented at #ADA2022

Looking back at work and accomplishments in 2021

I decided to do a look back at the last year’s worth of work, in part because it was a(nother) weird year in the world and also because, if you’re interested in my work, unless you read every single Tweet, there may have been a few things you missed that are of interest!

In general, I set goals every year that stretch across personal and professional efforts. This includes a daily physical activity streak that coincides with my walking and running lots of miles this year in pursuit of my second marathon and first (50k) ultramarathon. It’s good for my mental and physical health, which is why I post almost daily updates to help keep myself accountable. I also set goals like “do something creative” which could be personal (last year, knitting a new niece a purple baby blanket ticked the box on this goal!) or professional. This year, it was primarily professional creativity that accomplished this goal (more on that below).

Here are some specifics about the goals I accomplished:

RUNNING

  • My initial goal was training ‘consistently and better’ than I did for my first marathon, with 400 miles as my stretch goal if I was successfully training for the marathon. (Otherwise, 200 miles for the year would be the goal without a marathon.) My biggest-ever running year in 2013 with my first marathon was 356 miles, so that was a good big goal for me. I achieved it in June!
  • I completed my second marathon in July, and PR’d by over half an hour.
  • I completed my first-ever ultramarathon, a 50k!
  • I re-set my mileage goal after achieving 400 miles… to 500… 600… etc. I ultimately achieved the biggest mileage goal I’ve ever hit and think I ever will hit: I ran 1,000 miles in a single year!
  • I wrote lots of details about my methods of running (primarily, run/walking) and running with diabetes here. If you’re looking for someone to cheer you on as you set a goal for daily activity, like walking, or learning to run, or returning to running…DM or @ me on Twitter (@DanaMLewis). I love to cheer people on as they work toward their activity goals! It helps keep me inspired, too, to keep aiming at my own goals.

CREATIVITY

  • My efforts to be creative were primarily on the professional side this year. The “Convening The Center” project accounted for 2 of the 3 things I categorized as creative. The first was the design of the digital activities and the experience of CTC overall (more about that here). The second was the set of items in the physical “kit” we mailed out to participants: we brainstormed and created custom playing cards and custom physical keychains. They were really fun to make, especially in partnership with our excellent project artist, Rebeka Ryvola, who did the actual design work!
  • My third “creative” endeavor was a presentation, but it was unlike the presentations I usually give. I was tasked with creating a presentation that was “visually engaging” and would not involve showing my face. I’ve linked to the video below in the presentations section, but it was a lot of work to figure out how to create a visually and auditorily focused presentation and make it engaging, and I’m proud of how it turned out!

RESEARCH AND PUBLICATIONS

  • This is where the bulk of my professional work sits right now. I continue to be a PI on the CREATE trial, the world’s first randomized controlled trial assessing open-source automated insulin delivery technology, including the algorithm Scott and I dreamed up and that I have been using every day for the past 7 years. The first data from the trial itself is forthcoming in 2022.
  • Convening The Center also was a grant-funded project that we turned into research with a publication that we submitted, assessing more of what patients “do”, which is typically not assessed by researchers and those looking at patient engagement in research or innovation. Hopefully, the publication of the research article we just submitted will become a 2022 milestone! In the meantime, you can read our report from the project here (https://bit.ly/305iQ1W ), as this grant-funded project is now completed.
  • Goal-wise, I aim to generate a few publications every year. I do not work for any organization and I am not an academic. However, I come from a communications background and see the benefit of reaching different audiences where they are, which is why I write blog posts for the patient community and also seek to disseminate knowledge to the research and clinical communities through traditional peer-reviewed literature. You can see past years’ research articulated on my research page (DIYPS.org/research), but here’s a highlight of some of the 2021 publications:
  • Also, although I’m not a traditional academic researcher, I also participate in the peer review process and frequently get asked to peer-review submitted articles to a variety of journals. I skimmed my email and it looks like I completed (at least) 13 peer reviews, most of which included also reviewing subsequent revisions of those submitted articles. So it looks like my rate of peer reviewing (currently) is matching my rate of publishing. I typically get asked to review articles related to open-source or DIY diabetes technology (OpenAPS, AndroidAPS, Loop, Nightscout, and other efforts), citizen science in healthcare, patient-led research or patient engagement in research, digital health, and diabetes data science. If you’re submitting articles on that topic, you’re welcome to recommend me as a potential reviewer.

PRESENTATIONS

  • I continued to give a lot of virtual presentations this year, such as at conferences like the “Insulin100” celebration conference (you can see the copy I recorded of my conference presentation here). I keynoted at the European Patients Forum Congress as well as at ADA’s Precision Diabetes Medicine 2021; gave an invited talk at ADA Scientific Sessions (session coverage here); presented at the 2021 Federal Wearables Summit (video here); and spoke at the BIH Clinician Scientist Symposium (video here), to name a few (but not all).
  • Additionally, as I mentioned, one of the presentations I’m most proud of was created for the Fall 2021 #DData Exchange event:

OTHER STUFF

I did quite a few other small projects that don’t fit neatly into the above categories.

One final thing I’m excited to share is that also in 2021, Amazon came out with a beta program for producing hardcover/hardback books, alongside the ability to print paperback books on demand (and of course Kindle). So, you can now buy a copy of my book about Automated Insulin Delivery: How artificial pancreas “closed loop” systems can aid you in living with diabetes in paperback, hardback, or on Kindle. (You can also, still, read it 100% for free online via your phone or desktop at ArtificialPancreasBook.com, or download a PDF for free to read on your device of choice. Thousands of people have downloaded the PDF!)

Now available in hardcover, the book about Automated Insulin Delivery by Dana M. Lewis

Designing digital interactive activities that aren’t traditional icebreakers

A participant from Convening The Center recently emailed and asked what technology we had used for some of our interactive components within the phase 2 and 3 gatherings for the project. The short answer was “Google Slides” but there was a lot more that went into the choice of tech and the design of activities, so I ended up writing this blog post in case it was helpful to anyone else looking for ideas for interactive activities, new icebreakers for the digital era, etc.

Design context:

We held four small (8 people max) gatherings during “Phase 2” of CTC and one large (25 participants) gathering for “Phase 3”, and used Zoom as our videoconference platform of choice. But throughout the project, we knew we were bringing together random strangers to a meeting with no agenda (more about the project here, for background), and wanted to have ways to help people introduce themselves without relying on rote introductions that often fall back to name, title/organization (which often did not exist in this context!), or similar credentials.

We also had a few activities during the meeting where we wanted people to interact, and so the “icebreakers” (so to speak) were a low-stress way to introduce people to the types of activities we’d repeat later in the meeting.

Technology choice:

I’ve seen people use Jamboard (made by Google) for this purpose (icebreakers or introductory activities), and it was one that came to mind. However, I’ve been a participant on a Jamboard for a different type of meeting, and there are a few problems with it. There’s a limit to the number of participants; it requires participants to create the item they want to put on the board (e.g. figure out how to add a sticky note); and the examples I’ve seen ended up using it in a very binary way, content-wise. That in some cases was due to the people designing the activity (more on content design below), but given that we also wanted to use Google Slides to display information to participants and enable notetaking in the same location, it became easy to replicate the basic functionality in Google Slides instead. (PS – this article was helpful for comparing the pros/cons of Jamboard and Google Slides.)

Content choices:

The “icebreakers” we chose served a few purposes. One, as mentioned above, was familiarizing people with the platform so we could use it for meeting-related activities. The other was the point of traditional icebreakers, which is to help everyone feel comfortable and also enable people to introduce themselves. That being said, most of the time introductions rely on credentials, and this was specifically a credential-less or non-credential-focused gathering, so we brainstormed quite a bit to think of what type of activities would allow people to get comfortable interacting with Google Slides and also introduce themselves in non-stressful ways.

The first activity we did for the small groups used a world map image: we asked people to drag and drop their image to answer “if you could be anywhere in the world right now, where would you be?”. (I had asked all participants to send some kind of image in advance, and if they didn’t, I supplied an image and told them what it was during the meeting.) I had the images lined up to the side of the map, and in this screenshot you can see the before and after from one of the groups where they dragged and dropped their images.

Visual of a world map with images representing individuals and different places they want to be in the world

The second activity was a slide where we asked everyone to type “one boring or uninteresting fact about themselves”. Again, this was a pushback against traditional activities of “introduce yourself by credentials/past work” that feel performative and competitive. I had everyone’s names listed on the slide, so each person could type in their fact. It ended up being a really fun discussion and we got to see people’s personalities early on! In some cases, we had people drop in images (see screenshot of example) when there was cross-cultural confusion about the name of something, such as the name of a vegetable that varies worldwide! (In this case, it was okra!)

List of people's names and a boring fact about themselves

We also did the same type of “type in” activity for “Ask me about my expertise in…”, asking people to share an expertise they have, personally or professionally. This is the closest we got to ‘traditional’ introductions, but instead of being about titles and organizations, it was about expertise in activities.

Finally, we did the activity most related to our meeting – the one I had wanted people to be comfortable dragging and dropping their images for. We had a slide, again with everyone’s image present, and a variety of types of activities listed. We asked participants “where do you spend most of your time now?”, and participants dragged and dropped their images accordingly. In some cases, they duplicated their image (right click, then duplicate, in Google Slides) to put themselves in multiple categories. We also had an “other” category listed where people could add additional core activities.

Example of slide activity where people drag their image to portray activities they're doing now and want to do in the future

Then we had another slide asking where they want to spend most of their time in the future. The point of this was to be able to switch back and forth between the two slides and visualize the changes for group members – and also so they could see what types of activities their fellow participants might have experience in.

Some of these activities are similar to what you might do in person at meetings by “dot voting” on topics. This type of slide is a way to achieve the same type of interactivity digitally.

Facilitating or moderating these types of interactive activities

In addition to choosing and designing these activities, I also feel that moderating or facilitating them played a big role in their success for this project.

As I mentioned in the technology choice section, I’ve previously been a participant in other meeting-driven activities (using Jamboard or other tech) where the questions/activities were binary and unrelated to the meeting. Questions such as “are you a dog or cat person? Pick one.” or “Is a hot dog a sandwich?” are binary, and in some cases a meeting facilitator may fall into the trap of then ascribing characteristics to participants based on their response. In a meeting where you’re trying to use these activities to create a comfortable environment for participation amongst virtual strangers, that can backfire and cause people to shut down and limit their participation in the rest of the meeting.

As a result of having been on the receiving end of that experience, I really wanted to design activities with relevance to our meeting (both in terms of technology used and the content) as well as enough flexibility to support whatever level of involvement people wanted. That included being prepared to move people’s images or type in for them, especially if they were on the road and not able to sit stationary and use Google Slides. (We had recommended people be stationary for this meeting, but knew it wasn’t always possible, and were prepared to let them verbally direct us to move their image, type in their fact, etc. Be prepared to assist people in completing the activities for whatever reason, and also to verbally describe what is happening on the slides/boards as people move things or type in their facts. This can aid those with vision impairment as well as those who are on the go and can’t look at a screen during the meeting.)

One other reason we used Google Slides is so we’d end up with a slide for each breakout group to be able to take notes, and a “parking lot” slide at the end of the deck for people to add questions or comments they wanted to bring back up in the main group or moving forward in future discussions. Because people already had the Google Slide deck open for the activity, it was easy for them to scroll down and be in the notetaking slide for their breakout group (we colored the background of the slides, and told people they were in the purple, blue, green, etc. slides to make it easier to jump into the right slide).

One other note regarding facilitation with Zoom + Google Slides is that the chat feature in Zoom doesn’t show previous chat to people who join the Zoom meeting after that message is sent. So if you want to use Zoom chat to share the Google Slides link, have your link saved elsewhere and assign someone to copy and paste that message into the chat frequently, so all participants have access and can open the URL as they join the meeting. (This also includes if someone leaves and re-enters the meeting: you may need to re-post the link yet again into chat.)

TLDR, we used Google Slides to facilitate meeting note taking, digital “dot voting” and other interactive icebreaker activities alongside Zoom.

Update – 2021 Convening The Center!

2020 did not go exactly as planned, and that includes Convening the Center (see original announcement/plan here), which we had intended to be an awesome, in-person gathering of individuals who are new or have previous experience working to improve healthcare through advocacy, innovation, design, research, entrepreneurship, or some other category of “doing” and “fixing” problems they see for themselves and their community. But, as an early “I see COVID-19 is going to be a problem” person (see this post Scott and I posted March 7 begging people to stay home), by early February I was warning my co-PI and RWJF contacts that we would likely be postponing Convening the Center, and by May that was pretty clear. So we requested (and received) an extension on our grant from RWJF to push the grant into 2021…and ultimately, ::waves hand at everything still going on:: decided to shift to an all-virtual experience.

I’ll be honest – I was a little disappointed! But after several more months of work with John (Harlow, my Co-PI), I’m now very excited about the opportunities an all-virtual experience for Convening the Center will bring. First and foremost, although we planned to pay participants for ALL travel costs, hotel, food, AND for their time, I knew there would likely be people who would still not be able to travel to participate. I am hoping that with a virtual experience (where we still pay people for their time!), the reduced time commitment will enable those people to participate.

Secondly, we’ve been thinking quite a bit about the design of virtual meetings and gatherings and have some ideas up our sleeve (which we’ll share as we finish developing them!) about how to achieve the goals of our gathering, online, without triggering video conference fatigue. If you’ve had any fantastic virtual experiences in 2020 (or ever), please let us know what they were, and what you loved (or what to avoid!), so that we can draw on as many inputs as possible to design this virtual experience.

Here’s what Convening the Center will now look like:

  • Starting now: recruitment. We are looking to solicit interest from individuals who are new or have some experience working to change or improve health, healthcare, communities, etc. If that’s you, please nominate yourself here, and/or please also consider sharing this with your communities or a friend from another community!
  • January: we will reach out to nominees with another short form to gather a bit more information to help us create the cohort.
  • Early February: we will notify selected participants.
  • February: Phase 1 (2 hours scheduled time commitment from participants, plus some asynchronous opportunities)
  • April: Phase 2 (2-4 hour scheduled time commitment from participants, plus some asynchronous opportunities)
  • June: Phase 3 (2-4 hour scheduled time commitment from participants, plus some asynchronous opportunities)

We’ll be sharing more in the future about what the “phases” look like, and this virtual format will allow us to also invite participation from a broader group beyond the original cohort of participants. Stay tuned!

Again, here is the nomination link where you can nominate yourself or others. Thanks!

Nominate someone you know for Convening The Center!

How to deal with wildfire smoke and air quality issues during COVID-19

2020. What a year. We’ve been social distancing since late February and being very careful in terms of minimizing interactions even with family, for months. We haven’t traveled, we haven’t gone out to eat, and we basically only go out to get exercise (with a mask when it’s on hiking trails/around anyone) or Scott goes to the grocery store (n95 masked). We’ve been working on CoEpi (see CoEpi.org – an open source exposure notification app based on symptom reports) and staying on top of the scientific literature around COVID-19, regarding NPIs like distancing and masking; at-home diagnostics like temperature and pulse oximetry monitoring; prophylactics and treatments like zinc, quercetin, and even MMR vaccines; and the impact of ventilation and air quality on COVID-19 transmission and susceptibility.

And we live in Washington, so the focus on air quality got very real very quickly during this year’s wildfire season, where we had wildfires across the state of Washington, then got pummeled for over a week with hazardous levels of wildfire smoke coming up from Oregon and California to cover our existing smoke layer. But, one of our DIY air quality hacks for COVID-19 gave us a head start on air quality improvements for smoke-laden air, which I’ll describe below.

Here are various things we’ve gotten and have been using in our personal attempts to thwart COVID-19:

  • Finger pulse oximeter.
    • Just about any cheap pulse oximeter you can find is fine. The goal is to get an idea of your normal baseline oxygen rates. If you dip low, that might be a reason to go to urgent care or the ER or at least talk to your doctor about it. For me, I am typically 98-99% (mine doesn’t read higher than 99%), and my personal plan would be to talk to a healthcare provider if I was sick and started dropping below 94%.
  • Thermometer
    • Use any thermometer that you’ll actually use. I have previously used a no-touch thermometer that could read foreheads but found it varied widely and inconsistently, so I went back to an under the tongue thermometer and took my temperature for several months at different times to figure out my baselines. If sick or you have a suspected exposure, it’s good to be checking at different times of the day (people often have lower temps in the morning than in the evening, so knowing your daily differences may help you evaluate if you’re elevated for you or not).
    • Note: women with menstrual cycles may have changes related to this; such as lower baseline temps at the start of the cycle and having a temperature upswing around or after the mid-point in their cycle. But not all do. Also, certain medications or birth controls can impact basal temperatures, so be aware of that.
  • Originally, n95 masks with outlet valves.
    • Note: n95 masks with valves cannot be used by medical professionals, because the valves make them less effective for protecting others. (So don’t freak out at people who had a box of valved n95 masks from previous wildfire smoke seasons, as we did. Ahem.) 
    • We had a box we bought after previous years’ wildfire smoke, and they work well for us (in low-risk non-medical settings) for repeated use. They’re Scott’s go-to choice. If you’re in a setting where the outlet valve matters (indoors in a doctor’s/medical setting, or on a plane), you can easily pop a surgical/procedure mask over the valve to block the valve to protect others from your exhaust, while still getting good n95-level protection for yourself.
    • They had been out of stock since February, but given the focus on n95s without valves for medical PPE, a few boxes of n95 masks with outlet valves have been showing up online at silly prices ($7 per mask or so). But kn95s are a cheaper per-mask option that is generally more available – see below.
    • (June 2021 note – they are back to reasonable prices, in the $1-2 range per mask on Amazon, and available again.)
  • kn95 masks.
    • kn95 masks are a different standard than US-rated n95; but they both block 95% of tiny (0.3 micron) particles. For non-medical usage, we consider them equivalent. But like n95, the fit is key.
    • We originally bought these kn95s, but the ear loops were quite big on me. (See below for options if this is the case on any you get.) They aren’t as hardy as the n95s with valves (above); the straps have broken off, tearing the mask, after about 4-5 long wears. That’s still worth it for them being $2-3 each (depending on how many you buy at a time) for me, but I’d always pack a spare mask (of any kind) just in case.
      • Option one to adjust ear loops: I loop them over my ponytail, making them head loops. This has been my favorite kn95 option because I get a great fit and a tight seal with this method.
      • Option two to adjust ear loops: tie knots in the ear loops
      • Option three to adjust ear loops: use things like this to tighten the ear loops
    • We also got a set of these kn95s. They don’t fit quite as well in terms of a tight face fit, but these actually work as ear loops (as designed), and I was able to wear this inside the house on the worst day of air quality.
  • Box fan with a filter to reduce COVID-19 particles in the air:
    • We read this story about using an existing AC air furnace filter on a box fan to help reduce the number of COVID-19 particles in the air. We already had a box fan, so we took one of our spare 20×20 filters and popped it on. I’m allergic to dust, cats (which we just got), trees, grass, etc, so I knew it would also help with regular allergens. There are different levels of filter – all the way up to HEPA filters – but we had MERV 12 so that’s what we used.
  • Phone/object UV sanitizer
    • We got a PhoneSoap Pro (in lavender, but there are other colors). Phones are germy, and being able to pop the phone in (plus keys or any other objects like credit cards or insurance cards that might have been handled by another human) to disinfect has been nice to have.
    • The Pro is done sanitizing in 5 minutes, vs the regular one takes 10 minutes. It’s not quite 2x the price as the non-pro, but I’ve found it to be worthwhile because otherwise, I would be impatient to get my phone back out. I usually pop my phone in it when I get home from my walk, and by the time I’m done washing my hands and all the steps of getting home, the phone is about or already done being sanitized.
  • Bonus (but not as useful to everyone as the above, and pricey): Oura ring
    • Scott and I also both got Oura rings. They are pricey, but every morning when we wake up we can see our lowest resting heart rate (RHR), heart rate variability (HRV), temperature deviations, and respiratory rate (RR). There have been studies showing that HRV, RHR, overnight temperature, and RR changes happen early in COVID-19 and other infections, which can give an early warning sign that you might be getting sick with something. That can be a good early warning sign (before you get to the point of being symptomatic and highly infectious) that you need to mask up and work from home/social distance/not interact with other people if you can help it. I find the data soothing, as I am used to using a lot of diabetes data on a daily and real-time basis (see also: invented an open source artificial pancreas). Due to price and level of interest in self-tracking data, this may not be a great tool for everyone.
    • Note that this doesn’t tell you your temperature in real time, or present absolute values, but it’s helpful to see, and get warnings about, any concerning trends in your body temperature data. I’ve seen several anecdotal reports of this being used for early detection of COVID-19 infection and various types of relapses experienced by long-haulers. (A toy sketch of this kind of baseline-and-deviation check follows this list.)
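As a toy example of what “know your baseline, watch for deviations” can look like in practice, here is a small sketch (purely my own illustration – not how Oura calculates its insights); the values, window length, and threshold are made up.

```python
# Sketch: flag a night whose resting heart rate is unusually far above a
# rolling personal baseline (the same idea applies to temperature deviation or HRV).
import numpy as np

def flag_elevated(nightly_values, window=14, z_threshold=2.0):
    values = np.asarray(nightly_values, dtype=float)
    baseline = values[-(window + 1):-1]  # the `window` nights before last night
    z = (values[-1] - baseline.mean()) / baseline.std()
    return z > z_threshold  # True = last night is unusually elevated for you

nightly_rhr = [52, 53, 51, 52, 54, 53, 52, 51, 53, 52, 54, 53, 52, 51, 59]
print(flag_elevated(nightly_rhr))  # True: last night's RHR is well above baseline
```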

And here are some things we’ve added to battle air quality during wildfire smoke season:

  • We were already running a box fan with a filter (see above for more details) for COVID-19 and allergen reduction; so we kept running it on high speed for smoke reduction.
    • Basic steps: get box fan, get a filter, and duct tape or strap it on. Doesn’t have to be cute, but it will help.
    • I run this on high speed during the day in my bedroom, and then on low speed overnight or sleep with earplugs in.
  • We already had a small air purifier for allergens, which we also kept running on high. This one hangs out in our guest bedroom/my office.
  • We caved and got a new, bigger air purifier, since we unfortunately expect future years to be just as smoky. This is the new air purifier we got. (Scott chose the 280i version that claims to cover 279 sq. ft.) It’s expensive, but given how miserable I was even inside the house with decent air quality thanks to my box fan and filter, little purifier, and our A/C-filtered air… I consider it to be worth the investment.
    • We plugged it in and validated that with our A/C-filtered air combined with my little air purifier and the box fan with filter running on high, we already had ‘good’ air quality (but not excellent). We also stuck it out in the hallway to see what the hallway air quality was running – around 125 ug/m^3 – yikes. Turns out that was almost as high as the outside air, which is why I’ve had to wear a kn95 mask even to walk hallway laps, and why my eyes are irritated. (Image: example of the air quality difference between the hallway and our kitchen – the hallway is much higher.)
  • Check your other filters while you’re on air quality monitoring alert. We found our A/C intake duct vent had not had the air filter changed since we moved in over a year ago… and it turns out it’s a non-standard size and had a hand-cut filter stuffed in there, so we ordered a correctly sized one for the vent, and taped a different one over the outside in the interim.
  • The other way we fight the smoke is having n95 (with valves) or kn95 masks to wear when we have to go outside, or if it gets particularly bad inside. Our previous strategy was to have several on hand for wildfire season, and we’ll continue to do this. (See the COVID-19 section above for more detailed descriptions of the different kinds of masks we’ve tried.)

Wildfires, their smoke, and COVID-19 combined is a bit of a mess for our health. Stay inside when you can, wear masks when you’re around other people outside your household that you have to share air with, wash your hands, and good luck.

Poster and presentation content from @DanaMLewis at #ADA2020 and #DData20

In previous years (see 2019 and 2018), I mentioned sharing content from ADA Scientific Sessions (this year it’s #ADA2020) with those not physically present at the conference. This year, NO ONE is present at the event, and we’re all virtual! Even more reason to share content from the conference. :)

I contributed to and co-authored two different posters at Scientific Sessions this year:

  • “Multi-Timescale Interactions of Glucose and Insulin in Type 1 Diabetes Reveal Benefits of Hybrid Closed Loop Systems“ (poster 99-LB) along with Azure Grant and Lance Kriegsfeld, PhD.
  • “Do-It-Yourself Artificial Pancreas Systems for Type 1 Diabetes Reduce Hyperglycemia Without Increasing Hypoglycemia” (poster 988-P in category 12-D Clinical Therapeutics/New Technology—Insulin Delivery Systems), alongside Jennifer Zabinsky, MD MEng, Haley Howell, MSHI, Alireza Ghezavati, MD, Andrew Nguyen, PhD, and Jenise Wong, MD PhD.

And, while not a poster at ADA, I also presented the “AID-IRL” study funded by DiabetesMine at #DData20, held in conjunction with Scientific Sessions. A summary of the study is also included in this post.

First up, the biological rhythms poster, “Multi-Timescale Interactions of Glucose and Insulin in Type 1 Diabetes Reveal Benefits of Hybrid Closed Loop Systems” (poster 99-LB). (Twitter thread summary of this poster here.)

Building off our work as detailed last year, Azure, Lance, and I have been exploring the biological rhythms in individuals living with type 1 diabetes. Why? It’s not been done before, and we now have the capabilities thanks to technology (pumps, CGM, and closed loops) to better understand how glucose and insulin dynamics may be similar or different than those without diabetes.

Background:

Blood glucose and insulin exhibit coupled biological rhythms at multiple timescales, including hours (ultradian, UR) and the day (circadian, CR), in individuals without diabetes. The presence and stability of these rhythms are associated with healthy glucose control in individuals without diabetes. (See figure at right, adapted from Mejean et al., 1988.)

However, biological rhythms in longitudinal (e.g., months to years) data sets of glucose and insulin outputs have not been mapped in a wide population of people with Type 1 Diabetes (PWT1D). It is not known how glucose and insulin rhythms compare between T1D and non-T1D individuals. It is also unknown if rhythms in T1D are affected by type of therapy, such as Sensor Augmented Pump (SAP) vs. Hybrid Closed Loop (HCL). As HCL systems permit feedback from a CGM to automatically adjust insulin delivery, we hypothesized that rhythmicity and glycemia would exhibit improvements in HCL users compared to SAP users. We describe longitudinal temporal structure in glucose and insulin delivery rate of individuals with T1D using SAP or HCL systems in comparison to glucose levels from a subset of individuals without diabetes.

Data collection and analysis:

We assessed stability and amplitude of normalized continuous glucose and insulin rate oscillations using the continuous wavelet transformation and wavelet coherence. Data came from 16 non-T1D individuals (CGM only, >2 weeks per individual) from the Quantified Self CGM dataset and 200 (n = 100 HCL, n = 100 SAP; >3 months per individual) individuals from the Tidepool Big Data Donation Project. Morlet wavelets were used for all analyses. Data were analyzed and plotted using Matlab 2020a and Python 3 in conjunction with in-house code for wavelet decomposition modified from the “Jlab” toolbox, from code developed by Dr. Tanya Leise (Leise 2013), and from the Wavelet Coherence toolkit by Dr. Xu Cui. Linear regression was used to generate correlations, and paired t-tests were used to compare AUC for wavelet and wavelet coherences by group (df=100). Stats used 1 point per individual per day.
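For readers who want to experiment with this kind of decomposition on their own CGM data, here is a minimal illustrative sketch in Python (my own example, not the Matlab/Jlab pipeline described above); the file name, column name, and band boundaries are assumptions for illustration.

```python
# Minimal sketch: Morlet continuous wavelet transform of normalized CGM data,
# comparing power in the ultradian (1-6 h) vs. circadian (~24 h) bands.
# Assumes a hypothetical CSV of 5-minute CGM readings with a "glucose" column.
import numpy as np
import pandas as pd
import pywt

dt_hours = 5 / 60  # 5-minute CGM sampling interval, in hours

cgm = pd.read_csv("cgm_example.csv")["glucose"].to_numpy(dtype=float)
cgm = (cgm - cgm.mean()) / cgm.std()  # normalize before decomposition

# Build scales whose corresponding periods span ~1 h to ~36 h
periods_hours = np.linspace(1, 36, 200)
center_freq = pywt.central_frequency("morl")
scales = center_freq * periods_hours / dt_hours

coefs, _ = pywt.cwt(cgm, scales, "morl", sampling_period=dt_hours)
power = np.abs(coefs) ** 2  # wavelet power: rows = periods, columns = time

ultradian = power[(periods_hours >= 1) & (periods_hours <= 6)].mean()
circadian = power[(periods_hours >= 20) & (periods_hours <= 28)].mean()
print(f"mean ultradian power: {ultradian:.3f}, mean circadian power: {circadian:.3f}")
```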

Wavelets Assess Glucose and Insulin Rhythms and Interactions

Wavelet Coherence flow for glucose and insulin

Morlet wavelets (A) estimate rhythmic strength in glucose or insulin data at each minute in time (a combination of signal amplitude and oscillation stability) by assessing the fit of a wavelet stretched in window and in the x and y dimensions to a signal (B). The output (C) is a matrix of wavelet power, periodicity, and time (days). Transform of example HCL data illustrate the presence of predominantly circadian power in glucose, and predominantly 1-6 h ultradian power in insulin. Color map indicates wavelet power (synonymous with Y axis height). Wavelet coherence (D) enables assessment of rhythmic interactions between glucose and insulin; here, glucose and insulin rhythms are highly correlated at the 3-6 (ultradian) and 24 (circadian) hour timescales.

Results:

Hybrid Closed Loop Systems Reduce Hyperglycemia

Glucose distribution of SAP, HCL, and nonT1D
  • A) Proportional counts* of glucose distributions of all individuals with T1D using SAP (n=100) and HCL (n=100) systems. SAP system users exhibit a broader, right shifted distribution in comparison to individuals using HCL systems, indicating greater hyperglycemia (>7.8 mmol/L). Hypoglycemic events (<4mmol/L) comprised <5% of all data points for either T1D dataset.
  • B) Proportional counts* of non-T1D glucose distributions. Although limited in number, our dataset from people without diabetes exhibits a tighter blood glucose distribution, with the vast majority of values falling in euglycemic range (n=16 non-T1D individuals).
  • C) Median distributions for each dataset.
  • *Counts are scaled such that each individual contributes the same proportion of total data per bin. (A minimal sketch of this scaling is shown below.)
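For illustration, here is one way the “proportional counts” scaling could be implemented (a sketch under my own assumptions, not the poster’s actual code): compute each person’s glucose histogram as fractions that sum to 1, then average across people so everyone contributes equally.

```python
# Sketch: per-individual "proportional counts" of a glucose distribution.
# `per_person_glucose` is a hypothetical list of 1-D arrays of glucose values
# (mmol/L), one array per individual.
import numpy as np

def proportional_counts(per_person_glucose, bins):
    fractions = []
    for glucose in per_person_glucose:
        counts, _ = np.histogram(glucose, bins=bins)
        fractions.append(counts / counts.sum())  # each individual sums to 1
    return np.mean(fractions, axis=0)  # each individual weighted equally

bins = np.arange(2.0, 22.5, 0.5)  # glucose bins in mmol/L
# Example usage: distribution = proportional_counts(hcl_glucose_arrays, bins)
```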

HCL Improves Correlation of Glucose-Insulin Level & Rhythm

Glucose and Insulin rhythms in SAP and HCL

SAP users exhibit uncorrelated glucose and insulin levels (A) (r² = 3.3×10⁻⁵; p = 0.341) and uncorrelated URs of glucose and insulin (B) (r² = 1.17×10⁻³; p = 0.165). Glucose and its rhythms take a wide spectrum of values for each of the standard insulin rates provided by the pump, leading to the striped appearance (B). By contrast, Hybrid Closed Loop users exhibit correlated glucose and insulin levels (C) (r² = 0.02; p = 7.63×10⁻¹⁶) and correlated ultradian rhythms of glucose and insulin (D) (r² = -0.13; p = 5.22×10⁻³⁸). Overlays shown in (E, F).
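For context, correlations like the ones reported above are commonly computed with a simple linear regression; here is a generic sketch using scipy (not the poster’s analysis code), with made-up, time-aligned example arrays.

```python
# Sketch: correlate time-matched glucose values with insulin delivery rates
# and report r^2 and p, as in the scatter panels described above.
import numpy as np
from scipy.stats import linregress

glucose = np.array([6.1, 7.8, 9.2, 5.4, 10.3, 8.8])      # mmol/L (hypothetical)
insulin_rate = np.array([0.8, 1.1, 1.6, 0.6, 1.9, 1.4])  # U/hr (hypothetical)

fit = linregress(glucose, insulin_rate)
print(f"r^2 = {fit.rvalue**2:.3f}, p = {fit.pvalue:.3g}")
```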

HCL Results in Greater Coherence than SAP

Non-T1D individuals have highly coherent glucose and insulin at the circadian and ultradian timescales (see Mejean et al., 1988, Kern et al., 1996, Simon and Brandenberger 2002, Brandenberger et al., 1987), but these relationships had not previously been assessed long-term in T1D.

Image: coherence between glucose and insulin in HCL and SAP, and glucose swings in SAP, HCL, and non-T1D.

A) Circadian (blue) and 3-6 hour ultradian (maroon) coherence of glucose and insulin in HCL (solid) and SAP (dotted) users. Transparent shading indicates standard deviation. Although both HCL and SAP individuals have lower coherence than would be expected in a non-T1D individual, HCL CR and UR coherence are significantly greater than SAP CR and UR coherence (paired t-test, p = 1.51×10⁻⁷, t = -5.77 and p = 5.01×10⁻¹⁴, t = -9.19, respectively). This brings HCL users’ glucose and insulin closer to the canonical non-T1D phenotype than SAP users’.

B) Additionally, the amplitude of HCL users’ glucose CRs and URs (solid) is closer (smaller) to that of non-T1D (dashed) individuals than are SAP glucose rhythms (dotted). SAP CR and UR amplitude is significantly higher than that of HCL or non-T1D (t-test (1,98), p = 47×10⁻¹⁷ and p = 5.95×10⁻²⁰, respectively), but HCL CR amplitude is not significantly different from non-T1D CR amplitude (p = 0.61).

Together, HCL users are more similar than SAP users to the canonical Non-T1D phenotype in A) rhythmic interaction between glucose and insulin and B) glucose rhythmic amplitude.

Conclusions and Future Directions

T1D and non-T1D individuals exhibit different relative stabilities of within-a-day rhythms and daily rhythms in blood glucose, and T1D glucose and insulin delivery rhythmic patterns differ by insulin delivery system.

Hybrid Closed Looping is Associated With:

  • Lower incidence of hyperglycemia
  • Greater correlation between glucose level and insulin delivery rate
  • Greater correlation between ultradian glucose and ultradian insulin delivery rhythms
  • Greater degree of circadian and ultradian coherence between glucose and insulin delivery rate than in SAP system use
  • Lower amplitude swings at the circadian and ultradian timescale

These preliminary results suggest that HCL recapitulates non-diabetes glucose-insulin dynamics to a greater degree than SAP. However, pump model, bolusing data, looping algorithms and insulin type likely all affect rhythmic structure and will need to be further differentiated. Future work will determine if stability of rhythmic structure is associated with greater time in range, which will help determine if bolstering of within-a-day and daily rhythmic structure is truly beneficial to PWT1D.
Acknowledgements:

Thanks to all of the individuals who donated their data as part of the Tidepool Big Data Donation Project, as well as the OpenAPS Data Commons, from which data is also being used in other areas of this study. This study is supported by JDRF (1-SRA-2019-821-S-B).

(You can download a full PDF copy of the poster here.)

Next is “Do-It-Yourself Artificial Pancreas Systems for Type 1 Diabetes Reduce Hyperglycemia Without Increasing Hypoglycemia” (poster 988-P in category 12-D Clinical Therapeutics/New Technology—Insulin Delivery Systems), which I co-authored alongside Jennifer Zabinsky, MD MEng, Haley Howell, MSHI, Alireza Ghezavati, MD, Andrew Nguyen, PhD, and Jenise Wong, MD PhD. There is a Twitter thread summarizing this poster here.

This was a retrospective double cohort study that evaluated data from the OpenAPS Data Commons (data ranged from 2017-2019) and compared it to conventional sensor-augmented pump (SAP) therapy from the Tidepool Big Data Donation Project.

Methods:

  • From the OpenAPS Data Commons, one month of CGM data per person was used (with more than 70% of the month spent using CGM), as long as the person had been living with T1D for more than 1 year. People could be using any type of DIYAPS (OpenAPS, Loop, or AndroidAPS) and there were no age restrictions.
  • A random age-matched sample from the Tidepool Big Data Donation Project of people with type 1 diabetes with SAP was selected.
  • The primary outcome assessed was percent of CGM data <70 mg/dL.
  • The secondary outcomes assessed were number of hypoglycemic events per month (15 minutes or more <70 mg/dL); percent of time in range (70-180 mg/dL); percent of time above range (>180 mg/dL); mean CGM values; and coefficient of variation. (A minimal sketch of computing these CGM metrics follows the methods image below.)
Image: methods summary slide (DIY APS vs. SAP, ADA 2020).
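If you want to compute the same kinds of CGM metrics on your own data, here is a minimal sketch (my illustration under stated assumptions, not the study’s code); it assumes evenly spaced 5-minute readings in mg/dL and a hypothetical input list.

```python
# Sketch: common CGM metrics from evenly spaced 5-minute readings (mg/dL).
import numpy as np

def cgm_metrics(glucose, reading_minutes=5):
    glucose = np.asarray(glucose, dtype=float)
    metrics = {
        "% <70 mg/dL": 100 * np.mean(glucose < 70),  # primary outcome
        "% 70-180 mg/dL": 100 * np.mean((glucose >= 70) & (glucose <= 180)),
        "% >180 mg/dL": 100 * np.mean(glucose > 180),
        "mean CGM": glucose.mean(),
        "CV %": 100 * glucose.std() / glucose.mean(),
    }
    # Hypoglycemic events: runs of consecutive readings <70 mg/dL lasting >=15 minutes
    needed = int(np.ceil(15 / reading_minutes))
    events, run = 0, 0
    for below in (glucose < 70):
        run = run + 1 if below else 0
        if run == needed:  # count each qualifying run once, when it first reaches 15 min
            events += 1
    metrics["hypoglycemic events"] = events
    return metrics

# Example usage with a short hypothetical trace:
print(cgm_metrics([110, 65, 62, 58, 150, 190, 200, 120]))
```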

Demographics:

  • Table 1 shows that the age of participants was not statistically different between the DIYAPS and SAP cohorts. Similarly, the age at T1D diagnosis and time since T1D diagnosis did not differ.
  • Table 2 shows the additional characteristics of the DIYAPS cohort, which included data shared by a parent/caregiver for their child with T1D. At the time of the month of CGM data used for the study, participants had been using DIYAPS for an average of 7 months. The self-reported HbA1c in the DIYAPS cohort was 6.4%.
Images: demographics table and DIYAPS cohort characteristics (DIY APS vs. SAP, ADA 2020).

Results:

  • Figure 1 shows the comparison in outcomes based on CGM data between the two groups. Asterisks (*) indicate statistical significance.
  • There was no statistically significant difference in % of CGM values below 70mg/dL between the groups in this data set sampled.
  • DIYAPS users had higher percent in target range and lower percent in hyperglycemic range, compared to the SAP users.
  • Table 3 shows the secondary outcomes.
  • There was no statistically significant difference in the average number of hypoglycemic events per month between the 2 groups.
  • The mean CGM glucose value was lower for the DIYAPS group, but the coefficient of variation did not differ between groups.
Images: CGM outcome comparison (Figure 1) and secondary outcomes (Table 3).

Conclusions:

    • Users of DIYAPS (from this month of sampled data) had a comparable amount of hypoglycemia to those using SAP.
    • Mean CGM glucose and frequency of hyperglycemia were lower in the DIYAPS group.
    • Percent of CGM values in target range (70-180mg/dL) was significantly greater for DIYAPS users.
    • This shows a benefit in DIYAPS in reducing hyperglycemia without compromising a low occurrence of hypoglycemia. 
Image: conclusions slide (DIY APS vs. SAP, ADA 2020).

(You can download a PDF of the e-poster here.)

Finally, my presentation at this year’s D-Data conference (#DData20). The study I presented, called AID-IRL, was funded by Diabetes Mine. You can see a Twitter thread summarizing my AID-IRL presentation here.

Image: AID-IRL study aims and methods slide.

I did semi-structured phone interviews with 7 users of commercial AID systems over the last few months. The study was funded by DiabetesMine – both for my time in conducting the study and for the study participants, who received $50 each for their participation. I sought a mix of longer-time and newer AID users, using a mix of systems: four Control-IQ users and two 670G users were interviewed, as well as one CamAPS FX user, since that system was approved in the UK during the time of the study.

Based on the interviews, I coded their feedback for each of the study’s themes depending on whether they saw improvements (or did not have issues); had no changes but were satisfied, or had neutral experiences; or had a negative impact/experience. For each participant, I reviewed their experience and what they were happy with or frustrated by.

Here are some of the details for each participant.

1 – A parent of a child using Control-IQ (off-label), with a 30% increase in TIR and no increased hypoglycemia. They spend less time correcting than before; less time thinking about diabetes; and “get solid uninterrupted sleep for the first time since diagnosis”. They wish they had remote bolusing and more system information available in remote monitoring on phones. They miss being able to use the system during the 2-hour CGM warmup, and found the system dealt well with growth spurt hormones but not as well with underestimated meals.

2 – An adult male with T1D who previously used DIYAPS saw a 5-10% decrease in TIR with Control-IQ (though on par with other participants’ TIR), and is very pleased by the all-in-one convenience of his commercial system. He misses autosensitivity (a short-term learning feature of how insulin needs may vary from base settings) from DIYAPS and has stopped eating breakfast, since he found the system couldn’t manage that well. He is doing more manual corrections than he was before.

5 – An adult female with LADA started, stopped, and restarted using Control-IQ, getting the same TIR that she had before on Basal-IQ. It took artificially inflating settings to achieve these similar results. She likes the peace of mind to sleep while the system prevents hypoglycemia. She is frustrated by the ‘too high’ target; by not having low prevention if she disables Control-IQ; and by how much she had to inflate settings to achieve her outcomes. It’s also hard to know how much insulin the system gives each hour (she still produces some of her own insulin).

7 – An adult female with T1D who frequently has to take steroids for other reasons, causing increased BGs. With Control-IQ, she sees a 70% increase in TIR overall and increased TIR overnight, and found it does a ‘decent job keeping up’ with steroid-induced highs. She also wants to run ‘tighter’ and have an adjustable target, and never runs in sleep mode, so that she can always get the bolus corrections that are more likely to bring her closer to target.

3 – An adult male with T1D who has used the 670G for 3 years didn’t observe any changes to A1c or TIR, but is pleased with his outcomes, especially the ability to handle his activity levels by using the higher activity target. He is frustrated by the CGM and is woken up 1-2x a week to calibrate overnight. He wishes he could still have low glucose suspend even if he’s kicked out of auto mode due to calibration issues. He also commented on post-meal highs and more manual interventions.

6 – Another adult male 670G user was originally diagnosed with T2 (now considered T1) with a very high total daily insulin use that decreased significantly when switching to AID. He’s happy with increased TIR and less hypoglycemia, plus decreased TDD. Due to #COVID19, he did virtual training but would have preferred in-person. He has 4-5 alerts/day and is woken up every other night due to BG alarms or calibration. He does not like the time it takes to charge the CGM transmitter, in addition to sensor warmup.

4 – The last participant is an adult male with T1D who previously used DIYAPS but was able to test-drive the CamAPS FX. He saw no TIR change compared to DIYAPS (which pleased him) and thought the learning curve was easy – but he had to learn the system and let it learn him. He experienced ‘too much’ hypoglycemia (~7% <70 mg/dL, 2x his previous), and found it challenging not to have visibility of IOB. He also found the in-app CGM alarms annoying. He noted the system may work better for people with regular routines.

You can see a summary of the participants’ experiences via this chart. Overall, most cited increased or same TIR. Some individuals saw reduced hypos, but a few saw increases. Post-meal highs were commonly mentioned.

Images: summary chart of participants’ experiences and universal themes from the AID-IRL study.

Those newer to CGM have a noticeable learning curve and were more likely to comment on number of alarms and system alerts they saw. The 670G users were more likely to describe connection/troubleshooting issues and CGM calibration issues, both of which impacted sleep.

This view highlights those who more recently adopted AID systems. One noted their learning experience was ‘eased’ by “lurking” in the DIY community, and previously participating in an AID study. One felt the learning curve was high. Another struggled with CGM.

Image: experiences of participants newer to AID systems.

Both previous DIYAPS users who were using commercial AID systems referenced the convenience factor of commercial systems. One saw decreased TIR and has also altered his behaviors accordingly, while the other saw no change to TIR but had increased hypos.

Image: experiences of participants who previously used DIY APS.

Companies building AID systems for PWDs should consider that the onboarding and learning curve may vary for individuals, especially those newer to CGM. Many want better displays of IOB and the ability to adjust targets. Remote bolusing and remote monitoring are highly desired by all, regardless of age. The post-prandial period was frequently mentioned as the weak point in glycemic control of commercial AID systems. Even with ‘ideal’ TIR, many commercial users are still doing frequent manual corrections outside of mealtimes. This is an area of improvement for commercial AID to further reduce the burden of managing diabetes.

Image: feedback for companies building AID systems.

Note – all studies have their limitations. This was a small deep-dive study that is not necessarily representative, due to the design and small sample size. Timing of system availability influenced the ability to have new/longer time users.

Image: AID-IRL study limitations.

Thank you to all of the participants of the study for sharing their feedback about their experiences with AID-IRL!

(You can download a PDF of my slides from the AID-IRL study here.)

Have questions about any of my posters or presentations? You can always reach me via email at Dana@OpenAPS.org.

Convening The Center

(Update: see the latest about Convening the Center in 2021 here)

Patients and care partners who want to make a difference in health care are advised to give up our day jobs, create non-profits, or change previously identified career paths to “go work for a healthcare organization.” These formal constructs are not the only ways to achieve change or make a difference.

Those who choose to work outside of traditional pathways often end up with fewer resources and fewer opportunities (not just financial, but also the opportunity of collaborations and connections).

Thinking about these gaps in resources and opportunities has been swimming around my head since the Convening we hosted as part of the Opening Pathways project (more about it here). As a project, we learned so much from the conversations we had when we were able to just bring people together.

The feedback we received from non-traditional healthcare stakeholders was one of the most surprising results of the Convening. These are people who are not necessarily working professionally in healthcare, but doing a lot of work in the nontraditional spaces. In the year since the Convening we’ve repeatedly heard how valuable it was for this group to come together, in person, to connect with others with a similar drive and passion.

Fast forward to early last year. My friend Liz Salmi (of #BTSM) reached out to Alicia Staley (of #BCSM) and me to share about an exciting, random conversation and brainstorm she had with Steve Downs from the Robert Wood Johnson Foundation (RWJF). The idea: What if there was an ‘unconference’ to bring together more of these individuals – those working outside of traditional pathways – to learn and collaborate, without the agenda being driven by an existing organization, association, established conference, or company?

This concept sounded great to me! It feels like a logical next step from Opening Pathways, especially if we pair it with a few structured activities similar to what we did at the Convening, to create more equitable participation opportunities for patients and care partners and to help people feel comfortable engaging together in person.

When Liz said she didn’t have time to lead this project I volunteered to take it on. Liz and Alicia agreed and expressed their full support.

I put together a proposal in partnership with John Harlow who also worked on Opening Pathways, and was instrumental in designing the original Convening. We submitted a proposal to RWJF, did a few rounds of feedback and discussion about the proposal, waited a bit, and found out right around the new year that the proposal was accepted and had been awarded funding! Yay!

We’re calling this project “Convening The Center.” This both picks up on the name of the previous Convening, and emphasizes the people/patients as the center on which all of health and healthcare should be focused.

Convening The Center: What if there was a gathering for individuals working outside of traditional healthcare pathways?

What this means:

  • We have funding to put together a ~2 day meeting for ~25 individuals who are doing both the possible and the impossible to change and improve healthcare.
  • The funding includes travel (ground transportation, flights), lodging (hotel), food during the event, and an honorarium for the participants’ time.
  • The meeting was originally scheduled to be sometime in 2020 (August or September was the goal; COVID-19 disrupted this planning, and new dates are TBD, but we are looking at 2021 instead).

Who will be involved:

Convening The Center project team:

  • Dana Lewis (me), Principal Investigator (PI)
  • John Harlow, Co-Principal Investigator (PI)
  • Convening Advisors: Liz Salmi, Alicia Staley, Nick Dawson

Who can participate?:

  • TBD! Here’s why and how:

Why must we convene the Center?

If you’re reading this, you likely have your own story of doing the “impossible” — you’ve faced barriers and obstacles, but have found a way to innovate, overcome, or steer around. There are a LOT of people doing this “work,” whether it’s their professional work, their personal passion, or a necessity driving them to improve things for themselves or a loved one, building and supporting their communities as unfunded labors of love. But we also know that geography, socioeconomic background, and financial resources, among other reasons, commonly leave some of these individuals siloed, or prevent them and their work from reaching its full potential.

We know there is a lack of connectedness among individual innovators, researchers, and advocates who are not employed in the traditional healthcare system. While there have been a handful of attempts to convene patient advocates to share ideas and connect with opportunities and resources, none have been devoted solely to this type of community. Existing attempts have included ad-hoc social media groups and inclusion at existing conferences and meetings. Both face serious limitations.

Social media is limited by one’s ability to stumble across a network, while conferences or meetings—which are traditionally held by legacy institutions—usually include people who are already “in” a network that invites them to such physical events, and are thus already “doing” the work, but these do not do enough to encourage new participants. Additionally, conferences and meetings prioritize the hosting organization’s agenda rather than facilitating the development of non-traditional innovators. Given the limitations of social media and existing conferences, the status quo leads new “doers” to (unknowingly and repeatedly) duplicate the work of others and fail to effectively share knowledge and scale tools that could help others. Overall, there are not a lot of resources for people who do this outside of a professional job.

Therefore, we aim to do something different to identify participants for this meeting.

Rather than just invite the same individuals who have the resources to participate, or have already succeeded somewhat, even in the face of all the existing barriers, we plan to solicit attendees from a mix of health communities, from a range of experiences, with diverse demographics, including those who are newly working in this space, as well as experienced individuals with established credibility.

How will we reach all of these different communities and individuals? This is where we need your help!

We have a two-phase recruitment process to identify potential attendees.

Phase 1 (right now)

  • Fill out this form! 
    • We’d love for you to nominate yourself, if you’re potentially interested in participating.
    • But a crucial part of this is to ALSO nominate someone else – a friend or someone you know who may not otherwise hear about this opportunity.
  • We’d also love for you to help share this form widely and help us reach people in different networks. If you TikTok, post it on TikTok. If you’re on LinkedIn, share it on your LinkedIn or a group. If you’re part of an offline support group, talk about it there. Or reach out and share the link with your advocacy organization and encourage them to nominate other advocates and ‘doers’ that they know.

Nominate someone you know for Convening The Center!
Phase 2 (in a few weeks):

  • Based on the first wave of nominated folks, we’ll work to make sure we’re striking the balance between people who are longer-timers in this space and people who are newly emerging in this type of work.
  • We’ll reach out to a selection of folks identified in phase 1 and ask for a little bit more information to help determine the final cohort of participants for the in-person meeting. (Goal: ~25 participants).

We’ve learned through Opening Pathways and other work in this space that more — and perhaps different — resources are needed for “doers” in healthcare who are not traditionally employed in this space.

We don’t expect the outcome of this project to solve all problems or identify a one-size-fits-all resource. However, we do hope to help manifest a new, more inclusive, and more effective vision for changing the future of healthcare.

The future we seek augments the existing health efforts of legacy institutions by coordinating the work of individual innovators, researchers, and advocates in a more inclusive community of practice. We do not think this will solve all problems around under-representation and the static network of those already “in” and doing this work, but it’s an important step and one we’re happy to be able to take.

FREQUENTLY ASKED QUESTIONS

  • Who is funding this project? How is it being funded? What organization are you partnering with?
    Robert Wood Johnson Foundation (RWJF) is a great partner, and I’m proud that they’re willing to fund this meeting. Paul Tarini is our project officer at RWJF. While my co-PI is based at an academic institution, we decided to experiment with using a fiscal sponsorship organization to manage the grant. We identified and selected Trailhead Institute, a 501(c)(3) organization that works with a variety of projects and organizations in the public health space. I’ll write more about this in the future, but so far they have been GREAT administrative partners and have been seamless to work with during the application and kickoff of the grant process. Also, we learned from the past Convening that it would be beneficial to directly fund a meeting planner to do logistics work (rather than me), so we included in our budget a meeting planner that is coming from Trailhead to help with administrative and logistics planning for the meeting. Yay!
  • How will you select participants?
    Our goal is to gain a diverse slate of people, including diversity in socioeconomic background, ethnicity, gender, education, area of healthcare, type of work, how long they have been doing the work, etc. Before finalizing the list of participants we will collect information from potential participants and make sure they’d be interested and available to participate once the date is selected.
  • What are the outputs?
    We anticipate one primary output from this meeting to be relationships among attendees. After observing the strength and resilience generated for individuals by participating in our Opening Pathways convening, we see relationships as a powerful support for the efforts of healthcare “doers”. By relationships, we do not mean a community of 25. Community building is long-term, labor-intensive work. Rather, we hope that some attendees will find common ground and collaborate in various ways after Convening the Center.
    We do not expect to produce a particular report or website from this work. However, we do expect to write blog posts about our process of developing the meeting, the experience of facilitating the meeting, and the insights derived from conversations at the meeting. We anticipate those insights to be about the wants and needs of healthcare doers, what they wish they had when they started out, what they’d tell their younger selves, and how to refine and scale various healthcare improvement efforts.
  • What about COVID-19?
    While we have been planning this meeting for August or September 2020, we are aware that currently (in March 2020) there is a lot of uncertainty about how COVID-19 may impact meetings after the next few months. While we are beginning virtual recruitment of participants, we will work with public health officials to get guidance on whether August/September still makes sense, and if not, work with both participants and public health to determine a suitable alternative timeline for holding the meeting. If that’s not feasible, we may find ways to meet this goal virtually.
    Update: Obviously, it does not make sense to convene the center physically for an in-person meeting in 2020. We are aiming for a gathering – in-person if safe and appropriate, otherwise adapting to virtual – in 2021. We’ll keep everyone posted!

(Update: see the latest about Convening the Center in 2021 here)

Automated Insulin Delivery: How artificial pancreas “closed loop” systems can aid you in living with diabetes (introducing “the APS book” by @DanaMLewis)

Tl;dr – I wrote a book about artificial pancreas systems / hybrid and fully closed loop systems / automated insulin delivery systems! It’s out today – you can buy a print copy on Amazon; a Kindle copy on Amazon; check out all the content on the web or your phone here; or download a PDF if you prefer.

A few months ago, I saw someone share a link to one of my old blog posts with someone else on Facebook. Quite old in fact – I had written it 5+ years ago! But the content was and is still relevant today.

It made me wonder – how could we as a diabetes community, who have been innovating and exploring new diabetes technology such as closed loop/artificial pancreas systems (APS), package up some of this knowledge and share it with people who are newer to APS? And while yes, much of this is tucked into the documentation for DIY closed loop systems, not everyone will choose a DIY closed loop system and also therefore may not see or find this information. And with regards to some of the things I’ve written here on DIYPS.org, not everyone will be lucky enough to have the right combination of search terms to end up on a particular post to answer their question.

Image: example cover renderings of Automated Insulin Delivery by Dana M. Lewis.

Thus, the idea for a book was born. I wanted to take much of what I’ve been writing here, sharing on Facebook and Twitter, and seeing others discuss as well, and put it together in one place to be a good starting place for someone to learn about APS in general. My hope is that it’s more accessible for people who don’t know what “DIY” or “open source” diabetes is, and it’s findable by people who also don’t know or don’t consider themselves to be part of the “diabetes online community”.

Is it perfect? Absolutely not! But, like most of the things in the DIY community…the book is open source. Seriously. Here’s the repository on Github! If you see a typo or have suggestions of content to add, you can make a PR (pull request) or log an issue with content recommendations. (There are instructions on the book page here for how to do either of those things!) I plan to make rolling updates to it, so you can see on the change log page what’s changed between major versions.

It’s the first book out there that I know of on APS, but it won’t be the only one. I hope this inspires or moves more people to share their knowledge, through blogs or podcasts or future books, with the rest of our community and loved ones who want and need to learn more about managing type 1 diabetes.

“I will immediately recommend this book not just to people looking to use a DIY closed loop system, but also to anybody looking to improve their grasp on the management of type 1 diabetes, whether patient, caregiver, or healthcare provider.”

Aaron Neinstein, MD
Endocrinologist, UCSF

And as always, I’m happy to share what I’ve learned about the self-publishing process, too. I previously used CreateSpace for my children’s books, which got merged with Amazon’s Kindle Direct Publishing (KDP), and there was a learning curve for KDP for both doing the print version and doing the Kindle version. I didn’t get paid to write this book – and I didn’t write it for a profit. Like my children’s books, I plan to use any proceeds to donate copies to libraries and hospitals, and send any remaining funds to Life For A Child to help ensure as many kids as possible have access to insulin, BG monitoring supplies, and education.

I’m incredibly grateful to many people for helping out with and contributing to this book. You can see the full acknowledgement section with my immense thanks to the many reviewers of early versions of the book! And ditto for the people who shared their stories and experiences with APS. But special thanks go in particular to Scott for thorough first editing and overall support of every project I bring up out of the blue; to Tim Gunn for the beautiful cover design of the book; and to Aaron Kowalski for being kind enough to write the amazing foreword.


Presentations and poster content from @DanaMLewis at #ADA2019

Like I did last year, I want to share the work being presented at #ADA2019 with those who are not physically there! (And if you’re presenting at #ADA2019 or another conference and would like suggestions on how to share your content in addition to your poster or presentation, check out these tips.) This year, I’m co-author on three posters and an oral presentation.

  • 1056-P in category 12-D Clinical Therapeutics/New Technology–Insulin Delivery Systems, Preliminary Characterization of Rhythmic Glucose Variability In Individuals With Type 1 Diabetes, co-authored by Dana Lewis and Azure Grant.
    • Come see us at the poster session, 12-1pm on Sunday! Dana & Azure will be presenting this poster.
  • 76-OR, In-Depth Review of Glycemic Control and Glycemic Variability in People with Type 1 Diabetes Using Open Source Artificial Pancreas Systems, co-authored by Andreas Melmer, Thomas Züger, Dana Lewis, Scott Leibrand, Christoph Stettler, and Markus Laimer.
    • Come hear our presentation in room S-157 (South, Upper Mezzanine Level), 2:15-2:30 pm on Saturday!
  • 117-LB, DIWHY: Factors Influencing Motivation, Barriers and Duration of DIY Artificial Pancreas System Use Among Real-World Users, co-authored by Katarina Braune, Shane O’Donnell, Bryan Cleal, Ingrid Willaing, Adrian Tappe, Dana Lewis, Bastian Hauck, Renza Scibilia, Elizabeth Rowley, Winne Ko, Geraldine Doyle, Tahar Kechadi, Timothy C. Skinner, Klemens Raille, and the OPEN consortium.
    • Come see us at the poster session, 12-1pm on Sunday! Scott will be presenting this poster.
  • 78-LB, Detailing the Lived Experiences of People with Diabetes Using Do-it-Yourself Artificial Pancreas Systems – Qualitative Analysis of Responses to Open-Ended Items in an International Survey, co-authored by Bryan Cleal, Shane O’Donnell, Katarina Braune, Dana Lewis, Timothy C. Skinner, Bastian Hauck, Klemens Raille, and the OPEN consortium.
    • Come see us at the poster session, 12-1pm on Sunday! Bryan Cleal will be presenting this poster.

See below for full written summaries and pictures from each poster and the oral presentation.

First up: the biological rhythms poster, formally known as 1056-P in category 12-D Clinical Therapeutics/New Technology–Insulin Delivery Systems, Preliminary Characterization of Rhythmic Glucose Variability In Individuals With Type 1 Diabetes!

Lewis_Grant_BiologicalRhythmsT1D_ADA2019

As mentioned in this DiabetesMine interview, Azure Grant & I were thrilled to find out that we have been awarded a JDRF grant to further this research and undertake the first longitudinal study to characterize biological rhythms in T1D, which could also be used to inform improvements and personalize closed loop systems. This poster is part of the preliminary research we did in order to submit for this grant.

There is also a Twitter thread for this poster.

Background:

  • Human physiology, including blood glucose, exhibits rhythms at multiple timescales, including hours (ultradian, UR), the day (circadian, CR), and the ~28-day female ovulatory cycle (OR).
  • Individuals with T1D may suffer rhythmic disruption due not only to the loss of insulin, but to injection of insulin that does not mimic natural insulin rhythms, the presence of endocrine-timing disruptive medications, and sleep disruption.
  • However, rhythms at multiple timescales in glucose have not been mapped in a large population of T1D, and the extent to which glucose rhythms differ in temporal structure between T1D and non-T1D individuals is not known.

Data & Methods:

  • The initial data set used for this work leverages the OpenAPS Data Commons. (This data set is available for all researchers – see www.OpenAPS.org/data-commons.)
  • All data was processed in Matlab 2018b with code written by Azure Grant. Frequency decompositions using the continuous Morlet wavelet transformation were created to assess changes in the rhythmic composition of normalized blood glucose data from 5 non-T1D individuals and of anonymized, retrospective CGM data from 19 T1D individuals using a DIY closed loop APS. Wavelet algorithms were modified from code made available by Dr. Tanya Leise at Amherst College (see http://bit.ly/LeiseWaveletAnalysis).
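To make the method more concrete, here is a minimal, illustrative sketch in Python (the original analysis was done in Matlab) of a continuous Morlet wavelet decomposition of CGM data, using PyWavelets and a synthetic glucose trace in place of real donor data:

```python
import numpy as np
import pywt

# Hypothetical CGM series: one reading every 5 minutes for ~2 weeks,
# with a synthetic 24-hour oscillation standing in for real donor data.
sampling_period_hours = 5 / 60
t = np.arange(int(14 * 24 / sampling_period_hours)) * sampling_period_hours
glucose = 120 + 30 * np.sin(2 * np.pi * t / 24)

# Normalize the series before decomposition, as described in the methods.
glucose_norm = (glucose - glucose.mean()) / glucose.std()

# Continuous Morlet wavelet transform across a range of scales.
scales = np.arange(1, 512)
coefficients, freqs = pywt.cwt(glucose_norm, scales, 'morl',
                               sampling_period=sampling_period_hours)

periods_hours = 1 / freqs          # convert frequencies (cycles/hour) to periods
power = np.abs(coefficients) ** 2  # wavelet power per timescale and time point

# Average power in a circadian (~20-28 h) vs. ultradian (1-6 h) band.
circadian_power = power[(periods_hours >= 20) & (periods_hours <= 28)].mean()
ultradian_power = power[(periods_hours >= 1) & (periods_hours <= 6)].mean()
print(f"CR power: {circadian_power:.3f}, UR power: {ultradian_power:.3f}")
```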

Results:

  • Inter- and Intra-Individual Variability of Glucose Ultradian and Circadian Rhythms is Greater in T1D
Figure_BiologicalRhythms_Lewis_Grant_ADA2019

Figure 1. Single individual blood glucose over ~1 year with A) high daily rhythm stability and B) low daily rhythm stability. Low glucose is shown in blue, high glucose in orange.

Figure 2. T1D individuals (N=19) showed a wide range of rhythmic power at the circadian and long-period ultradian timescales compared to individuals without T1D (N=5).

A) Individuals’ CR and UR power, reflecting the amplitude and stability of CRs, varies widely in T1D individuals compared to those without T1D. UR power was of longer periodicity (>= 6 h) in T1D, likely due to duration of insulin action (DIA) effects, whereas UR power was most commonly in the 1-3 hour range in non-T1D individuals (not shown). B) On average, both CR and UR power were significantly higher in T1D (p<.05, Kruskal-Wallis). This is most likely due to the higher amplitude of glucose oscillation, shown for two individuals in C.
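For readers who want to reproduce this kind of group comparison, here is a minimal sketch (not the original analysis code) of the nonparametric test referenced above, applied to hypothetical per-individual wavelet power values:

```python
import numpy as np
from scipy import stats

# Hypothetical per-individual circadian wavelet power values for each group
# (illustrative numbers only, not the study data).
cr_power_t1d = np.array([0.8, 1.4, 2.1, 0.6, 1.9, 2.5, 1.1, 0.9, 1.7, 2.0])
cr_power_non_t1d = np.array([0.5, 0.6, 0.4, 0.7, 0.5])

# Kruskal-Wallis test: do the two groups' power distributions differ?
statistic, p_value = stats.kruskal(cr_power_t1d, cr_power_non_t1d)
print(f"Kruskal-Wallis H = {statistic:.2f}, p = {p_value:.3f}")
```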

Conclusions:

  • This is the first longitudinal analysis of the structure and variability of multi-timescale biological rhythms in T1D, compared to non-T1D individuals.
  • Individuals with T1D show a wide range of circadian and ultradian rhythmic amplitudes and stabilities, resulting in higher average and more variable wavelet power than in a smaller sample of non-T1D individuals.
  • Ultradian rhythms of people with T1D are of longer periodicity than those of individuals without T1D. These analyses constitute a first pass over a subset of these data sets, and will be continued over the next year.

Future work:

  • JDRF has recently funded our exploration of the Tidepool Big Data Donation Project, the OpenAPS Data Commons, and a set of non-T1D control data in order to map biological rhythms of glucose/insulin.
  • We will use signal processing techniques to thoroughly characterize URs, CRs, and ORs in glucose/insulin data for T1D; evaluate whether stably rhythmic timing of glucose is associated with improved outcomes (lower HbA1c); and ultimately evaluate whether modulation of insulin delivery based on time of day or time of ovulatory cycle could lead to improved outcomes.
  • Mapping population heterogeneity of these rhythms in people with and without T1D will improve understanding of real-world rhythmicity, and may lead to non-linear algorithms for optimizing glucose in T1D.

Acknowledgements:

We thank the OpenAPS community for their generous donation of data, and JDRF for the grant award to further this work, beginning in July 2019.

Contact:

Feel free to contact us at Dana@OpenAPS.org or azuredominique@berkeley.edu.

Next up, 78-LB, Detailing the Lived Experiences of People with Diabetes Using Do-it-Yourself Artificial Pancreas Systems – Qualitative Analysis of Responses to Open-Ended Items in an International Survey, co-authored by Bryan Cleal, Shane O’Donnell, Katarina Braune, Dana Lewis, Timothy C. Skinner, Bastian Hauck, Klemens Raille, and the OPEN consortium.

78-LB_LivedExperiencesDIYAPS_OPEN_ADA2019

There is also a Twitter thread for this poster.

Introduction

There is currently a wave of interest in Do-it-Yourself Artificial Pancreas Systems (DIYAPS), but knowledge about how the use of these systems impacts the lives of those who build and use them remains limited. Until now, only a select few have been able to give voice to their experiences in a research context. In this study we present data that addresses this shortcoming, detailing the lived experiences of people using DIYAPS in an extensive and diverse way.

Methods

An online survey with 34 items was distributed to DIYAPS users recruited through the Facebook groups “Looped” (and regional sub-groups) and Twitter pages of the Diabetes Online Community (DOC). Participants were asked two open-ended questions in the survey, which garnered personal DIYAPS stories covering knowledge acquisition, decision-making, support, and emotional aspects of initiating DIYAPS; perceived changes in clinical and quality of life (QoL) outcomes after initiation; and difficulties encountered in the process. All answers were analyzed using thematic content analysis.

Results

In total, 886 adults responded to the survey and there were a combined 656 responses to the two open-ended items. Knowledge of DIYAPS was primarily obtained via exposure to the communication fora that constitute the DOC. The DOC was also a primary source of practical and emotional support (QUOTES A). Dramatic improvements in clinical and QoL outcomes were consistently reported (QUOTES B). The emotional impact was overwhelmingly positive, with participants emphasizing that the persistent presence of diabetes in everyday life was markedly reduced (QUOTES C). Acquisition of the requisite devices to initiate DIYAPS was sometimes problematic and some people did find building the systems to be technically challenging (QUOTE D). Overcoming these challenges did, however, leave people with a sense of accomplishment and, in some cases, improved levels of understanding and engagement with diabetes management (QUOTE E).

QuotesA_OPEN_ADA2019 QuotesB_OPEN_ADA2019 QuotesC_OPEN_ADA2019 QuotesD_OPEN_ADA2019 QuotesE_OPEN_ADA2019

Conclusion

The extensive testimony from users of DIYAPS acquired in this study provides new insights regarding the contours of this evolving phenomenon, highlighting the factors inspiring people to adopt such solutions and underlining the transformative impact effective closed-loop systems have on the everyday lives of people with diabetes. Although DIYAPS is not a viable solution for everyone with type 1 diabetes, there is much to learn from those who have taken this route, and the life-changing results they have achieved should inspire everyone with an interest in artificial pancreas technology to pursue a future where all people with type 1 diabetes can reap the benefits it potentially provides.

Also, see this word cloud generated from the 665 responses to the two open-ended questions in the survey:

Wordle_OPEN_ADA2019

Next up is 117-LB, DIWHY: Factors Influencing Motivation, Barriers and Duration of DIY Artificial Pancreas System Use Among Real-World Users, co-authored by Katarina Braune, Shane O’Donnell, Bryan Cleal, Ingrid Willaing, Adrian Tappe, Dana Lewis, Bastian Hauck, Renza Scibilia, Elizabeth Rowley, Winne Ko, Geraldine Doyle, Tahar Kechadi, Timothy C. Skinner, Klemens Raille, and the OPEN consortium.

DIWHY_117-LB_OPEN_ADA2019

There is also a Twitter thread for this poster.

Background

Until recently, digital innovations in healthcare have typically followed a ‘top-down’ pathway, with manufacturers leading the design and production of technology-enabled solutions and patients involved only as users of the end-product. However, this is now being disrupted by the increasing influence and popularity of more ‘bottom-up’ and patient-led open source initiatives. A primary example is the growing movement of people with diabetes (PwD) who create their own “Do-it-Yourself” Artificial Pancreas Systems (DIY APS) through remote-control of medical devices employing an open source algorithm.

Objective

Little is known about why PwD leave traditional care pathways and turn to DIY technology. This study aims to examine the motivations of current DIYAPS users and their caregivers.

Research Design and Methods

An online survey with 34 items was distributed to DIYAPS users recruited through the Facebook groups “Looped” (and regional sub-groups) and Twitter pages of the “DOC” (Diabetes Online Community). Self-reported data was collected, managed and analyzed using the secure REDCap electronic data capture tools hosted at Charité – Universitaetsmedizin Berlin.

Results

1058 participants from 34 countries (81.3 % Europe, 14.7 % North America, 6.0 % Australia/WP, 3.1 % Asia, 0.1 % Africa) responded to the survey. The majority were adults (80.2 %) with type 1 diabetes (98.9 %) using a DIY APS themselves (43.0 % female, 56.8 % male, 0.3 % other), with a median age of 41 y and an average diabetes duration of 25.2 y ±13.3. 19.8 % of the participants were parents and/or caregivers of children with type 1 diabetes (99.4 %) using a DIY APS (47.4 % female, 52.6 % male), with a median age of 10 y and an average diabetes duration of 5.1 y ±3.8. People used various DIYAPS (58.2 % AndroidAPS, 28.5 % Loop, 18.8 % OpenAPS, 5.7 % other) for an average duration of 10.1 months ±17.6, and reported an overall HbA1c improvement of -0.83 % (from 7.07 % ±1.07 to 6.24 % ±0.68) and an overall Time in Range improvement of +19.86 % (from 63.21 % ±16.27 to 83.07 % ±10.11). Participants indicated that DIY APS use required them to pay out-of-pocket costs in addition to their standard healthcare expenses, with an average of 712 USD spent per year.

Primary motivations for building a DIYAPS were to improve overall glycaemic control, reduce acute and long-term complication risk, increase life expectancy, and put diabetes on ‘auto-pilot’ so as to interact less frequently with the system. The lack of commercially available closed loop systems and improvement of sleep quality were motivations for some. For caregivers, improvement of their own sleep quality was the leading motivation. For adults, curiosity (medical or technical interest) had a higher impact on their motivation compared to caregivers. Some people felt that commercial systems do not suit their individual needs and preferred to use a customizable system, which is only available to them as a DIY solution. Other reasons, like the costs of commercially available systems and unachieved therapy goals, played a subordinate role. Lack of medical or psychosocial support was less likely to be a motivating factor for either group.

Figure_OPEN_DIWHY_ADA2019

Conclusions

Our findings suggest that people using Do-it-Yourself Artificial Pancreas systems and their caregivers are highly motivated to improve their/their children’s diabetes management through the use of this novel technology. They are also able to access and afford the tools needed to use these systems. Currently approved and available commercial therapy options may not be sufficiently flexible or customizable to fulfill their individual needs. As part of the project “OPEN”, the results of the DIWHY survey may contribute to a better understanding of the unmet needs of PwD and the current challenges to uptake, which will, in turn, facilitate dialogue and collaboration to strengthen the involvement of open source approaches in healthcare.

This is a written version of the oral presentation, In-Depth Review of Glycemic Control and Glycemic Variability in People with Type 1 Diabetes Using Open Source Artificial Pancreas Systems, co-authored by Andreas Melmer, Thomas Züger, Dana Lewis, Scott Leibrand, Christoph Stettler, and Markus Laimer.

APSComponents_Melmer_ADA2019

Artificial Pancreas Systems (APS) now exist, leveraging a CGM sensor, a pump, and a control algorithm. Faster insulin can play a role, too. Traditionally, APS has been developed by commercial industry, tested by clinicians, and regulated before patients can access it. DIYAPS, however, is designed by patients for individual use.

There are now multiple DIYAPS systems in use: #OpenAPS, Loop, and AndroidAPS. There are differences in hardware, pump, and software configurations. The main algorithm for OpenAPS is also used in AndroidAPS. DIYAPS can work offline, and can also leverage the cloud for accessing or displaying data, including for remote monitoring.

OnlineOffline_Melmer_ADA2019
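To illustrate the basic idea behind any of these systems (this is not the actual OpenAPS, Loop, or AndroidAPS algorithm, just a deliberately simplified sketch), a closed loop repeatedly reads the CGM, predicts where glucose is heading, and adjusts insulin delivery toward a target:

```python
# Deliberately simplified, illustrative closed-loop logic (NOT the real
# OpenAPS/Loop/AndroidAPS algorithm): predict glucose from the current CGM
# reading and insulin on board, then recommend a temporary basal rate.

def recommend_temp_basal(cgm_mgdl: float,
                         insulin_on_board_units: float,
                         target_mgdl: float = 100.0,
                         isf_mgdl_per_unit: float = 50.0,
                         scheduled_basal_u_per_hr: float = 1.0) -> float:
    """Return a temporary basal rate (U/hr) for the next cycle."""
    # Naive prediction: current glucose minus the expected drop from IOB.
    predicted_mgdl = cgm_mgdl - insulin_on_board_units * isf_mgdl_per_unit

    # Insulin needed (or surplus) to bring the prediction back to target.
    insulin_needed = (predicted_mgdl - target_mgdl) / isf_mgdl_per_unit

    # Deliver the correction over the next hour on top of the scheduled basal;
    # never go below zero (suspend) and cap the increase for safety.
    temp_basal = scheduled_basal_u_per_hr + insulin_needed
    return max(0.0, min(temp_basal, 4 * scheduled_basal_u_per_hr))

# Predicted high -> raise the temp basal; predicted low -> suspend insulin.
print(recommend_temp_basal(cgm_mgdl=180, insulin_on_board_units=0.5))  # 2.1
print(recommend_temp_basal(cgm_mgdl=70, insulin_on_board_units=1.0))   # 0.0
```

Real systems add many layers on top of this: carbohydrate absorption modeling, safety limits, multi-step predictions, and user-configurable preferences.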

This study analyzed data from the OpenAPS Data Commons (see more here). At the time this data set was used, there were n=80 anonymized data donors from the #OpenAPS community, with a combined 53+ years’ worth of CGM data.

TIR_PostLooping_Melmer_ADA2019

Looking at results for #OpenAPS data donors post-looping initiation, CV was 35.5±5.9% and eA1c was 6.4±0.7%. TIR (3.9-10 mmol/L) was 77.5%. Time spent >10 mmol/L was 18.2%; time spent <3.9 mmol/L was 4.3%.
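These are standard CGM-derived metrics. As a rough sketch (not the study’s actual analysis code), here is one way such metrics can be computed from a series of CGM readings; the eA1c conversion uses the commonly cited ADAG-derived formula:

```python
import numpy as np

def cgm_metrics(glucose_mmol: np.ndarray) -> dict:
    """Compute CV, estimated A1c, and time in/above/below range from CGM readings."""
    mean_mmol = glucose_mmol.mean()
    cv_percent = 100 * glucose_mmol.std() / mean_mmol

    # Estimated A1c from mean glucose, using the commonly cited ADAG-derived
    # relationship: eA1c (%) = (mean glucose in mg/dL + 46.7) / 28.7.
    mean_mgdl = mean_mmol * 18.016  # convert mmol/L to mg/dL
    ea1c_percent = (mean_mgdl + 46.7) / 28.7

    tir = np.mean((glucose_mmol >= 3.9) & (glucose_mmol <= 10.0)) * 100
    above = np.mean(glucose_mmol > 10.0) * 100
    below = np.mean(glucose_mmol < 3.9) * 100
    return {"CV %": cv_percent, "eA1c %": ea1c_percent,
            "TIR %": tir, "time >10 %": above, "time <3.9 %": below}

# Example with a short hypothetical series of readings (mmol/L):
readings = np.array([5.2, 6.8, 9.4, 11.2, 7.5, 4.1, 3.7, 6.0, 8.3, 10.5])
print(cgm_metrics(readings))
```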

SubcohortData_Melmer_ADA2019

We selected a subcohort of n=34 who had data available from before DIY closed looping initiation (a combined 6.5 years of CGM records), as well as data from after (12.5 years of CGM records).

For the next set of graphs, blue is BEFORE initiation (when participants were just on a traditional pump); red is AFTER, when they were using DIYAPS.

TIR_PrePost_Melmer_ADA2019

Time in range significantly increased for both the wider (3.9-10 mmol/L) and tighter (3.9-7.8 mmol/L) ranges.

TOR_PrePost_Melmer_ADA2019

Time spent out of range decreased. The % of time spent >10 mmol/L decreased by -8.3±8.6 (p<0.001), and >13 mmol/L by -3.3±5.0 (p<0.001). Changes in % time spent <3.9 mmol/L (-1.1±3.8, p=0.153) and <3.0 mmol/L (-0.7±2.2, p=0.017) were not significant.

We also analyzed daytime and nighttime separately (the above reflects all 24 hours combined; these graphs show the increase in TIR and decrease in time out of range for both day and night).

TIR_TOR_DayAndNight_Melmer_ADA2019

Hypoglemic_event_reduction_Melmer_ADA2019

There were fewer CGM records in the hypoglycemic range after initiating DIYAPS.

Conclusion: this was a descriptive study analyzing available CGM data from the #OpenAPS Data Commons. The study shows OpenAPS has potential to support glycemic control. However, DIYAPS are currently not regulated/approved technology. Further research is recommended.

Conclusion_Melmer_ADA2019

(Note: a version of this study has been submitted and accepted for publication in the journal Diabetes, Obesity and Metabolism.)