Understanding the Difference Between Open Source and DIY in Diabetes

There’s been a lot of excitement (yay!) about the results of the CREATE trial being published in NEJM, followed by the presentation of the continuation results at EASD. This has generated a lot of blog posts, news articles, and discussion about what was studied and what the implications are.

One area that I’ve noticed is frequently misunderstood is how “open source” and “DIY” are different.

Open source means that the source code is openly available to view. There are different licenses with open source; most allow you to also take, reuse, and modify the code however you like. Some “copy-left” licenses require commercial entities to open-source any software they build using such code. Most companies can and do use open source code, too, although in healthcare most algorithms and other code related to FDA-regulated activity are proprietary. Most open source licenses allow free individual use.

For example, OpenAPS is open source. You can find the core code of the algorithm here, hosted on GitHub, and read every line of code. You can take it, copy it, use it as-is, or modify it however you like, because the MIT license we put on the code says you can!

As an individual, you can choose to use the open source code to “DIY” (do-it-yourself) an automated insulin delivery system. You’re DIY-ing, meaning you’re building it yourself rather than buying it or a service from a company.

In other words, you can DIY with open source. But open source and DIY are not the same thing!

Open source can be, and in most industries usually is, used commercially. In healthcare, and in diabetes specifically, there are only a few examples of this. For OpenAPS, as you can read in our plain language reference design, we wanted companies to use our code as well as individuals (who would DIY with it). There’s at least one commercial company now using ideas from the OpenAPS codebase and our safety design as a safety layer around their ML algorithm, to make sure that the insulin dosing decisions are checked against our safety design. How cool!

However, they’re a company, and they have wrapped up their combination of proprietary software and the open source software they have implemented, gotten a CE mark (European equivalent of FDA approval), and commercialized and sold their AID product to people with diabetes in Europe. So, those customers/users/people with diabetes are benefitting from open source, although they are not DIY-ing their AID.

Outside of healthcare, open source is used far more pervasively. Have you ever used Zoom? Zoom uses open source; you then use Zoom, although not in a DIY way. Same with Firefox, the browser. Ever heard of Adobe? They use open source. Facebook. Google. IBM. Intel. LinkedIn. Microsoft. Netflix. Oracle. Samsung. Twitter. Nearly every product or service you use is built with, depends on, or contains open source components. Oftentimes, open source is used by companies to then provide products to users – but not always.

So, to more easily understand how to talk about open source vs DIY:

  • The CREATE trial used a version of open source software and algorithm (the OpenAPS algorithm inside a modified version of the AndroidAPS application) in the study.
  • The study was NOT on “DIY” automated insulin delivery; the AID system was handed/provided to participants in the study. There was no DIY component in the study, although the same software is used both in the study and in the real world community by those who do DIY it. Instead, the point of the trial was to study the safety and efficacy of this version of open source AID.
  • Open source is not the same as DIY.
  • OpenAPS is open source and can be used by anyone – companies that want to commercialize, or individuals who want to DIY. For more information about our vision for this, check out the OpenAPS plain language reference design.
Venn diagram showing a small overlap between a bigger open source circle and a smaller DIY circle. An arrow points to the overlapping section, along with text of "OpenAPS". Below it text reads: "OpenAPS is open source and can be used DIY. DIY in diabetes often uses open source, but not always. Not all open source is used DIY."

Continuation Results On 48 Weeks of Use Of Open Source Automated Insulin Delivery From the CREATE Trial: Safety And Efficacy Data

In addition to the primary endpoint results from the CREATE trial, which you can read more about in detail here or as published in the New England Journal of Medicine, there was also a continuation phase study of the CREATE trial. This meant that all participants from the CREATE trial, including those who were randomized to the automated insulin delivery (AID) arm and those who were randomized to sensor-augmented insulin pump therapy (SAPT, which means just a pump and CGM, no algorithm), had the option to continue for another 24 weeks using the open source AID system.

These results were presented by Dr. Mercedes J. Burnside at #EASD2022, and I’ve summarized her presentation and the results below on behalf of the CREATE study team.

What is the “continuation phase”?

The CREATE trial was a multi-site, open-labeled, randomized, parallel-group, 24-week superiority trial evaluating the efficacy and safety of an open-source AID system using the OpenAPS algorithm in a modified version of AndroidAPS. Our study found that across children and adults, the percentage of time that the glucose level was in the target range of 3.9-10mmol/L [70-180mg/dL] was 14 percentage points higher among those who used the open-source AID system (95% confidence interval [CI], 9.2 to 18.8; P<0.001) compared to those who used sensor augmented pump therapy; a difference that corresponds to 3 hours 21 minutes more time spent in target range per day. The system did not contribute to any additional hypoglycemia. Glycemic improvements were evident within the first week and were maintained over the 24-week trial. This illustrates that all people with T1D, irrespective of their level of engagement with diabetes self-care and/or previous glycemic outcomes, stand to benefit from AID. This initial study concluded that open-source AID using the OpenAPS algorithm within a modified version of AndroidAPS, a widely used open-source AID solution, is efficacious and safe. These results were from the first 24-week phase when the two groups were randomized into SAPT and AID, accordingly.
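The headline numbers convert between percentage points of the day and clock time. As a quick back-of-the-envelope check (the function name here is mine, not from the study):

```python
# Back-of-the-envelope: convert a time-in-range (TIR) difference,
# expressed in percentage points of the day, into hours and minutes.
def tir_points_to_daily_time(percentage_points: float) -> str:
    total_minutes = int(percentage_points / 100 * 24 * 60)  # truncate partial minutes
    hours, minutes = divmod(total_minutes, 60)
    return f"{hours}h {minutes}m"

# The trial's 14-point difference works out to roughly 3h 21m per day
print(tir_points_to_daily_time(14))
```

This matches the 3 hours 21 minutes of additional time in range per day reported for the 14 percentage point difference.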

The second 24-week phase is known as the “continuation phase” of the study.

There were 52 participants who were randomized into the SAPT group who chose to continue in the study and used AID for the 24-week continuation phase. We refer to those as the “SAPT-AID” group. There were 42 participants initially randomized into AID who continued to use AID for another 24 weeks (the AID-AID group).

One slight change to the continuation phase was that those in the SAPT-AID used a different insulin pump than the one used in the primary phase of the study (and 18/42 AID-AID participants also switched to this different pump during the continuation phase), but it was a similar Bluetooth-enabled pump that was interoperable with the AID system (app/algorithm) and CGM used in the primary outcome phase.

All 42 participants in AID-AID completed the continuation phase; 6 participants (out of 52) in the SAPT-AID group withdrew: one due to infusion site issues, three due to pump issues, and two because they preferred SAPT.

What are the results from the continuation phase?

In the continuation phase, those in the SAPT-AID group saw a change in time in range (TIR) from 55±16% to 69±11% when they used AID. In the SAPT-AID group, the percentage of participants who were able to achieve the target goals of TIR >70% and time below range (TBR) <4% increased from 11% of participants during SAPT use to 49% during the 24-week AID use in the continuation phase. As in the primary phase for AID-AID participants, the SAPT-AID participants saw the greatest treatment effect overnight, with a TIR difference of 20.37% (95% CI, 17.68 to 23.07; p<0.001), and 9.21% during the day (95% CI, 7.44 to 10.98; p<0.001) during the continuation phase with open source AID.

Those in the AID-AID group, meaning those who continued for a second 24 week period using AID, saw similar TIR outcomes. Prior to AID use at the start of the study, TIR for that group was 61±14% and increased to 71±12% at the end of the primary outcome phase; after the next 6 months of the continuation phase, TIR was maintained at 70±12%. In this AID-AID group, the percentage of participants achieving target goals of TIR >70% and TBR <4% was 52% of participants in the first 6 months of AID use and 45% during the continuation phase. Similarly to the primary outcomes phase, in the continuation phase there was also no treatment effect by age interaction (p=0.39).

The TIR outcomes between both groups (SAPT-AID and AID-AID) were very similar after each group had used AID for 24 weeks (the SAPT-AID group using AID for 24 weeks during the continuation phase, and AID-AID using AID for 24 weeks during the initial RCT phase). The adjusted difference in TIR between these groups was 1% (95% CI, -4 to 6; p=0.67). There were no glycemic outcome differences between those using the two different study pumps (n=69, which was the SAPT-AID user group plus the 18 AID-AID participants who switched for continuation; and n=25, from the AID-AID group who elected to continue on the pump they used in the primary outcomes phase).

In the initial primary results (the first 24 weeks of the trial, comparing the AID group to the SAPT group), there was a 14 percentage point difference between the groups. In the continuation phase, when everyone used AID, the adjusted mean difference in TIR relative to the initial SAPT results was a similar 12.10 percentage points (95% CI, p<0.001, SD 8.40).

Similar to the primary phase, there was no DKA or severe hypoglycemia. Long-term use (over 48 weeks, representing 69 person-years) did not detect any rare severe adverse events.

CREATE results from the full 48 weeks on open source AID with both SAPT (control) and AID (intervention) groups plotted on the graph.

Conclusion of the continuation study from the CREATE trial

In conclusion, the continuation study from the CREATE trial found that open-source AID using the OpenAPS algorithm within a modified version of AndroidAPS is efficacious and safe with various hardware (pumps), and demonstrates sustained glycaemic improvements without additional safety concerns.

Key points to take away:

  • Over 48 weeks total of the study (6 months or 24 weeks in the primary phase; 6 months/24 weeks in the continuation phase), there were 64 person-years of use of open source AID in the study, compared to 59 person-years of use of sensor-augmented pump therapy.
  • A variety of pump hardware options were used in the primary phase of the study among the SAPT group, due to hardware (pump) availability limitations. Different pumps were also used in the SAPT-AID group during the AID continuation phase, compared to the pumps available in the AID-AID group throughout both phases of trial. (Also, 18/42 of AID-AID participants chose to switch to the other pump type during the continuation phase).
  • The similar TIR results (a 14 percentage point difference in the primary phase and a 12 percentage point difference in the continuation phase between AID and SAPT groups) show the durability of the open source AID and algorithm used, regardless of pump hardware.
  • The SAPT-AID group achieved similar TIR results at the end of their first 6 months of use of AID when compared to the AID-AID group at both their initial 6 months use and their total 12 months/48 weeks of use at the end of the continuation phase.
  • The safety data showed no DKA or severe hypoglycemia in either the primary phase or the continuation phases.
  • Glycemic improvements from this version of open source AID (the OpenAPS algorithm in a modified version of AndroidAPS) are not only immediate but also sustained, and do not increase safety concerns.
CREATE Trial Continuation Results were presented at #EASD2022 on 48 weeks of use of open source AID

Wondering about the “how” rather than the “why” of autoimmune conditions

I’ve been thinking a lot about stigma, per a previous post of mine, and how I generally react to, learn about, and figure out how to deal with new chronic diseases.

I’ve observed a pattern in my experiences. When I suspect an issue, I begin with research. I read medical literature to find out the basics of what is known. I read a high volume of material, over a range of years, to see what is known and the general “ground truth” about what has stayed consistent over the years and where things might have changed. This is true for looking into causal mechanisms as well as diagnosis and then more importantly to me, management/treatment.

I went down a new rabbit hole of research and most articles were publicly accessible

A lot of times with autoimmune related diseases…the causal mechanism is unknown. There are correlations, there are known risk factors, but there’s not always a clear answer of why things happen.

I realize that I am lucky that my first “thing” (type 1 diabetes) was known to be an autoimmune condition, and that has probably framed my response to celiac disease (6 years later); exocrine pancreatic insufficiency (19+ years after diabetes); and now Graves’ disease (19+ years after diabetes). Why do I think that is lucky? Because when I’m diagnosed with an autoimmune condition, it’s not a surprise that it IS an autoimmune condition. When you have a nicely overactive immune system, it interferes with how your body is managing things. In type 1 diabetes, it eventually makes it so the beta cells in your pancreas no longer produce insulin. In celiac, it makes it so the body has an immune reaction to gluten, and the villi in your small intestine freak out at the microscopic, crumb-level presence of gluten (and if you keep eating gluten, it can cause all sorts of damage). In exocrine pancreatic insufficiency, there is possibly either atrophy as a result of the pancreas not producing insulin, or other immune-related responses – or similar theories related to EPI and celiac in terms of immune responses. It’s not clear ‘why’ or which mechanism (celiac, T1D, or autoimmune in general) caused my EPI, and not knowing that doesn’t bother me, because it’s clearly linked to autoimmune shenanigans. Now with Graves’ disease, I also know that low TSH and increased thyroid antibodies are causing subclinical hyperthyroidism symptoms (such as an occasional minor tremor and increased resting HR, among others), and Graves’ ophthalmopathy symptoms as a result of the thyroid antibodies. The low TSH and increased thyroid antibodies are a result of my immune system deciding to poke at my thyroid.

All this to say…I typically wonder less about “why” I have gotten these things, in part because the “why” doesn’t change “what” to do; I simply keep gathering new data points that I have an overactive immune system that gives me autoimmune stuff to deal with.

I have contrasted this with a lot of posts I observe in some of the online EPI groups I am a part of. Many people get diagnosed with EPI as a result of ongoing GI issues, which may or may not be related to other conditions (like IBS, which is often a catch-all for GI issues). But there’s a lot of posts wondering “why” they’ve gotten it, seemingly out of the blue.

When I do my initial research/learning on a new autoimmune thing, as I mentioned I do look for causal mechanisms to see what is known or not known. But that’s primarily, I think, to rule out if there’s anything else “new” going on in my body that this mechanism would inform me about. But 3/3 times (following type 1 diabetes, where I first learned about autoimmune conditions), it’s primarily confirmed that I have autoimmune things due to a kick-ass overactive immune system.

What I’ve realized that I often focus on, and most others do not, is what comes AFTER diagnosis. It’s the management (or treatment) of, and living with, these conditions that I want to know more about.

And sadly, especially in the latest two experiences (exocrine pancreatic insufficiency and Graves’ disease), there is not enough known about management and optimization of dealing with these conditions.

I’ve previously documented and written quite a bit (see a summary of all my posts here) about EPI, including my frustrations about “titrating” or getting the dose right for the enzymes I need to take every single time I eat something. This is part of the “management” gap I find in research and medical knowledge. It seems like clinicians and researchers spend a lot of time on the “why” and the diagnosis/starting point of telling someone they have a condition. But there is way less research about “how” to live and optimally manage these things.

My fellow patients (people with lived experiences) are probably saying “yeah, duh, and that’s the power of social media and patient advocacy groups to share knowledge”. I agree. I say that a lot, too. But one of the reasons these online social media groups are so powerful in sharing knowledge is because of the black hole or vacuum or utter absence of research in this space.

And it’s frustrating! Social media can be super powerful because you can learn about many n=1 experiences. If you’re like me, you analyze the patterns to see what might be reproducible and what is worth experimenting in my own n=1. But often, this knowledge stays in the real world. It is not routinely funded, studied, operationalized, and translated in systematic ways back to healthcare providers. When patients are diagnosed, they’re often told the “what” and occasionally the “why” (if it exists), but left to sometimes fall through the cracks in the “how” of optimally managing the new condition.

(I know, I know. I’m working on that, in diabetes and EPI, and I know dozens of friends, both people with lived experiences and researchers who ARE working on this, from diabetes to brain tumors to Parkinson’s and Alzheimer’s and beyond. And while we are moving the needles here, and making a difference, I’m wanting to highlight the bigger issue to those who haven’t previously been exposed to the issues that cause the gaps we are trying to fill!)

In my newest case of Graves’ disease, it presented with subclinical hyperthyroidism. As I wrote here, for me that means lower TSH and higher thyroid antibodies, but in-range T3 and T4. In discussion with my physician, we decided to try an antithyroid drug to try to lower the antibody levels, because the antibody levels are what cause the related eye symptoms (and they’re quite bothersome). The other primary symptom I have is a higher resting HR, which is also really annoying, so I’m hoping the medication helps with that, too. The game plan was to start taking this medication every day and get follow-up labs in about 2 months, because it takes ~6 weeks to see the change in thyroid levels.

Let me tell you, that’s a long time. I get that the medication does not act on stored thyroid hormone; it only impacts new production, and that’s why it takes ~6 weeks to see the change in the labs: that’s how long it takes to cycle through the thyroid hormone already stored in your body.
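As an illustrative aside (my own back-of-the-envelope, not medical advice): T4 has a plasma half-life of roughly a week, which is consistent with that ~6-week timeline:

```python
# Rough illustration: T4 (thyroxine) has a plasma half-life of
# about 7 days, so stored hormone decays over several weeks.
T4_HALF_LIFE_DAYS = 7.0

def fraction_remaining(days: float) -> float:
    # Standard exponential decay expressed in half-lives
    return 0.5 ** (days / T4_HALF_LIFE_DAYS)

# After ~6 weeks (42 days), under 2% of the originally stored hormone remains
print(round(fraction_remaining(42), 4))
```

Six half-lives gets you down to about 1.6% of the starting amount, which is roughly why labs are rechecked around the 6-week mark.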

My hope was that within 2-3 weeks I would see a change in my resting HR levels. I wasn’t sure what else to expect, and whether I’d see any other changes.

But I did.

It was in the course of DAYS, not weeks. It was really surprising! I immediately started to see a change in my resting HR (across two different wearable devices: a ring and a watch). Within a week, my phone’s health app flagged it as a “trend”, too, and pinpointed the day (which it didn’t know) that I had started the new medication, based on the change in the trending HR values.

Additionally, some of my eye symptoms went away. Prior to starting the new medication, I would wake up and my eyes would hurt. Lubricating them (with eye drops throughout the day and gel before bed) helped some, but didn’t really fix the problem. I also had pretty significant red, patchy spots around the outside corner of one of my eyes, and eyelid swelling that would push on my eyeball. Four days into the new medication, I had my first morning where I woke up without my eyes hurting. The next day the pain returned; then I had two days without eye pain; then 3-4 days with painful eyes. Then… now I’m going on 2 weeks without the eye pain?! Meanwhile, I’ve also been tracking the eye swelling. It went down in step with the eye pain going away, but it comes back periodically. Recently, I commented to Scott that I was starting to observe a pattern: the red, patchy skin at the corner of and under my right eye would appear; then the next day the swelling of and above the eyelid would return. After 1-2 days of swelling, it would disappear. Because I’ve been tracking various symptoms, I looked at my data the other day and saw that it’s almost a 6-7 day pattern.
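For anyone curious how a rough cycle length can be eyeballed from a symptom log, here is a minimal sketch (with made-up dates for illustration, not my actual data):

```python
from datetime import date

# Hypothetical symptom log: dates when eye swelling was observed
# (illustrative dates only)
flare_dates = [date(2022, 9, 1), date(2022, 9, 8),
               date(2022, 9, 14), date(2022, 9, 21)]

# Gaps between consecutive flares, in days
gaps = [(b - a).days for a, b in zip(flare_dates, flare_dates[1:])]
avg_gap = sum(gaps) / len(gaps)
print(gaps, round(avg_gap, 1))  # gaps of 7, 6, and 7 days; average ~6.7
```

Even this simple gap-averaging is often enough to surface a recurring 6-7 day rhythm in tracked symptoms.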

Interesting!

Again, the eye stuff is a result of antibody levels. So now I am curious about the production of antibodies and their timeline, and how that differs from TSH and thyroid hormones, and how they’re impacted with this drug.

None of that is information that is easy to get, so I’m deep in the medical literature trying again to find out what is known, whether this type of pattern is known; if it’s common; or if this level of data, like my within-days impact to resting HR change is new information.

Most of the research, sadly, seems to be on pre-diagnosis, or on what happens if you diagnose someone with hyperthyroidism but don’t give them medication. For example, I found this systematic review on HRV and hyperthyroidism and got excited, expecting to learn things that I could use, but found they explicitly removed the 3 studies that involved treating hyperthyroidism and only studied what happens when you don’t treat it.

Sigh.

This is the type of gap that is so frustrating, as a patient or person who’s living with this. It’s the gap I see in EPI, where little is known on optimal titration and people don’t get prescribed enough enzymes and aren’t taught how to match their dosing to what they are eating, the way we are taught in diabetes to match our insulin dosing to what we’re eating.

And it matters! I’m working on writing up data from a community survey of people with EPI, many of whom shared that they don’t feel like they have their enzyme dosing well matched to what they are eating, in some cases 5+ years after their diagnosis. That’s appalling, to me. Many people with EPI and other conditions like this fall through the cracks with their doctors because there’s no plan or discussion on what managing optimally looks like; what to change if it’s not optimal for a person; and what to do or who to talk to if they need help managing.

Thankfully in diabetes, most people are supported and taught that it’s not “just” a shot of insulin, but there are more variables that need tracking and managing in order to optimize wellbeing and glucose levels when living with diabetes. But it took decades to get there in diabetes, I think.

What would it be like if more chronic diseases, like EPI and Graves’ disease (or any other hyper/hypothyroid-related diseases), also had this type of understanding across the majority of healthcare providers who treated and supported managing these conditions?

How much better would and could people feel? How much more energy would they have to live their lives, work, play with their families and friends? How much more would they thrive, instead of just surviving?

That’s what I wonder.

Wondering "how" rather than "why" of autoimmune conditions, by @DanaMLewis from DIYPS.org

2022 Strawberry Fields Forever Ultramarathon Race Report Recap

I recently ran my second-ever 50k ultramarathon. This is my attempt to provide a race recap or “race report”, in part to help people in the future who are considering this race and this course. (I couldn’t find many race reports about this race!)

It’s also an effort to provide an example of how I executed fueling, enzyme dosing (because I have exocrine pancreatic insufficiency, known as EPI), and blood sugar management (because I have type 1 diabetes), because there’s also not a lot of practical guidance or examples of how people do this. A lot of it is individual, and what works for me won’t necessarily work for anyone else, but if anything hopefully it will help other people feel not alone as they work to figure out what works for them!

Context of my running and training in preparation

I wrote quite a bit in this previous post about my training last year for a marathon and my first 50k. Basically, I’m slow, and I also choose to run/walk for my training and racing. This year I’ve been doing 30:60 intervals, meaning I run 30 seconds and walk 60 seconds.

Due to a combination of improved training (and having a year of training under my belt from last year), as well as having recognized that I was not getting sufficient pancreatic enzymes and so was not effectively digesting and using the food I was eating, this year has been going really well. I trained as far as a practice 50k about 5 weeks out from my race. I did several more mid- to high-20-mile runs as well. I also did a next-day run following my long runs, starting around 3-4 miles and eventually increasing to 8 miles the day after my 50k. The goal of these next-day runs was to practice running on tired legs.

Overall, I think this training was very effective for me. My training runs were easy paced, and I always felt like I could run more after I was done. I recovered well, and the next-day runs weren’t painful and I did not have to truncate or skip any of those planned runs. (Previous years, running always felt hard and I didn’t know what it was like to recover “well” until this year.) My paces also increased to about a minute/mile faster than last year’s easy pace. Again, that’s probably a combination of increased running overall and better digestion and recovery.

Last year I chose to run a marathon and then do a 50k while I was “trained up” for my marathon. This year, I wanted to do a 50k as a fitness assessment on the path to a 50 mile race this fall. I looked for local-ish 50k options that did not have much elevation, and found the Strawberry Fields Forever Ultra.

Why I chose this race, and the basics about this race

The Strawberry Fields Forever Ultra met most of my goal criteria, including that it was around the time that I wanted to run a 50k, so that I had almost 6 months to train and also before it got to be too hot and risked being during wildfire smoke season. (Sadly, that’s a season that now overlaps significantly with the summers here.) It’s local-ish, meaning we could drive to it, although we did spend the night before the race in the area just to save some stress the morning of the race. The race nicely started at 9am, and we drove home in the evening after the race.

The race is on a 10k (6.2 miles) looped course in North Bonneville, Washington, and hosted a 10k event (1 lap), a 50k event (5 laps), and also had 100k (10 laps) or (almost) 100 miles (16 laps). It does have a little bit of elevation – or “little” by ultramarathon standards. The site and all reports describe one hill and net 200 feet of elevation gain and loss. I didn’t love the idea of a 200 foot hill, but thought I could make do. It also describes the course as “grass and dirt” trails. You’ll see a map later where I’ve described some key points on the course, and it’s also worth noting that this course is very “crew-able”. Most people hang out at the start/finish, since it’s “just” a 10k loop and people are looping through pretty frequently. However, if you want to, either for moral or practical support, crew could walk over to various points, or my husband brought his e-bike and biked around between points on the course very easily using a mix of the other trails and actual roads nearby.

The course is well marked. Any turn had a white sign with a black arrow on it and also white arrows drawn on the ground, and there were dozens of little red/pink fluorescent flags marking the course. Any time there was a fork in the path, these flags (usually 2-3 for emphasis, which was excellent for tired brains) would guide you to the correct direction.

The nice thing about this race is it includes the 100 mile option and that has a course limit of 30 hours, which means all the other distances also have this course limit of 30 hours. That’s fantastic when a lot of 50k or 50 mile (or 100k, which is 62 miles) courses might have 12 hour or similar tighter course limits. If you wanted to have a nice long opportunity to cover the distance, with the ability to stop and rest (or nap/sleep), this is a great option for that.

With the 50k, I was aiming to match or ideally beat my time from my first 50k, recognizing that this course is harder given the terrain and hill. However, I think my fitness is higher, so beating that time even with the elevation gain seemed reasonable.

Special conditions and challenges of the 2022 Strawberry Fields Forever Ultramarathon

It’s worth noting that in 2021 there was a record abnormal heat wave due to a “heat dome” that made it 100+ degrees (F) during the race. Yikes. I read about that and I am not willing to run a race when I have not trained for that type of heat (or any heat), so I actually waited until the week before the race to officially sign up after I saw the forecast for the race. The forecast originally was 80 F, then bounced around mid 60s to mid 70s, all of which seemed doable. I wouldn’t mind some rain during the race, either, as rainy 50s and 60s is what I’ve been training in for months.

But just to make things interesting, for the 2022 event the Pacific Northwest got an “atmospheric river” that dumped inches of rain on Thursday…and Friday. Gulp. Scott and I drove down to spend the night Friday night before the race, and it was raining hard. I began to worry about the mud that would be on the course before we even started the race. However, the rain finished overnight and we woke up to everything being wet, but not actively raining. It was actually fairly warm (60s), so even if it drizzled during the race it wouldn’t be chilly.

During the start of the race, the race director said we would get wet and joked (I thought) about practicing our backstroke. Then the race started, and we took off.

My race recap / race report of the 2022 Strawberry Fields Forever Ultramarathon

I’ve included a picture below that I was sent a month or so before the race when I asked for a course map, and a second picture because I also asked for the elevation profile. I’ve marked with letters (A-I) points on the course that I’ll describe below for reference, and we ran counterclockwise this year so the elevation map I’ve marked with matching letters where “A” is on the right and “I” is on the left, matching how I experienced the course.

The course is slightly different in the start/finish area, but otherwise is 95% matching what we actually ran, so I didn’t bother grabbing my actual course map from my run since this one was handy and a lot cleaner than my Runkeeper-derived map of the race.

Annotated course map with points A-I
StrawberryFieldsForever-Ultra-Elevation-Profile

My Runkeeper elevation profile of the 50k (5 repeated laps) looked like this:
Runkeeper elevation profile of 5 loops on the Strawberry Fields Forever 50k course

I’ll describe my first experience through the course (Lap 1) in more detail, then a couple of thoughts about the experiences of the subsequent laps, in part to describe fueling and other choices I made.

Lap 1:

We left the start by running across the soccer field and getting on a paved path that hooked around the ballfield and then headed out a gate and up The Hill. This was the one hill I thought was on the course. I ran a little bit and passed a few people who walked on a shallower slope, then I also converted to a walk for the rest of the hill. It was the most crowded race start I’ve done, because there were so many people (150 across the 10k, 50k, 100k, and 100 miler) and such a short distance between the start and this hill. The Hill, as I thought of it, is point A on the course map.

Luckily, heading up the hill there are gorgeous purple wildflowers along the path and mountain views. At the top of the hill there are some benches at the point where we took a left turn and headed down, descending the same elevation over about half a mile, so the downhill was longer (and shallower) than the uphill section. This downhill slope (B) was very runnable and gravel covered, whereas the uphill was more dirt and mud.

At the bottom of the hill, there was a hairpin turn where we headed back up, although not all the way, and then along a plateau on the side of the hill. The “plateau” is point C on the map. I thought it would be runnable once I got back up the initial climb, but it was mud pit after mud pit, and I would get two steps of running between mud pits that I had to carefully walk through. It was really frustrating. I ended up texting my parents and Scott that it was about 1.7 miles of mud (the uphill plus the plateau) before I got to some gravel that was more easily runnable. Woohoo for gravel! This was a nice, short downhill slope (D) before we flattened out and switched back to dirt and more mud pits.

This was the E area, although it did feel more runnable than the plateau because there were longer stretches between muddy sections.

Eventually, we saw the river and came out from the trail into a parking lot, then jogged over onto the trail that parallels the river for a while. This trail that I thought of as “River Road” (starting around point F) is just mowed grass, running along a sharp bluff drop with openings where people would be down at the river fishing; in some cases we were running *underneath* fishing lines strung from the parking spots down to the river! There were a few people walking back and forth from cars to the river, but in general they were all very courteous and there was no obstruction of the trail. Despite the mowed grass, this stretch physically and psychologically felt easier because there were no mud pits for 90% of it. Near the end there were a few muddy areas, right about the point we hopped back over onto the road to connect with a gravel road for a short spurt.

This year, the race actually put a bonus aid station out here. I didn’t partake, but they had a tent up with two volunteers who were cheerful and kind to passing runners, and it looked like they had giant jugs of Gatorade or water, bottled water, and some sugared soda. They probably had other stuff, but that’s just what I saw when passing.

After that short gravel road bit, we turned back onto a dirt trail that led us to the river. Not the big river we had been running next to, but the place where the Columbia River overflowed the trail and we had to cross it. This is what the race director meant by practicing our backstroke.

You can see a video in this tweet of how deep and far across you had to get in this river crossing (around point G, but hopefully in future years this isn’t a point of interest on the map!!)

Showing a text on my watch of my BIL warning me about a river crossing

Coming out of the river, my feet were like blocks of ice. I cheered up at the thought that I had finished the wet feet portion of the course and I’d dry off before I looped back around and hit the muddy hill and plateau again. But, sadly, just around the next curve, came a mud POND. Not a pit, a pond.

Showing how bad the mud was

Again, ankle deep water and mud, not just once but in three different ponds all within 30 seconds or so of each other. It was really frustrating, and obviously you can’t run through them, so it slowed you down.

Then finally after the river crossing and the mud ponds, we hooked a right into a nice, forest trail that we spent about a mile and a half in (point H). It had a few muddy spots like you would normally expect to get muddy on a trail, but it wasn’t ankle deep or water filled or anything else. It was a nice relief!

Then we turned out of the forest and crossed a road and headed up one more (tiny, but it felt annoying despite how small it looks on the elevation profile) hill (point I), ran down the other side of that slope, stepped across another mud pond onto a pleasingly gravel path, and took the gravel path about .3 miles back all the way to complete the first full lap.

Phew.

I actually made pretty good time the first loop despite not knowing about all the mud or river crossing challenges. I was pleased with my time which was on track with my plan. Scott took my pack about .1 miles before I entered the start/finish area and brought it back to me refilled as I exited the start/finish area.

Lap 2:

The second lap was pretty similar. The Hill (A) felt remarkably harder after having experienced the first loop. I did try to run more of the downhill (B) as I recognized I’d make up some time from the walking climb as well as knowing I couldn’t run up the plateau or some of the mud pits along the plateau (C) as well as I had expected. I also decided running in the mud pits didn’t work, and went with the safer approach of stepping through them and then running 2 steps in between. I was a little slower this time, but still a reasonable pace for my goals.

The rest of the loop was roughly the same as the first, the mud was obnoxious, the river crossing freezing, the mud obnoxious again, and relief at running through the forest.

Scott met me at the end of the river road and biked along the short gravel section with me, then went ahead so he could park his bike and take video of my second river crossing, which is the video above. I was thrilled to have video of that, because the static pictures of the river crossing didn’t do the depth and breadth of the water justice!

At the end of lap 2, Scott grabbed my pack again at the end of the loop and said he’d figured out where to meet me to give it back to me after the hill…if I wanted that. Yes, please! The bottom of the hill where you hairpin turn to go back up the plateau is the 1 mile marker point, so that means I ran the first mile of the third lap without my pack, and not having the weight of my full pack (almost 3L of water and lots of snacks and supplies: more on that pack below) was really helpful for my third time up the hill. He met me as planned at the bottom of the downhill (B) and I took my pack back which made a much nicer start to lap 3.

Lap 3:

For some reason, on lap 3 I came out of the river crossing and the mud ponds feeling like I had gotten extra mud in my right shoe. It felt gritty around the right side of my right foot, and I was worried about having been running for so many hours with soaked feet. I decided to stop at a bench in the forest section and swap for dry socks. In retrospect, I wish I had stopped somewhere else, because I got swarmed by these gross-looking moth/gnat/mosquito things (dozens on my leg within a minute of sitting there) that I couldn’t brush off effectively while I was trying to remove my gaiters, untie my shoes, take my shoes off, peel my socks and bandaids and lambs wool off, put lubrication back on my toes, put more lambs wool on my toes, put the socks and shoes back on, and re-do my gaiters. Sadly, it took me 6 minutes despite moving as fast as I could (this was a high, weirdly designed bench in a shack that looked like a bus stop in the middle of the woods, so it wasn’t the best place to sit, but I thought it was better than sitting on the ground).

(The bugs didn’t hurt me at the time, but two days later my dozens of bites all over my leg are red and swollen, though thankfully they only itch when they have something chafing against them.)

Anyway, I stood up and took off again, frustrated knowing that the stop had taken 6 minutes and basically eaten the margin of time I had against my previous 50k time. I saw Scott about a quarter of a mile later, right as I realized I had also lost my baggie of electrolyte pills somewhere. Argh! I didn’t have a backup for those (although I had given Scott backups of everything else), so that spiked my stress levels: I was due for some electrolytes and wasn’t sure how I’d do with 3 or so more hours without them.

I gave Scott my pack and tasked him with checking my brother-in-law’s setup to see if he had spare electrolytes, while he was refilling my pack to give me in lap 4.

Lap 4:

I was pretty grumpy given the sock timing and the electrolyte mishap as I headed into lap 4. The hill still sucked, but I told myself “only one more hill after this!” and that thought cheered me up.

Scott had found two electrolyte options from my brother-in-law and brought those to me at the end of mile 1 (again, bottom of B slope) with my pack. He found two chewable and two swallow pills, so I had options for electrolytes. I chewed the first electrolyte tab as I headed up the plateau, and again talked myself through the mud pits with “only one more time through the mud pits after this!”.

I also tried to bounce back from the end of lap 3, where I had let myself get frustrated, and to take more advantage of the runnable parts of the course. I ran more of the downhill (B) than on previous laps, mostly ignoring the audio cues of my 30:60 intervals and probably running more like 45:30 or so. Similarly, I ran most of the downhill gravel after the mud pits (D) without paying attention to the audio run cues.

Scott this time also met me at the start of the river road section, and I gave him my pack again and asked him to take out some things he had put in. He had put in a bag with two pairs of replacement socks instead of just one pair, and also an extra beef stick I didn’t ask for. I asked him to remove it, and he did, but he explained he had put it in just in case he didn’t find the electrolytes, because it had 375mg of sodium. (Sodium is primarily the electrolyte I am sensitive to and care most about.) So this was actually a smart thing, although because I haven’t practiced eating larger amounts of protein or dosing enzymes for it on the run, I would be pretty nervous about eating it in a race, so it made me a bit unnecessarily grumpy. Overall though, it was great to see him extra times on the course at this point, and I don’t know if he noticed how grumpy I was, but if he did he ignored it, and I cheered up again knowing I only had “one more” of everything after this lap!

The other thing that helped was that he biked my pack down the road to just before the river crossing, so I ran the river road section without a pack, like I had run the hill in laps 3 and 4. This gave me more energy, and I found myself adding 5-10 seconds to the start of my run intervals to extend them.

The 4th river crossing was no less obnoxious and cold, but this time it and the mud ponds didn’t seem to embed grit inside my shoes, so I knew I would finish with the same pair of socks and not need another change to finish the race.

Lap 5:

I was so glad I was only running the 50k so that I only had 5 laps to do!

For the last lap, I was determined to finish strong. I thought I had a chance of making up a tiny bit of the sock change time that I had lost. I walked up the hill, but again ran more than my scheduled intervals downhill, grabbed my bag from Scott, picked my way across the mud pits for the final time (woohoo!), ran the downhill and ran a little long and more efficiently on the single track to the river road.

Scott took my pack again at the river road, and I swapped my intervals to be 30:45, since I was already running closer to that and I knew I only had 3.5 or so miles to go. I took my pack back at the end of river road and did my last-ever ice cold river crossing and mud pond extravaganza. After I left the last mud pond and turned into the forest, I switched my intervals to 30:30. I managed to keep my 30:30 intervals and stayed pretty quick – my last mile and a half was the fastest of the entire race!
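As a side note on the interval math: shortening the walk portion changes the share of each cycle spent running by more than it might seem. A quick sketch of the ratios I shifted between:

```python
# Share of each cycle spent running for the run:walk intervals used in the race,
# from the steady 30:60 up to the 30:30 finish.
for run_s, walk_s in [(30, 60), (30, 45), (30, 30)]:
    running_fraction = run_s / (run_s + walk_s)
    print(f"{run_s}:{walk_s} -> {running_fraction:.0%} of each cycle running")
# 33%, then 40%, then 50% of each cycle running
```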

I came into the finish line strong, as I had hoped to finish. Woohoo!

Overall strengths and positives from the race

Overall, running-wise I performed fairly well. I had a strong first lap and decent second lap, and I got more efficient on the laps as I went, staying focused and taking advantage of the more runnable parts of the course. I finished strong, with 30:45 intervals for over a mile and 30:30 intervals for over a mile to the finish.

Also, I didn’t quit after experiencing the river crossing and the mud ponds and the mud pits of the first lap. This wasn’t an “A” race for me or my first time at the distance, so it would’ve been really easy to quit. I probably didn’t in part because we had paid to stay the night before and driven all that way, and I didn’t want to have “wasted” Scott’s time by quitting when I was very capable of continuing and wasn’t injured. But I’m mostly proud of the way I handled the challenges of the course, and of how I readjusted from the mental low and frustration after realizing how long my sock change took in lap 3. I’m also pleased that I didn’t get injured, given the terrain (mud, river crossing, and uneven grass to run on for most of the course). And I’m pleased and amazed I didn’t hurt my feet, cause major blisters, or have anything really happen to them after hours of wet, muddy, never-drying-off feet.

The huge positive was my fueling, electrolytes, and blood glucose management.

I started taking my electrolyte pills, which have 200+mg of sodium, at about 45 minutes into the race, on schedule. My snack choices also have 100-150mg of sodium each, which allowed me to not take electrolyte pills as often as I would otherwise need to (or would on a hotter day with more sweat; it was a damp, mid-60s day and I didn’t sweat as much as I usually do). Even after losing my electrolyte pills, the two chewable 100mg sodium electrolytes I used instead left me with sufficient electrolytes. Even with ideal supplementation, I’m very sensitive to sodium losses and am a salty sweater, and I have a distinct feeling when my electrolytes are insufficient, so not having that feeling during or after the race was a big positive for me.

So was my fueling overall. The race started at 9am, and I woke up at 6am to eat my usual pre-race breakfast (a handful of pecans, plus my enzyme supplementation) so that it would both digest effectively and also be done hitting my blood sugar by the time the race started. My BGs were flat 120s or 130s when I started, which is how I like them.

I took my first snack about an hour and 10 minutes into the race: about 15g carb (10g fat, 2g protein) of chili cheese flavored Fritos. For this, I didn’t dose any insulin as I was in range, and I took one lipase-only enzyme (which covers about 8g of fat for me) and one multi-enzyme (which covers about 6g of fat and probably over a dozen grams of protein). My second snack was an hour later: a gluten free salted caramel Honey Stinger stroopwaffle (21g carb, 6g fat, 1g protein). For the stroopwaffle I ended up only taking a lipase-only pill to cover the fat, even though there’s 1g of protein. I seem to be ok (or have no symptoms) with 2-3g of uncovered fat and 1-2g of uncovered protein; anything more than that I like to dose enzymes for, although it depends on the situation. Throughout the day, I always did 1 lipase-only and 1 multi-enzyme for the Fritos, and 1 lipase-only for the stroopwaffle, and that seemed to work fine for me.

I think I did a 0.3u bolus (less than a third of the total insulin I would normally need) for my stroopwaffle because I was around 150 mg/dL at the time, having risen following my uncovered Frito snack, and I thought I would need a tiny bit of insulin. This was perfect, and I came back down and flattened out. An hour and 20 minutes after that, I did another round of Fritos. An hour or so after that, a second stroopwaffle, but this time I didn’t dose any insulin for it as my BG was on a downward slope. An hour later, more Fritos.

A little bit after that, I did my one single sugar-only correction (an 8g carb Airhead mini) as I was still sliding down toward 90 mg/dL, and while that’s nowhere near low, I thought my Fritos might hit a little late and I wanted to be sure I didn’t experience the feeling of a low. This was during the latter half of loop 4 when I was starting to increase my intensity, so I also knew I’d likely burn a little more glucose and it would balance out, and it did! I did one last round of Fritos during lap 5.
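If you’re curious how this kind of enzyme math pencils out, here’s a rough sketch using my personal coverage estimates from above (one lipase-only pill ≈ 8g fat; one multi-enzyme pill ≈ 6g fat and roughly a dozen grams of protein; around 2g fat and 1g protein tolerable uncovered). The function and thresholds are illustrative only, not dosing guidance; everyone’s enzyme needs are different.

```python
import math

# Illustrative only -- these are personal coverage estimates, not dosing guidance.
LIPASE_FAT_G = 8        # grams of fat one lipase-only pill covers for me
MULTI_FAT_G = 6         # grams of fat one multi-enzyme pill covers for me
MULTI_PROTEIN_G = 12    # approximate grams of protein per multi-enzyme pill
FAT_TOLERANCE_G = 2     # fat I can leave uncovered without symptoms
PROTEIN_TOLERANCE_G = 1 # protein I can leave uncovered without symptoms

def estimate_enzymes(fat_g, protein_g):
    """Rough pill count for one snack: multi-enzyme for protein first,
    then lipase-only for whatever fat is still uncovered."""
    uncovered_protein = max(0, protein_g - PROTEIN_TOLERANCE_G)
    multi = math.ceil(uncovered_protein / MULTI_PROTEIN_G)
    uncovered_fat = max(0, fat_g - FAT_TOLERANCE_G - multi * MULTI_FAT_G)
    lipase = math.ceil(uncovered_fat / LIPASE_FAT_G)
    return {"lipase": lipase, "multi": multi}

print(estimate_enzymes(10, 2))  # Fritos serving -> {'lipase': 1, 'multi': 1}
print(estimate_enzymes(6, 1))   # stroopwaffle  -> {'lipase': 1, 'multi': 0}
```

With these numbers the sketch happens to reproduce the pill combinations I actually used for the Fritos and the stroopwaffle, but real dosing is much more situational than a formula.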
CGM graph during 50k ultramarathon

This all worked perfectly. I had 100% time in range between 90 and 150 mg/dL, even with 102g of “real food” carbs (15g x 4 servings of Fritos, 21g x 2 waffles) plus one 8g Airhead mini, so in total I had 110 grams of carbs across ~7+ hours. This perfectly matched my needs with my run/walk moderate efforts.
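The carb tally above is easy to sanity-check; a quick sketch:

```python
# Race-day carb tally: four Fritos servings, two stroopwaffles, one Airhead mini.
fritos_g = 15 * 4
waffles_g = 21 * 2
correction_g = 8

food_carbs_g = fritos_g + waffles_g
total_carbs_g = food_carbs_g + correction_g
print(food_carbs_g, total_carbs_g)  # 102 110

hours = 7
print(round(total_carbs_g / hours, 1))  # ~15.7g of carbs per hour
```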

BG and carb intake plotted along CGM graph during 50k ultramarathon

I also nailed the enzymes, as during the race I didn’t have any GI-related symptoms and after the race and the next day (which is the ultimate verdict for me with EPI), no symptoms.

So it seems like my practice and testing with low carbs, Fritos, and waffles worked out well! I had a few other snacks in my pack (yogurt-covered pretzels, peanut butter pretzel nuggets), but I never thought of wanting them or wanting something different. I did plan to try to do 2 snacks per hour, but I ended up doing about 1 per hour. I probably could have tolerated more, but I wasn’t hungry, my BGs were great, and so although it wasn’t quite according to my original plan I think this was ideal for me and my effort level on race day.

The final thing I think went well was deciding on the fly after loop 2 to have Scott take my pack until after the hill (so I ran the up/downhill mile without it), and then for additional stretches along river road in laps 4 and 5. I had the pocket of my shorts packed with dozens of Airheads and mints, so I was fine in terms of blood sugar management and definitely didn’t need supplies for a mile at a time. I’m usually concerned about staying hydrated and having water whenever I want to sip, plus for swallowing electrolyte and enzyme pills to go with my snacks, but on this course, with the number of points Scott could meet me (after B, at F all through G, and from I to the finish), I could have gotten away with not having my pack the whole time, carrying WAY less water (I definitely didn’t need to haul 3L the whole time; that was for the case where I might not see Scott for 2-3 laps), and carrying only one of each snack at a time.

Areas for improvement from my race

I trained primarily on gravel or paved trails and roads, but despite the “easy” elevation profile and terrain, this was essentially my first trail ultra. I coped really well with the terrain, but the cognitive burden of all the challenges (Mud pits! River crossing! Mud ponds!) added up. I’d probably do a little more trail running and hills (although I did some) in the final weeks before the race to help condition my brain a little more.

I’ll also continue to practice fueling so I can eat more regularly than every hour to hour and a half. This was the most I’ve ever eaten during a run, I did well with the quantities, and my enzyme and BG management were also A+, but I didn’t eat as much as I had planned, and I think eating more might’ve reduced the cognitive fatigue, too, by at least 5-10%.

I also now have the experience of a “stop” during a race, in this case to swap my socks. I’ve only run one ultra before and had never stopped to do gear changes, so that experience was probably useful prep for future stops, although I do want to be mentally stronger and less frustrated by unanticipated problem-solving stops.

Specific to this course, as mentioned above, I could’ve gotten away with fewer supplies – food and water – in my pack. I actually ran a Ragnar relay race with a group of fellow T1s a few years back where I finished my run segment and…no one was there to meet me. They went for Starbucks and took too long to get there, so I had to stand in the finishing chute waiting for 10-15 minutes until someone showed up to start the next run leg. Oh, and that happened in two of the three legs I ran that day. Ooof. Standing there tired, hot, with nothing to eat or drink, likely added to my already life-with-type-1-diabetes-driven instinct of always carrying more than enough stuff. But I could’ve gotten away very comfortably with carrying 1L of water and one of each type of snack at a time, given that Scott could meet me at 1 mile (end of B), at the start (F) and end (before G) of river road, and at the finish, so I would never have been more than 2-2.5 miles from a refill. Honestly, he could’ve gotten to every spot on the trail barring the river crossing bit if I was really in need of something. Less weight would’ve made it easier to push a little harder along the way. Basically, I carried gear like I was running a solo 30 mile effort, which was safe but not necessary given the course. If I re-ran this race, I’d feel a lot more comfortable with minimal supplies.

Surprises from my race

I crossed the finish line, stopped to get my medal, then waited for my brother-in-law to finish another lap (he ran the 100k: 62 miles) before Scott and I left. I sat down for 30 minutes and then walked to the car, but despite sitting for a while, I was not as stiff and sore as I expected. And getting home after a 3.5 hour car ride…again I was shocked at how minimally stiff I was walking into the house. The next morning? More surprise at how little stiffness and soreness I had. By day 3, I felt like I had run a normal week the week prior. So in general, I think this is reinforcement that I trained really well for the distance, and that my long runs up to 50k and the short-to-medium next-day runs also likely helped. I physically recovered well, which is again partly training but also probably better fueling during the race, and of course now digesting everything that I ate during and after the race with enzyme supplementation for EPI!

However, the interesting (almost negative, but mostly interesting) thing for me has been what I perceived to be adrenal-type or stress hormone fatigue. I think it’s because I was unused to focusing on challenging trail conditions for so many hours, compared to running the same number of hours on “easy” paved or gravel trails. I actually didn’t listen to an audiobook, music, or podcast for about half of the race, because I was so stimulated by the course itself. What I feel is adrenal fatigue isn’t just being physically or mentally tired but something different that I haven’t experienced before. I’m listening to my body and resting a lot, and I waited until day 4 to do my first easy, slow run with much longer walk intervals (30s run, 90s walk instead of my usual 30:60). Days 1 and 2 had a lot of fatigue and I didn’t feel like doing much; day 3 brought notable improvement in the fatigue, and my legs and body physically felt back to normal. Day 4 I ran slowly; day 5 I stuck with walking and felt more fatigue but no physical issues; day 6 I again chose to walk because I didn’t feel like my energy had fully returned. I’ll probably stick with easy, longer-walk-interval runs with fewer days running for the next week or two, until I feel like my fatigue is gone.

General thoughts about ultramarathon training and effective ultra race preparation

I think preparation makes a difference in ultramarathon running. Or maybe that’s just my personality? But a lot of my goal for this race was to learn what I could about the course and the race setup, imagine and plan for the experience I wanted, plan for problem solving (blisters, fuel, enzymes, BGs, etc), and be ready and able to adapt while being aware that I’d likely be tired and mentally fatigued. Generally, any preparation I could do in terms of deciding and making plans, preparing supplies, etc would be beneficial.

Some of the preparation included making lists in the weeks prior about the supplies I’d need in my pack, what Scott should have to refill my pack, what I’d need the night and morning before since we would not be at home, and after-race supplies for the 3.5h drive home.

From the lists, the week before the race I began grouping things. I had my running pack filled and ready to go. I packed my race outfit in a gallon bag and a full set of backup clothes in another gallon bag and labeled them, along with a separate post-run outfit and flip flops for the drive home. I also included a washcloth for wiping sweat or mud off after the run, and I certainly ended up needing that! I packed an extra pair of shoes and about 4 extra pairs of socks. I also had separate baggies with bandaids of different sizes; pre-cut strips of kinesio tape for my leg and smaller patches for blisters; extra Squirrel’s Nut Butter sticks for anti-chafing purposes; and extra lambs wool (which I lay across the top of my toes to prevent socks from rubbing when they get wet from sweat or…river crossings, plus I can use it for padding between my toes or other blister-developing spots). I had sunscreen, bug spray, sunglasses, a rain hat, and my sunny-weather running visor that wicks away sweat. I had low BG carbs for me to put in my pockets, a backup bag for Scott to refill, and a backup to the backup. The same went for my fuel stash: my backpack was packed, and I packed a small baggie for Scott as well as a larger bag with 5-7 of everything I thought I might want, plus an emergency backup baggie of enzymes.

*The only thing I didn’t have was a backup baggie of electrolyte pills. Next time, I’ll add this to my list and treat them like enzymes to make sure I have a separate backup stash.

I even made a list for Scott that mapped out where key things were for during and after the race. I don’t think he had to use it, because he was only digging through the snack bag for waffles and Fritos, but I made it so I didn’t have to remember where I had put my extra socks or my spare bandaids, etc. He basically had a map of what was in each larger bag. All of this was to reduce decision-making and communication overhead, because I knew I’d have decision fatigue.

This also went for post-race planning. I told Scott to encourage me to change clothes, and it was worth the energy to change so I didn’t sit in cold, wet clothes for the long drive home. I pre-made a gluten free ham and cheese quesadilla (take two tortillas, fill with shredded cheese and slices of ham, microwave, cut into quarters, stick in baggies, mark with fat/protein/carb counts, and refrigerate) so we could warm it up in the car (this is what I use) and I’d have something to eat on the way home that wasn’t more Fritos or waffles. I didn’t end up wanting it, but I also brought a can of beef stew with carrots and potatoes, which I generally like as a post-race or post-run meal, along with a plastic container and a spoon so I could warm up the stew if I wanted it. Again, all of this was pre-planned and put on the list weeks before the race so I didn’t forget things like the container or the spoon.

The other thing I think about a lot is practicing everything I want to do for a race during a training run. People talk about eating the same foods, wearing the same clothes, etc. I think for those of us with type 1 diabetes (or celiac, EPI, or anything else), it’s even more important. With T1D, it’s so helpful to have the experience adjusting to changing BG levels and knowing what to do when you’re dropping or low and having a snack, vs in range and having a fueling snack, or high and having a fueling snack. I had 100% TIR during this run, but I didn’t have that during all of my training runs. Sometimes I’d plateau around 180 mg/dL and be over-cautious and not bring my BGs down effectively; other times I’d overshoot and cause a drop that required extra carbs to prevent or minimize a low. Lots of practice went into making this 100% TIR day happen, and some of it was probably a bit of luck mixed in with all the practice!

But generally, practice makes it a lot easier to know what to do on the fly during a race when you’re tired, stressed, and maybe crossing an icy cold river that wasn’t supposed to be part of your course experience. All that helps you make the best possible decisions in the weirdest of situations. That’s the best you can hope for with ultrarunning!

Findings from the world’s first RCT on open source AID (the CREATE trial) presented at #ADA2022

September 7, 2022 UPDATE: I’m thrilled to share that the paper with the primary outcomes from the CREATE trial is now published. You can find it on the journal site here, or view an author copy here. You can also see a Twitter thread here, if you are interested in sharing the study with your networks.

Example citation:

Burnside, M; Lewis, D; Crocket, H; et al. Open-Source Automated Insulin Delivery in Type 1 Diabetes. N Engl J Med 2022;387:869-81. DOI:10.1056/NEJMoa2203913


TLDR: The CREATE Trial was a multi-site, open-label, randomized, parallel-group, 24-week superiority trial evaluating the efficacy and safety of an open-source AID system using the OpenAPS algorithm in a modified version of AndroidAPS. Our study found that across children and adults, the percentage of time that the glucose level was in the target range of 3.9-10mmol/L [70-180mg/dL] was 14 percentage points higher among those who used the open-source AID system (95% confidence interval [CI], 9.2 to 18.8; P<0.001) than among those who used sensor augmented pump therapy, a difference that corresponds to 3 hours 21 minutes more time spent in target range per day. The system did not contribute to any additional hypoglycemia. Glycemic improvements were evident within the first week and were maintained over the 24-week trial. This illustrates that all people with T1D, irrespective of their level of engagement with diabetes self-care and/or previous glycemic outcomes, stand to benefit from AID. This study concluded that open-source AID using the OpenAPS algorithm within a modified version of AndroidAPS, a widely used open-source AID solution, is efficacious and safe.
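To see where the “3 hours 21 minutes” figure comes from: a percentage-point difference in time in range converts directly into time per day. A quick check:

```python
# Convert a time-in-range difference (percentage points) into time per day.
percentage_points = 14
extra_minutes = 24 * 60 * percentage_points / 100  # 201.6 minutes
hours, minutes = int(extra_minutes // 60), int(extra_minutes % 60)
print(f"+{percentage_points} points TIR = about {hours}h {minutes}min more in range per day")
# about 3h 21min per day, matching the paper
```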

The backstory on this study

We developed the first open source AID in late 2014 and shared it with the world as OpenAPS in February 2015. It went from n=1 to (n=1)*2 and up from there. Over time, there were requests for data to help answer the question “how do you know it works (for anybody else)?”. This led to the first survey in the OpenAPS community (published here), followed by additional retrospective studies such as this one analyzing data donated by the community, prospective studies, and even an in silico study of the algorithm. Thousands of users chose open source AID, first because there was no commercial AID, and later because open source AID such as the OpenAPS algorithm was more advanced, had interoperability features, or offered other benefits such as quality of life improvements that they could not find in commercial AID (or because they were still restricted from being able to access or afford commercial AID options). The pile of evidence kept growing, and each study showed safety and efficacy matching or surpassing commercial AID systems (such as in this study), yet still there was always the “but there’s no RCT showing safety!” response.

After Martin de Bock saw me present about OpenAPS and open source AID at ADA Scientific Sessions in 2018, we literally spent an evening at the dinner table drawing the OpenAPS algorithm on a napkin to illustrate how OpenAPS works in fine-grained detail (as much as one can on napkin drawings!) and dreamed up the idea of an RCT in New Zealand to study the open source AID system so many were using. We sought and were granted funding from New Zealand’s Health Research Council, published our protocol, and commenced the study.

This is my high-level summary of the study and some significant aspects of it.

Study Design:

This study was a 24-week, multi-centre randomized controlled trial in children (7–15 years) and adults (16–70 years) with type 1 diabetes comparing open-source AID (using the OpenAPS algorithm within a version of AndroidAPS implemented in a smartphone with the DANA-i™ insulin pump and Dexcom G6® CGM), to sensor augmented pump therapy. The primary outcome was change in the percent of time in target sensor glucose range (3.9-10mmol/L [70-180mg/dL]) from run-in to the last two weeks of the randomized controlled trial.

  • This is a LONG study, designed to look for rare adverse events.
  • This study used the OpenAPS algorithm within a modified version of AndroidAPS, meaning the learning objectives were adapted for the purpose of the study. Participants spent at least 72 hours in “predictive low glucose suspend mode” (known as PLGM), which corrects for hypoglycemia but not hyperglycemia, before proceeding to the next stage of closed loop which also then corrected for hyperglycemia.
  • The full feature set of OpenAPS and AndroidAPS, including “supermicroboluses” (SMB), could be used by participants throughout the study.
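Glucose values in this post are reported in both mmol/L and mg/dL. For readers who want to convert between the two, here is a minimal sketch; the conversion factor is glucose’s approximate molar mass (~18.016), so results are rounded:

```python
MMOL_L_TO_MG_DL = 18.016  # approximate conversion factor for glucose

def mmol_to_mgdl(mmol_per_l: float) -> int:
    """Convert a glucose value from mmol/L to the nearest whole mg/dL."""
    return round(mmol_per_l * MMOL_L_TO_MG_DL)

# The trial's target range of 3.9-10 mmol/L is the familiar 70-180 mg/dL:
print(mmol_to_mgdl(3.9), mmol_to_mgdl(10))  # 70 180
```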

Results:

Ninety-seven participants (48 children and 49 adults) were randomized.

Among adults, mean time in range (±SD) at study end was 74.5±11.9% using AID (Δ+ 9.6±11.8% from run-in; P<0.001) with 68% achieving a time in range of >70%.

Among children, mean time in range at study end was 67.5±11.5% (Δ+ 9.9±14.9% from run-in; P<0.001) with 50% achieving a time in range of >70%.

Mean time in range at study end for the control arm was 56.5±14.2% and 52.5±17.5% for adults and children respectively, with no improvement from run-in. No severe hypoglycemic or DKA events occurred in either arm. Two participants (one adult and one child) withdrew from AID due to frustrations with hardware issues.

  • The pump used in the study initially had an issue with the battery, and there were lots of pumps that needed refurbishment at the start of the study.
  • Aside from these pump issues, and standard pump site/cannula issues throughout the study (that are not unique to AID), there were no adverse events reported related to the algorithm or automated insulin delivery.
  • Only two participants withdrew from AID, due to frustration with pump hardware.
  • No severe hypoglycemia or DKA events occurred in either study arm!
  • In fact, use of open source AID improved time in range without causing additional hypoglycemia, which has long been a concern of critics of open source (and all types of) AID.
  • Time spent in ‘level 1’ and ‘level 2’ hyperglycemia was significantly lower in the AID group as well compared to the control group.

In the primary analysis, the mean (±SD) percentage of time that the glucose level was in the target range (3.9-10mmol/L [70-180mg/dL]) increased from 61.2±12.3% during run-in to 71.2±12.1% during the final 2-weeks of the trial in the AID group and decreased from 57.7±14.3% to 54±16% in the control group, with a mean adjusted difference (AID minus control at end of study) of 14.0 percentage points (95% confidence interval [CI], 9.2 to 18.8; P<0.001). No age interaction was detected, which suggests that adults and children benefited from AID similarly.

  • The CREATE study found that across children and adults, the percentage of time that the glucose level was in the target range of 3.9-10mmol/L [70-180mg/dL] was 14.0 percentage points higher among those who used the open-source AID system compared to those who used sensor augmented pump therapy.
  • This difference reflects 3 hours 21 minutes more time spent in target range per day!
  • Children using AID spent 3 hours 1 minute more time in target range daily (95% CI, 1h 22m to 4h 41m).
  • Adults using AID spent 3 hours 41 minutes more time in target range daily (95% CI, 2h 4m to 5h 18m).
  • Glycemic improvements were evident within the first week and were maintained over the 24-week trial. Meaning: things got better quickly and stayed so through the entire 24-week time period of the trial!
  • AID was most effective at night.
Difference in TIR between the control and AID arms, overall and during day and night separately, for all participants, adults, and children
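The percentage-point differences above translate into daily time with simple arithmetic: a time-in-range difference is just a fraction of the 24-hour day. A minimal sketch (truncating the fractional minute, which is how 14.0 percentage points comes out to the reported 3 hours 21 minutes):

```python
def tir_difference_per_day(percentage_points: float) -> tuple[int, int]:
    """Convert a time-in-range difference (in percentage points)
    into (hours, whole minutes) of additional in-range time per day."""
    total_minutes = percentage_points / 100 * 24 * 60
    return int(total_minutes // 60), int(total_minutes % 60)

# 14.0 percentage points of TIR difference per 24-hour day:
print(tir_difference_per_day(14.0))  # (3, 21)
```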

One thing I think is worth making note of is that one criticism of previous studies with open source AID is regarding the self-selection effect. There is a theory that people do better with open source AID because of self-selection and self-motivation. However, the CREATE study recruited a diverse cohort of participants, and the study findings (as described above) match the safety and efficacy outcomes reported in previous studies. The CREATE study also found that the greatest improvements in TIR were seen in participants with the lowest TIR at baseline. This means one major finding of the CREATE study is that all people with T1D, irrespective of their level of engagement with diabetes self-care and/or previous glycemic outcomes, stand to benefit from AID.

This therefore means there should be NO gatekeeping by healthcare providers or the healthcare system to restrict AID technology from people with insulin-requiring diabetes, regardless of their outcomes or experiences with previous diabetes treatment modalities.

There was also no age effect observed in the trial, meaning that the results of the CREATE Trial demonstrated that open-source AID is safe and effective in children and adults with type 1 diabetes. If someone wants to use open source AID, they would likely benefit, regardless of age or past diabetes experiences. If they don’t want to use open source AID or commercial AID…they don’t have to! But the choice should 100% be theirs.

In summary:

  • The CREATE trial was the first RCT to look at open source AID, after years of interest in such a study to complement the dozens of other studies evaluating open source AID.
  • The conclusion of the CREATE trial is that open-source AID using the OpenAPS algorithm within a version of AndroidAPS, a widely used open-source AID solution, appears safe and effective.
  • The CREATE trial found that across children and adults, the percentage of time that the glucose level was in the target range of 3.9-10mmol/L [70-180mg/dL] was 14.0 percentage points higher among those who used the open-source AID system compared to those who used sensor augmented pump therapy; a difference that reflects 3 hours 21 minutes more time spent in target range per day.
  • The study recruited a diverse cohort, yet still produced glycemic outcomes that are consistent with the existing open-source AID literature and compare favorably to commercially available AID systems. Therefore, the CREATE Trial indicates that a wide range of people with type 1 diabetes might benefit from open-source AID solutions.

Huge thanks to each and every participant and their families for their contributions to this study! And ditto, big thanks to the amazing, multidisciplinary CREATE study team for their work on this study.


September 7, 2022 UPDATE – I’m thrilled to share that the paper with the primary outcomes from the CREATE trial is now published. You can find it on the journal site here, or like all of the research I contribute to, access an author copy on my research page.

Example citation:

Burnside, M; Lewis, D; Crocket, H; et al. Open-Source Automated Insulin Delivery in Type 1 Diabetes. N Engl J Med 2022;387:869-81. DOI: 10.1056/NEJMoa2203913

Note that the continuation phase study results are slated to be presented this fall at another conference!

Findings from the RCT on open source AID, the CREATE Trial, presented at #ADA2022

Looking back at work and accomplishments in 2021

I decided to do a look back at the last year’s worth of work, in part because it was a(nother) weird year in the world and also because, if you’re interested in my work, unless you read every single Tweet, there may have been a few things you missed that are of interest!

In general, I set goals every year that stretch across personal and professional efforts. This includes a daily physical activity streak that coincides with my walking and running lots of miles this year in pursuit of my second marathon and first (50k) ultramarathon. It’s good for my mental and physical health, which is why I post almost daily updates to help keep myself accountable. I also set goals like “do something creative” which could be personal (last year, knitting a new niece a purple baby blanket ticked the box on this goal!) or professional. This year, it was primarily professional creativity that accomplished this goal (more on that below).

Here are some specifics about the goals I accomplished:

RUNNING

  • My initial goal was training ‘consistently and better’ than I did for my first marathon, with 400 miles as my stretch goal if I was successfully training for the marathon. (Otherwise, 200 miles for the year would be the goal without a marathon.) My biggest-ever running year in 2013 with my first marathon was 356 miles, so that was a good big goal for me. I achieved it in June!
  • I completed my second marathon in July, and PR’d by over half an hour.
  • I completed my first-ever ultramarathon, a 50k!
  • I re-set my mileage goal after achieving 400 miles…to 500…600…etc. I ultimately achieved the biggest mileage goal I think I ever will hit: I ran 1,000 miles in a single year!
  • I wrote lots of details about my methods of running (primarily, run/walking) and running with diabetes here. If you’re looking for someone to cheer you on as you set a goal for daily activity, like walking, or learning to run, or returning to running…DM or @ me on Twitter (@DanaMLewis). I love to cheer people on as they work toward their activity goals! It helps keep me inspired, too, to keep aiming at my own goals.

CREATIVITY

  • My efforts to be creative were primarily on the professional side this year. The “Convening The Center” project ended up accounting for 2 of the 3 things I categorized as creative. The first was the design of the digital activities and the experience of CTC overall (more about that here). The second was the set of items in the physical “kit” we mailed out to participants: we brainstormed and created custom playing cards and physical custom keychains. They were really fun to make, especially in partnership with our excellent project artist, Rebeka Ryvola, who did the actual design work!
  • My third “creative” endeavor was a presentation, but it was unlike the presentations I usually give. I was tasked to create a presentation that was “visually engaging” and would not involve showing my face. I’ve linked to the video below in the presentation section, but it was a lot of work to think about how to create a presentation focused on visuals and audio and still make it engaging, and I’m proud of how it turned out!

RESEARCH AND PUBLICATIONS

  • This is where the bulk of my professional work sits right now. I continue to be a PI on the CREATE trial, the world’s first randomized controlled trial assessing open-source automated insulin delivery technology, including the algorithm Scott and I dreamed up and that I have been using every day for the past 7 years. The first data from the trial itself is forthcoming in 2022.
  • Convening The Center also was a grant-funded project that we turned into research with a publication that we submitted, examining more of what patients “do”, which is typically not assessed by researchers and those looking at patient engagement in research or innovation. Hopefully, the publication of the research article we just submitted will become a 2022 milestone! In the meantime, you can read our report from the project here (https://bit.ly/305iQ1W), as this grant-funded project is now completed.
  • Goal-wise, I aim to generate a few publications every year. I do not work for any organization and I am not an academic. However, I come from a communications background and see the benefit of reaching different audiences where they are, which is why I write blog posts for the patient community and also seek to disseminate knowledge to the research and clinical communities through traditional peer-reviewed literature. You can see past years’ research articulated on my research page (DIYPS.org/research), but here’s a highlight of some of the 2021 publications:
  • Also, although I’m not a traditional academic researcher, I participate in the peer review process and frequently get asked to peer-review submitted articles for a variety of journals. I skimmed my email and it looks like I completed (at least) 13 peer reviews, most of which also included reviewing subsequent revisions of those submitted articles. So it looks like my rate of peer reviewing (currently) is matching my rate of publishing. I typically get asked to review articles related to open-source or DIY diabetes technology (OpenAPS, AndroidAPS, Loop, Nightscout, and other efforts), citizen science in healthcare, patient-led research or patient engagement in research, digital health, and diabetes data science. If you’re submitting articles on those topics, you’re welcome to recommend me as a potential reviewer.

PRESENTATIONS

  • I continued to give a lot of virtual presentations this year, such as at conferences like the “Insulin100” celebration conference (you can see the copy I recorded of my conference presentation here). I keynoted at the European Patients Forum Congress and at ADA’s Precision Diabetes Medicine 2021, and gave invited talks at ADA Scientific Sessions (session coverage here), the 2021 Federal Wearables Summit (video here), and the BIH Clinician Scientist Symposium (video here), to name a few (but not all).
  • Additionally, as I mentioned, one of the presentations I’m most proud of was created for the Fall 2021 #DData Exchange event:

OTHER STUFF

I did quite a few other small projects that don’t fit neatly into the above categories.

One final thing I’m excited to share is that also in 2021, Amazon came out with a beta program for producing hardcover/hardback books, alongside the ability to print paperback books on demand (and of course Kindle). So, you can now buy a copy of my book about Automated Insulin Delivery: How artificial pancreas “closed loop” systems can aid you in living with diabetes in paperback, hardback, or on Kindle. (You can also, still, read it 100% for free online via your phone or desktop at ArtificialPancreasBook.com, or download a PDF for free to read on your device of choice. Thousands of people have downloaded the PDF!)

Now available in hardcover, the book about Automated Insulin Delivery by Dana M. Lewis

Designing digital interactive activities that aren’t traditional icebreakers

A participant from Convening The Center recently emailed and asked what technology we had used for some of our interactive components within the phase 2 and 3 gatherings for the project. The short answer was “Google Slides” but there was a lot more that went into the choice of tech and the design of activities, so I ended up writing this blog post in case it was helpful to anyone else looking for ideas for interactive activities, new icebreakers for the digital era, etc.

Design context:

We held four small (8 people max) gatherings during “Phase 2” of CTC and one large (25 participants) gathering for “Phase 3”, and used Zoom as our videoconference platform of choice. But throughout the project, we knew we were bringing together random strangers to a meeting with no agenda (more about the project here, for background), and wanted to have ways to help people introduce themselves without relying on rote introductions that often fall back to name, title/organization (which often did not exist in this context!), or similar credentials.

We also had a few activities during the meeting where we wanted people to interact, and so the “icebreakers” (so to speak) were a low-stress way to introduce people to the types of activities we’d repeat later in the meeting.

Technology choice:

I’ve seen people use Jamboard (made by Google) for this purpose (icebreakers or introductory activities), and it was one that came to mind. However, I’ve been a participant on a Jamboard for a different type of meeting, and there are a few problems with it: there’s a limit to the number of participants; it requires participants to create the item they want to put on the board (e.g. figure out how to add a sticky note); and the examples I’ve seen ended up using it in a very binary way, content-wise. That in some cases was due to the people designing the activity (more on content design, below), but given that we wanted to use Google Slides to display information to participants and also enable notetaking in the same location, it became easy to replicate the basic functionality in Google Slides instead. (PS – this article was helpful for comparing pros/cons of Jamboard and Google Slides.)

Content choices:

The “icebreakers” we chose served a few purposes. One, as mentioned above, was familiarizing people with the platform so we could use it for meeting-related activities. The other was the point of traditional icebreakers, which is to help everyone feel comfortable and also enable people to introduce themselves. That being said, most of the time introductions rely on credentials, and this was specifically a credential-less or non-credential-focused gathering, so we brainstormed quite a bit to think of what type of activities would allow people to get comfortable interacting with Google Slides and also introduce themselves in non-stressful ways.

The first activity we did for the small groups used a world map image: we asked people to drag and drop their image in answer to “if you could be anywhere in the world right now, where would you be?”. (I had asked all participants to send some kind of image in advance, and if they didn’t, supplied an image and told them what it was during the meeting.) I had the images lined up to the side of the map, and in this screenshot you can see the before and after from one of the groups where they dragged and dropped their images.

Visual of a world map with images representing individuals and different places they want to be in the world

The second activity was a slide where we asked everyone to type “one boring or uninteresting fact about themselves”. Again, this was a push back against traditional activities of “introduce yourself by credentials/past work” that feels performative and competitive. I had everyone’s names listed on the slide, so each could type in their fact. It ended up being a really fun discussion and we got to see people’s personalities early on! In some cases, we had people drop in images (see screenshot of example) when there was cross-cultural confusion about the name of something, such as the name of a vegetable that varies worldwide! (In this case, it was okra!)

List of people's names and a boring fact about themselves

We also did the same type of “type in” activity for “Ask me about my expertise in..” and asked people to share an expertise they have personally, or professionally. This is the closest we got to ‘traditional’ introductions but instead of being about titles and organizations it was about expertise in activities.

Finally, we did the activity most related to our meeting, the one I had wanted people to be comfortable dragging and dropping their images for. We had a slide, again with everyone’s image present, and a variety of types of activities listed. We asked participants: “where do you spend most of your time now?”. Participants dragged and dropped their images accordingly. In some cases, they duplicated their image (right click, duplicate in Google Slides) to put themselves in multiple categories. We also had an “other” category listed where people could add additional core activities.

Example of slide activity where people drag their image to portray activities they're doing now and want to do in the future

Then, we had another slide asking where they wanted to spend most of their time in the future. The point of this was to be able to switch back and forth between each slide and visualize the changes for group members – and also so they could see what types of activities their fellow participants might have experience in.

Some of these activities are similar to what you might do in person at meetings by “dot voting” on topics. This type of slide is a way to achieve the same type of interactivity digitally.

Facilitating or moderating these types of interactive activities

In addition to choosing and designing these activities, I also feel that moderating or facilitating them played a big role in their success for this project.

As I had mentioned in the technology choice section,  I’ve previously been a participant in other meeting-driven activities (using Jamboard or other tech) where the questions/activities were binary and unrelated to the meeting. Questions such as “are you a dog or cat person? Pick one.” or “Is a hot dog a sandwich?” are binary, and in some cases a meeting facilitator may fall into the trap of then ascribing characteristics to participants based on their response. In a meeting where you’re trying to use these activities to create a comfortable environment for participation amongst virtual strangers…that can backfire and actually cause people to shut down and limit participation in the meeting following those introductory activities.

As a result of having been on the receiving end of that experience, I really wanted to design activities with relevance to our meeting (both in terms of technology used and the content) as well as enough flexibility to support whatever level of involvement people wanted. That included being prepared to move people’s images or type in for them, especially if they were on the road and not able to sit stationary and use Google Slides. (We had recommended people be stationary for this meeting, but knew it wasn’t always possible, and were prepared to let them verbally direct us to move their image, type in their fact, etc. So be prepared to assist people in completing the activities for whatever reason, and also to verbally describe what is going on in the slides/boards as people move things or type in their facts. This can aid those with vision impairment as well as those who are on the go and can’t look at a screen during the meeting.)

One other reason we used Google Slides is so we’d end up with a slide for each breakout group to be able to take notes, and a “parking lot” slide at the end of the deck for people to add questions or comments they wanted to bring back up in the main group or moving forward in future discussions. Because people already had the Google Slide deck open for the activity, it was easy for them to scroll down and be in the notetaking slide for their breakout group (we colored the background of the slides, and told people they were in the purple, blue, green, etc. slides to make it easier to jump into the right slide).

One other note regarding facilitation with Zoom + Google Slides is that the chat feature in Zoom doesn’t show previous chat to people who join the Zoom meeting after that message is sent. So if you want to use Zoom chat to share the Google Slides link, have your link saved elsewhere and assign someone to copy and paste that message into the chat frequently, so all participants have access and can open the URL as they join the meeting. (This also includes if someone leaves and re-enters the meeting: you may need to re-post the link yet again into chat.)

TLDR, we used Google Slides to facilitate meeting note taking, digital “dot voting” and other interactive icebreaker activities alongside Zoom.

Update – 2021 Convening The Center!

2020 did not go exactly as planned, and that includes Convening the Center (see original announcement/plan here), which we had intended to be an awesome, in-person gathering of individuals who are new to or have previous experience working to improve healthcare through advocacy, innovation, design, research, entrepreneurship, or some other category of “doing” and “fixing” problems they see for themselves and their community. But, as an early “I see COVID-19 is going to be a problem” person (see the post Scott and I published on March 7 begging people to stay home), by early February I was warning my co-PI and RWJF contacts that we would likely be postponing Convening the Center, and by May that was pretty clear. So we decided to request (and received) an extension on our grant from RWJF to enable us to push the grant into 2021…and ultimately, ::waves hand at everything still going on:: decided to shift to an all-virtual experience.

I’ll be honest – I was a little disappointed! But after several more months of work with John (Harlow, my Co-PI), I’m now very excited about the opportunities an all-virtual experience for Convening the Center will bring. First and foremost, although we had planned to pay participants for ALL travel costs, hotel, food, AND for their time, I knew there would likely be people who would still not be able to travel to participate. I am hoping that with a virtual experience (where we still pay people for their time!), the reduced time commitment will enable those people to take part.

Secondly, we’ve been thinking quite a bit about the design of virtual meetings and gatherings and have some ideas up our sleeve (which we’ll share as we finish developing them!) about how to achieve the goals of our gathering, online, without triggering video conference fatigue. If you’ve had any fantastic virtual experiences in 2020 (or ever), please let us know what they were, and what you loved (or what to avoid!), so that we can draw on as many inputs as possible to design this virtual experience.

Here’s what Convening the Center will now look like:

  • Starting now: recruitment. We are looking to solicit interest from individuals who are new to or have some experience working to change or improve health, healthcare, communities, etc. If that’s you, please nominate yourself here, and/or please also consider sharing this with your communities or a friend from another community!
  • January: we will reach out to nominees with another short form to gather a bit more information to help us create the cohort.
  • Early February: we will notify selected participants.
  • February: Phase 1 (2 hours scheduled time commitment from participants, plus some asynchronous opportunities)
  • April: Phase 2 (2-4 hour scheduled time commitment from participants, plus some asynchronous opportunities)
  • June: Phase 3 (2-4 hour scheduled time commitment from participants, plus some asynchronous opportunities)

We’ll be sharing more in the future about what the “phases” look like, and this virtual format will allow us to also invite participation from a broader group beyond the original cohort of participants. Stay tuned!

Again, here is the nomination link, where you can nominate yourself or others. Thanks!

Nominate someone you know for Convening The Center!

How to deal with wildfire smoke and air quality issues during COVID-19

2020. What a year. We’ve been social distancing since late February and being very careful in terms of minimizing interactions even with family, for months. We haven’t traveled, we haven’t gone out to eat, and we basically only go out to get exercise (with a mask when it’s on hiking trails/around anyone) or Scott goes to the grocery store (n95 masked). We’ve been working on CoEpi (see CoEpi.org – an open source exposure notification app based on symptom reports) and staying on top of the scientific literature around COVID-19, regarding NPIs like distancing and masking; at-home diagnostics like temperature and pulse oximetry monitoring; prophylactics and treatments like zinc, quercetin, and even MMR vaccines; and the impact of ventilation and air quality on COVID-19 transmission and susceptibility.

And we live in Washington, so the focus on air quality got very real very quickly during this year’s wildfire season, when we had wildfires across the state of Washington, then got pummeled for over a week with hazardous levels of wildfire smoke coming up from Oregon and California on top of our existing smoke layer. But one of our DIY air quality hacks for COVID-19 gave us a head start on air quality improvements for smoke-laden air, which I’ll describe below.

Here are various things we’ve gotten and have been using in our personal attempts to thwart COVID-19:

  • Finger pulse oximeter.
    • Just about any cheap pulse oximeter you can find is fine. The goal is to get an idea of your normal baseline oxygen rates. If you dip low, that might be a reason to go to urgent care or the ER or at least talk to your doctor about it. For me, I am typically 98-99% (mine doesn’t read higher than 99%), and my personal plan would be to talk to a healthcare provider if I was sick and started dropping below 94%.
  • Thermometer
    • Use any thermometer that you’ll actually use. I have previously used a no-touch thermometer that could read foreheads but found it varied widely and inconsistently, so I went back to an under-the-tongue thermometer and took my temperature for several months at different times to figure out my baselines. If you’re sick or have a suspected exposure, it’s good to be checking at different times of the day (people often have lower temps in the morning than in the evening, so knowing your daily differences may help you evaluate if you’re elevated for you or not).
    • Note: women with menstrual cycles may have changes related to this; such as lower baseline temps at the start of the cycle and having a temperature upswing around or after the mid-point in their cycle. But not all do. Also, certain medications or birth controls can impact basal temperatures, so be aware of that.
  • Originally, n95 masks with outlet valves.
    • Note: n95 masks with valves cannot be used by medical professionals, because the valves make them less effective for protecting others. (So don’t freak out at people who had a box of valved n95 masks from previous wildfire smoke seasons, as we did. Ahem.) 
    • We had a box we bought after previous years’ wildfire smoke, and they work well for us (in low-risk non-medical settings) for repeated use. They’re Scott’s go-to choice. If you’re in a setting where the outlet valve matters (indoors in a doctor’s/medical setting, or on a plane), you can easily pop a surgical/procedure mask over the valve to block the valve to protect others from your exhaust, while still getting good n95-level protection for yourself.
    • They had been out of stock since February, but given the focus on valveless n95s for medical PPE, there have been a few boxes of n95 masks with outlet valves showing up online at silly prices ($7 per mask or so). But kn95s are a cheaper per-mask option that are generally more available – see below.
    • (June 2021 note – they are back to reasonable prices, in the $1-2 range per mask on Amazon, and available again.)
  • kn95 masks.
    • kn95 masks are a different standard than US-rated n95; but they both block 95% of tiny (0.3 micron) particles. For non-medical usage, we consider them equivalent. But like n95, the fit is key.
    • We originally bought these kn95s, but the ear loops were quite big on me. (See below for options if this is the case on any you get.) They aren’t as hardy as the n95s with valves (above); the straps have broken off, tearing the mask, after about 4-5 long wears. That’s still worth it for them being $2-3 each (depending on how many you buy at a time) for me, but I’d always pack a spare mask (of any kind) just in case.
      • Option one to adjust ear loops: I loop them over my ponytail, making them head loops. This has been my favorite kn95 option because I get a great fit and a tight seal with this method.
      • Option two to adjust ear loops: tie knots in the ear loops
      • Option three to adjust ear loops: use things like this to tighten the ear loops
    • We also got a set of these kn95s. They don’t fit quite as well in terms of a tight face fit, but these actually work as ear loops (as designed), and I was able to wear this inside the house on the worst day of air quality.
  • Box fan with a filter to reduce COVID-19 particles in the air:
    • We read this story about putting a furnace/AC air filter on a box fan to help reduce the number of COVID-19 particles in the air. We already had a box fan, so we took one of our spare 20×20 filters and popped it on. I’m allergic to dust, cats (which we just got), trees, grass, etc., so I knew it would also help with regular allergens. There are different levels of filter – all the way up to HEPA filters – but we had MERV 12 so that’s what we used.
  • Phone/object UV sanitizer
    • We got a PhoneSoap Pro (in lavender, but there are other colors). Phones are germy, and being able to pop the phone in (plus keys or any other objects like credit cards or insurance cards that might have been handled by another human) to disinfect has been nice to have.
    • The Pro is done sanitizing in 5 minutes, vs. 10 minutes for the regular one. It’s not quite 2x the price of the non-Pro, but I’ve found it worthwhile because otherwise I would be impatient to get my phone back out. I usually pop my phone in it when I get home from my walk, and by the time I’m done washing my hands and all the steps of getting home, the phone is about done or already done being sanitized.
  • Bonus (but not as useful to everyone as the above, and pricey): Oura ring
    • Scott and I also both got Oura rings. They are pricey, but every morning when we wake up we can see our lowest resting heart rate (RHR), heart rate variability (HRV), temperature deviations, and respiratory rate (RR). Studies have shown that changes in HRV, RHR, overnight temperature, and RR happen early in COVID-19 and other infections, which can give an early warning sign (before you get to the point of being symptomatic and highly infectious) that you need to mask up, work from home, social distance, and avoid interacting with other people if you can help it. I find the data soothing, as I am used to using a lot of diabetes data on a daily and real-time basis (see also: invented an open source artificial pancreas). Due to price and level of interest in self-tracking data, this may not be a great tool for everyone.
    • Note this doesn’t tell you your temperature in real time, or present absolute values, but it’s helpful to see, and get warnings about, any concerning trends in your body temperature data. I’ve seen several anecdotal reports of this being used for early detection of COVID-19 infection and various types of relapses experienced by long-haulers.

And here are some things we’ve added to battle air quality during wildfire smoke season:

  • We were already running a box fan with a filter (see above for more details) for COVID-19 and allergen reduction; so we kept running it on high speed for smoke reduction.
    • Basic steps: get box fan, get a filter, and duct tape or strap it on. Doesn’t have to be cute, but it will help.
    • I run this on high speed during the day in my bedroom, and then on low speed overnight or sleep with earplugs in.
  • We already had a small air purifier for allergens, which we also kept running on high. This one hangs out in our guest bedroom/my office.
  • We caved and got a new, bigger air purifier, since we expect future years to be equally and unfortunately as smoky. This is the new air purifier we got. (Scott chose the 280i version that claims to cover 279 sq. ft.). It’s expensive, but given how miserable I was even inside the house with decent air quality thanks to my box fan and filter, little purifier, and our A/C filtered air… I consider it to be worth the investment.
    • We plugged it in and validated that with our A/C-filtered air combined with my little air purifier and the box fan with filter running on high, we already had ‘good’ air quality (but not excellent). We also stuck it out in the hallway to see what the hallway air quality was running – around 125 ug/m^3 – yikes. Turns out that was almost as high as the outside air, which is why I’ve had to wear a kn95 mask even to walk hallway laps, and why my eyes are irritated. (Example of the air quality difference between the hallway and our kitchen: the hallway is much higher.)
  • Check your other filters while you’re on air quality monitoring alert. We found our A/C intake duct vent hadn’t had its air filter changed since we moved in over a year ago… and it turns out it’s a non-standard size and had a hand-cut one stuffed in there, so we ordered a correctly sized one for the vent, and taped a different one over the outside in the interim.
  • The other thing to fight the smoke is having n95 with valves or kn95 masks to wear when we have to go outside, or if it gets particularly bad inside. Our previous strategy was to have several on hand for wildfire season, and we’ll continue to do this. (See above in the COVID-19 section for descriptions in more detail about different kinds of masks we’ve tried.)
  • 2022 update: I got a mini personal air purifier to try for travel (to help reduce risk of COVID-19 in addition to all other precautions like staying masked on planes and in indoor spaces), and it also turned out to be beneficial inside during the worst of our 2022 wildfire smoke season. I had a slightly scratchy throat even with two box fans and two different air purifiers inside; but keeping this individual one plugged in and pointed at my face overnight meant I stopped waking up with a scratchy throat. That’s great for wildfire smoke, and it also suggests there is some efficacy to this fan for its intended purpose, which is improving the air around my face in indoor spaces during travel, for COVID-19 and other disease prevention.

Wildfires, their smoke, and COVID-19 combined are a bit of a mess for our health. Stay inside when you can, wear masks when you’re around other people outside your household that you have to share air with, wash your hands, and good luck.

Poster and presentation content from @DanaMLewis at #ADA2020 and #DData20

In previous years (see 2019 and 2018), I mentioned sharing content from ADA Scientific Sessions (this year it’s #ADA2020) with those not physically present at the conference. This year, NO ONE is present at the event, and we’re all virtual! Even more reason to share content from the conference. :)

I contributed to and co-authored two different posters at Scientific Sessions this year:

  • “Multi-Timescale Interactions of Glucose and Insulin in Type 1 Diabetes Reveal Benefits of Hybrid Closed Loop Systems” (poster 99-LB), along with Azure Grant and Lance Kriegsfeld, PhD.
  • “Do-It-Yourself Artificial Pancreas Systems for Type 1 Diabetes Reduce Hyperglycemia Without Increasing Hypoglycemia” (poster 988-P in category 12-D Clinical Therapeutics/New Technology—Insulin Delivery Systems), alongside Jennifer Zabinsky, MD MEng, Haley Howell, MSHI, Alireza Ghezavati, MD, Andrew Nguyen, PhD, and Jenise Wong, MD PhD.

And, while not a poster at ADA, I also presented the “AID-IRL” study funded by DiabetesMine at #DData20, held in conjunction with Scientific Sessions. A summary of the study is also included in this post.

First up, the biological rhythms poster, “Multi-Timescale Interactions of Glucose and Insulin in Type 1 Diabetes Reveal Benefits of Hybrid Closed Loop Systems” (poster 99-LB). (Twitter thread summary of this poster here.)

Building off our work as detailed last year, Azure, Lance, and I have been exploring the biological rhythms in individuals living with type 1 diabetes. Why? It’s not been done before, and we now have the capabilities thanks to technology (pumps, CGM, and closed loops) to better understand how glucose and insulin dynamics may be similar or different than those without diabetes.

Background:

Blood glucose and insulin exhibit coupled biological rhythms at multiple timescales, including hours (ultradian, UR) and the day (circadian, CR), in individuals without diabetes. The presence and stability of these rhythms are associated with healthy glucose control in individuals without diabetes. (See right, adapted from Mejean et al., 1988.)

However, biological rhythms in longitudinal (e.g., months to years) data sets of glucose and insulin outputs have not been mapped in a wide population of people with Type 1 Diabetes (PWT1D). It is not known how glucose and insulin rhythms compare between T1D and non-T1D individuals. It is also unknown if rhythms in T1D are affected by type of therapy, such as Sensor Augmented Pump (SAP) vs. Hybrid Closed Loop (HCL). As HCL systems permit feedback from a CGM to automatically adjust insulin delivery, we hypothesized that rhythmicity and glycemia would exhibit improvements in HCL users compared to SAP users. We describe longitudinal temporal structure in glucose and insulin delivery rate of individuals with T1D using SAP or HCL systems in comparison to glucose levels from a subset of individuals without diabetes.

Data collection and analysis:

We assessed the stability and amplitude of normalized continuous glucose and insulin rate oscillations using the continuous wavelet transform and wavelet coherence. Data came from 16 non-T1D individuals (CGM only, >2 weeks per individual) from the Quantified Self CGM dataset and 200 individuals (n = 100 HCL, n = 100 SAP; >3 months per individual) from the Tidepool Big Data Donation Project. Morlet wavelets were used for all analyses. Data were analyzed and plotted using Matlab 2020a and Python 3, in conjunction with in-house code for wavelet decomposition modified from the “Jlab” toolbox, from code developed by Dr. Tanya Leise (Leise 2013), and from the Wavelet Coherence toolkit by Dr. Xu Cui. Linear regression was used to generate correlations, and paired t-tests were used to compare AUC for wavelets and wavelet coherences by group (df=100). Stats used 1 point per individual per day.

Wavelets Assess Glucose and Insulin Rhythms and Interactions

Wavelet Coherence flow for glucose and insulin

Morlet wavelets (A) estimate rhythmic strength in glucose or insulin data at each minute in time (a combination of signal amplitude and oscillation stability) by assessing the fit of a wavelet stretched in window and in the x and y dimensions to a signal (B). The output (C) is a matrix of wavelet power, periodicity, and time (days). Transform of example HCL data illustrate the presence of predominantly circadian power in glucose, and predominantly 1-6 h ultradian power in insulin. Color map indicates wavelet power (synonymous with Y axis height). Wavelet coherence (D) enables assessment of rhythmic interactions between glucose and insulin; here, glucose and insulin rhythms are highly correlated at the 3-6 (ultradian) and 24 (circadian) hour timescales.
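The poster’s analyses used Matlab and the toolboxes cited in the methods above; purely as an illustrative analog, here is a minimal Python sketch (using PyWavelets, with made-up signal, sampling rate, and period grid) of running a Morlet continuous wavelet transform over CGM-like data and picking out the dominant periodicity:

```python
import numpy as np
import pywt

# Synthetic CGM-like signal (illustrative only): a circadian (24 h)
# rhythm plus a smaller 4 h ultradian rhythm, sampled every 5 minutes
# for 14 days, with a little noise.
dt_hours = 5 / 60
t = np.arange(0, 14 * 24, dt_hours)
rng = np.random.default_rng(0)
glucose = (np.sin(2 * np.pi * t / 24)          # circadian component
           + 0.5 * np.sin(2 * np.pi * t / 4)   # ultradian component
           + 0.1 * rng.normal(size=t.size))

# Normalize before the transform, as in the methods above.
glucose = (glucose - glucose.mean()) / glucose.std()

# Morlet continuous wavelet transform across periods of ~1-36 hours.
periods = np.linspace(1, 36, 200)              # hours
fc = pywt.central_frequency('morl')            # Morlet center frequency
scales = fc * periods / dt_hours               # convert period -> scale
coeffs, _ = pywt.cwt(glucose, scales, 'morl', sampling_period=dt_hours)

# Time-averaged wavelet power shows which periodicities dominate.
power = (np.abs(coeffs) ** 2).mean(axis=1)
dominant_period = periods[np.argmax(power)]
```

Plotting `power` against `periods` would show peaks near the 24 h (circadian) and 4 h (ultradian) bands; the glucose-insulin coherence in panel (D) additionally requires a cross-wavelet step, which the toolboxes named in the methods provide.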

Results:

Hybrid Closed Loop Systems Reduce Hyperglycemia

Glucose distribution of SAP, HCL, and non-T1D
  • A) Proportional counts* of glucose distributions of all individuals with T1D using SAP (n=100) and HCL (n=100) systems. SAP system users exhibit a broader, right-shifted distribution in comparison to individuals using HCL systems, indicating greater hyperglycemia (>7.8 mmol/L). Hypoglycemic events (<4 mmol/L) comprised <5% of all data points for either T1D dataset.
  • B) Proportional counts* of non-T1D glucose distributions. Although limited in number, our dataset from people without diabetes exhibits a tighter blood glucose distribution, with the vast majority of values falling in euglycemic range (n=16 non-T1D individuals).
  • C) Median distributions for each dataset.
  • *Counts are scaled such that each individual contributes the same proportion of total data per bin.
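A minimal sketch of what that per-individual scaling means in practice (hypothetical data and variable names, not the poster’s actual code): each individual’s glucose values are histogrammed separately and normalized so that every person contributes equally, no matter how much CGM data they donated.

```python
import numpy as np

rng = np.random.default_rng(1)
bins = np.arange(2, 22.5, 0.5)  # mmol/L bin edges (illustrative)

# Hypothetical cohort: three individuals with different amounts of data.
individuals = [rng.normal(8, 2, size=n) for n in (5000, 20000, 12000)]

scaled = np.zeros(len(bins) - 1)
for glucose in individuals:
    counts, _ = np.histogram(glucose, bins=bins)
    scaled += counts / counts.sum()   # each person's histogram sums to 1
scaled /= len(individuals)            # overall distribution sums to 1
```

Without the per-person normalization, the individual with 20,000 readings would dominate the pooled distribution.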

HCL Improves Correlation of Glucose-Insulin Level & Rhythm

Glucose and Insulin rhythms in SAP and HCL

SAP users exhibit uncorrelated glucose and insulin levels (A) (r² = 3.3×10⁻⁵; p = 0.341) and uncorrelated URs of glucose and insulin (B) (r² = 1.17×10⁻³; p = 0.165). Glucose and its rhythms take a wide spectrum of values for each of the standard doses of insulin rates provided by the pump, leading to the striped appearance (B). By contrast, Hybrid Closed Loop users exhibit correlated glucose and insulin levels (C) (r² = 0.02; p = 7.63×10⁻¹⁶), and correlated ultradian rhythms of glucose and insulin (D) (r² = −0.13; p = 5.22×10⁻³⁸). Overlays (E, F).

HCL Results in Greater Coherence than SAP

Non-T1D individuals have highly coherent glucose and insulin at the circadian and ultradian timescales (see Mejean et al., 1988, Kern et al., 1996, Simon and Brandenberger 2002, Brandenberger et al., 1987), but these relationships had not previously been assessed long-term in T1D.

coherence between glucose and insulin in HCL and SAP, and glucose swings between SAP, HCL, and non-T1D
A) Circadian (blue) and 3-6 hour ultradian (maroon) coherence of glucose and insulin in HCL (solid) and SAP (dotted) users. Transparent shading indicates standard deviation. Although both HCL and SAP individuals have lower coherence than would be expected in a non-T1D individual, HCL CR and UR coherence are significantly greater than SAP CR and UR coherence (paired t-test, p = 1.51×10⁻⁷, t = −5.77 and p = 5.01×10⁻¹⁴, t = −9.19, respectively). This brings HCL users’ glucose and insulin closer to the canonical non-T1D phenotype than SAP users’.

B) Additionally, the amplitude of HCL users’ glucose CRs and URs (solid) is closer (smaller) to that of non-T1D individuals (dashed) than are SAP glucose rhythms (dotted). SAP CR and UR amplitude is significantly higher than that of HCL or non-T1D (t-test (1,98), p = 47×10⁻¹⁷ and p = 5.95×10⁻²⁰, respectively), but HCL CR amplitude is not significantly different from non-T1D CR amplitude (p = 0.61).
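As a rough illustration of the group comparison described above (synthetic numbers, not the study’s data), a paired t-test on one coherence value per individual might look like:

```python
import numpy as np
from scipy import stats

# Hypothetical per-individual circadian coherence values, chosen only
# to illustrate the test; not the study's data.
rng = np.random.default_rng(3)
hcl_coherence = rng.normal(0.55, 0.10, size=100)
sap_coherence = rng.normal(0.45, 0.10, size=100)

t_stat, p_value = stats.ttest_rel(hcl_coherence, sap_coherence)
```

Note that a paired test assumes matched observations; with two fully independent cohorts, `stats.ttest_ind` would be the unpaired alternative.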

Together, HCL users are more similar than SAP users to the canonical Non-T1D phenotype in A) rhythmic interaction between glucose and insulin and B) glucose rhythmic amplitude.

Conclusions and Future Directions

T1D and non-T1D individuals exhibit different relative stabilities of within-a-day rhythms and daily rhythms in blood glucose, and T1D glucose and insulin delivery rhythmic patterns differ by insulin delivery system.

Hybrid Closed Looping is Associated With:

  • Lower incidence of hyperglycemia
  • Greater correlation between glucose level and insulin delivery rate
  • Greater correlation between ultradian glucose and ultradian insulin delivery rhythms
  • Greater degree of circadian and ultradian coherence between glucose and insulin delivery rate than in SAP system use
  • Lower amplitude swings at the circadian and ultradian timescale

These preliminary results suggest that HCL recapitulates non-diabetes glucose-insulin dynamics to a greater degree than SAP. However, pump model, bolusing data, looping algorithms and insulin type likely all affect rhythmic structure and will need to be further differentiated. Future work will determine if stability of rhythmic structure is associated with greater time in range, which will help determine if bolstering of within-a-day and daily rhythmic structure is truly beneficial to PWT1D.
Acknowledgements:

Thanks to all of the individuals who donated their data as part of the Tidepool Big Data Donation Project, as well as the OpenAPS Data Commons, from which data is also being used in other areas of this study. This study is supported by JDRF (1-SRA-2019-821-S-B).

(You can download a full PDF copy of the poster here.)

Next is “Do-It-Yourself Artificial Pancreas Systems for Type 1 Diabetes Reduce Hyperglycemia Without Increasing Hypoglycemia” (poster 988-P in category 12-D Clinical Therapeutics/New Technology—Insulin Delivery Systems), which I co-authored alongside Jennifer Zabinsky, MD MEng, Haley Howell, MSHI, Alireza Ghezavati, MD, Andrew Nguyen, PhD, and Jenise Wong, MD PhD. There is a Twitter thread summarizing this poster here.

This was a retrospective double cohort study that evaluated data from the OpenAPS Data Commons (data ranged from 2017-2019) and compared it to conventional sensor-augmented pump (SAP) therapy from the Tidepool Big Data Donation Project.

Methods:

  • From the OpenAPS Data Commons, one month of CGM data (with more than 70% of the month spent using CGM) was used from each individual, as long as they had been living with T1D for >1 year. People could be using any type of DIYAPS (OpenAPS, Loop, or AndroidAPS), and there were no age restrictions.
  • A random age-matched sample from the Tidepool Big Data Donation Project of people with type 1 diabetes with SAP was selected.
  • The primary outcome assessed was percent of CGM data <70 mg/dL.
  • The secondary outcomes assessed were # of hypoglycemic events per month (15 minutes or more <70 mg/dL); percent of time in range (70-180mg/dL); percent of time above range (>180mg/dL), mean CGM values, and coefficient of variation.
Methods_DIYAPSvsSAP_ADA2020_DanaMLewis
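A hedged sketch of how those outcome definitions can be computed from a month of 5-minute CGM readings (the data is synthetic and the helper name is illustrative, not the study’s code):

```python
import numpy as np

rng = np.random.default_rng(2)
# One month of 5-minute CGM readings (mg/dL), synthetic for illustration.
cgm = np.clip(rng.normal(150, 45, size=30 * 24 * 12), 40, 400)

pct_below_70 = 100 * np.mean(cgm < 70)                   # primary outcome
pct_in_range = 100 * np.mean((cgm >= 70) & (cgm <= 180))  # time in range
pct_above_180 = 100 * np.mean(cgm > 180)                 # time above range
mean_glucose = cgm.mean()
cv = 100 * cgm.std() / cgm.mean()                        # coefficient of variation

def count_hypo_events(values, threshold=70, min_readings=3):
    """Count runs of 15+ minutes (3+ consecutive 5-min readings) below threshold."""
    events, run = 0, 0
    for v in values:
        run = run + 1 if v < threshold else 0
        if run == min_readings:   # count each qualifying run exactly once
            events += 1
    return events

hypo_events_per_month = count_hypo_events(cgm)
```

The three percentage buckets partition the data, so they always sum to 100%.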

Demographics:

  • From Table 1, this shows the age of participants was not statistically different between the DIYAPS and SAP cohorts. Similarly, the age at T1D diagnosis or time since T1D diagnosis did not differ.
  • Table 2 shows the additional characteristics of the DIYAPS cohort, which included data shared by a parent/caregiver for their child with T1D. Participants had been using DIYAPS for an average of 7 months at the time of the month of CGM data used for the study. The self-reported HbA1c in the DIYAPS cohort was 6.4%.
Demographics_DIYAPSvsSAP_ADA2020_DanaMLewis DIYAPS_Characteristics_DIYAPSvsSAP_ADA2020_DanaMLewis

Results:

  • Figure 1 shows the comparison in outcomes based on CGM data between the two groups. Asterisks (*) indicate statistical significance.
  • There was no statistically significant difference in % of CGM values below 70mg/dL between the groups in this data set sampled.
  • DIYAPS users had higher percent in target range and lower percent in hyperglycemic range, compared to the SAP users.
  • Table 3 shows the secondary outcomes.
  • There was no statistically significant difference in the average number of hypoglycemic events per month between the 2 groups.
  • The mean CGM glucose value was lower for the DIYAPS group, but the coefficient of variation did not differ between groups.
CGM_Comparison_DIYAPSvsSAP_ADA2020_DanaMLewis SecondaryOutcomes_DIYAPSvsSAP_ADA2020_DanaMLewis

Conclusions:

    • Users of DIYAPS (from this month of sampled data) had a comparable amount of hypoglycemia to those using SAP.
    • Mean CGM glucose and frequency of hyperglycemia were lower in the DIYAPS group.
    • Percent of CGM values in target range (70-180mg/dL) was significantly greater for DIYAPS users.
    • This shows a benefit in DIYAPS in reducing hyperglycemia without compromising a low occurrence of hypoglycemia. 
Conclusions_DIYAPSvsSAP_ADA2020_DanaMLewis

(You can download a PDF of the e-poster here.)

Finally, my presentation at this year’s D-Data conference (#DData20). The study I presented, called AID-IRL, was funded by Diabetes Mine. You can see a Twitter thread summarizing my AID-IRL presentation here.

AID-IRL-Aim-Methods_DanaMLewis

I did semi-structured phone interviews with 7 users of commercial AID systems over the last few months. The study was funded by DiabetesMine – both for my time in conducting the study and for study participants, who received $50 for their participation. I sought a mix of longer-time and newer AID users, using a mix of systems. Four Control-IQ users and two 670G users were interviewed, as well as one CamAPS FX user, since that system was approved in the UK during the time of the study.

Based on the interviews, I coded their feedback on each of the themes of the study according to whether they saw improvements (or did not have issues); had no changes but were satisfied, or had neutral experiences; or had a negative impact/experience. For each participant, I reviewed their experience and what they were happy with or frustrated by.

Here are some of the details for each participant.

AID-IRL-Participant1-DanaMLewis AID-IRL-Participant1-cont_DanaMLewis
1 – A parent of a child using Control-IQ (off-label), who saw a 30% increase in TIR with no increased hypoglycemia. They spend less time correcting than before; less time thinking about diabetes; and “get solid uninterrupted sleep for the first time since diagnosis”. They wish they had remote bolusing and more system information available in remote monitoring on phones. They miss being able to use the system during the 2-hour CGM warmup, and found the system dealt well with growth spurt hormones but not as well with underestimated meals.

AID-IRL-Participant2-DanaMLewis AID-IRL-Participant2-cont-DanaMLewis
2 – An adult male with T1D who previously used DIYAPS saw a 5-10% decrease in TIR with Control-IQ (though it’s on par with other participants’ TIR), and is very pleased by the all-in-one convenience of his commercial system. He misses autosensitivity (a short-term learning feature of how insulin needs may vary from base settings) from DIYAPS, and has stopped eating breakfast since he found the system couldn’t manage it well. He is doing more manual corrections than he was before.

AID-IRL-Participant5-DanaMLewis AID-IRL-Participant5-cont_DanaMLewis
5 – An adult female with LADA started, stopped, and restarted using Control-IQ, getting the same TIR that she had before on Basal-IQ. It took artificially inflating her settings to achieve these similar results. She likes the peace of mind of sleeping while the system prevents hypoglycemia. She is frustrated by the ‘too high’ target; by not having low prevention if she disables Control-IQ; and by how much she had to inflate her settings to achieve her outcomes. It’s hard for her to know how much insulin the system gives each hour (she still produces some of her own insulin).

AID-IRL-Participant7-DanaMLewis AID-IRL-Participant7-cont-DanaMLewis
7 – An adult female with T1D who frequently has to take steroids for other reasons, causing increased BGs. With Control-IQ, she sees a 70% increase in TIR overall and increased TIR overnight, and found it does a ‘decent job keeping up’ with steroid-induced highs. She wants to run ‘tighter’ and have an adjustable target, and never runs in sleep mode so that she can always get the bolus corrections that are more likely to bring her closer to target.

AID-IRL-Participant3-DanaMLewis AID-IRL-Participant3-cont-DanaMLewis
3 – An adult male with T1D who has used the 670G for 3 years didn’t observe any changes to A1c or TIR, but is pleased with his outcomes, especially the ability to handle his activity levels by using the higher activity target. He is frustrated by the CGM, and is woken up 1-2x a week to calibrate overnight. He wishes he could still have low glucose suspend even when he’s kicked out of automode due to calibration issues. He also commented on post-meal highs and more manual interventions.

AID-IRL-Participant6-DanaMLewis AID-IRL-Participant6-contDanaMLewis
6 – Another adult male 670G user was originally diagnosed with T2 (now considered T1), with a very high total daily insulin dose that decreased significantly when he switched to AID. He’s happy with increased TIR and less hypoglycemia, plus the decreased TDD. Due to #COVID19, he did virtual training, but would have preferred in-person. He has 4-5 alerts/day and is woken up every other night due to BG alarms or calibration. He does not like the time it takes to charge the CGM transmitter, in addition to sensor warmup.

AID-IRL-Participant4-DanaMLewis AID-IRL-Participant4-contDanaMLewis
4 – The last participant is an adult male with T1D who previously used DIYAPS but was able to test-drive the CamAPS FX. He saw no TIR change from DIYAPS (which pleased him) and thought the learning curve was easy – but he had to learn the system and let it learn him. He experienced ‘too much’ hypoglycemia (~7% <70 mg/dL, 2x his previous), and found it challenging to not have visibility of IOB. He also found the in-app CGM alarms annoying. He noted the system may work better for people with regular routines.

You can see a summary of the participants’ experiences via this chart. Overall, most cited increased or same TIR. Some individuals saw reduced hypos, but a few saw increases. Post-meal highs were commonly mentioned.

AID-IRL-UniversalThemes2-DanaMLewis AID-IRL-UniversalThemes-DanaMLewis

Those newer to CGM have a noticeable learning curve and were more likely to comment on the number of alarms and system alerts they saw. The 670G users were more likely to describe connection/troubleshooting issues and CGM calibration issues, both of which impacted sleep.

This view highlights those who more recently adopted AID systems. One noted their learning experience was ‘eased’ by “lurking” in the DIY community, and previously participating in an AID study. One felt the learning curve was high. Another struggled with CGM.

AID-IRL-NewAIDUsers-DanaMLewis

Both previous DIYAPS users who were now using commercial AID systems referenced the convenience factor of commercial systems. One saw decreased TIR and has altered his behaviors accordingly, while the other saw no change in TIR but had increased hypos.

AID-IRL-PreviousDIYUsers-DanaMLewis

Companies building AID systems for PWDs should consider that the onboarding and learning curve may vary for individuals, especially those newer to CGM. Many want better displays of IOB and the ability to adjust targets. Remote bolusing and remote monitoring are highly desired by all, regardless of age. Post-prandial glycemia was frequently mentioned as the weak point of commercial AID systems. Even with ‘ideal’ TIR, many commercial users are still doing frequent manual corrections outside of mealtimes. This is an area of improvement for commercial AID to further reduce the burden of managing diabetes.

AID-IRL-FeedbackForCompanies-DanaMLewis

Note – all studies have their limitations. This was a small deep-dive study that is not necessarily representative, due to the design and small sample size. Timing of system availability influenced the ability to have new/longer time users.

AID-IRL-Limitations-DanaMLewis

Thank you to all of the participants of the study for sharing their feedback about their experiences with AID-IRL!

(You can download a PDF of my slides from the AID-IRL study here.)

Have questions about any of my posters or presentations? You can always reach me via email at Dana@OpenAPS.org.