4 years DIY closed looping with #OpenAPS – what changed and what hasn’t

It’s hard to express the magnitude of how much closed looping can improve a person with diabetes’ life, especially to someone who doesn’t have diabetes or live closely with someone who does. There are so many benefits – and so many of them go way beyond the typically studied “A1c improvement” and “increased time in range”. Sure, those happen (and in case you haven’t seen it, see some of the outcomes from various international studies looking at DIY closed loop outcomes). But everything else…it’s hard to explain all of the magic that happens in real life, that’s made so much richer by having technology that for the most part keeps diabetes out of the way, and more importantly: off the top of your mind.

Personally, my first and most obvious benefit, and the whole reason I started DIYing in the first place, was to have the peace of mind to sleep safely at night. Objective achieved, immediately. Then over time, I got the improvements in A1c and time in range, plus reduction in time spent doing diabetes ‘stuff’ and time spent thinking about my own diabetes. The artificial pancreas ‘rigs’ got smaller. We improved the algorithm, to the point where it can handle the chaos that is everything from menstrual cycle to having the flu or norovirus.

More recently, in the past ~17 months, I’ve achieved an ultimate level of not doing much diabetes work that I never thought was possible: with the help of faster insulin and things like SMBs (algorithm enhancements in OpenAPS), I’ve been able to do a simple meal announcement by pressing a button on my watch or phone…and not have to bolus. Not worrying about precise carb counts. Not worrying about specific timing of insulin activity. Not worrying about post-meal lows. Not worrying about lots of exercise. And the results are pretty incredible to me:

We should be measuring and reducing user burden with AID in addition to improving TIR and A1c

But I remember early on, when we had announced that we had figured out how to close the loop, we got a lot of push back saying: well, that’s good for you – but will it work for anyone else? And I remember thinking about how if it helped one other person sleep safely at night…it would be worth the amount of work it would take to open source it. Even if we didn’t know how well it would work for other people, we had a feeling it might work for some people. And even if it only worked for a few people, it was worth doing. Would DIY end up working for everyone, or being something that everyone would want to do? Maybe not, and definitely not. We wouldn’t necessarily change the world for everyone by open sourcing an APS, but it could help change the world for someone else, and we thought that was (and still is) worth doing. After all, the ripple effect may help ultimately change the world for everyone else in ways we couldn’t predict or expect.

This has become true in more ways than one.

That ‘one other person’ turned into a few…then dozens…hundreds…and now probably thousand(s) around the world using various DIY closed loop systems.

And in addition to more people being able to choose to access different DIY systems with more pumps of choice, CGMs of choice, and algorithms of choice, we’ve also seen the ripple effect in the way the world works, too. There is now, thankfully, at least one company evaluating open source code, running simulations with it, and, where it out-performs their original algorithm or code components, utilizing that knowledge to improve their system. They’re also giving back to the open source diabetes community, too. Hopefully more companies will take this approach & bring better products more quickly to the market. When they are ready to submit said products, we know at least U.S. regulators at the FDA are ready to quickly review and work with companies to get better tools on the market. That’s a huge change from years ago, when there was a lot of finger pointing and what felt like a lot of delay preventing newer technology from reaching the market. The other change I’m seeing is in diabetes research, where researchers are increasingly working directly with patients from the start and designing better studies around the things that actually matter to people with diabetes, including analyzing the impact and outcomes of open source technology.

After five years of open source diabetes work, and specifically four years of DIY closed looping, it finally feels like the ripples are ultimately helping achieve the vision we had at the start of OpenAPS, articulated in the conclusion of the OpenAPS Reference Design:

Is there still more work to do? Absolutely.

Even as more commercial APS roll out, it takes too long for these to reach many countries. And in most parts of the world, it’s still insanely hard and/or expensive to get insulin (which is one of the reasons Scott and I support Life For A Child to help get insulin, supplies, and education to as many children as possible in countries where otherwise they wouldn’t be able to access it – more on that here). And even when APS are “approved” commercially, that doesn’t mean they’ll be affordable or accessible, even with health insurance. So I expect our work to continue: not only supporting ongoing improvements to DIY systems directly, but also encouraging and running studies to generalize knowledge from DIY systems; hopefully seeing DIY systems approved to work with existing interoperable devices; helping any company that will listen to improve their systems, both in terms of algorithms and in terms of usability; and helping regulators see both what’s possible as well as what’s needed to successfully use these types of systems in the real world. I don’t see this work ending for years to come – not until the day when every person with diabetes in every country has access to basic diabetes supplies, and the ability to choose to use – or not – the best technology that we know is possible.

But even so, after four years of DIY closed looping, I’m incredibly thankful for the quality of life that has been made possible by OpenAPS and the community around it. And I’m thankful for the community for sharing their stories of what they’ve accomplished or done while using DIY closed loop systems. It’s incredible to see people sharing stories of how they are achieving their best outcomes after 45 years of diabetes; or people posting from Antarctica; or after running marathons; or after a successful and healthy pregnancy where they used their DIY closed loop throughout; or after they’ve seen the swelling in their eyes go down; etc.

The stories of the real-life impacts of this type of technology are some of the best ripple effects that I never want to forget.

Running and fueling for runs with type 1 diabetes

This blog post is not for you. (Well that sounds mean, doesn’t it? It’s not meant to be mean. But this post is written for a very small subset of people like me who are stumbling around on page 16 of Google trying to find someone sharing experiences and specific details around methods (both successful and less so) for fueling for longer endurance events such as full marathons or ultramarathons with type 1 diabetes. So – please don’t be offended, but also don’t be surprised if you don’t find this post very useful!)

I’ve started running again, and more, this year, and am now to the point where I’m considering running another full marathon sometime next year. As I adventure into running longer distances, and more miles, I’m reflecting on what worked (diabetes-wise) in my first full marathon, and what I want to try to do differently. This post logs some of my experiences and notes to date, in honor of fellow page-16-of-Google-seekers, rather than waiting until after I run another full (if I do) while there continues to be not much info out there.

Some background on my running:

I’m not a runner. And not a good runner. I never liked running. But, I walked the Seattle half marathon in December 2012 and thought it might be fun to then walk the full marathon in December 2013. However, I also tried snowboarding for the first time in January 2013 and majorly damaged my knee. I could barely walk the few blocks to work every day, let alone do my normal activities. It took several months, and several PT sessions, to get back to normal. But part of my frustration and pain manifested into the idea that I should recover enough to still walk that full marathon in December. And in order to be off the course by the time it closed, I would need to run a little bit. And I could barely walk, and never ran, so I would need to do some training to be able to run a mile or two out of the 26.2 I planned to otherwise walk. So I set off to teach myself how to run with the idea of walk/running the full, which evolved into a plan to run/walk it, and mostly eventually run it. And that’s what I did.

Now – this marathon was December 2013. This was right when we created DIYPS, and a year before we closed the loop, so I was in full, old-school traditional manual diabetes mode. And it sucked quite a bit. But now, almost 5 years later, with the benefit of everything I’ve learned from DIYPS and OpenAPS about insulin and food timing etc., here’s what I realized was happening – and why – in some of my training runs.

What I worried about was going low during the runs. So, I generally would set a low temporary basal rate to reduce insulin during the run, and try to run before dinner instead of after (to reduce the likelihood of running with a lot of active insulin in my body). I would also eat some kind of snack – I think for energy as well as making sure I didn’t go low. I would also carry a bottle of Gatorade to drink along the way.

With the benefit of 5 years of lots of learning/thinking about all the mechanics of diabetes, here’s what was happening:

Per the visualization, the carbs would hit in about 15 minutes. If I reduced insulin at the time of the run, it would drive my blood sugar up as well, over a longer time frame (after around 45+ minutes, as the lack of insulin really started to kick in and the previous basal impact tailed off). The combination of these usually meant that I would rise toward the middle or end of my short and medium runs, and end up high. In longer runs, I would go higher, then low – and sip Gatorade, and have some roller coaster after that.
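
To make that timing mismatch concrete, here’s a toy sketch of the old pattern. The curve shapes and numbers below are made up purely for illustration (this is not an OpenAPS model), but they show how a pre-run snack and a reduced temp basal stack up as a run goes on:

```python
# Toy illustration only: made-up shapes and numbers, not an OpenAPS model.
def snack_effect(t_min):
    # a pre-run snack starts raising BG ~15 minutes in, then levels off
    return 0 if t_min < 15 else min((t_min - 15) * 1.0, 45.0)  # mg/dL

def reduced_basal_effect(t_min):
    # the insulin "missing" from a reduced temp basal shows up after ~45 minutes
    return 0 if t_min < 45 else (t_min - 45) * 0.8  # mg/dL

for t in range(0, 121, 15):
    rise = snack_effect(t) + reduced_basal_effect(t)
    print(f"{t:>3} min into the run: ~+{rise:.0f} mg/dL of upward pressure")
```

The snack shows up early, the missing basal shows up late, and stacked together they match what I saw: rising toward the middle or end of short and medium runs, and overshooting on longer ones.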

Now, this was frustrating in training runs, but I did ok for my long runs and my marathon had pretty decent BGs with no lows. However, knowing everything I know now, and commencing a new burst of running, I want to try to do better.

Here’s what I’ve been doing this year in 2018:

My original interest in running was to set a mileage goal for the year, because I didn’t run very much last year (around 50 miles, mostly throughout summer), and I wanted to try to run more regularly throughout the year to get a more regular dose of physical activity. (I am very prone to looking at Seattle weather in October-December and January-March and wanting to stay inside!) That mileage goal was ambitious for me since I didn’t plan to race/train for any distance. To help me stick to it, I divided it by 12 to give myself monthly sub-goals that I would try to hit as a way to stay on top of making regular progress to the goal.

(P.S. – pro tip: it doesn’t matter how small or big your goal is. If you track % progress toward whatever your mileage goal is, it’s really nice! And it allows you to compete/compare progress, even if your friends have a much bigger mileage goal than you. That way everyone can celebrate progress, and you don’t have to tell people exactly what your mileage goal might be. What’s tiny for you is big for others; and what’s big for you may be small to others – and that doesn’t matter at all!)
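
For anyone who wants the (very simple) math behind the sub-goals and the % progress, here’s the idea – the goal number below is made up for illustration, since the actual number doesn’t matter:

```python
# The simple math behind monthly sub-goals and % progress (goal number is made up).
annual_goal_miles = 300
monthly_sub_goal = annual_goal_miles / 12
miles_so_far = 234

print(f"Monthly sub-goal: {monthly_sub_goal:.1f} miles")
print(f"Progress: {miles_so_far / annual_goal_miles:.0%} of the yearly goal")
```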

Showing number of runs per week with dips during travel weeks

This has worked really well. The first few months I scraped by in keeping up with my monthly goal – except for February, when I had three weeks of flu and bronchitis, so I surged in March to finish February’s miles and March’s miles. I then settled back into a regular amount, meeting my monthly goals…and then surged again in August, so I was able to finish my yearly mileage in the middle of September! Wahoo! I didn’t plan to stop there, though, so I kept running, and that’s where the idea of running the Seattle half (always the Sunday after Thanksgiving) popped up again, and maybe a full next year. I’ve started adding some longer runs (two 7.5-milers, a 9.35-miler, and now a 13-miler) over the past month, and have felt really good about those, which has enabled me to start thinking more carefully about what I did last time BG-wise and why this time is so much easier.

Earlier in the year, even on my short runs (one mile or so), I quickly realized that because of the shorter peak of Fiasp, I was less likely to have previous insulin activity drive me low during the run. Within the first handful of runs, I stopped eating a snack or some carbs before the run. I also stopped setting a super high target an hour before my run. I gradually moved into just avoiding >1.5u of insulin on board before short runs; and for longer runs, setting a target of ~110 about 30 minutes before I walked out the door, mostly to avoid having recently dosed insulin kick in right after I started running. (Keep in mind when I talk about setting targets: I’m using OpenAPS, my DIY closed loop system that does automatic insulin dosing; and for fellow DIY closed loop users, I’m also using exercise mode settings, so the lower targets like 110 also automatically adjust my sensitivity and recalculate IOB accordingly. So without those settings, I’d probably set the target to 130 or so.)
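
If it helps to see those rules written out, here’s a rough sketch of my current pre-run decision logic. These are my personal heuristics (the 1.5u IOB threshold, the ~110 target, the ~30 minute lead time), not OpenAPS code, and YDMV applies to every number in it:

```python
# My personal pre-run heuristics, written as code for clarity. Not OpenAPS code; YDMV.
def pre_run_plan(run_minutes: int, iob_units: float) -> str:
    if run_minutes <= 20:  # short run (a mile or so)
        if iob_units > 1.5:
            return "Too much insulin on board for a short run - wait, or expect to need carbs."
        return "Go run - no snack, no temp target needed."
    # longer run: set the lower target well before heading out the door
    return ("Set a temporary target of ~110 mg/dL about 30 minutes before the run "
            "(~130 if you don't have exercise-mode sensitivity adjustments), then go.")

print(pre_run_plan(run_minutes=15, iob_units=2.0))
print(pre_run_plan(run_minutes=75, iob_units=0.8))
```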

And this has worked quite well for me.

Increasing the lengths of my runs

Is it perfect? No, I do still go low sometimes…but on probably <10% of my runs instead of 50% of them, which is a huge improvement. Additionally, because OpenAPS is running to pick up any rebound, there’s not usually much of a rebound and resulting roller coaster like I would have had in 2013. Additionally, because autosensitivity is running, it picks up any additional sensitivity to insulin within a few hours, and I don’t have any overnight lows after running. Yay!

Accomplishing 78% of my yearly run goal so far

However, that all assumes I’m running at a normal-for-my-body or slower speed.

There’s a nice (annoying) phenomenon where, if you sprint/run faster than your body can really handle, your liver is going to dump glucose and your BG will spike as a result:

Sprinting can drive BGs up

I didn’t ever notice this in 2013, but I’ve now run enough, and at varying paces, to really understand what my fitness level is, and to see very obvious spikes due to surges like this when I’m sprinting too fast. Some days, if I run too fast (even for a mile), I’ll have a surge up to 180 or 200 mg/dL, and that’ll be higher than my BG is for the rest of that 24 hour period. Which is annoying. Funny, but annoying. Not a big deal, because after my run OpenAPS can take care of bringing me back down safely.

But other than the running-too-fast-spikes, my BGs have been incredible during and following my runs. As I thought about contributing factors to what’s working well, this is what’s likely been contributing:

  • with a mix of Fiasp & another short-acting insulin, I’m less likely to have the ‘whoosh’ effect of any IOB
  • but I’m also not starting with much IOB, because I tend to run first thing, or several hours after a meal
  • and of course, I have a DIY closed loop that takes care of any post-run sensitivity and insulin adjustments automatically

As I thought more about how much I’ve been running first thing in the morning/day, and usually not eating breakfast, that made me start reading about fasted long runs, or glycogen depleted runs, or low carb runs. People call them all these things, and I’m putting them in the post for my fellow page-16-of-Google-seekers. I call it “don’t eat breakfast before you run” long runs.

Now, some caveats before I go further into detail about what’s been working for me:

  • Your Diabetes May Vary (YDMV). In fact, it will. And so will your fitness level. What works for me may not work for you, and vice versa. So, use this as input – one more blog post you’ve read about a potential method – and then tweak and try what works for you. And you do you.
  • I’m not doing low carb. (And different people have different definitions of low carb, but I don’t think I’m meeting any of the definitions.) What I’m talking about is not eating breakfast, a snack, or a meal before my runs in the morning. When I return from runs, I eat lunch, or a snack/meal, and the rest of my day is the usual amount/type of food that I would eat. (And since I have celiac, often my gluten free food can be higher carb than a typical diet may be. It depends on whether I’m eating at home or eating out.) So, don’t take away anything related to overall carb consumption, because I’m not touching that! That’s a different topic. (And YDMV there, too.)
  • What I’m doing doesn’t seem to match anything I’ve read for non-T1D runners and what they do (or at least, the ones who are blogging about it).

Most of the recommendations I’ve read for glycogen depletion runs are to only do it for a few of your long runs in a marathon training cycle; that you should still eat breakfast before a full marathon; and that you should only do fasted/glycogen-depleted runs for slow, easy long runs.

I’m not sure yet (again, I’m not in a full marathon training cycle), but I actually think, based on my runs to date, that I will do ok (or better) if I start without breakfast, take applesauce/Gatorade every once in a while as I feel I need it for energy, and otherwise manage my BG line. If I see a downtick start, I’d sip some carbs. If I started dropping majorly, I’d definitely eat more. But so far, managing BG rather than trying to prescriptively plan carbs (for breakfast, or the concept of 30-60 per hour) works a lot better for me.

Part of the reason no-breakfast-works-better-for-me might be because the longevity of insulin in your body is actually something like 6 hours (or more). Most non-T1D runners talk about a meal 3 hours before the start of your race. And they’re right that the peak and the bulk of the insulin would be gone by then, but you’d still have a fair bit of residual insulin active for the first several hours of your race, and the body’s increased sensitivity to that insulin during exercise is likely what contributes to a lot of low BGs in us T1 runners. There’s also a lot of talk about how fasting during training runs teaches your body to better burn fat; and how, if you do take carbs during the race itself (whether that’s to manage BGs or more proactively), your body will feel better since it has more fuel than you’re used to. That’s probably true; but given the lower insulin activity during a run (because you’ve been fasting, and you may be on a lower temp basal rate to start), you’re likely to have a larger spike from a smaller amount of carbs, so the carb-ing you do before or during these long runs or a marathon race may need to be lower than what a non-T1D might do.
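
To put a rough number on that residual-insulin point, here’s a back-of-the-envelope sketch. The triangular activity curve, 75-minute peak, and 6-hour duration are assumptions I picked for illustration, not a real pharmacokinetic model:

```python
# Back-of-the-envelope only: a triangular insulin activity curve peaking at 75 minutes
# with a 6-hour (360-minute) duration of insulin action. Not a real PK model.
def activity(t, peak=75.0, dia=360.0):
    if t <= 0 or t >= dia:
        return 0.0
    return t / peak if t < peak else (dia - t) / (dia - peak)

def fraction_remaining(t_min, dia=360.0):
    total = sum(activity(t) for t in range(int(dia)))
    used = sum(activity(t) for t in range(int(min(t_min, dia))))
    return 1 - used / total

# A meal bolus taken 3 hours before the race start:
print(f"~{fraction_remaining(180):.0%} of that insulin is still active at the gun")
# roughly a third under these assumptions - and exercise makes you *more* sensitive to it
```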

tl;dr – running is going better for me and BG management has been easier; I’m going to keep experimenting with some fasted runs as I build up to longer mileage; and YDMV. Hope some of this was helpful, and if you’ve done no-breakfast-long-runs-or-races, I’d love to hear how it worked for you and what during-race fueling strategy you chose as a result!

Presentations and poster content from @DanaMLewis at #2018ADA

As I mentioned, I am honored to have two presentations and a co-authored poster being presented at #2018ADA. As per my usual, I plan to post all content and make it fully available online as the embargo lifts. There will be three sets of content:

  • Poster 79-LB in Category 12-A, “Detecting Insulin Sensitivity Changes for Individuals with Type 1 Diabetes using ‘Autosensitivity’ from OpenAPS”, co-authored by Dana Lewis, Tim Street, Scott Leibrand, and Sayali Phatak.
  • Content from my presentation Saturday, “The Data behind DIY Diabetes—Opportunities for Collaboration and Ongoing Research”, which is part of the “The Diabetes Do-It-Yourself (DIY) Revolution” Symposium!
  • Content from my presentation Monday, “Improvements in A1c and Time-in-Range in DIY Closed-Loop (OpenAPS) Users”, co-authored by Dana Lewis, Scott Swain, and Tom Donner.

First up: the autosensitivity poster!

You can find the full write up and content of the autosensitivity poster in a post over on OpenAPS.org. There’s also a twitter thread if you’d like to share this poster with others on Twitter or elsewhere.

Summary: we ran autosensitivity retrospectively on the command line to assess patterns of sensitivity changes for 16 individuals who had donated data in the OpenAPS Data Commons. Many had normal distributions of sensitivity, but we found a few people who trended sensitive or resistant, indicating underlying pump settings could likely benefit from a change.
2018 ADA poster on Autosensitivity from OpenAPS by DanaMLewis
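
For a sense of what “assessing patterns of sensitivity changes” means, here’s a deliberately over-simplified sketch of the idea. This is not the actual oref0 autosensitivity algorithm (see the poster and the OpenAPS docs for that) – just an illustration of turning persistent BG deviations into a bounded sensitivity ratio:

```python
# Over-simplified illustration of the autosensitivity *idea* - not the oref0 algorithm.
# A "deviation" is how much BG actually moved vs. how much insulin alone predicted.
def sensitivity_ratio(deviations_mgdl, ratio_min=0.7, ratio_max=1.2):
    deviations = sorted(deviations_mgdl)
    median = deviations[len(deviations) // 2]
    # persistently negative deviations -> running sensitive (ratio < 1, needs less insulin)
    # persistently positive deviations -> running resistant (ratio > 1, needs more insulin)
    ratio = 1 + (median / 100.0)  # illustrative scaling only
    return max(ratio_min, min(ratio_max, ratio))

print(sensitivity_ratio([-8, -12, -5, -9, -15, -3, -7]))  # trending sensitive -> 0.92
print(sensitivity_ratio([4, 9, 2, 7, 12, 6, 5]))          # trending resistant -> 1.06
```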

 

Presentation:
“The Data behind DIY Diabetes—Opportunities for Collaboration and Ongoing Research”

This presentation was a big deal to me, as it was flanked by 3 other excellent presentations on the topic of DIY and diabetes. Jason Wittmer gave a great overview and context setting of DIY diabetes, ranging from DIY remote monitoring and CGM tools all the way to DIY closed loops like OpenAPS. Jason is a dad who created OpenAPS rigs for his son with T1D. Lorenzo Sandini spoke about the clinician’s perspective for when patients come into the office with DIY tools. He knows it from both sides – he’s using OpenAPS rigs, and also has patients who use OpenAPS. And after my presentation, Joyce Lee also spoke about the overarching landscape of diabetes and the role DIY plays in this emerging technology space.

Why did I present as part of this group today? One of the roles I’ve taken on in the last few years in the OpenAPS community (among others) is a collaborator and facilitator of research with and about the community. I put together the first outcomes study (see here in JDST or here in a blog post form on OpenAPS.org) in 2016. We presented a poster on Autotune last year at ADA (see here in a blog post form on OpenAPS.org). I’ve also worked to create and manage the OpenAPS Data Commons, as well as build tools for researchers to use this data, so individuals can easily and anonymously donate their DIY closed loop data for other research projects, lowering the friction and barriers for both patients and researchers. And, I’ve co-led or led several research projects with the community’s data as a result.

My presentation was therefore about setting the stage with background on OpenAPS & how we ended up creating the OpenAPS Data Commons; presenting a selection of research projects that have utilized data from the community; highlighting other research projects working with the OpenAPS community; announcing a new international collaboration (OPEN – more coming on that in the future!) for research with the DIY community; and hopefully encouraging other diabetes researchers to think about sharing their work, data, methods, tools, and insights as openly as possible to help us all move forward with improving the lives of people with diabetes.

That is, of course, quite an abbreviated summary! I’ve shared a thread on Twitter that goes into detail on each of the key points as part of the presentation, or there’s a version of this Twitter/presentation content also written below.

If you’re someone who wants to do research with retrospective data from the OpenAPS Data Commons, you can find out more about it here (including instructions on how to request data). And if you’re interested in prospective research, please do reach out as well!

Full content for those who don’t want to read Twitter:

Patients are often seen as passive recipients of care, but many of us PWDs have discovered that problems are opportunities to change things. My journey to DIY began after I was frustrated by my inability to hear CGM alarms at night. 4 years ago, there was no way for me to access my own device data in real time OR retrospectively. Thanks to John Costik sharing his code, I was able to get my CGM data, send it to the cloud and down to my phone, and create a louder alarm. Scott and I created an algorithm to push notifications to me to take action. This was an ‘open loop’ system we called #DIYPS. With Ben West’s help, we realized we could combine our algorithm with small, off-the-shelf hardware & a radio stick to automate insulin delivery. #OpenAPS was thus created, open sourcing all components of a DIY closed loop system so others could close the loop, too. An #OpenAPS rig consists of a small computer, radio chip, & battery. The hardware is constantly evolving. Many of us also use Nightscout to visualize our closed loop data, and share it with loved ones.

(Slides 1–4)

I closed the loop in December of 2015. As people learned about it, I got pushback: “It works for you, but how do you know it’s going to work for others?” I didn’t, and I said so. But that didn’t mean I shouldn’t share what was working for me.

Once we had dozens of users of #OpenAPS, we presented a research study at #2016ADA, with 18 individuals sharing outcomes data on A1c, TIR, and QOL improvements. (See that publication here: https://twitter.com/danamlewis/status/763782789070192640 ). I was often asked to share my data for people to analyze, but I’m not representative of the entire #OpenAPS community. Plus, the community has kept growing: we estimate there are more than (n=1)*710+ (as of June 2018) people worldwide using different kinds of DIY APs. (Note: if you’d like to keep track of the growing #OpenAPS community, the count of loopers worldwide is updated periodically at https://openaps.org/outcomes ). I began to work with Open Humans to build the #OpenAPS Data Commons, enabling individuals to anonymously upload their data and consent to share it with the Data Commons.

(Slides 5–8)

Criteria for using the #OpenAPS Data Commons:

  • 1) share insights back with the community, especially if you find something about an individual’s data set where we should notify them
  • 2) publish in an accessible (and preferably open) manner

I’ve learned that not many researchers are prepared to take advantage of the rich (and complex) data available from #OpenAPS users, and that researchers have varying backgrounds and skillsets. To aid them, I created a series of open source tools (described here: http://bit.ly/2l5ypxq, and available at https://github.com/danamlewis/OpenHumansDataTools ) to help researchers & patients working with the data.
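
As a flavor of the kind of first step those tools handle, here’s a minimal sketch (my illustration for this post, not code from the OpenHumansDataTools repo) of peeking at one donor’s Nightscout entries file before doing any analysis; the file name is a hypothetical placeholder:

```python
# Illustration only - not the actual OpenHumansDataTools scripts. First look at one
# donor's Nightscout entries export before doing any analysis.
import json
from collections import Counter

with open("entries.json") as f:  # hypothetical path to one donor's entries file
    entries = json.load(f)

print("Total records:", len(entries))
print("Record types: ", Counter(e.get("type", "unknown") for e in entries))
print("Date range:   ", min(e.get("dateString", "") for e in entries),
      "to", max(e.get("dateString", "") for e in entries))
```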

(Slides 9–10)

We have a variety of research projects that have leveraged the anonymously donated, DIY closed loop data from the #OpenAPS Data Commons.

  • One research project, in collaboration with a Stanford team, evaluated published machine learning model predictions & #OpenAPS predictions. Some models (particularly linear regression) produced accurate predictions in the short term, but were less accurate longer term, when insulin peaks. This study is pending publication, but I’d like to note the challenge of more traditional research keeping pace with DIY innovation: the code (and data) studied was from January 2017, and the #OpenAPS prediction code has been updated 2x since then.
  • In response to the feedback from the #2016ADA #OpenAPS Outcomes study we presented, a follow up study on #OpenAPS outcomes was created in partnership with a team at Johns Hopkins. That study will be presented on Monday, 6-6:15pm (352-OR).
  • Many people publicly share their outcomes with DIY closed loops online. Sulka Haro has shared his script to evaluate the reduction in daily manual diabetes interventions after they began using #OpenAPS. Before: ~4.5 manual corrections/day; now they treat <1/day. (A sketch of this kind of intervention counting follows this list.)
  • #OpenAPS features such as autosensitivity automatically detect sensitivity changes and insulin needs, improving outcomes. (See above at the top of this post for the full poster content).
  • If you missed it at #2017ADA (see here: http://bit.ly/2rMBFmn), Autotune is a tool for assessing changes to basal rates, ISF, and carb ratio. It was developed for #OpenAPS users but can also be used by traditional pumpers (and some MDI users also utilize it).
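
Here’s that sketch of intervention counting – my own illustration using a Nightscout treatments export, not Sulka’s actual script; the file name and the set of event types are assumptions you’d adjust for your own data:

```python
# Illustration only - not Sulka's script. Counts manual interventions per day from a
# Nightscout treatments export; event types and file name are assumptions to adjust.
import json
from collections import Counter

MANUAL_EVENTS = {"Correction Bolus", "Meal Bolus", "Carb Correction"}

with open("treatments.json") as f:  # hypothetical export of Nightscout treatments
    treatments = json.load(f)

per_day = Counter(
    t.get("created_at", "")[:10]               # YYYY-MM-DD
    for t in treatments
    if t.get("eventType") in MANUAL_EVENTS
    and not str(t.get("enteredBy", "")).startswith("openaps")  # skip loop-issued ones
)
days = len(per_day) or 1  # days that had at least one manual intervention
print(f"Manual interventions per day: {sum(per_day.values()) / days:.1f}")
```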

I’m also thrilled to share a new tool we’ve created: an #OpenAPS simulator to allow us to more easily back-test and compare settings changes & feature changes in #OpenAPS code.
(Slide 14)

  • We pulled a recent week of data for an n=1 adult PWD who does no-bolus, rough-carb-entry meal announcements, and ran the simulator to predict what the outcomes would be with no bolus and no meal announcement.

  • We also ran the simulator on data from an n=1 teen PWD who does no-bolus and no-meal-announcement in real life. The simulator tracked closely to his actual outcomes (validated this week with a lab A1c of 6.1).

The new #OpenAPS simulator will allow us to better test future algorithm changes and features across a diverse data set donated by DIY closed loop users.

There are many other studies & collaborations ongoing with the DIY community.

  • Michelle Litchman, Perry Gee, Lesly Kelly, and I have a paper pending review analyzing social-media-reported outcomes & themes from the DIY community.
  • There are also multiple other posters about DIY outcomes here at #2018ADA (see slide 19 for the list).
  • There are many topics of interest in the DIY community that we’d like to see studied, and have data for. These include: “eating soon” (optimal insulin dosing for lesser post-prandial spikes); and variability in sensitivity across various ages, pregnancy, and the menstrual cycle.
  • I’m also thrilled to announce that funding will be awarded to OPEN (a new collaboration on Outcomes of Patients’ Evidence, with Novel, DIY-AP tech), a 36-month international collaboration assessing outcomes, QOL, further development, and access to real-world AP tech, etc. (More to come on this soon!)

In summary: we don’t have a choice in living with diabetes. We *do* have a choice to DIY, and also to research to learn more and improve knowledge and availability of tools for us PWDs, more quickly. We would love to partner and collaborate with anyone interested in working with the DIY community, whether that is utilizing the #OpenAPS Data Commons for retrospective studies or designing prospective studies. If you take away one thing today: let it be the request for us to all openly share our tools, data, and insights so we can all make life with type 1 diabetes better, faster.

(Slides 22–23)

A huge thank you as always to the community: those who have donated and shared data; those who have helped develop, test, troubleshoot, and otherwise help power the #OpenAPS and other DIY diabetes communities.

(Slide 24)

Presentation:
Improvements in A1c and Time-in-Range in DIY Closed-Loop (OpenAPS) Users

(full tweet thread available here; or a description of this presentation below)

#OpenAPS is an open and transparent effort to make safe and effective Artificial Pancreas System (APS) technology widely available to reduce the burden of Type 1 diabetes. #OpenAPS evolved from my first DIY closed loop system and our desire to openly share what we’ve learned living with DIY closed loops. It takes a small, off-the-shelf computer; a radio; and a battery to communicate with existing insulin pumps and CGMs. As a PWD, I care a lot about safety: the safety reference design is the first thing in #OpenAPS that was shared, in order to help set expectations around what a DIY closed loop can (and cannot) do.

As I shared about my own DIY experience, people questioned whether it would work for others, or just me. At #2016ADA, we presented an outcomes study with data from 18 of the first 40 DIY closed loop users. Feedback on that study included requests to evaluate CGM data, given concerns around the accuracy of self-reported outcomes.

This 2018 #OpenAPS outcomes study was the result. We performed a retrospective cross-over analysis of continuous BG readings recorded during 2-week segments 4-6 weeks before and after initiation of OpenAPS.

For this study, n=20 was based on the availability of data that met the stringent protocol requirements (and the limited number of people who had both recorded that data and donated it to the #OpenAPS Data Commons in early 2017). Demographics show that, like the 2016 study, the people choosing to use #OpenAPS typically have a lower A1c than the average T1D population; have had diabetes for over a decade; and are long-time pump and CGM users. Like the 2016 study, this 2018 study found mean BG and TIR improved across all time categories (overall, daytime, and nighttime).

(Slides 28–32)

Overall, mean BG (mg/dl) improved (135.7 to 128.3); mean estimated HbA1c improved (6.4% to 6.1%). TIR (70-180 mg/dl) increased from 75.8% to 82.2%. Time spent high and low was also reduced, in addition to the eAG and A1c reductions. Overnight (11pm-7am) showed smaller improvements in all categories than daytime did.
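
For anyone wondering how the mean BG values map to the estimated A1c values above, they line up with the commonly used ADAG regression for estimated A1c:

```python
# Estimated A1c from mean glucose, using the commonly cited ADAG regression:
# eA1c (%) = (mean BG in mg/dL + 46.7) / 28.7
def estimated_a1c(mean_bg_mgdl: float) -> float:
    return (mean_bg_mgdl + 46.7) / 28.7

print(round(estimated_a1c(135.7), 1))  # 6.4 - before looping
print(round(estimated_a1c(128.3), 1))  # 6.1 - after initiating OpenAPS
```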

Notably: although this study primarily focused on a 4-6 week time frame pre-looping vs. 4-6 weeks post-looping, the improvements in all categories are sustained over time by #OpenAPS users.

(Slides 33–34)

Conclusion: Even with tight initial control, persons with T1D saw meaningful improvements in estimated A1c and TIR, and a reduction in time spent high and low, during the day and at night, after initiating #OpenAPS. Although this study focused on BG data from CGM, don’t overlook the additional QOL benefits when analyzing the benefits of hybrid closed loop therapy or designing future studies! See the examples shared by Sulka Haro and Jason Wittmer as examples of the quality of life impacts of #OpenAPS.

A huge thank you to the community: those who have donated and shared data; those who have helped develop, test, troubleshoot, and otherwise help power the #OpenAPS and other DIY diabetes communities.

And, a special thank you to my co-authors, Scott Swain & Tom Donner, for the collaboration on this study.

Getting ready for #2018ADA (@DanaMLewis) & preparing to encourage photography

We’re a few weeks away from the 78th American Diabetes Association Scientific Sessions (aka #2018ADA), and I’m getting excited. Partially because of the research I have the honor of presenting; but also because ADA has made strides to (finally) update their photography policy and allow individual presenters to authorize photography & sharing of their content. Yay!

As a result of preparing to encourage people to take pictures & share any and all content from my presentations, I started putting together my slides for each presentation, including a slide about allowing photography, which I’ll also state verbally at the start of each presentation. Interestingly to me, though, ADA only provided an icon for discouraging photography, saying that if staff notice that icon on slides being photographed, the photographer will be asked to take the photos down. I don’t want any confusion (in past years, despite explicit permission, people have been asked to take down photos of my work), so I wanted to include obvious ‘photography is approved’ icons.

And this is what I landed on for a photography encouraged slide, and the footer of all my other slides:

Encouraging photography in my slides | Example encouraging use of photography in content slides | Encouraging photography in the footer of my slides

And, if anyone else plans to encourage (allow) photography and would like to use this slide design, you can find my example slide deck here that you are welcome to use: http://bit.ly/2018ADAexampleslides

I used camera and check mark icons which are licensed to be freely used; and I also licensed this slide deck and all content to be freely used by all! I hope it’s helpful.

Where you’ll find me at #2018ADA

And if you’re wondering where and what I’ll be presenting with these slides…I’ll be sharing new content at a few different times and places!

On Saturday, I’m thrilled there is a full, 2-hour session on DIY-related content, and to get to share the stage with Jason Wittmer, Lorenzo Sandini, and Joyce Lee. That’s 1:45-3:45pm (Eastern), “The Diabetes Do-It-Yourself (DIY) Revolution”, in W415C (Valencia Ballroom). I’ll be discussing some of the data & research in DIY diabetes! A huge thanks to Joshua Miller for championing and moderating this session.

I’m also thrilled that a poster has been accepted on one of the projects from my RWJF grant work, in partnership with Tim Street (as well as Scott Leibrand and Sayali Phatak, who is heading our data science work for Opening Pathways). The embargo lifts on Saturday morning (content will be shared online then), and the poster will be displayed Saturday, Sunday, and Monday. Scott and I will also be present with the poster on Monday during the poster session from 12-1pm.

And last but not least, there is also an oral presentation on Monday evening with a new study on outcomes data from using OpenAPS. I’ll be presenting during the 4:30-6:30pm session (again in W415C (Valencia Ballroom)), likely during the 6-6:15pm slot. I’m thrilled that Scott Swain & Tom Donner, who partnered on this study & work, will also be there to help answer questions about this study!

As we have done in the past (see last year’s poster, for example), we plan to share all of this content online once the embargo lifts, in addition to the in-person presentations and poster discussions.

A huge thanks, as always, goes to the many dozens of people who have contributed to this DIY community in so many ways: development, testing, support, feedback, documentation, data donation, and more! <3

Hormones, CGM preferences, DIY, and why so many things are YDMV even when #WeAreNotWaiting

I posted one of my Nightscout graphs yesterday, showing a snapshot of my morning:

Example of how getting out of bed - rather than dawn phenomenon hormones - can increase BG levels.

I hadn’t eaten, and my blood sugar still spiked up. I’ve noticed this happens in the mornings sometimes. When I have mentioned it over the years, people are quick to tell me my basal rates are wrong, and that I should adjust them because of dawn phenomenon. But actually, this isn’t dawn phenomenon. This happens after I physically get up and start moving for the day, whether that happens at 4am, or 6am, or 10am, or even when waking up after noon. So, it’s not a basal thing, and modifying my basal rates doesn’t fix it. (And this is why I wanted to add a wake-up mode to my suite of tools, to help address this.)

To me, this is a great example (as I mentioned in my Twitter thread) of why diabetes is so hard: sooooo many things impact BG levels, and in many cases, we PWDs just have to roll with it and respond the best we can. In my case, #OpenAPS did a great job responding to the spike and bringing me back down within an hour or so.

One of the questions that popped up yesterday in response to that graph, though, was about the BG line: how did I have two BG lines?

The answer: I wear a G4 sensor, and usually have 2 receivers running off the same transmitter and sensor. One receiver is Share-d to my phone, and uploads to NS via the interwebz. The other receiver, although Share-capable, doesn’t (because the company only allows you to pair one receiver and upload via Share). I leave that CGM plugged into a rig to enable it to be a backup for offline looping. When online, this rig with the plugged in CGM uploads BGs from that receiver to NS.

Sometimes, because of different start/stop times and therefore differing calibration records, the receivers “drift” from each other, making it obvious on the graph when that happens.

Because if you give a mouse a cookie, other questions come up, someone had also asked me why I’m using G4, and why not G5. Someone else asked me in a different channel why I’m not using G5 and xDrip+ (a DIY option that doesn’t use Dexcom app or a Dexcom receiver for receiving the data or processing it), or another DIY tool to process my CGM data.

Now, as always, what I chose to use is my personal preference. It’s colored by my preference for what equipment I’m willing to carry; what phone I want to use; what data I want to have; my safety backup preferences; what my insurance covers and what I can afford; where I live; etc. So, just because I use this method, doesn’t mean I expect anyone else to want to do it. It’s just what I do. I don’t try to convince other people to use this method, and I also hope others can share info about what works for them without trying to hammer me over the head because what I’m doing is different. This is where YDMV (your diabetes may vary) comes in. It’s so true, and even within “people who DIY”, there’s a ton of variation – and that’s a good thing! I adore having options to find what works for me, and I want to have other people have options and choices to choose what works for them.

That being said, here’s the answer to how I run my CGMs and some of the things that have factored into my choice to not DIY CGM receivers/data processing most of the time:

  • With two G4 receivers, I can keep one in my pocket, paired to my phone and uploading via Share. When I’m out and about in the city or usually during the day, this is what I carry. When I run, I take the Share receiver.
  • But, I also like emergency back-ups. I like keeping a receiver plugged into an #OpenAPS rig so that if connectivity goes out/down, I can keep looping without a break in my stride. So, I could keep my Share receiver plugged into the rig, but that would involve me unplugging and replugging fairly frequently when I run errands or actually go for a short run, and meh. Hassle. So I keep the “non-Share” receiver as the one that’s usually plugged into my ‘offline’ rig.
  • Having the G4 receiver plugged into the rig enables me to see raw data. Raw data is nice for a couple of things: assessing the health of my sensor (if it gets jumpy compared to the filtered data, I know the quality of the sensor is decreasing, and that helps me decide when to change it); giving me a clue to what’s going on when the filtered data goes to ??? or during the start up of a new sensor; and actually being able to run my rig and loop off some* of the raw data when I need to. (*With OpenAPS, you can choose to loop off raw data only within a certain range, and there’s an option to dose only a proportion of the correction that would otherwise be proposed when raw readings are higher.)
  • With two receivers running, I also get more flexibility around sensor changes. Technically, the sensor is approved for 7 days. At the end of the 7 days, the receiver stops giving you data and forces you to “start” a new sensor session. That could be by inserting a new sensor; or it could be the same sensor on your body. But either way, there’s theoretically a 2-hour ‘warm up’ period at the start of that session where you can’t see data. With 2 receivers, I can stagger the end and start of sensor sessions. I usually set a calendar alarm to restart one of the receivers on the night of the 6th day of the session, allowing me more flexibility on day 7 to choose when to restart or change my sensor.
  • This also means I can choose to “hot swap” when actually changing a sensor. I may choose not to hit ‘stop’ and ‘start’ on a sensor session on one of the receivers, but rather shut it off for about 30 minutes, and just do the stop/start on the other receiver (leaving it plugged into a rig to upload raw data to NS, so I can see where the new sensor’s readings come in compared to the old one). When I power the non-restarted receiver back on about 30 minutes after swapping the transmitter over to the new sensor (as soon as the raw readings have flattened out), it usually goes to “no signal” for a few minutes and then comes back with some data, an hour or more before the restarted receiver allows me to calibrate and get data. There are downsides to this method: the data on the receiver that didn’t get restarted can be fairly inaccurate, as it’s still using the calibrations from the old sensor. So I don’t always do that, but when it’s more important to me to be able to see the relative trend of where BG is (flat, dropping, or spiking), it’s nice to have that option. And since I often soak my new CGM sensors, the data from “day 1” of the sensor after a session “start” on the receiver is often better than if it were truly day 1 of the sensor being in my body.

Phew. Maybe that sounds like a lot of work, but the above setup works well for me for a variety of reasons, and also allows me flexibility and choice around when I change sensors, when I am forced to be without data, when I potentially can’t loop, etc. Given that my schedule varies a lot, it helps: I’m not consistently in the same time zone, and what works for starting or changing sensors one week in one part of the world doesn’t always align conveniently exactly 168 hours (7 days) later, when I’m in another part of the world doing something different.

Some of the reasons I haven’t switched to G5 include the fact that the transmitters only last for ~3 months instead of 6+ months; I’ve observed many people being frustrated by sensor not talking to the phone even when it’s right beside them; there’s no raw data on G5; you can’t have multiple receivers paired with your transmitter; etc.

Now, you might say: but that’s using Dexcom’s app, etc. With DIY solutions, those limitations don’t apply! And that’s true, to a degree – savvy folks in the community have figured out how to make it so you don’t *have* to use Dexcom’s app to display or process the data; you can replace the batteries on the transmitter; etc. But, just as my method above of using raw data isn’t necessarily going to work for everyone, or might not be something someone else would choose to do, the DIY options that go with G5 (or even G4 in some cases) aren’t something I believe is the right choice for me.

A lot of it comes down to safety. When we first started designing my DIY closed loop, we spent eons discussing how we could do this safely for me. And that evolved into further discussions about how other people could do this safely, too. A core principle of the OpenAPS Reference Design is that we are using already approved and vetted devices that exist on the market (e.g. existing pumps and CGMs). Those devices include approved and vetted methods for CGM data processing, too, which is even more important when the CGM data is being used to dose insulin, as in OpenAPS. Now – this is not a requirement we can enforce: people can do what they want, and some people are even using non-CGMs (such as the Libre, a “Flash Glucose Monitoring” solution, plus a DIY NFC reader) as a CGM source for looping. But, whether it’s a DIY app or algorithm processing CGM data, or a different glucose measuring device that’s not a CGM, that choice has some safety implications that I hope people are aware of.

First, some background for those who aren’t familiar: the CGM companies display a processed (“filtered”) version of the CGM data. That’s part of their proprietary stuff, but there are reasons behind it: the raw data can be hectic and weird, and individual readings aren’t the point, anyway. The beauty of CGM is that you can see the trends in addition to the estimated BG number. In some scenarios, such as during sensor starts, or during error conditions that are displayed as ???, the companies/FDA decided that the CGM should not show data, and should instead show an error message/symbol, to help prevent anyone from making incorrect treatment decisions based off of confusing or misleading data. That’s good enough most of the time. As mentioned above, there are edge cases where seeing the raw data is helpful, but most of the time, I’m happy with the filtered data.

But to me, there’s a difference between using raw or DIY-calibrated data for edge cases, vs. using it all the time. I’ve seen several cases in just the past few days with a newer “DIY CGM app”, which uses its own calibration algorithm for processing the unfiltered CGM readings. These people have reported the app displaying normal BGs (say, 90 mg/dL) while they found themselves in the 40’s (rather low). It’s not clear whether that is due to the app’s calibration algorithm, something the user did in testing and calibrating, or just a bad sensor, and since most of them are not using the official receiver/app in parallel, that’s difficult to figure out. But regardless, it’s happened enough times across numerous people for me to be concerned about a DIY CGM app being used as the primary source of CGM data. There are limitations to using company-built apps or physical devices for CGMs, but in the case where people can afford it, for safety I think it is important to at least use the approved and vetted receiver/app in parallel, to provide a backup and baseline level of alerting and alarming. The FDA & the companies have worked to create something that can be relied upon to alarm when your BG is actually low (say <55 mg/dl) and alert a human that something is going on. This is important regardless of whether people are looping or not, but it’s perhaps even more important when people are looping, since that data is driving insulin dosing decisions. Additionally, the company-created devices have been designed to deal with miscalibrations that aren’t in line with what the sensor data is showing, and have safety measures in place to “reject” calibrations and request new ones when necessary. Sure: there are times when that’s frustrating, but those features truly are “there for safety”, and are important for avoiding the rare but potentially serious outcomes that could be caused by incorrect CGM readings. Since safety is what we prioritize and design around in DIY closed looping, I hope people will consider that, and prioritize safety first when choosing what to use as their primary data source.

Tl;dr – YDMV. I currently use G4 with two receivers, for the reasons described above. I think it’s important to prioritize safety over convenience most of the time, and understand the limitations of the solution that you choose (DIY or commercial). But everyone’s different, and their situation, preferences, etc. may drive different decision making. And did I mention YDMV?

Exploring other sensors that could be used with #OpenAPS and for diabetes in general

Nobody appeared to notice the other day when I tweeted about going through airport security with 13 pieces of adhesive on my body. Which is amusing to me, because normally I sport two: my insulin pump site, and my continuous glucose monitor (CGM) sensor. That particular day, I added another diabetes-related piece of adhesive (I was giving the Freestyle Libre, a flash aka not-quite-continuous glucose monitor, a try), and 10 pieces of adhesive not directly related to diabetes. Or maybe they will be, in the future – and that’s what I’m trying to figure out!

Last fall, my program officer from RWJF (for my role as PI on this RWJF-funded grant – read more about it here if you don’t know about my research work) made introductions to a series of people who might know other people I should speak to about our project’s work. One of these introductions was to a researcher at UCSD, Todd Coleman. I happened to be in San Diego for a meeting, so my co-PI Eric Hekler and I stopped by to meet Todd. He shared his lab’s work to develop an ambulatory GI sensor to measure gastric (stomach) activity, and my brain immediately started drooling over the idea of having a sensor to better help assess the methods we use in the DIY closed looping community for articulating dynamic carb absorption, aka how slowly or quickly carbs are absorbing and therefore impacting blood glucose levels. I took over part of the whiteboard in his office, and started drawing him examples of the different data elements that we have #OpenAPS (my DIY hybrid closed loop “artificial pancreas”) calculate every 5 minutes, and how it would be fantastic to wear the GI sensor and graph the gastric activity data alongside this detailed level of diabetes data.
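
For what that “graph it alongside” step could look like in practice, here’s a minimal sketch of lining the two data streams up on the same 5-minute bins. The file and column names are hypothetical placeholders; the 5-minute cadence is the real OpenAPS calculation interval:

```python
# Sketch of aligning the two data streams; file/column names are hypothetical placeholders.
import pandas as pd

gi = pd.read_csv("gastric_activity.csv", parse_dates=["time"]).set_index("time")
aps = pd.read_csv("openaps_5min.csv", parse_dates=["time"]).set_index("time")

# resample the gastric signal onto the same 5-minute bins OpenAPS calculates on
gi_5min = gi["gastric_power"].resample("5min").mean()
merged = aps.join(gi_5min, how="inner")

# now BG, carb absorption, insulin activity, and gastric activity line up row by row
print(merged[["bg", "carb_absorption", "insulin_activity", "gastric_power"]].head())
```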

I immediately was envisioning a number of things:

  • Assessing basic digestion patterns and figuring out if the dynamic carb absorption models in OpenAPS were reasonable. (Right now, we’re going off of observations and tweaking the model based on BG data and manual carb entry data from humans. Finding ways to validate these models would be awesome.)
  • Seeing if we can quantify, or use the data to better predict, how post-meal activity like walking home after dinner impacts carb absorption. (I notice a lot of slowed digestion when walking home from dinner, which obviously impacts how insulin can and should be dosed if I know I’ll be walking home from dinner or not. But this is something I’ve learned from a lot of observation and trial and error, and I would love to have a more scientific assessment of this impact).
  • Seeing if this could be used as a tool to help people with T1D and gastroparesis, since slowed digestion impacts insulin dosing, and can be unpredictable and frustrating. (I knew gastroparesis was “common”, but have since learned that 40-50% of PWDs may experience gastroparesis or slowed digestion, and it’s flabbergasting how little it is talked about in the diabetes community and how few resources are focused on coming up with new strategies and methods to help!)
  • Learning exactly what happens to digestion when you have celiac disease and get glutened.
  • Etc.

Fast forward a few months, when Todd and his post-doctoral fellow, Armen Gharibans, got on a video call to discuss potentially letting me use one of their GI sensors. I still don’t know what I said to convince them to say yes, but I’m thrilled they did! Armen shipped me one of the devices, some electrodes, and a set of lipo batteries.

Here’s what the device looks like – it’s a 3D printed gray box that holds an open source circuit board with connectors to wearable electrodes. (With American chapstick and unicorn for scale, of course.)

DanaMLewis EGG for scale

And here’s what it looked like on me:

DanaMLewis wearing an ambulatory EGG

The device stores data on an SD card, so I had many flashbacks to my first OpenAPS rig and how I managed to bork SD cards pretty easily. Turns out, that’s not just a Pi thing, because I managed to bork one of my first EGG SD cards, too. Go figure!

Sticky notes with data scratched out and a USB stick with data from non-diabetes science experiments

And this device is why I went through airport security the other day with 10 electrodes on. (I disconnected the device, put it in my bag alongside my OpenAPS rigs, and they all went through the x-ray just fine, as always.)

Just like OpenAPS, this device is obviously not waterproof, and neither are the electrodes, so there are limitations to when I can wear it. Generally, I’ve been showering at night as usual, then applying a fresh set of electrodes and wearing the device after that, until the next evening when I take a shower. Right now, hardcore activity (e.g. running or situps) generates too much noise in the signal for the stomach data to be usable during those times, so I’ve been wearing it on days when I’ve not been running and when I’ve not been traveling, so Scott can help me apply and connect the right electrodes in the right places.

This device is straight from a lab, too, so like with #OpenAPS I’ve been an interesting guinea pig for the research team, and have found even low-level activity like bending over to put shoes on can trigger the device’s reset button. That means I’ve had to pay attention to “is the light still on and blinking” (which is hard since it’s on my abdomen under my shirt), so thankfully Armen just shipped me another version of the board with the reset button removed to see if that makes it less likely to reset. (Resetting is a problem because then it stops recording data, unless I notice it and hit the “start recording” button again, which drives me bonkers to have to keep looking at it periodically to see if it’s recording.) I just got the new board in the mail, so I’m excited to wear it and see if that resolves the reset problem!

Data-wise, it’s been fascinating to get a peek into my stomach activity and compare it to the data I have from OpenAPS around net insulin activity levels, dynamic carb absorption activity, expectations of what my BG *should* be doing, and what actually ended up happening BG-wise. I wore it one night after a 4 mile run followed by a big dinner, and I had ongoing digestion throughout the night, paired with increased sensitivity from the run, so I needed less insulin overall despite still having plenty of digestion happening (and picture-perfect BGs that night, which I wasn’t expecting). I only have a few days’ worth of data, but I’m excited to wear it more and see if there are differences based on daily activity patterns, the influence of running, and the impact of different types of meals (size, makeup of the meal, etc.).
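For anyone curious what that kind of comparison looks like in practice, here’s a rough sketch of lining up the GI sensor data with OpenAPS data by timestamp. This is not the actual analysis script – the file names and column names below are made up for illustration:

```python
# Sketch: align GI sensor (gastric activity) readings with OpenAPS/CGM data
# by timestamp so post-meal digestion and glucose response can be compared.
# File and column names are illustrative placeholders, not the real data files.
import pandas as pd

egg = pd.read_csv("egg_gastric_activity.csv", parse_dates=["timestamp"])
loop = pd.read_csv("openaps_enacted.csv", parse_dates=["timestamp"])  # BG, COB, IOB, etc.

egg = egg.sort_values("timestamp")
loop = loop.sort_values("timestamp")

# Match each ~5-minute loop record to the nearest gastric-activity reading.
merged = pd.merge_asof(loop, egg, on="timestamp",
                       tolerance=pd.Timedelta("5min"), direction="nearest")

# e.g. eyeball how gastric activity tracks with dynamic carb absorption overnight
print(merged[["timestamp", "bg", "carb_absorption", "gastric_power"]].head())
```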

A huge thanks to Todd, Armen (who’s been phenomenal about getting me the translated GI data back with super fast turnaround time), and the rest of the group that developed the sensor. They just put out a press release about a publication with data from one of their GI studies; this press release is a great read if you’re curious to learn more about the GI sensor, as is this news piece. I’m excited to see what I can learn from it, and how we can potentially apply some of these learnings, and maybe other non-diabetes sensors, to help improve daily diabetes management!

Vitamin D and insulin sensitivity

tl;dr – for me, Vitamin D hugely influences insulin sensitivity.

After the flu, I continued to be sick. We did the usual song and dance many people do around “hey, do you have pneumonia?” Luckily, I didn’t, but I was still pretty sick and my after-visit summary sheet said bronchitis. Also, my average BGs were going up, which was weird. After all, when I had the flu, I had spectacular BGs throughout. So I was pretty concerned when my time in range started dropping and my average BG started rising.

In diabetes, there are a lot of things that influence BGs. A bad pump site, a bad bottle of insulin, stress, sickness, etc. can all cause out-of-range BGs. Most of these are helped by having a DIY closed loop like OpenAPS. So, when your BGs start to rise above (your) normal and stay there, it’s indicative of something else going on. And because I was sick, that’s what I thought it was. But as I continued to gradually heal, I noticed something else: not only were my BG averages continuing to rise (not normal), but I was also needing a lot more insulin. Like, 20-30u more per day than usual. And that wasn’t just one day – it was 4 days of that much insulin being required. Yikes. That’s not normal, either.

So, I was thinking that I was hitting the Fiasp plateau, which made me really sad. I’ve been using Fiasp for many months now with good results. (For those of you who haven’t been tuned into the diabetes community online: while many people like Fiasp because it’s slightly faster, many people have also experienced issues with it, ranging from pump sites dying much faster than on other insulins, to prolonged high BGs where “insulin acts like water”, etc.) But I was mentally prepared to accept the plateau as the likely cause. I debated with Scott whether I should switch back to my other insulin for 2-3 sites and reservoirs to give my body a break, and then try again. But I was still sick – so maybe I should wait until I was not clearing gunk out of my lungs. I was also pretty convinced that it was correlated with my absolute ZERO level of activity. (I had some rising BG averages briefly over Christmas where I was fearing the plateau, but it turned out to be related to my inactivity, and getting more than zero steps a day resolved that.) I knew I would be moving around more the next week as I gradually felt better, so it should hopefully self-resolve. But making changes in diabetes sometimes feels like chicken and egg, with really complicated chickens and eggs – there are a lot of variables, and it’s hard to pin down the single variable that’s at the root of the problem.

One other topic came up in our discussion – vitamin D. Scott asked me, “when was the last time you saw the sun?”. Which, because I’d been sick for weeks, and traveled for a week before that, AND because we live in Seattle and it’s winter, meant I couldn’t remember the last time I had seen the sun directly on my skin. (That sounds depressing, doesn’t it? Sheesh.)

So, I decided I would not switch back to the previous insulin just yet – I would give it some time before trying that – and instead I would focus on taking my vitamin D (because I hadn’t been taking it) and on getting at least SOME activity every day. I took vitamin D that night, went to bed, and….

…woke up with perfect BGs. But I didn’t hold my breath, because I had been having ok nights but rough days that required the extra 30 units of insulin. But by the end of the day, I still had picture-perfect BGs (my “normal”), and I was back to using my typical average amount of insulin. PHEW. Day 2 also yielded great BG levels (for me, regardless of sickness) and around the average level of insulin needed for the daytime. Double phew. Day 3 is also going as expected, BG- and total-insulin-wise.

You might find yourself thinking, “how can it be as simple as Vitamin D? There’s probably something else going on.” I would think that too – except that I have enough data to know that, when I’m vitamin D deficient, getting some vitamin D (either via pill or in its natural form from sunlight) can pack a punch for insulin sensitivity. In 2014, Scott and I went out in February, even though it was cold, to sit in a park and get some sunshine. After about an hour of sitting and doing nothing, with no extra insulin on board, WHOOSH. I went mega-low. I’ve had several other experiences where, after likely being vitamin D deficient and then spending an hour or so in sunlight, WHOOSH. And same for when there was no sunlight, but I took my vitamin D supplements after a while of not taking them. And no, they’re not mixed with cinnamon 😉 (That’s a diabetes joke; cinnamon does not cure diabetes. Nothing cures type 1 diabetes.)

So tl;dr – my insulin sensitivity is influenced by vitamin D, and I’ll be trying to do a better job of taking my vitamin D regularly in the winters from now on!

Making changes in diabetes is hard by DanaMLewis

Quantified sickness when you have #OpenAPS and the flu

Getting “real people sick*” is the worst. And it can be terrifying when you have type 1 diabetes and know the sickness is likely to send your blood sugars rocketing sky high, as well as leave you exhausted and weak, making it that much harder to deal with a plummeting low.

*(Scott hates this term because he doesn’t like the implication that PWDs aren’t real. We’re real, all right. But I like the phrase because it differentiates between feeling bad for blood sugar-related reasons and the kind of sickness that anyone can get.)

In February 2014, Scott got home from a conference on Friday, and on Saturday complained about being tired with a headache. By Sunday, I started feeling weary with a sore throat. By Monday morning, I had a raging fever, chills, and the bare minimum of energy required to drag myself into the employee health clinic and get diagnosed with the flu. And since they knew I was single and lived by myself, the conversation went from “here’s your prescription for Tamiflu” to “but you can’t be by yourself, maybe we should find a bed for you in the hospital” because of how sick I was. Luckily, I called Scott and asked him to come pick me up and let me stay at his place. And there I stayed in complete misery for several days, the sickest I’d ever been. I remember at one point on the second day, waking up from a fitful doze and seeing Scott standing across the room with his laptop on a dresser, using it as a standing desk because he was so worried about me that he didn’t want to leave the room at that point. It was that bad.

Luckily, I survived. (And good thing, right, given that we went on to build OpenAPS, yes? ;)) This year’s flu experience was different. This year I was real-people sick, but without the diabetes-related fear that I’d so often experienced in the past. My blood sugars were perfectly managed by OpenAPS. I didn’t go low. It didn’t matter if I didn’t eat, or did eat (potato soup, ice cream, and frozen fruit bars were the foods of choice). My BGs stayed almost entirely in range. And because they were so in range that it was odd, I started watching the sensitivity ratio that is calculated by autosensitivity to see how my insulin sensitivity was changing over the course of the sickness. And by day 5, I finally felt good enough to share some of that data (aka, tweet). Here’s what I found from this year’s flu experience:

  • Night 1 was terrible, because I got hardly any deep sleep (45 minutes, whereas 2+h is my usual average per night) and kept waking up coughing. I also was 40% insulin resistant all night long and into Day 2, meaning it took 40% more insulin than usual to keep my BGs at target.
  • Night 2 was even worse – ZERO deep sleep. Ahhhh! It was terrible. Resistance also nudged up to 50%.
  • Night 3 – hallelujah, deep sleep returned. I ended up getting 4h53m of deep sleep, and was also able to sleep in closer to 2-hour blocks at a time, with less coughing. Going into night 3 I also had pretty much the only “high” of being sick – up around 180 for a few hours. Then it fell off a cliff and whooshed down to the bottom of my target, marking the drastic end of the insulin resistance. After that, insulin sensitivity was fairly normal.
  • Night 4 yielded more deep sleep (>5 hours), and a tad bit of insulin sensitivity (~10%), but it’s unclear whether that’s totally sickness related or more related to the fact that I wasn’t eating much in day 3 and day 4.
  • Night 5 felt like I was going backward – 1h36m of deep sleep, tons of coughing, and interestingly a tad bit of insulin resistance (~20%) again. Night 6 (last night) I supposedly got plenty of deep sleep again (>4h), but didn’t feel like it at all due to coughing. BGs are still perfectly in range, and insulin sensitivity back to usual.

This was all still done with no boluses, just a carb announcement whenever I ate whatever it was I was eating. In several cases there was negative IOB, but I didn’t have the usual spikes that I would normally see from that. I had 120 carbs of gluten free biscuits and gravy yesterday, and I didn’t go higher than 130 mg/dL.

In-range BGs shown on CGM graph thanks to OpenAPS

It’s a weird feeling to have been this sick, and have perfectly normal blood sugars. But that’s why it’s so interesting to be able to look at other data beyond average, time in range, and A1c – we now have the tools and the data to be able to dive in and really understand more about what our bodies are doing in sick situations, whether it’s norovirus or the flu.

I’m thinking that if everyone shared their data from when they had the flu, or norovirus, or strep throat, or whatever – we might be able to start to analyze and detect patterns of resistance and sensitivity changes over the course of a typical illness. That way, when someone with diabetes gets sick, we’d know generally to “expect around XX% resistance for Days 1-3, and then expect a drop off that looks like this on Day 4”, etc.

That would be way better than the traditional ways of just bracing yourself for sky-high highs and terrible lows with no understanding or ability to make things better during illness. The peace of mind I had during the flu this year was absolutely priceless. Some people will be able to get that with DIY closed loop technology; but as with so many other things we have learned and are learning from this community, I bet we can find ways to help translate these insights to be of benefit for all people with diabetes, regardless of which therapies they have access to or decide to use.

Want to help? Been sick? Consider donating your data to my diabetes sick-day analysis project. What you should do:

  1. If you’re using a closed loop, donate your data to the OpenAPS Data Commons. You can do all your data (yay!), or just the time frame you’ve been sick. Use the “message the project owner” feature to anonymously message and share what kind of illness you had, and the dates of sickness.
  2. Not using a closed loop, but have Nightscout? Donate your data to the Nightscout Data Commons, and do the same thing: Use the “message the project owner” feature to anonymously message and share what kind of illness you had, and the dates of sickness.

As more people identify batches of sick-day data, I’ll look at what we can find around sensitivity changes before, during, and after sickness, plus other insights we can learn from the data.
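To give a concrete (and totally made-up) flavor of the kind of analysis I have in mind – lining up everyone’s daily sensitivity ratios by days since illness onset and averaging across donors – here’s a small sketch. The donor data below is invented purely for illustration; the real data comes from the Data Commons:

```python
# Sketch: aggregate daily autosens-style sensitivity ratios across donors,
# aligned by "days since illness onset". All values here are made up.
from collections import defaultdict
from datetime import date

# Per-person: illness onset date and daily sensitivity ratios
# (>1.0 = resistant, needs more insulin; <1.0 = more sensitive).
donors = {
    "donor_a": {"onset": date(2018, 1, 8),
                "ratios": {date(2018, 1, 8): 1.4, date(2018, 1, 9): 1.5, date(2018, 1, 11): 1.0}},
    "donor_b": {"onset": date(2018, 2, 2),
                "ratios": {date(2018, 2, 2): 1.3, date(2018, 2, 3): 1.4, date(2018, 2, 5): 0.9}},
}

by_day = defaultdict(list)
for info in donors.values():
    for day, ratio in info["ratios"].items():
        offset = (day - info["onset"]).days   # days since illness onset
        by_day[offset].append(ratio)

for offset in sorted(by_day):
    ratios = by_day[offset]
    print(f"Day {offset}: average sensitivity ratio {sum(ratios) / len(ratios):.2f}")
```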

Why Open Humans is an essential part of my work to change the future of healthcare research

I’ve written about Open Humans before; both in terms of how we’re creating Data Commons there for people using Nightscout and DIY closed loops like OpenAPS to donate data for research, as well as building tools to help other researchers on the Open Humans platform. Madeleine Ball asked me to share some more about the background of the community’s work and interactions with Open Humans, along with how it will play into the Opening Pathways grant work, so here it is! This is also posted on the OpenHumans blog. Thanks, Madeleine, and Open Humans!

 

So, what do you like about Open Humans?

Health data is important to individuals, including myself, and I think it’s important that we as a society find ways to allow individuals to choose when and how they share their data. Open Humans makes that very easy, and I love being able to work with the Open Humans team to create tools like the Nightscout Data Transfer uploader tool that further anonymizes data uploads. As an individual, this makes it easy to upload my own diabetes data (continuous glucose monitoring data, insulin dosing data, food info, and other data) and share it with projects that I trust. As a researcher, and as a partner to other researchers, it makes it easy to build Data Commons projects on Open Humans to leverage data from the DIY artificial pancreas community to further healthcare research overall.

Wait, “artificial pancreas”? What’s that?

I helped build a DIY “artificial pancreas” that is really an “automated insulin delivery system”. That means a small computer & radio device that can get data from an insulin pump & continuous glucose monitor, process the data and decide what needs to be done, and send commands to adjust the insulin dosing that the insulin pump is doing. Read, write, read, rinse, repeat!
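If it helps to picture it, here’s a highly simplified sketch of that read-decide-dose cycle. This is not the actual OpenAPS/oref0 code – the device objects and the toy “decide” math below are placeholders just to show the shape of the loop:

```python
# Minimal sketch of one closed-loop cycle: read -> decide -> dose -> repeat.
# The cgm/pump objects and the correction math are hypothetical placeholders.
import time

def run_loop(cgm, pump, target_bg=100, isf=40, interval_sec=300):
    """Every few minutes: read data, decide, and adjust insulin delivery."""
    while True:
        bg = cgm.latest_glucose()        # read: current glucose (mg/dL)
        iob = pump.insulin_on_board()    # read: insulin still active (U)

        # decide: a toy proportional correction, nothing like the real algorithm
        needed = (bg - target_bg) / isf - iob
        if needed > 0:
            # dose: deliver the correction as a 30-minute temporary basal
            pump.set_temp_basal(rate=needed * 2, duration_min=30)
        else:
            # predicted low: back off by setting a zero temp basal
            pump.set_temp_basal(rate=0, duration_min=30)

        time.sleep(interval_sec)         # rinse, repeat
```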

I got into this because, as a patient, I rely on my medical equipment. I want my equipment to be better, for me and everyone else. Medical equipment often isn’t perfect. “One size fits all” really doesn’t fit all. In 2013, I built a smarter alarm system for my continuous glucose monitor to make louder alarms. In 2014, with the partnership of others like Ben West who is also a passionate advocate for understanding medical devices, I “closed the loop” and built a hybrid closed loop artificial pancreas system for myself. In early 2015, we open sourced it, launching the OpenAPS movement to make this kind of technology more broadly accessible to those who wanted it.

You must be the only one who’s doing something like this

Actually, no. There are more than 400 people worldwide using various types of DIY closed loop systems – and that’s a low estimate! It’s neat to live during a time when off the shelf hardware, existing medical devices, and open source software can be paired to improve our lives. There are also half a dozen (or more) other DIY solutions in the diabetes community, and likely other examples (think 3D-printing prosthetics, etc.) in other types of communities, too. And there should be even more than there are – which is what I’m hoping to work on.

So what exactly is your project that’s being funded?

I created the OpenAPS Data Commons to address a few issues. First, to stop researchers from emailing and asking me for my individual data. I by no means represent all other DIY closed loopers or people with diabetes! Second, the Data Commons approach allows people to donate their data anonymously to research; since it’s anonymized, it is often IRB-exempt. It also makes this data available to people (patient researchers) who aren’t affiliated with an organization and don’t need IRB approval or anything fancy, and just need data to test new algorithm features or investigate theories.

But not everyone already knows how to do research. Many people learn research skills, but not everyone has the wherewithal and time to do so. Or maybe they don’t want to become a data science expert! For a variety of reasons, that’s why we decided to create an on-call data science and research team that can provide support around forming research questions and working through the process of scientific discovery, as well as provide data science resources to expedite the research process. This portion of the project does focus on the diabetes community, since we have multiple Data Commons and communities of people donating data for research, as well as dozens of citizen scientists and researchers already in action (with more interested in getting involved).

What else does Open Humans have to do with it?

Since I’ve been administering the Nightscout and OpenAPS Data Commons, I’ve spent a lot of time on the Open Humans site, both as a “participant” in research donating my data and as a “researcher” who is pulling down and using data for research (and working to get it to other researchers). I’ve been able to work closely with Madeleine and suggest the addition of a few features to make it easier to use for research and for downloading large data sets from projects. I’ve also been documenting some tools I’ve created (like a complex JSON-to-CSV converter, and scripts to pull data from multiple OH download files into a single file for analysis), plus writing up more details about how to work with data files coming from Nightscout into OH – all with the goal of making it easier for more researchers to dive in and do research without needing specific tooling or technical experience.
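To give a (simplified, hypothetical) flavor of what those helper scripts do – flattening nested JSON records from multiple Open Humans download files into one CSV for analysis – it’s roughly along these lines; file and field names are examples, not the actual tools:

```python
# Sketch: flatten nested JSON records (e.g. Nightscout-style entries downloaded
# from Open Humans) from multiple files into a single CSV for analysis.
import csv
import glob
import json

def flatten(record, parent_key="", sep="."):
    """Flatten nested dicts: {"a": {"b": 1}} -> {"a.b": 1}."""
    items = {}
    for key, value in record.items():
        new_key = f"{parent_key}{sep}{key}" if parent_key else key
        if isinstance(value, dict):
            items.update(flatten(value, new_key, sep=sep))
        else:
            items[new_key] = value
    return items

rows = []
for path in glob.glob("oh_download_*.json"):   # multiple OH download files
    with open(path) as f:
        for record in json.load(f):            # assumes each file holds a JSON array
            rows.append(flatten(record))

fieldnames = sorted({key for row in rows for key in row})
with open("combined.csv", "w", newline="") as out:
    writer = csv.DictWriter(out, fieldnames=fieldnames)
    writer.writeheader()
    writer.writerows(rows)
```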

It’s also great to work with a platform like Open Humans that allows us to share data or use data for multiple projects simultaneously. There are no burdensome data collection or study procedures for individuals to go through in order to contribute to numerous research projects where their data is useful. People consent to share their data with the commons, fill out an optional survey (which saves them from having to repeat the basic demographic-type information that every research project is interested in), and are done!

Are you *only* working with the diabetes community?

Not at all. The first part of our project does focus on learning best practices and lessons learned from the DIY diabetes communities, but with an eye toward creating an open source toolkit and materials that will be of use to many other patient health communities. My goal is to help as many other patient health communities as possible spark similar #WeAreNotWaiting projects in the areas that are of most use to them, based on their needs.

How can I find out more about this work?

Make sure to read our project announcement blog post if you haven’t already – it’s got some calls to action for people with diabetes; people interested in leading projects in other health communities; as well as other researchers interested in collaborating! Also, follow me on Twitter for more posts about this work in progress!

Not bolusing for meals (Fiasp, 0.6.0 algorithm in oref0 dev branch, and more)

I tweeted last week+, “I just realized I’ve now gone about 3 weeks without meal bolusing.” That means just a meal announcement (i.e. carb entry estimate, a la 30 carbs or 60 carbs or whatever, based on my IFTTT buttons). No manual bolus.

Highlighting 3 weeks without meal bolusing, and just doing a carb announcement, with good outcomes thanks to OpenAPS

I kind of keep waiting for the other shoe to drop, because it sounds too good to be true. I’m sure you’re skeptical reading this.

I bet she’s doing SOME bolus.

Well, she must not be eating any carbs.

She must be having worse outcomes, bad post-meal BGs, etc.

Nope, nope, and nope.

  • While I started testing this new set of features with partial boluses and worked my way down (see more below on the testing topic), I’m now literally doing no manual meal bolus. I start eating, and press one button on my watch for a carb estimate entry (that via IFTTT goes to Nightscout and my rig).
  • I eat carbs. I’ve eaten 120 grams of carbs of gluten free biscuits and gravy; 60-90 grams of pasta; dinner followed by a few gluten free cookies, etc.
  • More nuanced details below, but:
    • My 70-180 time in range has stayed the same (93+%) compared to the versions I was testing before with manual meal boluses.
    • My 70-150 and 80-160 time in ranges have decreased slightly compared to manual meal boluses, but…
    • My average blood sugar has actually dropped down (as has my a1c to match).
    • (So this means I’m having a few more spikes above 160, usually topping off in 160-170 whereas before my manual meal boluses would have me top off around 150, when all was well.)

Also note – no eating soon required. No early bolus or pre-bolus. Just single button press as I stick food in my mouth.
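For the curious, that one button press essentially just creates a carb-only treatment entry that the rig then picks up. Here’s a rough sketch of what the equivalent might look like against a Nightscout site – the URL, secret, and carb amount below are placeholders, and in practice the IFTTT applet (or Nightscout’s careportal) does the equivalent for you:

```python
# Sketch: post a carb-only "meal announcement" treatment to Nightscout.
# URL, secret, and values are placeholders for illustration only.
import hashlib
from datetime import datetime, timezone

import requests

NIGHTSCOUT_URL = "https://example-nightscout.herokuapp.com"
API_SECRET = "my-api-secret"  # placeholder

treatment = {
    "eventType": "Carb Correction",
    "carbs": 60,                                   # rough meal-size estimate, no bolus
    "created_at": datetime.now(timezone.utc).isoformat(),
    "enteredBy": "watch-button",
}

requests.post(
    f"{NIGHTSCOUT_URL}/api/v1/treatments.json",
    json=treatment,
    headers={"api-secret": hashlib.sha1(API_SECRET.encode()).hexdigest()},
    timeout=10,
)
```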

Wow.

(See where I said, waiting for the other shoe to drop?)

That’s why I waited a while to even tweet about it. Maybe it’s a fluke. Maybe it won’t work for other people. Maybe, maybe, maybe. Who knows. It’s still fairly early to tell, but as other people begin to test the current dev branch of oref0 with 0.6.0-related features, they’re starting to see improvements as well. (And that could be due to some of the many other features we are adding to 0.6.0, ranging from exponential curves for insulin activity, to allowing SMBs to do more, to carb-ratio-tuned autosensitivity, to huge autotune improvements, etc.)
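To give a flavor of what “exponential curves for insulin activity” means: instead of the older bilinear shape, insulin activity is modeled as a curve that ramps up to a peak and then decays with a long tail. Here’s a small sketch of one common exponential model – the peak and duration values below are illustrative only, and this isn’t a copy of the oref0 implementation:

```python
# Sketch of an exponential insulin-activity curve: activity ramps to a peak and
# decays with a long tail. Peak/duration parameters are illustrative only.
import math

def insulin_activity(t_min, duration_min=300, peak_min=75):
    """Fraction of a bolus active per minute, t_min minutes after delivery."""
    if t_min < 0 or t_min > duration_min:
        return 0.0
    tau = peak_min * (1 - peak_min / duration_min) / (1 - 2 * peak_min / duration_min)
    a = 2 * tau / duration_min
    scale = 1 / (1 - a + (1 + a) * math.exp(-duration_min / tau))
    return (scale / tau ** 2) * t_min * (1 - t_min / duration_min) * math.exp(-t_min / tau)

# e.g. activity 45 minutes after a bolus, as a fraction per minute:
print(insulin_activity(45))
```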

So while I don’t want to over-hype – and never do, what works for me will not work for everyone – I do want to share my cautious excitement over continuing to be able to push the envelope on algorithms and what might be possible outcome-wise for this kind of technology.

Suggesting no meal bolus means we can quit arguing about the name "artificial pancreas"

Here’s what is enabling me to be in the no-bolus zone for now well over a month, with still (to me) great outcomes worth the tradeoffs described above:

  1. Faster insulin. Thanks to our lovely looping friends in Germany/Austria, we came back from Europe with a few vials of Fiasp to try. I was HIGHLY skeptical about this. Some of our European friends saw great results right away; others didn’t. I didn’t get great results on it at first. Some of that may be due to natural changes between insulin types and not knowing exactly how to adjust my manual bolus strategy to the faster insulin action, but until we did some code changes to allow SMBs to do more and added some other features to what’s now 0.6.0, I wasn’t thrilled – and in fact, after about two weeks of it, I was about to switch off of it. So that brings me to #2.
  2. More improvements to the algorithm, which is now what will become the 0.6.0 release of oref0. There’s a whole lot of stuff packed in there. Exponential curves. Different carb absorption decay calculations. Allowing SMB to do more. Additional safety guards since we ramped SMB up.

How we started testing the no-bolus approach:

  • I have always known that about 6u of insulin (thanks to testing dating back to my early DIYPS days, many many many moons ago) is about as much as I should bolus at any one time. So, even if I ate 120 carbs, I usually did about a 6u bolus up front, and let the rig pick up the rest as needed over more hours. I started doing ~75% (or something like that) of my usual up-front bolus, based on wherever I felt like rounding to with my easy bolus buttons.
  • Whether I did 75% or 100%, I didn’t see a ton of difference at first…
    • …so I took a leap and tried no-bolus with some SMB adjustments to allow it to ramp up faster with carb entry. Behaviorally, I find it a lot easier to do nothing 😀 vs. figuring out the right amount of up-front bolus. And outcomes-wise (see above) it was very similar.

It definitely was an interesting approach to test. Between the Fiasp and the no-bolus up front, for some meals it matched really well and I had practically no rise. Due to incoming netIOB, food type, etc., sometimes I did have a rise – but while it spiked slightly higher (160-170 usually, vs. my earlier 150s with a manual bolus), it was only up there for 2-3 data points and then came sharply down, leveling out smoothly in my preferred post-meal range. So an important lesson I learned was not to over-react to the BG curve going up without looking at the predictions to see whether I was going to come right back down. (And as I had more than one meal where the spike and drop back to normal happened, it became very easy to watch the BG graph without getting that emotional tug to “do more” from a quick, short rise like that.)

Obviously, starting BG makes a difference. I’m usually starting <130 mg/dL when I see these spikes cap out at 170 or lower. I’ve started higher, and seen higher rises, too. They’re not all perfect: with occasional pump site issues, carb underestimates, unplanned carb stacking, and all the randomness of diabetes and a non-structured lifestyle (including live-testing bleeding edge algorithm changes), I’ve spent 12% of the last month >160 mg/dL, which is about the same as the 3 months before that. But in most cases (I’d say 95%), the no-bolus approach has actually yielded better outcomes than I expected AND has avoided post-meal lows better than I would have achieved with a manual bolus.

This is huge when you think about the QOL aspect of not having to do as much math at a meal, and when you think about all the complicating factors related to food – timing (do you bolus when you order, or when the food arrives, or earlier than that?), and the gluten factor. I have celiac disease, so if I’m eating out (which we do a lot, especially since I travel frequently), bolusing before setting eyes on the food – and before knowing whether they plated it with bread, which would mean they’d have to go back and start all over again – just isn’t smart. That’s why eating soon historically worked so well for me vs. traditional pre-boluses: I could set the target entering the restaurant, bolus when I laid eyes on my hopefully safe food, and get reasonable (150 topping out) meal outcomes.

It also worked really well in the case where a restaurant cooked my gluten free pasta in the same pasta cooker and water as regular pasta, but I didn’t find out until I discovered stray gluten noodles in the bottom of my pasta dish and started asking how that was possible, since they (used to) do gluten free well. (Now, I pick up heaps of pasta and sort the noodles one by one to make sure they all match before ever eating gluten free pasta. It makes waiters look at you very worriedly as you wave pasta around in the air, but better safe than glutened (again).) So, I was majorly glutened, and my digestive system was all out of sorts (isn’t that a nice, polite way to describe getting glutened?) for many days, which of course impacted BG and insulin right then and for the days afterward. But because I had done carb entry and no bolus, I was able to edit the carb entry down; I didn’t have that much insulin stacked, and I didn’t end up low after glutening, which is usually what happens.

Is that a super regular situation for most people? No. But it was super nice. And also helped me face pasta again last night, so I could put in a (very low in case of gluten) carb estimate, match my noodles, eat pasta, and let the SMBs ramp up to match absorption. It works very well for me.

Example BG graph from only announcing, not bolusing for, a meal with OpenAPS

Whether you have celiac or not, for many reasons (insert yours here), it’s nice not to have to commit to the bolus up front. It’s closer to approaching what I think non-PWDs do at mealtimes: just eat.

(I haven’t done much testing (yet? TBD) of the no-carb-entry and no-meal-bolus scenario. I expect I would have higher spikes, but it would be interesting to see if BG would still come down reasonably fast. It probably wouldn’t be my go-to strategy, because I don’t mind a one-button general meal-size estimate, but it would be nice to know what that curve shape would look like. If I test that, I’ll start with small snacks and ramp my way up.)

The questions I always get:

  1. Q: HOW DO I GET THIS?
    A: Caution: like all things OpenAPS, but especially true for the development branch, 0.6.0 is NOT yet released to master and is still highly experimental. I wouldn’t install dev unless you’re willing to pay close attention to it and to update multiple times over the course of the week, because Scott and I are merging features and tweaks to it almost daily.

    Got the disclaimers down? Ok. It’s in the dev branch of oref0. You should read this PR with notes on some more detail of what’s included, but you should also review the code diff to see all that’s changed, because it’s not all documented yet. Also, follow the instructions at the bottom to be able to install it without git. Hop into Gitter if you have questions about it!

    (Big huge thanks to folks like Tim and Matthias for early testing of 0.6.0; and to Tim for writing up about the initial rounds of 0.6.0-dev here (note that we’ve made further changes since this post), and others who’ve been testing & providing feedback and input into the dev branch!)

  2. Q: When will this get “released” to master?
    A: It depends. This is still a highly active dev branch, and we’re making a lot of changes and tweaking and testing things. The more people who test now and provide feedback, the sooner we can get to the final “prepare for release” testing stage. Lots and lots of testing, and things depend on how much of the existing code needs tweaking, and what else we decide should go with this release. So, there’s never any specific release date.
  3. Q: What is Fiasp?
    A: A faster-acting insulin that was only approved in Europe and Canada…until today. Convenient timing. I asked a PR person who messaged me about it, and they said it’s estimated to be available in U.S. pharmacies by late December/early Q1. As previously stated, it’s available elsewhere in other parts of the world.

    Fiasp peaks sooner (say, ~45 minutes) with the same tail as everything else. It’s not instantaneous. For your million and one questions about whether it’s approved for your use in a tree, on a plane, at the zoo, and all other extrapolations – please ask Google/your doctor/the manufacturer, and not me. I don’t know. :)

  4. Q: Will any of this work for people NOT on Fiasp?
    A: Nothing is guaranteed (even for other people on Fiasp), but the folks who’ve started testing 0.6.0 even without Fiasp (on Humalog or Novolog/Novorapid, etc.) have been happier on it vs. earlier versions, too.

    I don’t expect Fiasp to work super well forever for me, given what I’ve heard from other people with months of experience on it…and given my first two weeks of Fiasp not being spectacular, I want people to not expect miracles. (Sorry, this blog post does not promise miracles, so sorry if you got super excited at the above. No miracles! This is not a cure! We still have diabetes!) Like all things artificial pancreas, I think it’s better to be cautiously hopeful with realistic expectations that things *might* be a little bit better than before, but as always, YDMV (your diabetes may/will always vary), your body will vary, and life happens, etc. so who knows.

Just 4 months ago, we published a blog post pointing out that the new features had allowed us to achieve 4 out of 5 of: no bolus; not counting carbs; medium/high carb meals; 80%+ time in range; and no hypoglycemia. With Fiasp and 0.6.0 (currently what’s in the dev branch), we’ve now achieved all 5 simultaneously: I can eat large high-carb meals, enter very vague guesstimates of 60 or 90 carbs (no need for actual carb counting, just a general size-based meal announcement), and still achieve 80%+ time in range 70-150 mg/dL without ever going <55 mg/dL. Does that mean that OpenAPS with Fiasp finally meets the definition of a “real” Artificial Pancreas (step 5 on JDRF’s 6-step AP development pathway)? We think it does.

So, tl;dr (because long post is long): with Fiasp and 0.6.0-dev branch, I’m able to not bolus for meals, and just enter a very generally sized meal estimate. It’s working well for me, and like all things, we’re working to make it available to other people via OpenAPS for others who want to try similar features/approaches. It may not work well for everyone. If it helps one other person, though, like everything else it’ll be worth it. Big thanks to Scott for LOTS of development in 0.6.0 and partnership in design of these features; too many people to name for testing and providing feedback and helping iterate on these features; and to the entire community for being awesome and helping us to continue to push the envelope on what might be possible for those of us with type 1 diabetes. :)