What you should know about closed looping (DIY like #OpenAPS or otherwise)

I’ve been wearing a DIY closed loop for something like 979 days, which works out to more than 20,000 hours with this technology. And I’m not the only one. At the time of writing this post (see the latest count here), there are (n=1)*369+ other DIYers out there (and that’s an undercount, based only on who’s told us they’re looping), so the community has an estimated 1,800,000+ hours of cumulative experience, too.

Suffice it to say, we’ve all learned a lot about this technology and how a hybrid closed loop makes a difference in life with diabetes.

I previously gave a talk almost two years ago to the Sports & Diabetes Group Northwest here in Seattle, talking about #DIYPS, how we closed the loop, and #OpenAPS. (And you can see a recent TEDX talk I gave on OpenAPS here.) That was a springboard for meeting some awesome individuals who became very early DIY loopers in the Seattle area. And one of them (who also wore a pancreas at HIS wedding :)) had suggested we do another talk for SDGNW to update on some of what we have learned since then. But unfortunately, he got called out of town for work and couldn’t join me for presenting, so I went solo (ish, because Scott also came and contributed). I used a new analogy, because I think there’s a lot to think about before choosing and using closed loop technology, whether it’s DIY or commercial, and wanted to write it up for sharing here.


First, some reminders for those familiar and some context for those who are not close to this technology. We’re talking about a hybrid closed loop, which is what I’m referring to when I say “artificial pancreas” or “AP” here. This type of technology makes small adjustments every few minutes to provide more or less insulin with the goal of keeping blood glucose (BG) levels in range. It’s complicated by the fact that insulin often peaks at 60-90 minutes…but food hits in ~15 minutes. So there’s often “catch up” being done with insulin to deal with food eaten previously, and also with hormones and other things that impact BGs that aren’t measurable. (This is also why it’s called hybrid, because for best outcomes people will still be doing some kind of meal announcement/bolus to deal with insulin timing.) As a result, even with pumps and CGMs, diabetes is still hard. A closed loop can do the needed math every five minutes, doesn’t go to sleep, and is very precise. It can respond more quickly (because it’s paying attention) than a human will in most situations, because we’re out living our lives/working/sleeping and not paying attention ONLY to diabetes. It’s not a cure, but it helps make living with diabetes better than it used to be.
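To make that cadence concrete, here’s a rough sketch of the shape of one loop cycle. This is not the actual oref0 algorithm – and readCGM, getIOB, and setTempBasal are made-up stand-ins – but it shows the kind of math a loop redoes every five minutes:

```javascript
// Toy sketch of one hybrid closed loop cycle, run every ~5 minutes.
// Not the oref0 algorithm; readCGM, getIOB, and setTempBasal are hypothetical.
function loopOnce({ targetBG, isf, currentBasal }) {
  const bg = readCGM();   // latest CGM reading (mg/dL)
  const iob = getIOB();   // insulin on board (U)
  // Predict where BG will land once the insulin on board finishes acting.
  const eventualBG = bg - iob * isf;
  // Units of insulin needed (negative means withhold insulin).
  const insulinNeeded = (eventualBG - targetBG) / isf;
  // Deliver (or withhold) it over the next 30 minutes via a temp basal.
  const rate = Math.max(0, currentBasal + insulinNeeded * 2); // U/hr for 30 min
  setTempBasal(rate, 30);
}
```

Because the whole calculation is redone from fresh data every cycle, a temp basal that turns out to be wrong gets corrected a few minutes later.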

However, I equate it to being a pilot who has seen technology on planes evolve to include “autopilot”. Even with hybrid closed loop technology, we’re still flying the “plane”.


Here’s what I mean. There are stages for picking out and deciding to use the technology; preparing to use it/getting in the mode where you CAN use it; using it successfully; getting ready for the times when you can’t use it; and smoothing the way for the next time you use it.

It’s not perfect 24/7, you see, because we’re still using pump sites and continuous glucose monitor (CGM) sensors. The CGM sensor may last for 7 days, but then you have to change it out (or cough restart it cough), and you have a gap in data, which means you can’t loop. So you have this type of cycle regularly, and here’s what you need to know about each of these stages, regardless of whether we’re talking about DIY (like OpenAPS) or a commercial closed loop solution.

Preparing for takeoff

When you’re getting into the plane, you have a flight plan. You know when you will and won’t use the technology on board. Same for diabetes & closed looping. Make sure to think about the following for your tech of choice:

  • When will your loop work? When does it not? What happens if it breaks?
  • What are your backup tools?
  • How do you operate it: what happens if your sensor loses data, or you don’t calibrate?
  • How does the algorithm work? What will it target your BG to be?
  • What behaviors will you have to do (meal bolus or announcement, etc.), and how can you alter those to optimize performance?
  • What are the warning signs of failure that tell you when you need to take additional action with corrective insulin or eating carbs?

Taking off and the new technology learning curve

Just like switching from MDI to a pump (or even iPhone to Android and vice versa), you have a learning curve. When you go into looping or automated insulin delivery mode, you have to figure things out. You need to be able to figure out what’s happening and why it’s doing what it’s doing, so if you’re not happy with what’s happening, you can make a change. Why are you running high? Why are you running low? Knowing why it’s doing what it’s doing is critical for adjusting – either tweaking the closed loop settings, if you can, or adjusting your own behavior. Especially in the first few cycles of new tech, you’ll have a lot of learning around “I used to do things like X, but now I need to do them like Y.”

Why you might not be taking off and able to loop

You also need to know why you can’t loop. There are three major categories of things that will prevent you from looping:

  1. No sensor, no looping.
  2. In some systems, wonky or missing data means no looping.
  3. Communication errors between pieces of a system.

Some of these are obvious fixes (put in a new sensor if one fell out, or decide to put in a new sensor if the old one is bad), but depending on the system, it may involve some troubleshooting to get things going again.

Also, some of the commercial systems will kick you out of looping for various reasons (including lack of calibration), in addition to preventing you from looping in the first place without them, so knowing what basic things are required for looping is useful to make sure you CAN automate.

Flying high: maintenance when you’re actually looping

There are some critical behaviors required for looping. (After all, when flying, there’s always a pilot present in the cockpit… right?!)

Some of these are basic behaviors you’ll be used to if you’ve been wearing a pump and CGM previously: keeping pump sites changed so the insulin works, and changing and calibrating CGM sensors.

HOWEVER – many people who “stretch” their CGM sensors find that they don’t want to stretch their sensors as far, as the data degrades over time. You do you, but keep in mind this might change when you’re looping vs. not, because you’re relying on good data to operate the system.

That being said, in addition to good sensor life, calibration hygiene is critical. You don’t want to loop off of wonky data, but also some commercial systems will kick you out if your calibration is way off and/or if you miss a calibration. (Personal opinion on this is a big ugh, which is why no DIY system that I know of does this.)

But if you keep your sites and sensors in good condition, this is where life is good. You’re looping! It’s microadjusting and helping keep things in range. Yay! This means better sleep, more time in range, and feeling better all around.

However, you still have diabetes, you’re still in the plane, so you still need to keep an eye on things. Monitoring the system is important (to make sure you’re still in autopilot and don’t need to actually fly the plane manually), so make sure you know how you (and your loved ones) can monitor the system’s operation, and know what your backup alarms are in case of system failures.

Note: there are approximately eleventy bajillion ways to remote monitor in DIY systems, but even if you have a commercial system that comes pre-baked without remote monitoring… you can add a DIY solution for that. So don’t feel like if you have a commercial AP that you can never use anything DIY – you can totally mix and match!

Dealing with turbulence

What kind of airplane/flight analogy would this be without including turbulence? :)

Like the things that can prevent looping in the first place, there are things that can throw off your looping. I already mentioned wonky sensor data, which may mean either a blip in your looping time or getting kicked off looping entirely. Again, your sensor life and your calibration practices will likely change.

But the other big disturbance, so to speak, is around body sensitivity changes. You know all the ways it can happen: you’re getting sick or recovering from getting sick; you’re getting ready for, on, or just past your period; you have an adrenaline spike or surging hormones; you’re having a growth spurt; you just exercised; etc.

This is what makes diabetes oh so hard so often. But this is where different closed loop systems can help, so this is one area you should ask about when picking a system: how does it adjust and adapt to sensitivity changes, and on what time frame? (In the DIY world, we use a number of techniques for this, ranging from autosensitivity, which adapts to sensitivity changes on a 24-hour rolling basis, to autotune, which tracks bigger-picture trends and the changes needed to underlying settings. Reminder – anyone can use autotune if they’re willing to log bolus & carb data in Nightscout, not just closed loopers, so check that out if you’re interested! All DIY closed loop systems also use dynamic carbohydrate absorption in their respective algorithms, so that if you have slowed digestion for ANY reason – ranging from gastroparesis, to getting glutened if you have celiac, to merely walking after a meal – the system takes that into account and adjusts accordingly.)

The other things that can help you tough out some turbulence? Setting different modes, like an activity mode for exercise. The two things to know about exercise are:

  1. You don’t want to go into exercise with a bucket of IOB, so set activity mode WELL BEFORE you go out for activity. Depending on how much netIOB you have, that time may vary, but planning ahead with an activity mode makes a big difference for not going low during activity – even with a closed loop.
  2. Your sensitivity may be impacted for hours afterward, into the next day. See above about having a system that can respond to sensitivity changes like that, but also think about having multiple targets you can use temporarily (if your system allows it) so you can give the system a bigger buffer while it sorts out your body’s sensitivity changes.

Preparing for landing and making the time between loops smoother

Just like you’ll want to plan to go on the closed loop, you’ll want to plan for how to cycle off and then back on again. Depending on your system, there may be things you can do to smooth things out. One of the things I do is pre-soak a CGM sensor to skip the first-day jumpy numbers. That makes a big difference for the first hours back on a “new” looping session. The other thing I do is stagger receiver start times (I have two receivers that I stop/start at different times, so I’m not stuck for two hours without BG data to loop on).

But even if you can’t do that, you can do some other general planning ahead – like making sure your looping session doesn’t end in the middle of a big meal that’s being digested, or overnight. Those are the times when you’ll want to be looping the most.

Landing and preparing for the next looping session

Just like learning to fly where you take a lot of training flights and review how the flight went, you’ll want to think about how things went and what you might change behavior-wise for your next looping session. Some of the things that may change over time as you learn more about your tech of choice:

  • Timing of meal announcement or boluses
  • Precision around carb counting (or lack thereof, if it turns out not to be needed)
  • Using things like “eating soon” mode to optimize meal-time insulin effectiveness and reduce post-meal spikes
  • Using different activity patterns and targets to get ideal outcomes around exercise
  • Tweaking underlying settings (if you can)

General thoughts on looping

Some last thoughts about closed looping in general, regardless of the tech you might choose now or in the future:

  1. Picking one kind of technology does NOT lock you into it forever. If you’re DIYing now, you can choose commercial later. If you start on a commercial system, you can still try a DIY system.
  2. Don’t compare the original iPhone with an iPhone 6. Let’s be blunt: the Dexcom 7plus is a different beast than the Dexcom G4/G5. Similarly, Medtronic’s original “harpoon” sensor is different than their newest sensor tech. The Abbott Navigator is different than their Libre. Don’t get hung up on perceptions of the old tech – make sure to check out the new stuff with a somewhat open mind.
  3. (Similarly, hopefully, in the future we’ll get to say the same about first-generation devices and algorithms. These things in commercial systems should change over time in terms of algorithm capabilities, targets, features, and usability. They certainly have in DIY – we’ve gotten smaller pancreases, algorithm improvements, all kinds of interoperability integration, etc.)
  4. All systems (both DIY and commercial) have pros and cons. They also each will have their own learning curves. Some of that learning is generalized, and will translate between systems. But again, iPhone to Android or vice versa – your cheese gets moved and there will be learning to do if you switch systems.
  5. Remember, everyone learns differently – and everyone’s diabetes is different. Figure out what works well for you, and rock it!

 

Unexpected side-effect of closed looping: Body re-calibrations

It’s fascinating how bodies adapt to changing situations.

For those of us with diabetes: do you remember the first time you took insulin after diagnosis? For me, I had been fasting for ~18 hours (because I felt so bad, and hadn’t eaten anything since dinner the night before) and drinking water, and my BG was still somehow 550+ at the endo’s office.

Water did nothing for my unquenchable thirst, but that first shot of insulin sure did.

I still remember the vivid feeling of it being an internal liquid hydration for my body, and everything feeling SO different when it started kicking in.

In case the BG of 550+, the A1c of 14+ (I don’t remember the exact number), and feeling terrible for weeks weren’t enough, that’s one of the things that really reinforced that I have diabetes and insulin is something my body desperately needs but wasn’t getting.

Over the last ~14+ years, I’ve had a handful of times that reinforced the feeling of being dependent on this life-saving drug, and the drastic difference I feel with and without it. Usually, it’s been times when a pump site ripped out, or when I was sick, high, and highly resistant – and then finally stopped being as resistant, and my blood sugar at last started responding to insulin after hours of being really high, and I started dropping.

But I’ve had different ways to experience this feeling lately, as a result of having lived with a DIY closed loop (OpenAPS) for 2+ years – and it hasn’t involved anything as drastic as a HIGH BG or equipment failure. It’s a result of my body re-calibrating to the new norm of being able to spend more and more time close to 100% in range, in a much tighter and lower range than I ever thought possible (especially now, with some of the flexibility and freedom oref1 offers).

I originally had a brief, fleeting thought about how BGs in the low 200s were starting to feel like the 300s used to. Then, I realized that 180 felt “high”. One day, it was 160.

Then one day, my CGM sat flat in the 120s and I felt “high”. (I calibrated, and it turned out it was really 140.) I’ve had several other days where I’d hit the 140s and feel like I used to in the mid-200s (slightly high and annoying, but no major high symptoms like 300-400 would cause – just enough to feel it and be annoyed).

That was odd enough as a fleeting thought, but it was really odd to wake up one morning and, without even looking at my watch or CGM to see what my BGs had been all night, know that I had been running high.

I further classified “really odd” as “completely crazy” when that “running high” meant floating around the 130-140 range, instead of down in the 90-110 range, which is where I probably spend 95% of my nights nowadays.

Last night is what triggered this blog post, plus a recurring observation: because I have a DIY closed loop that does so well at handling the small, unknown variances that cause disturbances in BG levels without me having to do much work, it is MUCH easier to pinpoint major influences, like my liver dumping glucose (either because of a low, or because it’s “full up” and needs to get rid of the excess).

In last night’s case, it was a major liver dump of glucose.

Here’s what happened:

Scott and I went on a long walk, with the plan to stop for dinner on the way home. BG started dropping as I was about half a mile out from the restaurant, but I’m stubborn 😀 and didn’t want to eat a fruit strip when I was about to sit down and eat a burger. So, my BG was dropping low when I actually ate. I expected my BG to flatten on its own, given the pause in activity, so I bolused fairly normally for my burger, and we walked the last half mile home.

However, I ended up not rising from the burger like I usually do, and started dropping again. It was quite a drop, and I realized my burger digestion was different because of the previous low, so I ended up eating some fruit to handle the second low. My body was unhappy at two lows, so my liver decided to save the day by dumping a bunch of glucose to help bring my blood sugar up. A double rebound effect, then, from the liver dump and the fruit I had eaten. Oh well, that’s what a closed loop is for!

Instead of rebounding into the high 300s (which I would have expected pre-closed loop), I maxed out at 220. The closed loop did a good job of bolusing on the way up. However, because of how much glucose my liver dumped, I stayed higher longer. (Again, this probably sounds crazy to anyone not looping, as it would have sounded to me before I began looping). I sat around 180 for the first three hours of the night, and then dropped down to ~160 for most of the rest of the night, and ended up waking up around 130.

And boy, did I know I had been high all night. I felt (and still feel, hours later) like I used to years ago when I would wake up in the 300s (or higher).

Visuals

(3-hour view) Hmm, 3 hours doesn’t look so bad despite feeling it.

(6-hour view) The 6 hour view shows why I feel it.

(12-hour view) 12 hours. Sheesh.

(24-hour view) 24 hours shows you the full view of the double low and why my liver decided I needed some help. Thanks, liver, for still being able to help if I really needed it!

(Pebble view) Settling back to normal below 120, hours later.

There are SO many amazing things about DIY closed looping. Better A1c, better average BG, better time in range, less effort, less work, less worrying, more sleep, more time living your life.

One of the benefits, though, is this bit of double-edged sword: your body also re-calibrates to the new “normal”, and that means the occasional extreme BG excursion (even if not that extreme!) may give you a different range of symptoms than you used to experience.

This. Matters. (Why I continue to work on #OpenAPS, for myself and for others)

If you give a mouse a cookie or give a patient their data, great things will happen.

First, it was louder CGM alarms and predictive alerts (#DIYPS).

Next, it was a basic hybrid closed loop artificial pancreas that we open sourced so other people could build one if they wanted to (#OpenAPS, with the oref0 basic algorithm).

Then, it was all kinds of nifty lessons learned about timing insulin activity optimally (do eating soon mode around an hour before a meal) and how to use things like IFTTT integration to squash even the tiniest (like from 100mg/dL to 140mg/dL) predictable rises.

It was also things like displays, buttons, and widgets on the devices of my choice – ranging from being able to “text” my pancreas, to a swipe and button tap on my phone, to a button press on my watch – not to mention tinier pancreases that fit in or clip easily to a pocket.

Then it was autosensitivity that enabled the system to adjust to my changing circumstances (like getting a norovirus), plus autotune to make sure my baseline pump settings were where they needed to be.

And now, it’s oref1 features that enable me to make different choices at every meal depending on the social situation and what I feel like doing, while still getting good outcomes. Actually, not good outcomes. GREAT outcomes.

With oref0 and OpenAPS, I’d been getting good or really good outcomes for 2 years. But it wasn’t perfect – I wasn’t routinely getting 100% time in range with an average BG at the lower end of my range over 24 hours. ~90% time in range was more common. (Note – this time in range is generally calculated with 80-160mg/dL. I could easily “get” higher time in range with an 80-180 mg/dL target, or a lot higher with a 70-170mg/dL target, but 80-160mg/dL was what I was actually shooting for, so that’s what I calculate for me personally.) I was fairly happy with my average BGs, but they could have been slightly better.
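(For what it’s worth, the time-in-range math itself is simple – here’s a quick sketch of computing it from an array of CGM readings in mg/dL, which also shows why the range you pick changes the “score”:)

```javascript
// Percent of CGM readings within a given range (defaults to 80-160 mg/dL).
function timeInRange(readings, low = 80, high = 160) {
  const inRange = readings.filter(bg => bg >= low && bg <= high).length;
  return (100 * inRange / readings.length).toFixed(1) + '%';
}

// Example: the same readings score differently depending on the range chosen.
const day = [75, 92, 110, 125, 140, 155, 165, 172, 98, 88];
console.log(timeInRange(day));          // 80-160 mg/dL: "70.0%"
console.log(timeInRange(day, 80, 180)); // looser range: "90.0%"
```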

I wrote from a general perspective this week about being able to “choose one” thing to give up. And oref1 is a definite game changer for this.

  • It’s being able to put in a carb estimate and do a single, partial bolus, and see your BG go from 90 to peaking out at 130 mg/dL despite a large carb (and pure ballpark estimate) meal. And no later rise or drop, either.
  • It’s now seeing multiple days a week with 24 hour average BGs a full ~10 or so points lower than you’re used to regularly seeing – and multiple days in a week with full 100% time in range (for 80-160mg/dL), and otherwise being really darn close to 100% way more often than I’ve been before.

But I have to tell you – seeing is believing, even more than the numbers show.

I remember in the early days of #DIYPS and #OpenAPS, there were a lot of people saying “well, that’s you”. But it’s not just me. See Tim’s take on “changing the habits of a lifetime“. See Katie’s parent perspective on how much her interactions/interventions have lessened on a daily basis when testing SMB.

See this quote from Matthias, an early tester of oref1:

I was pretty happy with my 5.8% from a couple months of SMB, which has included the 2 worst months of eating habits in years.  It almost feels like a break from diabetes, even though I’m still checking hourly to make sure everything is connected and working etc and periodically glancing to see if I need to do anything.  So much of the burden of tight control has been lifted, and I can’t even do a decent job explaining the feeling to family.

And another note from Katie, who started testing SMB and oref1:

We used to battle 220s at this time of day (showing a picture flat at 109). Four basal rates in morning. Extra bolus while leaving house. Several text messages before second class of day would be over. Crazy amount of work [in the morning]. Now I just have to brush my teeth.

And this, too:

I don’t know if I’ve ever gone 24 hours without ANY mention of something that was because of diabetes to (my child).

Y’all. This stuff matters. Diabetes is SO much more than the math – it’s the countless seconds that add up and subtract from our focus on school/work/life. And diabetes is taking away this time not just from a person with diabetes, but from our parents/spouses/siblings/children/loved ones. It’s a burden, it’s stressful… and everything we can do matters for improving quality of life. It brings me to tears every time someone posts about these types of transformative experiences, because it’s yet another reminder that this work makes a real difference in the real lives of real people. (And it’s helpful for Scott to hear this type of feedback, too – since he doesn’t have diabetes himself, it’s powerful for him to see how his code contributions and the features we’re designing and building are making a difference, not just to BG outcomes but to quality of life.)

Thank you to everyone who keeps paying it forward to help others, and to all of you who share your stories and feedback to help and encourage us to keep making things better for everyone.

 

Choose One: What would you give up if you could? (With #OpenAPS, maybe you can – oref1 includes unannounced meals or “UAM”)

What do you have to do today (related to daily insulin dosing for diabetes) that you’d like to give up if you could? Counting carbs? Bolusing? Or what about outcomes – what if you could give up going low after a meal? Or reduce the amount that you spike?

How many of these 5 things do you think are possible to achieve together?

  • No need to bolus
  • No need to count carbs
  • Medium/high carb meals
  • 80%+ time in range
  • No hypoglycemia

How many can you manage with your current therapy and tools of choice?  How many do you think will be possible with hybrid closed loop systems?  Please think about (and maybe even write down) your answers before reading further to get our perspective.

With just pump and CGM, it’s possible to get good time in range with proper boluses, counting carbs, and eating relatively low-carb (or getting lucky/spending a lot of time learning how to time your insulin with regular meals).  Even with all that, some people still go low/have hypoglycemia.  So, let’s call that a 2 (out of 5) that can be achieved simultaneously.

With a first-generation hybrid closed loop system like the original OpenAPS oref0 algorithm, it’s possible to get good time in range overnight, but achieving that for meal times would still require bolusing properly and counting carbs.  But with perfect night-time BGs, it’s possible to achieve no hypoglycemia and 80% time in range with medium-carb meals (and high-carb meals with Eating Soon mode etc.).  So, let’s call that a 3 (out of 5).

With some of the advanced features we added to OpenAPS with oref0 (like advanced meal assist or “AMA” as we call it), it became a lot easier to achieve a 3 with less bolusing and less need to precisely count carbs.  It also deals better with high-carb meals, and gives the user even more flexibility.  So, let’s call that a 3.5.

A few months ago, when we began discussing how to further improve daily outcomes, we also began to discuss the idea of how to better deal with unannounced meals. This means when someone eats and boluses, but doesn’t enter carbs. (Or in some cases: eats, doesn’t enter carbs, and doesn’t even bolus.) How do we design the system to better help in that situation, all while sticking to our safety principles and dosing safely?

I came up with this idea of “floating carbs” as a way to design a solution for this behavior. Essentially, we’ve learned that if BG spikes at a certain rate, it’s often related to carbs. We observed that AMA can appropriately respond to such a rise, while not dosing extra insulin if BG is not rising.  Which prompted the question: what if we had a “floating” amount of carbs hanging out there, and it could be decayed and dosed upon with AMA if that rise in BG was detected? That led us to build in support for unannounced meals, or “UAM”. (But you’ll probably see us still talk about “floating carbs” some, too, because that was the original way we were thinking about solving the UAM problem.) This is where the suite of tools that make up oref1 came from.  In addition to UAM, we also introduced supermicroboluses, or SMB for short.  (For more background info about oref1 and SMB, read here.)

So with OpenAPS oref1 with SMB and floating carbs for UAM, we are finally at the point of achieving a solid 4 out of 5.  And not just a single set of 4, but any 4 of the 5 (except we’d prefer you don’t choose hypoglycemia, of course):

  • With a low-carb meal, no hypoglycemia and 80+% time in range are achievable without bolusing or counting carbs (with just an Eating Soon mode that triggers SMB).
  • With a regular meal, the user can either bolus for it (triggering floating carb UAM with SMB) or enter a rough carb count / meal announcement (triggering Eating Now SMB) and achieve 80% time in range.
  • If the user chooses to eat a regular meal and not bolus or enter a carb count (just an Eating Soon mode), the BG results won’t be as good, but oref1 will still handle it gracefully and bring BG back down without causing any hypoglycemia or extended hyperglycemia.

That is huge progress, of course.  And we think that might be about as good as it’s possible to do with current-generation insulin-only pump therapy.  To do better, we’d either need an APS that can dose glucagon and be configured for tight targets, or much faster insulin.  The dual-hormone systems currently in development are targeting an average BG of 140, or an A1c of 6.5, which likely means >20% of time spent > 160mg/dL.  And to achieve that, they do require meal announcements of the small/medium/large variety, similar to what oref1 needs.  Fiasp is promising on the faster-insulin front, and might allow us to develop a future version of oref1 that could deal with completely unannounced and un-bolused meals, but it’s probably not fast enough to achieve 80% time in range on a high-carb diet without some sort of meal announcement or boluses.
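(Side note on that 140/6.5 pairing: it comes from the ADAG study’s estimated average glucose formula, eAG (mg/dL) = 28.7 × A1c − 46.7. A quick check:)

```javascript
// ADAG estimated average glucose: eAG (mg/dL) = 28.7 * A1c(%) - 46.7
const eAG = a1c => 28.7 * a1c - 46.7;
console.log(eAG(6.5).toFixed(1)); // "139.9" mg/dL, i.e. ~140
```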

But 4 out of 5 isn’t bad, especially when you get to pick which 4, and can pick differently for every meal.

Does that make OpenAPS a “real” artificial pancreas? Is it a hybrid closed loop artificial insulin delivery system? Do we care what it’s called? For Scott and me, the answer is no: instead of focusing on what it’s called, let’s focus on how different tools and techniques work, and what we can do to continue to improve them.

Being Shuttleworth Funded with a Flash Grant as an independent patient researcher

Recently, I have been working on helping OpenAPS’ers collect our data and put it to good use in research (both by traditional researchers, and by enabling other fellow patient researchers or “citizen scientists”). As a result, I have had the opportunity to work closely with Madeleine Ball at Open Humans. (Open Humans is the platform we use for the OpenAPS Data Commons.)

It’s been awesome to collaborate with Madeleine on many fronts. She’s proven herself really willing to listen to ideas and suggestions for things to change, to make it easier both for individuals to donate their data to research and for researchers who want to use the platform. And, despite me not having the same level of technical skills, she shows a deep respect for people of all experiences and perspectives. She’s also, in general, a really great person.

As someone who is (perhaps uniquely) utilizing the platform as both a data donor and as a data researcher, it has been fantastic to be able to work through the process of data donation, project creation, and project utilization from both perspectives. And, it’s been great to contribute ideas and make tools (like some of my scripts to download and unpack Open Humans data) that can then be used by other researchers on Open Humans.

Madeleine was also selected this year to be a Shuttleworth Fellow, applying “open” principles to change how we share and study human health data, plus exploring new, participant-centered approaches for health data sharing, research, and citizen science. Which means that everything she’s doing is in almost perfect sync with what we are doing in the OpenAPS and #WeAreNotWaiting communities.

What I didn’t know until this past week was that it also meant (as a Shuttleworth Fellow) that she was able to make nominations of individuals for a Shuttleworth Flash Grant, which is a grant made to a collection of social change agents, no strings attached, in support of their work.

I was astonished to receive an email from the Shuttleworth Foundation saying that I had been nominated by Madeleine for a $5,000 Flash Grant, which goes to individuals they would like to support/reward/encourage in their work for social good.

Shuttleworth Funded

I am so blown away by the Flash Grant itself – and the signal that this grant provides. This is the first (of hopefully many) organizations to recognize the importance of supporting independent patient researchers who are not affiliated with an institution, but rather with an online community. It’s incredibly meaningful for this research and work, which is centered around real needs of patients in the real world, to be funded, even to a small degree.

Many non-traditional researchers like me are unaffiliated with a traditional institution or organization. This means we do the research in our own time, funded solely by our own energy (and in some cases, resources). Time in and of itself is a valuable contribution to research (think of the opportunity costs). However, it is also costly to distribute and disseminate ideas learned from patient-driven research to more traditional researchers. Even ignoring travel costs, most scientific conferences do not have a patient research access program, which means patients in some cases are asked to pay $400 (or more) per person for a single-day pass to stand beside their poster if it is accepted for presentation at a conference. In some cases, patients have personal resources and determination and are willing to pay that cost. But not every patient is able to do that. (And to do it year over year as they continue to do new ground-breaking research each year – that adds up, too, especially when you factor in travel, lodging, and the opportunity cost of being away from a day job.)

So what will I use the Flash Grant for? Here’s what I’ve decided to put it toward so far:

#1 – I plan to use it to fund my & Scott’s travel costs this year to ADA’s Scientific Sessions, where our poster on Autotune & data from the #WeAreNotWaiting community will be presented. (I’m still hoping to convince ADA to create a patient researcher program vs. treating us like an individual walking in off the street; but if they again do not choose to do so, it will take $800 for Scott and me to stand with the poster during the poster session.) Being at Scientific Sessions is incredibly valuable for us as researchers and developers, because we can have real-time conversations with traditional researchers who have not yet been introduced to some of our tools or the data collected and donated by the community. It’s one of the most valuable places for us to be in person in terms of facilitating new research partnerships, in addition to renewing and establishing relationships with device manufacturers who could (because our stuff is all open source, MIT licensed) utilize our code and tools in commercial devices to more broadly reach people with diabetes.

#2 – Hardware parts. In order to best support the OpenAPS community, Scott and I have also been supporting and contributing to the development of open source hardware like the Explorer Board. Keeping in mind that each version of the board produced needs to be tested to see if the instructions related to OpenAPS need to change, we have been buying every iteration of Explorer Board so we can ensure compatibility and ease of use, which adds up. Having some of this grant funding go toward hardware supplies to support a multitude of setup options is nice!

There are so many individuals who have contributed in various ways to OpenAPS and WeAreNotWaiting and the patient-driven research movements. I’m incredibly encouraged, with a new spurt of energy and motivation, after receiving this Flash Grant to continue to further build upon everyone’s work and to do as much as possible to support every person in our collective communities. Thank you again to Madeleine for the nomination, and to the Shuttleworth Foundation for the Flash Grant, for the financial and emotional support for our community!

Introducing oref1 and super-microboluses (SMB) (and what it means compared to oref0, the original #OpenAPS algorithm)

For a while, I’ve been mentioning “next-generation” algorithms in passing when talking about some of the work that Scott and I have been doing as it relates to OpenAPS development. After we created autotune to help people (even non-loopers) tune underlying pump basal rates, ISF, and CSF, we revisited one of our regular threads of conversation: how it might be possible to further reduce the burden of life with diabetes with algorithm improvements related to meal-time insulin dosing.

This is why we first created meal-assist and then “advanced meal-assist” (AMA), because we learned that most people have trouble with estimating carbs and figuring out optimal timing of meal-related insulin dosing. AMA, if enabled and informed about the number of carbs, is a stronger aid for OpenAPS users who want extra help during and following mealtimes.

Since creating AMA, Scott and I had another idea of a way that we could do even more for meal-time outcomes. Given the time constraints and reality of currently available mealtime insulins (that peak in 60-90 minutes; they’re not instantaneous), we started talking about how to leverage the idea of a “super bolus” for closed loopers.

A super bolus is an approach you can take to give more insulin up front at a meal, beyond what the carb count would call for, by “borrowing” from basal insulin that would be delivered over the next few hours. By adding insulin to the bolus and then low temping for a few hours after that, it essentially “front shifts” some of the insulin activity.
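Here’s a sketch of that arithmetic (the concept only, not OpenAPS code): with a 1 U/hr basal and two hours of basal “borrowed”, a 4 U meal bolus becomes 6 U up front, repaid by a two-hour zero temp.

```javascript
// Super bolus sketch: borrow the next `borrowHours` of basal insulin,
// add it to the meal bolus, and repay it with a zero temp basal.
function superBolus(mealBolus, basalRate, borrowHours) {
  return {
    bolus: mealBolus + basalRate * borrowHours,        // e.g. 4 + 1*2 = 6 U up front
    tempBasal: { rate: 0, minutes: borrowHours * 60 }, // zero temp repays the loan
  };
}

console.log(superBolus(4, 1, 2));
// { bolus: 6, tempBasal: { rate: 0, minutes: 120 } }
```

The total insulin delivered is unchanged; only its timing is front-shifted, which is exactly why doing this safely depends on getting the bookkeeping right.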

Like a lot of things done manually, it’s hard to do safely and achieve optimal outcomes. But, like a lot of things, we’ve learned that by letting computers do more precise math than we humans are wont to do, OpenAPS can actually do really well with this concept.

Introducing oref1

Those of you who are familiar with the original OpenAPS reference design know that ONLY setting temporary basal rates was a big safety constraint. Why? Because it’s less of an issue if a temporary basal rate is issued over and over again; and if the system stops communicating, the temp basal eventually expires and the pump resumes normal activity. That was a core part of oref0. So, to distinguish this new set of algorithm features that departs from that aspect of the oref0 approach, we are introducing it as “oref1”. Most OpenAPS users will only use oref0, like they have been doing. oref1 should only be enabled specifically by advanced users who want to test or use these features.

The notable difference between the oref0 and oref1 algorithms is that, when enabled, oref1 makes use of small “supermicroboluses” (SMB) of insulin at mealtimes to more quickly (but safely) administer the insulin required to respond to blood sugar rises due to carb absorption.

Introducing SuperMicroBoluses (or “SMB”)

The microboluses administered by oref1 are called “super” because they use a miniature version of the “super bolus” technique described above.  They allow oref1 to safely dose mealtime insulin more rapidly, while at the same time setting a temp basal rate of zero of sufficient duration to ensure that BG levels will return to a safe range with no further action even if carb absorption slows suddenly (for example, due to post-meal activity or GI upset) or stops completely (for example due to an interrupted meal or a carb estimate that turns out to be too high). Where oref0 AMA might decide that 1 U of extra insulin is likely to be required, and will set a 2U/hr higher-than-normal temporary basal rate to deliver that insulin over 30 minutes, oref1 with SMB might deliver that same 1U of insulin as 0.4U, 0.3U, 0.2U, and 0.1U boluses, at 5 minute intervals, along with a 60 minute zero temp (from a normal basal of 1U/hr) in case the extra insulin proves unnecessary.
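To sanity-check that example’s bookkeeping (a sketch, not the actual oref1 code): the zero temp has to be able to withhold at least as much insulin as the SMBs deliver, so the loop can effectively “take back” the insulin if carb absorption stops.

```javascript
// Sketch: verify a zero temp "covers" the insulin delivered via SMBs.
const smbs = [0.4, 0.3, 0.2, 0.1];                 // U, issued at 5-minute intervals
const delivered = smbs.reduce((a, b) => a + b, 0); // ~1.0 U total
const normalBasal = 1.0;                           // U/hr
const zeroTempMinutes = 60;
const withheld = normalBasal * (zeroTempMinutes / 60); // 1.0 U withheld if needed
// Small epsilon to allow for floating point noise in the sum:
console.log(delivered <= withheld + 1e-9); // true: the SMBs can be fully offset
```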

As with oref0, the oref1 algorithm continuously recalculates the insulin required every 5 minutes based on CGM data and previous dosing, which means that oref1 will continually issue new SMBs every 5 minutes, increasing or reducing their size as needed as long as CGM data indicates that blood glucose levels are rising (or not falling) relative to what would be expected from insulin alone.  If BG levels start falling, there is generally already a long zero temp basal running, which means that excess IOB is quickly reduced as needed, until BG levels stabilize and more insulin is warranted.

Safety constraints and safety design for SMB and oref1

Automatically administering boluses safely is of course the key challenge with such an algorithm, as we must find another way to avoid the issues highlighted in the oref0 design constraints.  In oref1, this is accomplished by using several new safety checks (as outlined here), and verifying all output, before the system can administer a SMB.

At the core of the oref1 SMB safety checks is the concept that OpenAPS must verify, via multiple redundant methods, that it knows about all insulin that has been delivered by the pump, and that the pump is not currently in the process of delivering a bolus, before it can safely do so.  In addition, it must calculate the length of zero temp required to eventually bring BG levels back in range even with no further carb absorption, set that temporary basal rate if needed, and verify that the correct temporary basal rate is running for the proper duration before administering a SMB.

To verify that it knows about all recent insulin dosing and that no bolus is currently being administered, oref1 first checks the pump’s reservoir level, then performs a full query of the pump’s treatment history, calculates the required insulin dose (noting the reservoir level the pump should be at when the dose is administered) and then checks the pump’s bolusing status and reservoir level again immediately before dosing.  These checks guard against dosing based on a stale recommendation that might otherwise be administered more than once, or the possibility that one OpenAPS rig might administer a bolus just as another rig is about to do so.  In addition, all SMBs are limited to 1/3 of the insulin known to be required based on current information, such that even in the race condition where two rigs nearly simultaneously issue boluses, no more than 2/3 of the required insulin is delivered, and future SMBs can be adjusted to ensure that oref1 never delivers more insulin than it can safely withhold via a zero temp basal.
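In pseudocode, that verification sequence looks roughly like the following – illustrative only, with made-up pump method names; the real checks live in the oref0 codebase:

```javascript
// Illustrative sketch of the SMB safety sequence described above.
async function maybeDeliverSMB(pump, insulinRequired) {
  const reservoirBefore = await pump.readReservoir();
  await pump.readTreatmentHistory(); // account for ALL insulin already delivered
  // Cap each SMB at 1/3 of the known requirement, so even two racing rigs
  // deliver at most 2/3 of the required insulin between them.
  const smb = insulinRequired / 3;
  // Re-check bolusing status and reservoir level immediately before dosing,
  // to guard against stale recommendations or another rig dosing first.
  const busy = await pump.isBolusing();
  const reservoirNow = await pump.readReservoir();
  if (busy || reservoirNow !== reservoirBefore) return;
  await pump.bolus(smb);
}
```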

In some situations, a lack of BG or intermittent pump communications can prevent SMBs from being delivered promptly.  In such cases, oref1 attempts to fall back to oref0 + AMA behavior and set an appropriate high temp basal.  However, if it is unable to do so, manual boluses are sometimes required to finish dosing for the recently consumed meal and prevent BG from rising too high.  As a result, oref1’s SMB features are only enabled as long as carb impact is still present: after a few hours (after carbs all decay), all such features are disabled, and oref1-enabled OpenAPS instances return to oref0 behavior while the user is asleep or otherwise not engaging with the system.

In addition to these safety status checks, the oref1 algorithm’s design helps ensure safety.  As already noted, setting a long-duration temporary basal rate of zero while super-microbolusing provides good protection against hypoglycemia, and very strong protection against severe hypoglycemia, by ensuring that insulin delivery is zero when BG levels start to drop, even if the OpenAPS rig loses communication with the pump, and that such a suspension is long enough to eventually bring BG levels back up to the target range, even if no manual corrective action is taken (for example, during sleep).  Because of these design features, oref1 may even represent an improvement over oref0 w/ AMA in terms of avoiding post-meal hypoglycemia.

In real world testing, oref1 has thus far proven at least as safe as oref0 w/ AMA with regard to hypoglycemia, and better able to prevent post-meal hyperglycemia when SMB is ongoing.

What does SMB “look” like?

Here is what SMB activity currently looks like when displayed on Nightscout and on my Pebble watch:

First oref1 SMB OpenAPS test by @DanaMLewis

First oref1 SMB OpenAPS test as seen on @DanaMLewis pebble watch

How do features like this get developed and tested?

SMB, like any other advanced feature, goes through extensive testing. First, we talk about it. Then, it gets written up in plain language as an issue for us to track discussion and development. Then we begin to develop the feature, and Scott and I test it on a spare pump and rig. When it gets to the point of being ready to test in the real world, I test it during a time period when I can focus on observing and monitoring what it is doing. Throughout all of this, we continue to make tweaks and changes to improve what we’re developing.

After several days (or, for something this different, weeks) of Dana-testing, we then have a few other volunteers begin to test it on spare rigs. They follow the same process of monitoring it on spare rigs and giving feedback, helping us develop it further before choosing to run it on a rig and a pump connected to their body. More feedback, discussion, and observation.

Eventually, it gets to a point where it is ready to go to the “dev” branch of OpenAPS code, which is where this code is now heading. Several people will review the code and approve it to be added to the “dev” branch. We will then have others test the “dev” branch with this and any other features or code changes – both people who want to enable this feature, and people who don’t (to make sure we don’t break existing setups). Eventually, after numerous thumbs up from multiple members of the community who have helped us test different use cases, the code from the “dev” branch will be “approved” and will go to the “master” branch, where it is available to a more typical user of OpenAPS.

However, not everyone automatically gets this code or will use it. People already running on the master branch won’t get this code or be able to use it until they update their rig. Even then, unless they were to specifically enable this feature (or any other advanced feature), they would not have this particular segment of code drive any of their rig’s behavior.

Where to find out more about oref1, SMB, etc.:

  • We have updated the OpenAPS Reference Design to reflect the differences between oref0 and the oref1 features.
  • OpenAPS documentation about oref1, which as of July 13, 2017 is now part of the master branch of oref0 code.
  • Ask questions! Like all things developed in the OpenAPS community, SMB and oref1-related features will evolve over time. We encourage you to hop into Gitter and ask questions about these features & whether they’re right for you (if you’re DIY closed looping).

Special note of thanks to several people who have contributed to ongoing discussions about SMB, plus the very early testers who have been running this on spare rigs and pumps. Plus always, ongoing thanks to everyone who is contributing and has contributed to OpenAPS development!

Making it possible for researchers to work with #OpenAPS or general Nightscout data – and creating a complex json to csv command line tool that works with unknown schema

This is less of an OpenAPS/DIYPS/diabetes-related post, although that is normally what I blog about. However, since we created the #OpenAPS Data Commons on Open Humans to allow those of us who desire to donate our diabetes data to research, I have been spending a lot of time figuring out the process, from uploading your data to how data is managed and shared securely with researchers. The hardest part is helping researchers figure out how to handle the data – because we PWDs produce a lot of data :). So this post explains some of the challenges of the data management needed to get it to a researcher-friendly format. I have been greatly helped over the years by general-purpose open source work from other people, and one of the things that helps ME the most as a non-traditional programmer is plain language posts explaining the thought process behind the tools and the attempted solution paths. Especially because sometimes the web pages and blog posts pop up higher in search than nitty-gritty tool documentation without context. (Plus, I’ve been taking my own advice about not holding myself back from trying, even when I don’t know how to do things yet.) So that’s what this post is!


Background/inspiration for the project and the tools I had to build:

We’re using Nightscout, which is a remote data-viewing platform for diabetes data, made with love, open source, and freely available for anyone with diabetes to use. It’s one of the best ways to display not only continuous glucose monitor (CGM) data, but also data from our DIY closed loop artificial pancreases (#OpenAPS). It can store data from a number of different kinds and brands of diabetes devices (pumps, CGMs, manual data entries, etc.), which means it’s a rich source of data. As the number of DIY OpenAPS users grows, we estimate that our real-world use is overtaking the total hours of data from clinical trials of closed loop artificial pancreas systems.  In the #WeAreNotWaiting spirit of moving quickly (rather than waiting years for research teams to collect and analyze their own data), we want to see what we can learn from OpenAPS usage, not only by donating data to help traditional researchers speed up their work, but also by co-designing research studies of the things of most value to the diabetes community.

Step 1: Data from users to Open Humans

I thought Step 1 would be the hardest. However, thanks to Madeleine Ball, John Costik, and others in the Nightscout community, a simple Nightscout Data Transfer App was created that enables people with Nightscout data to pop it into their Open Humans accounts. It’s then very easy to join different projects (like the OpenAPS Data Commons) and share your data with those projects. And as the volunteer administrator of the OpenAPS Data Commons, it’s also easy for me to provide data to researchers.

The biggest challenge at this stage was figuring out how much data to pull from the API. I have almost 3 years’ worth of DIY diabetes data, and I have had numerous devices uploading all at once over time… which makes for large chunks of data. Not everyone has this much data (or 6-7 rigs uploading constantly ;)). Props to Madeleine for the patience in working with me to make sure the super users with large data sets will be able to use all of these tools!

Step 2: Sharing the data with researchers

This was easy. Yay for data-sharing tools like Dropbox.

Step 3: Researchers being able to use the data

Here’s where things started to get interesting. We have large data files that come in json format from Nightscout. I know some of the researchers we will be working with are probably very comfortable working with tools that can take large, complex json files. However… not all will be, especially because we also want to encourage independent researchers to engage with the data for projects. So I had the belated realization that we need to do something other than hand over json files. We need to convert, at the least, to csv so it can be easily viewed in Excel.

Sounds easy, right?

According to basic searches, there are roughly a gazillion ways to convert json to csv. There are even websites that will do it for you, without making you run it on the command line. However, most of them require you to know the types of data, and the number of types, in order to construct headers in the csv file that make it readable and useful to a human.

This is where the DIY and infinite-possibility nature of all the kinds of diabetes tools anyone could be using with Nightscout – plus the infinite ways they can self-describe profiles, alarms, and methods of entering data – makes it tricky. Just eyeballing the data from two individuals, I couldn’t even count the hundred-plus types of data entry possibilities. This is definitely a job for the computer, but I had to figure out how to train the computer to deal with it.
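The core trick we needed (shown here as a simplified sketch – not the actual complex-json2csv.js code, and without CSV quoting/escaping) is to make a first pass over every record to discover the union of flattened keys, and only then emit the header row:

```javascript
// Simplified sketch: flatten records of unknown/varying schema into CSV.
function flatten(obj, prefix = '', out = {}) {
  for (const [key, value] of Object.entries(obj)) {
    const path = prefix ? `${prefix}.${key}` : key;
    if (value && typeof value === 'object') flatten(value, path, out);
    else out[path] = value;
  }
  return out;
}

function toCSV(records) {
  const flat = records.map(r => flatten(r));
  // Pass 1: discover every key that appears in ANY record.
  const headers = [...new Set(flat.flatMap(Object.keys))];
  // Pass 2: one row per record, blank where a record lacks a field.
  const rows = flat.map(r => headers.map(h => r[h] ?? '').join(','));
  return [headers.join(','), ...rows].join('\n');
}
```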

Again, json-to-csv tools are so common that I figured there HAD to be someone who had done this. Finally, after a dozen varying searches and trying a variety of command line tools, I found one web-based tool that would take json, create the schema without knowing the data types in advance, and convert it to csv. It was (is) super slick. I got very excited when I saw it linked to a Github repository, because that meant it was probably open source and I could use it. I didn’t see any instructions for how to use it on the command line, though, so I messaged the author on Twitter and found out that a command line version didn’t yet exist and was a not-yet-done TODO for him.

Sigh. Given this whole #WeAreNotWaiting thing (and given I’ve promised to help some of the researchers in figuring this out so we can initiate some of the research projects), I needed to figure out how to convert this tool into a command line version.

So, I did.

  • I taught myself how to unzip json files (ended up picking `gzip -cd`, because it works on both Mac and Linux)
  • I planned to then convert the web tool to be able to work on the command line, and use it to translate the json files to csv.

But… remember the big file issue? It struck again. So I first had to figure out the best way to estimate the size and split the json into a series of files, without splitting it in a weird place and messing up the data. That became jsonsplit.sh, a tool to split a json file based on the size you give it (if you don’t specify, it defaults to something like 100,000 records).

FWIW: 100,000 records was too much for the more complex schema of the data I was working with, so I often did it in smaller chunks, but you can set it to whatever size you prefer.
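(Conceptually, the splitting logic is simple – sketched here in JavaScript rather than shell, and naively loading the whole file into memory, which is the very thing a careful tool has to avoid for really large files:)

```javascript
// Sketch: split a large JSON array into files of at most `size` records each.
const fs = require('fs');

function splitJson(inputPath, size = 100000) {
  const records = JSON.parse(fs.readFileSync(inputPath, 'utf8'));
  for (let i = 0; i * size < records.length; i++) {
    const chunk = records.slice(i * size, (i + 1) * size);
    // Writes inputPath.1.json, inputPath.2.json, ...
    fs.writeFileSync(`${inputPath}.${i + 1}.json`, JSON.stringify(chunk));
  }
}
```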

So now “all” I had to do was:

  • Unzip the json
  • Break it down if it was too large, using jsonsplit.sh
  • Convert each of these files from json to csv

Phew. Each of these looks really simple now, but took a good chunk of time to figure out. Luckily, the author of the web tool had done much of the hard json-to-csv work, and Scott helped me figure out how to take the html-based version of the conversion and make it useable in the command line using javascript. That became complex-json2csv.js.

Because I knew how hard this all was, and wanted other people to be able to easily use this tool if they had large, complex json with unknown schema to deal with, I created a package.json so I could publish it to npm so you can download and run it anywhere.

I also had to create a script that would pass it all of the Open Humans data; unzip the file; run jsonsplit.sh, run complex-json2csv.js, and organize the data in a useful way, given the existing file structure of the data. Therefore I also created an “OpenHumansDataTools” repository on Github, so that other researchers who will be using Nightscout-based Open Humans data can use this if they want to work with the data. (And, there may be something useful to others using Open Humans even if they’re not using Nightscout data as their data source – again, see “large, complex, challenging json since you don’t know the data type and count of data types” issue. So this repo can link them to complex-json2csv.js and jsonsplit.sh for discovery purposes, as they’re general purpose tools.) That script is here.

My next TODO will be to write a script that takes only slices of data based on information shared as part of the surveys that go with the Nightscout data; i.e., if you started your DIY closed loop on X date, take data from 2 weeks prior and 6 weeks after, etc.

I also created a pull request (PR) back to the original tool that inspired my work, in case he wants to add it to his repository for others who also want to run his great stuff from the command line. I know my stuff isn’t perfect, but it works :) and I’m proud of being able to contribute to general-purpose open source in addition to diabetes-specific open source work. (Big thanks as always to everyone who devotes their work to open source for others to use!)

So now I can pass researchers json or csv files for use in their research. We have a number of studies that are planning to request access to the OpenAPS Data Commons, and I’m excited about how work like this, making diabetes data more broadly available for research, will help improve our lives in the short and long term!

Autotune (automatically assessing basal rates, ISF, and carb ratio with #OpenAPS – and even without it!)

What if, instead of guessing at needed changes to basal rates, ISF, and carb ratios (currently the most-used method)…we could use data to empirically determine how these settings should be adjusted?

Meet autotune.

What if we could use data to determine basal rates, ISF and carb ratio? Meet autotune

Historically, most people have guessed basal rates, ISF, and carb ratios. Their doctors may use things like the “rule of 1500” or “1800” or body weight. But that’s all just a general starting place. Over time, people have to manually tweak these underlying basals and ratios in order to best live life with type 1 diabetes. It’s hard to do this manually, and hard to know whether you’re overcompensating for basal with meal boluses (aka an incorrect carb ratio), or over-basaling to compensate for meal times or an incorrect ISF.

And why do these values matter?

It’s not just about manually dosing with this information. Importantly, for most DIY closed loops (like #OpenAPS), dose adjustments are made based on the underlying basals, ISF, and carb ratio. For someone with reasonably tuned basals and ratios, that works great. But for someone with values that are way off, it means the system can’t help them adjust as much as it could someone with well-tuned values. It’ll still help, but it’ll be a fraction as powerful as it could be for that person.

There wasn’t much we could do about that…at first. We designed OpenAPS to fall back to whatever values people had in their pumps, because that’s what the person/their doctor had decided was best. However, we know some people’s settings aren’t that great, for a variety of reasons. (Growth, activity changes, hormonal cycles, diet and lifestyle changes – to name a few. Aka, life.)

With autosensitivity, we were able to start to assess when actual BG deltas were off compared to what the system predicted should be happening. Based on that assessment, it would dynamically adjust ISF, basals, and targets. However, a common reaction was people seeing the autosens result (based on 24 hours of data) and assuming that meant their underlying ISF/basals should be changed. But that’s not the case, for two reasons. First, a 24-hour period shouldn’t be what determines those changes. Second, autosens cannot tell apart the effects of basals vs. the effects of ISF.
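To illustrate only the flavor of that (the real autosens algorithm in oref0 is more involved, and the mapping below is my own simplification), a sensitivity ratio could be sketched like this, where deviations are the observed-minus-expected BG changes per interval and the 0.7/1.2 bounds stand in for the configurable safety caps:

```javascript
// Grossly simplified illustration of the autosens idea; not the real
// oref0 code. deviations = observed minus expected BG change per interval.
function sensitivityRatio(deviations, isf) {
  const sorted = [...deviations].sort((a, b) => a - b);
  const median = sorted[Math.floor(sorted.length / 2)];
  // Persistent positive deviations suggest resistance (ratio > 1);
  // persistent negative ones suggest sensitivity (ratio < 1).
  // The mapping and the 0.7/1.2 caps here are illustrative assumptions.
  return Math.max(0.7, Math.min(1.2, 1 + median / isf));
}
```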

Autotune, by contrast, is designed to iteratively adjust basals, ISF, and carb ratio over the course of weeks – based on a longer stretch of data. Because it makes changes more slowly than autosens, autotune ends up drawing on a larger pool of data, and is therefore able to differentiate whether and how basals and/or ISF need to be adjusted, and also whether carb ratio needs to be changed. Whereas we don’t recommend changing basals or ISF based on the output of autosens (because it’s only looking at 24h of data, and can’t tell apart the effects of basals vs. the effect of ISF), autotune is intended to be used to help guide basal, ISF, and carb ratio changes because it’s tracking trends over a large period of time.

Ideally, for those of us using DIY closed loops like OpenAPS, you can run autotune iteratively inside the closed loop, letting it tune basals, ISF, and carb ratio nightly and use those updated settings automatically. As with autosens, and everything else in OpenAPS, there are safety caps: none of these parameters can be tuned more than 20-30% away from the underlying pump values. If someone’s autotune keeps recommending the maximum change (20% more resistant, or 30% more sensitive) over time, then it’s worth a conversation with their doctor about whether the underlying pump values need changing – and the person can take this report in to start the discussion.
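The cap itself is simple to picture: conceptually it’s just a clamp around the pump’s own value (the exact percentages differ by parameter and direction; the defaults below mirror the 20%-more-resistant / 30%-more-sensitive example above and are illustrative only):

```javascript
// Conceptual sketch of the safety cap: never recommend a value more than
// ~20% above or ~30% below what's in the pump profile (numbers illustrative).
function capTunedValue(pumpValue, tunedValue, maxRaise = 0.2, maxLower = 0.3) {
  const ceiling = pumpValue * (1 + maxRaise);
  const floor = pumpValue * (1 - maxLower);
  return Math.min(ceiling, Math.max(floor, tunedValue));
}
```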

Not everyone will want to let it run iteratively, though – not to mention, we want it to be useful to anyone, regardless of which DIY closed loop they choose to use – or not! Ideally, this can be run one-off by anyone with Nightscout data of BG and insulin treatments. (Note – I wrote this blog post on a Friday night saying “There’s still some more work that needs to be done to make it easier to run as a one-off (and test it with people who aren’t looping but have the right data)…but this is the goal of autotune!” And by Saturday morning, we had volunteers who sat down with us and within 1-2 hours had it figured out and documented! True #WeAreNotWaiting. :))

And from what we know, this may be the first tool to help actually make data-driven recommendations on how to change basal rates, ISF, and carb ratios.

How autotune works:

Step 1: Autotune-prep

  • Autotune-prep takes three things initially: glucose data; treatments data; and a starting profile (initially from the pump; on subsequent runs, autotune uses the profile it previously set)
  • It calculates BGI (blood glucose impact) and deviation for each glucose value based on treatments
  • Then, it categorizes each glucose value as attributable to either carb sensitivity factor (CSF), ISF, or basals (a simplified sketch of this categorization follows this list)
  • To determine if a “datum” is attributable to CSF, carbs on board (COB) are calculated and decayed over time based on observed BGI deviations, using the same algorithm used by Advanced Meal Assist. Glucose values after carb entry are attributed to CSF until COB = 0 and BGI deviation <= 0. Subsequent data is attributed to ISF or basals.
  • If BGI is positive (meaning insulin activity is negative), if BGI is smaller than 1/4 of basal BGI, or if the average delta is positive, that data is attributed to basals.
  • Otherwise, the data is attributed to ISF.
  • All this data is output to a single file with 3 sections: ISF, CSF, and basals.
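Here’s the categorization sketch referenced above, boiled down to the rules in this list (illustrative only; the real logic lives in oref0’s autotune-prep):

```javascript
// Simplified sketch of the autotune-prep categorization rules described
// above; the actual implementation is in oref0's autotune-prep.
function categorize(datum, cobRemaining, basalBGI) {
  // While carbs are still absorbing, deviations are attributed to CSF
  if (cobRemaining > 0) return 'CSF';
  const { bgi, avgDelta } = datum;
  // Positive BGI (net-negative insulin activity), BGI under 1/4 of the
  // basal BGI, or a positive average delta: attribute to basals
  if (bgi > 0 || Math.abs(bgi) < Math.abs(basalBGI) / 4 || avgDelta > 0) {
    return 'basal';
  }
  return 'ISF'; // otherwise, attribute the deviation to ISF
}
```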

Step 2: Autotune-core

  • Autotune-core reads the prepped glucose file with 3 sections. It calculates what adjustments should be made to ISF, CSF, and basals accordingly.
  • For basals, it divides the day into hour-long increments. It calculates the total deviations for each hour increment and calculates what change in basal would be required to adjust those deviations to 0. It then applies 20% of that needed change to the three hours prior (because of insulin impact time). If increasing basal, it increases each of the three hourly increments by the same amount. If decreasing basal, it does so proportionally, so the biggest basal is reduced the most.
  • For ISF, it calculates the 50th percentile deviation for the entire day and determines how much ISF would need to change to get that deviation to 0. It applies 10% of that as an adjustment to ISF.
  • For CSF, it calculates the total deviations over all of the day’s mealtimes and compares to the deviations that are expected based on existing CSF and the known amount of carbs entered, and applies 10% of that adjustment to CSF.
  • Autotune applies a 20% limit on how much a given basal, ISF, or CSF value can vary from what is in the existing pump profile, so that if it’s running as part of your loop, autotune can’t get too far off without a chance for a human to review the changes. (A simplified sketch of this adjustment math follows.)
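As a rough illustration of that adjustment pattern (the formulas here are simplified assumptions, not the actual autotune-core code), the hourly basal step might look like:

```javascript
// Illustrative sketch of the basal adjustment described above; not the
// actual oref0 autotune-core implementation.
function tuneHourlyBasal(pumpBasal, hourDeviationsMgdl, isf) {
  // Units of insulin that would have zeroed this hour's total deviations
  const neededUnits = hourDeviationsMgdl / isf;
  // Apply only 20% of that change per run (the real tool spreads it
  // across the three hours prior, because of insulin impact time)...
  const tuned = pumpBasal + 0.2 * neededUnits;
  // ...and never stray more than 20% from the pump profile value.
  return Math.min(pumpBasal * 1.2, Math.max(pumpBasal * 0.8, tuned));
}
```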

(See more about how to run autotune here in the OpenAPS docs.)

What autotune output looks like:

Here’s an example of autotune output.

OpenAPS autotune example by @DanaMLewis

Autotune is one of the things Scott and I spent time on over the holidays (and hinted about at the end of my development review of 2016 for OpenAPS). As always with #OpenAPS, it’s awesome to take an idea, get it coded up, get it tested with some early adopters/other developers within days, and continue to improve it!

Highlighting someone successfully using Autotune to help adjust baseline settings

A big thank you to those who’ve been testing and helping iterate on autotune (and of course, all other things OpenAPS). It’s currently in the dev branch of oref0 for anyone who wants to try it out, either one-off or as part of their dev loop. Documentation is currently here, and this is the issue in Github for logging feedback/input, along with sharing and asking questions as always in Gitter!


OpenAPS feature development in 2016

It’s been two years since my first DIY closed loop and almost two years since OpenAPS (the vision and resulting ecosystem to help make artificial pancreas technology, DIY or otherwise, more quickly available to more people living with diabetes) was created. I’ve spent time here (on DIYPS.org) talking about a variety of things that are applicable to people who are DIY closed looping, but also focusing on things (like how to “soak” a CGM sensor and how to do “eating soon” mode) that may be (in my opinion) universally applicable.


However, I think it’s worth recapping some of the amazing work that’s been done in the OpenAPS ecosystem over the past year, sometimes behind the scenes, because there are some key features and tools that have been added in that seem small, but are really impactful for people living with DIY closed loops.

  1. Advanced meal assist (aka AMA)
    1. This is an “advanced feature” that can be turned on by OpenAPS users, and, with reliable entry of carb information, will help the closed loop assist sooner with a post-meal BG rise where there is mis-timed or insufficient insulin coverage for the meal. It’s easy to use, because the PWD only has to put carbs and a bolus in – then AMA acts based on the observed absorption. This means that if absorption is delayed because you walk home from dinner, have gastroparesis, etc., it backs off and waits until the carbs actually start taking effect (even if it is later than the human would expect).
    2. We also now have the purple line predictions back in Nightscout to visualize some of these predictions. This is a holdover from the original iob-cob branch in Nightscout that Scott and I created, which took the COB calculated by DIYPS and visualized the resulting BG graph. With AMA, there are actually 3 purple lines displayed when there is carb activity. As described here in the OpenAPS docs, the top purple line assumes 10 mg/dL/5m (0.6 mmol/L/5m) carb absorption and is most accurate right after eating, before carb absorption ramps up. The line that is usually in the middle is based on current carb absorption trends and is generally the most accurate once carb absorption begins; and the bottom line assumes no carb absorption and reflects insulin only. Having the 3 lines is helpful for when you do something out of the ordinary following a meal (taking a walk; taking a shower; etc.), and helps a human decide if they need to do anything or if the loop will be able to handle the resulting impact of those decisions.
  2. The approach with a “preferences” file
    1. This is the file where people can adjust default safety and other parameters, like maxIOB, which defaults to 0 during a standard setup – ultimately creating a low-glucose-suspend-mode closed loop when people are first setting up their closed loops. People have to intentionally change this setting to allow the system to high-temp above a netIOB = 0 amount, which is an intended safety-first approach. (See the example preferences snippet after this list.)
    2. One particular feature (“override_high_target_with_low”) makes it easier for secondary caregivers (like school nurses) to do conservative boluses at lunch/snack time, and allow the closed loop to pick up from there. The secondary caregiver can use the bolus wizard, which will correct down to the high end of the target; setting this value in preferences to “true” allows the closed loop to target the low end of the target. Based on anecdotal reports from those using it, this feature sounds like it’s prevented a lot of (unintentional – diabetes is hard) overreacting by secondary caregivers when the closed loop can more easily deal with BG fluctuations. The same goes for “carbratio_adjustmentratio”: if parents would prefer for secondary caregivers to bolus with a more conservative carb ratio, this can be set so the closed loop ultimately uses the correct carb amount for any needed additional calculations.
  3. Autosensitivity
    1. I’ve written about autosensitivity before and how impressive it has been in the face of a norovirus and not eating to have the closed loop detect excessive sensitivity and be able to deal with it – resulting in 0 lows. It’s also helpful during other minor instances of sensitivity after a few active days; or resistance due to hormone cycles and/or an aging pump site.
    2. Autosens is a feature that has to be turned on specifically (like AMA) in order for people to utilize it, because it’s making adjustments to ISF and targets and looping accordingly from those values. It also has safety caps, automatically included, to limit the amount of adjustment in either direction that autosens can make to any of the parameters.
  4. Tiny rigs
    1. Thanks to Intel, we were introduced to a board designer who collaborated with the OpenAPS community and inspired the creation of the “Explorer Board”. It’s a multipurpose board that can be used for home automation and all kinds of things, and it’s another tool in the toolbox of off-the-shelf and commercial hardware that can be used in an OpenAPS setup. Thanks to its built-in radio, it’s enabled us to drastically reduce the size of an OpenAPS setup to about the size of two Chapsticks.
  5. Setup scripts
    1. As soon as we were working on the Explorer Board, I envisioned that it would be a game changer for increasing access for those who thought a Pi was too big/too burdensome for regular use with a DIY closed loop system. I knew we had a lot of work to do to cut down on the friction of the setup process – while balancing the fact that the DIY part of setting up a closed loop system was, and still is, incredibly important. We then worked to create the oref0-setup script to streamline the setup process. For anyone building a loop, you still have to set up your hardware and build a system, expressing intention in many places about what you want to do and how…but it’s cut down on a lot of friction and increased the amount of energy people have left over, which can instead be focused on reading the code and understanding the underlying algorithm(s) and features that they are considering using.
  6. Streamlined documentation
    1. The OpenAPS “docs” are an incredible labor of love and a testament to dozens and dozens of people who have contributed by sharing their knowledge about hardware, software, and the process it takes to weave all of these tools together. It has gotten to be very long, but given the advent of the Explorer Board hardware and the setup scripts, we were able to drastically streamline the docs and make it a lot easier to go from phase 0 (get and setup hardware, depending on the kind of gear you have); to phase 1 (monitoring and visualizing tools, like Nightscout); to phase 2 (actually setup openaps tools and build your system); to phase 3 (starting with a low glucose suspend only system and how to tune targets and settings safely); to phase 4 (iterating and improving on your system with advanced features, if one so desires). The “old” documentation and manual tool descriptions are still in the docs, but 95% of people don’t need them.
  7. IFTTT and other tool integrations
    1. It’s definitely worth calling out the integration with IFTTT that allows people to use things like Alexa, Siri, Pebble watches, Google Assistant (and just about anything else you can think of), to easily enter carbs or “modes” for OpenAPS to use, or to easily get information about the status of the system. (My personal favorite piece of this is my recent “hack” to automatically have OpenAPS trigger a “waking up” mode to combat hormone-driven BG increases that happen when I start moving around in the morning – but without having to remember to set the mode manually!)
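For reference, here’s roughly what the preferences mentioned in item 2 might look like in a preferences.json snippet. The two long underscore names are quoted directly from above; spelling maxIOB as “max_iob” is my assumption, and all three values are purely illustrative, not recommendations:

```json
{
  "max_iob": 0,
  "override_high_target_with_low": true,
  "carbratio_adjustmentratio": 0.8
}
```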

…and that was all just the things the community did in 2016! :) There are some other exciting things in development and being tested right now by the community, and I look forward to sharing more as this advanced algorithm development continues.

Happy New Year, everyone!

Half life

I have now lived with diabetes for more than half of my life.

That also means I have now lived less than half of my life without diabetes.

This somehow makes the passing of another year living with diabetes seem much more impactful to me. Maybe not to you, or to someone else with a different experience of living with diabetes and a different timeline of life before and after diagnosis…but to me this is a big one.

I’m happy to have context, though, to help me keep things in perspective. For example, I’ve now lived with a closed loop artificial pancreas (or automated insulin delivery) system for almost two full years.

(That’s almost as significant a marker of a “with” vs. “without” comparison as living “with” vs. “without” diabetes.)

And because I ended up with type 1 diabetes, I found out that doing things for other people and the communities you’re a part of is a powerful way to help yourself, both in the short term and the long term. That’s what drove me to figure out a way to take #DIYPS closed loop and make it something open source. And by doing that, I learned so much more about open source, and have been able to partner with incredible people innovating in hardware and software. These collaborations have resulted in an incredibly rich community of passionate people I like to call #OpenAPS-ers.

While #OpenAPS is by no means a cure, and no artificial pancreas will be a cure, they provide an immeasurably improved quality of life that a lot of us didn’t realize was possible with diabetes. Someone told me he can get the same results for his child living with diabetes, but with #OpenAPS it requires about 85% less work. And given the enormous time and cognitive burden of diabetes, this is a HUGE reduction.

And now doors are opening for us collectively to make even more of a significant impact on the diabetes community, and our fellow patient communities. Yesterday at the White House Frontiers conference, NIH Director Dr. Francis Collins was in the audience during my panel. At the end of the day, he stopped me to ask questions about my experiences and my perspective on the FDA and what we need from the government. I was able to talk with him about the need for the FDA and other parts of the government to help foster and support open source innovation. We talked about the importance of data access for patients, and the need for data visibility on commercially approved medical devices.

Showing former NIH Director Francis Collins my OpenAPS rig and talking about data interoperability.

This is not just a need of people with diabetes (although it’s certainly very applicable for all of the manufacturers with pipelines full of artificial pancreas products): these are universal needs of people dealing with serious health conditions.

Given what I heard yesterday, it’s working. The #WeAreNotWaiting spirit is infusing our partners in these other areas. We are planting seeds, building relationships, and working in collaboration with those at the FDA, NIH, HHS in addition to those in industry and academia. I know they were working toward these same goals before, but social media has helped raise up our collective voices about the burning need to make things better, sooner, for more people.

So if I have to live the rest of my life at a ratio where more than half of it has been spent living with diabetes, I look forward to continuing to work to get to an 85% reduction in the burden of daily life with diabetes for everyone.