The only thing to fear is fear itself

(Things I didn’t realize were involved in open-sourcing a DIY artificial pancreas: writing “yes you can” style self-help blog posts to encourage people to take the first step to TRY and use the open source code and instructions that are freely available….for those who are willing to try.)

You are the only thing holding yourself back from trying. Maybe it’s trying to DIY closed loop at all. Maybe it’s trying to make a change to your existing rig that was set up a long time ago.  Maybe it’s doing something your spouse/partner/parent has previously done for you. Maybe it’s trying to think about changing the way you deal with diabetes at all.

Trying is hard. Learning is hard. But even harder (I think) is listening to the negative self-talk that says “I can’t do this” and perhaps going without something that could make a big difference in your daily life.

99% of the time, you CAN do the thing. But it primarily starts with being willing to try, and being ok with not being perfect right out of the gate.

I blogged last year (wow, almost two years ago actually) about making and doing and how I’ve learned to do so many new things as part of my OpenAPS journey that I never thought possible. I am not a traditional programmer, developer, engineer, or anything like that. Yes, I can code (some)…because I taught myself as I went and continue to teach myself as I go. It’s because I keep trying, and failing, then trying, and succeeding, and trying some more and asking lots of questions along the way.

Here’s what I’ve learned in 3+ years of doing DIY, technical diabetes things that I never thought I’d be able to accomplish:

  1. You don’t need to know everything.
  2. You really don’t particularly need to have any technical “ability” or experience.
  3. You DO need to know that you don’t know it all, even if you already know a thing or two about computers.
  4. (People who come into this process thinking they know everything tend to struggle even more than people who come in humble and ready to learn.)
  5. You only need to be willing to TRY, try, and try again.
  6. It might not always work on the first try of a particular thing…
  7. …but there’s help from the community to help you learn what you need to know.
  8. The learning is a big piece of this, because we’re completely changing the way we treat our diabetes when we go from manual interventions to a hybrid closed loop (and we learned some things to help do it safely).
  9. You can do this – as long as you think you can.
  10. If you think you can’t, you’re right – but it’s not that you can’t, it’s that you’re not willing to even try.

This list of things gets proved out to me on a weekly basis.

I see many people look at the #OpenAPS docs and think “I can’t do that” (and tell me this) and not even attempt to try.

What’s been interesting, though, is how many non-technical people jumped in and gave autotune a try. Even with no technical background, several people followed the instructions, asked questions, and were able to spin up a Linux virtual machine and run beta-level (brand new, not by any means perfect) code and get output and results. It was amazing, and really proved all those points above. People were deeply interested in getting the computer to help them, and it did. It sometimes took some work, but they were able to accomplish it.

OpenAPS, or anything else involving computers, is the same way. (And OpenAPS is even easier than most anything else that requires coding, in my opinion.) Someone recently estimated that setting up OpenAPS takes only 20 mouse clicks; 29 copy and paste lines of code; 10 entries of passwords or logins; and probably about 15-20 random small entries at prompts (like your NS site address or your email address or wifi addresses). There’s a reference guide, documentation that walks you through exactly what to do, and a supportive community.

You can do it. You can do this. You just have to be willing to try.

Autotune (automatically assessing basal rates, ISF, and carb ratio with #OpenAPS – and even without it!)

What if, instead of guessing at needed changes to basal rates, ISF, and carb ratios (the current most-used method)…we could use data to empirically determine how these rates and ratios should be adjusted?

Meet autotune.

What if we could use data to determine basal rates, ISF and carb ratio? Meet autotune

Historically, most people have guessed basal rates, ISF, and carb ratios. Their doctors may use rules of thumb like the “rule of 1500” or “rule of 1800,” or body weight. But that’s all just a general starting place. Over time, people have to manually tweak these underlying basals and ratios in order to best live life with type 1 diabetes. It’s hard to do this manually, and hard to know whether you’re overcompensating with meal boluses (i.e., an incorrect carb ratio) for basal, or over-basaling to compensate for mealtimes or an incorrect ISF.
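(For the curious, here’s what those rules of thumb boil down to – a rough, illustrative calculation with made-up numbers, not anything autotune itself uses; this kind of generic estimate is exactly the starting point that data-driven tuning is meant to improve on.)

```python
# "Rule of 1800" (or 1500, for regular insulin): estimate ISF by dividing the
# constant by total daily dose (TDD) of insulin. Example numbers only.
tdd = 45                 # units of insulin per day (illustrative)
isf = 1800 / tdd         # ~40 mg/dL per unit of insulin
carb_ratio = 500 / tdd   # the analogous "rule of 500" estimate, ~11 g per unit
print(round(isf), round(carb_ratio, 1))
```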

And why do these values matter?

It’s not just about manually dosing with this information. But importantly, for most DIY closed loops (like #OpenAPS), dose adjustments are made based on the underlying basals, ISF, and carb ratio. For someone with reasonably tuned basals and ratios, that works great. But for someone with values that are way off, it means the system can’t help them adjust as much as someone with well-tuned values. It’ll still help, but it’ll be a fraction as powerful as it could be for that person.

There wasn’t much we could do about that…at first. We designed OpenAPS to fall back to whatever values people had in their pumps, because that’s what the person/their doctor had decided was best. However, we know some people’s settings aren’t that great, for a variety of reasons. (Growth, activity changes, hormonal cycles, diet and lifestyle changes – to name a few. Aka, life.)

With autosensitivity, we were able to start to assess when actual BG deltas were off compared to what the system predicted should be happening. With that assessment, it would dynamically adjust ISF, basals, and targets accordingly. However, a common reaction was people seeing the autosens result (based on 24 hours of data) and assuming it meant that their underlying ISF/basal should be changed. But that’s not the case, for two reasons. First, a 24 hour period shouldn’t be what determines those changes. Second, with autosens we cannot tell apart the effects of basals vs. the effects of ISF.
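Conceptually, autosens boils recent deviations (how BG actually moved vs. what insulin activity predicted) down into a single sensitivity ratio, and applies it within safety caps. The sketch below is a loose illustration of that idea only – the variable names, formula, and cap values are illustrative, not the actual oref0 autosens implementation:

```python
import statistics

def autosens_ratio(deviations, isf, ratio_min=0.7, ratio_max=1.2):
    """Loose conceptual sketch: turn a day's worth of 5-minute BG deviations
    (actual change minus predicted change, in mg/dL) into a sensitivity ratio.
    ratio > 1 suggests resistance; ratio < 1 suggests sensitivity."""
    typical_deviation = statistics.median(deviations)
    # Express the typical unexplained drift relative to ISF, and clamp it
    # to the kind of safety caps described above.
    ratio = 1 + typical_deviation / isf
    return min(max(ratio, ratio_min), ratio_max)

# Applying the ratio (again, conceptually): when resistant, raise basals and
# strengthen ISF; when sensitive, do the opposite (and raise targets).
ratio = autosens_ratio([2, 3, -1, 4, 2], isf=40)
adjusted_basal = round(1.0 * ratio, 2)   # U/hr
adjusted_isf = round(40 / ratio, 1)      # mg/dL per unit
print(ratio, adjusted_basal, adjusted_isf)
```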

Autotune, by contrast, is designed to iteratively adjust basals, ISF, and carb ratio over the course of weeks – based on a longer stretch of data. Because it makes changes more slowly than autosens, autotune ends up drawing on a larger pool of data, and is therefore able to differentiate whether and how basals and/or ISF need to be adjusted, and also whether carb ratio needs to be changed. Whereas we don’t recommend changing basals or ISF based on the output of autosens (because it’s only looking at 24h of data, and can’t tell apart the effects of basals vs. the effect of ISF), autotune is intended to be used to help guide basal, ISF, and carb ratio changes because it’s tracking trends over a large period of time.

Ideally, for those of us using DIY closed loops like OpenAPS, you can run autotune iteratively inside the closed loop, letting it tune basals, ISF, and carb ratio nightly and use those updated settings automatically. Like autosens, and everything else in OpenAPS, there are safety caps: none of these parameters can be tuned beyond 20-30% from the underlying pump values. If someone’s autotune keeps recommending the maximum (20% more resistant, or 30% more sensitive) change over time, then it’s worth a conversation with their doctor about whether the underlying values need changing on the pump – and the person can take this report in to start the discussion.

Not everyone will want to let it run iteratively, though – not to mention, we want it to be useful to anyone, regardless of which DIY closed loop they choose to use – or not! Ideally, this can be run one-off by anyone with Nightscout data of BG and insulin treatments. (Note – I wrote this blog post on a Friday night saying “There’s still some more work that needs to be done to make it easier to run as a one-off (and test it with people who aren’t looping but have the right data)…but this is the goal of autotune!” And by Saturday morning, we had volunteers who sat down with us and within 1-2 hours had it figured out and documented! True #WeAreNotWaiting. :))

And from what we know, this may be the first tool to help actually make data-driven recommendations on how to change basal rates, ISF, and carb ratios.

How autotune works:

Step 1: Autotune-prep

  • Autotune-prep takes three things initially: glucose data; treatments data; and starting profile (originally from pump; afterwards autotune will set a profile)
  • It calculates BGI and deviation for each glucose value based on treatments
  • Then, it categorizes each glucose value as attributable to either carb sensitivity factor (CSF), ISF, or basals
  • To determine if a “datum” is attributable to CSF, carbs on board (COB) are calculated and decayed over time based on observed BGI deviations, using the same algorithm used by Advanced Meal Assist. Glucose values after carb entry are attributed to CSF until COB = 0 and BGI deviation <= 0. Subsequent data is attributed to ISF or basals.
  • If BGI is positive (meaning insulin activity is negative), if BGI is smaller than 1/4 of basal BGI, or if the average delta is positive, that data is attributed to basals.
  • Otherwise, the data is attributed to ISF. (A simplified sketch of this categorization logic follows after this list.)
  • All this data is output to a single file with 3 sections: ISF, CSF, and basals.
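In pseudocode-ish Python, the categorization logic described above looks roughly like this. It’s a simplified sketch with illustrative variable names – the real logic lives in oref0’s autotune-prep and handles COB decay, meal boundaries, and many more edge cases:

```python
def categorize_datum(bgi, deviation, avg_delta, cob, basal_bgi, recent_carbs=False):
    """Attribute one glucose datum to CSF, basal, or ISF (simplified sketch)."""
    # After a carb entry, BG movement is attributed to carb absorption (CSF)
    # until COB has decayed to zero and deviations are no longer positive.
    if cob > 0 or (recent_carbs and deviation > 0):
        return "CSF"
    # Positive BGI means net insulin activity is negative; a tiny BGI means
    # little insulin is active; a positive average delta means BG is drifting
    # upward on its own. All of these are attributed to basals.
    if bgi > 0 or abs(bgi) < basal_bgi / 4 or avg_delta > 0:
        return "basal"
    # Otherwise insulin is actively dropping BG, so the datum informs ISF.
    return "ISF"

# Example: insulin is clearly active and BG is falling -> informs ISF.
print(categorize_datum(bgi=-3.0, deviation=-0.5, avg_delta=-2.0,
                       cob=0, basal_bgi=1.0))
```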

Step 2: Autotune-core

  • Autotune-core reads the prepped glucose file with 3 sections. It calculates what adjustments should be made to ISF, CSF, and basals accordingly.
  • For basals, it divides the day into hour long increments. It calculates the total deviations for that hour increment and calculates what change in basal would be required to adjust those deviations to 0. It then applies 20% of that change needed to the three hours prior (because of insulin impact time). If increasing basal, it increases each of the 3 hour increments by the same amount. If decreasing basal, it does so proportionally, so the biggest basal is reduced the most.
  • For ISF, it calculates the 50th percentile deviation for the entire day and determines how much ISF would need to change to get that deviation to 0. It applies 10% of that as an adjustment to ISF.
  • For CSF, it calculates the total deviations over all of the day’s mealtimes and compares to the deviations that are expected based on existing CSF and the known amount of carbs entered, and applies 10% of that adjustment to CSF.
  • Autotune applies a 20% limit on how much a given basal, or ISF or CSF, can vary from what is in the existing pump profile, so that if it’s running as part of your loop, autotune can’t get too far off without a chance for a human to review the changes. (A simplified sketch of the basal adjustment step follows after this list.)
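And here’s a simplified sketch of the basal adjustment step, again with illustrative names and structure only – the real implementation is oref0’s autotune-core, which also spreads the change across the three prior hours and handles the ISF and CSF adjustments described above:

```python
def tune_hourly_basal(pump_basal, current_basal, hour_deviation_total, isf,
                      learning_rate=0.2, cap=0.2):
    """One autotune-core-style basal nudge for a single hour of the day.

    hour_deviation_total: summed BG deviations (mg/dL) attributed to basal
    for that hour. Positive deviations mean BG ran higher than insulin
    activity predicted, i.e. more basal was needed.
    """
    # Units of insulin that would have been needed to cancel the deviations.
    units_needed = hour_deviation_total / isf
    # Only apply a fraction of that change per nightly run, so tuning is slow.
    proposed = current_basal + learning_rate * units_needed
    # Safety cap: stay within +/-20% of what is programmed in the pump.
    low, high = (1 - cap) * pump_basal, (1 + cap) * pump_basal
    return round(min(max(proposed, low), high), 3)

# Example: deviations totaling +30 mg/dL in an hour with ISF 40 suggest
# ~0.75 U more insulin was needed; apply 20% of that, capped vs. the pump.
print(tune_hourly_basal(pump_basal=0.8, current_basal=0.8,
                        hour_deviation_total=30, isf=40))
```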

(See more about how to run autotune here in the OpenAPS docs.)

What autotune output looks like:

Here’s an example of autotune output.

OpenAPS autotune example by @DanaMLewis

Autotune is one of the things Scott and I spent time on over the holidays (and hinted about at the end of my development review of 2016 for OpenAPS). As always with #OpenAPS, it’s awesome to take an idea, get it coded up, get it tested with some early adopters/other developers within days, and continue to improve it!

A big thank you to those who’ve been testing and helping iterate on autotune (and of course, all other things OpenAPS). It’s currently in the dev branch of oref0 for anyone who wants to try it out, either one-off or for part of their dev loop. Documentation is currently here, and this is the issue in Github for logging feedback/input, along with sharing and asking questions as always in Gitter!


OpenAPS feature development in 2016

It’s been two years since my first DIY closed loop and almost two years since OpenAPS (the vision and resulting ecosystem to help make artificial pancreas technology, DIY or otherwise, more quickly available to more people living with diabetes) was created. I’ve spent time here (on DIYPS.org) talking about a variety of things that are applicable to people who are DIY closed looping, but also focusing on things (like how to “soak” a CGM sensor and how to do “eating soon” mode) that may be (in my opinion) universally applicable.


However, I think it’s worth recapping some of the amazing work that’s been done in the OpenAPS ecosystem over the past year, sometimes behind the scenes, because there are some key features and tools that have been added in that seem small, but are really impactful for people living with DIY closed loops.

  1. Advanced meal assist (aka AMA)
    1. This is an “advanced feature” that can be turned on by OpenAPS users, and, with reliable entry of carb information, will help the closed loop assist sooner with a post-meal BG rise where there is mis-timed or insufficient insulin coverage for the meal. It’s easy to use, because the PWD only has to put carbs and a bolus in – then AMA acts based on the observed absorption. This means that if absorption is delayed because you walk home from dinner, have gastroparesis, etc., it backs off and waits until the carbs actually start taking effect (even if it is later than the human would expect).
    2. We also now have the purple line predictions back in Nightscout to visualize some of these predictions. This is a hallmark of the original iob-cob branch in Nightscout that Scott and I originally created, that took my COB calculated by DIYPS and visualized the resulting BG graph. With AMA, there are actually 3 purple lines displayed when there is carb activity. As described here in the OpenAPS docs, the top purple line assumes 10 mg/dL/5m carb (0.6 mmol/L/5m) absorption and is most accurate right after eating before carb absorption ramps up. The line that is usually in the middle is based on current carb absorption trends and is generally the most accurate once carb absorption begins; and the bottom line assumes no carb absorption and reflects insulin only. Having the 3 lines is helpful for when you do something out of the ordinary following a meal (taking a walk; taking a shower; etc.) and helps a human decide if they need to do anything or if the loop will be able to handle the resulting impact of those decisions.
  2. The approach with a “preferences” file
    1. This is the file where people can adjust default safety and other parameters, like maxIOB which defaults to 0 during a standard setup, ultimately creating a low-glucose-suspend-mode closed loop when people are first setting up their closed loops. People have to intentionally change this setting to allow the system to high temp above a netIOB = 0 amount, which is an intended safety-first approach.
    2. One particular feature (“override_high_target_with_low”) makes it easier for secondary caregivers (like school nurses) to do conservative boluses at lunch/snack time and let the closed loop pick up from there. The secondary caregiver can use the bolus wizard, which will correct down to the high end of the target; and setting this value in preferences to “true” allows the closed loop to target the low end of the target. Based on anecdotal reports from those using it, this feature sounds like it’s prevented a lot of (unintentional, diabetes is hard) overreacting by secondary caregivers when the closed loop can more easily deal with BG fluctuations. The same goes for “carbratio_adjustmentratio”: if parents would prefer for secondary caregivers to bolus with a more conservative carb ratio, this can be set so the closed loop ultimately uses the correct carb amount for any needed additional calculations.
  3. Autosensitivity
    1. I’ve written about autosensitivity before, and how impressive it was, in the face of a norovirus and not eating, to have the closed loop detect excessive sensitivity and deal with it – resulting in 0 lows. It’s also helpful during other minor instances of sensitivity after a few active days, or resistance due to hormone cycles and/or an aging pump site.
    2. Autosens is a feature that has to be turned on specifically (like AMA) in order for people to utilize it, because it’s making adjustments to ISF and targets and looping accordingly from those values. It also has safety caps that are automatically included to limit the amount of adjustment in either direction that autosens can make to any of the parameters.
  4. Tiny rigs
    1. Thanks to Intel, we were introduced to a board designer who collaborated with the OpenAPS community and inspired the creation of the “Explorer Board”. It’s a multipurpose board that can be used for home automation and all kinds of things, and it’s another tool in the toolbox of off-the-shelf and commercial hardware that can be used in an OpenAPS setup. It’s enabled us, due to the built in radio stick, to be able to drastically reduce the size of an OpenAPS setup to about the size of two Chapsticks.
  5. Setup scripts
    1. As soon as we started working on the Explorer Board, I envisioned that it would be a game changer for increasing access for those who thought a Pi was too big/too burdensome for regular use with a DIY closed loop system. I knew we had a lot of work to do to cut down on the friction of the setup process – while balancing that against the fact that the DIY part of setting up a closed loop system was and still is incredibly important. We then created the oref0-setup script to streamline the setup process. For anyone building a loop, you still have to set up your hardware and build a system, expressing your intention in many places about what you want to do and how…but it’s cut down on a lot of friction and increased the amount of energy people have left, which can instead be focused on reading the code and understanding the underlying algorithm(s) and features they are considering using.
  6. Streamlined documentation
    1. The OpenAPS “docs” are an incredible labor of love and a testament to dozens and dozens of people who have contributed by sharing their knowledge about hardware, software, and the process it takes to weave all of these tools together. It has gotten to be very long, but given the advent of the Explorer Board hardware and the setup scripts, we were able to drastically streamline the docs and make it a lot easier to go from phase 0 (get and setup hardware, depending on the kind of gear you have); to phase 1 (monitoring and visualizing tools, like Nightscout); to phase 2 (actually setup openaps tools and build your system); to phase 3 (starting with a low glucose suspend only system and how to tune targets and settings safely); to phase 4 (iterating and improving on your system with advanced features, if one so desires). The “old” documentation and manual tool descriptions are still in the docs, but 95% of people don’t need them.
  7. IFTTT and other tool integrations
    1. It’s definitely worth calling out the integration with IFTTT that allows people to use things like Alexa, Siri, Pebble watches, Google Assistant (and just about anything else you can think of), to easily enter carbs or “modes” for OpenAPS to use, or to easily get information about the status of the system. (My personal favorite piece of this is my recent “hack” to automatically have OpenAPS trigger a “waking up” mode to combat hormone-driven BG increases that happen when I start moving around in the morning – but without having to remember to set the mode manually!)

..and that was all just things the community has done in 2016! :) There are some other exciting things that are in development and being tested right now by the community, and I look forward to sharing more as this advanced algorithm development continues.

Happy New Year, everyone!

Automating “wake up” mode with IFTTT and #OpenAPS to blunt morning hormonal rises

tl;dr – automate a trigger to your #OpenAPS rig to start “wake up” mode (or “eating soon”, assuming you eat breakfast) without you having to remember to do it.

Yesterday morning, I woke up and headed to my desk to start working. Because I’m getting some amazing flat line overnights now, thanks to my DIY closed loop (#OpenAPS), I’m more attuned to the fact that after I wake up and start moving around, my hormones kick in to help wake me up (I guess), and I have a small BG rise that’s not otherwise explained by anything else. (It’s not a baseline basal problem, because it happens after I wake up regardless of it being 6am or 8am or even 10:30am if I sleep in on a weekend. It’s also more pronounced when I feel sleep deprived, like my body is working even harder to wake me up.)

Later in the morning, I took a break to jot down my thoughts in response to a question about normal meal rises on #OpenAPS and strategies to optimize mealtimes. It occurred to me later, after being hyper attuned to my lunch results, that my morning wake-up rise from a perfectly flat 100 to ~140 was higher than the 131 peak I hit after my lunchtime bowl of potato soup.

Hmm, I thought. I wish there was something I could do to help with those morning rises. I often do a temporary target down to 80 mg/dL (a la “eating soon” mode) once I spot the rise, but that’s after it’s already started and very dependent on me paying attention/noticing the rise.

I also have a widely varied schedule (and travel a lot), so I don’t like the idea of scheduling the temp target, or having recurring calendar events that are yet another thing to babysit and change constantly.

What I want is something that is automatically triggered when I wake up, so whether I pop out of the bed or read for 15 minutes first, it kicks in automatically and I (the non-morning person) don’t have to remember to do one more thing. And the best trigger that I could think of is when I end Sleep Cycle, the sleep tracking app I use.

I started looking online to see if there was an easy IFTTT integration with Sleep Cycle. (There’s not. Boo.) So I started looking to see if I could stick my Sleep Cycle data elsewhere that could be used with IFTTT. I stumbled across this blog post describing Sleep Cycle -> iOS Apple HealthKit -> UP -> Google Spreadsheet -> Zapier -> Add to Google Calendar. And then I thought I would add another IFTTT trigger for when the calendar entry was added, to then send “waking up” mode to #OpenAPS. But I don’t need all of the calendar steps. The ideal recipe for me then might be Sleep Cycle ->  iOS Health Kit -> UP -> IFTTT sends “waking up mode” -> Nightscout -> my rig. However, I then learned that UP doesn’t necessarily automatically sync the data from HealthKit, unless the app is open. Hmm. More rabbit holing. Thanks to the tweet-a-friend option, I talked to Ernesto Ramirez (long time QS guru and now at Fitabase), who found the same blog post I did (above) and when I described the constraints, then pointed me to Hipbone to grab Healthkit sleep data and stuff it into Dropbox.

(Why Sleep Cycle? It is my main sleep tracker, but there’s IFTTT integration with Fitbit, Jawbone Up, and a bunch of other stuff, so if you’re interested in this, figure out how to plug your data into IFTTT, otherwise follow the OpenAPS docs for using IFTTT to get data into Nightscout for OpenAPS, and you’ll be all set. I’m trying to avoid having to go back to my Fitbit as the sleep tracker, since I’m wearing my Pebble and I was tired of wearing 2 things. And for some reason my Pebble is inconsistent and slow about showing the sleep data in the morning, so that’s not reliable for this purpose. )

Here’s how I have enabled this “wake up” mode trigger for now:

  1. If you’re using Sleep Cycle, enable it to write sleep analysis data to Apple HealthKit.
  2. Download the Hipbone app for iPhone, connect it with your Dropbox, and allow Hipbone to read sleep data from HealthKit.
  3. Log in or create an account on IFTTT.com and create a recipe using Dropbox as the trigger, and Maker as the action to send a web request to Nightscout. (Again, see the OpenAPS docs for using IFTTT triggers to post to Nightscout – there are all kinds of great things you can do with your Pebble, Alexa, etc. thanks to IFTTT.) To start, I made “waking up” mode set a temporary target of 80 for 30 minutes. (A sketch of the underlying web request follows below.)
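(For the curious, the Maker action boils down to posting a “Temporary Target” treatment to your Nightscout site. Here’s a rough sketch of the equivalent request in Python – the URL and secret are placeholders, and the OpenAPS docs have the exact payload to configure in IFTTT.)

```python
import hashlib
import requests

NIGHTSCOUT_URL = "https://your-nightscout-site.example.com"  # placeholder
API_SECRET = "your-api-secret"                               # placeholder

payload = {
    "eventType": "Temporary Target",
    "reason": "waking up",
    "targetBottom": 80,   # mg/dL
    "targetTop": 80,
    "duration": 30,       # minutes
    "enteredBy": "IFTTT-maker",
}

resp = requests.post(
    f"{NIGHTSCOUT_URL}/api/v1/treatments.json",
    json=payload,
    # Nightscout expects the SHA-1 hash of your API secret in this header.
    headers={"api-secret": hashlib.sha1(API_SECRET.encode()).hexdigest()},
)
resp.raise_for_status()
```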

Guess what? This morning, I woke up, ended Sleep Cycle, and ~10-11 minutes later got notifications that I had new data in Dropbox – and checked and found “waking up” mode showing in Nightscout! Woohoo. And it worked: no hormone-driven BG rise after I started moving around.

First "waking up" mode in #OpenAPS automation success

Ideally, this would run immediately, and not take 10-11 minutes, but it went automatically without me having to open Hipbone (or any other app), so this is a great interim solution for me until we find an app that will run more quickly to get the sleep data from HealthKit.

We keep finding great ways to use IFTTT triggers, so if you have any other cool ones you’ve added to your DIY closed loop ecosystem, please let me know!

Half life

I have now lived with diabetes for more than half of my life.

That also means I have now lived less than half of my life without diabetes.

This somehow makes the passing of another year living with diabetes seem much more impactful to me. Maybe not to you, or to someone else with a different experience of living with diabetes and a different timeline of life before and after diagnosis…but to me this is a big one.

I’m happy to have context, though, to help me keep things in perspective. For example, I’ve now lived with a closed loop artificial pancreas (or automated insulin delivery) system for almost two full years.

(That’s almost as significant a marker of a “with” vs. “without” comparison as living “with” vs. “without” diabetes.)

And because I ended up with type 1 diabetes, I found out that doing things for other people and the communities you’re a part of is a powerful way to help yourself, both in the short term and the long term. That’s what drove me to figure out a way to take #DIYPS closed loop and make it something open source. And by doing that, I learned so much more about open source, and have been able to partner with incredible people innovating in hardware and software. These collaborations have resulted in an incredibly rich community of passionate people I like to call #OpenAPS-ers.

While #OpenAPS is by no means a cure, and no artificial pancreas will be a cure, they provide an immeasurably improved quality of life that a lot of us didn’t realize was possible with diabetes. Someone told me he can get the same results for his child living with diabetes, but with #OpenAPS it requires about 85% less work. And given the enormous time and cognitive burden of diabetes, this is a HUGE reduction.

And now doors are opening for us collectively to make even more of a significant impact on the diabetes community, and our fellow patient communities. Yesterday, while at the White House Frontiers conference, NIH Director Dr. Francis Collins was in the audience during my panel. At the end of the day, he stopped me to ask questions about my experiences and perspective on the FDA and what we need from the government. I was able to talk with him about the need for FDA & other parts of the government to help foster and support open source innovation. We talked about the importance of data access for patients, and the need for data visibility on commercially approved medical devices.

This is not just a need of people with diabetes (although it’s certainly very applicable for all of the manufacturers with pipelines full of artificial pancreas products): these are universal needs of people dealing with serious health conditions.

Given what I heard yesterday, it’s working. The #WeAreNotWaiting spirit is infusing our partners in these other areas. We are planting seeds, building relationships, and working in collaboration with those at the FDA, NIH, HHS in addition to those in industry and academia. I know they were working toward these same goals before, but social media has helped raise up our collective voices about the burning need to make things better, sooner, for more people.

So if I have to live the rest of my life at a ratio where more than half of it has been spent living with diabetes, I look forward to continuing to work to get to an 85% reduction in the burden of daily life with diabetes for everyone.


Why you should post that in Gitter

“You should post that question in Gitter.” –something @DanaMLewis says a lot

I realize it’s not always obvious to those who are newer to #WeAreNotWaiting or #OpenAPS why I am often pointing people to ask their questions in Gitter.

Let me explain:

There is a Gitter chat channel where most of the #OpenAPS and other DIY development conversations happen. (There are several other channels, so if you pop onto the main OpenAPS one, we’re happy to point you to another channel if one already exists for a related project.)

Gitter connects with Github, so you can use the same username to log in and ask a question. And very importantly, it’s public, so *anyone* can see the conversations in channel – even if you decide not to log in. See for yourself – click here to view the chat channel. This is very key for an open source project: anyone can jump in and check things out.

Transparency & archiving community knowledge
It also means that questions can be asked – and answered – openly, so that when someone has the same question, they can often find an existing answer with a little bit of searching.

(Tip: there’s a Gitter app for your phone, and for your desktop, both of which I use on the go – but for searching back in the channel, the web Gitter interface has a better search experience, or you can also use Google to search through the archives.)

Faster responses from a smarter, broader, worldwide community
There’s another key reason why asking a question in an open channel is helpful for Q&A. Two words: time zones. There are now (n=1)*104+ people around the world with DIY closed loops, and a lot of them pay it forward and provide guidance and also help answer questions.

If you send one individual a question in a private channel, you have to cross your fingers and hope they’re a) awake b) not working and c) otherwise available to respond.

But if you ask your question openly in a public channel, anyone out of the large community with the answer can jump in and answer more quickly. Given that we have a worldwide community and people across many time zones, this means faster answers and more “ah-ha’s” as people collaboratively work through new and already-documented-sticky-points in the build process.

So if you ask an individual a question in a private channel, you might get a “You should ask that in Gitter!” response. And this is why. :) You’re welcome as always to ping me across any channel, but often when there’s a good question, I’m going to want you to re-post in Gitter, anyway, so people can benefit from having the knowledge (including any answers from the community) archived for the next person who has the same or similar question.
4 reasons to post OpenAPS questions in Gitter

Our take on how to DIY closed loop, safely

You will often see similar growth and evolution cycles across any type of online community, and the closed loop community is following this growth cycle as expected. Much like how Nightscout went from one very hard way to set up to get your CGM data in the cloud, to ultimately having dozens of DIY options and, more recently, multiple commercial options, closed looping is following similar trends. OpenAPS was the first open source option for people who wanted to DIY loop, and now there are a growing number of ways to build or run closed loops! And next year, there should be at least one commercial option publicly available in the U.S., followed by several more options on the commercial market in 2018. Awesome! This is exactly the progress we were hoping to see, and to facilitate happening more quickly, by making our work open source & encouraging others to do the same.

We’ve learned a lot (from building our own closed loop and watching others do so through OpenAPS) that we think is relevant to anyone who pursues DIY closed looping, regardless of the technology option they choose. This thought process and approach will likely also be relevant to those who switch to a closed loop commercial option in the future, so we wanted to document some of the thought process that may be involved.

Approaching closed looping safely

Before considering closed looping, people should know:

  • A (hybrid or even full) closed loop is not a cure. There will be a learning curve, much like switching to a pump for the first time.
  • Even after you get comfortable with a closed loop, there will still sometimes be high or low BGs, because we are still dealing with insulin that peaks in 60-90 minutes; we’ll still get kinked pump sites or pooled insulin; and we’ll still have hormones that drive our BGs up and down very rapidly in ways we can’t predict, but must react to. Closed looping helps a lot, but there’s still a lot that goes into managing diabetes.

Before using a DIY closed loop, people should consider:

  • Identifying or creating the method to visualize their data in a way they are comfortable with, both for real-time monitoring of loop activity and retrospective monitoring. This is a key component of DIY looping.
  • Running in “open loop” mode, where the system provides recommendations and you spend days or weeks analyzing and comparing those recommendations to how you would calculate and choose to take action manually.
  • Based on watching the “open loop” suggestions, decide your safety limits: you should set max basal and bolus rates, as well as max net IOB limits where relevant. Start conservative, knowing you can change them over time as you watch and validate how a particular DIY loop works with your body and your lifestyle.

Getting started with a DIY closed loop, people should think about the following:

  • Understand how it works, so you know how to fix it. Remember, by pursuing a DIY closed loop, you are responsible for it and the operation of it. No one is forcing you to do this; it’s one of many choices you can and will make with regards to how you personally choose to manage your diabetes.
  • But even more importantly, you need to understand how it works so you can choose if you need to step in and take manual action. You should understand how it works so you can validate “this is what it should be doing” and “I am getting the output and outcomes that I would expect if I were doing this decision making manually”.
  • Often, people will get frustrated by diabetes and take actions that the loop then has to compensate for. Or they’ll get lax on when they meal bolus, or not enter carbs into the system, etc. You will get much better results by putting better data into the system, and also by having a better understanding of insulin timing in your body, especially at meal times. Using techniques like “eating soon mode” will dramatically help anyone, with or without a closed loop, reduce and limit severity of meal spikes. Ditto goes for having good CGM “calibration hygiene” (h/t to Pete for this phrase) and ensuring you have thought about the ramifications of automating insulin dosing based on CGM data, and how you may or may not want to loop if you doubt your CGM data. (Like “eating soon”, ‘soaking’ a CGM sensor may yield you better first day results.)
  • Start with higher targets for the loop than you might correct to manually.
  • Move from an “open loop” mode to a “low glucose suspend” type mode first, where max net IOB is 0 and/or max basal is set at or just above your max daily scheduled basal, so it low temps to prevent and limit lows, but does not high temp above bringing net IOB back to 0.
  • Gradually increase max net IOB above 0 (and/or increase max basal) every few days after several days without low BGs; similarly, adjust targets down 10 points for every few days gone without experiencing low BGs.
  • Test basic algorithms and adjust targets and various max rates before moving on to testing advanced features. (It will be a lot easier to troubleshoot, and learn how a new feature works, if you’re not also adjusting to closed looping in its entirety).

This is our (Dana & Scott’s) take on things to think about before and while pursuing a closed loop option. But there are about a hundred other people running around the world with closed loops, too, so if you have input to share with people that they should consider before looping, leave a comment below! :) And if you’re looking to DIY closed loop before a commercial solution is available, you might also be interested in checking out the OpenAPS Reference Design and some FAQs related to OpenAPS.

How to “soak” a new CGM sensor for better first day BGs

Having used a CGM for years (and years and years, and years before that), and having chosen to build a DIY system that provides smart alerts and recommendations based on said CGM data (learn more about my #DIYPS system here) and ultimately using CGM data to build the open source closed loop system that automates insulin delivery (find out more about #OpenAPS here)…I’ve learned a few things about how to get the best data out of my sensors. Currently, I’m using Dexcom, so this applies to the Dexcom sensors.

The biggest thing I do to get better first day results from a continuous glucose monitor (CGM) sensor is to “soak” my sensors. Here’s what I mean by this:

Normally, you’d expect to see a person with one CGM sensor on their body, like this:

Dana Lewis_first day CGM sensor illustration

However, 12-24 hours before I expect my sensor to end, I insert my next sensor into my body. To protect the sensor (you don’t want the sensor filament itself to get torn off or lost in your body), I plop an old (“dead” battery) transmitter on it.

If you don’t have an old/dead transmitter, you could try taping over it – the goal is just to protect the sensor filament from ripping.

Dana Lewis_last day CGM sensor soak illustration

The next day, when my sensor session ends:

  • I take the “live” transmitter off the old sensor, and remove the old sensor from my body. I hit “stop sensor” on my receiver, if it hasn’t already stopped itself.
  • I gently remove the “dead”/old transmitter from ‘new’ sensor.
  • I then stick the “live” transmitter onto the new sensor.
  • I hit “start sensor” on my receiver.

Dana Lewis_last day CGM sensor soak change illustration

The outcome (for me) has always been significantly improved “first day” BG readings from the sensor. This works great when you can plan ahead and your outfits (don’t judge, sometimes you have important outfits like a wedding dress to plan around) and skin real estate support two sites on your body for 24 hours or so. This doesn’t work if you rip a sensor out by accident, so in those scenarios I go ahead and put a new sensor on, put the ‘live’ transmitter on, and hit ‘start’ to get through the 2 hour calibration period as soon as possible to get back to having live data. (All the while knowing that the first day is going to be more “meh” than it would be otherwise.)

What we heard and saw at #DData16 and #2016ADA

As mentioned in the previous post, we had the privilege of coming to New Orleans this past weekend for two events – #DData16 and the American Diabetes Association Scientific Sessions (#2016ADA). A few things stuck out, which I wanted to highlight here.

At #DData16:

  • The focus was on artificial pancreas, and there was a great panel moderated by Howard Look with several of the AP makers. I was struck by how many of them referenced or made mention of #OpenAPS or the DIY/#WeAreNotWaiting movement, and the need for industry to collaborate with the DIY community (yes).
  • I was also floored when someone from Dexcom referenced having read one of my older blog posts that mentioned a question of why ??? was displayed to me instead of the information about what was actually going on with my sensor. It was a great reminder to me of how important it is for us to speak up and keep sharing our experiences and help device manufacturers know what we need for current and future products, the ones we use every day to help keep us alive.
  • Mark Wilson gave a PHENOMENAL presentation, using a great analogy about driving and accessing the dashboard to help people understand why people with diabetes might choose to DIY. He also talked about his experiences with #OpenAPS, and I highly recommend watching it. (Kudos to Wes for livestreaming it and making it broadly available to all – watch it here!) I’ve mentioned Mark & his DIY-ing here before, especially because one of his creations (the Urchin watchface) is one of my favorite ways to help me view my data, my way.
  • Howard DM’ed me in the middle of the day to ask if I minded going up as part of the patient panel of people with AP experiences. I wasn’t sure what the topic was, but the questions allowed us to talk about our experiences with AP (and in my case, I’ve been using a hybrid closed loop for something like 557 or so days at this point). I made several points about the need for a “plug n play” system, with modularity so I can choose the best pump, sensor, and algorithm for me – which may or may not be made all by the same company. (This is also FDA’s vision for the future, and Dr. Courtney Lias both gave a good presentation on this topic and was engaged in the event’s conversation all day!).

At #2016ADA:

  • There needs to be a patient research access program developed (not just by the American Diabetes Association for their future Scientific Sessions meetings, but at all scientific and academic conferences). Technology has enabled patients to make significant contributions to the medical and scientific fields, and cost and access are huge barriers preventing this knowledge from scaling. At #2016ADA, “patient” is not even an option on the back of the registration form. Scott and I are privileged that we could potentially pay for this, but we don’t think we should have to pay ($410 for a day pass or $900 for a weekend pass) so much when we are not backed by industry or an academic organization of any sort. (As a side note, a big thank you to the many people who have a) engaged in discussion around this topic b) helped reach out to contacts at ADA to discuss this topic and c) asked about ways to contribute to the cost of us presenting this research this weekend.)
  • We presented research from 18 of the first 40 users of #OpenAPS. You can find the FULL CONTENT of our findings and the research poster content in this post on OpenAPS.org. We specifically posted our content online (and tweeted it out – see this thread) for a few reasons:
    • First, everything about #OpenAPS is open source. The content of our poster or any presentation is similarly open source.
    • Not everyone had time to come by the poster.
    • Not everyone has the privilege or funds to attend #2016ADA, and there’s no reason not to share this content online, especially when we will likely get more knowledge sharing as a result of doing so.
  • With the above in mind, we encouraged people stopping by to take whatever photos of our poster that they wanted, and told them about the content being posted online. (And in fact, in addition to the blog post about the poster, that information is now on the “Outcomes” page on OpenAPS.org.)
  • Frustratingly, some people were asked to take down posted photos of our poster. If anyone received such a note, please feel free to pass on my tweet that you have authorization by the authors to have taken/used the photo. This is another area (like the need to develop patient research access programs) that needs to be figured out by scientific/academic conferences – presenters/authors should be able to specifically allow sharing and dissemination of information that they are presenting.
  • Speaking of photos, I was surprised that around half a dozen clinicians (HCPs) stopped by and made mention of having used the picture of the #OpenAPS rig and the story of #OpenAPS in one of their presentations! I am thrilled this story is spreading, and being spread even by people we haven’t had direct contact with previously! (Feel free to use this photo in presentations, too, although I’d love to hear about your presentation and see a copy of it!)
  • We had many amazing conversations during the poster session on Sunday. It was scheduled for two hours (12-2pm), but we ended up being there around four hours and had hundreds of fantastic dialogues. Here were some of the most common themes of conversation:
    • Why are patients doing this?
      • Here’s my why: I originally needed louder alarms, built a smart alarm system that had predictive alerts and turned into an open loop system, and ultimately realized I could close the loop.
    • What can we learn from the people who are DIY-ing?
    • How can we further study the DIY closed loop community?
      • This is my second favorite topic, which touches on a few things – 1) the plan to do a follow up study of the larger cohort (since we now have (n=1)*84 loopers) with a full retrospective analysis of the data rather than just self-reported outcomes, as this study used; 2) ideas around doing a comparison study between one or more of the #OpenAPS algorithms and some of the commercial or academic algorithms; 3) ideas to use some of the #OpenAPS-developed tools (like a basal tuning tool that we are planning to build) in a clinical trial to help HCPs help patients adjust more quickly and easily to pump therapy.
    • What other pumps will work with this? How can there be more access to this type of DIY technology?
      • We utilize older pumps that allow us to send temp basal commands; we would love to use a more modern pump that’s able to be purchased on the market today, and had several conversations with device manufacturers about how that might be possible;  we’ll continue to have these conversations until it becomes a reality.
  • There is some great coverage coming of the poster & the #OpenAPS community, and I’ll post links here as I see them come out. For starters, Dave deBronkart did a 22 minute interview with Scott & me, which you can see here. DiabetesMine also included mention of the #OpenAPS poster in their conference roundup. And diaTribe wrote up the poster as a “new now next”! Plus, WebMD wrote an article on #OpenAPS and the poster as well.

Scott and I walked away from this weekend with energy for new collaborations (and new contacts for clinical trial and retrospective analysis partnerships) and several ideas for the next phase of studies that we want to plan in partnership with the #OpenAPS community. (We were blown away to discover that OpenAPS advanced meal assist algorithm is considered by some experts to be one of the most advanced and aggressive algorithms in existence for managing post-meal BG, and may be more advanced than anything that has yet been tested in clinical trials.) Stay tuned for more!

Research studies and usability thoughts

It’s been a busy couple (ok, more than couple) of months since we last blogged here related to developments from #DIYPS and #OpenAPS. (For context, #DIYPS is Dana’s personal system that started as a louder alarms system and evolved into an open loop and then closed loop (background here). #OpenAPS is the open source reference design that enables anyone to build their own DIY closed loop artificial pancreas. See www.OpenAPS.org for more about that specifically.)

We’ve instead spent time spreading the word about OpenAPS in other channels (in the Wall Street Journal; on WNYC’s Only Human podcast; in a keynote at OSCON, and many other places like at the White House), further developing OpenAPS algorithms (incorporating “eating soon mode” and temporary targets in addition to building in auto-sensitivity and meal assist features), working our day jobs, traveling, and more of all of the above.

Some of the biggest improvements we’ve made to OpenAPS recently have been usability improvements. In February, someone kindly did the soldering of an Edison/Rileylink “rig” for me. This was just after I did a livestream Q&A with the TuDiabetes community, saying that I didn’t mind the size of my Raspberry Pi rig. I don’t. It works, it’s an artificial pancreas, the size doesn’t matter.

That being said… Wow! Having a small rig that clips to my pocket does wonders for being able to just run out the door and go to dinner, run an errand, go on an actual run, and more. I could do all those things before, but downsizing the rig makes it even easier, and it’s a fantastic addition to the already awesome experience of having a closed loop for the past 18 months (and >11,000 hours of looping). I’m so thankful for all of the people (Pete on Rileylink, Oscar on mmeowlink, Toby for soldering my first Edison rig for me, and many many others) who have been hard at work enabling more hardware options for OpenAPS, in addition to everyone who’s been contributing to algorithm improvements, assisting with improving the documentation, helping other people navigate the setup process, and more!

That leads me to today. I just finished participating in a month-long usability study focused on OpenAPS users. (One of the cool parts was that several OpenAPS users contributed heavily to the design of the study, too!) We tracked every day (for up to 30 days) any time we interacted with the loop/system, and it was fascinating.

At one point, for a stretch of 3 days, we counted how many times we looked at our BGs. Between my watch, 3 phone apps/ways to view my data, the CGM receivers, Scott’s watch, the iPad by the bed, etc: dozens and dozens of glances. I wasn’t too surprised at how many times I glance/notice my BGs or what the loop is doing, but I bet other people are. Even with a closed loop, I still have diabetes and it still requires me to pay attention to it. I don’t *have* to pay attention as often as I would without a closed loop, and the outcomes are significantly better, but it’s still important to note that the human is still ultimately in control and responsible for keeping an eye on their system.

That’s one of the things I’ve been thinking about lately: the need to set expectations when a loop comes out on the commercial market and is more widely available. A closed loop is a tool, but it’s not a cure. Managing type 1 diabetes will still require a lot of work, even with a polished commercial APS: you’ll still need to deal with BG checks, CGM calibrations, site changes, dealing with sites and sensors that fall out or get ripped out…  And of course there will still be days where you’re sensitive or resistant and BGs are not perfect for whatever reason. In addition, it will take time to transition from the standard of care as we have it today (pump, CGM, but no algorithms and no connected devices) to open and/or closed loops.

This is one of the things among many that we are hoping to help the diabetes community with as a result of the many (80+ as of June 8, 2016!) users with #OpenAPS. We have learned a lot about trusting a closed loop system, about what it takes to transition, how to deal if the system you trust breaks, and how to use more data than you’re used to getting in order to improve diabetes care.

As a step to helping the healthcare provider community start thinking about some of these things, the #OpenAPS community submitted a poster that was accepted and will be presented this weekend at the 2016 American Diabetes Association Scientific Sessions meeting. This will be the first data published from the community, and it’s significant because it’s a study BY the community itself. We’re also working with other clinical research partners on various studies (in addition to the usability study, other studies to more thoroughly examine data from the community) for the future, but this study was a completely volunteer DIY effort, just like the entire OpenAPS movement has been.

Our hope is that clinicians walk away this weekend with insight into how engaged patients are and can be with their care, and a new way of having conversations with patients about the tools they are choosing to use and/or build. (And hopefully we’ll help many of them develop a deeper understanding of how artificial pancreas technology works: #OpenAPS is a great learning tool not only for patients, but also for all the physicians who have not had any patients on artificial pancreas systems yet.)

Stay tuned: the poster is embargoed until Saturday morning, but we’ll be sharing our results online beginning this weekend once the embargo lifts! (The hashtag for the conference is #2016ADA, and we’ll of course be posting via @OpenAPS and to #OpenAPS with the data and any insights coming out of the conference.)