Being Shuttleworth Funded with a Flash Grant as an independent patient researcher

Recently, I have been working on helping OpenAPS’ers collect our data and put it to good use in research (both by traditional researchers as well as by fellow patient researchers, or “citizen scientists”). As a result, I have had the opportunity to work closely with Madeleine Ball at Open Humans. (Open Humans is the platform we use for the OpenAPS Data Commons.)

It’s been awesome to collaborate with Madeleine on many fronts. She’s proven herself really willing to listen to ideas and suggestions for things to change, to make it easier both for individuals to donate their data to research and for researchers who want to use the platform. And, despite my not having the same level of technical skills, she shows a deep respect for people of all experience levels and perspectives. She’s also, in general, a really great person.

As someone who is (perhaps uniquely) utilizing the platform as both a data donor and as a data researcher, it has been fantastic to be able to work through the process of data donation, project creation, and project utilization from both perspectives. And, it’s been great to contribute ideas and make tools (like some of my scripts to download and unpack Open Humans data) that can then be used by other researchers on Open Humans.

Madeleine was also selected this year to be a Shuttleworth Fellow, applying “open” principles to change how we share and study human health data, plus exploring new, participant-centered approaches for health data sharing, research, and citizen science. Which means that everything she’s doing is in almost perfect sync with what we are doing in the OpenAPS and #WeAreNotWaiting communities.

What I didn’t know until this past week was that it also meant (as a Shuttleworth Fellow) that she was able to make nominations of individuals for a Shuttleworth Flash Grant, which is a grant made to a collection of social change agents, no strings attached, in support of their work.

I was astonished to receive an email from the Shuttleworth Foundation saying that I had been nominated by Madeleine for a $5,000 Flash Grant, which goes to individuals they would like to support/reward/encourage in their work for social good.

Shuttleworth Funded

I am so blown away by the Flash Grant itself – and the signal that this grant provides. This is the first (of hopefully many) organizations to recognize the importance of supporting independent patient researchers who are not affiliated with an institution, but rather with an online community. It’s incredibly meaningful for this research and work, which is centered around real needs of patients in the real world, to be funded, even to a small degree.

Many non-traditional researchers like me are unaffiliated with a traditional institution or organization. This means we do the research on our own time, funded solely by our own energy (and in some cases, resources). Time in and of itself is a valuable contribution to research (think of the opportunity costs). However, it is also costly to distribute and disseminate ideas learned from patient-driven research to more traditional researchers. Even ignoring travel costs, most scientific conferences do not have a patient research access program, which means patients in some cases are asked to pay $400 (or more) per person for a single day pass to stand beside their poster if it is accepted for presentation at a conference. In some cases, patients have personal resources and determination and are willing to pay that cost. But not every patient is able to do that. (And doing it year over year as they continue to do new ground-breaking research – that adds up, too, especially when you factor in travel, lodging, and the opportunity cost of being away from a day job.)

So what will I use the Flash Grant for? Here’s what I’ve decided to put it toward so far:

#1 – I plan to use it to fund my & Scott’s travel costs this year to ADA’s Scientific Sessions, where our poster on Autotune & data from the #WeAreNotWaiting community will be presented. (I’m still hoping to convince ADA to create a patient researcher program vs. treating us like individuals walking in off the street; but if they again choose not to do so, it will take $800 for Scott and me to stand with the poster during the poster session.) Being at Scientific Sessions is incredibly valuable to us as researchers and developers, because we can have real-time conversations with traditional researchers who have not yet been introduced to some of our tools or the data collected and donated by the community. It’s one of the most valuable places for us to be in person in terms of facilitating new research partnerships, in addition to renewing and establishing relationships with device manufacturers who could (because our stuff is all open source and MIT licensed) utilize our code and tools in commercial devices to more broadly reach people with diabetes.

#2 – Hardware parts. In order to best support the OpenAPS community, Scott and I have also been supporting and contributing to the development of open source hardware like the Explorer Board. Keeping in mind that each version of the board produced needs to be tested to see if the instructions related to OpenAPS need to change, we have been buying every iteration of Explorer Board so we can ensure compatibility and ease of use, which adds up. Having some of this grant funding go toward hardware supplies to support a multitude of setup options is nice!

There are so many individuals who have contributed in various ways to OpenAPS and WeAreNotWaiting and the patient-driven research movements. I’m incredibly encouraged, with a new spurt of energy and motivation, after receiving this Flash Grant to continue to further build upon everyone’s work and to do as much as possible to support every person in our collective communities. Thank you again to Madeleine for the nomination, and to the Shuttleworth Foundation for the Flash Grant, for the financial and emotional support for our community!

Introducing oref1 and super-microboluses (SMB) (and what it means compared to oref0, the original #OpenAPS algorithm)

For a while, I’ve been mentioning “next-generation” algorithms in passing when talking about some of the work that Scott and I have been doing as it relates to OpenAPS development. After we created autotune to help people (even non-loopers) tune underlying pump basal rates, ISF, and CSF, we revisited one of our regular threads of conversations about how it might be possible to further reduce the burden of life with diabetes with algorithm improvements related to meal-time insulin dosing.

This is why we first created meal-assist and then “advanced meal-assist” (AMA), because we learned that most people have trouble with estimating carbs and figuring out optimal timing of meal-related insulin dosing. AMA, if enabled and informed about the number of carbs, is a stronger aid for OpenAPS users who want extra help during and following mealtimes.

Since creating AMA, Scott and I had another idea of a way that we could do even more for meal-time outcomes. Given the time constraints and reality of currently available mealtime insulins (that peak in 60-90 minutes; they’re not instantaneous), we started talking about how to leverage the idea of a “super bolus” for closed loopers.

A super bolus is an approach you can take to give more insulin up front at a meal, beyond what the carb count would call for, by “borrowing” from basal insulin that would be delivered over the next few hours. By adding insulin to the bolus and then low temping for a few hours after that, it essentially “front shifts” some of the insulin activity.
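To make the “borrowing” concrete, here’s a tiny sketch of the manual super-bolus arithmetic. All the numbers (carb ratio, basal rate, borrow window) are hypothetical examples for illustration; this is not OpenAPS code.

```python
# Hypothetical numbers only; this illustrates the manual "super bolus" idea,
# not any actual OpenAPS calculation.

def super_bolus(carbs_g, carb_ratio_g_per_u, basal_rate_u_per_hr, borrow_hours):
    """Return (upfront_bolus_u, zero_temp_hours) for a manual super bolus."""
    meal_bolus = carbs_g / carb_ratio_g_per_u        # insulin the carbs alone call for
    borrowed = basal_rate_u_per_hr * borrow_hours    # basal "borrowed" from the next few hours
    # Deliver the borrowed basal up front, then run a zero temp for that window,
    # which front-shifts the insulin activity without changing the total dose.
    return meal_bolus + borrowed, borrow_hours

print(super_bolus(60, 10, 1.0, 2))  # (8.0, 2): 6U for the carbs + 2U borrowed, then 2h of zero temp
```

The key point is that the total insulin over the window is unchanged; only its timing shifts earlier.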

Like a lot of things done manually, it’s hard to do safely and achieve optimal outcomes. But, like a lot of things, we’ve learned that by letting computers do more precise math than we humans are wont to do, OpenAPS can actually do really well with this concept.

Introducing oref1

Those of you who are familiar with the original OpenAPS reference design know that setting ONLY temporary basal rates was a big safety constraint. Why? Because it’s less of an issue if a temporary basal rate is issued over and over again; and if the system stops communicating, the temp basal eventually expires and the pump resumes normal activity. That was a core part of oref0. So, to distinguish this new set of algorithm features that departs from that aspect of the oref0 approach, we are introducing it as “oref1”. Most OpenAPS users will continue to use only oref0, as they have been doing. oref1 features should only be enabled specifically by advanced users who want to test or use them.

The notable difference between the oref0 and oref1 algorithms is that, when enabled, oref1 makes use of small “supermicroboluses” (SMB) of insulin at mealtimes to more quickly (but safely) administer the insulin required to respond to blood sugar rises due to carb absorption.

Introducing SuperMicroBoluses (or “SMB”)

The microboluses administered by oref1 are called “super” because they use a miniature version of the “super bolus” technique described above.  They allow oref1 to safely dose mealtime insulin more rapidly, while at the same time setting a temp basal rate of zero of sufficient duration to ensure that BG levels will return to a safe range with no further action even if carb absorption slows suddenly (for example, due to post-meal activity or GI upset) or stops completely (for example due to an interrupted meal or a carb estimate that turns out to be too high). Where oref0 AMA might decide that 1 U of extra insulin is likely to be required, and will set a 2U/hr higher-than-normal temporary basal rate to deliver that insulin over 30 minutes, oref1 with SMB might deliver that same 1U of insulin as 0.4U, 0.3U, 0.2U, and 0.1U boluses, at 5 minute intervals, along with a 60 minute zero temp (from a normal basal of 1U/hr) in case the extra insulin proves unnecessary.
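As a toy illustration of how a required dose might get chopped into a series of shrinking microboluses, here’s a small simulation. It repeatedly applies the “no more than 1/3 of the insulin known to be required” cap described in the safety section of this post, with a made-up 0.1U pump step; the real oref1 recalculates the required amount from fresh CGM data every 5 minutes rather than working down a fixed total, so actual sequences will differ.

```python
# Toy simulation, not oref1 code: deliver at most 1/3 of the remaining
# required insulin per 5-minute cycle, rounded down to a made-up 0.1U
# pump step, as if BG data kept calling for the same total.

def smb_sequence(required_u, pump_step_u=0.1):
    steps_remaining = round(required_u / pump_step_u)  # work in integer pump steps
    doses = []
    while steps_remaining // 3 > 0:                    # 1/3 cap, one SMB per cycle
        dose_steps = steps_remaining // 3
        doses.append(round(dose_steps * pump_step_u, 2))
        steps_remaining -= dose_steps
    return doses

print(smb_sequence(1.0))  # [0.3, 0.2, 0.1, 0.1, 0.1] -- the remainder stays withheld
```

Note how the doses taper off on their own: each SMB is capped relative to what is still outstanding, so the system never commits more insulin than the running zero temp can offset.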

As with oref0, the oref1 algorithm continuously recalculates the insulin required every 5 minutes based on CGM data and previous dosing, which means that oref1 will continually issue new SMBs every 5 minutes, increasing or reducing their size as needed as long as CGM data indicates that blood glucose levels are rising (or not falling) relative to what would be expected from insulin alone.  If BG levels start falling, there is generally already a long zero temp basal running, which means that excess IOB is quickly reduced as needed, until BG levels stabilize and more insulin is warranted.

Safety constraints and safety design for SMB and oref1

Automatically administering boluses safely is of course the key challenge with such an algorithm, as we must find another way to avoid the issues highlighted in the oref0 design constraints.  In oref1, this is accomplished by using several new safety checks (as outlined here), and verifying all output, before the system can administer a SMB.

At the core of the oref1 SMB safety checks is the concept that OpenAPS must verify, via multiple redundant methods, that it knows about all insulin that has been delivered by the pump, and that the pump is not currently in the process of delivering a bolus, before it can safely do so.  In addition, it must calculate the length of zero temp required to eventually bring BG levels back in range even with no further carb absorption, set that temporary basal rate if needed, and verify that the correct temporary basal rate is running for the proper duration before administering a SMB.
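The zero-temp sizing can be sketched as back-of-the-envelope arithmetic. The real oref1 calculation also factors in BG, targets, and insulin sensitivity; this only shows the core idea that the basal withheld must be able to cover the extra insulin being delivered.

```python
# Back-of-the-envelope only: run a zero temp long enough that the basal
# withheld could offset the SMB insulin, if it proves unnecessary.

def zero_temp_minutes(extra_insulin_u, basal_rate_u_per_hr):
    """Minutes of zero temp whose withheld basal equals the extra insulin."""
    return (extra_insulin_u / basal_rate_u_per_hr) * 60

print(zero_temp_minutes(1.0, 1.0))  # 60.0 -- matches the 1U-at-1U/hr example above
```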

To verify that it knows about all recent insulin dosing and that no bolus is currently being administered, oref1 first checks the pump’s reservoir level, then performs a full query of the pump’s treatment history, calculates the required insulin dose (noting the reservoir level the pump should be at when the dose is administered) and then checks the pump’s bolusing status and reservoir level again immediately before dosing.  These checks guard against dosing based on a stale recommendation that might otherwise be administered more than once, or the possibility that one OpenAPS rig might administer a bolus just as another rig is about to do so.  In addition, all SMBs are limited to 1/3 of the insulin known to be required based on current information, such that even in the race condition where two rigs nearly simultaneously issue boluses, no more than 2/3 of the required insulin is delivered, and future SMBs can be adjusted to ensure that oref1 never delivers more insulin than it can safely withhold via a zero temp basal.
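A rough sketch of the “verify immediately before dosing” idea follows. The pump interface and names here are made up for illustration; the real checks live in oref0’s pump-communication code and are considerably more involved.

```python
# Illustration only: a made-up pump interface standing in for the real
# reservoir/status checks oref1 performs right before issuing an SMB.

class FakePump:
    def __init__(self, reservoir_u, bolusing=False):
        self._reservoir = reservoir_u
        self._bolusing = bolusing
    def reservoir_u(self):
        return self._reservoir
    def is_bolusing(self):
        return self._bolusing

def safe_to_smb(pump, expected_reservoir_u, tolerance_u=0.05):
    """Dose only if the pump is idle and its reservoir matches what the
    treatment history predicts (i.e., no insulin we don't know about)."""
    if pump.is_bolusing():
        return False   # the pump -- or another rig -- is mid-bolus
    if abs(pump.reservoir_u() - expected_reservoir_u) > tolerance_u:
        return False   # stale recommendation: unknown insulin was delivered
    return True

print(safe_to_smb(FakePump(150.0), 150.0))                 # True
print(safe_to_smb(FakePump(149.0), 150.0))                 # False: reservoir mismatch
print(safe_to_smb(FakePump(150.0, bolusing=True), 150.0))  # False: bolus in progress
```

Combined with the 1/3 cap, failing either check simply means skipping this cycle’s SMB; the next 5-minute cycle recalculates from scratch.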

In some situations, a lack of BG data or intermittent pump communications can prevent SMBs from being delivered promptly.  In such cases, oref1 attempts to fall back to oref0 + AMA behavior and set an appropriate high temp basal.  However, if it is unable to do so, manual boluses are sometimes required to finish dosing for the recently consumed meal and prevent BG from rising too high.  As a result, oref1’s SMB features are only enabled as long as carb impact is still present: after a few hours (once all carbs have decayed), all such features are disabled, and oref1-enabled OpenAPS instances return to oref0 behavior while the user is asleep or otherwise not engaging with the system.

In addition to these safety status checks, the oref1 algorithm’s design helps ensure safety.  As already noted, setting a long-duration temporary basal rate of zero while super-microbolusing provides good protection against hypoglycemia, and very strong protection against severe hypoglycemia, by ensuring that insulin delivery is zero when BG levels start to drop, even if the OpenAPS rig loses communication with the pump, and that such a suspension is long enough to eventually bring BG levels back up to the target range, even if no manual corrective action is taken (for example, during sleep).  Because of these design features, oref1 may even represent an improvement over oref0 w/ AMA in terms of avoiding post-meal hypoglycemia.

In real world testing, oref1 has thus far proven at least as safe as oref0 w/ AMA with regard to hypoglycemia, and better able to prevent post-meal hyperglycemia when SMB is ongoing.

What does SMB “look” like?

Here is what SMB activity currently looks like when displayed on Nightscout, and my Pebble watch:

First oref1 SMB OpenAPS test by @DanaMLewis
First oref1 SMB OpenAPS test as seen on @DanaMLewis Pebble watch

How do features like this get developed and tested?

SMB, like any other advanced feature, goes through extensive testing. First, we talk about it. Then, it gets written up in plain language as an issue for us to track discussion and development. Then we begin to develop the feature, and Scott and I test it on a spare pump and rig. When it gets to the point of being ready to test in the real world, I test it during a time period when I can focus on observing and monitoring what it is doing. Throughout all of this, we continue to make tweaks and changes to improve what we’re developing.

After several days (or, for something this different, weeks) of Dana-testing, we then have a few other volunteers begin to test it on spare rigs. They follow the same process of monitoring it on spare rigs and giving feedback and helping us develop it before choosing to run it on a rig and a pump connected to their body. More feedback, discussion, and observation.

Eventually, it gets to a point where it is ready to go to the “dev” branch of OpenAPS code, which is where this code is now heading. Several people will review the code and approve it to be added to the “dev” branch. We will then have others test the “dev” branch with this and any other features or code changes – both people who want to enable this code feature, and people who don’t (to make sure we don’t break existing setups). Eventually, after numerous thumbs up from multiple members of the community who have helped us test different use cases, that code from the “dev” branch will be “approved” and will go to the “master” branch of code, where it is available to a more typical user of OpenAPS.

However, not everyone automatically gets this code or will use it. People already running on the master branch won’t get this code or be able to use it until they update their rig. Even then, unless they were to specifically enable this feature (or any other advanced feature), they would not have this particular segment of code drive any of their rig’s behavior.

Where to find out more about oref1, SMB, etc.:

  • We have updated the OpenAPS Reference Design to reflect the differences between oref0 and the oref1 features.
  • OpenAPS documentation about oref1, which as of July 13, 2017 is now part of the master branch of oref0 code.
  • Ask questions! Like all things developed in the OpenAPS community, SMB and oref1-related features will evolve over time. We encourage you to hop into Gitter and ask questions about these features & whether they’re right for you (if you’re DIY closed looping).

Special note of thanks to several people who have contributed to ongoing discussions about SMB, plus the very early testers who have been running this on spare rigs and pumps. Plus always, ongoing thanks to everyone who is contributing and has contributed to OpenAPS development!

Edison Foundation honors #OpenAPS community with 2017 Edison Innovation Award

One of my favorite things about open source projects is the amazing humans behind them. OpenAPS came into existence because of numerous open source efforts, and it has continued to evolve, in both software and hardware, because of ongoing contributions in the open source world.

Some of the contributors and their stories are fairly well known (John Costik’s work to pull data from the CGM originally, which allowed Scott/me to create #DIYPS; Ben West’s work to study pump RF communications and create tools to communicate with the pump, in addition to his work on the building blocks that make up the openaps toolkit). Others have worked on areas that have drastically changed the trajectory of our community’s tools, as well. And two of the individuals who we also owe repeated thanks for facilitating our ability to utilize pocket-sized pancreases are Oskar Pearson and Pete Schwamb.

  • Oskar wanted to find a way to replace the Carelink stick, which has dismal range. (How dismal? Back in the day, I used to have a Pi on the bedside table connected to a Carelink stick under the mattress, plus another Carelink hanging over the middle of my bed to try to keep me looping all night in case I rolled over.) He ultimately leveraged some of Ben and Pete’s other work and created “mmeowlink”, which enabled other radio sticks (think TI cc1111 stick & other radios using the cc1110 and cc1111 chipsets) to similarly communicate with our loopable pumps.  He was also (I think) one of the first users of the Intel Edison for OpenAPS. When he shared his pictures showing the potential downsized rigs, jaws dropped across the Internet. This led to a bunch of new hardware options for OpenAPS rigs; Pi/Carelink was no longer the sole go-to. You could pick Pi/TI; Edison/TI; Pi/Slice of Radio; etc. And the range on these radio sticks is such that a single rig and radio stick can read (in most cases) across room(s). It greatly improved the reliability of real-world looping, and was especially a game changer for those on the go!
  • Pete is a wizard in the world of device RF communications. He created the RileyLink (named after his daughter) to bridge communications between a phone and any Sub-1 GHz device (like, say, an insulin pump…). But he’s also done some other stellar projects – like subg_rfspy (general purpose firmware for cc111x for sub-GHz RF comms, which is what is leveraged in mmeowlink); and also ccprog (which enables you to flash the cc1110 radio chip on an Explorer Board (see below) without having any separate equipment). And, as someone who has been building boards and decoding RF stuff for years, he’s also incredibly generous with sharing his knowledge with other people building open source hardware boards, including with those of us who collaborated on the Explorer Board.

In addition, there have been other people outside the OpenAPS community who have been touched by our stories or by diabetes in their families and have also stepped up to contribute to open source projects. This is how the Explorer Board came into existence. Someone from Intel had stumbled across OpenAPS on Twitter and reached out to meet up with Scott and me when he was in Seattle; he invited a hardware board designer he knew (Morgan Redfield) to stop by the meetup and offered to help initiate development of a smaller board. And amazingly, that’s exactly what happened. Morgan collaborated with us & others like Pete to design, build, and iterate on a small, open source hardware board (the “900 Mhz Explorer Board“) for the Edison that had a built in cc1110 radio, further allowing us to reduce the size of our rigs.

Eventually others at Intel heard about this collaboration, and we (OpenAPS and Morgan) were nominated for the Edison Foundation’s 2017 Edison Innovation Award for “Best Use of the Intel Edison Module”.

I was blown away to find out tonight that we are honored to actually receive the award on behalf of everyone who made these projects possible. I’m incredibly proud of this community and the dozens of people who have contributed in so many ways to a) make DIY artificial pancreases a thing and b) make it more feasible for hundreds of people to be DIYing these themselves with open source software and hardware. (And, this is very much in line with Thomas Edison’s work – the Edison Foundation spoke tonight about how Edison really created the group collaboration and innovation process!)

Representing the OpenAPS community and accepting the "Best Use of Intel Edison" award

Big thanks to Intel and the Edison Foundation for highlighting the community’s efforts…and endless hugs and ongoing appreciation for everyone who has contributed to OpenAPS and other #WeAreNotWaiting projects!

Write It Do It: Tips for Troubleshooting DIY Diabetes Devices (#OpenAPS or otherwise)

When I was in elementary school, I did Science Olympiad. (Are you surprised? Once a geek, always a geek…) One of my favorite “events” was “Write It Do It”, where one person would get a sculpture/something constructed (could be Legos, could be other stuff) and you had to write down instructions for telling someone else how to build it. Your partner got your list of instructions, the equipment, and was tasked with re-building the structure.

Building open source code and tools is very similar, now that I look back on the experiences of having built #DIYPS and then working on #OpenAPS. First step? Build the structure. Second step? Figure out how to tell someone ELSE how to do it. (That’s what the documentation is). But then when someone takes the list of parts and your instructions off elsewhere, depending on how they interpreted the instructions…it can end up looking a little bit different. Sometimes that’s ok, if it still works. But sometimes they skip a step, or you forget to write down something that looks obvious to you (but leaves them wondering how one part got left out) – and it doesn’t work.

Unlike in Science Olympiad, where you were “scored” on the creation and that was that, in DIY diabetes this is where you next turn to asking questions and troubleshooting about what to change/fix/do next.

But, sometimes it’s hard.

If you’re the person building a rig:

  • You know what you’re looking at, what equipment you used to get here, what step you’re on, what you’ve tried that works and what hasn’t worked.
  • You either know “it doesn’t work” or “I don’t know what to do next.”

If you’re the troubleshooter:

  • You only know generally how it can/should work and what the documentation says to do; you only know as much about the specific problem as is shared with you in the context of a question.

As someone who spends a lot of time in the troubleshooter role these days, trying to answer questions or assist people in getting past where they’re stuck, here are my tips to help you if you’re building something DIY and are stuck.

Tips_online_troubleshooting_DIY_diabetes_DanaMLewis

DO:

  1. Start by explaining your setup. Example: “I’m building an Edison/Explorer Board rig, and am using a Mac computer to flash my Edison.”
  2. Explain the problem as specifically as you can. Example: “I am unable to get my Edison flashed with jubilinux.”
  3. Explain what step you’re stuck on, and in which page/version of the docs. Example: “I am following the Mac Edison flashing instructions, and I’m stuck on step 1-4.” Paste a URL to the exact page in the docs you’re looking at.  Clarify whether your problem is “it doesn’t work” or “I don’t know what to do next.”
  4. Explain what it’s telling you and what you see. Pro tip: Copy/paste the output that the computer is giving you rather than trying to summarize the error message. Example: “I can’t get the login prompt; it says ‘can’t find a PTY’.”
    (This is ESPECIALLY important for OpenAPS’ers who want help troubleshooting logs when they’ve finished the setup script – the status messages in there are very specific and helpful to other people who may be helping you troubleshoot.)
  5. Be patient! You may have tagged someone with an @mention; and they may be off doing something else. But don’t feel like you must tag someone with an @mention – if you’re posting in a specific troubleshooting channel, chances are there are numerous people who can (and will) help you when they are in channel and see your message.
  6. Be aware of what channel you’re in and pros/cons for what type of troubleshooting happens where.
    My suggestions:

    1. Facebook – best for questions that don’t need an immediate fix, or are more experience related questions. Remember you’re also at the mercy of Facebook’s algorithm for showing a post to a particular group of people, even if someone’s a member of the same group. And, it’s really hard to do back-and-forth troubleshooting because of the way Facebook threads posts. However, it IS a lot easier to post a picture in Facebook.
    2. Gitter – best for detailed, and hard, troubleshooting scenarios and live back-and-forth conversations. It’s hard to do photos on the go from your mobile device, but it’s usually better to paste logs and error output messages as text anyway (and there are some formatting tricks you can learn to help make your pasted text more readable, too). Those who are willing to help troubleshoot will generally skim and catch up on the channel when they get back, so you might have a few hours delay and get an answer later, if you still haven’t resolved or gotten an answer to your question from the people in channel when you first post.
    3. Email groups – best for when no one in the other channels knows the answer, or when you have a general discussion starter that isn’t time-sensitive.
  7. Start with the basic setup, and come back and customize later. The documentation is usually written to support several kinds of configurations, but the general rule of thumb is get something basic working first, and then you can come back later and add features and tweaks. If you try to skip steps or customize too early, it makes it a lot harder to help troubleshoot what you’re doing if you’re not exactly following the documentation that’s worked for dozens of other people.
  8. Pay it forward. You may not have a certain skill, but you certainly have other skills that can likely help. Don’t be afraid to jump in and help answer questions of things you do know, or steps you successfully got through, even if you’re not “done” with your setup yet. Paying it forward as you go is an awesome strategy :) and helps a lot!

SOME THINGS TO TRY TO AVOID:

  1. Avoid vague descriptions of what’s going on, and using the word “it”. Troubleshooter helpers have no idea which “it” or what “thing” you’re referring to, unless you tell them. Nouns are good. :) Saying “I am doing a thing, and it stopped working/doesn’t work” requires someone to play the game of 20 questions to draw out the above level of detail, before they can even start to answer your question of what to do next.
  2. Don’t get upset at people/blame people. Remember, most of the DIY diabetes projects are created by people who donated their work so others could use it, and many continue to donate their time to help other people. That’s time away from their families and lives. So even if you get frustrated, try to be polite. If you get upset, you’re likely to alienate potential helpers and revert into vagueness (“but it doesn’t work!”) which further hinders troubleshooting. And, remember, although these tools are awesome and make a big difference in your life – a few minutes, or a few hours, or a few days without them will be ok. We’d all prefer not to go without, which is why we try to help each other, but it’s ok if there’s a gap in use time. You have good baseline diabetes skills to fall back on during this time. If you’re feeling overwhelmed, turn off the DIY technology, go back to doing things the way you’re comfortable, and come back and troubleshoot further when you’re no longer feeling overwhelmed.
  3. Don’t go radio silent: report back what you tried and if it worked. One of the benefits of these channels is many people are watching and learning alongside you; and the troubleshooters are also learning, too. Everything from “describing the steps ABC way causes confusion, but saying XYZ seems to be more clear” and even “oh wow, we found a bug, 123 no longer is ideal and we should really do 456.” Reporting back what you tried and if it resolved your issue or not is a very simple way to pay it forward and keep the community’s knowledge base growing!
  4. Try not to get annoyed if someone helping out asks you to switch channels to continue troubleshooting. Per the above, sometimes one channel has benefits over the other. It may not be your favorite, but it shouldn’t hurt you to switch channels for a few minutes to resolve your issue.
  5. Don’t wait until you’re “done” to pay it forward. You definitely have things to contribute as you go, too! Don’t wait until you’re done to make edits (PRs) to the documentation. Make edits while they’re fresh in your mind (and it’s a good thing to do while you’re waiting for things to install/compile ;)).

These are the tips that come to mind as I think about how to help people seek help more successfully online in DIY diabetes projects. What tips would you add?

Scuba diving, snorkeling, and swimming with diabetes (and #OpenAPS)

tl;dr – yes, you can scuba dive with diabetes, snorkel with diabetes, and swim with diabetes! Here’s what you need to know.

I meant to write this post before I left for a two-week Hawaii trip, and since I answered about a question a day on various platforms as I posted pictures from the trip, I really wish I had done it ahead of time. Oh well. :) I especially wish someone had written this post for me 2 years ago, before my first scuba dive, because I couldn’t find much practical information on approaches for dealing with all the details of scuba diving with diabetes and an insulin pump and CGM and now closed loops. Scuba diving, snorkeling, and swimming with diabetes are actually pretty common, so here are a few tips to keep in mind, before diving (pun intended) into some explanations of what I think about for each activity diabetes-wise.

scuba_diving_with_diabetes_tips_water_activities_by_Dana_M_Lewis

General tips for water activities when living with diabetes:

  1. Most important: be aware of your netIOB going into the activity. Positive netIOB plus activity of any kind = expedited low BG. This is the biggest thing I do to avoid lows while scuba diving or snorkeling – trying to time breakfast or the previous meal to be a few hours prior so I don’t have insulin peaking and accelerated by the activity when I’m out in the water and untethered from my usual devices.
  2. Second most important: your CGM sensor and transmitter can get wet (shower, pools, hot tubs, oceans, etc.), but keep in mind they can’t read underwater. And sometimes the sensor gets waterlogged from short or long exposure to the water, so it may take a while to read even after you get it above water or dry off. And, historically, I’ve had sensors come back and the CGM will sometimes read falsely high (100-200 points higher than actual BG), so exercise extreme caution; I highly recommend fingerstick testing before dosing insulin after prolonged water exposure.
  3. Know which of your devices are waterproof, watertight, etc. Tip: most pumps are not waterproof. Some are watertight*. The * is because with usual wear and tear and banging into things, small surface cracks start showing up and make your pump no longer even watertight, so even a light splash can kill it. Be aware of the state of your pump and protect it accordingly, especially if you have a limited edition super special super rare DIY-loopable pump. I generally take a baggie full of different sized baggies to put pump/CGM/OpenAPS rig into, and I also have a supposedly waterproof bag that seals that I sometimes put my bagged devices into. (More on that below).
    1. And in general, it’s always wise to have a backup pump (even if it’s non-loopable) on long/tropical/far away trips, and many of the pump companies have a loaner program for overseas/cruise/tropical travel.
  4. Apply sunscreen around your sites/sensors, because sunburn plus applying or removing them hurts. However, as I learned on this trip, don’t put TOO much (or any) sunscreen directly on top of the adhesive, as it may loosen the adhesive (just surrounding the edges is fine). I usually use a rub-on sunscreen around the edges of my pump site and CGM sensor, and do the rest of my body with a spray sunscreen. And pack extra sites and sensors on top of your extras.

Why extras on top of your extras? Because you don’t want a vacation like mine, where I managed to go through 5 pump site catastrophes in 72 hours, ran out of pump sites, and worried about that instead of enjoying the vacation. Here’s what happened on my last vacation, pump-site wise:

  • Planned to change my site the next morning instead of at night, because then I would properly use up all the insulin in my reservoir. So I woke up, put in a new pump site (B) on my back hip, and promptly went off to walk to brunch with Scott.
  • Sitting down and waiting for food, I noticed my BG was rocketing high. I first guessed that I forgot to exit the prime screen on the pump, which means it wasn’t delivering any insulin (even basal). Wrong. As I pulled my pump off my waist band, I could finally hear the “loud siren escalating alarm” that is “supposed” to be really audible to anyone…but wasn’t audible to me outside on a busy street. Scott didn’t hear it, either. That nice “siren” alarm was “no delivery”, which meant there was something wrong with the pump site and I hadn’t been getting any insulin for the last hour and a half. Luckily, I have gotten into the habit of keeping the “old” pump site (A) on in case of problems like this, so I swapped the tubing to connect to the “old” site A and an hour or so later as insulin started peaking, felt better. I pulled site B out, and it was bent (that’s why it was no delivery-ing). I waited until that afternoon to put in the next pump site (C) into my leg. It was working well into dinner, so I removed site A.
  • However, that night when I changed clothes after dinner, site C ripped out. ARGHHHH. And I had removed site A, so I  had to put on another site (D). Bah, humbug. Throw in someone bumping a mostly-full insulin vial off the counter and it shattering, and I was in one of my least-pleased-because-of-diabetes moods, ever. It was a good reminder of how much a closed loop is not a cure, because we still have to deal with bonked sites and sites in general and all this hoopla.
  • Site D lasted the next day, while we went hiking at Haleakala (a 12.2 mile hike, and amazingly neither my site nor my sensor acted up!). However, on the third day of this adventure, I put on sunscreen to go to the beach with the whole family. When we came back from the beach, I went to remove my cover up to shower off sand before getting into the pool. As my shirt came over my head, I saw something white fly by, which turned out to be my 4th pump site, flying around on the end of the pump tube rather than being connected to my body. There went site D. In went my fifth site (E), which I tacked down onto my body with extra flexifix tape that I usually use for CGM sensors because I. Was. Fed. Up. With. Pump. Sites!
  • Thankfully, site E lived a normal life and lasted until I got home and did my next regular site change, and all went back to normal.

Lessons learned about pump sites: I repeat, don’t sunscreen too much on the adhesive, just sunscreen AROUND the adhesive. And pack extras, because I went through ~2 weeks of pump sites in 3 days, which I did not expect – luckily I had plenty of extra and extras behind those!

Now on to the fun stuff.

Scuba Diving with diabetes:

  • 2 years ago was my “Discovery” dive, where you aren’t certified, but they teach you the basics and handle all the equipment for you, so you just do some safety tutorials and go down with a guide who keeps you safe. For that dive, I couldn’t find a lot of good info about scuba diving with diabetes, other than logical advice about the CGM sensor not transmitting under water, the receiver not being waterproof, and the pump not being waterproof. I decided to target my BG in advance to be around 180 mg/dl to avoid lows during the dive, and for extra safety eat some skittles before I went down – plus I suspended and removed my pump. Heh. That worked too well, and I was high in the mid-200s in between my two dives, so I found myself struggling to peel my wetsuit off in between dives to connect my pump and give a small bolus. The resulting high feeling after the second dive when my BG hadn’t re-normalized yet, plus the really choppy waves, made me sea-sick. Not fun. But actually diving was awesome and I didn’t have any lows.
    • Pro tip #1 for scuba diving with diabetes: If you can, have your pump site on your abdomen, arm, or other as-easy-as-possible location to reconnect your pump for between-dive boluses so you don’t have to try to get your arm down the leg of your wetsuit to re- and disconnect.
  • I decided I wanted to get PADI certified to scuba dive. I decided to do the lessons (video watching and test taking) and pool certification and 2/4 of my open water dives while on a cruise trip last February. Before getting in the pool, I didn’t do anything special other than avoid having too much (for me that’s >.5u) of netIOB. For the open water dives at cruise ports, I did the same thing. However, due to the excitement/exertion of the first long dive, along with having to do some open water safety training after the first dive but before getting out (and doing my swim test in choppy open water), I got out of the water after that to find that I was low. I had to take a little bit longer (although maybe only 10 extra minutes) than the instructor wanted to finish waiting for my BG to come up before we headed out to the second dive. I was fine during and after the second dive, other than being exhausted.
    • Pro tip #2 for scuba diving with diabetes: Some instructors or guides get freaked out about the idea of having someone diving with diabetes. Get your medical questionnaire signed by a doctor in advance, and photocopy a bunch so you can take one on every trip to hand to people so they can cover themselves legally. Mostly, it helps for you to be confident and explain the safety precautions you have in place to take care of yourself. It also helps if you are diving with a buddy/loved one who understands diabetes and is square on your safety plan (what do you do if you feel low? how will you signal that? how will they help you if you need help in the water vs. on the boat, etc.?). For my training dives, because Scott was not with me, I made sure my instructor knew what my plan was (I would point to my arm where my sensor was if I felt low and wanted to pause/stop/head to the surface, compared to the other usual safety signals).
  • This past trip in Hawaii, I was finishing off a cold at the beginning, so at the end of the trip I started with a shore dive so I could go slow and make sure it was safe for me to descend. I was worried about going low on this one, since we had to lug our gear a hundred feet or so down to the beach and then into the water (and I’d never done a shore dive prior to this). I did my usual prep: set a temp basal of 0 on my pump for a few hours (so it could track IOB properly) and suspended; placed it, my CGM, and my OpenAPS rigs in baggies in my backpack; and, after confirming that my BG was flat at a good place without IOB, didn’t eat anything extra. We went out slowly, had a great dive (yay, turtles), and I was actually a little high coming back up after the dive rather than low. My CGM didn’t come back right away, so I tested with a fingerstick, hooked my pump back up right away, and gave a bolus to make up for the missed insulin during the dive. I did that before we headed off the beach and up to clean off our gear.
    • Pro tip #3 for scuba diving with diabetes: Don’t forget that insulin takes 60-90 minutes to peak, so if you’ve been off your pump and diving for a while, even if you are low or fine in that moment, that missing basal will impact you later on. Often if I am doing two dives, even with normal BG levels I will do a small bolus in between to be active by the time I am done with my second dive, rather than going 3+ hours with absolutely no insulin. You need some baseline insulin even if you are very active.
  • While in Hawaii, we also got up before the crack of dawn to head out and do a boat dive at Molokini. It was almost worth the 5am wakeup (I’m not a morning person :)). As soon as I woke up at 5am, I did an “eating soon” and bolused fully for my breakfast, knowing that we’d be getting on the boat at 6:30amish (peak insulin time), but it’d take a while to get out to the dive site (closer to 7:30am), so it was better to get the breakfast bolus in and let it finish counteracting the carbs. I did, but still ran a little higher than I would have liked while heading out, so I did another small correction bolus about half an hour before I temped to zero, suspended, and disconnected and baggied/bagged/placed the bag up on the no-water shelf on the boat. I then did the first dive, which was neat because Molokini is a cool location, and it was also my first “deep” dive, where we went down to ~75 feet. (My previous dives had all been no deeper than ~45 feet.) Coming back onto the boat, I did my usual routine of getting the gear off, then finding a towel to dry my hands and do a fingerstick BG test to see where I was. In this case, 133 mg/dl. Perfect! It would take almost an hour for everyone to get back on the boat and then move to dive spot #2, so I peeled down my wetsuit and reconnected my pump to get normal basal during this time and also do a small bolus for the bites of pineapple I was eating. (Given the uncertainties about CGM accuracy coming out of prolonged water exposure, since readings sometimes run 100+ points high for me, I chose not to have the loop running during this dive and just manually adjusted as needed.) We got to spot #2 and went down for the dive, where we saw sharks, eels, and some neat purple-tailed fish. By the end of the dive, I started to feel tired, and also felt hungry. Those are the two signs I feel underwater that probably translate to being low, so I was the first from our group to come up and head back to the boat.
I got on the boat, removed gear, dried hands, tested, and…yep. 73 mg/dl. Not a bad low, but I’m glad I stopped when I did, because it’s always better to be sure and safe than not know. I had a few skittles while reconnecting my pump, and otherwise was fine and enjoyed the rest of the experience including some epic dolphin and whale watching on the return boat ride back to the harbor!
    • Pro tip #4 for scuba diving with diabetes: You may or may not be able to feel lows underwater; but listening to your body and paying attention to changes, low-related or not, is always a really good idea when scuba diving. I haven’t dived enough (7 dives total now?) or had enough lows while diving to know for sure what my underwater low symptoms are, but fatigue + hunger are very obvious to me underwater. Again, you may want to dive with a buddy and have a signal (like pointing to the part of your body that has the CGM) if you want to go up and check things out. Some things I read years ago talked about consuming glucose under water, but that seems above my skill level, so I don’t think I’ll be the type of diver who does that – I’d rather come to the surface and have someone hand me something to eat from the boat, or shorten the dive and get back on the boat/on shore to take care of things.

All things considered, scuba diving with diabetes is just like anything else with diabetes – it mostly just takes planning ahead, extra snacks (and extra baggies) to have on hand, and you can do it just like anyone else. (The real pain and suffering of scuba diving in my opinion comes not from high or low BGs; but rather pulling hair out of your mask when you take it off after a dive! Every time = ouch.)

Snorkeling with diabetes:

  • Most of my snorkeling experiences/tips sound very similar to the scuba diving ones, so read the above if you haven’t. Remember:
    • Don’t go into a snorkel with tons of positive IOB.
    • Have easy-access glucose supplies in the outer pockets of your bag – you don’t want to have to be digging into the bottom of your beach bag to get skittles out when you’re low!
    • Sunscreen your back well 😉 but don’t over-sunscreen the adhesive on sites and sensors!
    • Make sure your pump doesn’t get too hot while you’re out snorkeling if you leave it on the beach (cover it with something).
    • You could possibly do baggies inside a waterproof bag and take your pump/cgm/phone out into the water with you. I did that two years ago when I didn’t trust leaving my pump/receiver/phone on shore, but even with a certified waterproof bag I spent more time worrying about that than I did enjoying the snorkel. Stash your pump/gear in a backpack and cover it with a towel, or stick it in the trunk/glove compartment of your car, etc.
    • Remember CGMs may not read right away, or may read falsely high, so fingerstick before correcting for any highs or otherwise dosing if needed.

Swimming with diabetes:

  • Same deal as the above described activities, but with less equipment/worries. Biggest things to think about are keeping your gear protected from splashes which seem more common poolside than oceanside…and remember to take your pump off, phone or receiver out of your pocket, etc. before getting in the water!

Wait, all of this has been about pump/CGM. What about closed looping? Can you #OpenAPS in the water?

    • If you don’t have your pump on (in the water), and you don’t have CGM data (in the water, because it can’t transmit there), you can’t loop. So for the most part, you don’t closed loop DURING these activities, but it can be incredibly helpful (especially afterward to make up for the missing basal insulin) to have once you get your pump back on.

However, if your CGM is reading falsely high because it’s waterlogged, you may want to set a high temporary target or turn your rig off during that time until it normalizes. And follow all the same precautions about baggies/waterproofing your rig, because unlike the pump, it’s not designed for even getting the lightest of splashes on it, so treat it like you treat your laptop. For my Hawaii trip, I often had my #OpenAPS rig in a baggie inside of my bag, so that when my pump was on and un-suspended and I had CGM data, it would loop – however, I kept a closer eye on my BGs in general, including how the loop was behaving, in the hour following water activities since I know CGM is questionable during this time.

I’m really glad I didn’t let diabetes stop me from trying scuba diving, and I hope blog posts like this help you figure out how you need to plan ahead for trying new water activities. I’m thankful for the technology of pumps and CGMs and tools like #OpenAPS that make it even easier for us to go climb mountains and scuba dive while living with diabetes (although not in the same day ;)).

UPDATE in 2023: I went scuba diving recently using a Dexcom G6, and it did not have any issues once out of the water with falsely high readings! It reconnected instantly (no delay) to my phone once I was back in range and backfilled correctly and had a correct value for the most recent value. So, this is a huge improvement beyond what I described above with earlier generation (e.g., G4 and G5) sensors, but it still has the downside that it can’t transmit data underwater. You can also read here about how I use Libre for underwater reading when I’m doing several water activities and find it worth my while to invest in a single Libre sensor for having CGM data underwater.

Making it possible for researchers to work with #OpenAPS or general Nightscout data – and creating a complex json to csv command line tool that works with unknown schema

This is less of an OpenAPS/DIYPS/diabetes-related post, although that is normally what I blog about. However, since we created the #OpenAPS Data Commons on Open Humans to allow those of us who desire to donate our diabetes data to research, I have been spending a lot of time figuring out the process, from uploading your data to how data is managed and shared securely with researchers. The hardest part is helping researchers figure out how to handle the data – because we PWDs produce a lot of data :) . So this post explains some of the data management challenges in getting it to a researcher-friendly format. I have been greatly helped over the years by general-purpose open source work from other people, and one of the things that helps ME the most as a non-traditional programmer is plain language posts explaining the thought process behind the tools and the attempted solution paths. Especially because sometimes the web pages and blog posts pop up higher in search than nitty-gritty tool documentation without context. (Plus, I’ve been taking my own advice about not letting myself hold me back from trying, even when I don’t know how to do things yet.) So that’s what this post is!


Background/inspiration for the project and the tools I had to build:

We’re using Nightscout, a remote data-viewing platform for diabetes data, made with love, open source, and freely available for anyone with diabetes to use. It’s one of the best ways to display not only continuous glucose monitor (CGM) data, but also data from our DIY closed loop artificial pancreases (#OpenAPS). It can store data from a number of different kinds and brands of diabetes devices (pumps, CGMs, manual data entries, etc.), which means it’s a rich source of data. As the number of DIY OpenAPS users grows, we estimate that our real-world use is overtaking the total hours of data from clinical trials of closed loop artificial pancreas systems. In the #WeAreNotWaiting spirit of moving quickly (rather than waiting years for research teams to collect and analyze their own data), we want to see what we can learn from OpenAPS usage, not only by donating data to help traditional researchers speed up their work, but also by co-designing research studies of the things of most value to the diabetes community.

Step 1: Data from users to Open Humans

I thought Step 1 would be the hardest. However, thanks to Madeleine Ball, John Costik, and others in the Nightscout community, a simple Nightscout Data Transfer App was created that enables people with Nightscout data to pop it into their Open Humans accounts. It’s then very easy to join different projects (like the OpenAPS Data Commons) and share your data with those projects. And as the volunteer administrator of the OpenAPS Data Commons, it’s also easy for me to provide data to researchers.

The biggest challenge at this stage was figuring out how much data to pull from the API. I have almost 3 years worth of DIY diabetes data, and I have had numerous devices uploading over time, often all at once…which makes for large chunks of data. Not everyone has this much data (or 6-7 rigs uploading constantly ;)). Props to Madeleine for the patience in working with me to make sure the super users with large data sets will be able to use all of these tools!

Step 2: Sharing the data with researchers

This was easy. Yay for data-sharing tools like Dropbox.

Step 3: Researchers being able to use the data

Here’s where things started to get interesting. We have large data files that come in json format from Nightscout. I know some of the researchers we will be working with are probably very comfortable with tools that can take large, complex json files. However…not all will be, especially because we also want to encourage independent researchers to engage with the data for projects. So I had the belated realization that we needed to do something other than hand over json files. We need to convert, at the least, to csv so the data can be easily viewed in Excel.

Sounds easy, right?

According to basic searches, there are roughly a gazillion ways to convert json to csv. There are even websites that will do it for you, without making you run it on the command line. However, most of them require you to know the types of data, and the number of types, in order to construct headers in the csv file that make it readable and useful to a human.

This is where the DIY and infinite-possibility nature of all the kinds of diabetes tools anyone could be using with Nightscout, plus the infinite ways they can self-describe profiles and alarms and methods of entering data, makes it tricky. Just from eyeballing the data of two individuals, I couldn’t even find and count all of the hundred-plus types of data entry possibilities. This is definitely a job for the computer, but I had to figure out how to train the computer to deal with this.
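To make the schema problem concrete, here is a minimal Python sketch (not the actual complex-json2csv.js code; all names here are illustrative) of the approach such a tool has to take: flatten each record’s nested fields into dotted column names, then build the CSV header from the union of keys across all records, since no single record contains every field:

```python
import csv
import io


def flatten(record, prefix=""):
    """Flatten nested dicts/lists into dotted keys, e.g. {"a": {"b": 1}} -> {"a.b": 1}."""
    if isinstance(record, dict):
        items = record.items()
    elif isinstance(record, list):
        items = ((str(i), v) for i, v in enumerate(record))
    else:
        return {prefix: record}
    flat = {}
    for key, value in items:
        full_key = f"{prefix}.{key}" if prefix else str(key)
        if isinstance(value, (dict, list)):
            flat.update(flatten(value, full_key))
        else:
            flat[full_key] = value
    return flat


def json_records_to_csv(records):
    """Build headers from the union of keys across ALL records, then write rows."""
    flat_records = [flatten(r) for r in records]
    headers = sorted({key for r in flat_records for key in r})
    out = io.StringIO()
    writer = csv.DictWriter(out, fieldnames=headers, restval="")
    writer.writeheader()
    writer.writerows(flat_records)
    return out.getvalue()


# Two records with completely different fields still land in one table:
print(json_records_to_csv([{"type": "sgv", "sgv": 120},
                           {"type": "bolus", "insulin": 1.5}]))
```

Records missing a given column just get an empty cell (`restval=""`), which is exactly what lets the header be discovered from the data rather than specified in advance.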

Again, json to csv tools are so common I figured there HAD to be someone who had done this. Finally, after a dozen varying searches and trying a variety of command line tools, I found one web-based tool that would take json, create the schema without knowing the data types in advance, and convert it to csv. It was (is) super slick. I got very excited when I saw it linked to a Github repository, because that meant it was probably open source and I could use it. I didn’t see any instructions for how to use it on the command line, though, so I messaged the author on Twitter and found out that a command line version didn’t yet exist and was a not-yet-done TODO for him.

Sigh. Given this whole #WeAreNotWaiting thing (and given I’ve promised to help some of the researchers in figuring this out so we can initiate some of the research projects), I needed to figure out how to convert this tool into a command line version.

So, I did.

  • I taught myself how to unzip json files (ended up picking `gzip -cd`, because it works on both Mac and Linux)
  • I planned to then convert the web tool to be able to work on the command line, and use it to translate the json files to csv.

But…remember the big file issue? It struck again. So I first had to figure out the best way to estimate the size and split the json into a series of files, without splitting it in a weird place and messing up the data. That became jsonsplit.sh, a tool to split a json file based on the size you give it (and if you don’t specify, it defaults to something like 100,000 records).

FWIW: 100,000 records was too much for the more complex schema of the data I was working with, so I often did it in smaller chunks, but you can set it to whatever size you prefer.
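In Python terms (the real jsonsplit.sh is a shell script; this is just a sketch of the same idea), the splitting logic amounts to chunking the top-level array so that each piece is still valid, independently parseable JSON:

```python
import json


def split_json_records(records, chunk_size=100000):
    """Split a list of records into chunks of at most chunk_size records.

    Each chunk is serialized as its own valid JSON array, so downstream
    tools can parse every piece independently of the others -- that is the
    "don't split it in a weird place" requirement.
    """
    return [
        json.dumps(records[i:i + chunk_size])
        for i in range(0, len(records), chunk_size)
    ]


# 250 stand-in records split into chunks of 100 -> 100 + 100 + 50
chunks = split_json_records(list(range(250)), chunk_size=100)
print(len(chunks))  # 3
```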

So now “all” I had to do was:

  • Unzip the json
  • Break it down if it was too large, using jsonsplit.sh
  • Convert each of these files from json to csv

Phew. Each of these looks really simple now, but took a good chunk of time to figure out. Luckily, the author of the web tool had done much of the hard json-to-csv work, and Scott helped me figure out how to take the html-based version of the conversion and make it usable on the command line using javascript. That became complex-json2csv.js.

Because I knew how hard this all was, and wanted other people to be able to easily use this tool if they had large, complex json with unknown schema to deal with, I created a package.json so I could publish it to npm so you can download and run it anywhere.

I also had to create a script that would pass it all of the Open Humans data; unzip the file; run jsonsplit.sh, run complex-json2csv.js, and organize the data in a useful way, given the existing file structure of the data. Therefore I also created an “OpenHumansDataTools” repository on Github, so that other researchers who will be using Nightscout-based Open Humans data can use this if they want to work with the data. (And, there may be something useful to others using Open Humans even if they’re not using Nightscout data as their data source – again, see “large, complex, challenging json since you don’t know the data type and count of data types” issue. So this repo can link them to complex-json2csv.js and jsonsplit.sh for discovery purposes, as they’re general purpose tools.) That script is here.

My next TODO will be to write a script that takes only slices of data based on information shared as part of the surveys that go with the Nightscout data; i.e. if you started your DIY closed loop on X date, take data from 2 weeks prior and 6 weeks after, etc.
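As a sketch of what that slicing might look like in Python (the field name is an assumption; Nightscout entries often carry a dateString field, but formats vary by uploader and record type):

```python
from datetime import datetime, timedelta


def slice_around_date(records, loop_start, weeks_before=2, weeks_after=6,
                      date_key="dateString"):
    """Keep only records within [loop_start - weeks_before, loop_start + weeks_after]."""
    window_start = loop_start - timedelta(weeks=weeks_before)
    window_end = loop_start + timedelta(weeks=weeks_after)
    return [
        r for r in records
        if window_start <= datetime.fromisoformat(r[date_key]) <= window_end
    ]


records = [
    {"dateString": "2017-01-25T08:00:00", "sgv": 110},  # 1 week before loop start
    {"dateString": "2017-02-15T08:00:00", "sgv": 140},  # 2 weeks after loop start
    {"dateString": "2017-06-01T08:00:00", "sgv": 95},   # far outside the window
]
kept = slice_around_date(records, loop_start=datetime(2017, 2, 1))
print(len(kept))  # 2
```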

I also created a pull request (PR) back to the original tool that inspired my work, in case he wants to add it to his repository for others who also want to run his great stuff from the command line. I know my stuff isn’t perfect, but it works :) and I’m proud of being able to contribute to general-purpose open source in addition to diabetes-specific open source work. (Big thanks as always to everyone who devotes their work to open source for others to use!)

So now, I can pass researchers json or csv files for use in their research. A number of studies are planning to request access to the OpenAPS Data Commons, and I’m excited about how work like this, to make diabetes data more broadly available for research, will help improve our lives in the short and long term!

The only thing to fear is fear itself

(Things I didn’t realize were involved in open-sourcing a DIY artificial pancreas: writing “yes you can” style self-help blog posts to encourage people to take the first step to TRY and use the open source code and instructions that are freely available….for those who are willing to try.)

You are the only thing holding yourself back from trying. Maybe it’s trying to DIY closed loop at all. Maybe it’s trying to make a change to your existing rig that was set up a long time ago.  Maybe it’s doing something your spouse/partner/parent has previously done for you. Maybe it’s trying to think about changing the way you deal with diabetes at all.

Trying is hard. Learning is hard. But even harder (I think) is listening to the negative self-talk that says “I can’t do this” and perhaps going without something that could make a big difference in your daily life.

99% of the time, you CAN do the thing. But it primarily starts with being willing to try, and being ok with not being perfect right out of the gate.

I blogged last year (wow, almost two years ago actually) about making and doing and how I’ve learned to do so many new things as part of my OpenAPS journey that I never thought possible. I am not a traditional programmer, developer, engineer, or anything like that. Yes, I can code (some)…because I taught myself as I went and continue to teach myself as I go. It’s because I keep trying, and failing, then trying, and succeeding, and trying some more and asking lots of questions along the way.

Here’s what I’ve learned in 3+ years of doing DIY, technical diabetes things that I never thought I’d be able to accomplish:

  1. You don’t need to know everything.
  2. You really don’t particularly need to have any technical “ability” or experience.
  3. You DO need to know that you don’t know it all, even if you already know a thing or two about computers.
  4. (People who come into this process thinking they know everything tend to struggle even more than people who come in humble and ready to learn.)
  5. You only need to be willing to TRY, try, and try again.
  6. It might not always work on the first try of a particular thing…
  7. …but there’s help from the community to help you learn what you need to know.
  8. The learning is a big piece of this, because we’re completely changing the way we treat our diabetes when we go from manual interventions to a hybrid closed loop (and we learned some things to help do it safely).
  9. You can do this – as long as you think you can.
  10. If you think you can’t, you’re right – but it’s not that you can’t, it’s that you’re not willing to even try.

This list of things gets proved out to me on a weekly basis.

I see many people look at the #OpenAPS docs and think “I can’t do that” (and tell me this) and not even attempt to try.

What’s been interesting, though, is how many non-technical people jumped in and gave autotune a try. Even with no prior technical experience, several people jumped in, followed the instructions, asked questions, and were able to spin up a Linux virtual machine and run beta-level (brand new, not by any means perfect) code and get output and results. It was amazing, and really proved all those points above. People were deeply interested in getting the computer to help them, and it did. It sometimes took some work, but they were able to accomplish it.

OpenAPS, or anything else involving computers, is the same way. (And OpenAPS is even easier than most anything else that requires coding, in my opinion.) Someone recently estimated that setting up OpenAPS takes only 20 mouse clicks; 29 copy and paste lines of code; 10 entries of passwords or logins; and probably about 15-20 random small entries at prompts (like your NS site address or your email address or wifi addresses). There’s a reference guide, documentation that walks you through exactly what to do, and a supportive community.

You can do it. You can do this. You just have to be willing to try.

Making it easier to run OpenAPS commands again..and again..and again

Today I built (another) new (really tiny) tool to make it easier for people using OpenAPS rigs to continually update and improve their tools. Woohoo!

When we switched last year to using the “setup scripts” for OpenAPS, this became the tool for setting up new, advanced features like Advanced Meal Assist, Autosensitivity, Autotune, and other things. Which means that people were running the setup scripts multiple times.

It wasn’t bad, because we built in an interactive setup guide to walk people through the process of selecting which features they did or did not want. But it took a bit of time to do, and upon your 8th (or 80th) run of the setup script, especially for those of us developing the script, it got tiring. So we decided to automate some output that could be copied and pasted to speed up running the same set of options on the command line the next time.

Many people, however, in their first setup run-through don’t see that, or don’t remember to copy and paste it.

Last night, it occurred to me that I should add a more explicit note to the docs for people to stop and copy and paste it. But then I had an idea – what if we could stash away the content in another file, so you could find it anytime without having to run the setup script interactively?

Lightbulb. So today, I sat down and took a stab at it. It’s simple-ish code being added in (now in dev branch of oref0; docs for it here), but it will save little bits of time that over time add up to a lot of time saved.
(Screenshots: creating oref0-runagain.sh, and the output it shows.)

This is how almost all of the iterative OpenAPS development occurs: we repeat something enough times, decide it needs to be automated, and find a way to make it happen. And that’s how the tools, code, and documentation continue to get better and better!

#WeAreNotWaiting, even with the small stuff, that eventually adds up to make a bigger difference :)

Autotune (automatically assessing basal rates, ISF, and carb ratio with #OpenAPS – and even without it!)

What if, instead of guessing at needed changes to basal rates, ISF, and carb ratios (the most commonly used method)…we could use data to empirically determine how these settings should be adjusted?

Meet autotune.


Historically, most people have guessed basal rates, ISF, and carb ratios. Their doctors may use things like the “rule of 1500” or “1800” or body weight. But, that’s all a general starting place. Over time, people have to manually tweak these underlying basals and ratios in order to best live life with type 1 diabetes. It’s hard to do this manually, and know if you’re overcompensating with meal boluses (aka an incorrect carb ratio) for basal, or over-basaling to compensate for meal times or an incorrect ISF.

And why do these values matter?

It’s not just about manually dosing with this information. Importantly, for most DIY closed loops (like #OpenAPS), dose adjustments are made based on the underlying basals, ISF, and carb ratio. For someone with reasonably tuned basals and ratios, that works great. But for someone with values that are way off, it means the system can’t help them adjust as much as it could for someone with well-tuned values. It’ll still help, but it’ll be a fraction as powerful as it could be for that person.

There wasn’t much we could do about that…at first. We designed OpenAPS to fall back to whatever values people had in their pumps, because that’s what the person/their doctor had decided was best. However, we know some people’s settings aren’t that great, for a variety of reasons. (Growth, activity changes, hormonal cycles, diet and lifestyle changes – to name a few. Aka, life.)

With autosensitivity, we were able to start assessing when actual BG deltas were off compared to what the system predicted should be happening. With that assessment, it would dynamically adjust ISF, basals, and targets accordingly. However, a common reaction was for people to see the autosens result (based on 24 hours of data) and assume it meant their underlying ISF/basal should be changed. But that’s not the case, for two reasons. First, a 24-hour period shouldn’t be what determines those changes. Second, autosens cannot tell apart the effects of basals vs. the effects of ISF.

Autotune, by contrast, is designed to iteratively adjust basals, ISF, and carb ratio over the course of weeks – based on a longer stretch of data. Because it makes changes more slowly than autosens, autotune ends up drawing on a larger pool of data, and is therefore able to differentiate whether and how basals and/or ISF need to be adjusted, and also whether carb ratio needs to be changed. Whereas we don’t recommend changing basals or ISF based on the output of autosens (because it’s only looking at 24h of data, and can’t tell apart the effects of basals vs. the effect of ISF), autotune is intended to be used to help guide basal, ISF, and carb ratio changes because it’s tracking trends over a large period of time.

Ideally, for those of us using DIY closed loops like OpenAPS, you can run autotune iteratively inside the closed loop, and let it tune basals, ISF, and carb ratio nightly and use those updated settings automatically. Like autosens, and everything else in OpenAPS, there are safety caps: none of these parameters can be tuned more than 20-30% away from the underlying pump values. If someone’s autotune keeps recommending the maximum change (20% more resistant, or 30% more sensitive) over time, then it’s worth a conversation with their doctor about whether the underlying values on the pump need changing – and the person can take this report in to start the discussion.
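The safety-cap idea can be sketched in a few lines. This is a minimal illustration, not the actual oref0 code; the function name and the default limits (here 0.7–1.2, loosely matching the 20-30% described above) are hypothetical.

```python
def clamp_to_pump(tuned_value, pump_value, max_ratio=1.2, min_ratio=0.7):
    """Illustrative safety cap: keep an autotuned value within an
    allowed fraction of the pump's programmed value, so a human gets
    a chance to review anything more drastic."""
    return min(pump_value * max_ratio, max(pump_value * min_ratio, tuned_value))
```

So even if the tuning math recommends doubling a basal rate, the looped-in value only moves 20% before a person (and their doctor) decides whether the pump profile itself should change.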

Not everyone will want to let it run iteratively, though – not to mention, we want it to be useful to anyone, regardless of which DIY closed loop they choose to use – or not! Ideally, this can be run one-off by anyone with Nightscout data of BG and insulin treatments. (Note – I wrote this blog post on a Friday night saying “There’s still some more work that needs to be done to make it easier to run as a one-off (and test it with people who aren’t looping but have the right data)…but this is the goal of autotune!” And by Saturday morning, we had volunteers who sat down with us and within 1-2 hours had it figured out and documented! True #WeAreNotWaiting. :))

And from what we know, this may be the first tool to help actually make data-driven recommendations on how to change basal rates, ISF, and carb ratios.

How autotune works:

Step 1: Autotune-prep

  • Autotune-prep takes three things initially: glucose data; treatments data; and starting profile (originally from pump; afterwards autotune will set a profile)
  • It calculates BGI and deviation for each glucose value based on treatments
  • Then, it categorizes each glucose value as attributable to either carb sensitivity factor (CSF), ISF, or basals
  • To determine if a “datum” is attributable to CSF, carbs on board (COB) are calculated and decayed over time based on observed BGI deviations, using the same algorithm used by Advanced Meal Assist. Glucose values after carb entry are attributed to CSF until COB = 0 and BGI deviation <= 0. Subsequent data is attributed to ISF or basals.
  • If BGI is positive (meaning net insulin activity is negative), if BGI is smaller than 1/4 of basal BGI, or if the average delta is positive, that data is attributed to basals.
  • Otherwise, the data is attributed to ISF.
  • All this data is output to a single file with 3 sections: ISF, CSF, and basals.
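The categorization logic above can be sketched roughly as follows. This is an illustration of the decision flow, not the actual oref0 autotune-prep code; the helper and its field names are hypothetical, and the COB decay is reduced to a simple "carbs still on board" check.

```python
def categorize(datum, cob, basal_bgi):
    """Attribute one glucose datum to CSF, basal, or ISF (sketch).

    datum: dict with "bgi" (blood glucose impact of insulin, mg/dL/5m)
           and "avg_delta" (average observed BG change, mg/dL/5m)
    cob: carbs on board (grams) at this datum
    basal_bgi: expected BGI from basal insulin alone
    """
    if cob > 0:
        return "CSF"    # carbs still absorbing: carb sensitivity factor
    if (datum["bgi"] > 0                          # net insulin activity negative
            or abs(datum["bgi"]) < basal_bgi / 4  # very little insulin activity
            or datum["avg_delta"] > 0):           # BG rising regardless
        return "basal"
    return "ISF"
```

Each datum lands in exactly one bucket, which is what lets the later step tune each parameter from its own section of the prepped file.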

Step 2: Autotune-core

  • Autotune-core reads the prepped glucose file with 3 sections. It calculates what adjustments should be made to ISF, CSF, and basals accordingly.
  • For basals, it divides the day into hour-long increments. It calculates the total deviations for that hour increment and calculates what change in basal would be required to adjust those deviations to 0. It then applies 20% of that needed change to the three hours prior (because of insulin impact time). If increasing basal, it increases each of the 3 hour increments by the same amount. If decreasing basal, it does so proportionally, so the biggest basal is reduced the most.
  • For ISF, it calculates the 50th percentile deviation for the entire day and determines how much ISF would need to change to get that deviation to 0. It applies 10% of that as an adjustment to ISF.
  • For CSF, it calculates the total deviations over all of the day’s mealtimes and compares to the deviations that are expected based on existing CSF and the known amount of carbs entered, and applies 10% of that adjustment to CSF.
  • Autotune applies a 20% limit on how much a given basal, or ISF or CSF, can vary from what is in the existing pump profile, so that if it’s running as part of your loop, autotune can’t get too far off without a chance for a human to review the changes.
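The basal step above can be sketched as follows. This is a simplified illustration, not the actual oref0 autotune-core implementation; the function name is hypothetical, and (unlike the real logic described above) decreases are applied evenly rather than proportionally.

```python
def tune_hourly_basals(basals, hourly_deviations, isf, rate=0.2):
    """Sketch of the basal tuning step.

    basals: 24 hourly basal rates (U/h)
    hourly_deviations: total unexplained BG change per hour (mg/dL);
        positive means BG ran higher than insulin activity predicted
    isf: insulin sensitivity factor (mg/dL per U)
    rate: fraction of the full correction applied per run (20%)
    """
    new = list(basals)
    for hour, dev in enumerate(hourly_deviations):
        # Insulin change that would cancel this hour's deviations:
        full_change = dev / isf              # units of insulin
        # Apply 20% of it, spread across the three hours prior
        # (insulin delivered earlier drives BG in this hour):
        adj = rate * full_change / 3.0
        for h in ((hour - 1) % 24, (hour - 2) % 24, (hour - 3) % 24):
            new[h] = max(0.0, new[h] + adj)
    return new
```

Because only 20% of the needed change is applied per run, repeated nightly runs converge gradually instead of over-correcting on one unusual day.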

(See more about how to run autotune here in the OpenAPS docs.)

What autotune output looks like:

Here’s an example of autotune output.

OpenAPS autotune example by @DanaMLewis

Autotune is one of the things Scott and I spent time on over the holidays (and hinted about at the end of my development review of 2016 for OpenAPS). As always with #OpenAPS, it’s awesome to take an idea, get it coded up, get it tested with some early adopters/other developers within days, and continue to improve it!

Highlighting someone successfully using Autotune to help adjust baseline settings

A big thank you to those who’ve been testing and helping iterate on autotune (and of course, all other things OpenAPS). It’s currently in the dev branch of oref0 for anyone who wants to try it out, either one-off or for part of their dev loop. Documentation is currently here, and this is the issue in Github for logging feedback/input, along with sharing and asking questions as always in Gitter!


OpenAPS feature development in 2016

It’s been two years since my first DIY closed loop and almost two years since OpenAPS (the vision and resulting ecosystem to help make artificial pancreas technology, DIY or otherwise, more quickly available to more people living with diabetes) was created.  I’ve spent time here (on DIYPS.org) talking about a variety of things that are applicable to people who are DIY closed looping, but also focusing on things (like how to “soak” a CGM sensor and how to do “eating soon” mode) that may be (in my opinion) universally applicable.


However, I think it’s worth recapping some of the amazing work that’s been done in the OpenAPS ecosystem over the past year, sometimes behind the scenes, because there are some key features and tools that have been added in that seem small, but are really impactful for people living with DIY closed loops.

  1. Advanced meal assist (aka AMA)
    1. This is an “advanced feature” that can be turned on by OpenAPS users, and, with reliable entry of carb information, will help the closed loop assist sooner with a post-meal BG rise where there is mis-timed or insufficient insulin coverage for the meal. It’s easy to use, because the PWD only has to put carbs and a bolus in – then AMA acts based on the observed absorption. This means that if absorption is delayed because you walk home from dinner, have gastroparesis, etc., it backs off and waits until the carbs actually start taking effect (even if it is later than the human would expect).
    2. We also now have the purple line predictions back in Nightscout to visualize some of these predictions. This is a hallmark of the original iob-cob branch in Nightscout that Scott and I originally created, that took my COB calculated by DIYPS and visualized the resulting BG graph. With AMA, there are actually 3 purple lines displayed when there is carb activity. As described here in the OpenAPS docs, the top purple line assumes 10 mg/dL/5m carb (0.6 mmol/L/5m) absorption and is most accurate right after eating before carb absorption ramps up. The line that is usually in the middle is based on current carb absorption trends and is generally the most accurate once carb absorption begins; and the bottom line assumes no carb absorption and reflects insulin only. Having the 3 lines is helpful for when you do something out of the ordinary following a meal (taking a walk; taking a shower; etc.) and helps a human decide if they need to do anything or if the loop will be able to handle the resulting impact of those decisions.
  2. The approach with a “preferences” file
    1. This is the file where people can adjust default safety and other parameters, like maxIOB which defaults to 0 during a standard setup, ultimately creating a low-glucose-suspend-mode closed loop when people are first setting up their closed loops. People have to intentionally change this setting to allow the system to high temp above a netIOB = 0 amount, which is an intended safety-first approach.
    2. One particular feature (“override_high_target_with_low”) makes it easier for secondary caregivers (like school nurses) to do conservative boluses at lunch/snack time, and allow the closed loop to pick up from there. The secondary caregiver can use the bolus wizard, which will correct down to the high end of the target; and setting this value in preferences to “true” allows the closed loop to target the low end of the target. Based on anecdotal reports from those using it, this feature sounds like it’s prevented a lot of (unintentional, diabetes is hard) overreacting by secondary caregivers when the closed loop can more easily deal with BG fluctuations. The same goes for “carbratio_adjustmentratio”: if parents would prefer for secondary caregivers to bolus with a more conservative carb ratio, this can be set so the closed loop ultimately uses the correct carb amount for any needed additional calculations.
  3. Autosensitivity
    1. I’ve written about autosensitivity before and how impressive it has been in the face of a norovirus and not eating to have the closed loop detect excessive sensitivity and be able to deal with it – resulting in 0 lows. It’s also helpful during other minor instances of sensitivity after a few active days; or resistance due to hormone cycles and/or an aging pump site.
    2. Autosens is a feature that has to be turned on specifically (like AMA) in order for people to utilize it, because it’s making adjustments to ISF and targets and looping accordingly from those values. It also has safety caps that are automatically included to limit the amount of adjustment in either direction that autosens can make to any of the parameters.
  4. Tiny rigs
    1. Thanks to Intel, we were introduced to a board designer who collaborated with the OpenAPS community and inspired the creation of the “Explorer Board”. It’s a multipurpose board that can be used for home automation and all kinds of things, and it’s another tool in the toolbox of off-the-shelf and commercial hardware that can be used in an OpenAPS setup. It’s enabled us, due to the built in radio stick, to be able to drastically reduce the size of an OpenAPS setup to about the size of two Chapsticks.
  5. Setup scripts
    1. As soon as we were working on the Explorer Board, I envisioned that it would be a game changer for increasing access for those who thought a Pi was too big or too burdensome for regular use with a DIY closed loop system. I knew we had a lot of work to do to cut down on the friction of the setup process – while balancing that with the fact that the DIY part of setting up a closed loop system was, and still is, incredibly important. We then created the oref0-setup script to streamline the setup process. For anyone building a loop, you still have to set up your hardware and build a system, expressing intention in many places about what you want to do and how…but it cuts down on a lot of friction and leaves people with more energy, which can instead be focused on reading the code and understanding the underlying algorithm(s) and features they are considering using.
  6. Streamlined documentation
    1. The OpenAPS “docs” are an incredible labor of love and a testament to dozens and dozens of people who have contributed by sharing their knowledge about hardware, software, and the process it takes to weave all of these tools together. It has gotten to be very long, but given the advent of the Explorer Board hardware and the setup scripts, we were able to drastically streamline the docs and make it a lot easier to go from phase 0 (get and setup hardware, depending on the kind of gear you have); to phase 1 (monitoring and visualizing tools, like Nightscout); to phase 2 (actually setup openaps tools and build your system); to phase 3 (starting with a low glucose suspend only system and how to tune targets and settings safely); to phase 4 (iterating and improving on your system with advanced features, if one so desires). The “old” documentation and manual tool descriptions are still in the docs, but 95% of people don’t need them.
  7. IFTTT and other tool integrations
    1. It’s definitely worth calling out the integration with IFTTT that allows people to use things like Alexa, Siri, Pebble watches, Google Assistant (and just about anything else you can think of), to easily enter carbs or “modes” for OpenAPS to use, or to easily get information about the status of the system. (My personal favorite piece of this is my recent “hack” to automatically have OpenAPS trigger a “waking up” mode to combat hormone-driven BG increases that happen when I start moving around in the morning – but without having to remember to set the mode manually!)
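The three purple prediction lines described under AMA above can be sketched roughly like this. It is a simplified illustration, not the actual oref0/Nightscout logic: the function and parameter names are hypothetical, the carb-absorption trend is held constant here, and the real code decays absorption dynamically based on observed deviations.

```python
def predict_lines(bg, insulin_only_deltas, observed_carb_impact,
                  max_carb_impact=10.0, steps=12):
    """Sketch of AMA's three prediction lines over the next hour.

    bg: current glucose (mg/dL)
    insulin_only_deltas: predicted insulin effect per 5-min step (mg/dL)
    observed_carb_impact: current carb absorption trend (mg/dL per 5m)
    max_carb_impact: assumed maximum absorption (10 mg/dL per 5m)
    """
    insulin_line = [bg]   # bottom line: no carb absorption, insulin only
    trend_line = [bg]     # middle line: current carb absorption trend
    max_line = [bg]       # top line: maximum assumed carb absorption
    for i in range(steps):
        d = insulin_only_deltas[i]
        insulin_line.append(insulin_line[-1] + d)
        trend_line.append(trend_line[-1] + d + observed_carb_impact)
        max_line.append(max_line[-1] + d + max_carb_impact)
    return max_line, trend_line, insulin_line
```

Seeing all three lines at once is what lets a human judge the plausible range of outcomes – e.g. whether a post-meal walk means the insulin-only line is the one to watch.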

..and that was all just things the community has done in 2016! :) There are some other exciting things that are in development and being tested right now by the community, and I look forward to sharing more as this advanced algorithm development continues.

Happy New Year, everyone!