What I wish CDEs (diabetes educators) and other HCPs knew about DIY and other diabetes tech (#OpenAPS or otherwise)

I had the awesome opportunity to present at #AADE17, the annual education meeting for the American Association of Diabetes Educators, this past weekend. My topic was about OpenAPS and DIY diabetes… which really translates to some broader things I want all educators and HCPs to know about patients and technology, whether it’s DIY or just unknown to them. Unfortunately AADE didn’t record or livestream my session, so I wanted to write up a summary of the content here.

(If you’re new to this blog/me/OpenAPS, you can also watch this June 2017 TEDx talk where I share some of the story of how I ended up with a DIY artificial pancreas and how the OpenAPS community came to be; or this older talk from OSCON 2016 as well. As always, if you’re curious to learn more about OpenAPS or wondering how to build your own DIY artificial pancreas, OpenAPS.org is the first place to learn more!)

Diabetes is hard. Even if you are privileged to have access to insulin, education, and technology, it can still be incredibly hard to get it right. And even if you do everything “right”, the outcomes will still vary. After all, the devices themselves are not perfect, and we still have diabetes.

The lack of varying CGM alarms and the unchangeable alarm volume is what led me to create DIYPS (my open loop and louder alarm system), and the same frustration with lack of data access and visualization led John Costik, Lane Desborough, Ben West, and so many others to explore creating other DIY tools, such as Nightscout. And thanks to social media, we didn’t all have to create in a vacuum: we could share code (this is what open source means) and insight, and build upon each other’s work. This combination of collaboration, sharing, and iterative development is how OpenAPS, the open source artificial pancreas system movement, was created.

I tweet and talk and share frequently about how great it is having #OpenAPS in my life. Norovirus? No problem. Changes in sensitivity due to exercise? Not the biggie it used to be.

[Image: flat overnight CGM graph, representing sleep uninterrupted by hypoglycemia thanks to OpenAPS]

However, this technology is by no means a cure. It still requires work on the part of the person with diabetes. We still have to:

  • Change pump sites
  • Change CGM sensors
  • Calibrate regularly
  • Deal with bonked pump sites and sensors that fall out

And also, given the speed of insulin, most people are still going to engage with the system for some kind of meal bolus or announcement. This is why it’s called “hybrid” closed loop technology. (However, depending on the sophistication of the technology, you start to be able to choose what you want to optimize for and which behaviors you want to do less of, which is great.)

In some cases, we humans know more than the technology: such as when a meal is going to happen/is coming, and when exercise is going to happen. So it’s nice to be able to interoperate your devices and use your phone, watch, computer, etc. to tell the system what to do differently (e.g. set higher targets in the case of activity, or lower targets to achieve “eating soon” mode, or in the case of waking up).

But in a LOT of cases, it’s tiring for the human to have to think about all the things. Such as whether a pump site is slowly dying and causing apparent insulin resistance. Or when you’re more sensitive 12-24 hours after exercise. Or during menstrual cycles. Or when sick. Or during a growth spurt. Or during jet lag. Or during a trip where you can’t find anything to eat. Etc. It’s a lot for us PWDs to track, and this is where computers come in handy: things like autosensitivity in OpenAPS, which automatically detects changes in sensitivity and adjusts the variables used in calculations; and autotune, which tracks the data of what’s actually happening and makes recommendations for changing your underlying pump settings (ISF, carb ratio, and basal rates).

And how has this technology been developed by patients? Iteratively, as we figure out what’s possible. It’s not about boiling the ocean; it’s about approaching problems bit by bit as we have new tools to solve them, or new people with energy to think about the problem in different ways. It’s like thinking about getting a car – you wouldn’t expect the manufacturer to sell bits and pieces of the car frame, and you don’t really expect medical device manufacturers to sell bits and pieces of a pump or other device. However, patients are closest to the REAL problems in living with diabetes. Instead of a “car”, they’re looking for solutions for getting from point A to point B. And so in the car analogy, that means starting with a skateboard, scooter, or bike – and ending up with a car is great, but the car is not the point.

So no, any piece of technology isn’t going to be a cure or solve all problems or work perfectly for everyone. But that is true whether it’s DIY or a commercial tool: one size certainly does not fit all. And patients are individuals with their own lives and their own challenges with diabetes, with different motivations around what aspects of life with diabetes feel like friction and what they feel equipped to tackle and solve.

So, here’s some of what’s on my list for what I’d like CDEs and other HCPs to know as a result of the proliferation of technology around diabetes:

  • Yes, DIY tech is often off label. But that’s ok – it just means it’s off label; it doesn’t prevent you from listening to why patients are using it and what we think it’s doing for us, and it doesn’t prevent you from asking questions, learning more, or still advising patients.
  • Don’t make us switch providers by refusing to discuss it or listen to it, just because it’s new/different/you don’t understand it. (By the way: we don’t expect you to understand all possible technology! You can’t be experts on everything, but that doesn’t mean shunning what you don’t know.)
  • You get to take advantage of the opportunity when someone brings something new into the office – it’s probably the first of many times you’ll see it, and the first patient is often on the bleeding edge: deeply engaged, understanding what they’re using, and open to sharing what they’ve learned to help you, so you can also help other patients!
  • You also get to take advantage of the open source community. It’s open, not just for patients to use, but for companies, and for CDEs and other HCPs as well. There are dozens if not hundreds of active people on Twitter, Facebook, blogs, forums, and more who are happy to answer questions and help give perspective and insight into why/how/what things are.
  • Don’t forget – many of the DIY tools provide data and insight that currently don’t exist in any traditional, commercial, or FDA-approved tool. Take autotune, for example – there’s nothing else out there that we know of that will tune basal rates, ISF, and carb ratio for people with pumps. And the ability of tools like Nightscout reports to show data from a patient’s disparate devices is also incredibly helpful for healthcare providers and educators to use to help patients.

And one final point specific to hybrid closed loop technology: this technology is going to solve a lot of problems and frustrations. But it may mean that patients shift their prioritization toward other quality-of-life factors, like ease of use, over older, traditionally learned diabetes behaviors. This means things like precise carb counting may go by the wayside in favor of general meal-size estimations, because the technology yields similar outcomes. Being aware of this will be important when CDEs are working with patients; knowing what the patterns of behaviors are and where a patient has shifted their choices will be helpful for identifying which behaviors can be adapted to yield different outcomes.

I think the increase in technology (especially various types of closed loops, DIY and commercial) will yield MORE work for CDEs and HCPs, rather than less. This means it’s even more important for them to get up to speed on current and evolving technology – because it’s by no means going away. And we in the first wave of DIYers have a lot we can share and teach, not just with other patients, but also with CDEs. So again, many thanks to AADE for the opportunity to share some of this perspective at #AADE17, and thanks to everyone for the engagement during and after the session!

Different ways to make a difference

tl;dr: There are many ways to make a difference, ranging from donating time/energy/ideas to financially supporting organizations that are making a difference.

When I was first diagnosed with diabetes (at age 14, three months into high school – ugh), I was stunned. And I didn’t want anyone to know, because I didn’t want to be treated differently. So for the first few months, I learned how to take care of myself, and did that quietly and went about my life: school, color guard, etc. I was frustrated with the idea of having to do all this stuff for the rest of my life, and wanted as little as possible to have to talk/think about it beyond the bare minimum I had to do.

However, I then talked the Latin Club into making our fundraising dollars from the Rake-a-thon go toward the American Diabetes Association, and I saw the reaction of the local staff when I walked in, dropped a check on the desk, turned around, and tried to walk out the door. (They didn’t let me just walk out!) I agreed to volunteer and do more, and it changed my life.

I don’t remember the first thing I did, but I quickly came to realize that doing things for the broader population of people with diabetes – maybe they had type 2, maybe they lived somewhere else, it didn’t matter – made me feel SO much better about my own life with type 1 diabetes. I wasn’t alone. And so my mantra became “Doing something for someone else is more important than anything you would do for yourself.” And it’s proved to be true for me for 14 years.

Since I grew up in Alabama, that’s where I started getting involved first. Inspired by my parents’ volunteer efforts that I saw growing up, I would volunteer my time and energy for a variety of things:

  • Fundraising for the local walk
  • Actually helping out the day of the walk
  • Joining the planning committee for the walk and spending months helping figure everything out and doing both actual and metaphorical heavy lifting to help make the event happen

Because of my volunteer efforts, I was asked to speak at a fundraising breakfast in Birmingham. It was my first time ever giving a public talk, let alone publicly talking about living with diabetes. And because of the people I met that day, I began doing more volunteer things around the state – and it led to my applying for and being selected as the National Youth Advocate for the American Diabetes Association, and later serving on national committees like the National Youth Strategies committee (where we developed and improved the “Wisdom” kits for newly diagnosed kids with diabetes, created a kid-focused section of the ADA website, etc.). And my involvement continued as I graduated college and moved to Seattle, still serving on national committees but also joining the Western Washington Leadership Board and doing the same type of local event volunteering, but now in Seattle. I’ve also done more around advocacy over the years, beyond my time as NYA. While in college, I was asked to testify before the Senate HELP committee, talking about the need to increase funding for disease research. I’ve also participated regularly in ADA’s Call to Congress, including this year, where Scott and I paid to fly to DC and talk with our Washington state representatives and senators about the critical need for funding NIH & CDC; maintaining critical diabetes programs; and the issues around insulin affordability.

But it was when I was asked to represent the US and attend the World Diabetes Congress in 2006 that my eyes were opened to the issues around insulin access and affordability.

IDF ran its first youth ambassador program in 2006, bringing around two dozen young adults with diabetes to the World Diabetes Congress to participate, train in advocacy activities, and more.

Having grown up in Alabama, where diabetes (particularly type 2) is highly prevalent, I knew that not everyone could afford pumps and CGMs (especially back then, when CGMs were brand new, way less accurate, and still super expensive, even with insurance coverage). I also knew that insulin & supplies were expensive, and some people struggled with gaining access to them. (And I always felt very fortunate that since diagnosis, my parents were able to afford my insulin & supplies.)

However, while in South Africa, I learned from my new friends from other parts of Africa and the rest of the world that this was the tippy top of the iceberg. I learned that:

  • Kids were walking alone on roads for miles and hours to get to a clinic for a single, daily shot of insulin.
  • Many people might only test their BGs once a week, or month, or quarter.
  • It wasn’t just kids – adults would have to stop working and walk for hours, too, choosing to get insulin to stay alive so they could work another few days to help their family survive.
  • Some people would only get insulin once a week, if that, or once a day – compared to me, where I might have several injections a day, as often as needed to keep my BGs in a safe range.

It was astonishing, saddening, maddening, and terrifying. And living so far away from this part of the world, I wasn’t sure how I could help – until I met Graham Ogle, who created the “Life for a Child” program to help tackle the problem, with the vision that no child should die of diabetes. Life for a Child helps under-resourced countries provide insulin, syringes, other supplies, and education (both for people living with diabetes and for healthcare providers). And they’re a very resource-efficient organization.

When Scott and I first met, he knew nothing about diabetes (and actually thought my insulin pump was a pager – hah!). While I have volunteered a lot of my time and energy to help organizations, he is dedicated to finding effective ways to save lives, and as a result is a longtime donor to Givewell.org and some of their top charities, like the Against Malaria Foundation. Givewell is a nonprofit dedicated to finding giving opportunities and publishing the full details of their analysis to help donors decide where to give. And unlike charity evaluators that focus solely on financials, assessing administrative or fundraising costs, they conduct in-depth research aiming to determine how much good a given program accomplishes (in terms of lives saved, lives improved, etc.) per dollar spent.

Therefore, when Scott and I got married, we decided that in lieu of wedding-related gifts, we would ask people to support our charities of choice, to further increase the impact we would be able to have in addition to our own financial and other resource donations.

However, Life for a Child has not been evaluated by Givewell. So Scott and I got on a Skype call with Graham Ogle to crunch through the numbers and try to come up with an estimate of how effective Life for a Child is, similar to what Givewell has done for other organizations.

For example, the Against Malaria Foundation, the recommended charity with the most transparent and straightforward impact on people’s lives, can buy and distribute an insecticide-treated bed net for about $5.  Distributing about 600-1000 such nets results in one child living who otherwise would have died, and prevents dozens of cases of malaria.  As such, donating 10% of a typical American household’s income to AMF will save the lives of 1-2 African kids *every year*.

Life for a Child seems like a fairly effective charity, spending about $200-$300/yr for each person they serve (thanks in part to in-kind donations from pharmaceutical firms). If we assume that providing insulin and other diabetes supplies to one individual (and hopefully keeping them alive) for 40 years is approximately the equivalent of preventing a death from malaria, that would mean that Life for a Child might be about half as effective as AMF, which is quite good compared to the far lower effectiveness of most charities, especially those that work in first world countries.
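
To put rough, illustrative numbers on that comparison (using the figures above, not precise estimates): at ~$5 per net and roughly 600-1,000 nets per life saved, AMF works out to something like $3,000-5,000 per life. Life for a Child, at roughly $200-300 per person per year over the ~40 years assumed above, is on the order of $10,000 per person kept alive – about twice the cost per life, which is where the “about half as effective” estimate comes from.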

(And some of the other charities and organizations don’t have clear numbers that can be this clearly tracked to lives saved. It’s not to say they’re not doing good work and improving lives – they absolutely are, and we support them, too – but this is one of the most clear and measurable ways to donate money and have a known life-saving impact related to diabetes.)

I am asked fairly frequently about what organization I would recommend donating to, in terms of diabetes research or furthering the type of work we’re doing with the OpenAPS community. It’s a bit of a complicated answer, because there is no organization around or backing the OpenAPS community’s work, and there are many ways to donate to diabetes research (i.e. through bigger organizations like the ADA and JDRF, or directly to research projects and labs if there’s a particular research effort you want to fund).

And also, I think it comes down to seeing your donation make a difference. If you asked Scott, he would recommend AMF or other Givewell charities – but he’s seen enough people ask me about diabetes-related donation targets to know that people are often asking us because they want to make a difference in the lives of people with diabetes.

So, given all the ways I’ve talked about making a difference with different volunteer efforts (and the numerous organizations with which you could do so), and the options for making a financial donation: my recommendation for the biggest life-saving impact for your dollar is to donate to Life for a Child, to help them grow beyond the 18,000 children in 46 countries they’re currently helping. (And they now have a US arm, so if you are in the US, your donation is tax-deductible.)

You may have a different organization you decide to support – and that’s great. Thank you to everyone who donates money, time, and energy to organizations that are working to make our lives better and longer, and the world in general a better place for us all.

Write It Do It: Tips for Troubleshooting DIY Diabetes Devices (#OpenAPS or otherwise)

When I was in elementary school, I did Science Olympiad. (Are you surprised? Once a geek, always a geek…) One of my favorite “events” was “Write It Do It”, where one person would get a sculpture/something constructed (could be Legos, could be other stuff) and you had to write down instructions for telling someone else how to build it. Your partner got your list of instructions, the equipment, and was tasked with re-building the structure.

Building open source code and tools is very similar, now that I look back on the experiences of having built #DIYPS and then working on #OpenAPS. First step? Build the structure. Second step? Figure out how to tell someone ELSE how to do it. (That’s what the documentation is). But then when someone takes the list of parts and your instructions off elsewhere, depending on how they interpreted the instructions…it can end up looking a little bit different. Sometimes that’s ok, if it still works. But sometimes they skip a step, or you forget to write down something that looks obvious to you (but leaves them wondering how one part got left out) – and it doesn’t work.

Unlike in Science Olympiad, where you were “scored” on the creation and that was that, in DIY diabetes this is where you next turn to asking questions and troubleshooting about what to change/fix/do next.

But, sometimes it’s hard.

If you’re the person building a rig:

  • You know what you’re looking at, what equipment you used to get here, what step you’re on, what you’ve tried that works and what hasn’t worked.
  • You either know “it doesn’t work” or “I don’t know what to do next.”

If you’re the troubleshooter:

  • You only know generally how it can/should work and what the documentation says to do; but you only know as much about the specific problem as is shared with you in the context of a question.

As someone who spends a lot of time in the troubleshooter role these days, trying to answer questions or assist people in getting past where they’re stuck, here are my tips to help you if you’re building something DIY and are stuck.

[Image: tips for online troubleshooting of DIY diabetes devices]

DO:

  1. Start by explaining your setup. Example: “I’m building an Edison/Explorer Board rig, and am using a Mac computer to flash my Edison.”
  2. Explain the problem as specifically as you can. Example: “I am unable to get my Edison flashed with jubilinux.”
  3. Explain what step you’re stuck on, and in which page/version of the docs. Example: “I am following the Mac Edison flashing instructions, and I’m stuck on step 1-4.” Paste a URL to the exact page in the docs you’re looking at.  Clarify whether your problem is “it doesn’t work” or “I don’t know what to do next.”
  4. Explain what it’s telling you and what you see. Pro tip: Copy/paste the output that the computer is telling you rather than trying to summarize the error message. Example: “I can’t get the login prompt, it says “can’t find a PTY”.”
    (This is ESPECIALLY important for OpenAPS’ers who want help troubleshooting logs when they’ve finished the setup script – the status messages in there are very specific and helpful to other people who may be helping you troubleshoot.)
  5. Be patient! You may have tagged someone with an @mention; and they may be off doing something else. But don’t feel like you must tag someone with an @mention – if you’re posting in a specific troubleshooting channel, chances are there are numerous people who can (and will) help you when they are in channel and see your message.
  6. Be aware of what channel you’re in and pros/cons for what type of troubleshooting happens where.
    My suggestions:

    1. Facebook – best for questions that don’t need an immediate fix, or are more experience related questions. Remember you’re also at the mercy of Facebook’s algorithm for showing a post to a particular group of people, even if someone’s a member of the same group. And, it’s really hard to do back-and-forth troubleshooting because of the way Facebook threads posts. However, it IS a lot easier to post a picture in Facebook.
    2. Gitter – best for detailed, and hard, troubleshooting scenarios and live back-and-forth conversations. It’s hard to do photos on the go from your mobile device, but it’s usually better to paste logs and error output messages as text anyway (and there are some formatting tricks you can learn to help make your pasted text more readable, too). Those who are willing to help troubleshoot will generally skim and catch up on the channel when they get back, so you might have a few hours delay and get an answer later, if you still haven’t resolved or gotten an answer to your question from the people in channel when you first post.
    3. Email groups – best if no one in the other channels can answer the question, or if you have a general discussion starter that isn’t time-constrained.
  7. Start with the basic setup, and come back and customize later. The documentation is usually written to support several kinds of configurations, but the general rule of thumb is get something basic working first, and then you can come back later and add features and tweaks. If you try to skip steps or customize too early, it makes it a lot harder to help troubleshoot what you’re doing if you’re not exactly following the documentation that’s worked for dozens of other people.
  8. Pay it forward. You may not have a certain skill, but you certainly have other skills that can likely help. Don’t be afraid to jump in and help answer questions about things you do know, or steps you successfully got through, even if you’re not “done” with your setup yet. Paying it forward as you go is an awesome strategy :) and helps a lot!

SOME THINGS TO TRY TO AVOID:

  1. Avoid vague descriptions of what’s going on, and using the word “it”. Troubleshooter helpers have no idea which “it” or what “thing” you’re referring to, unless you tell them. Nouns are good :) . Saying “I am doing a thing, and it stopped working/doesn’t work” requires someone to play the game of 20 questions to draw out the above level of detail, before they can even start to answer your question of what to do next.
  2. Don’t get upset at people/blame people. Remember, most of the DIY diabetes projects are created by people who donated their work so others could use it, and many continue to donate their time to help other people. That’s time away from their families and lives. So even if you get frustrated, try to be polite. If you get upset, you’re likely to alienate potential helpers and revert into vagueness (“but it doesn’t work!”) which further hinders troubleshooting. And, remember, although these tools are awesome and make a big difference in your life – a few minutes, or a few hours, or a few days without them will be ok. We’d all prefer not to go without, which is why we try to help each other, but it’s ok if there’s a gap in use time. You have good baseline diabetes skills to fall back on during this time. If you’re feeling overwhelmed, turn off the DIY technology, go back to doing things the way you’re comfortable, and come back and troubleshoot further when you’re no longer feeling overwhelmed.
  3. Don’t go radio silent: report back what you tried and if it worked. One of the benefits of these channels is many people are watching and learning alongside you; and the troubleshooters are also learning, too. Everything from “describing the steps ABC way causes confusion, but saying XYZ seems to be more clear” and even “oh wow, we found a bug, 123 no longer is ideal and we should really do 456.” Reporting back what you tried and if it resolved your issue or not is a very simple way to pay it forward and keep the community’s knowledge base growing!
  4. Try not to get annoyed if someone helping out asks you to switch channels to continue troubleshooting. Per the above, sometimes one channel has benefits over the other. It may not be your favorite, but it shouldn’t hurt you to switch channels for a few minutes to resolve your issue.
  5. Don’t wait until you’re “done” to pay it forward. You definitely have things to contribute as you go, too! Don’t wait until you’re done to make edits (PRs) to the documentation. Make edits while they’re fresh in your mind (and it’s a good thing to do while you’re waiting for things to install/compile ;)).

These are the tips that come to mind as I think about how to help people seek help more successfully online in DIY diabetes projects. What tips would you add?

Making it possible for researchers to work with #OpenAPS or general Nightscout data – and creating a complex json to csv command line tool that works with unknown schema

This is less of an OpenAPS/DIYPS/diabetes-related post, although that is normally what I blog about. However, since we created the #OpenAPS Data Commons on Open Humans to allow those of us who desire to donate our diabetes data to research, I have been spending a lot of time figuring out the process, from uploading your data to how data is managed and shared securely with researchers. The hardest part is helping researchers figure out how to handle the data – because we PWDs produce a lot of data :) . So this post explains some of the challenges of managing the data to get it into a researcher-friendly format. I have been greatly helped over the years by general purpose open-source work from other people, and one of the things that helps ME the most as a non-traditional programmer is plain language posts explaining the thought process behind the tools and the attempted solution paths. Especially because sometimes the web pages and blog posts pop higher in search than nitty gritty tool documentation without context. (Plus, I’ve been taking my own advice about not holding myself back from trying, even when I don’t know how to do things yet.) So that’s what this post is!

[Image: “OH that I ‘certainly stress tested’ a tool with lots of data”]

Background/inspiration for the project and the tools I had to build:

We’re using Nightscout, which is a remote data-viewing platform for diabetes data, made with love, open source, and freely available for anyone with diabetes to use. It’s one of the best ways to display not only continuous glucose monitor (CGM) data, but also data from our DIY closed loop artificial pancreases (#OpenAPS). It can store data from a number of different kinds and brands of diabetes devices (pumps, CGMs, manual data entries, etc.), which means it’s a rich source of data. As the number of DIY OpenAPS users grows, we estimate that our real-world use is overtaking the total hours of data from clinical trials of closed loop artificial pancreas systems. In the #WeAreNotWaiting spirit of moving quickly (rather than waiting years for research teams to collect and analyze their own data), we want to see what we can learn from OpenAPS usage, not only by donating data to help traditional researchers speed up their work, but also by co-designing research studies of the things of most value to the diabetes community.

Step 1: Data from users to Open Humans

I thought Step 1 would be the hardest. However, thanks to Madeleine Ball, John Costik, and others in the Nightscout community, a simple Nightscout Data Transfer App was created that enables people with Nightscout data to pop it into their Open Humans accounts. It’s then very easy to join different projects (like the OpenAPS Data Commons) and share your data with those projects. And as the volunteer administrator of the OpenAPS Data Commons, it’s also easy for me to provide data to researchers.

The biggest challenge at this stage was figuring out how much data to pull from the API. I have almost 3 years worth of DIY diabetes data, and I have numerous devices over time uploading all at once…which makes for large chunks of data. Not everyone has this much data (or 6-7 rigs uploading constantly ;)). Props to Madeleine for the patience in working with me to make sure the super users with large data sets will be able to use all of these tools!

Step 2: Sharing the data with researchers

This was easy. Yay for data-sharing tools like Dropbox.

Step 3: Researchers being able to use the data

Here’s where things started to get interesting. We have large data files that come in json format from Nightscout. I know some researchers we will be working with are probably very comfortable with tools that can take large, complex json files. However…not all will be, especially because we also want to encourage independent researchers to engage with the data for projects. So I had the belated realization that we needed to do something other than hand over json files. We needed to convert, at the least, to csv so the data can be easily viewed in Excel.

Sounds easy, right?

According to basic searches, there are roughly a gazillion ways to convert json to csv. There are even websites that will do it for you, without making you run anything on the command line. However, most of them require you to know the types of data, and the number of types, in advance, in order to construct headers in the csv file that make it readable and useful to a human.

This is where the DIY and infinite-possibility nature of all the kinds of diabetes tools anyone could be using with Nightscout – plus the infinite ways they can self-describe profiles, alarms, and methods of entering data – makes it tricky. Just from eyeballing the data of two individuals, I couldn’t even count the hundred-plus types of data entry possibilities. This is definitely a job for the computer, but I had to figure out how to train the computer to deal with it.
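
For the curious, the core idea of “training the computer” is schema discovery: walk every record, collect the union of all the flattened key paths you see, and use that union as the csv header. Here’s a minimal sketch of that idea in javascript – a simplified illustration, not the actual code of the tool described below, and it assumes the file contains a single json array:

```javascript
// flatten-sketch.js – illustration of schema discovery for unknown json; not the real complex-json2csv.js
// Usage (hypothetical filenames): node flatten-sketch.js entries.json > entries.csv
const fs = require('fs');

// Recursively flatten one record into { "path.to.key": value } pairs
function flatten(obj, prefix = '', out = {}) {
  for (const [key, value] of Object.entries(obj)) {
    const path = prefix ? `${prefix}.${key}` : key;
    if (value && typeof value === 'object') {
      flatten(value, path, out);
    } else {
      out[path] = value;
    }
  }
  return out;
}

const records = JSON.parse(fs.readFileSync(process.argv[2], 'utf8')); // assumes a json array of records
const flattened = records.map(r => flatten(r));

// The header is the union of every key path seen in any record
const columns = [...new Set(flattened.flatMap(r => Object.keys(r)))];

// Naive csv escaping via JSON.stringify – good enough for a sketch, not for production
const rows = flattened.map(r => columns.map(c => JSON.stringify(r[c] ?? '')).join(','));

console.log([columns.join(','), ...rows].join('\n'));
```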

Again, json to csv tools are so common I figured there HAD to be someone who had done this. Finally, after a dozen varying searches and trying a variety of command line tools, I found one web-based tool that would take json, create the schema without knowing the data types in advance, and convert it to csv. It was (is) super slick. I got very excited when I saw it linked to a Github repository, because that meant it was probably open source and I could use it. I didn’t see any instructions for how to use it on the command line, though, so I messaged the author on Twitter and found out that a command line version didn’t yet exist and was a not-yet-done TODO for him.

Sigh. Given this whole #WeAreNotWaiting thing (and given I’ve promised to help some of the researchers in figuring this out so we can initiate some of the research projects), I needed to figure out how to convert this tool into a command line version.

So, I did.

  • I taught myself how to unzip json files (ended up picking `gzip -cd`, because it works on both Mac and Linux)
  • I planned to then convert the web tool to be able to work on the command line, and use it to translate the json files to csv.

But..remember the big file issue? It struck again. So I first had to figure out the best way to estimate the size and splice or split the json into a series of files, without splitting it in a weird place and messing up the data. That became jsonsplit.sh, a tool to split a json file based on the size you give it (and if you don’t specify, it defaults to something like 100000 records).

FWIW: 100,000 records was too much for the more complex schema of the data I was working with, so I often did it in smaller chunks, but you can set it to whatever size you prefer.
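
To show what “splitting without messing up the data” means in practice, here’s a rough sketch of the idea behind jsonsplit.sh – written as node/javascript for illustration (the real tool is a shell script), and assuming the input is one big json array:

```javascript
// split-sketch.js – illustration only; the real jsonsplit.sh lives in the OpenHumansDataTools repo
// Usage (hypothetical filenames): node split-sketch.js entries.json 50000
const fs = require('fs');

const inputFile = process.argv[2];
const chunkSize = Number(process.argv[3]) || 100000; // mirrors the ~100,000-record default mentioned above

const records = JSON.parse(fs.readFileSync(inputFile, 'utf8')); // assumes a json array

for (let i = 0; i < records.length; i += chunkSize) {
  const chunk = records.slice(i, i + chunkSize);
  const outFile = `${inputFile}.part${Math.floor(i / chunkSize) + 1}.json`;
  // Each output file is a complete, valid json array – no record gets split mid-object
  fs.writeFileSync(outFile, JSON.stringify(chunk));
  console.log(`${outFile}: ${chunk.length} records`);
}
```

(Of course, a sketch like this still has to hold the whole file in memory to parse it, which is exactly why chunk sizes and streaming matter once the data gets really big.)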

So now “all” I had to do was:

  • Unzip the json
  • Break it down if it was too large, using jsonsplit.sh
  • Convert each of these files from json to csv

Phew. Each of these looks really simple now, but took a good chunk of time to figure out. Luckily, the author of the web tool had done much of the hard json-to-csv work, and Scott helped me figure out how to take the html-based version of the conversion and make it usable on the command line using javascript. That became complex-json2csv.js.

Because I knew how hard this all was, and wanted other people to be able to easily use this tool if they had large, complex json with unknown schema to deal with, I created a package.json so I could publish it to npm so you can download and run it anywhere.

I also had to create a script that would pass it all of the Open Humans data; unzip the file; run jsonsplit.sh, run complex-json2csv.js, and organize the data in a useful way, given the existing file structure of the data. Therefore I also created an “OpenHumansDataTools” repository on Github, so that other researchers who will be using Nightscout-based Open Humans data can use this if they want to work with the data. (And, there may be something useful to others using Open Humans even if they’re not using Nightscout data as their data source – again, see “large, complex, challenging json since you don’t know the data type and count of data types” issue. So this repo can link them to complex-json2csv.js and jsonsplit.sh for discovery purposes, as they’re general purpose tools.) That script is here.

My next TODO will be to write a script to take only slices of data based on information shared as part of the surveys that go with the Nightscout data; i.e. if you started your DIY closed loop on X date, take data from 2 weeks prior and 6 weeks after, etc.

I also created a pull request (PR) back to the original tool that inspired my work, in case he wants to add it to his repository for others who also want to run his great stuff from the command line. I know my stuff isn’t perfect, but it works :) and I’m proud of being able to contribute to general-purpose open source in addition to diabetes-specific open source work. (Big thanks as always to everyone who devotes their work to open source for others to use!)

So now, I can pass researchers json or csv files for use in their research. We have a number of studies planning to request access to the OpenAPS Data Commons, and I’m excited about how work like this, to make diabetes data more broadly available for research, will help improve our lives in the short and long term!

The only thing to fear is fear itself

(Things I didn’t realize were involved in open-sourcing a DIY artificial pancreas: writing “yes you can” style self-help blog posts to encourage people to take the first step to TRY and use the open source code and instructions that are freely available….for those who are willing to try.)

You are the only thing holding yourself back from trying. Maybe it’s trying to DIY closed loop at all. Maybe it’s trying to make a change to your existing rig that was set up a long time ago.  Maybe it’s doing something your spouse/partner/parent has previously done for you. Maybe it’s trying to think about changing the way you deal with diabetes at all.

Trying is hard. Learning is hard. But even harder (I think) is listening to the negative self-talk that says “I can’t do this” and perhaps going without something that could make a big difference in your daily life.

99% of the time, you CAN do the thing. But it primarily starts with being willing to try, and being ok with not being perfect right out of the gate.

I blogged last year (wow, almost two years ago actually) about making and doing and how I’ve learned to do so many new things as part of my OpenAPS journey that I never thought possible. I am not a traditional programmer, developer, engineer, or anything like that. Yes, I can code (some)…because I taught myself as I went and continue to teach myself as I go. It’s because I keep trying, and failing, then trying, and succeeding, and trying some more and asking lots of questions along the way.

Here’s what I’ve learned in 3+ years of doing DIY, technical diabetes things that I never thought I’d be able to accomplish:

  1. You don’t need to know everything.
  2. You really don’t particularly need to have any technical “ability” or experience.
  3. You DO need to know that you don’t know it all, even if you already know a thing or two about computers.
  4. (People who come into this process thinking they know everything tend to struggle even more than people who come in humble and ready to learn.)
  5. You only need to be willing to TRY, try, and try again.
  6. It might not always work on the first try of a particular thing…
  7. …but there’s help from the community to help you learn what you need to know.
  8. The learning is a big piece of this, because we’re completely changing the way we treat our diabetes when we go from manual interventions to a hybrid closed loop (and we learned some things to help do it safely).
  9. You can do this – as long as you think you can.
  10. If you think you can’t, you’re right – but it’s not that you can’t, it’s that you’re not willing to even try.

This list of things gets proved out to me on a weekly basis.

I see many people look at the #OpenAPS docs and think “I can’t do that” (and tell me this) and not even attempt to try.

What’s been interesting, though, is how many non-technical people jumped in and gave autotune a try. Even with the same level of no technical ability, several people jumped in, followed the instructions, asked questions, and were able to spin up a Linux virtual machine and run beta-level (brand new, not by any means perfect) code and get output and results. It was amazing, and really proved all those points above. People were deeply interested in getting the computer to help them, and it did. It sometimes took some work, but they were able to accomplish it.

OpenAPS, or anything else involving computers, is the same way. (And OpenAPS is even easier than most anything else that requires coding, in my opinion.) Someone recently estimated that setting up OpenAPS takes only 20 mouse clicks; 29 copy and paste lines of code; 10 entries of passwords or logins; and probably about 15-20 random small entries at prompts (like your NS site address or your email address or wifi addresses). There’s a reference guide, documentation that walks you through exactly what to do, and a supportive community.

You can do it. You can do this. You just have to be willing to try.

Autotune (automatically assessing basal rates, ISF, and carb ratio with #OpenAPS – and even without it!)

What if, instead of guessing at the needed changes to basal rates, ISF, and carb ratios (the current most-used method)…we could use data to empirically determine how these settings should be adjusted?

Meet autotune.

[Image: What if we could use data to determine basal rates, ISF, and carb ratio? Meet autotune]

Historically, most people have guessed basal rates, ISF, and carb ratios. Their doctors may use things like the “rule of 1500” or “1800” or body weight. But, that’s all a general starting place. Over time, people have to manually tweak these underlying basals and ratios in order to best live life with type 1 diabetes. It’s hard to do this manually, and know if you’re overcompensating with meal boluses (aka an incorrect carb ratio) for basal, or over-basaling to compensate for meal times or an incorrect ISF.
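
To make that concrete: the “rule of 1800” estimates ISF by dividing 1800 by your total daily dose of insulin, so someone using about 50 units a day would get an estimated ISF of 1800 / 50 = 36 mg/dL per unit. That’s a rough, population-level starting point – not a personally tuned value – which is exactly the gap data could help close.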

And why do these values matter?

It’s not just about manually dosing with this information. Importantly, for most DIY closed loops (like #OpenAPS), dose adjustments are made based on the underlying basals, ISF, and carb ratio. For someone with reasonably well-tuned basals and ratios, that works great. But for someone whose values are way off, it means the system can’t help them adjust as much as it could for someone with well-tuned values. It’ll still help, but it’ll be a fraction as powerful as it could be for that person.

There wasn’t much we could do about that…at first. We designed OpenAPS to fall back to whatever values people had in their pumps, because that’s what the person/their doctor had decided was best. However, we know some people’s settings aren’t that great, for a variety of reasons. (Growth, activity changes, hormonal cycles, diet and lifestyle changes – to name a few. Aka, life.)

With autosensitivity, we were able to start to assess when actual BG deltas were off compared to what the system predicted should be happening. And with that assessment, it would dynamically adjust ISF, basals, and targets accordingly. However, a common reaction was people seeing the autosens result (based on 24 hours of data) and assuming that meant their underlying ISF/basal should be changed. But that’s not the case, for two reasons. First, a 24-hour period shouldn’t be what determines those changes. Second, with autosens we cannot tell apart the effects of basals vs. the effects of ISF.

Autotune, by contrast, is designed to iteratively adjust basals, ISF, and carb ratio over the course of weeks – based on a longer stretch of data. Because it makes changes more slowly than autosens, autotune ends up drawing on a larger pool of data, and is therefore able to differentiate whether and how basals and/or ISF need to be adjusted, and also whether carb ratio needs to be changed. Whereas we don’t recommend changing basals or ISF based on the output of autosens (because it’s only looking at 24h of data, and can’t tell apart the effects of basals vs. the effect of ISF), autotune is intended to be used to help guide basal, ISF, and carb ratio changes because it’s tracking trends over a large period of time.

Ideally, for those of us using DIY closed loops like OpenAPS, you can run autotune iteratively inside the closed loop, and let it tune basals, ISF, and carb ratio nightly and use those updated settings automatically. Like autosens, and everything else in OpenAPS, there are safety caps: none of these parameters can be tuned beyond 20-30% from the underlying pump values. If someone’s autotune keeps recommending the maximum (20% more resistant, or 30% more sensitive) change over time, then it’s worth a conversation with their doctor about whether the underlying values need changing on the pump – and the person can take this report in to start the discussion.
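
(As a rough illustration of those caps: a pump basal rate of 1.0 U/hr in a given hour could only be autotuned to somewhere around 0.7-1.2 U/hr. Recommendations that keep pinning against those limits, day after day, are the cue to have that settings conversation.)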

Not everyone will want to let it run iteratively, though – not to mention, we want it to be useful to anyone, regardless of which DIY closed loop they choose to use – or not! Ideally, this can be run as a one-off by anyone with Nightscout data of BG and insulin treatments. (Note – I wrote this blog post on a Friday night saying “There’s still some more work that needs to be done to make it easier to run as a one-off (and test it with people who aren’t looping but have the right data)…but this is the goal of autotune!” And by Saturday morning, we had volunteers who sat down with us and within 1-2 hours had it figured out and documented! True #WeAreNotWaiting. :))

And from what we know, this may be the first tool to help actually make data-driven recommendations on how to change basal rates, ISF, and carb ratios.

How autotune works:

Step 1: Autotune-prep

  • Autotune-prep takes three things initially: glucose data; treatments data; and starting profile (originally from pump; afterwards autotune will set a profile)
  • It calculates BGI and deviation for each glucose value based on treatments
  • Then, it categorizes each glucose value as attributable to either carb sensitivity factor (CSF), ISF, or basals
  • To determine if a “datum” is attributable to CSF, carbs on board (COB) are calculated and decayed over time based on observed BGI deviations, using the same algorithm used by Advanced Meal Assist. Glucose values after carb entry are attributed to CSF until COB = 0 and BGI deviation <= 0. Subsequent data is attributed to ISF or basals.
  • If BGI is positive (meaning insulin activity is negative), BGI is smaller than 1/4 of basal BGI, or average delta is positive, that data is attributed to basals.
  • Otherwise, the data is attributed to ISF.
  • All this data is output to a single file with 3 sections: ISF, CSF, and basals.

Step 2: Autotune-core

  • Autotune-core reads the prepped glucose file with 3 sections. It calculates what adjustments should be made to ISF, CSF, and basals accordingly.
  • For basals, it divides the day into hour-long increments. It calculates the total deviations for that hour increment and calculates what change in basal would be required to adjust those deviations to 0. It then applies 20% of that needed change to the three hours prior (because of insulin impact time). If increasing basal, it increases each of the 3 hourly increments by the same amount. If decreasing basal, it does so proportionally, so the biggest basal is reduced the most. (A simplified sketch of this logic follows after this list.)
  • For ISF, it calculates the 50th percentile deviation for the entire day and determines how much ISF would need to change to get that deviation to 0. It applies 10% of that as an adjustment to ISF.
  • For CSF, it calculates the total deviations over all of the day’s mealtimes and compares to the deviations that are expected based on existing CSF and the known amount of carbs entered, and applies 10% of that adjustment to CSF.
  • Autotune applies a 20% limit on how much a given basal, or ISF or CSF, can vary from what is in the existing pump profile, so that if it’s running as part of your loop, autotune can’t get too far off without a chance for a human to review the changes.
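
To make the basal piece of that more concrete, here’s a deliberately simplified sketch of the hourly logic described above, written in javascript. The variable names and structure are illustrative only – the real implementation lives in oref0 and handles many more details and edge cases:

```javascript
// Simplified sketch of the hourly basal adjustment described above – not the oref0 implementation.
// hourlyDeviations[h] = summed BG deviations (mg/dL) attributed to basal during hour h
// isf = insulin sensitivity factor (mg/dL per U); basals are in U/hr, indexed 0-23 by hour
function tuneBasals(hourlyDeviations, isf, currentBasals, pumpBasals) {
  const tuned = [...currentBasals];
  for (let h = 0; h < 24; h++) {
    // Insulin change that would bring this hour's deviations to zero...
    const fullAdjustment = hourlyDeviations[h] / isf;
    // ...but only apply 20% of it, spread across the three hours prior (insulin impact time)
    const adjustment = 0.2 * fullAdjustment;
    const priorHours = [1, 2, 3].map(offset => (h - offset + 24) % 24);
    if (adjustment >= 0) {
      // Increases are applied equally to each of the three prior hours
      priorHours.forEach(p => { tuned[p] += adjustment / 3; });
    } else {
      // Decreases are applied proportionally, so the biggest basal is reduced the most
      const total = priorHours.reduce((sum, p) => sum + tuned[p], 0);
      if (total > 0) {
        priorHours.forEach(p => { tuned[p] += adjustment * (tuned[p] / total); });
      }
    }
  }
  // Cap every hour to within ~20% of the existing pump profile (the safety limit described above)
  return tuned.map((rate, h) =>
    Math.min(pumpBasals[h] * 1.2, Math.max(pumpBasals[h] * 0.8, rate)));
}
```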

(See more about how to run autotune here in the OpenAPS docs.)

What autotune output looks like:

Here’s an example of autotune output.

[Image: OpenAPS autotune example output, by @DanaMLewis]

Autotune is one of the things Scott and I spent time on over the holidays (and hinted about at the end of my development review of 2016 for OpenAPS). As always with #OpenAPS, it’s awesome to take an idea, get it coded up, get it tested with some early adopters/other developers within days, and continue to improve it!

[Image: someone successfully using autotune to help adjust baseline settings]

A big thank you to those who’ve been testing and helping iterate on autotune (and of course, all other things OpenAPS). It’s currently in the dev branch of oref0 for anyone who wants to try it out, either as a one-off or as part of their dev loop. Documentation is currently here, and this is the issue in Github for logging feedback/input, along with sharing and asking questions as always in Gitter!


OpenAPS feature development in 2016

It’s been two years since my first DIY closed loop and almost two years since OpenAPS (the vision and resulting ecosystem to help make artificial pancreas technology, DIY or otherwise, more quickly available to more people living with diabetes) was created. I’ve spent time here (on DIYPS.org) talking about a variety of things that are applicable to people who are DIY closed looping, but also focusing on things (like how to “soak” a CGM sensor and how to do “eating soon” mode) that may be (in my opinion) universally applicable.

[Image: OpenAPS feature development in 2016]

However, I think it’s worth recapping some of the amazing work that’s been done in the OpenAPS ecosystem over the past year, sometimes behind the scenes, because there are some key features and tools that have been added in that seem small, but are really impactful for people living with DIY closed loops.

  1. Advanced meal assist (aka AMA)
    1. This is an “advanced feature” that can be turned on by OpenAPS users and, with reliable entry of carb information, will help the closed loop assist sooner with a post-meal BG rise where there is mis-timed or insufficient insulin coverage for the meal. It’s easy to use, because the PWD only has to enter carbs and a bolus – then AMA acts based on the observed absorption. This means that if absorption is delayed because you walk home from dinner, have gastroparesis, etc., it backs off and waits until the carbs actually start taking effect (even if that is later than the human would expect).
    2. We also now have the purple line predictions back in Nightscout to visualize some of these predictions. This is a hallmark of the original iob-cob branch in Nightscout that Scott and I originally created, that took my COB calculated by DIYPS and visualized the resulting BG graph. With AMA, there are actually 3 purple lines displayed when there is carb activity. As described here in the OpenAPS docs, the top purple line assumes 10 mg/dL/5m carb (0.6 mmol/L/5m) absorption and is most accurate right after eating before carb absorption ramps up. The line that is usually in the middle is based on current carb absorption trends and is generally the most accurate once carb absorption begins; and the bottom line assumes no carb absorption and reflects insulin only. Having the 3 lines is helpful for when you do something out of the ordinary following a meal (taking a walk; taking a shower; etc.) and helps a human decide if they need to do anything or if the loop will be able to handle the resulting impact of those decisions.
  2. The approach with a “preferences” file
    1. This is the file where people can adjust default safety and other parameters, like maxIOB, which defaults to 0 during a standard setup – ultimately creating a low-glucose-suspend-mode closed loop when people are first setting up their closed loops. People have to intentionally change this setting to allow the system to set high temps above a net IOB of 0, which is an intended safety-first approach. (A hypothetical example of what this preferences file might look like follows after this list.)
    2. One particular feature (“override_high_target_with_low”) makes it easier for secondary caregivers (like school nurses) to do conservative boluses at lunch/snack time and allow the closed loop to pick up from there. The secondary caregiver can use the bolus wizard, which will correct down to the high end of the target; and setting this value in preferences to “true” allows the closed loop to target the low end of the target. Based on anecdotal reports from those using it, this feature sounds like it’s prevented a lot of (unintentional – diabetes is hard) overreacting by secondary caregivers, since the closed loop can more easily deal with BG fluctuations. The same goes for “carbratio_adjustmentratio”: if parents would prefer secondary caregivers to bolus with a more conservative carb ratio, this can be set so the closed loop ultimately uses the correct carb amount for any additional calculations needed.
  3. Autosensitivity
    1. I’ve written about autosensitivity before and how impressive it has been in the face of a norovirus and not eating to have the closed loop detect excessive sensitivity and be able to deal with it – resulting in 0 lows. It’s also helpful during other minor instances of sensitivity after a few active days; or resistance due to hormone cycles and/or an aging pump site.
    2. Autosens is a feature that has to be turned on specifically (like AMA) in order for people to utilize it, because it makes adjustments to ISF and targets and loops accordingly using those values. It also has safety caps that are set and automatically included to limit the amount of adjustment, in either direction, that autosens can make to any of the parameters.
  4. Tiny rigs
    1. Thanks to Intel, we were introduced to a board designer who collaborated with the OpenAPS community and inspired the creation of the “Explorer Board”. It’s a multipurpose board that can be used for home automation and all kinds of things, and it’s another tool in the toolbox of off-the-shelf and commercial hardware that can be used in an OpenAPS setup. Thanks to the built-in radio, it’s enabled us to drastically reduce the size of an OpenAPS setup to about the size of two Chapsticks.
  5. Setup scripts
    1. As soon as we were working on the Explorer Board, I envisioned that it would be a game changer for increasing access for those who thought a Pi was too big/too burdensome for regular use with a DIY closed loop system. I knew we had a lot of work to do to cut down on the friction of the setup process – while balancing that with the fact that the DIY part of setting up a closed loop system was, and still is, incredibly important. We then worked to create the oref0-setup script to streamline the setup process. For anyone building a loop, you still have to set up your hardware and build a system, expressing intention in many places about what you want to do and how…but it’s cut down on a lot of friction and increased the amount of energy people have left, which can instead be focused on reading the code and understanding the underlying algorithm(s) and features they are considering using.
  6. Streamlined documentation
    1. The OpenAPS “docs” are an incredible labor of love and a testament to dozens and dozens of people who have contributed by sharing their knowledge about hardware, software, and the process it takes to weave all of these tools together. It has gotten to be very long, but given the advent of the Explorer Board hardware and the setup scripts, we were able to drastically streamline the docs and make it a lot easier to go from phase 0 (get and setup hardware, depending on the kind of gear you have); to phase 1 (monitoring and visualizing tools, like Nightscout); to phase 2 (actually setup openaps tools and build your system); to phase 3 (starting with a low glucose suspend only system and how to tune targets and settings safely); to phase 4 (iterating and improving on your system with advanced features, if one so desires). The “old” documentation and manual tool descriptions are still in the docs, but 95% of people don’t need them.
  7. IFTTT and other tool integrations
    1. It’s definitely worth calling out the integration with IFTTT that allows people to use things like Alexa, Siri, Pebble watches, Google Assistant (and just about anything else you can think of), to easily enter carbs or “modes” for OpenAPS to use, or to easily get information about the status of the system. (My personal favorite piece of this is my recent “hack” to automatically have OpenAPS trigger a “waking up” mode to combat hormone-driven BG increases that happen when I start moving around in the morning – but without having to remember to set the mode manually!)
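
To make the “preferences” discussion in item 2 above a bit more concrete, here’s a hypothetical example of the kind of settings that live in that file. The values are made up, and the exact key names and defaults can vary by oref0 version, so always check the OpenAPS docs before editing yours:

```json
{
  "max_iob": 0,
  "override_high_target_with_low": true,
  "carbratio_adjustmentratio": 1.2,
  "autosens_max": 1.2,
  "autosens_min": 0.7
}
```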

..and that was all just things the community has done in 2016! :) There are some other exciting things that are in development and being tested right now by the community, and I look forward to sharing more as this advanced algorithm development continues.

Happy New Year, everyone!

Automating “wake up” mode with IFTTT and #OpenAPS to blunt morning hormonal rises

tl;dr – automate a trigger to your #OpenAPS rig to start “wake up” mode (or “eating soon”, assuming you eat breakfast) without you having to remember to do it.

Yesterday morning, I woke up and headed to my desk to start working. Because I’m getting some amazing flat line overnights now, thanks to my DIY closed loop (#OpenAPS), I’m more attuned to the fact that after I wake up and start moving around, my hormones kick in to help wake me up (I guess), and I have a small BG rise that’s not otherwise explained by anything else. (It’s not a baseline basal problem, because it happens after I wake up whether that’s at 6am, 8am, or even 10:30am if I sleep in on a weekend. It’s also more pronounced when I feel sleep deprived, as if my body is working even harder to wake me up.)

Later in the morning, I took a break to jot down my thoughts in response to a question about normal meal rises on #OpenAPS and strategies to optimize mealtimes. It occurred to me later, after being hyper attuned to my lunch results, that my morning wake-up rise from a perfectly flat 100 up to ~140 was higher than the 131 peak I hit after my lunchtime bowl of potato soup.

Hmm, I thought. I wish there was something I could do to help with those morning rises. I often do a temporary target down to 80 mg/dL (a la “eating soon” mode) once I spot the rise, but that’s after it’s already started and very dependent on me paying attention/noticing the rise.

I also have a widely varied schedule (and travel a lot), so I don’t like the idea of scheduling the temp target, or of recurring calendar events that are yet another thing to babysit and change constantly.

What I want is something that is automatically triggered when I wake up, so whether I pop out of bed or read for 15 minutes first, it kicks in automatically and I (the non-morning person) don’t have to remember to do one more thing. And the best trigger I could think of is when I end Sleep Cycle, the sleep tracking app I use.

I started looking online to see if there was an easy IFTTT integration with Sleep Cycle. (There’s not. Boo.) So I started looking to see if I could stick my Sleep Cycle data somewhere else that could be used with IFTTT. I stumbled across this blog post describing Sleep Cycle -> iOS Apple HealthKit -> UP -> Google Spreadsheet -> Zapier -> Google Calendar, and thought I could add another IFTTT trigger for when the calendar entry was added, to then send “waking up” mode to #OpenAPS. But I don’t need all of the calendar steps. The ideal recipe for me would be Sleep Cycle -> iOS HealthKit -> UP -> IFTTT sends “waking up” mode -> Nightscout -> my rig. However, I then learned that UP doesn’t necessarily sync the data from HealthKit automatically unless the app is open. Hmm. More rabbit holing. Thanks to the tweet-a-friend option, I talked to Ernesto Ramirez (long time QS guru and now at Fitabase), who found the same blog post I did (above) and, when I described the constraints, pointed me to Hipbone to grab HealthKit sleep data and stuff it into Dropbox.

(Why Sleep Cycle? It’s my main sleep tracker, but there’s IFTTT integration with Fitbit, Jawbone UP, and a bunch of other stuff, so if you’re interested in this, figure out how to plug your own tracker’s data into IFTTT, then follow the OpenAPS docs for using IFTTT to get data into Nightscout for OpenAPS, and you’ll be all set. I’m trying to avoid going back to my Fitbit as the sleep tracker, since I’m wearing my Pebble and was tired of wearing two things. And for some reason my Pebble is inconsistent and slow about showing sleep data in the morning, so it’s not reliable for this purpose.)

Here’s how I have enabled this “wake up” mode trigger for now:

  1. If you’re using Sleep Cycle, enable it to write sleep analysis data to Apple HealthKit.
  2. Download the Hipbone app for iPhone, connect it with your Dropbox, and allow Hipbone to read sleep data from HealthKit.
  3. Log in or create an account at IFTTT.com and create a recipe using Dropbox as the trigger, and Maker as the action to send a web request to Nightscout. (Again, see the OpenAPS docs for using IFTTT triggers to post to Nightscout; there are all kinds of great things you can do with your Pebble, Alexa, etc. thanks to IFTTT.) To start, I made “waking up” mode a temporary target of 80 mg/dL for 30 minutes. (A sketch of what that web request might look like is below.)
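For the curious, here’s a rough sketch of the kind of web request the Maker action ends up sending to Nightscout, written as a small Python script rather than in IFTTT’s own interface. The URL and API secret are placeholders, and the field names should be double-checked against the OpenAPS/Nightscout docs rather than treated as a drop-in recipe:

```python
# Rough sketch of posting a "waking up" temporary target to Nightscout.
# The URL and secret below are placeholders; verify field names against
# the OpenAPS docs on IFTTT/Nightscout integration before relying on this.
import hashlib
import requests

NIGHTSCOUT_URL = "https://your-nightscout-site.example.com"  # placeholder
API_SECRET = "your-api-secret"                               # placeholder

treatment = {
    "eventType": "Temporary Target",
    "reason": "waking up",
    "targetBottom": 80,  # mg/dL
    "targetTop": 80,     # mg/dL
    "duration": 30,      # minutes
    "enteredBy": "IFTTT-maker",
}

response = requests.post(
    f"{NIGHTSCOUT_URL}/api/v1/treatments.json",
    json=treatment,
    # Nightscout expects the SHA-1 hash of the API secret in this header
    headers={"api-secret": hashlib.sha1(API_SECRET.encode()).hexdigest()},
)
response.raise_for_status()
```

Once the temporary target shows up as a treatment in Nightscout, the rig picks it up on a subsequent loop cycle and adjusts accordingly.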

Guess what? This morning, I woke up, ended Sleep Cycle, and ~10-11 minutes later got notifications that I had new data in Dropbox, checked, and found “waking up” mode showing in Nightscout! Woohoo. And it worked well for preventing a hormone-driven BG rise after I started moving around.

First "waking up" mode in #OpenAPS automation success

Ideally, this would run immediately, and not take 10-11 minutes, but it ran automatically without me having to open Hipbone (or any other app), so this is a great interim solution for me until we find an app that will pull the sleep data from HealthKit more quickly.

We keep finding great ways to use IFTTT triggers, so if you have any other cool ones you’ve added to your DIY closed loop ecosystem, please let me know!

Autosensitivity (automatically adjusting insulin sensitivity factor for insulin dosing with #OpenAPS)

There’s a secret behind why #OpenAPS was able to deal so well with my BGs during norovirus. Namely, “autosensitivity”.

Autosensitivity (or “autosens”, for short hand) is an advanced feature that can optionally be enabled in OpenAPS.

We know how hard it is for a PWD (person with diabetes) to pay attention to all the numbers and all the things and realize when something is “off”. This could be a bad pump site, a pump site going bad, hormones from growth, hormones from menstrual cycles, sensitivity from exercise the day before, etc. So at the beginning of the year, Scott and I started brainstorming with the community about automatically detecting when the PWD is more or less sensitive to insulin than normal, and adjusting accordingly. Building on the success we’d had in DIYPS with fixed “sensitivity” and “resistance” modes, we built the feature to assess how sensitive or resistant the body is (compared to normal), rather than just a binary mode that sets a predefined response.

How OpenAPS calculates autosensitivity/how it works

It looks at each BG data point from the last 24 hours and calculates the delta (the actual observed change) over the previous 5 minutes. It then compares that delta to “BGI” (blood glucose impact, which is how much BG *should* be dropping from insulin alone), and assesses the “deviations” (the differences between the delta and the BGI).

When sensitivity is normal and basals are well tuned, we expect somewhere between 45-50% of non-meal deviations to be negative, and the remaining 50-55% of deviations to be positive. (To exclude meal-related deviations, overly large deviations are excluded from the sample.) So if you’re outside of that range, you’re probably running sensitive or resistant, and we want to adjust accordingly. The output of the detect-sensitivity code is a single ratio number, which is then used to adjust both the baseline basal rate and the insulin sensitivity factor (and, optionally, BG targets).
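To make that concrete, here’s a highly simplified sketch of the idea in Python – not the actual oref0 implementation (which is written in JavaScript and handles many more edge cases, data gaps, and safety caps), just an illustration of counting deviations and turning them into a single ratio. The function name, meal-exclusion threshold, and step sizes are made up for the example:

```python
# Illustrative sketch of the autosens idea -- NOT the actual oref0 code.
# Assumes bgs is a list of CGM readings (mg/dL) at 5-minute intervals over
# the last 24 hours, and bgis[i] is the BG change expected from insulin
# alone over the 5-minute interval ending at bgs[i].

def estimate_sensitivity_ratio(bgs, bgis, meal_exclusion_threshold=12):
    """Return one ratio: < 1.0 means more sensitive than usual, > 1.0 more resistant."""
    deviations = []
    for i in range(1, len(bgs)):
        delta = bgs[i] - bgs[i - 1]    # actual observed 5-minute change
        deviation = delta - bgis[i]    # change not explained by insulin alone
        # Crudely exclude overly large (meal-sized) deviations from the sample.
        if abs(deviation) < meal_exclusion_threshold:
            deviations.append(deviation)

    if not deviations:
        return 1.0

    pct_negative = sum(1 for d in deviations if d < 0) / len(deviations)

    if pct_negative > 0.50:  # food-free downward drift: running sensitive
        return 0.9           # illustrative step size only
    if pct_negative < 0.45:  # unexplained upward drift: running resistant
        return 1.1           # illustrative step size only
    return 1.0               # ~45-50% negative deviations looks well tuned
```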

Autosens is designed to detect food-free downward drift, due to basal rates being too high for the current state of the body, and will adjust basals downward to compensate. The other, meal-assist-related portion of the algorithms does a pretty good job of dealing with larger-than-expected post-meal spikes due to resistance, so autosensitivity mostly comes into play for resistance when you’re sick or otherwise riding high even without food.

Does this calculate basals?

No. Similar to everything else in OpenAPS, this works from your established settings – meaning the baseline basal rates (and ISF) in your pump are what the sensitivity calculations adjust from. If you run a marathon and your sensitivity is normally 40, it might adjust your sensitivity to 60 (meaning 1u of insulin would be expected to drop your BG 60 mg/dl instead of 40 mg/dl) and temporarily adjust your baseline basal rate of 1u/hour down proportionally, to roughly 0.6-0.7u/hour, for example.
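As a back-of-the-envelope sketch of how a single ratio shifts both numbers proportionally (an illustration of the idea, not the exact oref0 math; the function name is made up):

```python
def apply_sensitivity_ratio(basal, isf, ratio):
    """Scale basal down and ISF up when ratio < 1 (more sensitive), and vice versa."""
    return basal * ratio, isf / ratio

# The marathon example from the text: normal ISF of 40, baseline basal of 1u/hour.
# A ratio of about 0.67 yields an adjusted ISF of ~60 and a basal of ~0.67u/hour.
adjusted_basal, adjusted_isf = apply_sensitivity_ratio(1.0, 40, 2 / 3)
```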

This algorithm is simply saying “there’s something going on, let’s adjust proportionately to deal with the lower-than-usual or higher-than-usual sensitivity, regardless of cause”. It easily detects “your basals are too high and/or your ISF is too low” or “your basals are too low and/or your ISF is too high”, but actually differentiating between the effect of basal and ISF is a bit more difficult to do with a simple algorithm like this, so we’re working on a number of new algorithms and tools (see “oref0 issue 99” for our brainstorming on basal tuning and the subsequent issues linked from there) to tackle this in the future.

#OpenAPS’s autosensitivity adjustments during norovirus

After I got over the worst of the norovirus, I started looking at what OpenAPS was calculating for my sensitivity during this time. I was especially curious what would happen during the 2-3 days when I was eating very little.

My normal ISF is 40, but OpenAPS gradually calculated the shift in my sensitivity all the way to 50. That’s really sensitive, and in fact I don’t remember ever seeing a sensitivity adjustment that dramatic – but it makes sense given that I usually don’t go so long without eating. (Usually when I notice I’m a little sensitive, I’ll check and see that autosens has been adjusting based on an estimated sensitivity of 43 or so.)

And in later days, as expected when sick, I shifted to being more resistant. So autosens continued to assess the data and began adjusting to an estimated sensitivity of 38 as my body continued fighting the virus.

It is so nice to have the tools to automatically make these assessments and adjustments, rather than having to manually deal with them on top of being sick!


Sick days solved with a DIY closed loop #OpenAPS

Ask me about the time I got a norovirus over Thanksgiving.

As expected, it was TERRIBLE. Even though the source of the norovirus was cute, the symptoms weren’t. (You can read about the symptoms from the CDC if you’ve never heard of it before.)

But, unexpectedly, it was only terrible on the norovirus symptoms front. My BGs were astoundingly perfect. So much so that I didn’t think about diabetes for 3 days.

Let me explain.

Since I use an OpenAPS DIY closed loop “artificial pancreas”, I have a small computer rig that automatically reads my CGM and pump and automatically adjusts the insulin dosing on my pump.

OpenAPS temp basal adjustments during day 2 norovirus November 2016
Showing the net basal adjustments made on day 2 of my norovirus – the dotted line is what my basals usually are, so anything higher than that dotted line is a “high” temp and anything lower is a “low” temp of various sorts.
  • When I started throwing up during the first 8 hours, as is pretty normal for norovirus, I first worried about going low, because my stomach was obviously empty.

Nope. I never went lower than about 85 mg/dl. Even when I didn’t eat at all for >24 hours, and ate very little over the course of 5 days.

  • After that, I worried about going high as my body was fighting off the virus.

Nope. I never went much higher than a few minutes in the 160s. Even when I sipped Gatorade or, gasp, ate two full crackers at the end of day two and didn’t bolus for the carbs.

  • The closed loop (as designed – read the OpenAPS reference design for more details) observed the rising or dropping BGs and adjusted insulin delivery (using temporary basal rates) up or down as needed. I sometimes would slowly rise to 150s and then slowly head back down to the 100s. I only once started dropping slowly toward the 80s, but leveled off and then slowly rose back up to the 110s.

None of this (\/\/\/\/\) crazy spiking and dropping fast that causes me to overreact.

No fear of having to force myself to drink sugar while in the midst of the worst of the norovirus.

No worries, diabetes-wise, at all. In fact, it didn’t even OCCUR to me to test or think about ketones (I’m actually super sensitive and can usually feel them well before they’d register on a blood test) until someone asked on Twitter.

Celebrating no lows despite a many-day stomach bug, thanks to OpenAPS managing BGs

Confirming I did not get ketones

Why this matters

I was talking with my father-in-law (an ER doc) and listening to him explain how anti-nausea medications (like Zofran) have reduced ER visits. I think closed loop technology will similarly and dramatically reduce ER visits for people with diabetes who get sick with things like norovirus and the flu. Because instead of the first instance of vomiting setting off a serious spiral and roller coaster of BGs, the closed loop can respond to the BG fluctuations in a safe way and prevent human overreaction in either direction.

TIR (time in range) was an amazing 92-97% despite the stomach bug

This isn’t what you hear about when you look at various reports and articles about this type of technology (like, hey, OpenAPS was mentioned in The Lancet this week!) – those are either general outcome reports or traditional clinical trial results. But we also need to show the full power of these systems, which is what I experienced over the past week.

I’m reassured now that norovirus, flu, or anything else I may get in the future will likely not be as hard to deal with as getting sick was for the first 12 years of living with diabetes. That’s more peace of mind (in addition to what I get just from being able to safely sleep every night) than I ever expected to have, and I’m incredibly thankful for it.

(I’m also thankful for the numerous wonderful people who share their stories about how this technology impacts their lives – check out this video featuring the Mazaheri family to see what a difference it is making for other families. I’m so happy that the benefits I see from using DIY technology are available to so many other people, too. At latest count, there are (n=1)*174 other people worldwide using DIY closed loop technology, and we collectively have over half a million real-world hours of closed loop experience.)