Choose One: What would you give up if you could? (With #OpenAPS, maybe you can – oref1 includes unannounced meals or “UAM”)

What do you have to do today (related to daily insulin dosing for diabetes) that you’d like to give up if you could? Counting carbs? Bolusing? Or what about outcomes – what if you could give up going low after a meal? Or reduce the amount that you spike?

How many of these 5 things do you think are possible to achieve together?

  • No need to bolus
  • No need to count carbs
  • Medium/high carb meals
  • 80%+ time in range
  • No hypoglycemia

How many can you manage with your current therapy and tools of choice?  How many do you think will be possible with hybrid closed loop systems?  Please think about (and maybe even write down) your answers before reading further to get our perspective.

With just pump and CGM, it’s possible to get good time in range with proper boluses, counting carbs, and eating relatively low-carb (or getting lucky/spending a lot of time learning how to time your insulin with regular meals).  Even with all that, some people still go low/have hypoglycemia.  So, let’s call that a 2 (out of 5) that can be achieved simultaneously.

With a first-generation hybrid closed loop system like the original OpenAPS oref0 algorithm, it’s possible to get good time in range overnight, but achieving that at meal times still requires bolusing properly and counting carbs.  With perfect night-time BGs, though, it’s possible to achieve no hypoglycemia and 80% time in range with medium-carb meals (and high-carb meals with Eating Soon mode, etc.).  So, let’s call that a 3 (out of 5).

With some of the advanced features we added to OpenAPS with oref0 (like advanced meal assist or “AMA” as we call it), it became a lot easier to achieve a 3 with less bolusing and less need to precisely count carbs.  It also deals better with high-carb meals, and gives the user even more flexibility.  So, let’s call that a 3.5.

A few months ago, when we began discussing how to further improve daily outcomes, we also began to discuss how to better deal with unannounced meals: when someone eats and boluses, but doesn’t enter carbs. (Or in some cases: eats, doesn’t enter carbs, and doesn’t even bolus.) How do we design for better help in those situations, while sticking to our safety principles and dosing safely?

I came up with this idea of “floating carbs” as a way to design a solution for this behavior. Essentially, we’ve learned that if BG spikes at a certain rate, it’s often related to carbs. We observed that AMA can appropriately respond to such a rise, while not dosing extra insulin if BG is not rising.  Which prompted the question: what if we had a “floating” amount of carbs hanging out there, and it could be decayed and dosed upon with AMA if that rise in BG was detected? That led us to build in support for unannounced meals, or “UAM”. (But you’ll probably see us still talk about “floating carbs” some, too, because that was the original way we were thinking about solving the UAM problem.) This is where the suite of tools that make up oref1 came from.  In addition to UAM, we also introduced supermicroboluses, or SMB for short.  (For more background info about oref1 and SMB, read here.)

So with OpenAPS oref1 with SMB and floating carbs for UAM, we are finally at the point of achieving a solid 4 out of 5.  And not just a single set of 4, but any 4 of the 5 (except we’d prefer you don’t choose hypoglycemia, of course):

  • With a low-carb meal, no-hypoglycemia and 80+% time in range is achievable without bolusing or counting carbs (with just an Eating Soon mode that triggers SMB).
  • With a regular meal, the user can either bolus for it (triggering floating carb UAM with SMB) or enter a rough carb count / meal announcement (triggering Eating Now SMB) and achieve 80% time in range.
  • If the user chooses to eat a regular meal and not bolus or enter a carb count (just an Eating Soon mode), the BG results won’t be as good, but oref1 will still handle it gracefully and bring BG back down without causing any hypoglycemia or extended hyperglycemia.

That is huge progress, of course.  And we think that might be about as good as it’s possible to do with current-generation insulin-only pump therapy.  To do better, we’d either need an APS that can dose glucagon and be configured for tight targets, or much faster insulin.  The dual-hormone systems currently in development are targeting an average BG of 140, or an A1c of 6.5, which likely means >20% of time spent > 160mg/dL.  And to achieve that, they do require meal announcements of the small/medium/large variety, similar to what oref1 needs.  Fiasp is promising on the faster-insulin front, and might allow us to develop a future version of oref1 that could deal with completely unannounced and un-bolused meals, but it’s probably not fast enough to achieve 80% time in range on a high-carb diet without some sort of meal announcement or boluses.

But 4 out of 5 isn’t bad, especially when you get to pick which 4, and can pick differently for every meal.

Does that make OpenAPS a “real” artificial pancreas? Is it a hybrid closed loop artificial insulin delivery system? Do we care what it’s called? For Scott and me, the answer is no: instead of focusing on what it’s called, let’s focus on how different tools and techniques work, and what we can do to continue to improve them.

Being Shuttleworth Funded with a Flash Grant as an independent patient researcher

Recently, I have been working on helping OpenAPS’ers collect our data and put it to good use in research (both by traditional researchers and by fellow patient researchers, or “citizen scientists”). As a result, I have had the opportunity to work closely with Madeleine Ball at Open Humans. (Open Humans is the platform we use for the OpenAPS Data Commons.)

It’s been awesome to collaborate with Madeleine on many fronts. She’s proven herself really willing to listen to ideas and suggestions for things to change, to make it easier both for individuals to donate their data to research and for researchers who want to use the platform. And, despite me not having the same level of technical skills, she shows a deep respect for people of all experiences and perspectives. She’s also, in general, a really great person.

As someone who is (perhaps uniquely) utilizing the platform as both a data donor and as a data researcher, it has been fantastic to be able to work through the process of data donation, project creation, and project utilization from both perspectives. And, it’s been great to contribute ideas and make tools (like some of my scripts to download and unpack Open Humans data) that can then be used by other researchers on Open Humans.

Madeleine was also selected this year to be a Shuttleworth Fellow, applying “open” principles to change how we share and study human health data, plus exploring new, participant-centered approaches for health data sharing, research, and citizen science. Which means that everything she’s doing is in almost perfect sync with what we are doing in the OpenAPS and #WeAreNotWaiting communities.

What I didn’t know until this past week was that being a Shuttleworth Fellow also meant she was able to nominate individuals for a Shuttleworth Flash Grant, which is a grant made to a collection of social change agents, no strings attached, in support of their work.

I was astonished to receive an email from the Shuttleworth Foundation saying that I had been nominated by Madeleine for a $5,000 Flash Grant, which goes to individuals they would like to support/reward/encourage in their work for social good.

Shuttleworth Funded

I am so blown away by the Flash Grant itself – and the signal that this grant provides. This is the first of (hopefully) many organizations to recognize the importance of supporting independent patient researchers who are affiliated not with an institution, but rather with an online community. It’s incredibly meaningful for this research and work, which is centered around real needs of patients in the real world, to be funded, even to a small degree.

Many non-traditional researchers like me are unaffiliated with a traditional institution or organization. This means we do the research in our own time, funded solely by our own energy (and in some cases, resources). Time in and of itself is a valuable contribution to research (think of the opportunity costs). However, it is also costly to distribute and disseminate ideas learned from patient-driven research to more traditional researchers. Even ignoring travel costs, most scientific conferences do not have a patient research access program, which means patients in some cases are asked to pay $400 (or more) per person for a single day pass to stand beside their poster if it is accepted for presentation at a conference. In some cases, patients have personal resources and determination and are willing to pay that cost. But not every patient is able to do that. (And doing it year over year, as they continue to do new ground-breaking research each year, adds up too – especially when you factor in travel, lodging, and the opportunity cost of being away from a day job.)

So what will I use the Flash Grant for? Here’s what I’ve decided to put it toward so far:

#1 – I plan to use it to fund my & Scott’s travel costs this year to ADA’s Scientific Sessions, where our poster on Autotune & data from the #WeAreNotWaiting community will be presented. (I’m still hoping to convince ADA to create a patient researcher program vs. treating us like individuals walking in off the street; but if they again do not choose to do so, it will take $800 for Scott and me to stand with the poster during the poster session). Being at Scientific Sessions is incredibly valuable for us as researchers and developers, because we can have real-time conversations with traditional researchers who have not yet been introduced to some of our tools or the data collected and donated by the community. It’s one of the most valuable places for us to be in person in terms of facilitating new research partnerships, in addition to renewing and establishing relationships with device manufacturers who could (because our stuff is all open source, MIT licensed) utilize our code and tools in commercial devices to more broadly reach people with diabetes.

#2 – Hardware parts. In order to best support the OpenAPS community, Scott and I have also been supporting and contributing to the development of open source hardware like the Explorer Board. Keeping in mind that each version of the board produced needs to be tested to see if the instructions related to OpenAPS need to change, we have been buying every iteration of Explorer Board so we can ensure compatibility and ease of use, which adds up. Having some of this grant funding go toward hardware supplies to support a multitude of setup options is nice!

There are so many individuals who have contributed in various ways to OpenAPS and WeAreNotWaiting and the patient-driven research movements. I’m incredibly encouraged, with a new spurt of energy and motivation, after receiving this Flash Grant to continue to further build upon everyone’s work and to do as much as possible to support every person in our collective communities. Thank you again to Madeleine for the nomination, and to the Shuttleworth Foundation for the Flash Grant, for the financial and emotional support for our community!

Introducing oref1 and super-microboluses (SMB) (and what it means compared to oref0, the original #OpenAPS algorithm)

For a while, I’ve been mentioning “next-generation” algorithms in passing when talking about some of the work that Scott and I have been doing as it relates to OpenAPS development. After we created autotune to help people (even non-loopers) tune underlying pump basal rates, ISF, and CSF, we revisited one of our regular threads of conversations about how it might be possible to further reduce the burden of life with diabetes with algorithm improvements related to meal-time insulin dosing.

This is why we first created meal-assist and then “advanced meal-assist” (AMA), because we learned that most people have trouble with estimating carbs and figuring out optimal timing of meal-related insulin dosing. AMA, if enabled and informed about the number of carbs, is a stronger aid for OpenAPS users who want extra help during and following mealtimes.

Since creating AMA, Scott and I had another idea of a way that we could do even more for meal-time outcomes. Given the time constraints and reality of currently available mealtime insulins (that peak in 60-90 minutes; they’re not instantaneous), we started talking about how to leverage the idea of a “super bolus” for closed loopers.

A super bolus is an approach you can take to give more insulin up front at a meal, beyond what the carb count would call for, by “borrowing” from basal insulin that would be delivered over the next few hours. By adding insulin to the bolus and then low temping for a few hours after that, it essentially “front shifts” some of the insulin activity.
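To make the math concrete, here’s a minimal sketch of that borrowing arithmetic (the numbers are invented for illustration; this isn’t code from OpenAPS):

```javascript
// Hypothetical super bolus arithmetic -- example numbers only.
const mealBolus = 4.0;  // U called for by the carb count
const basalRate = 1.0;  // U/hr scheduled basal
const borrowHours = 2;  // how long a zero temp will run afterwards

const borrowed = basalRate * borrowHours; // 2.0 U of future basal
const superBolus = mealBolus + borrowed;  // 6.0 U delivered up front

// Then set a temp basal of 0 U/hr for borrowHours. Total insulin over
// the period is unchanged; only its timing is front-shifted.
console.log(`Bolus ${superBolus}U now, then 0 U/hr for ${borrowHours}h`);
```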

Like a lot of things done manually, it’s hard to do safely and achieve optimal outcomes. But, like a lot of things, we’ve learned that by letting computers do more precise math than we humans are wont to do, OpenAPS can actually do really well with this concept.

Introducing oref1

Those of you who are familiar with the original OpenAPS reference design know that setting ONLY temporary basal rates was a big safety constraint. Why? Because it’s less of an issue if a temporary basal rate is issued over and over again; and if the system stops communicating, the temp basal eventually expires and the pump resumes normal activity. That was a core part of oref0. So, to distinguish this new set of algorithm features that departs from that aspect of the oref0 approach, we are introducing it as “oref1”. Most OpenAPS users will only use oref0, like they have been doing. oref1 should only be enabled specifically by advanced users who want to test or use these features.

The notable difference between the oref0 and oref1 algorithms is that, when enabled, oref1 makes use of small “supermicroboluses” (SMB) of insulin at mealtimes to more quickly (but safely) administer the insulin required to respond to blood sugar rises due to carb absorption.

Introducing SuperMicroBoluses (or “SMB”)

The microboluses administered by oref1 are called “super” because they use a miniature version of the “super bolus” technique described above.  They allow oref1 to safely dose mealtime insulin more rapidly, while at the same time setting a temp basal rate of zero of sufficient duration to ensure that BG levels will return to a safe range with no further action even if carb absorption slows suddenly (for example, due to post-meal activity or GI upset) or stops completely (for example due to an interrupted meal or a carb estimate that turns out to be too high). Where oref0 AMA might decide that 1 U of extra insulin is likely to be required, and will set a 2U/hr higher-than-normal temporary basal rate to deliver that insulin over 30 minutes, oref1 with SMB might deliver that same 1U of insulin as 0.4U, 0.3U, 0.2U, and 0.1U boluses, at 5 minute intervals, along with a 60 minute zero temp (from a normal basal of 1U/hr) in case the extra insulin proves unnecessary.

As with oref0, the oref1 algorithm continuously recalculates the insulin required every 5 minutes based on CGM data and previous dosing, which means that oref1 will continually issue new SMBs every 5 minutes, increasing or reducing their size as needed as long as CGM data indicates that blood glucose levels are rising (or not falling) relative to what would be expected from insulin alone.  If BG levels start falling, there is generally already a long zero temp basal running, which means that excess IOB is quickly reduced as needed, until BG levels stabilize and more insulin is warranted.

Safety constraints and safety design for SMB and oref1

Automatically administering boluses safely is of course the key challenge with such an algorithm, as we must find another way to avoid the issues highlighted in the oref0 design constraints.  In oref1, this is accomplished by using several new safety checks (as outlined here), and verifying all output, before the system can administer a SMB.

At the core of the oref1 SMB safety checks is the concept that OpenAPS must verify, via multiple redundant methods, that it knows about all insulin that has been delivered by the pump, and that the pump is not currently in the process of delivering a bolus, before it can safely do so.  In addition, it must calculate the length of zero temp required to eventually bring BG levels back in range even with no further carb absorption, set that temporary basal rate if needed, and verify that the correct temporary basal rate is running for the proper duration before administering a SMB.

To verify that it knows about all recent insulin dosing and that no bolus is currently being administered, oref1 first checks the pump’s reservoir level, then performs a full query of the pump’s treatment history, calculates the required insulin dose (noting the reservoir level the pump should be at when the dose is administered) and then checks the pump’s bolusing status and reservoir level again immediately before dosing.  These checks guard against dosing based on a stale recommendation that might otherwise be administered more than once, or the possibility that one OpenAPS rig might administer a bolus just as another rig is about to do so.  In addition, all SMBs are limited to 1/3 of the insulin known to be required based on current information, such that even in the race condition where two rigs nearly simultaneously issue boluses, no more than 2/3 of the required insulin is delivered, and future SMBs can be adjusted to ensure that oref1 never delivers more insulin than it can safely withhold via a zero temp basal.
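As a rough sketch of how that last constraint plays out (illustrative only, not the actual oref0 code):

```javascript
// Each SMB is limited to 1/3 of the insulin currently known to be
// required, so even a two-rig race condition stays bounded.
function maxSMB(insulinRequired) {
  return insulinRequired / 3;
}

const required = 1.2;          // U still needed (invented example value)
const rigA = maxSMB(required); // 0.4 U
const rigB = maxSMB(required); // 0.4 U -- rig B is unaware of rig A's bolus
console.log(rigA + rigB);      // 0.8 U total: no more than 2/3 of required

// On the next cycle, the pump history reveals both boluses, the required
// amount is recalculated, and subsequent SMBs shrink accordingly.
```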

In some situations, a lack of BG or intermittent pump communications can prevent SMBs from being delivered promptly.  In such cases, oref1 attempts to fall back to oref0 + AMA behavior and set an appropriate high temp basal.  However, if it is unable to do so, manual boluses are sometimes required to finish dosing for the recently consumed meal and prevent BG from rising too high.  As a result, oref1’s SMB features are only enabled as long as carb impact is still present: after a few hours (after carbs all decay), all such features are disabled, and oref1-enabled OpenAPS instances return to oref0 behavior while the user is asleep or otherwise not engaging with the system.

In addition to these safety status checks, the oref1 algorithm’s design helps ensure safety.  As already noted, setting a long-duration temporary basal rate of zero while super-microbolusing provides good protection against hypoglycemia, and very strong protection against severe hypoglycemia, by ensuring that insulin delivery is zero when BG levels start to drop, even if the OpenAPS rig loses communication with the pump, and that such a suspension is long enough to eventually bring BG levels back up to the target range, even if no manual corrective action is taken (for example, during sleep).  Because of these design features, oref1 may even represent an improvement over oref0 w/ AMA in terms of avoiding post-meal hypoglycemia.

In real world testing, oref1 has thus far proven at least as safe as oref0 w/ AMA with regard to hypoglycemia, and better able to prevent post-meal hyperglycemia when SMB is ongoing.

What does SMB “look” like?

Here is what SMB activity currently looks like when displayed on Nightscout and on my Pebble watch:

First oref1 SMB OpenAPS test by @DanaMLewis
First oref1 SMB OpenAPS test as seen on @DanaMLewis Pebble watch

How do features like this get developed and tested?

SMB, like any other advanced feature, goes through extensive testing. First, we talk about it. Then it gets written up in plain language as an issue, so we can track discussion and development. Then we begin to develop the feature, and Scott and I test it on a spare pump and rig. When it gets to the point of being ready to test in the real world, I test it during a time period when I can focus on observing and monitoring what it is doing. Throughout all of this, we continue to make tweaks and changes to improve what we’re developing.

After several days (or, for something this different, weeks) of Dana-testing, we have a few other volunteers begin to test it on spare rigs. They follow the same process of monitoring it and giving feedback, helping us develop it further before choosing to run it on a rig and a pump connected to their body. More feedback, discussion, and observation. Eventually, it gets to a point where it is ready to go to the “dev” branch of OpenAPS code, which is where this code is now heading.

Several people will review the code and approve it to be added to the “dev” branch. We will then have others test the “dev” branch with this and any other features or code changes – both people who want to enable this feature, and people who don’t (to make sure we don’t break existing setups). Eventually, after numerous thumbs up from multiple members of the community who have helped us test different use cases, the code from the “dev” branch will be approved and will go to the “master” branch, where it is available to a more typical user of OpenAPS.

However, not everyone automatically gets this code or will use it. People already running on the master branch won’t get this code or be able to use it until they update their rig. Even then, unless they were to specifically enable this feature (or any other advanced feature), they would not have this particular segment of code drive any of their rig’s behavior.

Where to find out more about oref1, SMB, etc.:

  • We have updated the OpenAPS Reference Design to reflect the differences between oref0 and the oref1 features.
  • OpenAPS documentation about oref1, which as of July 13, 2017 is now part of the master branch of oref0 code.
  • Ask questions! Like all things developed in the OpenAPS community, SMB and oref1-related features will evolve over time. We encourage you to hop into Gitter and ask questions about these features & whether they’re right for you (if you’re DIY closed looping).

Special note of thanks to several people who have contributed to ongoing discussions about SMB, plus the very early testers who have been running this on spare rigs and pumps. Plus always, ongoing thanks to everyone who is contributing and has contributed to OpenAPS development!

Making it possible for researchers to work with #OpenAPS or general Nightscout data – and creating a complex json to csv command line tool that works with unknown schema

This is less of an OpenAPS/DIYPS/diabetes-related post, although that is normally what I blog about. However, since we created the #OpenAPS Data Commons on Open Humans to allow those of us who desire to donate our diabetes data to research, I have been spending a lot of time figuring out the process, from uploading your data to how data is managed and shared securely with researchers. The hardest part is helping researchers figure out how to handle the data – because we PWDs produce a lot of data :) . So this post explains some of the data management challenges involved in getting it to a researcher-friendly format. I have been greatly helped over the years by general-purpose open source work from other people, and one of the things that helps ME the most as a non-traditional programmer is plain language posts explaining the thought process behind the tools and the attempted solution paths. Especially because sometimes the web pages and blog posts pop up higher in search than nitty-gritty tool documentation without context. (Plus, I’ve been taking my own advice about not letting myself hold me back from trying, even when I don’t know how to do things yet.) So that’s what this post is!

(Overheard: that I “certainly stress tested” a tool with lots of data.)

Background/inspiration for the project and the tools I had to build:

We’re using Nightscout, a remote data-viewing platform for diabetes data, made with love, open source, and freely available for anyone with diabetes to use. It’s one of the best ways to display not only continuous glucose monitor (CGM) data, but also data from our DIY closed loop artificial pancreases (#OpenAPS). It can store data from a number of different kinds and brands of diabetes devices (pumps, CGMs, manual data entries, etc.), which means it’s a rich source of data. As the number of DIY OpenAPS users grows, we estimate that our real-world use is overtaking the total hours of data from clinical trials of closed loop artificial pancreas systems.  In the #WeAreNotWaiting spirit of moving quickly (rather than waiting years for research teams to collect and analyze their own data), we want to see what we can learn from OpenAPS usage – not only by donating data to help traditional researchers speed up their work, but also by co-designing research studies of the things of most value to the diabetes community.

Step 1: Data from users to Open Humans

I thought Step 1 would be the hardest. However, thanks to Madeleine Ball, John Costik, and others in the Nightscout community, a simple Nightscout Data Transfer App was created that enables people with Nightscout data to pop it into their Open Humans accounts. It’s then very easy to join different projects (like the OpenAPS Data Commons) and share your data with those projects. And as the volunteer administrator of the OpenAPS Data Commons, it’s also easy for me to provide data to researchers.

The biggest challenge at this stage was figuring out how much data to pull from the API. I have almost 3 years’ worth of DIY diabetes data, and numerous devices uploading all at once over time…which makes for large chunks of data. Not everyone has this much data (or 6-7 rigs uploading constantly ;)). Props to Madeleine for the patience in working with me to make sure the super users with large data sets will be able to use all of these tools!

Step 2: Sharing the data with researchers

This was easy. Yay for data-sharing tools like Dropbox.

Step 3: Researchers being able to use the data

Here’s where things started to get interesting. We have large data files that come in json format from Nightscout. I know some researchers we will be working with are probably very comfortable with tools that can take large, complex json files. However…not all will be, especially because we also want to encourage independent researchers to engage with the data for projects. So I had the belated realization that we needed to do something other than hand over json files: we needed to convert, at the least, to csv so the data can be easily viewed in Excel.

Sounds easy, right?

According to basic searches, there are roughly a gazillion ways to convert json to csv. There are even websites that will do it for you, without making you run it on the command line. However, most of them require you to know the types of data, and the number of types, in order to construct headers in the csv file to make it readable and useful to a human.

This is where the DIY and infinite-possibility nature of all the kinds of diabetes tools anyone could be using with Nightscout – plus the infinite ways they can self-describe profiles, alarms, and methods of entering data – makes it tricky. Just eyeballing the data of two individuals, I was unable to find and count the hundred-plus types of data entry possibilities. This is definitely a job for the computer, but I had to figure out how to train the computer to deal with it.
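The core idea is simple to sketch, even though the real tooling (described below) does much more. Here’s a toy version of schema discovery – my illustration, not any actual tool’s code: walk every record, flatten nested objects into dotted paths, and take the union of all keys as the CSV header.

```javascript
// Toy sketch: flatten records of unknown shape, then build CSV headers
// from the union of every key seen across the whole data set.
// (Real CSV output would also need quoting/escaping.)
function flatten(obj, prefix = '', out = {}) {
  for (const [key, val] of Object.entries(obj)) {
    const path = prefix ? `${prefix}.${key}` : key;
    if (val !== null && typeof val === 'object') flatten(val, path, out);
    else out[path] = val;
  }
  return out;
}

function toCsv(records) {
  const flat = records.map((r) => flatten(r));
  // This union step is what self-described Nightscout data demands:
  // you can't know the columns until you've seen every record.
  const headers = [...new Set(flat.flatMap((r) => Object.keys(r)))];
  const rows = flat.map((r) => headers.map((h) => r[h] ?? '').join(','));
  return [headers.join(','), ...rows].join('\n');
}

console.log(toCsv([{ type: 'sgv', sgv: 120 }, { type: 'mbg', mbg: { value: 118 } }]));
```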

Again, json to csv tools are so common I figured there HAD to be someone who had done this. Finally, after a dozen varying searches and trying a variety of command line tools, I found one web-based tool that would take json, create the schema without knowing the data types in advance, and convert it to csv. It was (is) super slick. I got very excited when I saw it linked to a Github repository, because that meant it was probably open source that I could use. I didn’t see any instructions for how to use it on the command line, though, so I messaged the author on Twitter and found out that a command line version didn’t yet exist and was a not-yet-done TODO for him.

Sigh. Given this whole #WeAreNotWaiting thing (and given I’ve promised to help some of the researchers in figuring this out so we can initiate some of the research projects), I needed to figure out how to convert this tool into a command line version.

So, I did.

  • I taught myself how to unzip json files (ended up picking `gzip -cd`, because it works on both Mac and Linux)
  • I planned to then convert the web tool to be able to work on the command line, and use it to translate the json files to csv.

But…remember the big file issue? It struck again. So I first had to figure out the best way to estimate the size and split the json into a series of files, without splitting it in a weird place and messing up the data. That became jsonsplit.sh, a tool to split a json file based on the size you give it (if you don’t specify, it defaults to something like 100,000 records).

FWIW: 100,000 records was too much for the more complex schema of the data I was working with, so I often did it in smaller chunks, but you can set it to whatever size you prefer.
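The real jsonsplit.sh is a shell script, but the idea it implements is easy to sketch (this is my illustration, not the tool itself): parse the array once, then write it back out in record-aligned chunks so no record is ever cut in half.

```javascript
// Illustrative only: chunk a JSON array into smaller files by record
// count, never splitting mid-record.
const fs = require('fs');

function splitJson(path, size = 100000) {
  const records = JSON.parse(fs.readFileSync(path, 'utf8'));
  for (let i = 0; i * size < records.length; i++) {
    const chunk = records.slice(i * size, (i + 1) * size);
    fs.writeFileSync(`${path}.${i + 1}.json`, JSON.stringify(chunk));
  }
}

splitJson('entries.json', 20000); // smaller chunks for complex schemas
```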

So now “all” I had to do was:

  • Unzip the json
  • Break it down if it was too large, using jsonsplit.sh
  • Convert each of these files from json to csv

Phew. Each of these looks really simple now, but took a good chunk of time to figure out. Luckily, the author of the web tool had done much of the hard json-to-csv work, and Scott helped me figure out how to take the html-based version of the conversion and make it usable on the command line using javascript. That became complex-json2csv.js.

Because I knew how hard this all was, and wanted other people to be able to easily use this tool if they had large, complex json with unknown schema to deal with, I created a package.json so I could publish it to npm so you can download and run it anywhere.

I also had to create a script that would take all of the Open Humans data, unzip each file, run jsonsplit.sh and complex-json2csv.js, and organize the data in a useful way, given the existing file structure of the data. Therefore I also created an “OpenHumansDataTools” repository on Github, so that other researchers who will be using Nightscout-based Open Humans data can use this if they want to work with the data. (And there may be something useful for others using Open Humans even if they’re not using Nightscout as their data source – again, see the “large, complex, challenging json when you don’t know the data types and count of data types” issue. So this repo can link them to complex-json2csv.js and jsonsplit.sh for discovery purposes, as they’re general-purpose tools.) That script is here.

My next TODO will be to write a script that takes only slices of data based on information shared as part of the surveys that go with the Nightscout data; i.e. if you started your DIY closed loop on X date, take data from 2 weeks prior and 6 weeks after, etc.

I also created a pull request (PR) back to the original tool that inspired my work, in case he wants to add it to his repository for others who also want to run his great stuff from the command line. I know my stuff isn’t perfect, but it works :) and I’m proud of being able to contribute to general-purpose open source in addition to diabetes-specific open source work. (Big thanks as always to everyone who devotes their work to open source for others to use!)

So now, I can pass researchers json or csv files for use in their research. A number of researchers are planning to request access to the OpenAPS Data Commons, and I’m excited about how work like this, making diabetes data more broadly available for research, will help improve our lives in the short and long term!

Autotune (automatically assessing basal rates, ISF, and carb ratio with #OpenAPS – and even without it!)

What if, instead of guessing needed changes to basal rates, ISF, and carb ratios (the current most-used method)…we could use data to empirically determine how these rates and ratios should be adjusted?

Meet autotune.

What if we could use data to determine basal rates, ISF and carb ratio? Meet autotune

Historically, most people have guessed basal rates, ISF, and carb ratios. Their doctors may use things like the “rule of 1500” or “1800” or body weight. But that’s all a general starting place. Over time, people have to manually tweak these underlying basals and ratios in order to best live life with type 1 diabetes. It’s hard to do this manually, and hard to know whether you’re overcompensating with meal boluses (aka an incorrect carb ratio) for basal issues, or over-basaling to compensate for mealtimes or an incorrect ISF.
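(For a concrete example of that kind of starting-place math: the “rule of 1800” estimates ISF by dividing 1800 by total daily insulin dose, so someone using 40 units a day would get an estimated ISF of 1800 / 40 = 45 mg/dL per unit. A reasonable first guess – but still just a guess.)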

And why do these values matter?

It’s not just about manually dosing with this information. Importantly, for most DIY closed loops (like #OpenAPS), dose adjustments are made based on the underlying basals, ISF, and carb ratio. For someone with reasonably tuned basals and ratios, that works great. But for someone whose values are way off, it means the system can’t help them adjust as much as it could someone with well-tuned values. It’ll still help, but it’ll be a fraction as powerful as it could be for that person.

There wasn’t much we could do about that…at first. We designed OpenAPS to fall back to whatever values people had in their pumps, because that’s what the person/their doctor had decided was best. However, we know some people’s settings aren’t that great, for a variety of reasons. (Growth, activity changes, hormonal cycles, diet and lifestyle changes – to name a few. Aka, life.)

With autosensitivity, we were able to start to assess when actual BG deltas were off compared to what the system predicted should be happening. With that assessment, it would dynamically adjust ISF, basals, and targets accordingly. However, a common reaction was people seeing the autosens result (based on 24 hours of data) and assuming that meant their underlying ISF/basal should be changed. But that’s not the case, for two reasons. First, a 24 hour period shouldn’t be what determines those changes. Second, with autosens we cannot tell apart the effects of basals vs. the effects of ISF.
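For intuition, here’s a loose sketch of the kind of adjustment autosens makes (not the oref0 implementation; numbers and names invented):

```javascript
// If observed deviations suggest ~20% extra sensitivity, autosens-style
// logic scales the profile values by that ratio (sketch only).
const ratio = 0.8;      // <1 = more sensitive than profile, >1 = resistant
const pumpISF = 40;     // mg/dL per U, from the pump profile
const pumpBasal = 1.0;  // U/hr, from the pump profile

const adjustedISF = pumpISF / ratio;     // 50: each unit now drops BG further
const adjustedBasal = pumpBasal * ratio; // 0.8 U/hr: less background insulin
console.log({ ratio, adjustedISF, adjustedBasal });
```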

Autotune, by contrast, is designed to iteratively adjust basals, ISF, and carb ratio over the course of weeks – based on a longer stretch of data. Because it makes changes more slowly than autosens, autotune ends up drawing on a larger pool of data, and is therefore able to differentiate whether and how basals and/or ISF need to be adjusted, and also whether carb ratio needs to be changed. Whereas we don’t recommend changing basals or ISF based on the output of autosens (because it’s only looking at 24h of data, and can’t tell apart the effects of basals vs. the effect of ISF), autotune is intended to be used to help guide basal, ISF, and carb ratio changes because it’s tracking trends over a large period of time.

Ideally, for those of us using DIY closed loops like OpenAPS, you can run autotune iteratively inside the closed loop, and let it tune basals, ISF, and carb ratio nightly and use those updated settings automatically. Like autosens, and everything else in OpenAPS, there are safety caps: none of these parameters can be tuned beyond 20-30% from the underlying pump values. If someone’s autotune keeps recommending the maximum (20% more resistant, or 30% more sensitive) change over time, then it’s worth a conversation with their doctor about whether the underlying values need changing on the pump – and the person can take this report in to start the discussion.

Not everyone will want to let it run iteratively, though – not to mention, we want it to be useful to anyone, regardless of which DIY closed loop they choose to use – or not! Ideally, this can be run one-off by anyone with Nightscout data of BG and insulin treatments. (Note – I wrote this blog post on a Friday night saying “There’s still some more work that needs to be done to make it easier to run as a one-off (and test it with people who aren’t looping but have the right data)…but this is the goal of autotune!” And by Saturday morning, we had volunteers who sat down with us and within 1-2 hours had it figured out and documented! True #WeAreNotWaiting. :))

And from what we know, this may be the first tool to help actually make data-driven recommendations on how to change basal rates, ISF, and carb ratios.

How autotune works:

Step 1: Autotune-prep

  • Autotune-prep takes three things initially: glucose data, treatments data, and a starting profile (originally from the pump; afterwards autotune will set a profile)
  • It calculates BGI and deviation for each glucose value based on treatments
  • Then, it categorizes each glucose value as attributable to either carb sensitivity factor (CSF), ISF, or basals
  • To determine if a “datum” is attributable to CSF, carbs on board (COB) are calculated and decayed over time based on observed BGI deviations, using the same algorithm used by Advanced Meal Assist. Glucose values after carb entry are attributed to CSF until COB = 0 and BGI deviation <= 0. Subsequent data is attributed to ISF or basals.
  • If BGI is positive (meaning insulin activity is negative), BGI is smaller than 1/4 of basal BGI, or average delta is positive, that data is attributed to basals.
  • Otherwise, the data is attributed to ISF.
  • All this data is output to a single file with 3 sections: ISF, CSF, and basals.
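Condensed into javascript, the attribution logic above reads something like this (field names invented; the real autotune-prep code handles more nuance):

```javascript
function categorize(d) {
  // After a carb entry, deviations belong to CSF until COB fully decays
  // and the BGI deviation is no longer positive.
  if (d.afterCarbEntry && (d.cob > 0 || d.deviation > 0)) return 'CSF';
  // Positive BGI (net-negative insulin activity), BGI under 1/4 of the
  // basal-only BGI, or a rising average delta: attribute to basals.
  if (d.bgi > 0 || d.bgi < d.basalBGI / 4 || d.avgDelta > 0) return 'basal';
  // Otherwise the deviation reflects how insulin is lowering BG: ISF.
  return 'ISF';
}
```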

Step 2: Autotune-core

  • Autotune-core reads the prepped glucose file with 3 sections. It calculates what adjustments should be made to ISF, CSF, and basals accordingly.
  • For basals, it divides the day into hour long increments. It calculates the total deviations for that hour increment and calculates what change in basal would be required to adjust those deviations to 0. It then applies 20% of that change needed to the three hours prior (because of insulin impact time). If increasing basal, it increases each of the 3 hour increments by the same amount. If decreasing basal, it does so proportionally, so the biggest basal is reduced the most.
  • For ISF, it calculates the 50th percentile deviation for the entire day and determines how much ISF would need to change to get that deviation to 0. It applies 10% of that as an adjustment to ISF.
  • For CSF, it calculates the total deviations over all of the day’s mealtimes and compares to the deviations that are expected based on existing CSF and the known amount of carbs entered, and applies 10% of that adjustment to CSF.
  • Autotune applies a 20% limit on how much a given basal, or ISF or CSF, can vary from what is in the existing pump profile, so that if it’s running as part of your loop, autotune can’t get too far off without a chance for a human to review the changes.
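As a simplified sketch of the basal step (invented shapes and helper names; the real autotune-core applies decreases proportionally and handles many edge cases):

```javascript
function tuneBasals(pumpBasals, hourlyDeviations, isf) {
  const tuned = pumpBasals.slice();
  for (let hour = 0; hour < 24; hour++) {
    // Units of insulin that would have canceled this hour's deviations:
    const neededU = hourlyDeviations[hour] / isf;
    // Apply 20% of that change, spread across the three prior hours,
    // since insulin delivered then is what acted during this hour.
    for (let offset = 1; offset <= 3; offset++) {
      tuned[(hour - offset + 24) % 24] += (0.2 * neededU) / 3;
    }
  }
  // Safety cap: stay within 20% of the existing pump profile.
  return tuned.map((b, i) =>
    Math.min(pumpBasals[i] * 1.2, Math.max(pumpBasals[i] * 0.8, b))
  );
}
```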

(See more about how to run autotune here in the OpenAPS docs.)

What autotune output looks like:

Here’s an example of autotune output.

OpenAPS autotune example by @DanaMLewis

Autotune is one of the things Scott and I spent time on over the holidays (and hinted about at the end of my development review of 2016 for OpenAPS). As always with #OpenAPS, it’s awesome to take an idea, get it coded up, get it tested with some early adopters/other developers within days, and continue to improve it!

Highlighting someone successfully using Autotune to help adjust baseline settings

A big thank you to those who’ve been testing and helping iterate on autotune (and of course, all other things OpenAPS). It’s currently in the dev branch of oref0 for anyone who wants to try it out, either one-off or for part of their dev loop. Documentation is currently here, and this is the issue in Github for logging feedback/input, along with sharing and asking questions as always in Gitter!

OpenAPS feature development in 2016

It’s been two years since my first DIY closed loop, and almost two years since OpenAPS (the vision and resulting ecosystem to help make artificial pancreas technology, DIY or otherwise, more quickly available to more people living with diabetes) was created.  I’ve spent time here (on DIYPS.org) talking about a variety of things that are applicable to people who are DIY closed looping, but also focusing on things (like how to “soak” a CGM sensor and how to do “eating soon” mode) that may be (in my opinion) universally applicable.

OpenAPS feature development in 2016

However, I think it’s worth recapping some of the amazing work that’s been done in the OpenAPS ecosystem over the past year, sometimes behind the scenes, because there are some key features and tools that have been added in that seem small, but are really impactful for people living with DIY closed loops.

  1. Advanced meal assist (aka AMA)
    1. This is an “advanced feature” that can be turned on by OpenAPS users and, with reliable entry of carb information, will help the closed loop assist sooner with a post-meal BG rise where there is mis-timed or insufficient insulin coverage for the meal. It’s easy to use, because the PWD only has to put carbs and a bolus in – then AMA acts based on the observed absorption. This means that if absorption is delayed because you walk home from dinner, have gastroparesis, etc., it backs off and waits until the carbs actually start taking effect (even if that is later than the human would expect).
    2. We also now have the purple line predictions back in Nightscout to visualize some of these predictions. This is a hallmark of the original iob-cob branch in Nightscout that Scott and I originally created, that took my COB calculated by DIYPS and visualized the resulting BG graph. With AMA, there are actually 3 purple lines displayed when there is carb activity. As described here in the OpenAPS docs, the top purple line assumes 10 mg/dL/5m carb (0.6 mmol/L/5m) absorption and is most accurate right after eating before carb absorption ramps up. The line that is usually in the middle is based on current carb absorption trends and is generally the most accurate once carb absorption begins; and the bottom line assumes no carb absorption and reflects insulin only. Having the 3 lines is helpful for when you do something out of the ordinary following a meal (taking a walk; taking a shower; etc.) and helps a human decide if they need to do anything or if the loop will be able to handle the resulting impact of those decisions.
  2. The approach with a “preferences” file
    1. This is the file where people can adjust default safety and other parameters, like maxIOB which defaults to 0 during a standard setup, ultimately creating a low-glucose-suspend-mode closed loop when people are first setting up their closed loops. People have to intentionally change this setting to allow the system to high temp above a netIOB = 0 amount, which is an intended safety-first approach.
    2. One particular feature (“override_high_target_with_low”) makes it easier for secondary caregivers (like school nurses) to do conservative boluses at lunch/snack time and allow the closed loop to pick up from there. The secondary caregiver can use the bolus wizard, which will correct down to the high end of the target; setting this value in preferences to “true” allows the closed loop to target the low end of the target. Based on anecdotal reports from those using it, this feature sounds like it has prevented a lot of (unintentional – diabetes is hard) overreacting by secondary caregivers, since the closed loop can more easily deal with BG fluctuations. The same goes for “carbratio_adjustmentratio”: if parents would prefer for secondary caregivers to bolus with a more conservative carb ratio, this can be set so the closed loop ultimately uses the correct carb amount for any needed additional calculations. (See the illustrative preferences sketch after this list.)
  3. Autosensitivity
    1. I’ve written about autosensitivity before, and how impressive it was – in the face of norovirus and not eating – to have the closed loop detect excessive sensitivity and deal with it, resulting in 0 lows. It’s also helpful during other minor instances of sensitivity after a few active days, or resistance due to hormone cycles and/or an aging pump site.
    2. Autosens is a feature that has to be turned on specifically (like AMA) in order for people to utilize it, because it makes adjustments to ISF and targets, and loops accordingly from those values. It also has safety caps, set and automatically included, that limit the amount of adjustment in either direction autosens can make to any of the parameters.
  4. Tiny rigs
    1. Thanks to Intel, we were introduced to a board designer who collaborated with the OpenAPS community and inspired the creation of the “Explorer Board”. It’s a multipurpose board that can be used for home automation and all kinds of things, and it’s another tool in the toolbox of off-the-shelf and commercial hardware that can be used in an OpenAPS setup. Thanks to its built-in radio, it has enabled us to drastically reduce the size of an OpenAPS setup to about the size of two Chapsticks.
  5. Setup scripts
    1. As soon as we were working on the Explorer Board, I envisioned that it would be a game changer for increasing access for those who thought a Pi was too big/too burdensome for regular use with a DIY closed loop system. I knew we had a lot of work to do to cut down on the friction of the setup process – balanced against the fact that the DIY part of setting up a closed loop system was, and still is, incredibly important. We then worked to create the oref0-setup script to streamline the setup process. For anyone building a loop, you still have to set up your hardware and build a system, expressing intention in many places about what you want to do and how…but it’s cut down on a lot of friction and increased the amount of energy people have left, which can instead be focused on reading the code and understanding the underlying algorithm(s) and features they are considering using.
  6. Streamlined documentation
    1. The OpenAPS “docs” are an incredible labor of love and a testament to dozens and dozens of people who have contributed by sharing their knowledge about hardware, software, and the process it takes to weave all of these tools together. It has gotten to be very long, but given the advent of the Explorer Board hardware and the setup scripts, we were able to drastically streamline the docs and make it a lot easier to go from phase 0 (get and setup hardware, depending on the kind of gear you have); to phase 1 (monitoring and visualizing tools, like Nightscout); to phase 2 (actually setup openaps tools and build your system); to phase 3 (starting with a low glucose suspend only system and how to tune targets and settings safely); to phase 4 (iterating and improving on your system with advanced features, if one so desires). The “old” documentation and manual tool descriptions are still in the docs, but 95% of people don’t need them.
  7. IFTTT and other tool integrations
    1. It’s definitely worth calling out the integration with IFTTT that allows people to use things like Alexa, Siri, Pebble watches, Google Assistant (and just about anything else you can think of), to easily enter carbs or “modes” for OpenAPS to use, or to easily get information about the status of the system. (My personal favorite piece of this is my recent “hack” to automatically have OpenAPS trigger a “waking up” mode to combat hormone-driven BG increases that happen when I start moving around in the morning – but without having to remember to set the mode manually!)
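To make the preferences discussion in item 2 above more concrete, here’s an illustrative sketch of the kinds of values involved. The key names and defaults here are approximate from memory – always check the OpenAPS docs for the current preferences format before editing your own file:

```javascript
// Illustrative only -- not a drop-in preferences file.
const preferences = {
  max_iob: 0,                          // default: loop effectively acts as low-glucose-suspend only
  override_high_target_with_low: true, // caregiver boluses to the high target; loop targets the low end
  carbratio_adjustmentratio: 1.1,      // caregivers bolus with a deliberately conservative carb ratio
};
```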

…and that was all just what the community did in 2016! :) There are some other exciting things in development and being tested right now by the community, and I look forward to sharing more as this advanced algorithm development continues.

Happy New Year, everyone!

Half life

I have now lived with diabetes for more than half of my life.

That also means I have now lived less than half of my life without diabetes.

This somehow makes the passing of another year living with diabetes seem much more impactful to me. Maybe not to you, or to someone else with a different experience of living with diabetes and a different timeline of life before and after diagnosis…but to me this is a big one.

I’m happy to have context, though, to help me keep things in perspective. For example, I’ve now lived with a closed loop artificial pancreas (or automated insulin delivery) system for almost two full years.

(That’s almost as significant a marker of a “with” vs. “without” comparison as living “with” vs. “without” diabetes.)

And because I ended up with type 1 diabetes, I found out that doing things for other people and the communities you’re a part of is a powerful way to help yourself, both in the short term and the long term. That’s what drove me to figure out a way to take #DIYPS closed loop and make it something open source. And by doing that, I learned so much more about open source, and have been able to partner with incredible people innovating in hardware and software. These collaborations have resulted in an incredibly rich community of passionate people I like to call #OpenAPS-ers.

While #OpenAPS is by no means a cure, and no artificial pancreas will be a cure, these systems provide an immeasurably improved quality of life that a lot of us didn’t realize was possible with diabetes. Someone told me that with #OpenAPS he can get the same results for his child living with diabetes, but with about 85% less work. And given the enormous time and cognitive burden of diabetes, that is a HUGE reduction.

And now doors are opening for us collectively to make even more of a significant impact on the diabetes community, and our fellow patient communities. Yesterday, while at the White House Frontiers conference, NIH Director Dr. Francis Collins was in the audience during my panel. At the end of the day, he stopped me to ask questions about my experiences and perspective on the FDA and what we need from the government. I was able to talk with him about the need for FDA & other parts of the government to help foster and support open source innovation. We talked about the importance of data access for patients, and the need for data visibility on commercially approved medical devices.

Showing former NIH Director Francis Collins my OpenAPS rig and talking about data interoperability.

This is not just a need of people with diabetes (although it’s certainly very applicable for all of the manufacturers with pipelines full of artificial pancreas products): these are universal needs of people dealing with serious health conditions.

Given what I heard yesterday, it’s working. The #WeAreNotWaiting spirit is infusing our partners in these other areas. We are planting seeds, building relationships, and working in collaboration with those at the FDA, NIH, HHS in addition to those in industry and academia. I know they were working toward these same goals before, but social media has helped raise up our collective voices about the burning need to make things better, sooner, for more people.

So if I have to live the rest of my life at a ratio where more than half of it has been spent living with diabetes, I look forward to continuing to work to get to an 85% reduction in the burden of daily life with diabetes for everyone.

Old news alert: FDA is monitoring the DIY community

There was a news article today that got a lot of people to react strongly. In that sense, the article did its job: it got people talking. But that doesn’t mean it got all the details right, as an insider to the DIY community would know.

What am I talking about?

There was an article posted today in “Clinical Endocrinology News” with the titillating headline of “FDA Official: We’re monitoring DIY artificial pancreas boom”.

Guess what, though? This is NOT news. We’ve been talking to the FDA, and they’ve in fact been “monitoring” us (especially if monitoring includes reading this blog, DIYPS.org ;)) since the summer of 2014, before we even turned our eyes toward closing the loop. Definitely since we announced, with them in the room at D-Data in 2015, that we would close the loop. And even more so after we closed the loop and then decided to go the #OpenAPS route and find a way to make closed loop technology open source. And others from the community, like Ben West, have been talking with the FDA for even longer than we have.

What the article got right:

Recently at AADE, Courtney Lias from the FDA (who gave a similar presentation at D-Data a month ago) gave a presentation on AP technology. She addressed both how the FDA is looking at the DIY community (they believe they have enforcement discretion, even though no one in the DIY community is distributing a medical device, which is legally where the FDA has its jurisdiction) and how it’s looking at the commercial vendors with products in the pipeline.

Courtney highlighted questions for CDEs to ask patients who may be bringing up (or bringing in) DIY closed loops. They are good questions – questions we also recommend people ask themselves, and a critical part of the safety-first approach the DIY community advocates every day.

(It was not mentioned in this article, but Aaron Kowalski’s presentation at AADE also highlighted some critical truths that I think are key about setting and managing expectations regarding closed looping. I often talk about these in addition to pointing out that closed looping should be a personal, informed choice. I hope these points about setting expectations, and our points about the stages of switching from standard diabetes tools to closed looping, become a bigger part of the conversation about closed loop safety and usability in the future.)

Where the article linked together some sentences that caused friction today:

The end of the article had a statement along the lines of an FDA concern about what happens if an AP breaks and you have a newly diagnosed person who doesn’t have old school, manual diabetes methods to fall back on. The implication appeared to be that these concerns were solely about the DIY looping “boom”. However, we know from previous presentations that Courtney/FDA usually brings this up as a concern for commercial/all AP technology – this isn’t a “concern” unique to DIY loops.

And that’s the catch – all of the concerns and questions FDA has, the DIY community has, too!

In fact, we want FDA to ask the same questions of commercial vendors, and we are going to be reaching out to the FDA to ask how they will ensure that we, as patients, can ask and get answers to these questions as end users when the FDA is approving this technology.

Because that’s the missing piece.

Right now, with the current technology on the market, we don’t get answers or insight into how these systems and devices work. This is even MORE critical when we’re talking about devices that automate insulin delivery, as the #OpenAPS community has learned from our experiences with looping. Getting the right level of data access and visibility is key to successful looping, and we expect the same from the commercial products that will be coming to market – so the FDA has a role to play here.

What we can do as a result

And we have a role, too. We’ll play our part by communicating our concerns and questions directly to the FDA, which is the only way they can officially respond or react or adjust what they’re doing. They unfortunately can’t respond to tweets. So I’m drafting an email to send to FDA, which will include a compilation of many of the questions and concerns the community has voiced today (and previously) on this topic.

Moving forward, I hope to see others do the same when concerns and questions come up. You don’t need to work for a commercial manufacturer, or be a part of a formal initiative, in order to talk to the FDA. Anyone can communicate with them! You can do that by sending an email, submitting a pre-submission, responding to draft guidances, and more. And we can all, in our informal or formal interactions, ask for clarity and push for transparency and set expectations about the features and products we want to see coming from commercial manufacturers.

OpenAPS poster cited in Nature!

I was thrilled to read a commentary by John Wilbanks and Eric Topol, out in Nature today, titled “Stop the privatization of health data”. (Click here to read a PDF version of the article.)

Tucked at the bottom of the second page of the (PDF version of the) article:

“For instance, in 2014, a woman with type 1 diabetes wired together a tiny processor, an insulin pump and a continuous glucose monitor to automate the regulation of her blood sugar levels. For a small community of patients, the collective use of such ‘home-made’ systems has resulted in improvements that are well ahead of those provided by devices and interventions emerging from conventional markets.¹”

(The citation is to the poster that we presented on behalf of the #OpenAPS community at the American Diabetes Association Scientific Sessions meeting last month, with self-reported outcomes from 18 of the first 40 users and builders of DIY artificial pancreas systems.)

(Image: OpenAPS (n=1)*98 as of July 19, 2016)

It’s worth noting that there are now (n=1)*98 users of #OpenAPS, so this “small community” is growing fast: doubling approximately every three months.
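To make that growth rate concrete, here’s a quick back-of-the-envelope sketch – purely illustrative, not part of any OpenAPS code – of what “doubling approximately every three months” would imply if the trend simply continued:

```python
# Illustrative projection only: assume the roughly-three-month doubling
# time observed so far continues, i.e. N(t) = N0 * 2**(t / doubling_months).

N0 = 98              # (n=1)*98 users as of July 19, 2016
doubling_months = 3  # approximate observed doubling time

for t in (3, 6, 12):
    projected = N0 * 2 ** (t / doubling_months)
    print(f"in {t:2d} months: ~{round(projected)} users")
```

No exponential trend holds forever, of course – but even this rough extrapolation (~200 users by fall, ~400 by early 2017) shows why “small community” may not describe us for long.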

Wilbanks and Topol highlight some critical truths in their commentary, and call out another (frustrating) diabetes example to illustrate:

“Although patients can monitor their glucose levels at any instant, their aggregate records are not made accessible to them. And there is no mechanism by which patients or researchers outside the company can gain access to Medtronic’s tens of thousands of measurements.”

I’ve written about this specific example before: new ‘partnerships’ mean my personal health data is likely shared with IBM for Watson’s usage… but I don’t have access to this data or the insights from it, and am in fact missing critical information and data visualization on my FDA-approved medical device that’s been on the market for years.

The call to action for device manufacturers, regulators, and the medical industry is simple: Give me, the patient, my data that I need so I can safely take care of myself and better manage my diabetes.

Wilbanks and Topol emphasize that this won’t happen “…unless each of us takes responsibility for our own health and disease, and for the information that we can generate about ourselves. When it comes to control over our own data, health data must be where we draw the line.”

This needs to happen everywhere, not just in diabetes. Will you join us in drawing the line?

What we heard and saw at #DData16 and #2016ADA

As mentioned in the previous post, we had the privilege of coming to New Orleans this past weekend for two events – #DData16 and the American Diabetes Association Scientific Sessions (#2016ADA). A few things stuck out, which I wanted to highlight here.

At #DData16:

  • The focus was on artificial pancreas technology, and there was a great panel moderated by Howard Look with several of the AP makers. I was struck by how many of them referenced #OpenAPS or the DIY/#WeAreNotWaiting movement, and the need for industry to collaborate with the DIY community (yes).
  • I was also floored when someone from Dexcom mentioned having read one of my older blog posts that questioned why “???” was displayed instead of information about what was actually going on with my sensor. It was a great reminder of how important it is for us to speak up and keep sharing our experiences, to help device manufacturers know what we need from current and future products – the ones we use every day to help keep us alive.
  • Mark Wilson gave a PHENOMENAL presentation, using a great analogy about driving and accessing the dashboard to help people understand why people with diabetes might choose to DIY. He also talked about his experiences with #OpenAPS, and I highly recommend watching it. (Kudos to Wes for livestreaming it and making it broadly available to all – watch it here!) I’ve mentioned Mark & his DIY-ing here before, especially because one of his creations (the Urchin watchface) is one of my favorite ways to help me view my data, my way.
  • Howard DM’ed me in the middle of the day to ask if I minded going up as part of the patient panel of people with AP experiences. I wasn’t sure what the topic was, but the questions allowed us to talk about our experiences with AP (in my case, I’ve been using a hybrid closed loop for something like 557 days at this point). I made several points about the need for a “plug-n-play” system with modularity, so I can choose the best pump, sensor, and algorithm for me – which may or may not all be made by the same company. (This is also FDA’s vision for the future, and Dr. Courtney Lias both gave a good presentation on this topic and was engaged in the event’s conversation all day!)

At #2016ADA:

  • There needs to be a patient research access program (not just at the American Diabetes Association’s future Scientific Sessions meetings, but at all scientific and academic conferences). Technology has enabled patients to make significant contributions to the medical and scientific fields, but cost and access are huge barriers that prevent this knowledge from scaling. At #2016ADA, “patient” is not even an option on the back of the registration form. Scott and I are privileged that we could potentially pay for this, but we don’t think we should have to pay so much ($410 for a day pass or $900 for a weekend pass) when we are not backed by industry or an academic organization of any sort. (As a side note, a big thank you to the many people who have a) engaged in discussion around this topic, b) helped reach out to contacts at ADA to discuss it, and c) asked about ways to contribute to the cost of us presenting this research this weekend.)
  • We presented research from 18 of the first 40 users of #OpenAPS. You can find the FULL CONTENT of our findings and the research poster in this post on OpenAPS.org. We specifically posted our content online (and tweeted it out – see this thread) for a few reasons:
    • First, everything about #OpenAPS is open source. The content of our poster or any presentation is similarly open source.
    • Not everyone had time to come by the poster.
    • Not everyone has the privilege or funds to attend #2016ADA, and there’s no reason not to share this content online, especially when we will likely get more knowledge sharing as a result of doing so.
  • With the above in mind, we encouraged people stopping by to take whatever photos of our poster they wanted, and told them the content would be posted online. (And in fact, in addition to the blog post about the poster, that information is now on the “Outcomes” page on OpenAPS.org.)
  • Frustratingly, some people were asked to take down posted photos of our poster. If anyone received such a note, please feel free to pass on my tweet confirming that you have authorization from the authors to have taken/used the photo. This is another area (like the need to develop patient research access programs) that needs to be figured out by scientific/academic conferences – presenters/authors should be able to explicitly allow sharing and dissemination of the information they are presenting.
  • Speaking of photos, I was surprised that around half a dozen clinicians (HCPs) stopped by and mentioned having used the picture of the #OpenAPS rig and the story of #OpenAPS in one of their presentations! I am thrilled this story is spreading, even by people we haven’t had direct contact with previously! (Feel free to use this photo in presentations, too, although I’d love to hear about your presentation and see a copy of it!)
  • We had many amazing conversations during the poster session on Sunday. It was scheduled for two hours (12-2pm), but we ended up staying for around four hours and had hundreds of fantastic dialogues. Here were some of the most common themes of conversation:
    • Why are patients doing this?
      • Here’s my why: I originally needed louder alarms, built a smart alarm system that had predictive alerts and turned into an open loop system, and ultimately realized I could close the loop.
    • What can we learn from the people who are DIY-ing?
    • How can we further study the DIY closed loop community?
      • This is my second-favorite topic, which touches on a few things:
        • the plan to do a follow-up study of the larger cohort (since we now have (n=1)*84 loopers) with a full retrospective analysis of the data, rather than just the self-reported outcomes this study used;
        • ideas around doing a comparison study between one or more of the #OpenAPS algorithms and some of the commercial or academic algorithms;
        • ideas to use some of the #OpenAPS-developed tools (like a basal tuning tool we are planning to build) in a clinical trial to help HCPs help patients adjust more quickly and easily to pump therapy.
    • What other pumps will work with this? How can there be more access to this type of DIY technology?
      • We use older pumps that allow us to send temp basal commands; we would love to use a more modern pump that can be purchased on the market today. We had several conversations with device manufacturers about how that might be possible, and we’ll continue to have those conversations until it becomes a reality.
  • There is some great coverage coming for the poster & the #OpenAPS community, and I’ll post links here as I see them published. For starters, Dave deBronkart did a 22-minute interview with Scott & me, which you can see here. DiabetesMine also included mention of the #OpenAPS poster in their conference roundup. And diaTribe wrote up the poster as a “new now next”! Plus, WebMD wrote an article on #OpenAPS and the poster as well.
(Image: A picture of our #2016ADA poster in the exhibit hall)

Scott and I walked away from this weekend with energy for new collaborations (and new contacts for clinical trial and retrospective analysis partnerships) and several ideas for the next phase of studies that we want to plan in partnership with the #OpenAPS community. (We were blown away to discover that the OpenAPS advanced meal assist algorithm is considered by some experts to be one of the most advanced and aggressive algorithms in existence for managing post-meal BG, and may be more advanced than anything that has yet been tested in clinical trials.) Stay tuned for more!