Hormones, CGM preferences, DIY, and why so many things are YDMV even when #WeAreNotWaiting

I posted one of my Nightscout graphs yesterday, showing a snapshot of my morning:

I hadn’t eaten, and my blood sugar still spiked up. I’ve noticed this happens in the mornings sometimes. When I have mentioned it over the years, people are quick to tell me my basal rates are wrong, and I should adjust them because dawn phenomenon. But actually, this isn’t dawn phenomenon. This happens after I physically get up and start moving for the day, whether that happens at 4am, or 6am, or 10am, or even waking up after noon. So, it’s not a basal thing, and modifying my basal rates doesn’t fix it. (And this is why I wanted to add wake-up mode to my suite of tools, to help address this.)

To me, this is a great example (as I mentioned in my Twitter thread) of why diabetes is so hard: sooooo many things impact BG levels, and in many cases, we PWDs just have to roll with it and respond the best we can. In my case, #OpenAPS did a great job responding to the spike and bringing me back down within an hour or so.

One of the questions that popped up yesterday in response to that graph, though, was about the BG line: how did I have two BG lines?

The answer: I wear a G4 sensor, and usually have 2 receivers running off the same transmitter and sensor. One receiver is Share-d to my phone, and uploads to NS via the interwebz. The other receiver, although Share-capable, doesn’t upload that way (because the company only allows you to pair one receiver and upload via Share). I leave that CGM plugged into a rig so it can be a backup for offline looping. When online, the rig with the plugged-in CGM uploads BGs from that receiver to NS.

Sometimes, because of different start/stop times and therefore differing calibration records, the receivers “drift” from each other, making it obvious on the graph when that happens.
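
For anyone curious how visible that drift is in the data, here’s a minimal sketch (assuming a Nightscout-style list of SGV entries tagged with the uploading device’s name; the field values and device names are illustrative, not my actual setup) that pairs up readings from the two receivers and reports the offset between them:

```python
from datetime import datetime, timedelta

# Hypothetical Nightscout-style entries: sgv in mg/dL, device = which uploader sent it
entries = [
    {"dateString": "2018-02-01T08:00:10", "sgv": 112, "device": "share2"},
    {"dateString": "2018-02-01T08:00:40", "sgv": 121, "device": "openaps://rig1"},
    {"dateString": "2018-02-01T08:05:09", "sgv": 115, "device": "share2"},
    {"dateString": "2018-02-01T08:05:41", "sgv": 125, "device": "openaps://rig1"},
]

def parse(entry):
    return datetime.fromisoformat(entry["dateString"]), entry["sgv"]

share = [parse(e) for e in entries if e["device"] == "share2"]
rig = [parse(e) for e in entries if e["device"].startswith("openaps")]

# Pair each Share reading with the closest-in-time rig reading (within 2.5 minutes)
for t, sgv in share:
    closest = min(rig, key=lambda r: abs(r[0] - t))
    if abs(closest[0] - t) <= timedelta(minutes=2.5):
        print(f"{t:%H:%M}  share={sgv}  rig={closest[1]}  drift={closest[1] - sgv:+d} mg/dL")
```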

And because if you give a mouse a cookie, other questions come up: someone had also asked me why I’m using G4, and why not G5. Someone else asked me in a different channel why I’m not using G5 and xDrip+ (a DIY option that doesn’t use the Dexcom app or a Dexcom receiver to receive or process the data), or another DIY tool to process my CGM data.

Now, as always, what I chose to use is my personal preference. It’s colored by my preference for what equipment I’m willing to carry; what phone I want to use; what data I want to have; my safety backup preferences; what my insurance covers and what I can afford; where I live; etc. So, just because I use this method doesn’t mean I expect anyone else to want to do it. It’s just what I do. I don’t try to convince other people to use this method, and I also hope others can share info about what works for them without trying to hammer me over the head because what I’m doing is different. This is where YDMV (your diabetes may vary) comes in. It’s so true, and even within “people who DIY”, there’s a ton of variation – and that’s a good thing! I adore having options to find what works for me, and I want other people to have options and choices so they can find what works for them, too.

That being said, here’s the answer to how I run my CGMs and some of the things that have factored into my choice to not DIY CGM receivers/data processing most of the time:

  • With two G4 receivers, I can keep one in my pocket, paired to my phone and uploading via Share. When I’m out and about in the city, or just during the day in general, this is what I carry. When I run, I take the Share receiver.
  • But, I also like emergency back-ups. I like keeping a receiver plugged into an #OpenAPS rig so that if connectivity goes out/down, I can keep looping without a break in my stride. I could keep my Share receiver plugged into the rig, but that would involve unplugging and replugging it fairly frequently when I run errands or actually go for a short run, and meh. Hassle. So I keep the “non-Share” receiver as the one that’s usually plugged into my ‘offline’ rig.
  • Having the G4 receiver plugged into the rig enables me to see raw data. Raw data is nice for a couple of things: assessing the health of my sensor (if it gets jumpy compared to the filtered data, I know the quality of the sensor is decreasing, and that helps me decide when to change it); giving me a clue to what’s going on when the filtered data goes to ??? or during the start up of a new sensor; and actually being able to run my rig and loop off some* of the raw data when I need to. (*With OpenAPS, you can choose to loop off raw data only within a certain range, and there’s an option to only deliver a proportion of the correction that would otherwise be proposed when the raw readings are at a higher level; see the sketch after this list.)
  • With two receivers running, that also gives me more flexibility around sensor changes. Technically, the sensor is approved for 7 days. At the end of the 7 days, the receiver stops giving you data and forces you to “start” a new sensor session. That could be with a newly inserted sensor, or with the same sensor still on your body. Either way, there’s theoretically a 2-hour ‘warm up’ period at the start of that session where you can’t see data. With 2 receivers, I can stagger the end and start of sensor sessions. I usually set a calendar alarm to restart one of the receivers on the night of the 6th day of the session, allowing me more flexibility on day 7 to choose when to restart or change my sensor.
  • This also means I can choose to “hot swap” when actually changing a sensor. I may choose not to hit ‘stop’ and ‘start’ on a sensor session on one of the receivers, but rather shut it off for about 30 minutes, and just do the stop/start on the other receiver (leaving it plugged into a rig to upload raw data to NS, so I can see where the new sensor’s readings come in compared to the old one). When I power the non-restarted receiver back on about 30 minutes after swapping the transmitter over to the new sensor (as soon as the raw readings have flattened out), it usually goes to “no signal” for a few minutes, and then comes back with some data – an hour or more before the restarted receiver finishes its warm-up and lets me calibrate and get data. There are downsides to this method: the data on the receiver that didn’t get restarted can be fairly inaccurate, as it’s still using the calibrations from the old sensor. So I don’t always do that, but when it’s more important to me to be able to see the relative trend of where BG is (flat, dropping, or spiking), it’s nice to have that option. And since I often soak my new CGM sensors, the data from “day 1” of the sensor after a session “start” on the receiver is often better than if it were truly day 1 of the sensor being in my body.
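
To illustrate the kind of guardrails I mean around looping off raw data (promised above): this is a simplified sketch of the idea only, not the actual OpenAPS code – the function name, thresholds, and fraction are made up for illustration.

```python
def bg_for_looping(filtered_bg, raw_bg, raw_min=80, raw_max=200, raw_correction_fraction=0.5):
    """Pick a BG value (and a correction scaling factor) for the loop.

    If a filtered reading is present, use it at full strength. Otherwise fall back
    to raw data, but only within a sane range, and only allow a fraction of the
    correction that would normally be proposed.
    """
    if filtered_bg is not None:
        return filtered_bg, 1.0                     # normal looping on filtered data
    if raw_bg is not None and raw_min <= raw_bg <= raw_max:
        return raw_bg, raw_correction_fraction      # cautious looping on raw data
    return None, 0.0                                # no usable data: don't correct at all

# Example: filtered data shows ???, raw reads 165 mg/dL
bg, correction_scale = bg_for_looping(None, 165)
print(bg, correction_scale)  # -> 165 0.5
```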

Phew. Maybe that sounds like a lot of work, but the above setup works well for me for a variety of reasons, and also gives me flexibility and choice around when I change sensors, when I am forced to be without data or potentially not loop, etc. Given that my schedule varies a lot and I’m not consistently in the same time zone, what works for starting or changing a sensor one week in one part of the world doesn’t always line up conveniently exactly 168 hours (7 days) later, when I may be in another part of the world doing something entirely different.

Some of the reasons I haven’t switched to G5 include the fact that the transmitters only last for ~3 months instead of 6+ months; I’ve observed many people being frustrated by the sensor not talking to the phone even when it’s right beside them; there’s no raw data on G5; you can’t have multiple receivers paired with your transmitter; etc.

Now, you might say: but that’s using Dexcom’s app, etc. With DIY solutions, those limitations don’t apply! And that’s true, to a degree – savvy folks in the community have figured out how to make it so you don’t *have* to use Dexcom’s app to display or process the data; you can replace the batteries on the transmitter; etc. But, just like my method above of using raw data isn’t necessarily going to work for everyone, or might not be something someone else would choose to do, the DIY options that go with G5 (or even G4 in some cases) aren’t something I believe is the right choice for me.

A lot of it comes down to safety. When we first started designing my DIY closed loop, we spent eons discussing how we could do this safely for me. And that evolved into further discussions about how other people could do this safely, too. A core principle of the OpenAPS Reference Design is that we use already approved and vetted devices that exist on the market (e.g. existing pumps and CGMs). Those devices include approved and vetted methods for CGM data processing, too, which is even more important when the CGM data is being used to dose insulin, as in OpenAPS. Now – this is not a requirement we can enforce: people can do what they want, and some people are even using non-CGMs (such as the Libre, a “Flash Glucose Monitoring” solution, plus a DIY NFC reader) as a CGM source for looping. But whether it’s a DIY app or algorithm processing CGM data, or a different glucose measuring device that’s not a CGM, that choice has some safety implications that I hope people are aware of.

First, the background for those who aren’t familiar: the CGM companies display a processed (“filtered”) version of the CGM data. That’s part of their proprietary stuff, but there are reasons behind it: the raw data can be hectic and weird, and individual readings aren’t the point, anyway. The beauty of CGM is that you can see the trends in addition to the estimated BG number. In some scenarios – such as during sensor starts, or when errors are displayed as ??? – the companies/FDA decided that the CGM should not show data, and should instead show an error message/symbol, to help prevent anyone from making incorrect treatment decisions based on confusing or misleading data. That’s good enough most of the time. As mentioned above, there are edge cases when seeing the raw data is helpful, but most of the time, I’m happy with the filtered data.

But to me, there’s a difference between using raw or DIY-calibrated data for edge cases, vs. using it all the time. I’ve seen several cases in just the past few days involving a newer “DIY CGM app”, which uses its own calibration algorithm for processing the unfiltered CGM readings. These people have reported the app displaying normal BGs (say, 90 mg/dL) while they found themselves in the 40’s (rather low). It’s not clear whether that is due to the app’s calibration algorithm, something the user did in testing and calibrating, or just a bad sensor – and since most of them are not using the official receiver/app in parallel, that’s difficult to figure out. But regardless, it’s happened enough times across numerous people for me to be concerned about a DIY CGM app being used as the primary source of CGM data. There are limitations to using company-built apps or physical devices for CGMs, but where people can afford it, I think it’s important for safety to at least use the approved and vetted receiver/app in parallel, to provide a backup and a baseline level of alerting and alarming. The FDA & the companies have worked to create something that can be relied on to alarm when your BG is actually low (say <55 mg/dl) and to alert a human that something is going on. This is important regardless of whether people are looping or not, but it’s perhaps even more important when people are looping, since that data is driving insulin dosing decisions. Additionally, the company-created devices have been designed to deal with miscalibrations that aren’t in line with what the data from the receiver is showing, and have safety measures in place to “reject” calibrations and request new ones when necessary. Sure: there are times when that’s frustrating, but those features truly are “there for safety”, and are important for avoiding the rare but potentially serious outcomes that could be caused by incorrect CGM readings. Since safety is what we prioritize and design around in DIY closed looping, I hope people will consider that, and prioritize safety first when choosing what to use as their primary data source.
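
To make the “reject calibrations” idea concrete, here’s a toy example of the kind of plausibility check I mean. This is purely illustrative – it is not how any particular manufacturer or DIY app actually implements calibration checking, and the thresholds are made up.

```python
def calibration_is_plausible(meter_bg, current_sensor_bg, max_abs_diff=40, max_rel_diff=0.4):
    """Return False if a fingerstick calibration disagrees wildly with what the
    sensor is currently reading - a hint to wash hands, re-check, and recalibrate."""
    diff = abs(meter_bg - current_sensor_bg)
    return diff <= max_abs_diff or diff <= max_rel_diff * current_sensor_bg

print(calibration_is_plausible(90, 100))   # True: close enough, accept the calibration
print(calibration_is_plausible(90, 200))   # False: something's off, ask for another fingerstick
```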

Tl;dr – YDMV. I currently use G4 with two receivers, for the reasons described above. I think it’s important to prioritize safety over convenience most of the time, and understand the limitations of the solution that you choose (DIY or commercial). But everyone’s different, and their situation, preferences, etc. may drive different decision making. And did I mention YDMV?

Exploring other sensors that could be used with #OpenAPS and for diabetes in general

Nobody appeared to notice the other day when I tweeted about going through airport security with 13 pieces of adhesive on my body. Which is amusing to me, because normally I sport two: my insulin pump site, and my continuous glucose monitor (CGM) sensor. That particular day, I added another diabetes-related piece of adhesive (I was giving the Freestyle Libre, a “flash” aka not-quite-continuous glucose monitor, a try), and 10 pieces of adhesive not directly related to diabetes. Or maybe they will be in the future – and that’s what I’m trying to figure out!

Last fall, my program officer from RWJF (for my role as PI on this RWJF-funded grant – read more about it here if you don’t know about my research work) made introductions to a series of people who might know other people I should speak to about our project’s work. One of these introductions was to a researcher at UCSD, Todd Coleman. I happened to be in San Diego for a meeting, so my co-PI Eric Hekler and I stopped by to meet Todd. He told us about his lab’s work to develop an ambulatory GI sensor to measure gastric (stomach) activity, and my brain immediately started drooling over the idea of having a sensor to better help assess the methods our DIY closed looping community uses to articulate dynamic carb absorption, aka how slowly or quickly carbs are absorbing and therefore impacting blood glucose levels. I took over part of the white board in his office, and started drawing him examples of the different data elements that #OpenAPS (my DIY hybrid closed loop “artificial pancreas”) calculates every 5 minutes, and how fantastic it would be to wear the GI sensor and graph the gastric activity data alongside this detailed level of diabetes data.

I immediately was envisioning a number of things:

  • Assessing basic digestion patterns and figuring out if the dynamic carb absorption models in OpenAPS were reasonable. (Right now, we’re going off of observations and tweaking the model based on BG data and manual carb entry data from humans. Finding ways to validate these models would be awesome.)
  • Seeing if we can quantify, or use the data to better predict, how post-meal activity like walking home after dinner impacts carb absorption. (I notice a lot of slowed digestion when walking home from dinner, which obviously impacts how insulin can and should be dosed if I know I’ll be walking home from dinner or not. But this is something I’ve learned from a lot of observation and trial and error, and I would love to have a more scientific assessment of this impact).
  • Seeing if this could be used as a tool to help people with T1D and gastroparesis, since slowed digestion impacts insulin dosing, and can be unpredictable and frustrating. (I knew gastroparesis was “common”, but have since learned that 40-50% of PWDs may experience gastroparesis or slowed digestion, and it’s flabbergasting how little is talked about in the diabetes community and how few resources are focused on coming up with new strategies and methods to help!)
  • Learning exactly what happens to digestion when you have celiac disease and get glutened.
  • Etc.

Fast forward a few months: Todd and his post-doctoral fellow, Armen Gharibans, got on a video call with me to discuss potentially letting me use one of their GI sensors. I still don’t know what I said to convince them to say yes, but I’m thrilled they did! Armen shipped me one of the devices, some electrodes, and a set of lipo batteries.

Here’s what the device looks like – it’s a 3D printed gray box that holds an open source circuit board with connectors to wearable electrodes. (With American chapstick and unicorn for scale, of course.)

[Image: DanaMLewis EGG for scale]

And here’s what it looked like on me:

[Image: DanaMLewis wearing an ambulatory EGG]

The device stores data on an SD card, so I had many flashbacks to my first OpenAPS rig and how I managed to bork the SD cards pretty easily. Turns out, that’s not just a Pi thing, because I managed to bork one of my first EGG SD cards, too. Go figure!

And this device is why I went through airport security the other day with 10 electrodes on. (I disconnected the device, put it in my bag alongside my OpenAPS rigs, and they all went through the x-ray just fine, as always.)

Just like OpenAPS, this device is obviously not waterproof, and neither are the electrodes, so there are limitations to when I can wear it. Generally, I’ve been showering at night as usual, then applying a fresh set of electrodes and wearing the device after that, until the next evening when I take a shower. Right now, hardcore activity (e.g. running or situps) generates too much noise in the signal for the gastric data to be usable during those times, so I’ve been wearing it on days when I’ve not been running and when I’ve not been traveling, so Scott can help me apply and connect the right electrodes in the right places.

This device is straight from a lab, too, so like with #OpenAPS I’ve been an interesting guinea pig for the research team, and have found that even low-level activity like bending over to put shoes on can trigger the device’s reset button. That means I’ve had to pay attention to “is the light still on and blinking” (which is hard, since it’s on my abdomen under my shirt), so thankfully Armen just shipped me another version of the board with the reset button removed, to see if that makes it less likely to reset. (Resetting is a problem because the device then stops recording data unless I notice and hit the “start recording” button again – and having to keep looking at it periodically to see if it’s recording drives me bonkers.) I just got the new board in the mail, so I’m excited to wear it and see if that resolves the reset problem!

Data-wise, it’s been fascinating to get a peek into my stomach activity and compare it to the data I have from OpenAPS around net insulin activity levels, dynamic carb absorption activity, expectations of what my BG *should* be doing, and what actually ended up happening BG-wise. I wore it one night after a 4 mile run followed by a big dinner: I had ongoing digestion throughout the night, paired with increased sensitivity from the run, so I needed less insulin overall despite still having plenty of digestion happening (and picture-perfect BGs that night, which I wasn’t expecting). I only have a few days’ worth of data, but I’m excited to wear it more and see if there are differences based on daily activity patterns, the influence of running, and the impact of different types of meals (size, makeup of the meal, etc).
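
For the data nerds: the comparison itself is mostly a time-series alignment problem. Here’s a rough sketch (the file names and column names are made up placeholders, not the actual exports) of how the gastric activity data could be lined up against the 5-minute OpenAPS data using pandas:

```python
import pandas as pd

# Hypothetical exports: EGG gastric activity and OpenAPS calculations, both timestamped
egg = pd.read_csv("egg_gastric_activity.csv", parse_dates=["timestamp"])   # e.g. gastric_power
aps = pd.read_csv("openaps_calcs.csv", parse_dates=["timestamp"])          # e.g. bg, carb_impact, net_iob

# Resample both onto a common 5-minute grid and join them for side-by-side comparison
egg_5m = egg.set_index("timestamp").resample("5min").mean()
aps_5m = aps.set_index("timestamp").resample("5min").mean()
combined = egg_5m.join(aps_5m, how="inner")

# Quick look at how gastric activity tracks the carb-absorption estimate overnight
print(combined[["gastric_power", "carb_impact", "bg"]].describe())
combined[["gastric_power", "carb_impact"]].plot(secondary_y=["carb_impact"])
```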

A huge thanks to Todd, Armen (who’s been phenomenal about getting me the translated GI data back with super fast turnaround time), and the rest of the group that developed the sensor. They just put out a press release about a publication with data from one of their GI studies, and that press release (or this news piece) is a great read if you’re curious to learn more about the GI sensor. I’m excited to see what I can learn from it, and how we can potentially apply some of these learnings – and maybe other non-diabetes sensors – to help improve daily diabetes management!

Vitamin D and insulin sensitivity

tl;dr – for me, Vitamin D hugely influences insulin sensitivity.

After the flu, I continued to be sick. We did the usual song and dance many people do around “hey, do you have pneumonia?”. Which, luckily, I didn’t, but I was still pretty sick, and my after-visit summary sheet said bronchitis. Also, my average BGs were going up, which was weird. After all, when I had the flu, I had spectacular BGs throughout. So I was pretty concerned when my time in range started dropping and my average BG started rising.

In diabetes, there are a lot of things that influence BGs. A bad pump site, a bad bottle of insulin, stress, sickness, etc. can all cause out-of-range BGs. Most of these are helped by having a DIY closed loop like OpenAPS. So, when your BGs start to rise above (your) normal and stay there, it’s indicative of something else going on. And because I was sick, that’s what I thought it was. But as I continued to gradually heal, I noticed something else: not only were my BG averages continuing to rise (not normal), but I also was needing a lot more insulin. Like, 20-30u more per day than usual. And that wasn’t just one day – it was 4 days of that much insulin being required. Yikes. That’s not normal, either.

So, I was thinking that I was hitting the Fiasp plateau, which made me really sad. I’ve been using Fiasp for many months now with good results. (For those of you who haven’t been tuned into the diabetes community online: while many people like Fiasp because it’s slightly faster, many people have also experienced issues with it, ranging from pump sites dying much faster than on other insulins, to prolonged high BGs where “insulin acts like water”, etc.) But, I was prepared mentally to accept the plateau as the likely cause. I debated with Scott whether I should switch back to my other insulin for 2-3 sites and reservoirs to give my body a break, and then try again. But I was still sick – so maybe I should wait until I was not clearing gunk out of my lungs. I was also pretty convinced that it was correlated with my absolute ZERO level of activity. (I had some rising BG averages briefly over Christmas where I was fearing the plateau, but it turned out to be related to my inactivity, and getting more than zero steps a day resolved that.) I knew I would be moving around more the next week as I gradually felt better, so it should hopefully self-resolve. But making changes in diabetes sometimes feels like chicken and egg, with really complicated chickens and eggs – there are a lot of variables, and it’s hard to pin down the single variable at the root of the problem.

One other topic came up in our discussion – vitamin D. Scott asked me, “when was the last time you saw the sun?”. Which, because I’d been sick for weeks, and traveled for a week before that, AND because we live in Seattle and it’s winter, meant I couldn’t remember the last time I had seen the sun directly on my skin. (That sounds depressing, doesn’t it? Sheesh.)

So, I decided I would not switch back to the previous insulin I was using – I would give it some more time before trying that – and instead I would focus on taking my vitamin D (because I hadn’t been taking it) and on getting at least SOME activity every day. I took vitamin D that night, went to bed, and….

…woke up with perfect BGs. But I didn’t hold my breath, because I had been having ok nights but rough days that required the extra 30 units of insulin. But by the end of the day, I still had picture-perfect BGs (my “normal”), and I was back to using my typical average amount of insulin. PHEW. Day 2 also yielded great BG levels (for me, regardless of sickness) and an around-average amount of insulin needed during the daytime. Double phew. Day 3 is also going as expected, BG- and total-insulin-usage-wise.

You might find yourself thinking, “how can it be as simple as Vitamin D? There’s probably something else going on.” I would think that too – except that I have enough data to know that, when I’m vitamin D deficient, getting some vitamin D (either via pill or in its natural form from sunlight) can pack a punch for insulin sensitivity. In 2014, Scott and I went out in February, even though it was cold, to sit in a park and get some sunshine. After about an hour of sitting and doing nothing, with no extra insulin on board, WHOOOSH. I went mega-low. I’ve had several other experiences where, after likely being vitamin D deficient and then spending an hour or so in sunlight, WHOOSH. And the same when there was no sunlight, but I took my vitamin D supplements after a while of not taking them. And no, they’re not mixed with cinnamon 😉 (That’s a diabetes joke; cinnamon does not cure diabetes. Nothing cures type 1 diabetes.)

So tl;dr – my insulin sensitivity is influenced by vitamin D, and I’ll be trying to do a better job to take my vitamin D regularly in the winters from now on!

[Image: Making changes in diabetes is hard, by DanaMLewis]

Quantified sickness when you have #OpenAPS and the flu

Getting “real people sick*” is the worst. And it can be terrifying when you have type 1 diabetes, and know the sickness is likely both to send your blood sugars rocketing sky high and to leave you exhausted and weak, making it that much harder to deal with a plummeting low.

*(Scott hates this term because he doesn’t like the implication that PWDs aren’t real. We’re real, all right. But I like the phrase because it differentiates between feeling bad for blood sugar-related reasons and the kind of sickness that anyone can get.)

In February 2014, Scott got home from a conference on Friday, and on Saturday complained about being tired with a headache. By Sunday, I started feeling weary with a sore throat. By Monday morning, I had a raging fever, chills, and the bare minimum of energy required to drag myself into the employee health clinic and get diagnosed with the flu. And since they knew I was single and lived by myself, the conversation went from “here’s your prescription for Tamiflu” to “but you can’t be by yourself, maybe we should find a bed for you in the hospital” because of how sick I was. Luckily, I called Scott and asked him to come pick me up and let me stay at his place. And there I stayed in complete misery for several days, the sickest I’d ever been. I remember at one point on the second day, waking up from a fitful doze and seeing Scott standing across the room with his laptop on a dresser, using it as a standing desk because he was so worried about me that he didn’t want to leave the room at that point. It was that bad.

Luckily, I survived. (And good thing, right, given that we went on to build OpenAPS, yes? ;)) This year’s flu experience was different. This year I was real-people sick, but without the diabetes-related fear that I’d so often experienced in the past. My blood sugars were perfectly managed by OpenAPS. I didn’t go low. It didn’t matter if I didn’t eat, or did eat (potato soup, ice cream, and frozen fruit bars were the foods of choice). My BGs stayed almost entirely in range. And because they were so consistently in range that it was odd, I started watching the sensitivity ratio that autosensitivity calculates (a sketch of how those nightly ratios can be pulled out of the data follows the list below) to see how my insulin sensitivity was changing over the course of the sickness. By day 5, I finally felt good enough to share some of that data (aka, tweet). Here’s what I found from this year’s flu experience:

  • Night 1 was terrible, because I got hardly any deep sleep (45 minutes, whereas 2+h is my usual average per night) and kept waking up coughing. I also was 40% insulin resistant all night long and into Day 2, meaning it took 40% more insulin than usual to keep my BGs at target.
  • Night 2 was even worse – ZERO deep sleep. Ahhhh! It was terrible. Resistance also nudged up to 50%.
  • Night 3 – hallelujah, deep sleep returned. I ended up getting 4h53m of deep sleep, and also was able to sleep for closer to 2 hour blocks at a time, with less coughing. Also, going into night 3 was pretty much the only “high” I had of being sick – up around 180 for a few hours. Then it fell off a cliff and whooshed down to the bottom of my target, marking the drastic end of insulin resistance. After that, insulin sensitivity was fairly normal.
  • Night 4 yielded more deep sleep (>5 hours), and a tad bit of insulin sensitivity (~10%), but it’s unclear whether that’s totally sickness related or more related to the fact that I wasn’t eating much in day 3 and day 4.
  • Night 5 felt like I was going backward – 1h36m of deep sleep, tons of coughing, and interestingly a tad bit of insulin resistance (~20%) again. Night 6 (last night) I supposedly got plenty of deep sleep again (>4h), but didn’t feel like it at all due to coughing. BGs are still perfectly in range, and insulin sensitivity back to usual.
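
Those nightly resistance/sensitivity percentages come from the autosens ratio my rig calculates every loop cycle, as promised above. Here’s a rough sketch of how to roll those up per night – the CSV export, file name, and column names are hypothetical, but the idea (ratio > 1.0 means resistance, < 1.0 means extra sensitivity) matches how I read the numbers:

```python
import pandas as pd

# Hypothetical export of the autosens ratio logged by the rig every ~5 minutes
df = pd.read_csv("autosens_log.csv", parse_dates=["timestamp"])  # columns: timestamp, ratio

# Keep overnight hours (11pm-7am) and label each reading with the night it belongs to,
# shifting by an hour so the 11pm-midnight readings count toward the next morning's date
overnight = df[(df["timestamp"].dt.hour >= 23) | (df["timestamp"].dt.hour < 7)].copy()
overnight["night"] = (overnight["timestamp"] + pd.Timedelta(hours=1)).dt.date

# Average the ratio per night and translate it into a resistance/sensitivity percentage
for night, ratio in overnight.groupby("night")["ratio"].mean().items():
    label = "resistant" if ratio > 1 else "sensitive"
    print(f"{night}: avg autosens ratio {ratio:.2f} ({label}, {(ratio - 1) * 100:+.0f}%)")
```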

This was all still done with no boluses – just a carb announcement when I ate whatever it was I was eating. In several cases there was negative IOB, but I didn’t have the usual spikes that I would normally see from that. I had 120 carbs of gluten free biscuits and gravy yesterday, and I didn’t go higher than 130 mg/dl.

It’s a weird feeling to have been this sick, and have perfectly normal blood sugars. But that’s why it’s so interesting to be able to look at other data beyond average, time in range, and A1c – we now have the tools and the data to be able to dive in and really understand more about what our bodies are doing in sick situations, whether it’s norovirus or the flu.

I’m thinking that if everyone shared their data from when they had the flu, or norovirus, or strep throat, or whatever, we might be able to start to analyze and detect patterns of resistance and sensitivity changes over the course of typical illnesses. This way, when someone with diabetes gets sick, we’d know generally to “expect around XX% resistance for Days 1-3, and then expect a drop off that looks like this on Day 4”, etc.

That would be way better than the traditional ways of just bracing yourself for sky-high highs and terrible lows with no understanding or ability to make things better during illness. The peace of mind I had during the flu this year was absolutely priceless. Some people will be able to get that with DIY closed loop technology; but as with so many other things we have learned and are learning from this community, I bet we can find ways to help translate these insights to be of benefit for all people with diabetes, regardless of which therapies they have access to or decide to use.

Want to help? Been sick? Consider donating your data to my diabetes sick-day analysis project. What you should do:

  1. If you’re using a closed loop, donate your data to the OpenAPS Data Commons. You can do all your data (yay!), or just the time frame you’ve been sick. Use the “message the project owner” feature to anonymously message and share what kind of illness you had, and the dates of sickness.
  2. Not using a closed loop, but have Nightscout? Donate your data to the Nightscout Data Commons, and do the same thing: Use the “message the project owner” feature to anonymously message and share what kind of illness you had, and the dates of sickness.

As more people identify batches of sick-day data, I’ll look at what we can find around sensitivity changes before, during, and after sickness, plus other insights we can learn from the data.

Why Open Humans is an essential part of my work to change the future of healthcare research

I’ve written about Open Humans before, both in terms of how we’re creating Data Commons there for people using Nightscout and DIY closed loops like OpenAPS to donate data for research, and in terms of building tools to help other researchers on the Open Humans platform. Madeleine Ball asked me to share some more about the background of the community’s work and interactions with Open Humans, along with how it will play into the Opening Pathways grant work, so here it is! This is also posted on the OpenHumans blog. Thanks, Madeleine, and Open Humans!

 

So, what do you like about Open Humans?

Health data is important to individuals, including myself, and I think it’s important that we as a society find ways to allow individuals to choose when and how they share their data. Open Humans makes that very easy, and I love being able to work with the Open Humans team to create tools like the Nightscout Data Transfer uploader tool that further anonymizes data uploads. As an individual, this makes it easy to upload my own diabetes data (continuous glucose monitoring data, insulin dosing data, food info, and other data) and share it with projects that I trust. As a researcher, and as a partner to other researchers, it makes it easy to build Data Commons projects on Open Humans to leverage data from the DIY artificial pancreas community to further healthcare research overall.

Wait, “artificial pancreas”? What’s that?

I helped build a DIY “artificial pancreas” that is really an “automated insulin delivery system”. That means a small computer & radio device that can get data from an insulin pump & continuous glucose monitor, process the data and decide what needs to be done, and send commands to adjust the insulin dosing that the insulin pump is doing. Read, write, read, rinse, repeat!
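
In its most stripped-down conceptual form, that read-decide-write cycle looks something like the sketch below. This is just an illustration of the idea, not the actual OpenAPS code – the function names, numbers, and the toy prediction rule are made up for the example.

```python
import time

def read_pump_and_cgm():
    # In reality: query the insulin pump's history and the CGM's recent readings
    return {"iob": 0.8, "bg": [142, 150, 158], "current_basal": 1.0}

def decide(status, target_bg=100):
    # In reality: run the dosing algorithm (predict BG, compare to target, respect safety limits).
    # Here, a toy linear extrapolation of the last two readings stands in for the prediction.
    predicted_bg = status["bg"][-1] + (status["bg"][-1] - status["bg"][-2]) * 3
    if predicted_bg > target_bg + 20:
        return {"temp_basal": status["current_basal"] * 1.5, "duration_min": 30}
    if predicted_bg < target_bg - 20:
        return {"temp_basal": 0.0, "duration_min": 30}
    return None  # no adjustment needed

def write_to_pump(command):
    # In reality: send the temporary basal rate command to the pump over the radio
    print("Setting temp basal:", command)

while True:
    status = read_pump_and_cgm()      # read
    command = decide(status)          # decide what needs to be done
    if command:
        write_to_pump(command)        # write
    time.sleep(5 * 60)                # rinse, repeat every ~5 minutes
```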

I got into this because, as a patient, I rely on my medical equipment. I want my equipment to be better, for me and everyone else. Medical equipment often isn’t perfect. “One size fits all” really doesn’t fit all. In 2013, I built a smarter alarm system for my continuous glucose monitor to make louder alarms. In 2014, with the partnership of others like Ben West who is also a passionate advocate for understanding medical devices, I “closed the loop” and built a hybrid closed loop artificial pancreas system for myself. In early 2015, we open sourced it, launching the OpenAPS movement to make this kind of technology more broadly accessible to those who wanted it.

You must be the only one who’s doing something like this

Actually, no. There are more than 400 people worldwide using various types of DIY closed loop systems – and that’s a low estimate! It’s neat to live during a time when off-the-shelf hardware, existing medical devices, and open source software can be paired to improve our lives. There are also half a dozen (or more) other DIY solutions in the diabetes community, and likely other examples (think 3D-printed prosthetics, etc.) in other types of communities, too. And there should be even more than there are – which is what I’m hoping to work on.

So what exactly is your project that’s being funded?

I created the OpenAPS Data Commons to address a few issues. First, to stop researchers from emailing and asking me for my individual data. I by no means represent all other DIY closed loopers or people with diabetes! Second, the Data Commons approach allows people to donate their data anonymously to research; since it’s anonymized, it is often IRB-exempt. It also makes this data available to people (patient researchers) who aren’t affiliated with an organization and don’t need IRB approval or anything fancy, and just need data to test new algorithm features or investigate theories.

But not everyone automatically knows how to do research. Many people learn research skills, but not everyone has the wherewithal and time to do so. Or maybe they don’t want to become a data science expert! For a variety of reasons, that’s why we decided to create an on-call data science and research team that can provide support around forming research questions and working through the process of scientific discovery, as well as provide data science resources to expedite the research process. This portion of the project does focus on the diabetes community, since we have multiple Data Commons and communities of people donating data for research, as well as dozens of citizen scientists and researchers already in action (with more interested in getting involved).

What else does Open Humans have to do with it?

Since I’ve been administering the Nightscout and OpenAPS Data Commons, I’ve spent a lot of time on the Open Humans site, both as a “participant” of research donating my data and as a “researcher” who is pulling down and using data for research (and working to get it to other researchers). I’ve been able to work closely with Madeleine and suggest a few features to make it easier to use for research and for downloading large data sets from projects. I’ve also been documenting some tools I’ve created (like a complex json to csv converter, and scripts to pull data from multiple OH download files into a single file for analysis), plus writing up more details about how to work with data files coming from Nightscout into OH – all with the goal of helping more researchers dive in and do research without needing specific tooling or technical experience.
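
As an example of the kind of tooling I mean, here’s a minimal sketch of flattening nested JSON records (like entries downloaded from Open Humans) into CSV rows. This is a simplified illustration, not my actual converter, and the file names are placeholders:

```python
import csv
import json

def flatten(record, parent_key="", sep="."):
    """Flatten nested dicts like {"openaps": {"iob": 1.2}} into {"openaps.iob": 1.2}."""
    items = {}
    for key, value in record.items():
        new_key = f"{parent_key}{sep}{key}" if parent_key else key
        if isinstance(value, dict):
            items.update(flatten(value, new_key, sep))
        else:
            items[new_key] = value
    return items

# Hypothetical input: a JSON array of nested records from an Open Humans download
with open("nightscout_entries.json") as f:
    records = [flatten(r) for r in json.load(f)]

# The union of all keys across records becomes the CSV header
fieldnames = sorted({key for record in records for key in record})
with open("nightscout_entries.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=fieldnames)
    writer.writeheader()
    writer.writerows(records)
```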

It’s also great to work with a platform like Open Humans that allows us to share data or use data for multiple projects simultaneously. There’s no burdensome data collection or study procedures for individuals to be able to contribute to numerous research projects where their data is useful. People consent to share their data with the commons, fill out an optional survey (which will save them from having to repeat basic demographic-type information that every research project is interested in), and are done!

Are you *only* working with the diabetes community?

Not at all. The first part of our project does focus on learning best practices and lessons learned from the DIY diabetes communities, but with an eye toward creating an open source toolkit and materials that will be of use to many other patient health communities. My goal is to help as many other patient health communities as possible spark similar #WeAreNotWaiting projects in the areas that are of most use to them, based on their needs.

How can I find out more about this work?
Make sure to read our project announcement blog post if you haven’t already – it’s got some calls to action for people with diabetes; people interested in leading projects in other health communities; as well as other researchers interested in collaborating! Also, follow me on Twitter, for more posts about this work in progress!

Not bolusing for meals (Fiasp, 0.6.0 algorithm in oref0 dev branch, and more)

I tweeted last week+, “I just realized I’ve now gone about 3 weeks without meal bolusing.” That means just a meal announcement (i.e. carb entry estimate, a la 30 carbs or 60 carbs or whatever, based on my IFTTT buttons). No manual bolus.

I kind of keep waiting for the other shoe to drop, because it sounds too good to be true. I’m sure you’re skeptical reading this.

I bet she’s doing SOME bolus.

Well, she must not be eating any carbs.

She must be having worse outcomes, bad post-meal BGs, etc.

Nope, nope, and nope.

  • While I started testing this new set of features with partial boluses and worked my way down (see more below on the testing topic), I’m now literally doing no manual meal bolus. I start eating, and press one button on my watch for a carb estimate entry (which goes via IFTTT to Nightscout and my rig – see the sketch after this list).
  • I eat carbs. I’ve eaten 120 grams of carbs of gluten free biscuits and gravy; 60-90 grams of pasta; dinner followed by a few gluten free cookies, etc.
  • More nuanced details below, but:
    • My 70-180 time in range has stayed the same (93+%) compared to the versions I was testing before with manual meal boluses.
    • My 70-150 and 80-160 time in ranges have decreased slightly compared to manual meal boluses, but…
    • My average blood sugar has actually dropped down (as has my a1c to match).
    • (So this means I’m having a few more spikes above 160, usually topping out at 160-170, whereas before my manual meal boluses would have me top out around 150, when all was well.)
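
For anyone curious about the plumbing behind that one-button carb entry: the sketch below is a Python stand-in for roughly what the IFTTT webhook does when it posts a carb-only treatment to Nightscout. The URL and secret are placeholders, and you should check the Nightscout/OpenAPS docs for the exact IFTTT setup rather than treating this as the canonical recipe.

```python
import hashlib
from datetime import datetime, timezone

import requests

NIGHTSCOUT_URL = "https://my-nightscout.example.com"   # placeholder
API_SECRET = "my-api-secret"                            # placeholder

def announce_carbs(carbs):
    """Post a carb-only treatment (no bolus) to Nightscout, which the rig then picks up."""
    treatment = {
        "eventType": "Carb Correction",
        "carbs": carbs,
        "created_at": datetime.now(timezone.utc).isoformat(),
        "enteredBy": "ifttt-button",
    }
    headers = {
        # Nightscout expects the SHA-1 hash of the API secret in this header
        "api-secret": hashlib.sha1(API_SECRET.encode()).hexdigest(),
        "Content-Type": "application/json",
    }
    response = requests.post(f"{NIGHTSCOUT_URL}/api/v1/treatments.json",
                             json=treatment, headers=headers, timeout=10)
    response.raise_for_status()

announce_carbs(60)  # the "medium-ish meal" button
```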

Also note – no eating soon required. No early bolus or pre-bolus. Just single button press as I stick food in my mouth.

Wow.

(See where I said, waiting for the other shoe to drop?)

That’s why I waited a while to even tweet about it. Maybe it’s a fluke. Maybe it won’t work for other people. Maybe, maybe, maybe. Who knows. It’s still fairly early to tell, but as other people begin to test the current dev branch of oref0 with 0.6.0-related features, they are starting to see improvements as well. (And that could be due to some of the many other features we are adding to 0.6.0, ranging from exponential curves for insulin activity, to allowing SMBs to do more, to carb-ratio-tuned autosensitivity, to huge autotune improvements, etc.)

So while I don’t want to over-hype – and never do, what works for me will not work for everyone – I do want to share my cautious excitement over continuing to be able to push the envelope on algorithms and what might be possible outcome-wise for this kind of technology.

Here’s what is enabling me to be in the no-bolus zone for now well over a month, with still (to me) great outcomes worth the tradeoffs described above:

  1. Faster insulin. Thanks to our lovely looping friends in Germany/Austria, we came back from Europe with a few vials of Fiasp to try. I was HIGHLY skeptical about this. Some of our European friends saw great results right away; others didn’t. I didn’t get great results on it at first. Some of that may be due to the natural changes between insulin types and not knowing exactly how to adjust my manual bolus strategy to the faster insulin action, but until we made some code changes to allow SMBs to do more and added some other features to what’s now 0.6.0, I wasn’t thrilled, and in fact after about two weeks I was about to switch off of it. So that brings me to #2.
  2. More improvements to the algorithm, which is now what will become the 0.6.0 release of oref0. There’s a whole lot of stuff packed in there. Exponential curves. Different carb absorption decay calculations. Allowing SMB to do more. Additional safety guards since we ramped SMB up.
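
To give a flavor of what “exponential curves” means here, below is a sketch of an exponential insulin-activity curve of the general shape this kind of change moves toward. The parameters (a 75-minute peak and a 5-hour DIA) are just example values, and this is not a copy of the oref0 implementation – just an illustration of the curve family.

```python
import math

def insulin_activity(t, dia=300.0, peak=75.0):
    """Fraction of a bolus acting per minute, t minutes after delivery,
    using an exponential activity curve with a given duration (dia) and peak time."""
    if t < 0 or t > dia:
        return 0.0
    tau = peak * (1 - peak / dia) / (1 - 2 * peak / dia)   # time constant of decay
    a = 2 * tau / dia
    s = 1 / (1 - a + (1 + a) * math.exp(-dia / tau))       # normalization so total activity ~= 1 dose
    return (s / tau**2) * t * (1 - t / dia) * math.exp(-t / tau)

# Activity ramps up to the peak time and then tails off toward the end of DIA
for minutes in (15, 45, 75, 120, 240):
    print(f"t={minutes:3d} min: activity={insulin_activity(minutes):.5f} per minute")
```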

How we started testing no-bolus approach:

  • I have always known (thanks to testing dating back to my early DIYPS days, many many many moons ago) that about 6u of insulin is about as much as I should bolus at any one time. So, even if I ate 120 carbs, I usually did about a 6u bolus up front, and let the rig pick up the rest as needed over more hours. I started doing ~75% (or something like that) of the bolus, based on wherever I felt like rounding to with my easy bolus buttons.
  • Whether I did 75% or 100%, I didn’t see a ton of difference at first…
  • …so I took a leap and tried no-bolus with some SMB adjustments to allow it to ramp up faster with carb entry. Behaviorally, I find it a lot easier to do nothing 😀 vs. figuring out the right amount of up-front bolus. And outcomes-wise (see above), it was very similar.

It definitely was an interesting approach to test. Between the Fiasp and the no-bolus up front, for some meals it matched really well and I had practically no rise. Due to incoming netIOB, food type, etc., sometimes I did have a rise – but while it spiked slightly higher (usually 160-170, vs. my earlier 150s with a manual bolus), it was only up there for 2-3 data points and then came sharply down, leveling out smoothly in my preferred post-meal range. So an important lesson I learned was not to over-react to just the BG curve going up, without looking at the predictions to see where I was going to come back down. (And after I had more than one meal where the spike and drop back to normal happened, it was very easy to adjust to the BG graph and not get that emotional tug to “do more” with a quick short rise like that.)

Obviously, starting BG makes a difference. I’m usually starting <130 mg/dL when I see these spikes cap out at 170 or lower. I’ve started higher, and seen higher rises, too. They’re not all perfect: with occasional pump site issues, carb underestimates, unplanned carb stacking, and all the randomness of diabetes and a non-structured lifestyle (including live-testing bleeding edge algorithm changes), I’ve spent 12% of the last month >160 mg/dL, which is about the same as the 3 months before that. But in most cases (I’d say 95%), the no-bolus approach has actually yielded better outcomes than I expected AND has avoided post-meal lows better than I would have achieved with a manual bolus.

This is huge when you think about the QOL aspect of not having to do as much math at a meal, and when you think about all the complicating factors related to food – timing (do you bolus when you order, or when the food arrives, or earlier than that?), and the gluten factor. I have celiac disease, so if I’m eating out (which we do a lot, especially since I travel frequently), bolusing prior to setting eyes on the food (and knowing whether they plated it with bread, which would mean sending it back and starting all over again) just isn’t smart. That’s why eating soon historically worked so well for me vs. traditional pre-boluses: I could set the target entering the restaurant, bolus when I laid eyes on my hopefully safe food, and get reasonable (topping out around 150) meal outcomes.

It also worked really well in the case where a restaurant cooked my gluten free pasta in the same pasta cooker and water as regular pasta, but didn’t inform me until after I found stray gluten noodles in the bottom of my pasta dish and started asking how that was possible, since they (used to) do gluten free well. (Now, I pick up heaps of pasta and sort the noodles one by one to make sure they all match before ever eating gluten free pasta. It makes waiters look at you very worriedly as you wave pasta around in the air, but better safe than glutened (again).) So, I was majorly glutened, and my digestive system was all out of sorts (isn’t that a nice, polite way to describe getting glutened?) for many days, which of course impacted BG and insulin right then and for the days afterward. But because I had done carb entry and no bolus, I was able to edit the carb entry down; I didn’t have that much insulin stacked, and I didn’t end up low after glutening, which is usually what happens.

Is that a super regular situation for most people? No. But it was super nice. And also helped me face pasta again last night, so I could put in a (very low in case of gluten) carb estimate, match my noodles, eat pasta, and let the SMBs ramp up to match absorption. It works very well for me.

Whether you have celiac or not, for many reasons (insert yours here), it’s nice to not to have to commit to the bolus up front. It’s closer to approaching what I think non-PWDs do at mealtimes: just eat.

(I haven’t done much testing (yet? TBD) of the no-carb-entry, no-meal-bolus scenario. I expect I would have higher spikes, but it would be interesting to see if BG would still come down reasonably fast. It probably wouldn’t be my go-to strategy, because I don’t mind a one-button general meal-size estimate, but it would be nice to know what that curve shape looks like. If I test it, I’ll start with small snacks and ramp my way up.)

The questions I always get:

  1. Q: HOW DO I GET THIS?
    A: Caution: like all things OpenAPS, but especially true for the development branch, 0.6.0 is NOT released to master yet and is still highly experimental. I wouldn’t install dev unless you want to pay lots of close attention to it and are willing to update multiple times over the course of the week, because Scott and I are merging features and tweaks to it almost daily.

    Got the disclaimers down? Ok. It’s in the dev branch of oref0. You should read this PR with notes on some more detail of what’s included, but you should also review the code diff to see all that’s changed, because it’s not all documented yet. Also, follow the instructions at the bottom to be able to install it without git. Hop into Gitter if you have questions about it!

    (Big huge thanks to folks like Tim and Matthias for early testing of 0.6.0; and to Tim for writing up about the initial rounds of 0.6.0-dev here (note that we’ve made further changes since this post), and others who’ve been testing & providing feedback and input into the dev branch!)

  2. Q: When will this get “released” to master?
    A: It depends. This is still a highly active dev branch, and we’re making a lot of changes and tweaking and testing things. The more people who test now and provide feedback, the sooner we can get to the final “prepare for release” testing stage. Lots and lots of testing; things depend on how much of what exists needs tweaking, and on what else we decide should go with this release. So, there’s never any specific release date.

  3. Q: What is Fiasp?
    A: A faster acting insulin that was only approved in Europe and Canada…until today. Convenient timing. I asked a PR person who messaged me about it, and they said it’s estimated to be available in U.S. pharmacies by late December/early Q1. As previously stated, it’s already available elsewhere in other parts of the world.

    Fiasp peaks sooner (say, ~45 minutes) with the same tail as everything else. It’s not instantaneous. For your million and one questions about whether it’s approved for your use in a tree, on a plane, at the zoo, and all other extrapolations – please ask Google/your doctor/the manufacturer, and not me. I don’t know. :)

  4. Q: Will any of this work for people NOT on Fiasp?
    A: Nothing is guaranteed (even for other people on Fiasp), but the folks who’ve started testing 0.6.0 even without Fiasp (on Humalog or Novolog/Novorapid, etc.) have been happier on it vs. earlier versions, too.

    I don’t expect Fiasp to work super well forever for me, given what I’ve heard from other people with months of experience on it…and given my first two weeks of Fiasp not being spectacular, I want people to not expect miracles. (Sorry, this blog post does not promise miracles, so sorry if you got super excited at the above. No miracles! This is not a cure! We still have diabetes!) Like all things artificial pancreas, I think it’s better to be cautiously hopeful with realistic expectations that things *might* be a little bit better than before, but as always, YDMV (your diabetes may/will always vary), your body will vary, and life happens, etc. so who knows.

Just 4 months ago, we published a blog post pointing out that the new features had allowed us to achieve 4 out of 5 of: no bolus; no carb counting; medium/high carb meals; 80%+ time in range; and no hypoglycemia. With Fiasp and 0.6.0 (currently what’s in the dev branch), we’ve now achieved all 5 simultaneously: I can eat large high-carb meals, enter very vague guesstimates of 60 or 90 carbs (no need for actual carb counting, just a general size-based meal announcement), and still achieve 80%+ time in range (70-150 mg/dL) without ever going <55 mg/dL. Does that mean that OpenAPS with Fiasp finally meets the definition of a “real” Artificial Pancreas (step 5 on JDRF’s 6-step AP development pathway)? We think it does.

So, tl;dr (because long post is long): with Fiasp and the 0.6.0-dev branch, I’m able to not bolus for meals, and just enter a very general meal-size estimate. It’s working well for me, and like all things, we’re working to make it available via OpenAPS for others who want to try similar features/approaches. It may not work well for everyone. If it helps one other person, though, like everything else, it’ll be worth it. Big thanks to Scott for LOTS of development in 0.6.0 and partnership in the design of these features; to too many people to name for testing and providing feedback and helping iterate on these features; and to the entire community for being awesome and helping us continue to push the envelope on what might be possible for those of us with type 1 diabetes. :)

Why a non-academic (patient) publishes in academic journals

Today I was able to share that my Letter to the Editor was published in the Journal of Diabetes Science and Technology. It’s on why we need to set expectations to help patients successfully adopt hybrid closed loop/artificial pancreas/automated insulin delivery system technology. (You can read it via image copies in the first link.)

[Image: screenshot of the Letter to the Editor in JDST]

I’ve published a few times in academic journals. Last year, Scott and I published another Letter to the Editor in JDST with the OpenAPS outcomes study we had presented at the 2016 ADA Scientific Sessions conference.

But, I’m sure people are wondering why I choose to do so – especially as I am 1) a patient and 2) a non-academic. (Although in case you missed it – I’m now the Principal Investigator on a grant-funded study!)

While there are many healthcare providers, researchers, industry employees, FDA staff, etc. who read blogs like this and are up to speed on the bleeding edge of diabetes technology… there are easily 10x the number that do not.

And if they don’t know about the existence of this world, they won’t know about the valuable lessons we’re learning and won’t be able to share those lessons and knowledge with other healthcare providers and the patients that they treat.

So, in my pursuit to find more ways to share knowledge from our community with the rest of the diabetes community, this is why we submit abstracts for posters and presentations to conferences like ADA’s Scientific Sessions. Our abstracts are evaluated just like the abstracts from traditional healthcare providers (as far as they can tell, I’m just another academic, albeit one with fewer credentials ;)), and I’m proud that they’re evaluated and deemed worthy of poster presentations alongside mainstream researchers. Ditto for our written publications, whether they be letters to the editor or other types of articles submitted to journals and publications.

We need to find more ways to share and distribute knowledge with the “traditional” medical and academic research world. And I’d love to do more – so please share ideas if you have them. And if you’re someone who bridges the gap to the traditional world, I appreciate your help sharing these types of articles and conversations with your colleagues.

Opening pathways for discovery, research, and innovation in health and healthcare

How can we get more patients and other communities to leverage the benefits of the #WeAreNotWaiting mindset for research, development, and innovation in health (and healthcare)?

That’s a question I’ve been asking myself for two years, after seeing the diverse efforts and valuable outpourings from the DIY diabetes community (ranging from amazing remote monitoring solutions for CGM to algorithms, hardware, and other software for automated insulin delivery systems).

But, how to scale? In diabetes, we’re perhaps uniquely positioned given our data-driven disease. However, I believe that the data and innovation approach we’ve taken in diabetes can help many other types of patient communities as well. I just didn’t know how to help scale it… until recently.

When a group of us from the OpenAPS community participated in the Quantified Self Public Health Symposium in 2016, it prompted some follow-up conversations with various academic researchers, including Eric Hekler from Arizona State University (ASU).

Eric started a conversation, and kept asking me: What could you do if you partnered with academic researchers? How can traditional researchers help the DIY community, OpenAPS or otherwise?

That also sparked a conversation with Paul Tarini, a senior program officer at the Robert Wood Johnson Foundation (RWJF), about potential funding for a project.

(Important to state here: OpenAPS itself is not a funded project. It has not been, and will not be. It is 100% DIY, non-commercial, and it has been built by a community of volunteers.)

What I wanted to talk to RWJF about was funding a collaboration with academic researchers for studying data and innovation coming out of the community; and to ultimately identify needs and build resources to help scale this type of community effort and empower other patient communities as well.

It took over a year, but we were able to work through initial project proposals and were then invited to submit a full proposal. And on Wednesday (September 6, 2017), I found out that we have been awarded the grant, and this project work will be funded by the Robert Wood Johnson Foundation. The project officially begins on September 15 and will run for 18 months.

So what exactly is this project?

Our project is titled “Learning to not wait: Opening pathways for discovery, research, and innovation in health and healthcare.”

It entails a number of things.

    1. We are creating an on-call data science team to support research in the DIY community. More details will be forthcoming, but essentially this team is there to help do research on the myriad of questions bubbling out of the community. For example – how does sensitivity change during growth spurts, during periods of inactivity, or when changing insulin types? What are some of the most successful mealtime insulin dosing strategies? Etc. People will be able to submit ideas, and get help formulating the idea into a researchable question, and get the research done.
    2. Studying the process of research when done by patients, and the barriers they and their research run into when spreading this scientific knowledge. I personally know there are a lot of barriers, but we need to document them and find solutions. (There is a lot of prejudice, and there are perceived stigmas, toward patient researchers doing this type of scientific work, around things like the quality of the research, the methods of distributing knowledge, etc.)
    3. Convening a meeting with patients, traditional researchers, legal experts, and others in this innovative research space to discuss and address some of the known and newly surfacing barriers to this type of research. I envision a white-paper-type publication coming out of this meeting to document the lay of the land as it is.
    4. Creating toolkit-type resources, based on what we’ve learned and are learning in this project, to help patients who are new to DIY and this type of research take on various levels of research or innovation activity. Part of our project’s scope of work, in #WeAreNotWaiting spirit, includes beta testing with 2-3 other patient communities, so we can get feedback, iterate, and roll these out as quickly as possible.

Our project has a couple of principles that I feel strongly about, and am also very proud of in approaching this body of work.

  • I am the scientific Principal Investigator of this project. This is unique in the world of grant-funded research: a patient driving the scientific discovery process. (I’m proud and very appreciative to have two amazing co-PIs who are helping with some of the administrative work, since the grant is being administered through the Arizona State University Foundation, which is being an awesome partner given the uniqueness of this situation*.) My co-PIs are Eric Hekler and Erik Johnston. The other members of the team include John Harlow, who’s a MacArthur Foundation Postdoctoral Fellow; Sayali Phatak, a PhD student at ASU; and Keren Hirsch from the ASU Decision Theater.
  • #WeAreNotWaiting is the mantra for this project and our entire team. We plan to be as efficient as possible in doing the project work, which includes being as timely as possible with sharing findings back with the community as soon as they’re ready (a given; there’s no reason to wait) as well as finding ways to publish that are faster than the very traditional academic publishing process, and being thoughtful about the right audiences outside the patient community for communicating about this project’s work.
  • Always asking why. As a brand new PI, I have a lot to learn. But as a non-traditional PI, I’m also running into a lot of things that are done a certain way simply because that’s how they’d be done if I were inside a traditional organization. I plan to explore and challenge as many of these as I can, and to document the decisions I make in this project as I come to those forks in the road. In some cases, I choose the easier path because, for my project/work/focus, it doesn’t matter. In other cases, based on principle, I choose the harder, path-blazing approach.

* About the uniqueness of this project and the administrative details

Since I’m an individual patient researcher, not affiliated with an organization, we decided to make the Arizona State University Foundation the official grantee financial organization, since that’s where my co-PIs were. But true to the nature of this project, I want to document the challenges and opportunities that come with that, so more to come about the lessons learned from putting together the proposal and from the grant approval process once we heard the grant would be awarded. That way, future patient researchers will have a leg up on what’s coming when taking on this type of project and will be aware of what this approach entailed. The short version is that I am a subcontractor to ASU for the purposes of the grant, but am not employed by or otherwise affiliated with ASU. Props to the many people at ASU who learned about me and this project in the approval process and rolled with it / helped make it happen.

So, what’s next? When do you start? What are you waiting on?!

Coming super soon – a project website with more details about this project.

For my fellow PWDs:

  • Stay tuned for the project website going live, which will also include more details about how individuals in the diabetes community can pitch ideas/get started working with the on-call data science team.

For patients reading this who are members of other patient disease communities:

  • Ping me if you’re SUPER excited and can’t wait to tell me :), or stay tuned for more info about the process for proposing that your patient community be one of the communities with whom we beta test some of the tools/resources developed toward the latter phases of this project.

If you’re someone else who’s interested in this work (such as a legal expert, other researcher, etc.):

  • Also ping me if you’re interested in hearing more about the meeting we plan to convene with a small multidisciplinary group to discuss and address barriers to patient-driven research. Even if we can’t get everyone interested to attend the in-person meeting, I would still love your input and collaboration on the white paper and/or other publications and intersections with this project.

For everyone else:

  • Please do let me know if there’s a particular aspect of this project that you’re curious to learn more about – whether it’s some of what I’m facing and documenting as a patient PI researcher, or otherwise. That’ll help me prioritize some of the blog posts and articles I’m writing about this process!

Thanks to everyone who managed to read this ginormous blog post.

I am incredibly excited about this project, and about having resources to focus on how patients and non-traditional actors in healthcare can drive research, development, innovation, and knowledge sharing through non-traditional methods and from the ground up, and in doing so help prioritize and change the healthcare research agenda. Like my work on OpenAPS, which stands on the shoulders of so many, I’m hoping this project is the first of many, and that it gets to a place where others can leverage this work and take it beyond the scope of what we’ve all imagined is currently possible.

A huge thanks to the team partnering with me on this work; to ASU for being a great partner as an organization; to the Robert Wood Johnson Foundation for supporting this project (and in particular to our program manager, Paul Tarini, for his ongoing support throughout this entire process); and many extra thanks to Scott and all my family and friends for supporting me throughout the proposal process and being the recipients of some VERY excited and !!! filled texts when I found out we had officially been awarded the grant for this project.

Unexpected side-effect of closed looping: Body re-calibrations

It’s fascinating how bodies adapt to changing situations.

For those of us with diabetes: do you remember the first time you took insulin after diagnosis? For me, I had been fasting for ~18 hours (because I felt so bad, and hadn’t eaten anything since dinner the night before) and drinking water, and my BG was still somehow 550+ at the endo’s office.

Water did nothing for my unquenchable thirst, but that first shot of insulin sure did.

I still remember the vivid feeling of it being an internal liquid hydration for my body, and everything feeling SO different when it started kicking in.

In case the BG of 550+, the A1c of 14+ (I don’t remember the exact number), and feeling terrible for weeks weren’t enough, that’s one of the things that really reinforced that I have diabetes, and that insulin is something my body desperately needs but wasn’t getting.

Over the last ~14+ years, I’ve had a handful of times that reinforced the feeling of being dependent on this life-saving drug, and the drastic difference I feel with and without it. Usually, it’s been times when a pump site ripped out, or when I was sick, high, and highly insulin resistant, and then the resistance finally eased: after hours of being really high, my blood sugar finally started responding to insulin, and I started dropping.

But I’ve had different ways to experience this feeling lately, as a result of having lived with a DIY closed loop (OpenAPS) for 2+ years – and it hasn’t involved anything as drastic as a HIGH BG or an equipment failure. It’s a result of my body re-calibrating to the new norm of spending more and more time close to 100% in range, in a much tighter and lower range than I ever thought possible (especially now, with some of the flexibility and freedom oref1 offers).

I originally had a fleeting thought about how BGs in the low 200s had started to feel the way the 300s used to. Then, I realized that 180 felt “high”. One day, it was 160.

Then one day, my CGM said I was flat in the 120s and I felt “high”. (I calibrated, and it turned out I was really 140.) I’ve had several other days where I’d hit the 140s and feel like I used to in the mid-200s (slightly high and annoying, but with no major high symptoms like 300-400 would cause – just enough to feel it and be annoyed).

That was odd enough as a fleeting thought, but it was really odd to wake up one morning and, without even looking at my watch or CGM to see what my BGs had been all night, know that I had been running high.

I further classified “really odd” as “completely crazy” when that “running high” meant floating around the 130-140 range, instead of down in the 90-110 range, which is where I probably spend 95% of my nights nowadays.

Last night is what triggered this blog post, along with a recurring observation: because I have a DIY closed loop that does such a good job of handling the small, unknown variances that disturb BG levels, without me having to do much work, it is MUCH easier to pinpoint major influences, like my liver dumping glucose (either because of a low, or because it’s ‘full up’ and needs to get rid of the excess).

In last night’s case, it was a major liver dump of glucose.

Here’s what happened:

Scott and I went on a long walk, with the plan to stop for dinner on the way home. My BG started dropping as I was about half a mile out from the restaurant, but I’m stubborn 😀 and didn’t want to eat a fruit strip when I was about to sit down and eat a burger. So, my BG was dropping low when I actually ate. I expected my BG to flatten on its own, given the pause in activity, so I bolused fairly normally for my burger, and we walked the last half mile home.

However, I ended up not rising from the burger like I usually do, and started dropping again. It was quite a drop, and I realized my burger digestion was different because of the previous low, so I ended up eating some fruit to handle the second low. My body was unhappy about two lows, so my liver decided to save the day by dumping a bunch of glucose to help bring my blood sugar up. The result was a double rebound, from the liver dump and from the fruit I had eaten. Oh well, that’s what a closed loop is for!

Instead of rebounding into the high 300s (which I would have expected pre-closed loop), I maxed out at 220. The closed loop did a good job of bolusing on the way up. However, because of how much glucose my liver dumped, I stayed higher longer. (Again, this probably sounds crazy to anyone not looping, as it would have sounded to me before I began looping). I sat around 180 for the first three hours of the night, and then dropped down to ~160 for most of the rest of the night, and ended up waking up around 130.

And boy, did I know I had been high all night. I felt (and still feel, hours later) like I used to years ago when I would wake up in the 300s (or higher).

Visuals

(3-hour view) Hmm, 3 hours doesn’t look so bad despite feeling it.

(6-hour view) The 6-hour view shows why I feel it.

(12-hour view) 12 hours. Sheesh.

(24-hour view) 24 hours shows you the full view of the double low and why my liver decided I needed some help. Thanks, liver, for still being able to help if I really needed it!

(Pebble watch view) Settling back to normal below 120, hours later.

There are SO many amazing things about DIY closed looping. Better A1c, better average BG, better time in range, less effort, less work, less worrying, more sleep, more time living your life.

Alongside those benefits, though, comes a bit of a double-edged sword: your body also re-calibrates to the new “normal”, and that means the occasional extreme BG excursion (even if not that extreme!) may give you a different range of symptoms than you used to experience.

This. Matters. (Why I continue to work on #OpenAPS, for myself and for others)

If you give a mouse a cookie or give a patient their data, great things will happen.

First, it was louder CGM alarms and predictive alerts (#DIYPS).

Next, it was a basic hybrid closed loop artificial pancreas that we open sourced so other people could build one if they wanted to (#OpenAPS, with the oref0 basic algorithm).

Then, it was all kinds of nifty lessons learned about timing insulin activity optimally (doing “eating soon” mode around an hour before a meal) and how to use things like IFTTT integration to squash even the tiniest (like from 100 mg/dL to 140 mg/dL) predictable rises.
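(For anyone curious what that IFTTT piece can look like under the hood, here’s a rough, hypothetical sketch – not my exact setup – of posting an “eating soon” temporary target to Nightscout so a rig can act on it. The URL, secret, and field names are placeholder assumptions and will vary by Nightscout version and configuration.)

```python
# Hypothetical sketch (not my exact setup): an IFTTT webhook action could call a
# small script like this to post an "eating soon" temporary target to Nightscout,
# which a rig can then act on. The URL, secret, and field names below are
# placeholder assumptions and vary by Nightscout version/configuration.
import hashlib
import requests

NIGHTSCOUT_URL = "https://example-nightscout-site.example.com"  # placeholder
API_SECRET = "my-nightscout-secret"                             # placeholder

def set_eating_soon_target(low=80, high=80, duration_minutes=60):
    """Post a 'Temporary Target' treatment about an hour before eating."""
    treatment = {
        "eventType": "Temporary Target",
        "targetBottom": low,           # mg/dL
        "targetTop": high,             # mg/dL
        "duration": duration_minutes,  # minutes
        "reason": "eating soon",
        "enteredBy": "ifttt-sketch",
    }
    headers = {
        # Many Nightscout setups expect the SHA-1 hash of the API secret here.
        "api-secret": hashlib.sha1(API_SECRET.encode()).hexdigest(),
        "Content-Type": "application/json",
    }
    resp = requests.post(f"{NIGHTSCOUT_URL}/api/v1/treatments.json",
                         json=treatment, headers=headers, timeout=10)
    resp.raise_for_status()

if __name__ == "__main__":
    set_eating_soon_target()
```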

It was also things like displays, buttons, and widgets on the devices of my choice – ranging from being able to “text” my pancreas, to a swipe and button tap on my phone, to a button press on my watch – not to mention tinier-sized pancreases that fit in, or clip easily to, a pocket.

Then it was autosensitivity that enabled the system to adjust to my changing circumstances (like getting a norovirus), plus autotune to make sure my baseline pump settings were where they needed to be.

And now, it’s oref1 features that enable me to make different choices at every meal depending on the social situation and what I feel like doing, while still getting good outcomes. Actually, not good outcomes. GREAT outcomes.

With oref0 and OpenAPS, I’d been getting good or really good outcomes for 2 years. But it wasn’t perfect – I wasn’t routinely getting 100% time in range with a lower-end-of-range BG for a 24-hour average; ~90% time in range was more common. (Note – this time in range is generally calculated against 80-160 mg/dL. I could easily “get” a higher time in range with an 80-180 mg/dL target, or a lot higher with a 70-170 mg/dL target, but 80-160 mg/dL was what I was actually shooting for, so that’s what I calculate for myself.) I was fairly happy with my average BGs, but they could have been slightly better.
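(If you’re curious about the arithmetic behind those percentages, here’s a minimal sketch, assuming a simple list of timestamped mg/dL readings. It’s illustrative only, not the exact tooling I use, but it shows how the band you choose changes the number.)

```python
# Minimal sketch of the time-in-range arithmetic, assuming CGM data is available
# as a list of (timestamp, mg/dL) readings at roughly 5-minute intervals.
# This is illustrative only, not the exact tooling I use.
from datetime import datetime

def time_in_range(readings, low=80, high=160):
    """Return the percentage of readings with low <= bg <= high (mg/dL)."""
    values = [bg for _, bg in readings if bg is not None]
    if not values:
        return 0.0
    in_range = sum(1 for bg in values if low <= bg <= high)
    return 100.0 * in_range / len(values)

# Fake 24 hours of 5-minute readings, just to show how the chosen band changes the number.
day = [(datetime(2017, 6, 1, (5 * i) // 60, (5 * i) % 60), 100 + (i % 25) * 5)
       for i in range(288)]
print(round(time_in_range(day, 80, 160), 1))  # the 80-160 mg/dL band I actually shoot for
print(round(time_in_range(day, 80, 180), 1))  # the same data scores "better" with a wider band
```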

I wrote from a general perspective this week about being able to “choose one” thing to give up. And oref1 is a definite game changer for this.

  • It’s being able to put in a carb estimate and do a single, partial bolus, and see your BG go from 90 to a peak of 130 mg/dL despite a large-carb (and purely ballpark-estimated) meal. And no later rise or drop, either.
  • It’s now seeing multiple days a week with a 24-hour average BG ~10 points lower than I’m used to regularly seeing – and multiple days a week with a full 100% time in range (for 80-160 mg/dL), and otherwise being really darn close to 100% way more often than I’ve been before.

But I have to tell you – seeing is believing, even more than the numbers show.

I remember in the early days of #DIYPS and #OpenAPS, there were a lot of people saying “well, that’s you”. But it’s not just me. See Tim’s take on “changing the habits of a lifetime”. See Katie’s parent perspective on how much her daily interactions and interventions have lessened while testing SMB.

See this quote from Matthias, an early tester of oref1:

I was pretty happy with my 5.8% from a couple months of SMB, which has included the 2 worst months of eating habits in years.  It almost feels like a break from diabetes, even though I’m still checking hourly to make sure everything is connected and working etc and periodically glancing to see if I need to do anything.  So much of the burden of tight control has been lifted, and I can’t even do a decent job explaining the feeling to family.

And another note from Katie, who started testing SMB and oref1:

We used to battle 220s at this time of day (showing a picture flat at 109). Four basal rates in morning. Extra bolus while leaving house. Several text messages before second class of day would be over. Crazy amount of work [in the morning]. Now I just have to brush my teeth.

And this, too:

I don’t know if I’ve ever gone 24 hours without ANY mention of something that was because of diabetes to (my child).

Y’all. This stuff matters. Diabetes is SO much more than the math – it’s the countless seconds that add up and subtract from our focus on school/work/life. And diabetes is taking away this time not just from a person with diabetes, but from our parents/spouses/siblings/children/loved ones. It’s a burden, it’s stressful…and everything we can do matters for improving quality of life. It brings me to tears every time someone posts about these types of transformative experiences, because it’s yet another reminder that this work makes a real difference in the real lives of real people. (And it’s helpful for Scott to hear this type of feedback, too – since he doesn’t have diabetes himself, it’s powerful for him to see how his code contributions and the features we’re designing and building are making a difference to much more than just BG outcomes.)

Thank you to everyone who keeps paying it forward to help others, and to all of you who share your stories and feedback to help and encourage us to keep making things better for everyone.