Why a non-academic (patient) publishes in academic journals

Today I was able to share that my Letter to the Editor was published in the Journal of Diabetes Science and Technology. It’s on why we need to set expectations to help patients successfully adopt hybrid closed loop/artificial pancreas/automated insulin delivery system technology. (You can read it via image copies in the first link.)

I’ve published a few times in academic journals. Last year, Scott and I published another Letter to the Editor in JDST with the OpenAPS outcomes study we had presented at the 2016 ADA Scientific Sessions conference.

But, I’m sure people are wondering why I choose to do so – especially as I am 1) a patient and 2) a non-academic. (Although in case you missed it – I’m now the Principal Investigator on a grant-funded study!)

While there are many healthcare providers, researchers, industry employees, FDA staff, etc. who read blogs like this and are up to speed on the bleeding edge of diabetes technology… there are easily 10x the number that do not.

And if they don’t know about the existence of this world, they won’t know about the valuable lessons we’re learning and won’t be able to share those lessons and knowledge with other healthcare providers and the patients that they treat.

So, in my pursuit to find more ways to share knowledge from our community with the rest of the diabetes community, we submit abstracts for posters and presentations to conferences like ADA’s Scientific Sessions. Our abstracts are evaluated just like the abstracts from traditional healthcare providers (as far as they can tell, I’m just another academic, albeit one with fewer credentials ;)), and I’m proud that they’re evaluated and deemed worthy of poster presentations alongside mainstream researchers. Ditto for our written publications, whether they be letters to the editor or other types of articles submitted to journals and publications.

We need to find more ways to share and distribute knowledge with the “traditional” medical and academic research world. And I’d love to do more – so please share ideas if you have them. And if you’re someone who bridges the gap to the traditional world, I appreciate your help sharing these types of articles and conversations with your colleagues.

Opening pathways for discovery, research, and innovation in health and healthcare

How can we get more patients and other communities to leverage the benefits of the #WeAreNotWaiting mindset for research, development, and innovation in health (and healthcare)?

That’s a question I’ve been asking myself for two years, after seeing the diverse efforts and valuable outpourings from the DIY diabetes community (ranging from amazing remote monitoring solutions for CGM to algorithms, hardware, and other software for automated insulin delivery systems).

But, how to scale? In diabetes, we’re perhaps uniquely positioned given our data-driven disease. However, I believe that the data and innovation approach we’ve taken in diabetes can help many other types of patient communities as well. I just didn’t know how to help scale it… until recently.

When a group of us from the OpenAPS community participated in the Quantified Self Public Health Symposium in 2016, it prompted some follow-up conversations with various academic researchers, including Eric Hekler from Arizona State University (ASU).

Eric started a conversation, and kept asking me: What could you do if you partnered with academic researchers? How can traditional researchers help the DIY community, OpenAPS or otherwise?

That also sparked a conversation with Paul Tarini, a senior program officer at the Robert Wood Johnson Foundation (RWJF), about potential funding for a project.

(Important to state here: OpenAPS itself is not a funded project. It has not been, and will not be. It is 100% DIY, non-commercial, and it has been built by a community of volunteers.)

What I wanted to talk to RWJF about was funding a collaboration with academic researchers for studying data and innovation coming out of the community; and to ultimately identify needs and build resources to help scale this type of community effort and empower other patient communities as well.

It took over a year, but we were able to work through initial project proposals and were then invited to submit a full proposal. And on Wednesday (September 6, 2017), I found out that we have been awarded the grant, and this project work will be funded by the Robert Wood Johnson Foundation. The project officially begins on September 15 and will run for 18 months.

So what exactly is this project?

Our project is titled “Learning to not wait: Opening pathways for discovery, research, and innovation in health and healthcare.”

It entails a number of things.

    1. We are creating an on-call data science team to support research in the DIY community. More details will be forthcoming, but essentially this team is there to help do research on the myriad of questions bubbling out of the community. For example – how does sensitivity change during growth spurts, during periods of inactivity, or when changing insulin types? What are some of the most successful mealtime insulin dosing strategies? Etc. People will be able to submit ideas, and get help formulating the idea into a researchable question, and get the research done.
    2. Studying the process of research when done by patients, and the barriers they and their research run into when spreading this scientific knowledge. I personally know there are a lot of barriers, but we need to document them and find solutions. (There is a lot of prejudice, and there are perceived stigmas, toward patient researchers doing this type of scientific work, around things like quality of research, methods of distributing knowledge, etc.)
    3. Convening a meeting with patients, traditional researchers, legal experts, and others in this innovative research space to discuss and address some of the known (and newly identified) barriers to this type of research. I envision a white paper-type publication coming out of this meeting to document the lay of the land as it is.
    4. Creating toolkit-type resources based on what we’ve learned and are learning in this project for helping patients new to DIY and this type of research take on various levels of research or innovation activity. Part of our project’s scope of work, in #WeAreNotWaiting spirit, includes beta testing with 2-3 other patient communities, so we can get feedback and iterate and roll these out as quickly as possible.

Our project has a couple of principles that I feel strongly about, and am also very proud of in approaching this body of work.

  • I am the scientific Principal Investigator of this project. This is unique in the world of grant-funded research, where a patient is driving the scientific discovery process. (I’m proud and very appreciative to have two amazing co-PIs who are helping with some of the administrative work, since the grant is being administered through the Arizona State University Foundation, which has been an awesome partner given the uniqueness of this situation*.) My co-PIs are Eric Hekler and Erik Johnston. The other members of the team include John Harlow, who is a MacArthur Foundation Postdoctoral Fellow; Sayali Phatak, a PhD student at ASU; and Keren Hirsch from the ASU Decision Theater.
  • #WeAreNotWaiting is the mantra for this project and our entire team. We plan to be as efficient as possible in doing the project work, which includes being as timely as possible with sharing findings back with the community as soon as they’re ready (a given; there’s no reason to wait) as well as finding ways to publish that are faster than the very traditional academic publishing process, and being thoughtful about the right audiences outside the patient community for communicating about this project’s work.
  • Always asking why. As a brand new PI, I have a lot to learn. But as a non-traditional PI, I am also running into a lot of things that are done the way they would be done if I were working inside a traditional organization. I plan to explore and challenge as many of these as I can, and to document the decisions I make in this project as I come to those forks in the road. In some cases, I choose the easier path because, for my project/work/focus, it does not matter. In other cases, based on principle, I choose the harder, path-blazing approach.

* About the uniqueness of this project and the administrative details

Since I’m an individual patient researcher, not affiliated with an institution, we decided to make the Arizona State University Foundation the official grantee financial organization, since that’s where my co-PIs were. But true to the nature of this project, I want to document the challenges and opportunities that come with that, so there’s more to come about all the interesting lessons learned about putting together the proposal, and about the grant approval process once we heard the grant would be awarded. That way, future patient researchers have a leg up on what is coming when taking on this type of project and are aware of what this approach entailed. The short version is that I am a subcontractor to ASU for purposes of the grant, but am not employed by or otherwise affiliated with ASU. Props to the many people at ASU who learned about me and this project in the approval process and rolled with it / helped make it happen.

So, what’s next? When do you start? What are you waiting on?!

Coming super soon – a project website (now here) with more details about this project.

For my fellow PWDs:

  • Stay tuned for the project website going live, which will also include more details about how individuals in the diabetes community can pitch ideas/get started working with the on-call data science team.

For patients reading this who are members of other patient disease communities:

  • Ping me if you’re SUPER excited and can’t wait to tell me :), or stay tuned for more info about the process for proposing that your patient community be one of the communities with whom we beta test some of the tools/resources developed toward the latter phases of this project.

If you’re someone else who’s interested in this work (such as a legal expert, other researcher, etc.):

  • Also ping me if you’re interested in hearing more about the meeting we plan to convene with a small multidisciplinary group to discuss and address barriers to patient-driven research. Even if we can’t get everyone interested to attend the in-person meeting, I would still love your input and collaboration for the white paper and/or other publications and intersections with this project.

For everyone else:

  • Please do let me know if there’s a particular aspect of this project that you’re curious to learn more about – whether it’s some of what I’m facing and documenting as a patient PI researcher, or otherwise. That’ll help me prioritize some of the blog posts and articles I’m writing about this process!

Thanks to everyone who managed to read this ginormous blog post.

I am incredibly excited about the project, and about having resources to focus on how patients and non-traditional actors in healthcare can drive research, development, innovation, and knowledge sharing through non-traditional methods and from the ground up, plus prioritize and change the healthcare research agenda. Like my work in OpenAPS, which stands on the shoulders of so many, I’m hoping this project is the first of many, and that it gets to a place where others can leverage this work and take it beyond the scope of what we’ve all imagined is currently possible.

A huge thanks to the team partnering with me on this work; to ASU for being a great partner as an organization; to the Robert Wood Johnson Foundation for supporting this project (and in particular to our program manager, Paul Tarini, for his ongoing support throughout this entire process); and many extra thanks to Scott and all my family and friends for supporting me throughout the proposal process and being the recipients of some VERY excited and !!! filled texts when I found out we had officially been awarded the grant for this project.

Unexpected side-effect of closed looping: Body re-calibrations

It’s fascinating how bodies adapt to changing situations.

For those of us with diabetes: do you remember the first time you took insulin after diagnosis? For me, I had been fasting for ~18 hours (because I felt so bad, and hadn’t eaten anything since dinner the night before) and drinking water, and my BG was still somehow 550+ at the endo’s office.

Water did nothing for my unquenchable thirst, but that first shot of insulin sure did.

I still remember the vivid feeling of it being an internal liquid hydration for my body, and everything feeling SO different when it started kicking in.

In case the BG of 550+, the A1c of 14+ (I don’t remember the exact number), and feeling terrible for weeks weren’t enough, that’s one of the things that really reinforced that I have diabetes and that insulin is something my body desperately needs but wasn’t getting.

Over the last ~14+ years, I’ve had a handful of times that reinforced the feeling of being dependent on this life-saving drug, and the drastic difference I feel with and without it. Usually, it’s been times when a pump site ripped out, or when I was sick, high, and highly insulin resistant, and then finally stopped being as resistant, so that after hours of being really high my blood sugar started responding to insulin and I started dropping.

But I’ve had different ways to experience this feeling lately, as a result of having lived with a DIY closed loop (OpenAPS) for 2+ years – and it hasn’t involved anything as drastic as a HIGH BG or equipment failure. It’s a result of my body re-calibrating to the new norm of being able to spend more and more time close to 100% in range, in a much tighter and lower range than I ever thought possible (especially now true with some of the flexibility and freedom oref1 now offers).

I originally had a brief, fleeting thought about how BGs in the low 200s had started to feel like the 300s used to. Then, I realized that 180 felt “high”. One day, it was 160.

Then one day, my CGM was flat in the 120s and I felt “high”. (I calibrated, and it turned out it was really 140.) I’ve had several other days where I’d hit the 140s and feel like I used to in the mid-200s (slightly high and annoying, but with no major high symptoms like 300-400 would cause – just enough to feel it and be annoyed).

That was odd enough as a fleeting thought, but it was really odd to wake up one morning and, without even looking at my watch or CGM to see what my BGs had been all night, know that I had been running high.

I further classified “really odd” as “completely crazy” when that “running high” meant floating around the 130-140 range, instead of down in the 90-110 range, which is where I probably spend 95% of my nights nowadays.

Last night is what triggered this blog post, along with a recurring observation: because I have a DIY closed loop that does so well at handling the small, unknown variances that cause disturbances in BG levels without me having to do much work, it is MUCH easier to pinpoint major influences, like my liver dumping glucose (either because of a low or because it’s ‘full up’ and needs to get rid of the excess).

In last night’s case, it was a major liver dump of glucose.

Here’s what happened:

Scott and I went on a long walk, with the plan to stop for dinner on the way home. BG started dropping as I was about half a mile out from the restaurant, but I’m stubborn 😀 and didn’t want to eat a fruit strip when I was about to sit down and eat a burger. So, my BG was dropping low when I actually ate. I expected my BG to flatten on its own, given the pause in activity, so I bolused fairly normally for my burger, and we walked the last half mile home.

However, I ended up not rising from the burger like I usually do, and started dropping again. It was quite a drop, and I realized my burger digestion was different because of the previous low, so I ended up eating some fruit to handle the second low. My body was unhappy about two lows, so my liver decided to save the day by dumping a bunch of glucose to help bring my blood sugar up. A double rebound effect, then, from the liver dump and the fruit I had eaten. Oh well, that’s what a closed loop is for!

Instead of rebounding into the high 300s (which I would have expected pre-closed loop), I maxed out at 220. The closed loop did a good job of bolusing on the way up. However, because of how much glucose my liver dumped, I stayed higher longer. (Again, this probably sounds crazy to anyone not looping, as it would have sounded to me before I began looping). I sat around 180 for the first three hours of the night, and then dropped down to ~160 for most of the rest of the night, and ended up waking up around 130.

And boy, did I know I had been high all night. I felt (and still feel, hours later) like I used to years ago when I would wake up in the 300s (or higher).

Visuals

Hmm, 3 hours doesn’t look so bad despite feeling it.

The 6 hour view shows why I feel it.

12 hours. Sheesh.

24 hours shows you the full view of the double low and why my liver decided I needed some help. Thanks, liver, for still being able to help if I really needed it!

Settling back to normal below 120, hours later.

There are SO many amazing things about DIY closed looping. Better A1c, better average BG, better time in range, less effort, less work, less worrying, more sleep, more time living your life.

One of the benefits, though, is this bit of double-edged sword: your body also re-calibrates to the new “normal”, and that means the occasional extreme BG excursion (even if not that extreme!) may give you a different range of symptoms than you used to experience.

This. Matters. (Why I continue to work on #OpenAPS, for myself and for others)

If you give a mouse a cookie or give a patient their data, great things will happen.

First, it was louder CGM alarms and predictive alerts (#DIYPS).

Next, it was a basic hybrid closed loop artificial pancreas that we open sourced so other people could build one if they wanted to (#OpenAPS, with the oref0 basic algorithm).

Then, it was all kinds of nifty lessons learned about timing insulin activity optimally (do eating soon mode around an hour before a meal) and how to use things like IFTTT integration to squash even the tiniest (like from 100mg/dL to 140mg/dL) predictable rises.

It was also things like displays, buttons, and widgets on the devices of my choice – ranging from being able to “text” my pancreas, to a swipe and button tap on my phone, to a button press on my watch – not to mention tinier-sized pancreases that fit in or clip easily to a pocket.

Then it was autosensitivity that enabled the system to adjust to my changing circumstances (like getting a norovirus), plus autotune to make sure my baseline pump settings were where they needed to be.

And now, it’s oref1 features that enable me to make different choices at every meal depending on the social situation and what I feel like doing, while still getting good outcomes. Actually, not good outcomes. GREAT outcomes.

With oref0 and OpenAPS, I’d been getting good or really good outcomes for 2 years. But it wasn’t perfect – I wasn’t routinely getting 100% time in range with a 24-hour average BG at the lower end of my range; ~90% time in range was more common. (Note – this time in range is generally calculated with 80-160 mg/dL. I could easily “get” a higher time in range with an 80-180 mg/dL target, or a lot higher still with a 70-170 mg/dL target, but 80-160 mg/dL was what I was actually shooting for, so that’s what I calculate for me personally.) I was fairly happy with my average BGs, but they could have been slightly better.
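(As a side note, for anyone who wants to compute their own time in range the same way for different target ranges, here’s a minimal sketch. It assumes evenly spaced CGM readings, and the function name and sample data are purely illustrative, not part of any OpenAPS tooling.)

```python
def percent_time_in_range(bg_values, low=80, high=160):
    """Percent of CGM readings within [low, high] mg/dL.

    Assumes evenly spaced (e.g. 5-minute) readings, so the share of
    readings in range approximates the share of time in range.
    """
    if not bg_values:
        return 0.0
    in_range = sum(1 for bg in bg_values if low <= bg <= high)
    return 100.0 * in_range / len(bg_values)

# The same day scored against different target ranges:
day = [85, 92, 110, 135, 158, 162, 171, 149, 120, 98]
print(percent_time_in_range(day, 80, 160))  # the stricter range I aim for
print(percent_time_in_range(day, 80, 180))  # a looser range reports a higher number
```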

I wrote from a general perspective this week about being able to “choose one” thing to give up. And oref1 is a definite game changer for this.

  • It’s being able to put in a carb estimate and do a single, partial bolus, and see your BG go from 90 to a peak of 130 mg/dL despite a large-carb (and purely ballpark-estimated) meal. And no later rise or drop, either.
  • It’s now seeing multiple days a week with 24-hour average BGs a full ~10 or so points lower than I’m used to regularly seeing – and multiple days a week with full 100% time in range (for 80-160 mg/dL), and otherwise being really darn close to 100% way more often than I’ve been before.

But I have to tell you – seeing is believing, even more than the numbers show.

I remember in the early days of #DIYPS and #OpenAPS, there were a lot of people saying “well, that’s you”. But it’s not just me. See Tim’s take on “changing the habits of a lifetime”. See Katie’s parent perspective on how much her interactions/interventions have lessened on a daily basis when testing SMB.

See this quote from Matthias, an early tester of oref1:

I was pretty happy with my 5.8% from a couple months of SMB, which has included the 2 worst months of eating habits in years.  It almost feels like a break from diabetes, even though I’m still checking hourly to make sure everything is connected and working etc and periodically glancing to see if I need to do anything.  So much of the burden of tight control has been lifted, and I can’t even do a decent job explaining the feeling to family.

And another note from Katie, who started testing SMB and oref1:

We used to battle 220s at this time of day (showing a picture flat at 109). Four basal rates in morning. Extra bolus while leaving house. Several text messages before second class of day would be over. Crazy amount of work [in the morning]. Now I just have to brush my teeth.

And this, too:

I don’t know if I’ve ever gone 24 hours without ANY mention of something that was because of diabetes to (my child).

Y’all. This stuff matters. Diabetes is SO much more than the math – it’s the countless seconds that add up and subtract from our focus on school/work/life. And diabetes is taking away this time not just from a person with diabetes, but from our parents/spouses/siblings/children/loved ones. It’s a burden, it’s stressful…and everything we can do matters for improving quality of life. It brings me to tears every time someone posts about these types of transformative experiences, because it’s yet another reminder that this work makes a real difference in the real lives of real people. (And it’s helpful for Scott to hear this type of feedback, too – since he doesn’t have diabetes himself, it’s powerful for him to see how his code contributions and the features we’re designing and building are making a difference not just to BG outcomes, but to quality of life.)

Thank you to everyone who keeps paying it forward to help others, and to all of you who share your stories and feedback to help and encourage us to keep making things better for everyone.

 

Why guess when you don’t have to? (#OpenAPS logs & why they’re handy)

One of the biggest benefits (in my very biased opinion) of a DIY closed loop is this: it’s designed to be understandable to the person using it.

You don’t have to guess “what did it do at 2am?” or “why did it do a temp basal and not an SMB?”

Well, you COULD guess – but you don’t have to. Guessing is a choice ;).

Because we’ve been designing a system that a person has to decide to trust, it provides information about everything it’s doing and every piece of data it’s working with. That’s what “the logs” are for, and you can get that information from a few places:

  • The OpenAPS “pill” in Nightscout
  • Secondary logging sources like Papertrail
  • Information that shows up on your Pebble watch
  • The full logs from SSH’ing into a rig (usually what we mean when we ask, “what do your logs say?”)

Here’s an example of the information the OpenAPS pill provides me in Nightscout:

Example OpenAPS pill info in Nightscout

This tells me that at 11:03 am, my BG was 121; I had no carbs on board; I was dropping a tiny bit as expected and was likely going to end up slightly below my target; and the current temporary basal rate running was about equivalent to what OpenAPS thought I needed at the time. I had 0.47 netIOB, all from basal adjustments. It also lists some of the eventual BG numbers that drive the “purple line predictions” displayed in Nightscout, so if you can’t tell where the line is (90 or 100?), you can use the pill information to figure that out more easily.
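To give a rough sense of the kind of status data behind a display like that, here’s a hypothetical snapshot as a small Python dict. The field names and values are made up for illustration and are not the actual OpenAPS pill schema.

```python
# Hypothetical loop-status snapshot (illustrative field names, not the real pill format)
status = {
    "time": "11:03",
    "bg": 121,                    # current glucose, mg/dL
    "carbs_on_board": 0,          # grams
    "net_iob": 0.47,              # units, all from basal adjustments in this case
    "eventual_bg": 95,            # where BG is predicted to settle (drives the purple lines)
    "temp_basal": {"rate": 0.9, "duration_min": 30},
}

# A person (or a display like the Nightscout pill) can summarize it at a glance:
print(f"BG {status['bg']}, eventual {status['eventual_bg']}, netIOB {status['net_iob']}")
```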

(Here are the instructions for setting up Nightscout for OpenAPS.)

Here’s an example of a log from Papertrail and what it tells us:

Example papertrail usage for OpenAPS

This example is from Katie, who described her daughter’s patterns in the morning: when Anna left her rig in the bedroom and went to take a shower, you can see the tuning change at around 6:55, meaning she was out of range of the rig. After the shower, getting dressed, and getting back near the rig around 7:25, it goes back to “normal” tuning (which means reading from and writing to the pump as usual).

Papertrail is handy for remotely figuring out whether a rig is working, and, at a high level, why it might not be – especially if it’s a communication or power problem. But I generally find it to be most helpful when you already know what kind of problem it is, and use it to drill down on a particular thing. It’s not going to give you absolutely all the details needed for every problem – so make sure to read about how to access the traditional logs, too, and be able to do that on the go.

(Here are the instructions for getting Papertrail going for OpenAPS.)

Here’s what the logs ported to my Pebble tell me:

OpenAPS logs on Pebble watch @DanaMLewis example

There are several helpful things that display on my watch (using the excellent “Urchin” watchface designed by Mark Wilson, which you can customize to suit your personal preference): BGs, basal activity, and then some detailed text similar to what’s in the OpenAPS pill (current BG, the change in BG, timestamp of BG, my netIOB, my eventual BGs, and any temp basal activity). In this case, it’s easy for me to glance and see that I was a bit low for a while; I’m now flat but have negative netIOB, so the loop has been high-temping a bit to level out the netIOB.

(I’ve always preferred a data-rich watchface – even back in the days of “open looping” with #DIYPS:)

https://twitter.com/danamlewis/status/652566409537433600/photo/1

(Here’s more about the Urchin watchface)

Here’s what the full logs from the rig tell me:

Example OpenAPS logs from the rig

This has a LOT of information in it (which is why it’s so awesome). There are messages from each step of the loop – when it’s listening for “silence” to make sure it can talk successfully to the pump; refreshing pump history; checking the clocks on devices and checking for fresh BGs; and then processing through the math on what the BG is, where it’s headed, and what needs to happen. You can also see from this example where autosensitivity is kicking in, adjusting basals slightly up, target down, and sensitivity down, etc. (And for those who aren’t testing oref1 features like SMB and UAM yet, you’ll get a glimpse of how there’s now additional information in the logs about whether those features are currently enabled.)

(Here are some other logs you can watch, and how to run them)

Pro tip for OpenAPS users: if you’re logged into your rig, you just have to type l (the letter “L” but lower case) for it to bring up your logs!

So if you find yourself wondering: what did OpenAPS do/why did it do <thing>? Instead of wondering, start by looking at the logs.

And remember, if you don’t know what the problem is – the full logs are the best source of information for spotting the main problem. You can then use the information from the logs to ask about how to resolve a particular problem (Gitter is great for this!) – but part of troubleshooting well/finding out more is taking the first step of pulling up your logs, because anyone who is going to help you troubleshoot will need that information to figure out a solution.

And if you ever see someone say “RTFL”, instead of “read the manual” or “read the docs”, it means “read the logs”. 😉 :)

Choose One: What would you give up if you could? (With #OpenAPS, maybe you can – oref1 includes unannounced meals or “UAM”)

What do you have to do today (related to daily insulin dosing for diabetes) that you’d like to give up if you could? Counting carbs? Bolusing? Or what about outcomes – what if you could give up going low after a meal? Or reduce the amount that you spike?

How many of these 5 things do you think are possible to achieve together?

  • No need to bolus
  • No need to count carbs
  • Medium/high carb meals
  • 80%+ time in range
  • No hypoglycemia

How many can you manage with your current therapy and tools of choice?  How many do you think will be possible with hybrid closed loop systems?  Please think about (and maybe even write down) your answers before reading further to get our perspective.

With just pump and CGM, it’s possible to get good time in range with proper boluses, counting carbs, and eating relatively low-carb (or getting lucky/spending a lot of time learning how to time your insulin with regular meals).  Even with all that, some people still go low/have hypoglycemia.  So, let’s call that a 2 (out of 5) that can be achieved simultaneously.

With a first-generation hybrid closed loop system like the original OpenAPS oref0 algorithm, it’s possible to get good time in range overnight, but achieving that at mealtimes would still require bolusing properly and counting carbs.  But with the perfect night-time BGs, it’s possible to achieve no hypoglycemia and 80% time in range with medium-carb meals (and high-carb meals with Eating Soon mode etc.).  So, let’s call that a 3 (out of 5).

With some of the advanced features we added to OpenAPS with oref0 (like advanced meal assist or “AMA” as we call it), it became a lot easier to achieve a 3 with less bolusing and less need to precisely count carbs.  It also deals better with high-carb meals, and gives the user even more flexibility.  So, let’s call that a 3.5.

A few months ago, when we began discussing how to further improve daily outcomes, we also began to discuss how to better deal with unannounced meals. This means when someone eats and boluses, but doesn’t enter carbs. (Or in some cases: eats, doesn’t enter carbs, and doesn’t even bolus.) How do we design for better help in that situation, all while sticking to our safety principles and dosing safely?

I came up with this idea of “floating carbs” as a way to design a solution for this behavior. Essentially, we’ve learned that if BG spikes at a certain rate, it’s often related to carbs. We observed that AMA can appropriately respond to such a rise, while not dosing extra insulin if BG is not rising.  Which prompted the question: what if we had a “floating” amount of carbs hanging out there, and it could be decayed and dosed upon with AMA if that rise in BG was detected? That led us to build in support for unannounced meals, or “UAM”. (But you’ll probably see us still talk about “floating carbs” some, too, because that was the original way we were thinking about solving the UAM problem.) This is where the suite of tools that make up oref1 came from.  In addition to UAM, we also introduced supermicroboluses, or SMB for short.  (For more background info about oref1 and SMB, read here.)
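To make the “floating carbs” idea a bit more concrete, here’s a heavily simplified, conceptual sketch. This is not the oref1 code; the threshold and names are invented for illustration, and the real implementation works off BGI deviations with many additional safety checks.

```python
def unexplained_rise(bg_deltas, insulin_bgi, rise_threshold=4):
    """Sum up BG rise that insulin activity alone can't explain.

    bg_deltas:   observed BG change per 5-minute interval (mg/dL)
    insulin_bgi: BG change predicted from insulin on board for the same intervals
    Returns the accumulated unexplained rise, which a UAM-style approach could
    treat like decaying "floating" carbs and (cautiously) dose against.
    """
    total = 0.0
    for observed, predicted in zip(bg_deltas, insulin_bgi):
        deviation = observed - predicted
        if deviation > rise_threshold:   # rising faster than insulin explains
            total += deviation
    return total

# A sharp, sustained rise beyond what net IOB predicts looks like an unannounced meal;
# flat or falling BG accumulates nothing, so no extra insulin would be suggested.
print(unexplained_rise([8, 10, 6, -1], [1, 0, -1, -2]))   # carb-like rise detected
print(unexplained_rise([-2, 0, 1, -1], [-1, -1, 0, -1]))  # nothing to act on
```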

So with OpenAPS oref1, with SMB and floating carbs for UAM, we are finally at the point of achieving a solid 4 out of 5.  And not just a single set of 4, but any 4 of the 5 (except we’d prefer you don’t choose hypoglycemia, of course):

  • With a low-carb meal, no hypoglycemia and 80+% time in range are achievable without bolusing or counting carbs (with just an Eating Soon mode that triggers SMB).
  • With a regular meal, the user can either bolus for it (triggering floating carb UAM with SMB) or enter a rough carb count / meal announcement (triggering Eating Now SMB) and achieve 80% time in range.
  • If the user chooses to eat a regular meal and not bolus or enter a carb count (just an Eating Soon mode), the BG results won’t be as good, but oref1 will still handle it gracefully and bring BG back down without causing any hypoglycemia or extended hyperglycemia.

That is huge progress, of course.  And we think that might be about as good as it’s possible to do with current-generation insulin-only pump therapy.  To do better, we’d either need an APS that can dose glucagon and be configured for tight targets, or much faster insulin.  The dual-hormone systems currently in development are targeting an average BG of 140, or an A1c of 6.5, which likely means >20% of time spent > 160mg/dL.  And to achieve that, they do require meal announcements of the small/medium/large variety, similar to what oref1 needs.  Fiasp is promising on the faster-insulin front, and might allow us to develop a future version of oref1 that could deal with completely unannounced and un-bolused meals, but it’s probably not fast enough to achieve 80% time in range on a high-carb diet without some sort of meal announcement or boluses.

But 4 out of 5 isn’t bad, especially when you get to pick which 4, and can pick differently for every meal.

Does that make OpenAPS a “real” artificial pancreas? Is it a hybrid closed loop artificial insulin delivery system? Do we care what it’s called? For Scott and me, the answer is no: instead of focusing on what it’s called, let’s focus on how different tools and techniques work, and what we can do to continue to improve them.

Being Shuttleworth Funded with a Flash Grant as an independent patient researcher

Recently, I have been working on helping OpenAPS’ers collect our data and put it to good use in research (both by traditional researchers, and by using it to enable other fellow patient researchers or “citizen scientists”). As a result, I have had the opportunity to work closely with Madeleine Ball at Open Humans. (Open Humans is the platform we use for the OpenAPS Data Commons.)

It’s been awesome to collaborate with Madeleine on many fronts. She’s proven herself really willing to listen to ideas and suggestions for things to change, to make it easier both for individuals to donate their data to research and for researchers who want to use the platform. And, despite me not having the same level of technical skills, she shows a deep respect for people of all experiences and perspectives. She’s also, in general, a really great person.

As someone who is (perhaps uniquely) utilizing the platform as both a data donor and as a data researcher, it has been fantastic to be able to work through the process of data donation, project creation, and project utilization from both perspectives. And, it’s been great to contribute ideas and make tools (like some of my scripts to download and unpack Open Humans data) that can then be used by other researchers on Open Humans.

Madeleine was also selected this year to be a Shuttleworth Fellow, applying “open” principles to change how we share and study human health data, plus exploring new, participant-centered approaches for health data sharing, research, and citizen science. Which means that everything she’s doing is in almost perfect sync with what we are doing in the OpenAPS and #WeAreNotWaiting communities.

What I didn’t know until this past week was that it also meant (as a Shuttleworth Fellow) that she was able to make nominations of individuals for a Shuttleworth Flash Grant, which is a grant made to a collection of social change agents, no strings attached, in support of their work.

I was astonished to receive an email from the Shuttleworth Foundation saying that I had been nominated by Madeleine for a $5,000 Flash Grant, which goes to individuals they would like to support/reward/encourage in their work for social good.

Shuttleworth Funded

I am so blown away by the Flash Grant itself – and the signal that this grant provides. This is the first (of hopefully many) organizations to recognize the importance of supporting independent patient researchers who are not affiliated with an institution, but rather with an online community. It’s incredibly meaningful for this research and work, which is centered around real needs of patients in the real world, to be funded, even to a small degree.

Many non-traditional researchers like me are unaffiliated with a traditional institution or organization. This means we do the research in our own time, funded solely by our own energy (and in some cases, resources). Time in and of itself is a valuable contribution to research (think of the opportunity costs). However, it is also costly to distribute and disseminate ideas learned from patient-driven research to more traditional researchers. Even ignoring travel costs, most scientific conferences do not have a patient research access program, which means patients in some cases are asked to pay $400 (or more) per person for a single-day pass to stand beside their poster if it is accepted for presentation at a conference. In some cases, patients have personal resources and determination and are willing to pay that cost. But not every patient is able to do that. (And to do it year over year as they continue to do new ground-breaking research each year – that adds up, too, especially when you factor in travel, lodging, and the opportunity cost of being away from a day job.)

So what will I use the Flash Grant for? Here’s so far what I’ve decided to put it toward:

#1 – I plan to use it to fund my and Scott’s travel costs this year to ADA’s Scientific Sessions, where our poster on Autotune & data from the #WeAreNotWaiting community will be presented. (I’m still hoping to convince ADA to create a patient researcher program vs. treating us like individuals walking in off the street; but if they again choose not to do so, it will take $800 for Scott and me to stand with the poster during the poster session.) Being at Scientific Sessions is incredibly valuable for us as researchers and developers, because we can have real-time conversations with traditional researchers who have not yet been introduced to some of our tools or the data collected and donated by the community. It’s one of the most valuable places for us to be in person in terms of facilitating new research partnerships, in addition to renewing and establishing relationships with device manufacturers who could (because our stuff is all open source and MIT licensed) utilize our code and tools in commercial devices to more broadly reach people with diabetes.

#2 – Hardware parts. In order to best support the OpenAPS community, Scott and I have also been supporting and contributing to the development of open source hardware like the Explorer Board. Keeping in mind that each version of the board produced needs to be tested to see if the instructions related to OpenAPS need to change, we have been buying every iteration of Explorer Board so we can ensure compatibility and ease of use, which adds up. Having some of this grant funding go toward hardware supplies to support a multitude of setup options is nice!

There are so many individuals who have contributed in various ways to OpenAPS and WeAreNotWaiting and the patient-driven research movements. I’m incredibly encouraged, with a new spurt of energy and motivation, after receiving this Flash Grant to continue to further build upon everyone’s work and to do as much as possible to support every person in our collective communities. Thank you again to Madeleine for the nomination, and to the Shuttleworth Foundation for the Flash Grant, for the financial and emotional support for our community!

Making it possible for researchers to work with #OpenAPS or general Nightscout data – and creating a complex json to csv command line tool that works with unknown schema

This is less of an OpenAPS/DIYPS/diabetes-related post, although that is normally what I blog about. However, since we created the #OpenAPS Data Commons on Open Humans to allow those of us who desire to donate our diabetes data to research, I have been spending a lot of time figuring out the process, from uploading your data to how data is managed and shared securely with researchers. The hardest part is helping researchers figure out how to handle the data – because we PWDs produce a lot of data :). So this post explains some of the challenges of the data management needed to get it into a researcher-friendly format. I have been greatly helped over the years by general-purpose open source work from other people, and one of the things that helps ME the most as a non-traditional programmer is plain-language posts explaining the thought process behind the tools and the attempted solution paths. Especially because sometimes the web pages and blog posts pop up higher in search than nitty-gritty tool documentation without context. (Plus, I’ve been taking my own advice about not letting myself hold me back from trying, even when I don’t know how to do things yet.) So that’s what this post is!

OH that I "certainly stress tested" a tool with lots of data

Background/inspiration for the project and the tools I had to build:

We’re using Nightscout, which is a remote data-viewing platform for diabetes data, made with love and open source and freely available for anyone with diabetes to use. It’s one of the best ways to display not only continuous glucose monitor (CGM) data, but also data from our DIY closed loop artificial pancreases (#OpenAPS). It can store data from a number of different kinds and brands of diabetes devices (pumps, CGMs, manual data entries, etc.), which means it’s a rich source of data. As the number of DIY OpenAPS users grows, we estimate that our real-world use is overtaking the total hours of data from clinical trials of closed loop artificial pancreas systems.  In the #WeAreNotWaiting spirit of moving quickly (rather than waiting years for research teams to collect and analyze their own data), we want to see what we can learn from OpenAPS usage, not only by donating data to help traditional researchers speed up their work, but also by co-designing research studies of the things of most value to the diabetes community.

Step 1: Data from users to Open Humans

I thought Step 1 would be the hardest. However, thanks to Madeleine Ball, John Costik, and others in the Nightscout community, a simple Nightscout Data Transfer App was created that enables people with Nightscout data to pop it into their Open Humans accounts. It’s then very easy to join different projects (like the OpenAPS Data Commons) and share your data with those projects. And as the volunteer administrator of the OpenAPS Data Commons, it’s also easy for me to provide data to researchers.

The biggest challenge at this stage was figuring out how much data to pull from the API. I have almost 3 years’ worth of DIY diabetes data, and I have had numerous devices uploading all at once over time…which makes for large chunks of data. Not everyone has this much data (or 6-7 rigs uploading constantly ;)). Props to Madeleine for the patience in working with me to make sure the super users with large data sets will be able to use all of these tools!

Step 2: Sharing the data with researchers

This was easy. Yay for data-sharing tools like Dropbox.

Step 3: Researchers being able to use the data

Here’s where things started to get interesting. We have large data files that come in json format from Nightscout. I know some researchers we will be working with are probably very comfortable with tools that can take large, complex json files. However…not all will be, especially because we also want to encourage independent researchers to engage with the data for projects. So I had the belated realization that we need to do something other than hand over json files. We need to convert, at the least, to csv so the data can be easily viewed in Excel.

Sounds easy, right?

According to basic searches, there’s roughly a gazillion ways to convert json to csv. There’s even websites that will do it for you, without making you run it on the command line. However, most of them require you to know the types of data and the number of types, in order to therefore construct headers in the csv file to make it readable and useful to a human.

This is where the DIY and infinite-possibility nature of all the kinds of diabetes tools anyone could be using with Nightscout, plus the infinite ways they can self-describe profiles and alarms and methods of entering data, makes it tricky. Just from eyeballing the data of two individuals, I was unable to find and count all of the hundred-plus types of data entry possibilities. This is definitely a job for the computer, but I had to figure out how to train the computer to deal with this.

Again, json-to-csv tools are so common that I figured there HAD to be someone who had done this. Finally, after a dozen varying searches and trying a variety of command line tools, I found one web-based tool that would take json, create the schema without knowing the data types in advance, and convert it to csv. It was (is) super slick. I got very excited when I saw it linked to a Github repository, because that meant it was probably open source and I could use it. I didn’t see any instructions for how to use it on the command line, though, so I messaged the author on Twitter and found out that a command line version didn’t yet exist and was a not-yet-done TODO for him.

Sigh. Given this whole #WeAreNotWaiting thing (and given I’ve promised to help some of the researchers in figuring this out so we can initiate some of the research projects), I needed to figure out how to convert this tool into a command line version.

So, I did.

  • I taught myself how to unzip json files (ended up picking `gzip -cd`, because it works on both Mac and Linux)
  • I planned to then convert the web tool to be able to work on the command line, and use it to translate the json files to csv.

But…remember the big file issue? It struck again. So I first had to figure out the best way to estimate the size and split the json into a series of files, without splitting it in a weird place and messing up the data. That became jsonsplit.sh, a tool to split a json file based on the number of records you give it (and if you don’t specify, it defaults to something like 100,000 records).

FWIW: 100,000 records was too much for the more complex schema of the data I was working with, so I often did it in smaller chunks, but you can set it to whatever size you prefer.

So now “all” I had to do was:

  • Unzip the json
  • Break it down if it was too large, using jsonsplit.sh
  • Convert each of these files from json to csv

Phew. Each of these looks really simple now, but took a good chunk of time to figure out. Luckily, the author of the web tool had done much of the hard json-to-csv work, and Scott helped me figure out how to take the html-based version of the conversion and make it useable in the command line using javascript. That became complex-json2csv.js.
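For anyone who just wants the gist of those three steps without installing the actual tools, here’s a rough Python sketch of the same idea (unzip, split into chunks, flatten records with an unknown schema into CSV). It’s illustrative only; the real workflow uses gzip, jsonsplit.sh, and complex-json2csv.js, and has to cope with much larger files than this.

```python
import csv
import gzip
import json

def flatten(record, prefix=""):
    """Flatten nested dicts/lists into dotted column names, since we don't
    know the schema (or how many kinds of entries exist) ahead of time."""
    flat = {}
    if isinstance(record, dict):
        for key, value in record.items():
            flat.update(flatten(value, f"{prefix}{key}."))
    elif isinstance(record, list):
        for i, value in enumerate(record):
            flat.update(flatten(value, f"{prefix}{i}."))
    else:
        flat[prefix.rstrip(".")] = record
    return flat

def json_gz_to_csvs(path, chunk_size=100000, out_prefix="out"):
    """Unzip a .json.gz dump, split it into chunks, and write one CSV per chunk."""
    with gzip.open(path, "rt") as f:
        records = json.load(f)  # e.g. a Nightscout entries/treatments dump (a JSON array)
    for n in range(0, len(records), chunk_size):
        chunk = [flatten(r) for r in records[n:n + chunk_size]]
        headers = sorted({key for row in chunk for key in row})  # union of keys becomes the CSV header
        with open(f"{out_prefix}_{n // chunk_size}.csv", "w", newline="") as out:
            writer = csv.DictWriter(out, fieldnames=headers)
            writer.writeheader()
            writer.writerows(chunk)

# Example (smaller chunks for more complex schemas, as noted above):
# json_gz_to_csvs("entries.json.gz", chunk_size=50000)
```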

Because I knew how hard this all was, and wanted other people to be able to easily use this tool if they had large, complex json with unknown schema to deal with, I created a package.json so I could publish it to npm so you can download and run it anywhere.

I also had to create a script that would pass it all of the Open Humans data; unzip the file; run jsonsplit.sh, run complex-json2csv.js, and organize the data in a useful way, given the existing file structure of the data. Therefore I also created an “OpenHumansDataTools” repository on Github, so that other researchers who will be using Nightscout-based Open Humans data can use this if they want to work with the data. (And, there may be something useful to others using Open Humans even if they’re not using Nightscout data as their data source – again, see “large, complex, challenging json since you don’t know the data type and count of data types” issue. So this repo can link them to complex-json2csv.js and jsonsplit.sh for discovery purposes, as they’re general purpose tools.) That script is here.

My next TODO will be to write a script that takes only slices of data based on information shared as part of the surveys that go with the Nightscout data; i.e. if you started your DIY closed loop on X date, take data from 2 weeks prior and 6 weeks after, etc.
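As a rough sketch of what that slicing could look like (purely illustrative; the field name, window sizes, and function are my assumptions here, not the finished script):

```python
from datetime import datetime, timedelta, timezone

def slice_around_loop_start(records, loop_start, weeks_before=2, weeks_after=6):
    """Keep only records from N weeks before to M weeks after a DIY loop start date.

    Assumes each record carries an ISO-8601 'dateString' field, as Nightscout
    entries typically do, and that loop_start is a timezone-aware datetime.
    """
    start = loop_start - timedelta(weeks=weeks_before)
    end = loop_start + timedelta(weeks=weeks_after)
    window = []
    for r in records:
        ts = datetime.fromisoformat(r["dateString"].replace("Z", "+00:00"))
        if start <= ts <= end:
            window.append(r)
    return window

# Example usage with a survey-reported start date:
# subset = slice_around_loop_start(records, datetime(2016, 5, 1, tzinfo=timezone.utc))
```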

I also created a pull request (PR) back to the original tool that inspired my work, in case he wants to add it to his repository for others who also want to run his great stuff from the command line. I know my stuff isn’t perfect, but it works :) and I’m proud of being able to contribute to general-purpose open source in addition to diabetes-specific open source work. (Big thanks as always to everyone who devotes their work to open source for others to use!)

So now, I can pass researchers json or csv files for use in their research. We have a number of studies planning to request access to the OpenAPS Data Commons, and I’m excited about how work like this, to make diabetes data more broadly available for research, will help improve our lives in the short and long term!

Autotune (automatically assessing basal rates, ISF, and carb ratio with #OpenAPS – and even without it!)

What if, instead of guessing at the needed changes to basal rates, ISF, and carb ratios (the most commonly used method today)…we could use data to empirically determine how these settings should be adjusted?

Meet autotune.

What if we could use data to determine basal rates, ISF and carb ratio? Meet autotune

Historically, most people have guessed basal rates, ISF, and carb ratios. Their doctors may use things like the “rule of 1500” or “1800” or body weight. But, that’s all a general starting place. Over time, people have to manually tweak these underlying basals and ratios in order to best live life with type 1 diabetes. It’s hard to do this manually, and know if you’re overcompensating with meal boluses (aka an incorrect carb ratio) for basal, or over-basaling to compensate for meal times or an incorrect ISF.

And why do these values matter?

It’s not just about manually dosing with this information. Importantly, for most DIY closed loops (like #OpenAPS), dose adjustments are made based on the underlying basals, ISF, and carb ratio. For someone with reasonably tuned basals and ratios, that works great. But for someone with values that are way off, it means the system can’t help them adjust as much as it could for someone with well-tuned values. It’ll still help, but it’ll be a fraction as powerful as it could be for that person.

There wasn’t much we could do about that…at first. We designed OpenAPS to fall back to whatever values people had in their pumps, because that’s what the person/their doctor had decided was best. However, we know some people’s settings aren’t that great, for a variety of reasons. (Growth, activity changes, hormonal cycles, diet and lifestyle changes – to name a few. Aka, life.)

With autosensitivity, we were able to start to assess when actual BG deltas were off compared to what the system predicted should be happening. With that assessment, it dynamically adjusts ISF, basals, and targets accordingly. However, a common reaction was people seeing the autosens result (based on 24 hours of data) and assuming that meant their underlying ISF/basal should be changed. But that’s not the case, for two reasons. First, a 24-hour period shouldn’t be what determines those changes. Second, with autosens we cannot tell apart the effects of basals vs. the effects of ISF.

Autotune, by contrast, is designed to iteratively adjust basals, ISF, and carb ratio over the course of weeks – based on a longer stretch of data. Because it makes changes more slowly than autosens, autotune ends up drawing on a larger pool of data, and is therefore able to differentiate whether and how basals and/or ISF need to be adjusted, and also whether carb ratio needs to be changed. Whereas we don’t recommend changing basals or ISF based on the output of autosens (because it’s only looking at 24h of data, and can’t tell apart the effects of basals vs. the effect of ISF), autotune is intended to be used to help guide basal, ISF, and carb ratio changes because it’s tracking trends over a large period of time.

Ideally, for those of us using DIY closed loops like OpenAPS, you can run autotune iteratively inside the closed loop, and let it tune basals, ISF, and carb ratio nightly and use those updated settings automatically. Like autosens, and everything else in OpenAPS, there are safety caps: none of these parameters can be tuned beyond 20-30% from the underlying pump values. If someone’s autotune keeps recommending the maximum change (20% more resistant, or 30% more sensitive) over time, then it’s worth a conversation with their doctor about whether the underlying values on the pump need changing – and the person can take this report in to start the discussion.

Not everyone will want to let it run iteratively, though – not to mention, we want it to be useful to anyone, regardless of which DIY closed loop they choose to use – or not! Ideally, this can be run one-off by anyone with Nightscout data of BG and insulin treatments. (Note – I wrote this blog post on a Friday night saying “There’s still some more work that needs to be done to make it easier to run as a one-off (and test it with people who aren’t looping but have the right data)…but this is the goal of autotune!” And by Saturday morning, we had volunteers who sat down with us and within 1-2 hours had it figured out and documented! True #WeAreNotWaiting. :))

And from what we know, this may be the first tool to help actually make data-driven recommendations on how to change basal rates, ISF, and carb ratios.

How autotune works:

Step 1: Autotune-prep

  • Autotune-prep takes three things initially: glucose data; treatments data; and starting profile (originally from pump; afterwards autotune will set a profile)
  • It calculates BGI and deviation for each glucose value based on treatments
  • Then, it categorizes each glucose value as attributable to either carb sensitivity factor (CSF), ISF, or basals
  • To determine if a “datum” is attributable to CSF, carbs on board (COB) are calculated and decayed over time based on observed BGI deviations, using the same algorithm used by Advanced Meal Assist. Glucose values after carb entry are attributed to CSF until COB = 0 and BGI deviation <= 0. Subsequent data is attributed to ISF or basals.
  • If BGI is positive (meaning insulin activity is negative), BGI is smaller than 1/4 of basal BGI, or average delta is positive, that data is attributed to basals.
  • Otherwise, the data is attributed to ISF.
  • All this data is output to a single file with 3 sections: ISF, CSF, and basals (the categorization logic is sketched below this list).
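Here’s a simplified sketch of that categorization step. It’s illustrative only – the real autotune-prep works on full Nightscout and pump data structures and handles more edge cases than this.

```python
def categorize(data_points):
    """Bucket each 5-minute deviation into the section it should tune,
    loosely following the rules described above.

    Each data point is assumed to carry:
      cob        - carbs on board still absorbing (g)
      bgi        - predicted BG impact of insulin activity (mg/dL per 5m)
      basal_bgi  - BGI that the scheduled basal alone would produce
      avg_delta  - recent average BG change (mg/dL per 5m)
      deviation  - observed change minus predicted change
    """
    sections = {"CSF": [], "basal": [], "ISF": []}
    for d in data_points:
        if d["cob"] > 0:
            sections["CSF"].append(d)      # carbs absorbing: deviations tune carb ratio
        elif d["bgi"] > 0 or abs(d["bgi"]) < d["basal_bgi"] / 4 or d["avg_delta"] > 0:
            sections["basal"].append(d)    # little/negative insulin activity: tune basals
        else:
            sections["ISF"].append(d)      # otherwise: tune insulin sensitivity
    return sections

# Example: no COB, meaningful insulin activity, and BG falling -> goes to the ISF section
example = [{"cob": 0, "bgi": -6, "basal_bgi": 8, "avg_delta": -3, "deviation": -12}]
print(categorize(example))
```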

Step 2: Autotune-core

  • Autotune-core reads the prepped glucose file with 3 sections. It calculates what adjustments should be made to ISF, CSF, and basals accordingly.
  • For basals, it divides the day into hour long increments. It calculates the total deviations for that hour increment and calculates what change in basal would be required to adjust those deviations to 0. It then applies 20% of that change needed to the three hours prior (because of insulin impact time). If increasing basal, it increases each of the 3 hour increments by the same amount. If decreasing basal, it does so proportionally, so the biggest basal is reduced the most.
  • For ISF, it calculates the 50th percentile deviation for the entire day and determines how much ISF would need to change to get that deviation to 0. It applies 10% of that as an adjustment to ISF.
  • For CSF, it calculates the total deviations over all of the day’s mealtimes and compares to the deviations that are expected based on existing CSF and the known amount of carbs entered, and applies 10% of that adjustment to CSF.
  • Autotune applies a 20% limit on how much a given basal, ISF, or CSF value can vary from what is in the existing pump profile, so that if it’s running as part of your loop, autotune can’t get too far off without a chance for a human to review the changes. (This nudge-and-cap pattern is sketched below.)
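The underlying pattern for all three settings is the same: compute the fully corrective value, move only a fraction of the way toward it, and cap the result relative to the pump profile. Here’s a minimal sketch of that pattern (illustrative only; the actual autotune-core also redistributes hourly basals and handles details not shown here).

```python
def autotune_step(pump_value, fully_corrective_value, fraction, cap=0.20):
    """One autotune-style nudge: move a setting `fraction` of the way toward
    the value that would zero out the observed deviations, then cap the result
    to within +/- cap (20% here) of the existing pump setting."""
    nudged = pump_value + fraction * (fully_corrective_value - pump_value)
    low, high = pump_value * (1 - cap), pump_value * (1 + cap)
    return min(max(nudged, low), high)

# ISF and CSF move 10% of the needed change per run; basals move 20% of the
# needed change, spread over the three hours before the deviations were seen.
print(autotune_step(pump_value=50, fully_corrective_value=40, fraction=0.10))   # ISF example (nudges 50 toward 40)
print(autotune_step(pump_value=1.0, fully_corrective_value=1.6, fraction=0.20)) # basal example (nudges 1.0 toward 1.6)
```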

(See more about how to run autotune here in the OpenAPS docs.)

What autotune output looks like:

Here’s an example of autotune output.

OpenAPS autotune example by @DanaMLewis

Autotune is one of the things Scott and I spent time on over the holidays (and hinted about at the end of my development review of 2016 for OpenAPS). As always with #OpenAPS, it’s awesome to take an idea, get it coded up, get it tested with some early adopters/other developers within days, and continue to improve it!

Highlighting someone successfully using Autotune to help adjust baseline settings

A big thank you to those who’ve been testing and helping iterate on autotune (and of course, all other things OpenAPS). It’s currently in the dev branch of oref0 for anyone who wants to try it out, either as a one-off or as part of their dev loop. Documentation is currently here, and this is the issue in GitHub for logging feedback/input, along with sharing and asking questions as always in Gitter!

OpenAPS feature development in 2016

It’s been two years since my first DIY closed loop and almost two years since OpenAPS (the vision and resulting ecosystem to help make artificial pancreas technology, DIY or otherwise, more quickly available to more people living with diabetes) was created. I’ve spent time here (on DIYPS.org) talking about a variety of things that are applicable to people who are DIY closed looping, but also focusing on things (like how to “soak” a CGM sensor and how to do “eating soon” mode) that may be (in my opinion) universally applicable.

However, I think it’s worth recapping some of the amazing work that’s been done in the OpenAPS ecosystem over the past year, sometimes behind the scenes, because there are some key features and tools that have been added that seem small, but are really impactful for people living with DIY closed loops.

  1. Advanced meal assist (aka AMA)
    1. This is an “advanced feature” that can be turned on by OpenAPS users, and, with reliable entry of carb information, will help the closed loop assist sooner with a post-meal BG rise where there is mis-timed or insufficient insulin coverage for the meal. It’s easy to use, because the PWD (person with diabetes) only has to enter carbs and a bolus – then AMA acts based on the observed absorption. This means that if absorption is delayed because you walk home from dinner, have gastroparesis, etc., it backs off and waits until the carbs actually start taking effect (even if that is later than a human would expect).
    2. We also now have the purple line predictions back in Nightscout to visualize some of these predictions. This is a hallmark of the original iob-cob branch in Nightscout that Scott and I created, which took the COB calculated by DIYPS and visualized the resulting BG graph. With AMA, there are actually 3 purple lines displayed when there is carb activity (see the sketch after this list for the three absorption assumptions behind them). As described here in the OpenAPS docs, the top purple line assumes 10 mg/dL/5m (0.6 mmol/L/5m) carb absorption and is most accurate right after eating, before carb absorption ramps up. The line that is usually in the middle is based on current carb absorption trends and is generally the most accurate once carb absorption begins; the bottom line assumes no carb absorption and reflects insulin only. Having the 3 lines is helpful when you do something out of the ordinary following a meal (taking a walk; taking a shower; etc.) and helps a human decide whether they need to do anything or whether the loop will be able to handle the resulting impact of those decisions.
  2. The approach with a “preferences” file
    1. This is the file where people can adjust default safety and other parameters, like maxIOB, which defaults to 0 during a standard setup, ultimately creating a low-glucose-suspend-mode closed loop when people first set up their system. People have to intentionally change this setting to allow the system to set high temp basals that take net IOB above 0, which is an intended safety-first approach. (A hedged example preferences snippet follows after this list.)
    2. One particular feature (“override_high_target_with_low”) makes it easier for secondary caregivers (like school nurses) to do conservative boluses at lunch/snack time and allow the closed loop to pick up from there. The secondary caregiver can use the bolus wizard, which will correct down to the high end of the target, while setting this value in preferences to “true” allows the closed loop to target the low end of the target. Based on anecdotal reports from those using it, this feature sounds like it’s prevented a lot of (unintentional – diabetes is hard) overreacting by secondary caregivers when the closed loop can more easily deal with BG fluctuations. Similarly, if parents would prefer for secondary caregivers to bolus with a more conservative carb ratio, “carbratio_adjustmentratio” can be set so that the closed loop ultimately uses the correct carb amount for any needed additional calculations.
  3. Autosensitivity
    1. I’ve written about autosensitivity before, and how impressive it was – in the face of norovirus and not eating – to have the closed loop detect excessive sensitivity and deal with it, resulting in 0 lows. It’s also helpful during other minor instances of sensitivity after a few active days, or of resistance due to hormone cycles and/or an aging pump site.
    2. Autosens is a feature that has to be turned on specifically (like AMA) in order for people to utilize it, because it makes adjustments to ISF and targets and loops accordingly from those values. It also has safety caps that are automatically included to limit the amount of adjustment in either direction that autosens can make to any of the parameters.
  4. Tiny rigs
    1. Thanks to Intel, we were introduced to a board designer who collaborated with the OpenAPS community and inspired the creation of the “Explorer Board”. It’s a multipurpose board that can be used for home automation and all kinds of things, and it’s another tool in the toolbox of off-the-shelf and commercial hardware that can be used in an OpenAPS setup. Because of its built-in radio, it has enabled us to drastically reduce the size of an OpenAPS setup to about the size of two Chapsticks.
  5. Setup scripts
    1. As soon as we began working with the Explorer Board, I envisioned that it would be a game changer for increasing access for those who thought a Pi was too big or too burdensome for regular use with a DIY closed loop system. I knew we had more work to do to cut down on the friction of the setup process – while balancing the fact that the DIY part of setting up a closed loop system was, and still is, incredibly important. We then worked to create the oref0-setup script to streamline the setup process. For anyone building a loop, you still have to set up your hardware and build your system, expressing intention in many places about what you want to do and how…but it has cut down on a lot of friction and increased the amount of energy people have left, which can instead be focused on reading the code and understanding the underlying algorithm(s) and features they are considering using.
  6. Streamlined documentation
    1. The OpenAPS “docs” are an incredible labor of love and a testament to dozens and dozens of people who have contributed by sharing their knowledge about hardware, software, and the process it takes to weave all of these tools together. The docs had gotten very long, but given the advent of the Explorer Board hardware and the setup scripts, we were able to drastically streamline them and make it a lot easier to go from phase 0 (get and set up hardware, depending on the kind of gear you have); to phase 1 (monitoring and visualizing tools, like Nightscout); to phase 2 (actually set up the openaps tools and build your system); to phase 3 (starting with a low-glucose-suspend-only system and learning how to tune targets and settings safely); to phase 4 (iterating and improving on your system with advanced features, if one so desires). The “old” documentation and manual tool descriptions are still in the docs, but 95% of people don’t need them.
  7. IFTTT and other tool integrations
    1. It’s definitely worth calling out the integration with IFTTT that allows people to use things like Alexa, Siri, Pebble watches, Google Assistant (and just about anything else you can think of), to easily enter carbs or “modes” for OpenAPS to use, or to easily get information about the status of the system. (My personal favorite piece of this is my recent “hack” to automatically have OpenAPS trigger a “waking up” mode to combat hormone-driven BG increases that happen when I start moving around in the morning – but without having to remember to set the mode manually!)
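Back on the AMA purple lines from item 1: here is a minimal Python sketch of the three carb-absorption assumptions behind them (maximum absorption of 10 mg/dL per 5 minutes, the currently observed absorption trend, and no absorption at all). This is an illustration only, not actual Nightscout or oref0 code, and the numbers are hypothetical; each trace would be combined with the insulin-only prediction to produce the lines drawn on the graph.

```python
# Sketch of the three carb-absorption assumptions behind the AMA purple lines.
# Not actual oref0/Nightscout code; values are hypothetical.

def carb_impact_trace(remaining_impact_mgdl, rate_per_5min, steps=24):
    """Spread the remaining carb impact over future 5-minute steps at a given rate."""
    trace = []
    for _ in range(steps):
        impact = min(rate_per_5min, remaining_impact_mgdl)
        remaining_impact_mgdl -= impact
        trace.append(impact)
    return trace

remaining_impact = 60   # remaining BG impact from carbs on board, in mg/dL (hypothetical)
observed_rate = 4       # mg/dL per 5 minutes, estimated from recent deviations (hypothetical)

top_line    = carb_impact_trace(remaining_impact, rate_per_5min=10)  # max absorption assumption
middle_line = carb_impact_trace(remaining_impact, observed_rate)     # observed absorption trend
bottom_line = carb_impact_trace(remaining_impact, rate_per_5min=0)   # no carb absorption (insulin only)
```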
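And for the preferences file from item 2, here is a hedged example of what a few of those settings might look like (shown as a Python dict purely for illustration; the real file is JSON, and exact key names and defaults should be checked against the OpenAPS docs before changing anything).

```python
# Illustrative preferences only, NOT a copy of an actual preferences file.
# Key names follow the settings discussed above; verify spelling and defaults
# in the OpenAPS docs.
example_preferences = {
    "maxIOB": 0,                            # default of 0 creates a low-glucose-suspend-mode loop
    "override_high_target_with_low": True,  # loop targets the low end while caregivers bolus to the high end
    "carbratio_adjustmentratio": 1.0,       # placeholder value; used when caregivers bolus with a more conservative carb ratio
}
```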

…and that was all just what the community did in 2016! :) There are some other exciting things in development and being tested right now by the community, and I look forward to sharing more as this advanced algorithm development continues.

Happy New Year, everyone!