Why Open Humans is an essential part of my work to change the future of healthcare research

I’ve written about Open Humans before, both in terms of how we’re creating Data Commons there for people using Nightscout and DIY closed loops like OpenAPS to donate data for research, and in terms of building tools to help other researchers on the Open Humans platform. Madeleine Ball asked me to share some more about the background of the community’s work and interactions with Open Humans, along with how it will play into the Opening Pathways grant work, so here it is! This is also posted on the Open Humans blog. Thanks, Madeleine, and Open Humans!

 

So, what do you like about Open Humans?

Health data is important to individuals, including myself, and I think it’s important that we as a society find ways to allow individuals to choose when and how we share our data. Open Humans makes that very easy, and I love being able to work with the Open Humans team to create tools like the Nightscout Data Transfer uploader tool that further anonymizes data uploads. As an individual, this makes it easy to upload my own diabetes data (continuous glucose monitoring data, insulin dosing data, food info, and other data) and share it with projects that I trust. As a researcher, and as a partner to other researchers, it makes it easy to build Data Commons projects on Open Humans that leverage data from the DIY artificial pancreas community to further healthcare research overall.

Wait, “artificial pancreas”? What’s that?

I helped build a DIY “artificial pancreas” that is really an “automated insulin delivery system”. That means a small computer & radio device that can get data from an insulin pump & continuous glucose monitor, process the data and decide what needs to be done, and send commands to adjust the insulin dosing that the insulin pump is doing. Read, write, read, rinse, repeat!

I got into this because, as a patient, I rely on my medical equipment. I want my equipment to be better, for me and everyone else. Medical equipment often isn’t perfect. “One size fits all” really doesn’t fit all. In 2013, I built a smarter alarm system for my continuous glucose monitor to make louder alarms. In 2014, with the partnership of others like Ben West who is also a passionate advocate for understanding medical devices, I “closed the loop” and built a hybrid closed loop artificial pancreas system for myself. In early 2015, we open sourced it, launching the OpenAPS movement to make this kind of technology more broadly accessible to those who wanted it.

You must be the only one who’s doing something like this

Actually, no. There are more than 400 people worldwide using various types of DIY closed loop systems – and that’s a low estimate! It’s neat to live during a time when off-the-shelf hardware, existing medical devices, and open source software can be paired to improve our lives. There are also half a dozen (or more) other DIY solutions in the diabetes community, and likely other examples (think 3D-printed prosthetics, etc.) in other types of communities, too. And there should be even more than there are – which is what I’m hoping to work on.

So what exactly is your project that’s being funded?

I created the OpenAPS Data Commons to address a few issues. First, to stop researchers from emailing and asking me for my individual data. I by no means represent all other DIY closed loopers or people with diabetes! Second, the Data Commons approach allows people to donate their data anonymously to research; since it’s anonymized, it is often IRB-exempt. It also makes this data available to people (patient researchers) who aren’t affiliated with an organization, don’t need IRB approval or anything fancy, and just need data to test new algorithm features or investigate theories.

But not everyone inherently knows how to do research. Many people learn research skills, but not everyone has the wherewithal and time to do so. Or maybe they don’t want to become a data science expert! For a variety of reasons, that’s why we decided to create an on-call data science and research team that can provide support around forming research questions and working through the process of scientific discovery, as well as provide data science resources to expedite the research process. This portion of the project does focus on the diabetes community, since we have multiple Data Commons and communities of people donating data for research, as well as dozens of citizen scientists and researchers already in action (with more interested in getting involved).

What else does Open Humans have to do with it?

Since I’ve been administering the Nightscout and OpenAPS Data Commons, I’ve spent a lot of time on the Open Humans site, both as a “participant” of research donating my data and as a “researcher” who is pulling down and using data for research (and working to get it to other researchers). I’ve been able to work closely with Madeleine and suggest the addition of a few features to make it easier to use for research and for downloading large data sets from projects. I’ve also been documenting some tools I’ve created (like a complex json to csv converter, and scripts to pull data from multiple OH download files into a single file for analysis), plus writing up more details about how to work with data files coming from Nightscout into OH, all with the goal of enabling more researchers to dive in and do research without needing specific tooling or technical experience.
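As a rough illustration of the kind of glue scripts I mean, here’s a minimal sketch (in the spirit of, but much simpler than, the actual tools; the file paths and field names are hypothetical) that pools CGM entries from multiple Open Humans download files into one CSV:

```python
import csv
import glob
import json

# Hypothetical layout: each downloaded Open Humans file holds a JSON array
# of Nightscout-style CGM entries; pool them into a single CSV for analysis.
rows = []
for path in glob.glob("oh_downloads/*entries*.json"):
    with open(path) as f:
        for entry in json.load(f):
            rows.append({
                "source_file": path,
                "date": entry.get("dateString"),
                "sgv": entry.get("sgv"),  # sensor glucose value, mg/dL
            })

with open("combined_entries.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["source_file", "date", "sgv"])
    writer.writeheader()
    writer.writerows(rows)
```

The real scripts handle many more edge cases, but the pattern (loop over files, normalize each record, write one flat file) is the same.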

It’s also great to work with a platform like Open Humans that allows us to share data or use data for multiple projects simultaneously. There are no burdensome data collection or study procedures for individuals who want to contribute to numerous research projects where their data is useful. People consent to share their data with the commons, fill out an optional survey (which saves them from having to repeat the basic demographic-type information that every research project is interested in), and are done!

Are you *only* working with the diabetes community?

Not at all. The first part of our project does focus on learning best practices and lessons learned from the DIY diabetes communities, but with an eye toward creating an open source toolkit and materials that will be of use to many other patient health communities. My goal is to help as many other patient health communities as possible spark similar #WeAreNotWaiting projects in the areas that are of most use to them, based on their needs.

How can I find out more about this work?
Make sure to read our project announcement blog post if you haven’t already – it’s got some calls to action for people with diabetes, people interested in leading projects in other health communities, and other researchers interested in collaborating! Also, follow me on Twitter for more posts about this work in progress!

Not bolusing for meals (Fiasp, 0.6.0 algorithm in oref0 dev branch, and more)

I tweeted a little over a week ago: “I just realized I’ve now gone about 3 weeks without meal bolusing.” That means just a meal announcement (i.e. a carb entry estimate, a la 30 carbs or 60 carbs or whatever, based on my IFTTT buttons). No manual bolus.

I kind of keep waiting for the other shoe to drop, because it sounds too good to be true. I’m sure you’re skeptical reading this.

I bet she’s doing SOME bolus.

Well, she must not be eating any carbs.

She must be having worse outcomes, bad post-meal BGs, etc.

Nope, nope, and nope.

  • While I started testing this new set of features with partial boluses and worked my way down (see more below on the testing topic), I’m now literally doing no manual meal bolus. I start eating, and press one button on my watch for a carb estimate entry (which goes via IFTTT to Nightscout and my rig).
  • I eat carbs. I’ve eaten 120 grams of carbs’ worth of gluten free biscuits and gravy; 60-90 grams of pasta; dinner followed by a few gluten free cookies; etc.
  • More nuanced details below, but:
    • My 70-180 time in range has stayed the same (93+%) compared to the versions I was testing before with manual meal boluses.
    • My 70-150 and 80-160 time in ranges have decreased slightly compared to manual meal boluses, but…
    • My average blood sugar has actually dropped down (as has my a1c to match).
    • (So this means I’m having a few more spikes above 160, usually topping out around 160-170, whereas before my manual meal boluses would have me top out around 150, when all was well.)

Also note – no eating soon required. No early bolus or pre-bolus. Just a single button press as I stick food in my mouth.

Wow.

(See where I said, waiting for the other shoe to drop?)

That’s why I waited a while to even tweet about it. Maybe it’s a fluke. Maybe it won’t work for other people. Maybe, maybe, maybe. Who knows. It’s still fairly early to tell, but as other people have begun testing the current dev branch of oref0 with 0.6.0-related features, they’re starting to see improvements as well. (And that could be due to some of the many other features we are adding to 0.6.0, ranging from exponential curves for insulin activity, to allowing SMBs to do more, to carb-ratio-tuned autosensitivity, to huge autotune improvements, etc.)

So while I don’t want to over-hype – and never do, what works for me will not work for everyone – I do want to share my cautious excitement over continuing to be able to push the envelope on algorithms and what might be possible outcome-wise for this kind of technology.

Here’s what has enabled me to be in the no-bolus zone for what is now well over a month, with (to me) still-great outcomes worth the tradeoffs described above:

  1. Faster insulin. Thanks to our lovely looping friends in Germany/Austria, we came back from Europe with a few vials of Fiasp to try. I was HIGHLY skeptical about this. Some of our European friends saw great results right away; others didn’t. I didn’t get great results on it at first. Some of that may be due to natural changes between insulin types and not knowing exactly how to adjust my manual bolus strategy to the faster insulin action, but until we did some code changes to allow SMBs to do more and added some other features to what’s now 0.6.0, I wasn’t thrilled, and in fact after about two weeks of it I was about to switch off of it. So that brings me to #2.
  2. More improvements to the algorithm, which is now what will become the 0.6.0 release of oref0. There’s a whole lot of stuff packed in there. Exponential curves. Different carb absorption decay calculations. Allowing SMB to do more. Additional safety guards since we ramped SMB up. (See the sketch below for a feel of the exponential curve piece.)
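To give a feel for the exponential curves piece, here is a minimal sketch of the exponential insulin activity model as documented for oref0 (the parameter values, like a 75-minute peak and 5-hour DIA, are illustrative only):

```python
import math

def exponential_iob(t, td=300.0, tp=75.0):
    """Fraction of a bolus still active t minutes after delivery, using the
    exponential model described in the oref0 docs. td = duration of insulin
    action (minutes), tp = time of peak activity (minutes)."""
    if t <= 0:
        return 1.0
    if t >= td:
        return 0.0
    tau = tp * (1 - tp / td) / (1 - 2 * tp / td)
    a = 2 * tau / td
    S = 1 / (1 - a + (1 + a) * math.exp(-td / tau))
    return 1 - S * (1 - a) * (
        (t ** 2 / (tau * td * (1 - a)) - t / tau - 1) * math.exp(-t / tau) + 1
    )

# With these parameters, roughly 41% of a bolus is still on board at 2 hours.
print(round(exponential_iob(120), 2))
```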

How we started testing the no-bolus approach:

  • I have always known that about 6u of insulin (thanks to testing dating back to my early DIYPS days, many many many moons ago) is about as much as I should bolus at any one time. So, even if I ate 120 carbs, I usually did about a 6u bolus up front, and let the rig pick up the rest as needed over the following hours. From there, I started doing ~75% or so of the bolus, based on wherever I felt like rounding to with my easy bolus buttons.
  • Whether I did 75% or 100%, I didn’t see a ton of difference at first…
  • …so I took a leap and tried no-bolus with some SMB adjustments to allow it to ramp up faster with carb entry. Behaviorally, I find it a lot easier to do nothing 😀 vs. figure out the right amount of up-front bolus. And outcomes-wise (see above), it was very similar.

It definitely was an interesting approach to test. Between the Fiasp and the no-bolus up front, with some meals it matched really well and I had practically no rise. Due to incoming netIOB, food type, etc., sometimes I did have a rise – but while it spiked slightly higher (160-170 usually, vs. my earlier 150s with a manual bolus), it was only up there for 2-3 data points and then came sharply down, leveling out smoothly in my preferred post-meal range. So an important lesson I learned was not to over-react to the BG curve going up without looking at the predictions to see where I was going to come back down. (And as I had more than one meal where the spike and drop back to normal happened, it became very easy to adjust to the BG graph and not get that emotional tug to “do more” with a quick, short rise like that.)

Obviously, starting BG makes a difference. I’m usually starting <130 mg/dL when I see these spikes cap out at 170 or lower. I’ve started higher, and seen higher rises, too. They’re not all perfect: with occasional pump site issues, carb underestimates, unplanned carb stacking, and all the randomness of diabetes and a non-structured lifestyle (including live-testing bleeding edge algorithm changes), I’ve spent 12% of the last month >160 mg/dL, which is about the same as the 3 months before that. But in most cases (I’d say 95%), the no-bolus approach has actually yielded better outcomes than I expected AND has avoided post-meal lows better than I would have achieved with a manual bolus.

This is huge when you think about the QOL aspect of not having to do as much math at a meal, and when you think about all the complicating factors related to food – timing (do you bolus when you order, or when the food arrives, or earlier than that?), and the gluten factor. I have celiac disease, so if I’m eating out (which we do a lot, especially since I travel frequently), bolusing before setting eyes on the food (and before knowing they didn’t plate it with bread, which would send the whole order back to be made all over again) just isn’t smart. That’s why eating soon historically worked so well for me vs. traditional pre-boluses: I could set the target entering the restaurant, bolus when I laid eyes on my hopefully safe food, and get reasonable meal outcomes (topping out around 150).

It also worked really well in the case where a restaurant cooked my gluten free pasta in the same pasta cooker and water as regular pasta, but I wasn’t informed until after I found stray gluten noodles in the bottom of my pasta dish and started asking how that was possible, since they (used to) do gluten free well. (Now, I pick up heaps of pasta and sort the noodles one by one to make sure they all match before ever eating gluten free pasta. It makes waiters look at you very worriedly as you wave pasta around in the air, but better safe than glutened (again).) So, I was majorly glutened, and my digestive system was all out of sorts (isn’t that a nice polite way to describe getting glutened?) for many days, which of course impacted BG and insulin right then and for the days afterward. But because I had done carb entry and no bolus, I was able to edit the carb entry down; I didn’t have that much insulin stacked, and didn’t end up low after glutening, which is usually what happens.

Is that a super regular situation for most people? No. But it was super nice. And also helped me face pasta again last night, so I could put in a (very low in case of gluten) carb estimate, match my noodles, eat pasta, and let the SMBs ramp up to match absorption. It works very well for me.

Whether you have celiac or not, for many reasons (insert yours here), it’s nice to not have to commit to the bolus up front. It’s closer to approaching what I think non-PWDs do at mealtimes: just eat.

(I haven’t done much testing (yet? TBD) of the no-carb-entry and no-meal-bolus scenario. I expect I would have higher spikes, but it would be interesting to see if it would still come down reasonably fast. It probably wouldn’t be my go-to strategy, because I don’t mind a one-button general meal size estimate, but it would be nice to know what that curve shape would look like. If I test that, I’ll start with small snacks and ramp my way up.)
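For the curious: the one-button meal announcement works because IFTTT can POST a treatment to Nightscout’s REST API, which the rig then picks up as carb entry. Here’s a minimal sketch of an equivalent call (the URL and secret are placeholders, and the exact treatment fields are my assumption based on typical Nightscout carb entries):

```python
import hashlib
import requests

NIGHTSCOUT_URL = "https://example-nightscout.herokuapp.com"  # placeholder
API_SECRET = "my-nightscout-secret"  # placeholder

# Nightscout authenticates REST calls with a SHA-1 hash of the API secret.
headers = {"api-secret": hashlib.sha1(API_SECRET.encode()).hexdigest()}

# A rough meal announcement: ~60g of carbs, no bolus.
treatment = {
    "eventType": "Carb Correction",
    "carbs": 60,
    "enteredBy": "ifttt-button",
}

resp = requests.post(
    f"{NIGHTSCOUT_URL}/api/v1/treatments.json",
    json=treatment,
    headers=headers,
)
resp.raise_for_status()
```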

The questions I always get:

  1. Q: HOW DO I GET THIS?
    A: Caution: like all things OpenAPS, but especially true for the development branch, 0.6.0 is NOT yet released to master and is still highly experimental. I wouldn’t install dev unless you want to pay lots of close attention to it, and are willing to update multiple times over the course of the week, because Scott and I are merging features and tweaks to it almost daily.

    Got the disclaimers down? Ok. It’s in the dev branch of oref0. You should read this PR, which has notes with more detail on what’s included, but you should also review the code diff to see all that’s changed, because it’s not all documented yet. Also, follow the instructions at the bottom to be able to install it without git. Hop into Gitter if you have questions about it!

    (Big huge thanks to folks like Tim and Matthias for early testing of 0.6.0; to Tim for writing up the initial rounds of 0.6.0-dev here (note that we’ve made further changes since this post); and to others who’ve been testing & providing feedback and input into the dev branch!)

  2. Q: When will this get “released” to master?
    A: It depends. This is still a highly active dev branch, and we’re making a lot of changes and tweaking and testing things. The more people who test now and provide feedback, the sooner we can get to the final “prepare for release” testing stage. Lots and lots of testing, and things depend on how much of what exists needs tweaking, and what else we decide should go into this release. So, there’s never any specific release date.

  3. Q: What is Fiasp?
    A: A faster acting insulin that was only approved in Europe and Canada…until today. Convenient timing. I asked a PR person who messaged me about it, and they said it’s estimated to be available in U.S. pharmacies by late December/early Q1. As previously stated, it’s already available in other parts of the world.

    Fiasp peaks sooner (say, ~45 minutes) with the same tail as everything else. It’s not instantaneous. For your million and one questions about whether it’s approved for your use in a tree, on a plane, at the zoo, and all other extrapolations – please ask Google/your doctor/the manufacturer, and not me. I don’t know. :)

  4. Q: Will any of this work for people NOT on Fiasp?
    A: Nothing is guaranteed (even for other people on Fiasp), but the folks who’ve started testing 0.6.0 even without Fiasp (on Humalog or Novolog/Novorapid, etc.) have been happier on it vs. earlier versions, too.

    I don’t expect Fiasp to work super well forever for me, given what I’ve heard from other people with months of experience on it…and given that my first two weeks of Fiasp weren’t spectacular, I want people to not expect miracles. (Sorry, this blog post does not promise miracles, so sorry if you got super excited at the above. No miracles! This is not a cure! We still have diabetes!) Like all things artificial pancreas, I think it’s better to be cautiously hopeful, with realistic expectations that things *might* be a little bit better than before. But as always, YDMV (your diabetes may/will always vary), your body will vary, and life happens, etc., so who knows.

Just 4 months ago, we published a blog post pointing out that the new features had allowed us to achieve 4 out of 5 of: no bolus; no carb counting; medium/high carb meals; 80%+ time in range; and no hypoglycemia. With Fiasp and 0.6.0 (currently what’s in the dev branch), we’ve now achieved all 5 simultaneously: I can eat large high-carb meals, enter very vague guesstimates of 60 or 90 carbs (no need for actual carb counting, just general size-based meal announcement), and still achieve 80%+ time in range 70-150 mg/dL without ever going <55 mg/dL. Does that mean that OpenAPS with Fiasp finally meets the definition of a “real” Artificial Pancreas (step 5 on JDRF’s 6-step AP development pathway)? We think it does.

So, tl;dr (because long post is long): with Fiasp and the 0.6.0-dev branch, I’m able to not bolus for meals, and just enter a very generally sized meal estimate. It’s working well for me, and like all things, we’re working to make it available via OpenAPS to others who want to try similar features/approaches. It may not work well for everyone. If it helps one other person, though, like everything else it’ll be worth it. Big thanks to Scott for LOTS of development in 0.6.0 and partnership in designing these features; to too many people to name for testing, providing feedback, and helping iterate on these features; and to the entire community for being awesome and helping us continue to push the envelope on what might be possible for those of us with type 1 diabetes. :)

Why a non-academic (patient) publishes in academic journals

Today I was able to share that my Letter to the Editor was published in the Journal of Diabetes Science and Technology. It’s on why we need to set expectations to help patients successfully adopt hybrid closed loop/artificial pancreas/automated insulin delivery system technology. (You can read it via image copies in the first link.)

I’ve published a few times in academic journals. Last year, Scott and I published another Letter to the Editor in JDST with the OpenAPS outcomes study we had presented at the 2016 ADA Scientific Sessions conference.

But, I’m sure people are wondering why I choose to do so – especially as I am 1) a patient and 2) a non-academic. (Although in case you missed it – I’m now the Principal Investigator on a grant-funded study!)

While there are many healthcare providers, researchers, industry employees, FDA staff, etc. who read blogs like this and are up to speed on the bleeding edge of diabetes technology… there are easily 10x the number that do not.

And if they don’t know about the existence of this world, they won’t know about the valuable lessons we’re learning and won’t be able to share those lessons and knowledge with other healthcare providers and the patients that they treat.

This is why, in pursuit of more ways to share knowledge from our community with the rest of the diabetes community, we submit abstracts for posters and presentations to conferences like ADA’s Scientific Sessions. Our abstracts are evaluated just like the abstracts from traditional healthcare providers (as far as they can tell, I’m just another academic, albeit one with fewer credentials ;)), and I’m proud that they’re evaluated and deemed worthy of poster presentations alongside mainstream researchers. Ditto for our written publications, whether they be letters to the editor or other types of articles submitted to journals and publications.

We need to find more ways to share and distribute knowledge with the “traditional” medical and academic research world. And I’d love to do more – so please share ideas if you have them. And if you’re someone who bridges the gap to the traditional world, I appreciate your help sharing these types of articles and conversations with your colleagues.

Opening pathways for discovery, research, and innovation in health and healthcare

How can we get more patients and other communities to leverage the benefits of the #WeAreNotWaiting mindset for research, development, and innovation in health (and healthcare)?

That’s a question I’ve been asking myself for two years, after seeing the diverse efforts and valuable outpourings from the DIY diabetes community (ranging from amazing remote monitoring solutions for CGM to algorithms, hardware, and other software for automated insulin delivery systems).

But, how to scale? In diabetes, we’re perhaps uniquely positioned given our data-driven disease. However, I believe that the data and innovation approach we’ve taken in diabetes can help many other types of patient communities as well. I just didn’t know how to help scale it… until recently.

When a group of us from the OpenAPS community participated in the Quantified Self Public Health Symposium in 2016, it prompted some follow-up conversations with various academic researchers, including Eric Hekler from Arizona State University (ASU).

Eric started a conversation, and kept asking me: What could you do if you partnered with academic researchers? How can traditional researchers help the DIY community, OpenAPS or otherwise?

That also sparked a conversation with Paul Tarini, a senior program officer at the Robert Wood Johnson Foundation (RWJF), about potential funding for a project.

(Important to state here: OpenAPS itself is not a funded project. It has not been, and will not be. It is 100% DIY, non-commercial, and it has been built by a community of volunteers.)

What I wanted to talk to RWJF about was funding a collaboration with academic researchers to study the data and innovation coming out of the community, and ultimately to identify needs and build resources to help scale this type of community effort and empower other patient communities as well.

It took over a year, but we were able to work through initial project proposals and were then invited to submit a full proposal. And on Wednesday (September 6, 2017), I found out that we have been awarded the grant, and this project work will be funded by the Robert Wood Johnson Foundation. The project officially begins on September 15 and will run for 18 months.

So what exactly is this project?

Our project is titled “Learning to not wait: Opening pathways for discovery, research, and innovation in health and healthcare.”

It entails a number of things.

    1. We are creating an on-call data science team to support research in the DIY community. More details will be forthcoming, but essentially this team is there to help do research on the myriad questions bubbling out of the community. For example: how does sensitivity change during growth spurts, during periods of inactivity, or when changing insulin types? What are some of the most successful mealtime insulin dosing strategies? Etc. People will be able to submit ideas, get help formulating them into researchable questions, and get the research done.
    2. Studying the process of research when done by patients, and the barriers they and their research run into when spreading this scientific knowledge. I personally know there are a lot of barriers, but we need to document them and find solutions. (There is a lot of prejudice and perceived stigma toward patient researchers doing this type of scientific work, around things like quality of research, methods of distributing knowledge, etc.)
    3. Convening a meeting with patients, traditional researchers, legal experts, and others in this innovative research space to discuss and address some of the known (and still being discovered) barriers to this type of research. I envision a white-paper-type publication coming out of this meeting to document the lay of the land as it is.
    4. Creating toolkit-type resources, based on what we’ve learned and are learning in this project, to help patients new to DIY and this type of research take on various levels of research or innovation activity. Part of our project’s scope of work, in #WeAreNotWaiting spirit, includes beta testing with 2-3 other patient communities, so we can get feedback, iterate, and roll these out as quickly as possible.

Our project has a couple of principles that I feel strongly about, and that I am also very proud of, in approaching this body of work.

  • I am the scientific Principal Investigator of this project. This is unique in the world of grant-funded research, where a patient is driving the scientific discovery process. (I’m proud and very appreciative to have two amazing co-PIs who are helping with some of the administrative work, since the grant is being administered through Arizona State University Foundation, which is being an awesome partner given the uniqueness of this situation*.) My co-PIs are Eric Hekler and Erik Johnston. The other members of the team include John Harlow, who’s a MacArthur Foundation Postdoctoral Fellow; Sayali Phatak, a PhD student at ASU; and Keren Hirsch from the ASU Decision Theater.
  • #WeAreNotWaiting is the mantra for this project and our entire team. We plan to be as efficient as possible in doing the project work, which includes sharing findings back with the community as soon as they’re ready (a given; there’s no reason to wait), finding ways to publish that are faster than the very traditional academic publishing process, and being thoughtful about the right audiences outside the patient community for communicating about this project’s work.
  • Always asking why. As a brand new PI, I have a lot to learn. But as a non-traditional PI, I am also running into a lot of things that are done the way they’d be done if I were inside a traditional organization. I plan to explore and challenge as many of these as I can, and to document the decisions I make in this project as I come to those forks in the road. In some cases, I’ll choose the easier path because, for my project/work/focus, it does not matter. In other cases, based on principle, I’ll choose the harder, path-blazing approach.

* About the uniqueness of this project and the administrative details

Since I’m an individual patient researcher not affiliated with an organization, we decided to make Arizona State University Foundation the official grantee financial organization, since that’s where my co-PIs were. But true to the nature of this project, I want to document the challenges and opportunities that come with that, so there’s more to come about the lessons learned from putting together the proposal and from the grant approval process once we heard the grant would be awarded. That way, future patient researchers will have a leg up on what is coming when taking on this type of project, and will be aware of what this approach entailed. The short version: I am a subcontractor to ASU for purposes of the grant, but am not employed by or otherwise affiliated with ASU. Props to the many people at ASU who learned about me and this project during the approval process and rolled with it / helped make it happen.

So, what’s next? When do you start? What are you waiting on?!

Coming super soon – a project website with more details about this project.

For my fellow PWDs:

  • Stay tuned for the project website going live, which will also include more details about how individuals in the diabetes community can pitch ideas/get started working with the on-call data science team.

For patients reading this who are members of other patient disease communities:

  • Ping me if you’re SUPER excited and can’t wait to tell me :), or stay tuned for more info about the process for proposing that your patient community be one of the communities with whom we beta test some of the tools/resources developed toward the latter phases of this project.

If you’re someone else who’s interested in this work (such as a legal expert, other researcher, etc.):

  • Also ping me if you’re interested in hearing more about the meeting we plan to convene with a small multidisciplinary group to discuss and address barriers to patient-driven research. Even if we can’t get everyone interested to attend the in-person meeting, I would still love your input and collaboration on the white paper and/or other publications and intersections with this project.

For everyone else:

  • Please do let me know if there’s a particular aspect of this project that you’re curious to learn more about – whether it’s some of what I’m facing and documenting as a patient PI researcher, or otherwise. That’ll help me prioritize some of the blog posts and articles I’m writing about this process!

Thanks to everyone who managed to read this ginormous blog post.

I am incredibly excited about the project, and about having resources to focus on how patients and non-traditional actors in healthcare can drive research, development, innovation, and knowledge sharing in non-traditional ways and from the ground up, plus prioritize and change the healthcare research agenda. Like my work in OpenAPS, which stands on the shoulders of so many, I hope this project is the first of many, and that it gets to a place where others can leverage this work and take it beyond the scope of what we’ve all imagined is currently possible.

A huge thanks to the team partnering with me on this work; to ASU for being a great partner as an organization; to the Robert Wood Johnson Foundation for supporting this project (and in particular to our program manager, Paul Tarini, for his ongoing support throughout this entire process); and many extra thanks to Scott and all my family and friends for supporting me throughout the proposal process and for being the recipients of some VERY excited and !!!-filled texts when I found out we had officially been awarded the grant for this project.

Unexpected side-effect of closed looping: Body re-calibrations

It’s fascinating how bodies adapt to changing situations.

For those of us with diabetes: do you remember the first time you took insulin after diagnosis? For me, I had been fasting for ~18 hours (because I felt so bad, and hadn’t eaten anything since dinner the night before) and drinking water, and my BG was still somehow 550+ at the endo’s office.

Water did nothing for my unquenchable thirst, but that first shot of insulin sure did.

I still remember the vivid feeling of it being an internal liquid hydration for my body, and everything feeling SO different when it started kicking in.

In case the BG of 550+, the A1c of 14+ (I don’t remember the exact number), and feeling terrible for weeks weren’t enough, that’s one of the things that really reinforced that I have diabetes, and that insulin is something my body desperately needs but wasn’t getting.

Over the last ~14+ years, I’ve had a handful of times that reinforced the feeling of being dependent on this life-saving drug, and the drastic difference I feel with and without it. Usually, it’s been times when a pump site ripped out, or when I was sick and high and highly resistant, then finally stopped being as resistant, and my blood sugar started responding to insulin after hours of being really high, and I started dropping.

But I’ve had different ways to experience this feeling lately, as a result of having lived with a DIY closed loop (OpenAPS) for 2+ years – and it hasn’t involved anything as drastic as a HIGH BG or equipment failure. It’s a result of my body re-calibrating to the new norm: being able to spend more and more time close to 100% in range, in a much tighter and lower range than I ever thought possible (especially now, with some of the flexibility and freedom oref1 offers).

I originally had a brief fleeting thought about how BGs in the low 200s used to feel like the 300s did. Then, I realized that 180 felt “high”. One day, it was 160.

Then one day, my CGM said I was flat in the 120s, and I felt “high”. (I calibrated, and it turned out it was really 140.) I’ve had several other days where I’d hit the 140s and feel like I used to in the mid-200s (slightly high, and annoying, but no major high symptoms like 300-400 would cause – just enough to feel it and be annoyed).

That was odd enough as a fleeting thought, but it was really odd to wake up one morning and, without even looking at my watch or CGM to see what my BGs had been all night, know that I had been running high.

I further classified “really odd” as “completely crazy” when that “running high” meant floating around the 130-140 range, instead of down in the 90-110 range, which is where I probably spend 95% of my nights nowadays.

Last night is what triggered this blog post, plus a recurring observation: because I have a DIY closed loop that does so well at handling the small, unknown variances that cause disturbances in BG levels without me having to do much work, it is MUCH easier to pinpoint major influences, like my liver dumping glucose (either because of a low, or because it’s ‘full up’ and needs to get rid of the excess).

In last night’s case, it was a major liver dump of glucose.

Here’s what happened:

Scott and I went on a long walk, with the plan to stop for dinner on the way home. BG started dropping as I was about half a mile out from the restaurant, but I’m stubborn 😀 and didn’t want to eat a fruit strip when I was about to sit down and eat a burger. So, my BG was dropping low when I actually ate. I expected my BG to flatten on its own, given the pause in activity, so I bolused fairly normally for my burger, and we walked the last half mile home.

However, I ended up not rising from the burger like I usually do, and started dropping again. It was quite a drop, and I realized my burger digestion was different because of the previous low, so I ended up eating some fruit to handle the second low. My body was unhappy with two lows, so my liver decided to save the day by dumping a bunch of glucose to bring my blood sugar up. A double rebound effect, then, from the liver dump and the fruit I had eaten. Oh well, that’s what a closed loop is for!

Instead of rebounding into the high 300s (which I would have expected pre-closed loop), I maxed out at 220. The closed loop did a good job of bolusing on the way up. However, because of how much glucose my liver dumped, I stayed higher longer. (Again, this probably sounds crazy to anyone not looping, as it would have sounded to me before I began looping.) I sat around 180 for the first three hours of the night, then dropped down to ~160 for most of the rest of the night, and ended up waking up around 130.

And boy, did I know I had been high all night. I felt (and still feel, hours later) like I used to years ago when I would wake up in the 300s (or higher).

Visuals

Hmm, the 3 hour view doesn’t look so bad, despite feeling it.

The 6 hour view shows why I feel it.

12 hours. Sheesh.

The 24 hour view shows you the full picture of the double low and why my liver decided I needed some help. Thanks, liver, for still being able to help if I really needed it!

Settling back to normal below 120, hours later.

There are SO many amazing things about DIY closed looping. Better A1c, better average BG, better time in range, less effort, less work, less worrying, more sleep, more time living your life.

One of the benefits, though, is this bit of double-edged sword: your body also re-calibrates to the new “normal”, and that means the occasional extreme BG excursion (even if not that extreme!) may give you a different range of symptoms than you used to experience.

This. Matters. (Why I continue to work on #OpenAPS, for myself and for others)

If you give a mouse a cookie or give a patient their data, great things will happen.

First, it was louder CGM alarms and predictive alerts (#DIYPS).

Next, it was a basic hybrid closed loop artificial pancreas that we open sourced so other people could build one if they wanted to (#OpenAPS, with the oref0 basic algorithm).

Then, it was all kinds of nifty lessons learned about timing insulin activity optimally (do eating soon mode around an hour before a meal) and how to use things like IFTTT integration to squash even the tiniest (like from 100mg/dL to 140mg/dL) predictable rises.

It was also things like displays, buttons, and widgets on the devices of my choice – ranging from being able to “text” my pancreas, to a swipe and button tap on my phone, to a button press on my watch – not to mention tinier-sized pancreases that fit in or clip easily to a pocket.

Then it was autosensitivity that enabled the system to adjust to my changing circumstances (like getting a norovirus), plus autotune to make sure my baseline pump settings were where they needed to be.

And now, it’s oref1 features that enable me to make different choices at every meal depending on the social situation and what I feel like doing, while still getting good outcomes. Actually, not good outcomes. GREAT outcomes.

With oref0 and OpenAPS, I’d been getting good or really good outcomes for 2 years. But it wasn’t perfect – I wasn’t routinely getting 100% time in range with a 24-hour average BG at the lower end of the range. ~90% time in range was more common. (Note – this time in range is generally calculated with 80-160 mg/dL. I could easily “get” higher time in range with an 80-180 mg/dL target, or a lot higher with a 70-170 mg/dL target, but 80-160 mg/dL was what I was actually shooting for, so that’s what I calculate for me personally.) I was fairly happy with my average BGs, but they could have been slightly better.
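For anyone wanting to calculate it the same way: time in range here is simply the share of CGM readings falling inside the band. A minimal sketch, assuming a plain list of mg/dL readings:

```python
def time_in_range(bgs, low=80, high=160):
    """Percent of CGM readings (mg/dL) falling within [low, high]."""
    in_range = sum(1 for bg in bgs if low <= bg <= high)
    return 100.0 * in_range / len(bgs)

readings = [92, 110, 135, 158, 171, 148, 103]  # made-up sample
print(f"{time_in_range(readings):.0f}% in 80-160 mg/dL")  # 86%
```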

I wrote from a general perspective this week about being able to “choose one” thing to give up. And oref1 is a definite game changer for this.

  • It’s being able to put in a carb estimate, do a single partial bolus, and see your BG go from 90 to peaking at 130 mg/dL despite a large-carb (and pure ballpark estimate) meal. And no later rise or drop, either.
  • It’s now seeing multiple days a week with 24-hour average BGs a full ~10 or so points lower than you’re used to regularly seeing – multiple days a week with a full 100% time in range (for 80-160 mg/dL), and otherwise being really darn close to 100% way more often than ever before.

But I have to tell you – seeing is believing, even more than the numbers show.

I remember in the early days of #DIYPS and #OpenAPS, there were a lot of people saying “well, that’s you”. But it’s not just me. See Tim’s take on “changing the habits of a lifetime“. See Katie’s parent perspective on how much her interactions/interventions have lessened on a daily basis when testing SMB.

See this quote from Matthias, an early tester of oref1:

I was pretty happy with my 5.8% from a couple months of SMB, which has included the 2 worst months of eating habits in years.  It almost feels like a break from diabetes, even though I’m still checking hourly to make sure everything is connected and working etc and periodically glancing to see if I need to do anything.  So much of the burden of tight control has been lifted, and I can’t even do a decent job explaining the feeling to family.

And another note from Katie, who started testing SMB and oref1:

We used to battle 220s at this time of day (showing a picture flat at 109). Four basal rates in morning. Extra bolus while leaving house. Several text messages before second class of day would be over. Crazy amount of work [in the morning]. Now I just have to brush my teeth.

And this, too:

I don’t know if I’ve ever gone 24 hours without ANY mention of something that was because of diabetes to (my child).

Y’all. This stuff matters. Diabetes is SO much more than the math – it’s the countless seconds that add up and subtract from our focus on school/work/life. And diabetes is taking away this time not just from a person with diabetes, but from our parents/spouses/siblings/children/loved ones. It’s a burden, it’s stressful…and everything we can do matters for improving quality of life. It brings me to tears every time someone posts about these types of transformative experiences, because it’s yet another reminder that this work makes a real difference in the real lives of real people. (And it’s helpful for Scott to hear this type of feedback, too – since he doesn’t have diabetes himself, it’s powerful for him to see how his code contributions and the features we’re designing and building are making a difference, not just to BG outcomes but to daily life.)

Thank you to everyone who keeps paying it forward to help others, and to all of you who share your stories and feedback to help and encourage us to keep making things better for everyone.

 

Why guess when you don’t have to? (#OpenAPS logs & why they’re handy)

One of the biggest benefits (in my very biased opinion) of a DIY closed loop is this: it’s designed to be understandable to the person using it.

You don’t have to guess “what did it do at 2am?” or “why did it do a temp basal and not an SMB?”

Well, you COULD guess – but you don’t have to. Guessing is a choice ;).

Because we’ve been designing a system that a person has to decide to trust, it provides information about everything it’s doing and the information it has. That’s what “the logs” are for, and you can get information from them in a few places:

  • The OpenAPS “pill” in Nightscout
  • Secondary logging sources like Papertrail
  • Information that shows up on your Pebble watch
  • The full logs from SSH’ing into a rig (usually what we mean when we ask, “what do your logs say?”)

Here’s an example of the information the OpenAPS pill provides me in Nightscout:

Example OpenAPS pill info in Nightscout

This tells me that at 11:03 am, my BG was 121; I had no carbs on board; I was dropping a tiny bit, as expected, and was likely going to end up slightly below my target; and the current temporary basal rate running was about equivalent to what OpenAPS thought I needed at the time. I had 0.47 netIOB, all from basal adjustments. It also specifies some of the eventual numbers that drive the “purple line predictions” displayed in Nightscout, so if you can’t tell where the line is (90 or 100?), you can use the pill information to determine that more easily.

(Here’s the instructions for setting up Nightscout for OpenAPS)
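If you’d rather pull the pill’s numbers programmatically, the same data is uploaded to Nightscout’s devicestatus collection. Here’s a minimal sketch (the site URL is a placeholder, and the exact field layout is my assumption based on typical OpenAPS devicestatus uploads):

```python
import requests

NIGHTSCOUT_URL = "https://example-nightscout.herokuapp.com"  # placeholder

# Fetch recent devicestatus entries and find the latest OpenAPS suggestion.
statuses = requests.get(
    f"{NIGHTSCOUT_URL}/api/v1/devicestatus.json", params={"count": 10}
).json()

for status in statuses:
    suggested = status.get("openaps", {}).get("suggested")
    if suggested:
        print("IOB:", suggested.get("IOB"))
        print("COB:", suggested.get("COB"))
        print("eventualBG:", suggested.get("eventualBG"))
        print("reason:", suggested.get("reason"))
        break
```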

Here’s an example of a log from Papertrail and what it tells us:

Example papertrail usage for OpenAPS

This example is from Katie, who described her daughter’s patterns in the morning: when Anna left her rig in the bedroom and went to take a shower, you can see the tune change at around 6:55, meaning she was out of range of the rig. After the shower, getting dressed, and getting back to the rig around 7:25, it went back to “normal” tuning (which means reading and writing to the pump as usual).

Papertrail is handy for figuring out remotely whether a rig is working or not, and, at a high level, why it might not be, especially if it’s a communication or power problem. But I generally find it to be most helpful when you know what kind of problem it is, and use it to drill down on a particular thing. However, it’s not going to give you absolutely all the details needed for every problem – so make sure to read about how to access the traditional logs, too, and be able to do that on the go.

(Here’s the instructions for getting Papertrail going for OpenAPS)

Here’s what the logs ported to my Pebble tell me:

OpenAPS logs on Pebble watch @DanaMLewis example

There are several helpful things that display on my watch (using the excellent “Urchin” watchface designed by Mark Wilson, which you can customize to suit your personal preference): BGs, basal activity, and then some detailed text similar to what’s in the OpenAPS pill (current BG, the change in BG, timestamp of BG, my netIOB, my eventual BGs, and any temp basal activity). In this case, it’s easy for me to glance and see that I was a bit low for a while, and am now flat but with negative netIOB, so it’s been high-temping a bit to level out the netIOB.

(I’ve always preferred a data-rich watchface – even back in the days of “open looping” with #DIYPS:)

https://twitter.com/danamlewis/status/652566409537433600/photo/1

(Here’s more about the Urchin watchface)

Here’s what the full logs from the rig tell me:

Example OpenAPS logs from the rig

This has a LOT of information in it (which is why it’s so awesome). There are messages shared by each step of the loop: when it’s listening for “silence” to make sure it can talk successfully to the pump; refreshing pump history; checking the clocks on devices and for fresh BGs; and then processing through the math on what the BG is, where it’s headed, and what needs to happen. You can also see from this example where autosensitivity is kicking in, adjusting basals slightly up, target down, and sensitivity down, etc. (And for those who aren’t testing oref1 features like SMB and UAM yet, you’ll get a glimpse of the additional information now in the logs about whether those features are currently enabled.)

(Here are some other logs you can watch, and how to run them)

Pro tip for OpenAPS users: if you’re logged into your rig, you just have to type l (the letter “L” but lower case) for it to bring up your logs!

So if you find yourself wondering: what did OpenAPS do/why did it do <thing>? Instead of wondering, start by looking at the logs.

And remember, if you don’t know what the problem is – the full logs are the best source of information for spotting the main problem. You can then use the information from the logs to ask about how to resolve a particular problem (Gitter is great for this!) – but part of troubleshooting well/finding out more is taking the first step of pulling up your logs, because anyone who is going to help you troubleshoot will need that information to figure out a solution.

And if you ever see someone say “RTFL”, instead of “read the manual” or “read the docs”, it means “read the logs”. 😉 :)

Choose One: What would you give up if you could? (With #OpenAPS, maybe you can – oref1 includes unannounced meals or “UAM”)

What do you have to do today (related to daily insulin dosing for diabetes) that you’d like to give up if you could? Counting carbs? Bolusing? Or what about outcomes – what if you could give up going low after a meal? Or reduce the amount that you spike?

How many of these 5 things do you think are possible to achieve together?

  • No need to bolus
  • No need to count carbs
  • Medium/high carb meals
  • 80%+ time in range
  • No hypoglycemia

How many can you manage with your current therapy and tools of choice? How many do you think will be possible with hybrid closed loop systems? Please think about (and maybe even write down) your answers before reading further to get our perspective.

With just a pump and CGM, it’s possible to get good time in range with proper boluses, counting carbs, and eating relatively low-carb (or getting lucky/spending a lot of time learning how to time your insulin with regular meals). Even with all that, some people still go low/have hypoglycemia. So, let’s call that a 2 (out of 5) that can be achieved simultaneously.

With a first-generation hybrid closed loop system like the original OpenAPS oref0 algorithm, it’s possible to get good time in range overnight, but achieving that at mealtimes would still require bolusing properly and counting carbs. But with perfect night-time BGs, it’s possible to achieve no hypoglycemia and 80% time in range with medium-carb meals (and high-carb meals with Eating Soon mode etc.). So, let’s call that a 3 (out of 5).

With some of the advanced features we added to OpenAPS with oref0 (like advanced meal assist, or “AMA” as we call it), it became a lot easier to achieve a 3 with less bolusing and less need to precisely count carbs. It also deals better with high-carb meals, and gives the user even more flexibility. So, let’s call that a 3.5.

A few months ago, when we began discussing how to further improve daily outcomes, we also began to discuss how to better deal with unannounced meals: when someone eats and boluses, but doesn’t enter carbs. (Or in some cases: eats, doesn’t enter carbs, and doesn’t even bolus.) How do we design for that situation, while still sticking to our safety principles and dosing safely?

I came up with this idea of “floating carbs” as a way to design a solution for this behavior. Essentially, we’ve learned that if BG spikes at a certain rate, it’s often related to carbs. We observed that AMA can appropriately respond to such a rise, while not dosing extra insulin if BG is not rising. That prompted the question: what if we had a “floating” amount of carbs hanging out there, which could be decayed and dosed upon with AMA if that rise in BG was detected? That led us to build in support for unannounced meals, or “UAM”. (But you’ll probably still see us talk about “floating carbs” some, too, because that was the original way we were thinking about solving the UAM problem.) This is where the suite of tools that make up oref1 came from. In addition to UAM, we also introduced supermicroboluses, or SMB for short. (For more background info about oref1 and SMB, read here.)
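As a toy illustration of the floating-carbs intuition (this is not the actual oref1 implementation; the threshold and function name are invented for illustration):

```python
def unannounced_carb_signal(bg_deltas, rise_threshold=5.0):
    """Toy heuristic: if the last few BG deltas (mg/dL per 5 minutes) show a
    sustained rise, treat that as evidence of unannounced carbs absorbing."""
    recent = bg_deltas[-3:]
    return len(recent) == 3 and all(d >= rise_threshold for d in recent)

# Rising steadily -> assume "floating" carbs remain and let AMA/SMB dose;
# flat or falling -> decay that assumption and don't dose extra insulin.
print(unannounced_carb_signal([1, 6, 7, 9]))   # True: sustained rise
print(unannounced_carb_signal([6, 2, -1, 0]))  # False: leveling off
```

The real logic in oref0 is considerably more careful about carb decay, safety limits, and the interaction with SMB.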

So with OpenAPS oref1, with SMB and floating carbs for UAM, we are finally at the point of achieving a solid 4 out of 5. And not just a single set of 4, but any 4 of the 5 (except we’d prefer you don’t choose hypoglycemia, of course):

  • With a low-carb meal, no hypoglycemia and 80%+ time in range are achievable without bolusing or counting carbs (with just an Eating Soon mode that triggers SMB).
  • With a regular meal, the user can either bolus for it (triggering floating carb UAM with SMB) or enter a rough carb count / meal announcement (triggering Eating Now SMB) and achieve 80% time in range.
  • If the user chooses to eat a regular meal and not bolus or enter a carb count (just an Eating Soon mode), the BG results won’t be as good, but oref1 will still handle it gracefully and bring BG back down without causing any hypoglycemia or extended hyperglycemia.

That is huge progress, of course. And we think that might be about as good as it’s possible to do with current-generation insulin-only pump therapy. To do better, we’d either need an APS that can dose glucagon and be configured for tight targets, or much faster insulin. The dual-hormone systems currently in development are targeting an average BG of 140 mg/dL, or an A1c of 6.5, which likely means >20% of time spent >160 mg/dL. And to achieve that, they do require meal announcements of the small/medium/large variety, similar to what oref1 needs. Fiasp is promising on the faster-insulin front, and might allow us to develop a future version of oref1 that could deal with completely unannounced and un-bolused meals, but it’s probably not fast enough to achieve 80% time in range on a high-carb diet without some sort of meal announcement or boluses.

But 4 out of 5 isn’t bad, especially when you get to pick which 4, and can pick differently for every meal.

Does that make OpenAPS a “real” artificial pancreas? Is it a hybrid closed loop artificial insulin delivery system? Do we care what it’s called? For Scott and me, the answer is no: instead of focusing on what it’s called, let’s focus on how different tools and techniques work, and on what we can do to continue to improve them.

Being Shuttleworth Funded with a Flash Grant as an independent patient researcher

Recently, I have been working on helping OpenAPS’ers collect our data and put it to good use in research (both by traditional researchers and by fellow patient researchers, or “citizen scientists”). As a result, I have had the opportunity to work closely with Madeleine Ball at Open Humans. (Open Humans is the platform we use for the OpenAPS Data Commons.)

It’s been awesome to collaborate with Madeleine on many fronts. She’s proven herself really willing to listen to ideas and suggestions for things to change, to make it easier both for individuals to donate their data to research and for researchers who want to use the platform. And, despite me not having the same level of technical skills, she shows a deep respect for people of all experiences and perspectives. She’s also, in general, a really great person.

As someone who is (perhaps uniquely) utilizing the platform as both a data donor and as a data researcher, it has been fantastic to be able to work through the process of data donation, project creation, and project utilization from both perspectives. And, it’s been great to contribute ideas and make tools (like some of my scripts to download and unpack Open Humans data) that can then be used by other researchers on Open Humans.

Madeleine was also selected this year to be a Shuttleworth Fellow, applying “open” principles to change how we share and study human health data, plus exploring new, participant-centered approaches for health data sharing, research, and citizen science. That means everything she’s doing is in almost perfect sync with what we are doing in the OpenAPS and #WeAreNotWaiting communities.

What I didn’t know until this past week was that, as a Shuttleworth Fellow, she was also able to nominate individuals for a Shuttleworth Flash Grant: a grant made to a collection of social change agents, no strings attached, in support of their work.

I was astonished to receive an email from the Shuttleworth Foundation saying that I had been nominated by Madeleine for a $5,000 Flash Grant, which goes to individuals they would like to support/reward/encourage in their work for social good.

Shuttleworth Funded

I am so blown away by the Flash Grant itself – and by the signal that this grant provides. This is the first of (hopefully) many organizations to recognize the importance of supporting independent patient researchers who are not affiliated with an institution, but rather with an online community. It’s incredibly meaningful for this research and work, which is centered around real needs of patients in the real world, to be funded, even to a small degree.

Many non-traditional researchers like me are unaffiliated with a traditional institution or organization. This means we do the research in our own time, funded solely by our own energy (and in some cases, our own resources). Time in and of itself is a valuable contribution to research (think of the opportunity costs). However, it is also costly to distribute and disseminate ideas learned from patient-driven research to more traditional researchers. Even ignoring travel costs, most scientific conferences do not have a patient research access program, which means patients in some cases are asked to pay $400 (or more) per person for a single-day pass to stand beside their poster if it is accepted for presentation at a conference. In some cases, patients have the personal resources and determination and are willing to pay that cost. But not every patient is able to do that. (And to do it year over year as they continue to do new ground-breaking research – that adds up, too, especially when you factor in travel, lodging, and the opportunity cost of being away from a day job.)

So what will I use the Flash Grant for? Here’s what I’ve decided to put it toward so far:

#1 – I plan to use it to fund my and Scott’s travel costs this year to ADA’s Scientific Sessions, where our poster on Autotune and data from the #WeAreNotWaiting community will be presented. (I’m still hoping to convince ADA to create a patient researcher program, rather than treating us like individuals walking in off the street; but if they again choose not to, it will take $800 for Scott and me to stand with the poster during the poster session.) Being at Scientific Sessions is incredibly valuable for us as researchers and developers, because we can have real-time conversations with traditional researchers who have not yet been introduced to some of our tools or the data collected and donated by the community. It’s one of the most valuable places for us to be in person in terms of facilitating new research partnerships, in addition to renewing and establishing relationships with device manufacturers who could (because our stuff is all open source and MIT licensed) utilize our code and tools in commercial devices to more broadly reach people with diabetes.

#2 – Hardware parts. In order to best support the OpenAPS community, Scott and I have also been supporting and contributing to the development of open source hardware like the Explorer Board. Keeping in mind that each version of the board produced needs to be tested to see if the instructions related to OpenAPS need to change, we have been buying every iteration of Explorer Board so we can ensure compatibility and ease of use, which adds up. Having some of this grant funding go toward hardware supplies to support a multitude of setup options is nice!

There are so many individuals who have contributed in various ways to OpenAPS and WeAreNotWaiting and the patient-driven research movements. I’m incredibly encouraged, with a new spurt of energy and motivation, after receiving this Flash Grant to continue to further build upon everyone’s work and to do as much as possible to support every person in our collective communities. Thank you again to Madeleine for the nomination, and to the Shuttleworth Foundation for the Flash Grant, for the financial and emotional support for our community!

Making it possible for researchers to work with #OpenAPS or general Nightscout data – and creating a complex json to csv command line tool that works with unknown schema

This is less of an OpenAPS/DIYPS/diabetes-related post, although that is normally what I blog about. However, since we created the #OpenAPS Data Commons on Open Humans to allow those of us who desire to donate our diabetes data to research, I have been spending a lot of time figuring out the process, from uploading your data to how data is managed and shared securely with researchers. The hardest part is helping researchers figure out how to handle the data – because we PWDs produce a lot of data :) . So this post explains some of the challenges of managing the data to get it into a researcher-friendly format. I have been greatly helped over the years by general-purpose open source work from other people, and one of the things that helps ME the most as a non-traditional programmer is plain-language posts explaining the thought process behind the tools and the attempted solution paths. Especially because sometimes the web pages and blog posts pop up higher in search than nitty-gritty tool documentation without context. (Plus, I’ve been taking my own advice about not letting myself hold me back from trying, even when I don’t know how to do things yet.) So that’s what this post is!

Background/inspiration for the project and the tools I had to build:

We’re using Nightscout, a remote data-viewing platform for diabetes data, made with love, open source, and freely available for anyone with diabetes to use. It’s one of the best ways to display not only continuous glucose monitor (CGM) data, but also data from our DIY closed loop artificial pancreases (#OpenAPS). It can store data from a number of different kinds and brands of diabetes devices (pumps, CGMs, manual data entries, etc.), which means it’s a rich source of data. As the number of DIY OpenAPS users grows, we estimate that our real-world use is overtaking the total hours of data from clinical trials of closed loop artificial pancreas systems. In the #WeAreNotWaiting spirit of moving quickly (rather than waiting years for research teams to collect and analyze their own data), we want to see what we can learn from OpenAPS usage, not only by donating data to help traditional researchers speed up their work, but also by co-designing research studies on the things of most value to the diabetes community.

Step 1: Data from users to Open Humans

I thought Step 1 would be the hardest. However, thanks to Madeleine Ball, John Costik, and others in the Nightscout community, a simple Nightscout Data Transfer App was created that enables people with Nightscout data to pop it into their Open Humans accounts. It’s then very easy to join different projects (like the OpenAPS Data Commons) and share your data with those projects. And as the volunteer administrator of the OpenAPS Data Commons, it’s also easy for me to provide data to researchers.

The biggest challenge at this stage was figuring out how much data to pull from the API. I have almost 3 years’ worth of DIY diabetes data, and I have had numerous devices uploading all at once over time…which makes for large chunks of data. Not everyone has this much data (or 6-7 rigs uploading constantly ;) ). Props to Madeleine for the patience in working with me to make sure the super users with large data sets will be able to use all of these tools!

Step 2: Sharing the data with researchers

This was easy. Yay for data-sharing tools like Dropbox.

Step 3: Researchers being able to use the data

Here’s where things started to get interesting. We have large data files that come in json format from Nightscout. I know some of the researchers we will be working with are probably very comfortable with tools that can take large, complex json files. However…not all will be, especially because we also want to encourage independent researchers to engage with the data for projects. So I had the belated realization that we need to do something other than hand over json files. We need to convert, at the least, to csv so the data can be easily viewed in Excel.

Sounds easy, right?

According to basic searches, there are roughly a gazillion ways to convert json to csv. There are even websites that will do it for you, without making you run anything on the command line. However, most of them require you to know the types of data, and the number of types, in advance, in order to construct headers in the csv file that make it readable and useful to a human.

This is where the DIY, infinite-possibility nature of all the kinds of diabetes tools anyone could be using with Nightscout (plus the infinite ways they can self-describe profiles, alarms, and methods of entering data) makes it tricky. Just eyeballing the data from two individuals, I couldn’t even count the hundred-plus types of data entry possibilities. This is definitely a job for the computer, but I had to figure out how to train the computer to deal with it.
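To give a flavor of the problem: you can’t write csv headers until you’ve seen every key path that occurs anywhere in the data. A quick illustration (my own sketch using jq, not part of the tools described below; it assumes the file is a top-level json array of records):

```sh
# List every distinct key path that appears in any record.
# Assumes entries.json is a top-level json array.
jq -r '.[] | paths(scalars) | map(tostring) | join(".")' entries.json | sort -u
```

On DIY data, that list runs to a hundred-plus distinct paths, which is exactly why fixed-header converters fall over.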

Again, json-to-csv tools are so common that I figured there HAD to be someone who had done this. Finally, after a dozen varying searches and trying a variety of command line tools, I found one web-based tool that would take json, create the schema without knowing the data types in advance, and convert it to csv. It was (is) super slick. I got very excited when I saw it linked to a GitHub repository, because that meant it was probably open source and I could use it. I didn’t see any instructions for how to use it on the command line, though, so I messaged the author on Twitter and found out that a command line version didn’t yet exist and was a not-yet-done TODO for him.

Sigh. Given this whole #WeAreNotWaiting thing (and given that I’ve promised to help some of the researchers figure this out so we can initiate some of the research projects), I needed to figure out how to convert this tool into a command line version.

So, I did.

  • I taught myself how to unzip the json files (I ended up picking `gzip -cd`, because it works on both Mac and Linux – see the snippet below).
  • I planned to then convert the web tool to work on the command line, and use it to translate the json files to csv.
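That first step is a one-liner. For example (file names here are hypothetical):

```sh
# Decompress a gzipped json file; -c writes to stdout, -d decompresses.
# Works the same on Mac and Linux.
gzip -cd entries.json.gz > entries.json
```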

But…remember the big file issue? It struck again. So I first had to figure out the best way to estimate the size and split the json into a series of files, without splitting it in a weird place and messing up the data. That became jsonsplit.sh, a tool to split a json file based on the size you give it (if you don’t specify, it defaults to something like 100,000 records).

FWIW: 100,000 records was too much for the more complex schema of the data I was working with, so I often did it in smaller chunks, but you can set it to whatever size you prefer.
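A hypothetical invocation (I’m sketching the arguments from the description above; check the repo for the actual usage):

```sh
# Split a large json file into chunks of at most 10,000 records each.
./jsonsplit.sh entries.json 10000
```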

So now “all” I had to do was:

  • Unzip the json
  • Break it down if it was too large, using jsonsplit.sh
  • Convert each of these files from json to csv

Phew. Each of these steps looks really simple now, but took a good chunk of time to figure out. Luckily, the author of the web tool had done much of the hard json-to-csv work, and Scott helped me figure out how to take the html-based version of the conversion and make it usable on the command line using javascript. That became complex-json2csv.js.
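So the conversion step for one chunk might look something like this (a hypothetical invocation; the script’s actual arguments may differ):

```sh
# Convert one already-split json chunk to csv.
node complex-json2csv.js entries-chunk1.json > entries-chunk1.csv
```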

Because I knew how hard this all was, and wanted other people to be able to easily use this tool if they had large, complex json with an unknown schema to deal with, I created a package.json so I could publish it to npm, so you can download and run it anywhere.
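If the npm package keeps the script’s name (an assumption on my part), grabbing it elsewhere could be as simple as:

```sh
# Hypothetical: assumes the npm package is named after the script.
npm install complex-json2csv
```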

I also had to create a script that would take all of the Open Humans data, unzip each file, run jsonsplit.sh and complex-json2csv.js, and organize the output in a useful way, given the existing file structure of the data. Therefore I also created an “OpenHumansDataTools” repository on Github, so that other researchers who will be using Nightscout-based Open Humans data can use this if they want to work with the data. (And there may be something useful to others using Open Humans even if they’re not using Nightscout data as their data source – again, see the “large, complex, challenging json where you don’t know the data types and the count of data types” issue. So this repo can link them to complex-json2csv.js and jsonsplit.sh for discovery purposes, as they’re general purpose tools.) That script is here.
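A rough sketch of what such a wrapper can look like (this is not the actual script; paths, chunk-file naming, and chunk size are all hypothetical):

```sh
#!/bin/bash
# For each member's gzipped download: unzip it, split it into chunks,
# and convert each chunk to csv.
for f in member-data/*.json.gz; do
  base="${f%.json.gz}"
  gzip -cd "$f" > "$base.json"          # step 1: unzip
  ./jsonsplit.sh "$base.json" 10000     # step 2: split (hypothetical args)
  for chunk in "$base"*-part*.json; do  # hypothetical chunk naming
    node complex-json2csv.js "$chunk" > "${chunk%.json}.csv"  # step 3: convert
  done
done
```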

My next TODO will be to write a script that takes only slices of data based on information shared as part of the surveys that go with the Nightscout data; i.e. if you started your DIY closed loop on X date, take data from 2 weeks prior and 6 weeks after, etc.
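For what it’s worth, that kind of slice could be sketched with jq (the `dateString` field and ISO-8601 date strings are assumptions on my part; the real script will need to match the actual survey and record fields):

```sh
# Keep only records between two dates. ISO-8601 strings compare
# correctly as plain strings, so string comparison is enough here.
jq --arg start "2017-01-01" --arg end "2017-02-26" \
   '[ .[] | select(.dateString >= $start and .dateString <= $end) ]' \
   entries.json > entries-slice.json
```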

I also created a pull request (PR) back to the original tool that inspired my work, in case the author wants to add it to his repository for others who also want to run his great stuff from the command line. I know my stuff isn’t perfect, but it works :) and I’m proud of being able to contribute to general-purpose open source in addition to diabetes-specific open source work. (Big thanks, as always, to everyone who devotes their work to open source for others to use!)

So now I can pass researchers json or csv files for use in their research. A number of studies are planning to request access to the OpenAPS Data Commons, and I’m excited about how work like this, making diabetes data more broadly available for research, will help improve our lives in the short and long term!