Presentations and poster content from @DanaMLewis at #2018ADA

As I mentioned, I am honored to have two presentations and a co-authored poster being presented at #2018ADA. As per my usual, I plan to post all content and make it fully available online as the embargo lifts. There will be three sets of content:

  • Poster 79-LB (Category 12-A), “Detecting Insulin Sensitivity Changes for Individuals with Type 1 Diabetes using ‘Autosensitivity’ from OpenAPS”, co-authored by Dana Lewis, Tim Street, Scott Leibrand, and Sayali Phatak.
  • Content from my Saturday presentation, ‘The Data behind DIY Diabetes—Opportunities for Collaboration and Ongoing Research’, which is part of the “The Diabetes Do-It-Yourself (DIY) Revolution” Symposium!
  • Content from my Monday presentation, ‘Improvements in A1c and Time-in-Range in DIY Closed-Loop (OpenAPS) Users’, co-authored by Dana Lewis, Scott Swain, and Tom Donner.

First up: the autosensitivity poster!

You can find the full write-up and content of the autosensitivity poster in a post over on OpenAPS.org. There’s also a Twitter thread if you’d like to share this poster with others on Twitter or elsewhere.

Summary: we ran autosensitivity retrospectively on the command line to assess patterns of sensitivity changes for 16 individuals who had donated data to the OpenAPS Data Commons. Many had normal distributions of sensitivity, but we found a few people who trended sensitive or resistant, indicating their underlying pump settings could likely benefit from a change.
2018 ADA poster on Autosensitivity from OpenAPS by DanaMLewis
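
For a rough sense of the kind of retrospective check this involved, here’s a minimal Python sketch (not the actual OpenAPS autosensitivity code) that summarizes a series of autosens ratios for one person. It assumes ratios below 1.0 indicate trending more sensitive and ratios above 1.0 indicate trending more resistant; the thresholds and example numbers are purely illustrative.

```python
from statistics import mean, median

def summarize_sensitivity(ratios):
    """Summarize a hypothetical list of autosensitivity ratios for one person
    (1.0 = settings match current sensitivity). Illustrative sketch only."""
    below = sum(r < 0.95 for r in ratios) / len(ratios)  # share trending sensitive
    above = sum(r > 1.05 for r in ratios) / len(ratios)  # share trending resistant
    summary = {
        "mean": round(mean(ratios), 2),
        "median": round(median(ratios), 2),
        "pct_below_0.95": round(below * 100, 1),
        "pct_above_1.05": round(above * 100, 1),
    }
    # If most ratios sit persistently on one side of 1.0, that suggests the
    # person's underlying pump settings may benefit from review.
    if below > 0.5:
        summary["trend"] = "sensitive"
    elif above > 0.5:
        summary["trend"] = "resistant"
    else:
        summary["trend"] = "balanced"
    return summary

print(summarize_sensitivity([0.90, 0.85, 1.00, 0.92, 0.88, 1.05, 0.90]))
```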

 

Presentation:
‘The Data behind DIY Diabetes—Opportunities for Collaboration and Ongoing Research’

This presentation was a big deal to me, as it was flanked by 3 other excellent presentations on the topic of DIY and diabetes. Jason Wittmer gave a great overview and context setting of DIY diabetes, ranging from DIY remote monitoring and CGM tools all the way to DIY closed loops like OpenAPS. Jason is a dad who created OpenAPS rigs for his son with T1D. Lorenzo Sandini spoke about the clinician’s perspective for when patients come into the office with DIY tools. He knows it from both sides – he’s using OpenAPS rigs, and also has patients who use OpenAPS. And after my presentation, Joyce Lee also spoke about the overarching landscape of diabetes and the role DIY plays in this emerging technology space.

Why did I present as part of this group today? One of the roles I’ve taken on in the last few years in the OpenAPS community (among others) is that of a collaborator and facilitator of research with and about the community. I put together the first outcomes study (see here in JDST or here in a blog post form on OpenAPS.org) in 2016. We presented a poster on Autotune last year at ADA (see here in a blog post form on OpenAPS.org). I’ve also worked to create and manage the OpenAPS Data Commons, as well as build tools for researchers to use this data, so individuals can easily and anonymously donate their DIY closed loop data for other research projects, lowering the friction and barriers for both patients and researchers. And, I’ve co-led or led several research projects with the community’s data as a result.

My presentation was therefore about setting the stage with background on OpenAPS & how we ended up creating the OpenAPS Data Commons; presenting a selection of research projects that have utilized data from the community; highlighting other research projects working with the OpenAPS community; announcing a new international collaboration (OPEN – more coming on that in the future!) for research with the DIY community; and hopefully encouraging other diabetes researchers to think about sharing their work, data, methods, tools, and insights as openly as possible to help us all move forward with improving the lives of people with diabetes.

That is, of course, quite an abbreviated summary! I’ve shared a thread on Twitter that goes into detail on each of the key points as part of the presentation, or there’s a version of this Twitter/presentation content also written below.

If you’re someone who wants to do research with retrospective data from the OpenAPS Data Commons, you can find out more about it here (including instructions on how to request data). And if you’re interested in prospective research, please do reach out as well!

Full content for those who don’t want to read Twitter:

Patients are often seen as passive recipients of care, but many of us PWDs have discovered that problems are opportunities to change things. My journey to DIY began after I was frustrated by my inability to hear CGM alarms at night. Four years ago, there was no way for me to access my own device data in real time OR retrospectively. Thanks to John Costik sharing his code, I was able to get my CGM data & send it to the cloud and down to my phone, creating a louder alarm. Scott and I then created an algorithm to push notifications to me to take action. This was an ‘open loop’ system we called #DIYPS. With Ben West’s help, we realized we could combine our algorithm with small, off-the-shelf hardware & a radio stick to automate insulin delivery. #OpenAPS was thus created, open sourcing all components of a DIY closed loop system so others could close the loop, too. An #OpenAPS rig consists of a small computer, radio chip, & battery. The hardware is constantly evolving. Many of us also use Nightscout to visualize our closed loop data and share it with loved ones.

(Slides 1–4)

I closed the loop in December of 2015. As people learned about it, I got pushback: “It works for you, but how do you know it’s going to work for others?” I didn’t, and I said so. But that didn’t mean I shouldn’t share what was working for me.

Once we had dozens of users of #OpenAPS, we presented a research study at #2016ADA, with 18 individuals sharing outcomes data on A1c, TIR, and QOL improvements. (See that publication here: https://twitter.com/danamlewis/status/763782789070192640 ). I was often asked to share my data for people to analyze, but I’m not representative of the entire #OpenAPS community. Plus, the community has kept growing: we estimate there are more than (n=1)*710+ (as of June 2018) people worldwide using different kinds of DIY APs. (Note: if you’d like to keep track of the growing #OpenAPS community, the count of loopers worldwide is updated periodically at https://openaps.org/outcomes ). I began to work with Open Humans to build the #OpenAPS Data Commons, enabling individuals to anonymously upload their data and consent to share it with the Data Commons.

(Slides 5–8)

Criteria for using the #OpenAPS Data Commons:

  • 1) share insights back with the community, especially if you find something about an individual’s data set where we should notify them
  • 2) publish in an accessible (and preferably open) manner

I’ve learned that not many are prepared to take advantage of the rich (and complex) data available from #OpenAPS users, and researchers come with varying backgrounds and skillsets. To aid them, I created a series of open source tools (described here: http://bit.ly/2l5ypxq, and available at https://github.com/danamlewis/OpenHumansDataTools ) to help researchers & patients working with the data.

(Slides 9–10)

We have a variety of research projects that have leveraged the anonymously donated, DIY closed loop data from the #OpenAPS Data Commons.

  • One research project, in collaboration with a Stanford team, evaluated published machine learning model predictions & #OpenAPS predictions. Some models (particularly linear regression) produced accurate predictions in the short term, but were less accurate over longer horizons when insulin peaks. This study is pending publication, but I’d like to note the challenge of more traditional research keeping pace with DIY innovation: the code (and data) studied was from January 2017, and the #OpenAPS prediction code has been updated twice since then.
  • In response to the feedback from the #2016ADA #OpenAPS Outcomes study we presented, a follow up study on #OpenAPS outcomes was created in partnership with a team at Johns Hopkins. That study will be presented on Monday, 6-6:15pm (352-OR).
  • Many people publicly share their outcomes with DIY closed loops online. Sulka Haro has shared his script to evaluate the reduction in daily manual diabetes interventions after they began using #OpenAPS: before, they made about 4.5 manual corrections per day; now they treat fewer than 1 per day.
  • #OpenAPS features such as autosensitivity automatically detect sensitivity changes and insulin needs, improving outcomes. (See above at the top of this post for the full poster content).
  • If you missed it at #2017ADA (see here: http://bit.ly/2rMBFmn), Autotune is a tool for assessing changes to basal rates, ISF, and carb ratio. It was developed for #OpenAPS users, but it can also be used by traditional pumpers (and some MDI users utilize it as well).

I’m also thrilled to share a new tool we’ve created: an #OpenAPS simulator to allow us to more easily back-test and compare settings changes & feature changes in #OpenAPS code.
(Slide 14)

  • We pulled a recent week of data for an n=1 adult PWD who does no-bolus, rough-carb-entry meal announcements, and ran the simulator to predict what the outcomes would be with no bolus and no meal announcement.

  • We also ran the simulator on data from an n=1 teen PWD who does no-bolus and no-meal-announcement in real life. The simulator tracked closely to his actual outcomes (validated this week with a lab A1c of 6.1).

The new #OpenAPS simulator will allow us to better test future algorithm changes and features across a diverse data set donated by DIY closed loop users.

There are many other studies & collaborations ongoing with the DIY community.

  • Michelle Litchman, Perry Gee, Lesly Kelly, and I have a paper pending review analyzing social-media-reported outcomes & themes from the DIY community.
  • There are also multiple other posters about DIY outcomes here at #2018ADA.
  • There are many topics of interest in the DIY community that we’d like to see studied, and for which we have data. These include “eating soon” (optimal insulin dosing for smaller post-prandial spikes) and variability in sensitivity across various ages, pregnancy, and the menstrual cycle.
  • I’m also thrilled to announce that funding will be awarded to OPEN (a new collaboration on Outcomes of Patients’ Evidence, with Novel, DIY-AP tech), a 36-month international collaboration assessing outcomes, QOL, further development, access to real-world AP tech, and more. (More to come on this soon!)

In summary: we don’t have a choice about living with diabetes. We *do* have a choice to DIY, and also to do research so we can learn more and improve the knowledge and availability of tools for us PWDs, more quickly. We would love to partner and collaborate with anyone interested in working with the DIY community, whether that means utilizing the #OpenAPS Data Commons for retrospective studies or designing prospective studies. If you take away one thing today, let it be this request: let’s all openly share our tools, data, and insights so we can make life with type 1 diabetes better, faster.

(Slides 22–23)

A huge thank you as always to the community: those who have donated and shared data; those who have helped develop, test, troubleshoot, and otherwise help power the #OpenAPS and other DIY diabetes communities.

(Slide 24)

Presentation:
Improvements in A1c and Time-in-Range in DIY Closed-Loop (OpenAPS) Users

(full tweet thread available here; or a description of this presentation below)

#OpenAPS is an open and transparent effort to make safe and effective Artificial Pancreas System (APS) technology widely available to reduce the burden of Type 1 diabetes. #OpenAPS evolved from my first DIY closed loop system and our desire to openly share what we’ve learned living with DIY closed loops. It takes a small, off-the-shelf computer; a radio; and a battery to communicate with existing insulin pumps and CGMs. As a PWD, I care a lot about safety: the safety reference design is the first thing in #OpenAPS that was shared, in order to help set expectations around what a DIY closed loop can (and cannot) do.

As I shared about my own DIY experience, people questioned whether it would work for others, or just for me. At #2016ADA, we presented an outcomes study with data from 18 of the first 40 DIY closed loop users. Feedback on that study included requests to evaluate CGM data, given concerns around the accuracy of self-reported outcomes.

This 2018 #OpenAPS outcomes study was the result. We performed a retrospective cross-over analysis of continuous BG readings recorded during 2-week segments 4-6 weeks before and after initiation of OpenAPS.

For this study, n=20, based on the availability of data that met the stringent protocol requirements (and the limited number of people who had both recorded that data and donated it to the #OpenAPS Data Commons in early 2017). Demographics show that, like the 2016 study, the people choosing to use #OpenAPS typically have a lower A1c than the average T1D population; have had diabetes for over a decade; and are long-time pump and CGM users. Like the 2016 study, this 2018 study found mean BG and TIR improved across all time categories (overall, daytime, and nighttime).

(Slides 28–32)

Overall, mean BG (mg/dl) improved (135.7 to 128.3) and mean estimated HbA1c improved (6.4% to 6.1%). TIR (70-180 mg/dl) increased from 75.8% to 82.2%. Time spent high and time spent low were both reduced, in addition to the eAG and estimated A1c reductions. Overnight (11pm-7am) improvements were smaller than the daytime improvements across these categories.
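
For context on how those mean BG numbers map to estimated A1c: the commonly used ADAG regression relates eAG (mg/dl) to A1c as eAG = 28.7 × A1c - 46.7, so rearranging gives an estimated A1c of (mean BG + 46.7) / 28.7, which matches the values above. A quick check in Python:

```python
def estimated_a1c(mean_bg_mgdl: float) -> float:
    """Estimate HbA1c (%) from mean glucose (mg/dl) using the ADAG
    regression eAG = 28.7 * A1c - 46.7, rearranged to solve for A1c."""
    return (mean_bg_mgdl + 46.7) / 28.7

print(round(estimated_a1c(135.7), 1))  # ~6.4% (pre-looping mean BG)
print(round(estimated_a1c(128.3), 1))  # ~6.1% (post-looping mean BG)
```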

Notably: although this study primarily focused on a 4-6 week time frame pre-looping vs. 4-6 weeks post-looping, the improvements in all categories are sustained over time by #OpenAPS users.

(Slides 33–34)

Conclusion: even with tight initial control, persons with T1D saw meaningful improvements in estimated A1c and TIR, and a reduction in time spent high and low, during the day and at night, after initiating #OpenAPS. Although this study focused on BG data from CGM, don’t overlook the additional QOL benefits when analyzing the benefits of hybrid closed loop therapy or designing future studies! See the examples shared by Sulka Haro and Jason Wittmer for the quality-of-life impacts of #OpenAPS.

A huge thank you to the community: those who have donated and shared data; those who have helped develop, test, troubleshoot, and otherwise help power the #OpenAPS and other DIY diabetes communities.

And, a special thank you to my co-authors, Scott Swain & Tom Donner, for the collaboration on this study.

Getting ready for #2018ADA (@DanaMLewis) & preparing to encourage photography

We’re a few weeks away from the 78th American Diabetes Association Scientific Sessions (aka #2018ADA), and I’m getting excited. Partially because of the research I have the honor of presenting, but also because ADA has made strides to (finally) update their photography policy and allow individual presenters to authorize photography & sharing of their content. Yay!

As a result of preparing to encourage people to take pictures & share any and all content from my presentations, I started putting together my slides for each presentation, including the slide about allowing photography, which I’ll also state verbally at the start of each presentation. Interestingly to me, though, ADA only provided an icon for discouraging photography, saying that if staff notice that icon in any photos, the photographer will be asked to take those photos down. I don’t want any confusion (in past years, despite explicit permission, people have been asked to take down photos of my work), so I wanted to include obvious ‘photography is approved’ icons.

And this is what I landed on for a photography encouraged slide, and the footer of all my other slides:

(Slide examples: the ‘photography encouraged’ slide, an example content slide encouraging photography, and the ‘photography encouraged’ footer used on my other slides)

And, if anyone else plans to encourage (allow) photography and would like to use this slide design, you can find my example slide deck here that you are welcome to use: http://bit.ly/2018ADAexampleslides

I used camera and check mark icons which are licensed to be freely used; and I also licensed this slide deck and all content to be freely used by all! I hope it’s helpful.

Where you’ll find me at #2018ADA

And if you’re wondering where and what I’ll be presenting on with these slides…I’ll be sharing new content in a few different times and places!

On Saturday, I’m thrilled there is a full, 2-hour session on DIY-related content, and to get to share the stage with Jason Wittmer, Lorenzo Sandini, and Joyce Lee. That’s 1:45-3:45pm (Eastern), “The Diabetes Do-It-Yourself (DIY) Revolution”, in W415C (Valencia Ballroom). I’ll be discussing some of the data & research in DIY diabetes! A huge thanks to Joshua Miller for championing and moderating this session.

I’m also thrilled that a poster has been accepted on one of the projects from my RWJF grant work, in partnership with Tim Street (as well as Scott Leibrand and Sayali Phatak, who is heading our data science work for Opening Pathways). The embargo lifts on Saturday morning (content will be shared online then), and the poster will be displayed Saturday, Sunday, and Monday. Scott and I will also be present with the poster on Monday during the poster session from 12-1pm.

And last but not least, there is also an oral presentation on Monday evening with a new study on outcomes data from using OpenAPS. I’ll be presenting during the 4:30-6:30pm session (again in W415C (Valencia Ballroom)), likely during the 6-6:15pm slot. I’m thrilled that Scott Swain & Tom Donner, who partnered on this study & work, will also be there to help answer questions about this study!

As we have done in the past (see last year’s poster, for example), we plan to share all of this content online once the embargo lifts, in addition to the in-person presentations and poster discussions.

A huge thanks, as always, goes to the many dozens of people who have contributed to this DIY community in so many ways: development, testing, support, feedback, documentation, data donation, and more! <3

Women in open source make a difference

I was incredibly honored to find out that I had been nominated for the 2018 Women in Open Source award (and even more blown away to learn that I am one of the finalists). I wasn’t familiar with the award, and when I looked it up to learn more about it, the finalist list for the last few years gives me some serious imposter syndrome! Aside from that, there were a few things that caught my eye – and one of them was a citation from a study that found that only 11% of people contributing in open source are women.

To me, this number both makes sense, and doesn’t.

Why it makes sense (to me): open source can be hard on women.

I’ve been doing things in open source since 2014, falling into it because of DIYPS and because of getting to know people like Ben who are passionate open source advocates. Because I was helped by open source work, it was a key driver for my own passion behind making our work with OpenAPS open source, and is why I’m currently working on developing a series of open source tools to help researchers working with diabetes data. I don’t know that I would have done anything open source had I not found the perfect series of projects that led me there.

While there are many great people in the diabetes open source community, in the middle of 2014 I wrote this blog post about being female and being discounted. It was a hard post to write. But I felt it was important, because one of the things both Scott and I noticed is that a lot of the attitudes behind this seemed to be subconscious: directing technical questions about our project to Scott only; refusing to direct any substantial questions to me, even after Scott specifically redirected questions to me; etc. The only way I saw to (begin to) deal with the problem was to address it head on.

And, for me, things have improved with time. But it hasn’t gone away, and it still requires active addressing about once a month or so. And yes – these are (relatively) minor problems compared to what some women experience in open source, or in tech. But it’s some of the most common, frustrating friction that can easily drive women away when they get tired of experiencing stuff like that. And when they go away, it’s a loss for everyone.

Why this number doesn’t make sense (to me): women contribute incredible value to open source, and are high-volume contributors, especially if you look beyond the narrow definition applied to open source coding.

Everywhere I turn, I see women participating in open source. I see Kate Farnsworth and Christine Deltrap, two incredible individuals who have made watchfaces used by thousands of families to remotely monitor their children’s blood sugars. I see Katie DiSimone, who has written hundreds of lines of documentation, and answers hundreds of technical troubleshooting questions across several channels. I see Mad Price Ball, who leads the Open Humans Foundation with open source work (*and* is an amazing mentor to women like me, who have non-traditional development backgrounds). I see Karen Sandler, a fierce advocate for making software open source, who herself is a finalist for the WOS award, too!

I also see a lot of my own contributions in open source, especially in the early days when Scott was the one doing most of the committing to Github for the tools we were building. Those were part of why I was told I was discounted in 2014, because my work didn’t “count”. Today when someone goes and looks at Github, if they look at the wrong toolkit (or just one, for example), it gets said that “Dana didn’t do anything on OpenAPS”. (Heh).

So I know there are also other women out there whose work is being overlooked when counting who’s doing open source. However, this type of work is absolutely crucial to open source projects, and these contributors drive an incredible amount of value. I’m glad the Red Hat Women in Open Source Awards site acknowledged this, and made this list:

  • Code and programming.
  • Quality assurance and bug triage.
  • Involvement in open hardware.
  • System administration and infrastructure.
  • Design, artwork, user experience, and marketing.
  • Documentation, tutorials, and other communications.
  • Translation and internationalization.
  • Open content.
  • Community advocacy and community management.
  • Intellectual property advocacy and legal reform.
  • Open source methodology.

The list was partially to help encourage people to nominate women, and also to help women recognize all of the activities they do that are open source. And it was helpful to me, too. Because of that list, instead of a handful of key examples of open source activity by women, I can name dozens of women. I bet you can, too. There’s so much incredible open source activity and value that happens in places outside of commit history, and if we want to recognize and acknowledge the work of everyone in open source communities, we should do a better job of acknowledging *all* of these types of activities, not just recognizing individuals (male or female) who have a traditional code-based commit history.

So if you’re reading this, it’s likely you’re a supporter of women in open source communities. Thank you for that! But I’d like to ask you to do two specific things.

1) Actively recognize the women working with you in open source. The internet can be a hard place to be, let alone work, when you are female. Help lift women up; recognize their work; and help them grow their skills.

2) Ok, this one is optional :) But if you’ve read all this way, you might consider clicking here and going to the Women in Open Source Awards site and voting for one of the finalists in each category. It’s one vote per email address. Thanks!

Acknowledging all contributions in open source by DanaMLewis

Why Open Humans is an essential part of my work to change the future of healthcare research

I’ve written about Open Humans before; both in terms of how we’re creating Data Commons there for people using Nightscout and DIY closed loops like OpenAPS to donate data for research, as well as building tools to help other researchers on the Open Humans platform. Madeleine Ball asked me to share some more about the background of the community’s work and interactions with Open Humans, along with how it will play into the Opening Pathways grant work, so here it is! This is also posted on the OpenHumans blog. Thanks, Madeleine, and Open Humans!

 

So, what do you like about Open Humans?

Health data is important to individuals, including myself, and I think it’s important that we as a society find ways to allow individuals to choose when and how we share our data. Open Humans makes that very easy, and I love being able to work with the Open Humans team to create tools like the Nightscout Data Transfer uploader tool that further anonymizes data uploads. As an individual, this makes it easy to upload my own diabetes data (continuous glucose monitoring data, insulin dosing data, food info, and other data) and share it with projects that I trust. As a researcher, and as a partner to other researchers, it makes it easy to build Data Commons projects on Open Humans to leverage data from the DIY artificial pancreas community to further healthcare research overall.

Wait, “artificial pancreas”? What’s that?

I helped build a DIY “artificial pancreas” that is really an “automated insulin delivery system”. That means a small computer & radio device that can get data from an insulin pump & continuous glucose monitor, process the data and decide what needs to be done, and send commands to adjust the insulin dosing that the insulin pump is doing. Read, write, read, rinse, repeat!
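
To make that read-decide-act cycle concrete, here’s a deliberately simplified sketch. The dosing logic and device functions below are hypothetical placeholders for illustration only, not the actual OpenAPS (oref0) algorithm or device drivers.

```python
from dataclasses import dataclass

@dataclass
class TempBasal:
    rate_units_per_hour: float
    duration_minutes: int

def decide_temp_basal(glucose_mgdl: float, scheduled_basal: float,
                      target_mgdl: float = 100.0) -> TempBasal:
    # Toy decision logic for illustration only: nudge insulin delivery up
    # when glucose is above target, and cut it back when below target.
    if glucose_mgdl > target_mgdl + 20:
        return TempBasal(scheduled_basal * 1.3, 30)
    if glucose_mgdl < target_mgdl - 20:
        return TempBasal(scheduled_basal * 0.5, 30)
    return TempBasal(scheduled_basal, 30)

def run_loop_once(read_cgm, read_scheduled_basal, set_temp_basal):
    glucose = read_cgm()                                # read: latest CGM value
    basal = read_scheduled_basal()                      # read: current basal rate
    recommendation = decide_temp_basal(glucose, basal)  # decide
    set_temp_basal(recommendation)                      # act: command the pump

if __name__ == "__main__":
    # Stand-in device functions so the sketch runs end to end;
    # a real rig repeats this cycle roughly every five minutes.
    run_loop_once(lambda: 140.0, lambda: 1.0, print)
```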

I got into this because, as a patient, I rely on my medical equipment. I want my equipment to be better, for me and everyone else. Medical equipment often isn’t perfect. “One size fits all” really doesn’t fit all. In 2013, I built a smarter alarm system for my continuous glucose monitor to make louder alarms. In 2014, with the partnership of others like Ben West who is also a passionate advocate for understanding medical devices, I “closed the loop” and built a hybrid closed loop artificial pancreas system for myself. In early 2015, we open sourced it, launching the OpenAPS movement to make this kind of technology more broadly accessible to those who wanted it.

You must be the only one who’s doing something like this

Actually, no. There are more than 400 people worldwide using various types of DIY closed loop systems – and that’s a low estimate! It’s neat to live during a time when off-the-shelf hardware, existing medical devices, and open source software can be paired to improve our lives. There are also half a dozen (or more) other DIY solutions in the diabetes community, and likely other examples (think 3D-printed prosthetics, etc.) in other types of communities, too. And there should be even more than there are – which is what I’m hoping to work on.

So what exactly is your project that’s being funded?

I created the OpenAPS Data Commons to address a few issues. First, to stop researchers from emailing and asking me for my individual data. I by no means represent all other DIY closed loopers or people with diabetes! Second, the Data Commons approach allows people to donate their data anonymously to research; since it’s anonymized, it is often IRB-exempt. It also makes this data available to people (patient researchers) who aren’t affiliated with an organization and don’t need IRB approval or anything fancy, and just need data to test new algorithm features or investigate theories.

But not everyone implicitly knows how to do research. Many people learn research skills, but not everyone has the wherewithal and time to do so. Or maybe they don’t want to become a data science expert! For a variety of reasons, that’s why we decided to create an on-call data science and research team that can provide support around forming research questions and working through the process of scientific discovery, as well as provide data science resources to expedite the research process. This portion of the project does focus on the diabetes community, since we have multiple Data Commons and communities of people donating data for research, as well as dozens of citizen scientists and researchers already in action (with more interested in getting involved).

What else does Open Humans have to do with it?

Since I’ve been administering the Nightscout and OpenAPS Data Commons, I’ve spent a lot of time on the Open Humans site as both a “participant” of research donating my data, as well as a “researcher” who is pulling down and using data for research (and working to get it to other researchers). I’ve been able to work closely with Madeleine and suggest the addition of a few features to make it easier to use for research and downloading large data sets from projects. I’ve also been documenting some tools I’ve created (like a complex json to csv converter; scripts to pull data from multiple OH download files and into a single file for analysis; plus writing up more details about how to work with data files coming from Nightscout into OH), also with the goal of facilitating more researchers to be able to dive in and do research without needing specific tool or technical experience.

It’s also great to work with a platform like Open Humans that allows us to share data or use data for multiple projects simultaneously. There’s no burdensome data collection or study procedures for individuals to be able to contribute to numerous research projects where their data is useful. People consent to share their data with the commons, fill out an optional survey (which will save them from having to repeat basic demographic-type information that every research project is interested in), and are done!

Are you *only* working with the diabetes community?

Not at all. The first part of our project does focus on learning best practices and lessons learned from the DIY diabetes communities, but with an eye toward creating an open source toolkit and materials that will be of use to many other patient health communities. My goal is to help as many other patient health communities as possible spark similar #WeAreNotWaiting projects in the areas that are of most use to them, based on their needs.

How can I find out more about this work?
Make sure to read our project announcement blog post if you haven’t already – it’s got some calls to action for people with diabetes; people interested in leading projects in other health communities; as well as other researchers interested in collaborating! Also, follow me on Twitter, for more posts about this work in progress!

Opening pathways for discovery, research, and innovation in health and healthcare

How can we get more patients and other communities to leverage the benefits of the #WeAreNotWaiting mindset for research, development, and innovation in health (and healthcare)?

That’s a question I’ve been asking myself for two years, after seeing the diverse efforts and valuable outpourings from the DIY diabetes community (ranging from amazing remote monitoring solutions for CGM to algorithms, hardware, and other software for automated insulin delivery systems).

But, how to scale? In diabetes, we’re perhaps uniquely positioned given our data-driven disease. However, I believe that the data and innovation approach we’ve taken in diabetes can help many other types of patient communities as well. I just didn’t know how to help scale it… until recently.

When a group of us from the OpenAPS community participated in the Quantified Self Public Health Symposium in 2016, it prompted some follow-up conversations with various academic researchers, including Eric Hekler from Arizona State University (ASU).

Eric started a conversation, and kept asking me: What could you do if you partnered with academic researchers? How can traditional researchers help the DIY community, OpenAPS or otherwise?

That also sparked a conversation with Paul Tarini, a senior program officer at the Robert Wood Johnson Foundation (RWJF), about potential funding for a project.

(Important to state here: OpenAPS itself is not a funded project. It has not been, and will not be. It is 100% DIY, non-commercial, and it has been built by a community of volunteers.)

What I wanted to talk to RWJF about was funding a collaboration with academic researchers for studying data and innovation coming out of the community; and to ultimately identify needs and build resources to help scale this type of community effort and empower other patient communities as well.

It took over a year, but we were able to work through initial project proposals and were then invited to submit a full proposal. And on Wednesday (September 6, 2017), I found out that we have been awarded the grant, and this project work will be funded by the Robert Wood Johnson Foundation. The project officially begins on September 15 and will run for 18 months.

So what exactly is this project?

Our project is titled “Learning to not wait: Opening pathways for discovery, research, and innovation in health and healthcare.”

It entails a number of things.

    1. We are creating an on-call data science team to support research in the DIY community. More details will be forthcoming, but essentially this team is there to help do research on the myriad of questions bubbling out of the community. For example – how does sensitivity change during growth spurts, during periods of inactivity, or when changing insulin types? What are some of the most successful mealtime insulin dosing strategies? Etc. People will be able to submit ideas, and get help formulating the idea into a researchable question, and get the research done.
    2. Studying the process of research when done by patients, and the barriers they and their research run into when spreading this scientific knowledge. I personally know there are a lot of barriers, but we need to document them and find solutions. (There is a lot of prejudice, and there are perceived stigmas, toward patient researchers doing this type of scientific work, around things like quality of research, methods of distributing knowledge, etc.)
    3. Convening a meeting with patients, traditional researchers, legal experts, and others in this innovative research space to discuss and address some of the known (and still being discovered) barriers to this type of research. I envision a white-paper-type publication coming out of this meeting to document the lay of the land as it is.
    4. Creating toolkit-type resources based on what we’ve learned and are learning in this project for helping patients new to DIY and this type of research take on various levels of research or innovation activity. Part of our project’s scope of work, in #WeAreNotWaiting spirit, includes beta testing with 2-3 other patient communities, so we can get feedback and iterate and roll these out as quickly as possible.

Our project has a couple of principles that I feel strongly about, and am also very proud of in approaching this body of work.

  • I am the scientific Principal Investigator of this project. This is unique in the world of grant-funded research, where a patient is driving the scientific discovery process. (I’m proud and very appreciative to have two amazing co-PIs who are helping with some of the administrative work, since the grant is being administered through the Arizona State University Foundation, which is being an awesome partner given the uniqueness of this situation*.) My co-PIs are Eric Hekler and Erik Johnston. The other members of the team include John Harlow, who’s a MacArthur Foundation Postdoctoral Fellow; Sayali Phatak, a PhD student at ASU; and Keren Hirsch from the ASU Decision Theater.
  • #WeAreNotWaiting is the mantra for this project and our entire team. We plan to be as efficient as possible in doing the project work, which includes being as timely as possible with sharing findings back with the community as soon as they’re ready (a given; there’s no reason to wait) as well as finding ways to publish that are faster than the very traditional academic publishing process, and being thoughtful about the right audiences outside the patient community for communicating about this project’s work.
  • Always asking why. As a brand new PI, I have a lot to learn. But as a non-traditional PI, I am also running into a lot of things that are done the way they’d be done if I were working inside a traditional organization. I plan to explore and challenge as many of these as I can, and to document the decisions I make in this project as I come to those forks in the road. In some cases, I choose the easier path because, for my project/work/focus, it does not matter. In other cases, based on principle, I choose the harder, path-blazing approach.

* About the uniqueness of this project and the administrative details

Since I’m an individual patient researcher, not affiliated with an organization, we decided to make the Arizona State University Foundation the official grantee financial organization, since that’s where my co-PIs were. But, true to the nature of this project, I want to document the challenges and opportunities that come with that, so there is more to come about the interesting lessons learned from putting together the proposal and going through the grant approval process once we heard the grant would be awarded. That way, future patient researchers have a leg up on what is coming when taking on this type of project and are aware of what this approach entailed. The short version is that I am a subcontractor to ASU for purposes of the grant, but I am not employed by or otherwise affiliated with ASU. Props to the many people at ASU who learned about me and this project during the approval process and rolled with it / helped make it happen.

So, what’s next? When do you start? What are you waiting on?!

Coming super soon – a project website (now here) with more details about this project.

For my fellow PWDs:

  • Stay tuned for the project website going live, which will also include more details about how individuals in the diabetes community can pitch ideas/get started working with the on-call data science team.

For patients reading this who are members of other patient disease communities:

  • Ping me if you’re SUPER excited and can’t wait to tell me :), or stay tuned for more info about the process for proposing that your patient community be one of the communities with whom we beta test some of the tools/resources developed toward the latter phases of this project.

If you’re someone else who’s interested in this work (such as a legal expert, other researcher, etc.):

  • Also ping me if you’re interested in hearing more about the meeting we plan to convene with a small multidisciplinary group to discuss and address barriers of patient-driven research. Even if we can’t get everyone interested to attend the in-person meeting, I would still love your input and collaboration for the white paper and/or other publications and intersections with this project.

For everyone else:

  • Please do let me know if there’s a particular aspect of this project that you’re curious to learn more about – whether it’s some of what I’m facing and documenting as a patient PI researcher, or otherwise. That’ll help me prioritize some of the blog posts and articles I’m writing about this process!

Thanks to everyone who managed to read this ginormous blog post.

I am incredibly excited about the project, and having resources to focus on how patients and non-traditional actors in healthcare can drive research, development, innovation, and knowledge sharing in non-traditional methods and from the ground up, plus prioritize and change the healthcare research agenda. Like my work in OpenAPS that stands on the shoulders of so many, I’m hoping this project is the first of many and gets to a place for others to leverage this work and take it beyond the scope of what we’ve all imagined is currently possible.

A huge thanks to the team partnering with me on this work; to ASU for being a great partner as an organization; to the Robert Wood Johnson Foundation for supporting this project (and in particular to our program manager, Paul Tarini, for his ongoing support throughout this entire process); and many extra thanks to Scott and all my family and friends for supporting me throughout the proposal process and being the recipients of some VERY excited and !!! filled texts when I found out we had officially been awarded the grant for this project.

Making it possible for researchers to work with #OpenAPS or general Nightscout data – and creating a complex json to csv command line tool that works with unknown schema

This is less of an OpenAPS/DIYPS/diabetes-related post, although that is normally what I blog about. However, since we created the #OpenAPS Data Commons on Open Humans to allow those of us who desire to donate our diabetes data to research, I have been spending a lot of time figuring out the process, from uploading your data to how data is managed and shared securely with researchers. The hardest part is helping researchers figure out how to handle the data – because we PWDs produce a lot of data :) . So this post explains some of the challenges of getting the data into a researcher-friendly format. I have been greatly helped over the years by general-purpose open source work from other people, and one of the things that helps ME the most as a non-traditional programmer is plain language posts explaining the thought process behind the tools and the attempted solution paths. Especially because sometimes the web pages and blog posts pop higher in search than nitty gritty tool documentation without context. (Plus, I’ve been taking my own advice about not letting myself hold me back from trying, even when I don’t know how to do things yet.) So that’s what this post is!

OH that I "certainly stress tested" a tool with lots of data

Background/inspiration for the project and the tools I had to build:

We’re using Nightscout, which is a remote data-viewing platform for diabetes data, made with love, open source, and freely available for anyone with diabetes to use. It’s one of the best ways to display not only continuous glucose monitor (CGM) data, but also data from our DIY closed loop artificial pancreases (#OpenAPS). It can store data from a number of different kinds and brands of diabetes devices (pumps, CGMs, manual data entries, etc.), which means it’s a rich source of data. As the number of DIY OpenAPS users grows, we estimate that our real-world use is overtaking the total hours of data from clinical trials of closed loop artificial pancreas systems. In the #WeAreNotWaiting spirit of moving quickly (rather than waiting years for research teams to collect and analyze their own data), we want to see what we can learn from OpenAPS usage, not only by donating data to help traditional researchers speed up their work, but also by co-designing research studies of the things of most value to the diabetes community.

Step 1: Data from users to Open Humans

I thought Step 1 would be the hardest. However, thanks to Madeleine Ball, John Costik, and others in the Nightscout community, a simple Nightscout Data Transfer App was created that enables people with Nightscout data to pop it into their Open Humans accounts. It’s then very easy to join different projects (like the OpenAPS Data Commons) and share your data with those projects. And as the volunteer administrator of the OpenAPS Data Commons, it’s also easy for me to provide data to researchers.

The biggest challenge at this stage was figuring out how much data to pull from the API. I have almost 3 years’ worth of DIY diabetes data, and I have had numerous devices uploading all at once over time…which makes for large chunks of data. Not everyone has this much data (or 6-7 rigs uploading constantly ;)). Props to Madeleine for her patience in working with me to make sure the super users with large data sets will be able to use all of these tools!

Step 2: Sharing the data with researchers

This was easy. Yay for data-sharing tools like Dropbox.

Step 3: Researchers being able to use the data

Here’s where things started to get interesting. We have large data files that come in json format from Nightscout. I know some researchers we will be working with are probably very comfortable working with tools that can take large, complex json files. However…not all will be, especially because we also want to encourage independent researchers to engage with the data for projects. So I had the belated realization that we need to do something other than hand over json files. We need to convert, at the least, to csv so it can be easily viewed in Excel.

Sounds easy, right?

According to basic searches, there are roughly a gazillion ways to convert json to csv. There are even websites that will do it for you, without making you run it on the command line. However, most of them require you to know the types of data and the number of types in advance, in order to construct headers in the csv file to make it readable and useful to a human.

This is where the DIY and infinite-possibility nature of all the kinds of diabetes tools anyone could be using with Nightscout, plus the infinite ways they can self-describe profiles and alarms and methods of entering data, makes it tricky. Just from eyeballing the data of two individuals, I was unable to find and count the hundred-plus types of data entry possibilities. This is definitely a job for the computer, but I had to figure out how to train the computer to deal with this.

Again, json to csv tools are so common that I figured there HAD to be someone who had done this. Finally, after a dozen varying searches and trying a variety of command line tools, I found one web-based tool that would take json, create the schema without knowing the data types in advance, and convert it to csv. It was (is) super slick. I got very excited when I saw it linked to a Github repository, because that meant it was probably open source and I could use it. I didn’t see any instructions for how to use it on the command line, though, so I messaged the author on Twitter and found out that a command line version didn’t yet exist and was a not-yet-done TODO for him.

Sigh. Given this whole #WeAreNotWaiting thing (and given I’ve promised to help some of the researchers in figuring this out so we can initiate some of the research projects), I needed to figure out how to convert this tool into a command line version.

So, I did.

  • I taught myself how to unzip json files (ended up picking `gzip -cd`, because it works on both Mac and Linux)
  • I planned to then convert the web tool to be able to work on the command line, and use it to translate the json files to csv.

But…remember the big file issue? It struck again. So I first had to figure out the best way to estimate the size and split the json into a series of files, without splitting it in a weird place and messing up the data. That became jsonsplit.sh, a tool to split a json file based on the number of records you give it (if you don’t specify, it defaults to something like 100,000 records).

FWIW: 100,000 records was too much for the more complex schema of the data I was working with, so I often did it in smaller chunks, but you can set it to whatever size you prefer.
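
For anyone more comfortable in Python than bash, here’s a minimal sketch of the same idea. This is not jsonsplit.sh itself, and it assumes the input file is a single JSON array that fits in memory:

```python
import json

def split_json_array(path, records_per_file=100000):
    # Split one large JSON array file into smaller JSON files of at most
    # `records_per_file` records each, without splitting any record apart.
    with open(path) as f:
        records = json.load(f)  # assumes the whole file is one JSON array
    for i in range(0, len(records), records_per_file):
        chunk = records[i:i + records_per_file]
        out_path = f"{path}.part{i // records_per_file + 1}.json"
        with open(out_path, "w") as out:
            json.dump(chunk, out)
        print(f"wrote {len(chunk)} records to {out_path}")

# Example: split_json_array("entries.json", records_per_file=20000)
```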

So now “all” I had to do was:

  • Unzip the json
  • Break it down if it was too large, using jsonsplit.sh
  • Convert each of these files from json to csv

Phew. Each of these steps looks really simple now, but took a good chunk of time to figure out. Luckily, the author of the web tool had done much of the hard json-to-csv work, and Scott helped me figure out how to take the html-based version of the conversion and make it usable on the command line using javascript. That became complex-json2csv.js.
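
To show the general approach (rather than the actual complex-json2csv.js code), here’s a rough Python sketch of converting JSON with an unknown, varying schema into CSV: flatten each record into dot-notation keys, then build the CSV header from the union of every key seen. The file names in the example are hypothetical.

```python
import csv
import json

def flatten(obj, prefix=""):
    # Flatten nested dicts/lists into dot-notation keys, e.g. "openaps.iob.0.iob".
    flat = {}
    if isinstance(obj, dict):
        for key, value in obj.items():
            flat.update(flatten(value, f"{prefix}{key}."))
    elif isinstance(obj, list):
        for i, value in enumerate(obj):
            flat.update(flatten(value, f"{prefix}{i}."))
    else:
        flat[prefix[:-1]] = obj
    return flat

def json_to_csv(json_path, csv_path):
    # Build the header from the union of all flattened keys, so the schema
    # doesn't need to be known (or consistent) in advance.
    with open(json_path) as f:
        rows = [flatten(record) for record in json.load(f)]
    header = sorted({key for row in rows for key in row})
    with open(csv_path, "w", newline="") as out:
        writer = csv.DictWriter(out, fieldnames=header)
        writer.writeheader()
        writer.writerows(rows)

# Example: json_to_csv("devicestatus.part1.json", "devicestatus.part1.csv")
```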

Because I knew how hard this all was, and wanted other people to be able to easily use this tool if they had large, complex json with unknown schema to deal with, I created a package.json so I could publish it to npm so you can download and run it anywhere.

I also had to create a script that would pass it all of the Open Humans data; unzip the file; run jsonsplit.sh, run complex-json2csv.js, and organize the data in a useful way, given the existing file structure of the data. Therefore I also created an “OpenHumansDataTools” repository on Github, so that other researchers who will be using Nightscout-based Open Humans data can use this if they want to work with the data. (And, there may be something useful to others using Open Humans even if they’re not using Nightscout data as their data source – again, see “large, complex, challenging json since you don’t know the data type and count of data types” issue. So this repo can link them to complex-json2csv.js and jsonsplit.sh for discovery purposes, as they’re general purpose tools.) That script is here.

My next TODO will be to write a script to take only slices of data, based on information shared as part of the surveys that go with the Nightscout data; i.e., if you started your DIY closed loop on X date, take data from 2 weeks prior and 6 weeks after, etc.
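
That slicing script doesn’t exist yet, but a rough sketch of the idea might look like the following. It assumes Nightscout-style entries with a `dateString` timestamp field (an assumption about the data format), and both the file name and the loop start date in the example are hypothetical.

```python
import json
from datetime import datetime, timedelta

def slice_around_loop_start(entries_path, loop_start, weeks_before=2, weeks_after=6):
    # Keep only records from `weeks_before` weeks prior to `weeks_after` weeks
    # after a DIY closed loop start date.
    start = loop_start - timedelta(weeks=weeks_before)
    end = loop_start + timedelta(weeks=weeks_after)
    with open(entries_path) as f:
        entries = json.load(f)
    kept = []
    for entry in entries:
        try:
            ts = datetime.fromisoformat(entry["dateString"].replace("Z", "+00:00"))
        except (KeyError, ValueError):
            continue  # skip records without a parseable timestamp
        if start <= ts.replace(tzinfo=None) <= end:
            kept.append(entry)
    return kept

# Example: slice_around_loop_start("entries.json", datetime(2017, 1, 15))
```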

I also created a pull request (PR) back to the original tool that inspired my work, in case he wants to add it to his repository for others who also want to run his great stuff from the command line. I know my stuff isn’t perfect, but it works :) and I’m proud of being able to contribute to general-purpose open source in addition to diabetes-specific open source work. (Big thanks as always to everyone who devotes their work to open source for others to use!)

So now I can pass researchers json or csv files for use in their research. We have a number of studies planning to request access to the OpenAPS Data Commons, and I’m excited about how work like this, which makes diabetes data more broadly available for research, will help improve our lives in the short and long term!