PERT Pilot – the first iOS app for Exocrine Pancreatic Insufficiency (EPI or PEI) and Pancreatic Enzyme Replacement Therapy (PERT)

Introducing PERT Pilot, the first iOS app designed for people with exocrine pancreatic insufficiency (EPI / PEI) and the only iOS app specifically for recording pancreatic enzyme replacement therapy (PERT) dosing!

Available to download for FREE on the iOS App Store
The PERT Pilot logo - PERT is in all caps and bold purple font, the word "Pilot" is in a script font in black placed below PERT.

After originally developing GI symptoms, then working through the long journey to diagnosis with exocrine pancreatic insufficiency (known as EPI or PEI), I’ve had to come up with methods to figure out the right dosing of PERT for my EPI. I realized that the methods I’ve made work for me – logging what I was eating in a spreadsheet and using it to determine the ratios I needed to use to dose my pancreatic enzyme replacement therapy (PERT) – weren’t methods that other people were as comfortable using. I have been thinking about this for the last year or more, and in my pursuit of encouraging others to improve their outcomes with EPI (and to show that it IS possible to get to few symptoms, by increasing/titrating the enzymes we take based on what we eat), I wrote a very long blog post explaining these methods and also sharing a free web-based calculator to help others calculate their ratios.

But, that still isn’t the most user-friendly way to enable people to do this.

What else could I do, though? I wasn’t sure.

More recently, though, I have been experimenting with ‘large language model’ (LLM) tools like GPT-4 across various projects. And a few weeks ago I realized that maybe I could *try* to build an iOS app version of my idea. I wanted something to help people log what they are eating, record their PERT dosing, and more easily see the relationship between what they are eating and what enzymes they are dosing. This would enable them to use that information to more easily adjust their dosing for future meals if they’re not (yet) satisfied with their outcomes.

And thus, PERT Pilot was born!

Screenshots from the PERT Pilot app which show the home screen, the calculator where you enter what PERT you're taking and a typical meal, plus the resulting ratios screen that show you the relationship between what you ate and how many enzymes you dosed.

What does PERT Pilot do?

PERT Pilot is designed to help people living with Exocrine Pancreatic Insufficiency (EPI or PEI) more easily deal with pancreatic enzyme replacement therapy (PERT). Aka, “taking enzymes”.

The PERT Pilot calculator enables you to log the PERT that you are taking along with a meal, how many pills you take for it, and whether this dosing seems to work for you or not.

PERT Pilot then shows you the relationship between how much PERT you have been taking and what you are eating, supporting you as you fine-tune your enzyme intake.

PERT Pilot also enables you to share what’s working – and what might not be working – with your healthcare provider. PERT Pilot not only lists every meal you’ve entered, but also has a visual graph so you can see each meal and how much fat and protein from each meal were dosed by one pill – and it’s color coded by the outcome you assigned that meal! Green means you said that meal’s dosing “worked”; orange means you were “unsure”, and red matches the meals you said “didn’t work” for that level of dosing.
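If you’re curious about the math underneath, the ratio approach is simple division: how many grams of fat (and protein) did one pill ‘cover’? Here’s a minimal Python sketch of that idea, with made-up numbers – this is illustrative only, not PERT Pilot’s actual code:

```python
# Illustrative sketch of the per-pill ratio math (not PERT Pilot's actual code).

def per_pill_ratios(fat_grams: float, protein_grams: float, pills: int):
    """Return the grams of fat and protein 'covered' by one pill."""
    return fat_grams / pills, protein_grams / pills

# Example meal: 30 g fat and 20 g protein, dosed with 2 pills, and it "worked"
fat_per_pill, protein_per_pill = per_pill_ratios(30, 20, 2)
print(f"One pill covered {fat_per_pill:.0f} g fat, {protein_per_pill:.0f} g protein")

# A future 45 g fat meal would then suggest roughly 45 / 15 = 3 pills of the
# same size, before also considering protein and your recorded outcomes.
print(f"Estimated pills for a 45 g fat meal: {45 / fat_per_pill:.0f}")
```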

You can press on any meal and edit it, and you can swipe to delete a meal.

PERT Pilot also has an education section so you can learn more about EPI and why you need PERT, and how this approach to ratios may help you more effectively dose your PERT in the future.

Why use PERT Pilot if you have EPI or PEI or PI?

PERT Pilot is the first and only specific app for those of us living with EPI (PEI or PI). People who use the approach in PERT Pilot of adapting their PERT dosing to what they are eating for each meal or snack often report fewer symptoms. PERT Pilot was designed and built by someone with exocrine pancreatic insufficiency, just like you!

With PERT Pilot you can:

  • Log your meals and PERT dosing. No other app is specifically designed for PERT dosing.
  • Edit or adjust your meal entry at any time – including if you wake up the next morning and realize your last dose from the day before ‘didn’t work’.
  • Review your dosing and see all of your meals, dosing, and outcomes – including a visual graph that shows you, for each meal, what one pill ‘covered’ so you can see where there are clusters of dosing that worked and if there are any clear patterns in what didn’t work for you.
  • Export your data as a PDF list of all meals, or as a CSV file (which you can open in Excel or other spreadsheet tools) if you want to analyze your data elsewhere!
  • Your data is your data, period. No one has access to your dosing data, meal data, or outcome data, and nothing you enter into PERT Pilot leaves your device – unless you decide to export your data. (See more in the PERT Pilot Privacy Policy.)

Note: this app was not funded by, and has no relationship to, any pharmaceutical or medical companies. It’s simply built by a person with EPI for other people with EPI.

Here is a quick demonstration of PERT Pilot in action:

An animated gif of PERT Pilot in action

You can share your feedback about PERT Pilot:

Feel free to email me (Dana+PERTPilot@OpenAPS.org) any time.

I’d love to hear what works or is helpful, but also if something in the app isn’t yet working as expected.

Or, if you use another approved brand of PERT that’s not currently listed, let me know and I can add it in.

And, you can share your feature requests! I’m planning to build more features soon (see below).

What’s coming next for PERT Pilot:

I’m not done improving the functionality! I plan to add an AI meal estimation feature (UPDATE: now available!), so if you don’t know what’s in what you’re eating at a restaurant or someone else’s home cooked meal you can simply enter a description of the meal and have macronutrient estimates generated for you to use or modify.

Download PERT Pilot today! It’s free to download, so go ahead and download it and check it out! If you find it useful, please also leave a rating or review on the App Store to help other people find it in the future. You can also share it via social media, and give people a link to download it: https://bit.ly/PERT-Pilot-iOS

How I Use LLMs like ChatGPT And Tips For Getting Started

You’ve probably heard about new AI (artificial intelligence) tools like ChatGPT, Bard, Midjourney, DALL-E and others. But, what are they good for?

Last fall I started experimenting with them. I looked at AI art tools and found them to be challenging, at the time, for one of my purposes, which was creating and illustrating consistent characters for storylines in some of my children’s books. I also tested GPT-3 (meaning version 3.0 of GPT). It wasn’t that great, to be honest. But later, GPT-3.5 was released, along with the ChatGPT chat interface to it, which WAS a big improvement for a lot of my use cases. (And now, GPT-4 is out and is an even bigger improvement, although it costs more to use. More on the cost differences below.)

So what am I using these AI tools for? And how might YOU use some of these AI tools? And what are the limitations? This is what I’ve learned:

  1. The most frequent way I use these AI tools is for getting started on a project, especially those related to writing.

You know the feeling of staring at a blank page and not knowing where to start? Maybe it’s the blank page of a cold email; the blank page of an essay or paper you need to write; the blank page of the outline for a presentation. Starting is hard!

Even for this blog post, I had a list of bulleted notes of things I wanted to remember to include. But I wasn’t sure how I wanted to start the blog post or incorporate them. I stuck the notes in ChatGPT and asked it to expand the notes.

What did it do? It wrote a few-paragraph summary. Which isn’t what I wanted, so I asked it again to use the notes and this time to “expand each bullet into a few sentences, rather than summarizing”. With these clear directions, it did, and I was able to look at the content and decide what I wanted to edit, include, or remove.

Sometimes I’m stuck on a particular writing task, and I use ChatGPT to break it down. In addition to kick-starting any type of writing overall, I’ve asked it to:

  • Take an outline of notes and summarize them into an introduction; limitations section; discussion section; conclusion; one paragraph summary; etc.
  • Take a bullet point list of notes and write full, complete sentences.
  • Take a long list of notes I’ve written about data I’ve extracted from a systematic review I was working on, and ask it about recurring themes or outlier concepts. Especially when I had 20 pages (!) of hand-written notes in bullets with some loose organization by section, I could feed in chunks of content and get help getting the big picture from that 20 pages of content I had created. It can highlight themes in the data based on the written narratives around the data.

A lot of times, the best thing it does is prompt my brain to say “that’s not correct! It should be talking about…” and then I’m able to more easily write the content that was in the back of my brain all along. I probably use 5% of what it’s written, and more frequently use it as a springboard for my writing. That might be unique to how I’m using it, though; for simpler use cases, such as writing an email or other straightforward content tasks, you may be able to keep 90% or more of the content.

2. It can also help analyze data (caution alert!) if you understand how the tools work.

Huge learning moment here: these tools are called LLMs (large language models). They are trained on large amounts of language. They’re essentially designed to predict, based on all of the words (language) they’ve taken in previously, content that “sounds” like what would come after a given prompt. So if you ask it to write a song or a haiku, it “knows” what a song or a haiku “looks” like, and can generate words to match those patterns.

It’s essentially a PATTERN MATCHER on WORDS. Yeah, I’m yelling in all caps here because this is the biggest confusion I see. ChatGPT or most of these LLMs don’t have access to the internet; they’re not looking up in a search engine for an answer. If you ask it a question about a person, it’s going to give you an answer (because it knows what this type of answer “sounds” like), but depending on the amount of information it “remembers”, some may be accurate and some may be 100% made up.

Why am I explaining this? Remember the above section where I highlighted how it can start to sense themes in the data? It’s not answering solely based on the raw data; it’s not doing analysis of the data, but mostly of the words surrounding the data. For example, you can paste in data (from a spreadsheet) and ask it questions. I did that once, pasting in some data from a pivot table and asking it the same question I had asked myself in analyzing the data. It gave me the same sense of the data that I had based on my own analysis, then pointed out it was only qualitative analysis and that I should also do quantitative statistical analysis. So I asked it if it could do quantitative statistical analysis. It said yes, it could, and spit out some numbers and described the methods of quantitative statistical analysis.

But here’s the thing: those numbers were completely made up!

It can’t actually use (in its current design) the methods it was describing verbally, and instead made up numbers that ‘sounded’ right.

So I asked it to describe how to do that statistical method in Google Sheets. It provided the formula and instructions; I did that analysis myself; and confirmed that the numbers it had given me were 100% made up.

The takeaway here is: it outright said it could do a thing (quantitative statistical analysis) that it can’t do. It’s like a human in some regards: some humans will lie or fudge and make stuff up when you talk to them. It’s helpful to be aware and query whether someone has relevant expertise, what their motivations are, etc. in determining whether or not to use their advice/input on something. The same should go for these AI tools! Knowing this is an LLM and it’s going to pattern match on language helps you pinpoint when it’s going to be prone to making stuff up. Humans are especially likely to make something up that sounds plausible in situations where they’re “expected” to know the answer. LLMs are in that situation all the time: sometimes they actually do know an answer, sometimes they have a good guess, and sometimes they’re just pattern matching and coming up with something that sounds plausible.

In short:

  • LLMs can expand general concepts and write language about what is generally well known, based on their training data.
  • Try to ask it a particular fact, though, and it’s probably going to make stuff up, whether that’s about a person or a concept – you need to fact check it elsewhere.
  • It can’t do math!

But what it can do is teach you or show you how to do the math, the coding, or whatever thing you wish it would do for you. And this gets into one of my favorite use cases for it.
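To make that concrete: rather than asking an LLM for the answer to a calculation, ask it for code you can run yourself. Here’s the kind of minimal Python it can reliably generate (the values here are made up for illustration):

```python
# Run the math yourself instead of trusting an LLM's arithmetic.
# The values here are made up for illustration.
from statistics import mean, stdev

values = [1.2, 3.4, 2.2, 5.1, 4.8, 0.0, 2.9]
positive = [v for v in values if v > 0]  # e.g., skip zero/placeholder entries

print(f"mean of positive values: {mean(positive):.2f}")
print(f"standard deviation: {stdev(positive):.2f}")
```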

3. You can get an LLM to teach you how to use new tools, solve problems, and lower the barrier to entry (and friction) on using new tools, languages, and software.

One of the first things I did was ask ChatGPT to help me write a script. In fact, that’s what I did to expedite the process of finding tweets where I had used an image in order to get a screenshot to embed on my blog, rather than embedding the tweet.

It’s now so easy to generate code for scripts, regardless of which language you have previous experience with. I used to write all of my code as bash scripts, because that’s the format I was most familiar with. But ChatGPT likes to do things as Python scripts, so after it generated a Python script I asked it simple questions like “how do I call a python script from the command line”. Sure, you could search in a search engine or Stack Overflow for similar questions and get the same information. But one nice thing is that if you have it generate a script and then ask it step by step how to run that script, it gives you instructions in the context of what you were doing. So instead of saying “to run a script, type `python script.py`” with placeholder names, it’ll say “to run the script, use `python actual-name-of-the-script-it-built-you.py`”, and you can click the button to copy that, paste it in, and hit enter. That saves a lot of the time otherwise spent translating placeholder information (which is what you get from a traditional search engine result or Stack Overflow, where people are fond of placeholders like FOOBAR and you have no idea whether FOOBAR means something or is meant to be replaced).

Careful observers will notice that the latest scripts I’ve added to my Open Humans Data Tools repository (which is packed with a bunch of scripts to help work with big datasets!) are now in Python rather than bash – such as the scripts I added for fellow researchers looking to check for updates in big datasets (like the OpenAPS Data Commons). That’s because I used GPT to help with those scripts!

It’s really easy now to go from an idea to a script. If you’re able to describe it logically, you can ask it to write a script, tell you how to run it, and help you debug it. Sometimes you can start by asking it a question, such as “Is it possible to do Y?” and it describes a method. You need to test the method or check it elsewhere, but things like uploading a list of DOIs to Mendeley to save me hundreds of clicks? I didn’t realize Mendeley had an API or that I could write a script to use it! ChatGPT helped me write the script, figure out how to create a developer account and app access information for Mendeley, and debug along the way, so that within an hour and a half I had a tool that easily saved me 3 hours on the very first project I used it for.
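For flavor, here’s a trimmed-down sketch of what that kind of script looks like. This is not the exact script ChatGPT wrote for me; the endpoint and payload shape are my assumptions based on Mendeley’s documented REST API, so check the current API docs before relying on them:

```python
# Rough sketch of batch-adding papers to a Mendeley library by DOI.
# Endpoint and payload shape are assumptions based on Mendeley's documented
# REST API; the OAuth token comes from a Mendeley developer app.
import requests

ACCESS_TOKEN = "your-oauth-token-here"  # hypothetical placeholder
DOIS = ["10.1000/example-doi-1", "10.1000/example-doi-2"]  # made-up DOIs

for doi in DOIS:
    response = requests.post(
        "https://api.mendeley.com/documents",
        headers={
            "Authorization": f"Bearer {ACCESS_TOKEN}",
            "Content-Type": "application/vnd.mendeley-document.1+json",
        },
        json={"type": "journal", "title": doi, "identifiers": {"doi": doi}},
    )
    print(doi, response.status_code)
```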

I’m gushing about this because there’s probably a lot of ideas you have that you immediately throw out as being too hard, or you don’t know how to do it. It takes time, but I’m learning to remember to think “I should ask the LLM this” and ask it questions such as:

  • Is it possible to do X?
  • Write a script to do X.
  • I have X data. Pretend I am someone who doesn’t know how to use Y software and explain how I should do Z.

Another thing I’ve done frequently is ask it to help me quickly write a complex formula to use in a spreadsheet. Such as “write a formula that can be used in Google Sheets to take an average of the values in M3:M84 if they are greater than zero”.

It gives me the formula, and also describes it, and in some cases, gives alternative options.
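(For that particular request, the answer is Google Sheets’ built-in conditional average, `=AVERAGEIF(M3:M84, ">0")` – the kind of thing that’s quick to verify once it hands you the formula.)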

Other things I’ve done with spreadsheets include:

  • Ask it to write a conditional formatting custom formula, then give me instructions for expanding the conditional formatting to apply to a certain cell range.
  • Ask it to check if a cell is filled with a particular value and then repeat that value in a new cell, in order to create new data series to use in particular charts and graphs I wanted to create from my data.
  • Ask it to help me transform my data so I could generate a box and whisker plot.
  • Ask it for other visuals that might be effective ways to illustrate and visualize the same dataset.
  • Ask it to explain the difference between two similar formulas (e.g., COUNT and COUNTA, or when to use IF versus IFS).
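About that last one, as an example: COUNT only counts cells containing numeric values, while COUNTA counts any non-empty cell, a distinction that matters a lot when a column mixes numbers and text.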

This has been incredibly helpful especially with some of my self-tracked datasets (particularly around thyroid-related symptom data) where I’m still trying to figure out the relationship between thyroid levels, thyroid antibody levels, and symptom data (and things like menstrual cycle timing). I’ve used it for creating the formulas and solutions I’ve talked about in projects such as the one where I created a “today” line that dynamically updates in a chart.

It’s also helped me get past the friction of setting up new tools. Case in point: Jupyter notebooks. I’ve used them in the web browser version before, but often had issues running the notebooks people gave me. I debugged and did all kinds of troubleshooting, but for years I was never able to get Jupyter successfully installed locally on (multiple of) my computers. I had finally given up on effectively using notebooks, and definitely given up on running them locally on my machine.

However, I decided to see if I could get ChatGPT to coax me through the install process.

I told it:

“I have this table with data. Pretend I am someone who has never used R before. Tell me, step by step, how to use a Jupyter notebook to generate a box and whisker plot using this data”

(and I pasted my data that I had copied from a spreadsheet, then hit enter).

It outlined exactly what I needed to do: install Jupyter Notebook locally if I hadn’t already (and it gave me the code to do that), install the R kernel (and it told me how to do that), then start a notebook, all the way down to what code to put in the notebook, the transformed data I could copy/paste, and all the code that generated the plot.

However, remember: I had never been able to successfully get Jupyter Notebooks running! For years! I was stuck on step 2, installing R. I said:

“Step 2, explain to me how I enter those commands in R? Do I do this in Terminal?”

It said “Oh apologies, no, you run those commands elsewhere, preferably in RStudio. Here is how to download RStudio and run the commands”.

So, like humans often do, it glossed over a crucial step. But it went back, explained it to me, and kept giving more detailed instructions and helping me debug various errors. After 5-6 more troubleshooting steps, I was able to open Jupyter Notebooks locally and get everything working!

All along, most of the tutorials I had been reading had skipped or glossed over the fact that I needed to do something with R first, and where to do that. Probably because most people writing the tutorials are already data scientists who have worked with R and RStudio, so they didn’t realize those dependencies were baked in! Using ChatGPT meant I could paste in every error message and every place I got stuck, and it coached me through each spot (with no judgment or impatience). It was great!
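(For anyone hitting the same wall: in the standard IRkernel setup, those commands are run inside an R session – e.g. in RStudio, not the terminal – as `install.packages("IRkernel")` followed by `IRkernel::installspec()` to register the R kernel with Jupyter.)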

I was then able to continue with the other steps of getting my data transformed, into the notebook, running the code, and generating my first ever box and whisker plot with R!

A box and whisker plot, illustrated simply to show that I used R and Jupyter finally successfully!

This is where I really saw the power of these tools, reducing the friction of trying something new (a tool, a piece of software, a new method, a new language, etc.) and helping you troubleshoot patiently step by step.

Does it sometimes skip steps or give you solutions that don’t work? Yes. But it’s still a LOT faster than manually debugging, trying to find someone to help, or spending hours in a search engine or Stack Overflow trying to translate generic code/advice/solutions into something that works on your setup. The beauty of these tools is you can simply paste in the error message and it goes “oh, sorry, try this to solve that error”.

Because the barrier to entry is so low (compared to before), I’ve also asked it to help me with other project ideas where I previously didn’t want to spend the time needed to learn new software and languages and all the nuances of getting from start to end of a project.

Such as, building an iOS app by myself.

I have a ton of projects where I want to temporarily track certain types of data for a short period of time. My fallback is usually a spreadsheet on my phone, but it’s not always easy to quickly enter data in a spreadsheet on your phone, even if you set up a template with a drop-down menu like I’ve done in the past (for my DIY macronutrient tool, for example). In this case, I want to see if there’s a correlation between my blood pressure at different times and patterns of eyelid inflammation and heart rate symptoms (which are, for me, symptoms of thyroid antibodies being out of range, due to Graves’ disease). That means I need to track my symptom data, but also now some blood pressure data. I want to be able to put these datasets together easily, which I can, but the hardest part (so to speak) is finding a way that I am willing to record my blood pressure data. I don’t want to use an existing BP tracking app, I don’t want a connected BP monitor, and I don’t want to use Apple Health. (Yes, I’m picky!)

I decided to ask ChatGPT to help me accomplish this. I told it:

“You’re an AI programming assistant. Help me write a basic iOS app using Swift UI. The goal is a simple blood pressure tracking app. I want the user interface to default to the data entry screen where there should be three boxes to take the systolic, diastolic blood pressure numbers and also the pulse. There should also be selection boxes to indicate whether the BP was taken sitting up or laying down. Also, enable the selection of a section of symptom check boxes that include “HR feeling” and “Eyes”. Once entered on this screen, the data should save to a google spreadsheet.” 

This is a completely custom, DIY, n of 1 app. I don’t care about it working for anyone else, I simply want to be able to enter my blood pressure, pulse, whether I’m sitting or laying down, and the two specific, unique to me symptoms I’m trying to analyze alongside the BP data.

And it helped me build this! It taught me how to set up a new SwiftUI project in Xcode, gave me code for the user interface, showed me how to set up an API connection to Google Sheets, helped me write the code to save the data to Sheets, and got the app running.

(I am still debugging the connection to Google Sheets, so in the interim I changed my mind and had it create another screen to display the stored data then enable it to email me a CSV file, because it’s so easy to write scripts or formulas to take data from two sources and append it together!)
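That last parenthetical is the kind of glue script I mean. As a minimal sketch (with hypothetical file and column names), merging the emailed BP CSV with a symptom-tracking CSV by nearest timestamp looks something like this in Python:

```python
# Minimal sketch: combine two CSV exports by nearest timestamp.
# File names and column names are hypothetical placeholders.
import pandas as pd

bp = pd.read_csv("bp_export.csv", parse_dates=["timestamp"])
symptoms = pd.read_csv("symptom_export.csv", parse_dates=["timestamp"])

# Pair each BP reading with the nearest symptom entry within 12 hours
merged = pd.merge_asof(
    bp.sort_values("timestamp"),
    symptoms.sort_values("timestamp"),
    on="timestamp",
    tolerance=pd.Timedelta("12h"),
    direction="nearest",
)
merged.to_csv("combined.csv", index=False)
```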

Is it fancy? No. Am I going to try to distribute it? No. It’s meeting a custom need to enable me to collect specific data super easily over a short period of time in a way that my previous tools did not enable.

Here’s a preview of my custom app running in a simulator phone:

Simulator iphone with a basic iOS app that intakes BP, pulse, buttons for indicating whether BP was taken sitting or laying down; and toggles for key symptoms (in my case HR feeling or eyes), and a purple save button.

I did this in a few hours, rather than taking days or weeks. And now, the barrier to entry to creating more custom iOS apps is reduced, because I’m more comfortable working with Xcode and the file structures and what it takes to build and deploy an app! Sure, again, I could have learned to do this in other ways, but the learning curve is drastically shortened, and it takes away most of the ‘getting started’ friction.

That’s the theme across all of these projects:

  • Barriers to entry are lower and it’s easier to get started
  • It’s easier to try things, even if they flop
  • There’s a quicker learning curve on new tools, technologies and languages
  • You get customized support and troubleshooting without having to translate through as many generic placeholders

PS – speaking of iOS apps, based on building this one simple app I had the confidence to try building a really complex, novel app that has never existed in the world before! It’s for people with exocrine pancreatic insufficiency like me who want to log pancreatic enzyme replacement therapy (PERT) dosing and improve their outcomes – check out PERT Pilot and how I built it here.

4. Notes about what these tools cost

I found ChatGPT useful for writing projects in terms of getting started, even though the content wasn’t that great (this was on GPT-3.5). Then GPT-4 came out, along with a ChatGPT Plus option for $20/month. I didn’t think it was worth it and resisted it. Then I finally decided to try it, because some of the more sophisticated use cases I wanted it for required a longer context window, and the paid option provided both a better model and a longer context window. I paid the first $20 assuming I’d want to cancel it by the end of the month.

Nope.

The $20 has been worth it on every single project that I’ve used it for. I’ve easily saved 5x that on most projects in terms of reducing the energy needed to start a project, whether it was writing or developing code. It has saved 10x that in time cost recouped from debugging new code and tools.

GPT-4 does have caps, though, so even with the $20/month, you can only do 25 messages every 3 hours. I try to be cognizant of which projects I default to using GPT-3.5 on (unlimited) versus saving the more sophisticated projects for my GPT-4 quota.

For example, I saw a new tool someone had built called “AutoResearcher”, downloaded it, and tried to use it. I ran into a bug and pasted the error into GPT-3.5 and got help figuring out where the problem was. Then I decided I wanted to add a feature to output to a text file, and it helped me quickly edit the code to do that, and I PR’ed it back in and it was accepted (woohoo) and now everyone using that tool can use that feature. That was pretty simple and I was able to use GPT-3.5 for that. But sometimes, when I need a larger context window for a more sophisticated or content-heavy project, I start with GPT-4. When I run into the cap, it tells me when my next window opens up (3 hours after I started using it), and I usually have an hour or two until then. I can open a new chat on GPT-3.5 (without the same context) and try to do things there; switch to another project; or come back at the time it says to continue using GPT-4 on that context/setup.

Why the limit? Because it’s a more expensive model. So you have a tradeoff between paying more and having a limit on how much you can use it, because of the cost to the company.

—–

TLDR:

Most important note: LLMs don’t “think” or “know” things the way humans do. They output language they predict you want to see, based on their training and the inputs you give them. It’s like the autocomplete of a sentence in your email, but with more words and on a wider range of topics!

Also, LLMs can’t do math. But they can write code. Including code to do math.

(Some, but not all, LLMs have access to the internet to look up or incorporate facts; make sure you know which LLM you are using and whether it has this feature or not.)

Ways to get started:

    1. The most frequent way I use these AI tools is for getting started on a project, especially those related to writing.
      • Ask it to help you expand on notes; write summaries of existing content; or write sections of content based on instructions you give it
    2.  It can also help analyze data (caution alert!) if you understand the limitations of the LLM.
      • The most effective way to work with data is to have it tell you how to run things in analytical software, whether that’s how to use R or a spreadsheet or other software for data analysis. Remember the LLM can’t do math, but it can write code so you can then do the math!
    3.  You can get an LLM to teach you how to use new tools, solve problems, and lower the barrier to entry (and friction) on using new tools, languages, and software.
      • Build a new habit of asking it “Can I do X” or “Is it possible to do Y” and when it says it’s possible, give it a try! Tell it to give you step-by-step instructions. Tell it where you get stuck. Give it your error messages or where you get lost and have it coach you through the process. 

What’s been your favorite way to use an LLM? I’d love to know other ways I should be using them, so please drop a comment with your favorite projects/ways of using them!

Personally, the latest project that I built with an LLM has been PERT Pilot!


Looking Back Through 2022 (What You May Have Missed)

I ended up writing a post last year recapping 2021, in part because I felt like I did hardly anything – which wasn’t true. In part, that was based on my body having a number of things going on that I didn’t know at the time. I figured those out in 2022 which made 2022 hard and also provided me with a sense of accomplishment as I tackled some of these new challenges.

For 2022, I have a very different feeling looking back on the entire year, which makes me so happy because it was night and day (different) compared to this time last year.

One major example? Exocrine Pancreatic Insufficiency.

I started taking enzymes (pancreatic enzyme replacement therapy, known as PERT) in early January. And they clearly worked, hooray!

I quickly realized that like insulin, PERT dosing needed to be based on the contents of my meals. I figured out how to effectively titrate for each meal and within a month or two was reliably dosing effectively with everything I was eating and drinking. And, I was writing and sharing my knowledge with others – you can see many of the posts I wrote collected at DIYPS.org/EPI.

I also designed and built an open source web calculator to help others figure out their ratios of lipase and fat and protease and protein to help them improve their dosing.

I even published a peer-reviewed journal article about EPI – submitted within 4 months of confirming that I had it! You can read that paper here with an analysis of glucose data from both before and after starting PERT. It’s a really neat example that I hope will pave the way for answering many questions we all have about how particular medications possibly affect glucose levels (instead of simply being warned that they “may cause hypoglycemia or hyperglycemia” which is vague and unhelpful.)

I also had my eyes opened to having another chronic disease that has very, very expensive medication with no generic medication option available (and OTCs may or may not work well). Here’s some of the math I did on the cost of living with EPI and diabetes (and celiac and Graves) for a year, in case you missed it.

Another challenge+success was running (again), but with a 6 week forced break (ha) because I massively broke a toe in July 2022.

That was physically painful and frustrating for delaying my ultramarathon training.

I had been successfully figuring out how to run and fuel with enzymes for EPI; I even built a DIY macronutrient tracker and shared a template so others can use it. I ran a 50k with a river crossing in early June and was on track to target my 100 mile run in early fall.

However with the broken toe, I took the time off needed and carefully built back up, put a lot of planning into it, and made my attempt in late October instead.

I succeeded in running ~82 miles in ~25 hours, all in one go!

I am immensely proud of that run for so many reasons, some of which are general pride at the accomplishment and others are specific, including:

  • Doing something I didn’t think I could do which is running all day and all night without stopping
  • Doing this as a solo or “DIY” self-organized ultra
  • Eating every 30 minutes like clockwork, consuming enzymes (more than 92 pills!), which means 50 snacks consumed. No GI issues, either, which is remarkable even for an ultrarunner without EPI!
  • Generally figuring out all the plans and logistics needed to be able to handle such a run, especially when dealing with type 1 diabetes, celiac, EPI, and Graves
  • Not causing any injuries, and in fact recovering remarkably fast which shows how effective my training and ‘race’ strategy were.

On top of this all, I achieved my biggest-ever running year, with more than 1,333 miles run this year. This is 300+ more than my previous best from last year which was the first time I crossed 1,000 miles in a year.

Professionally, I did quite a lot of miscellaneous writing, research, and other activities.

I spent a lot of time doing research. I also peer reviewed more than 24 papers for academic journals. I was asked to join an editorial board for a journal. I served on 2 grant review committees/programs.

I also wrote a ton.*

*By “ton”, I mean way more than the past couple of years combined. Some of that has been due to getting some energy back once I fixed the missing enzyme and mis-adjusted hormone levels in my body! I’m up to 40+ blog posts this year.

And personally, the punches felt like they kept coming, because this year we also found out that I have Graves’ disease, taking my chronic disease count up to 4. Argh. (T1D, celiac, EPI, and now Graves’, for those curious about my list.)

My experience with Graves’ has included symptoms of subclinical hyperthyroidism (although my T3 and T4 are in range), and I have chosen to try thyroid medication in order to manage the really bothersome Graves’-related eye symptoms. That’s been an ongoing process and the symptoms of this have been up and down a number of times as I went on medication, reduced medication levels, etc.

What I’ve learned from my experience with both EPI and Graves’ in the same year is that there are some huge gaps in medical knowledge around how these things actually work and how to use real-world data (whether patient-recorded data or wearable-tracked data) to help with diagnosis, treatment (including medication titration), etc. So the upside to this is I have quite a few new projects and articles coming to fruition to help tackle some of the gaps that I fell into or spotted this year.

And that’s why I’m feeling optimistic, and like I accomplished quite a bit more in 2022 than in 2021. Some of it is the satisfaction of knowing the core two reasons why the previous year felt so physically bad; hopefully no more unsolved mysteries or additional chronic diseases will pop up in the next few years. Yet some of it is also the satisfaction of solving problems and creating solutions that I’m uniquely poised, due to my past experiences and skillsets, to solve. That feels good, and it feels good as always to get to channel my experiences and expertise to try to create solutions with words or code or research to help other people.

Regulatory Approval Is A Red Herring

One of the most common questions I have been asked over the last 8 years is whether or not we are submitting OpenAPS to the FDA for regulatory approval.

This question is a big red herring.

Regulatory approval is often seen and discussed as the one path for authenticating and validating safety and efficacy.

It’s not the only way.

It’s only one way.

As background, you need to understand what OpenAPS is. We took an already-approved insulin pump that I already had, a continuous glucose monitor (CGM) that I already had, and found a way to read data from those devices and also to use the already-built commands in the pump to send back instructions to automate insulin delivery via the decision-making algorithm that we created. The OpenAPS algorithm was the core innovation, along with the realization that this already-approved pump had those capabilities built in. We used various off the shelf hardware (mini-computers and radio communication boards) to interoperate with my already approved medical devices. There was novelty in how we put all the pieces together, though the innovation was the algorithm itself.

The caveat, though, is that although the pump I was using was regulatory-approved and on the market (which is how I already had it), it had later been recalled after researchers, the manufacturer, and the FDA realized that third parties could use the already-built commands in the pump’s infrastructure. So these pumps, while never recorded as having caused anyone harm, were no longer being sold. It wasn’t a big deal to the company; it was a voluntary recall, and people like me often chose to keep our pumps if we were not concerned about this potential risk.

We had figured out how to interoperate with these other devices. We could have taken our system to the FDA. But because we were using already-off-the-market pumps, there was no way the FDA would approve it. And at the time (circa 2014), there was no vision or pathway for interoperable devices, so they didn’t have the infrastructure to approve “just” an automated insulin delivery algorithm. (That changed many years later; they now have infrastructure for reviewing interoperable pumps, CGMs, and algorithms, which they call “controllers”.)

The other relevant fact is that the FDA has jurisdiction based on the commerce clause in the US Constitution: Congress used its authority to authorize the FDA to regulate interstate commerce in food, drugs, and medical devices. So if you’re intending to be a commercial entity and sell products, you must submit for regulatory approval.

But if you’re not going to sell products…

This is the other aspect that many people don’t seem to understand. All roads do not lead to regulatory approval because not everyone wants to create a company and spend 5+ years dedicating all their time to it. That’s what we would have had to do in order to have a company to try to pursue regulatory approval.

And the key point is: given such a strict regulatory environment, we (speaking for Dana and Scott) did not want to commercialize anything. Therefore there was no point in submitting for regulatory approval. Regardless of whether or not the FDA was likely to approve given the situation at the time, we did not want to create a company, spend years of our life dealing with regulatory and compliance issues full time, and maybe eventually get permission to sell a thing (that we didn’t care about selling).

Regulatory approval is a red herring in the story of OpenAPS and the impact it is having and could have.

Yes, we could have created a company. But then we would not have been able to spend the thousands of hours that we spent improving the system we made open source and helping thousands of individuals who were able to use the algorithm and subsequent systems with a variety of pumps, CGMs, and mobile devices as an open source automated insulin delivery system. We intentionally chose this path to not commercialize and thus not to pursue regulatory approval.

As a result of our work (and others from the community), the ecosystem has now changed.

Time has also passed: it’s been 8 years since I first automated insulin delivery for myself!

The commercial players have brought multiple commercial AIDs to market now, too.

We created OpenAPS when there was NO commercial option at the time. Now there are a few commercial options.

But it is also an important note that I, and many thousands of other people, are still choosing to use open source AID systems.

Why?

This is another aspect of the red herring of regulatory approval.

Just because something is approved does not mean it’s available to order.

If it’s available to order (and not all countries have approved AID systems!), it doesn’t mean it’s accessible or affordable.

Insurance companies are still fighting against covering pumps and CGMs as standalone devices. New commercial AID systems are even more expensive, and the insurance companies are fighting against coverage for them, too. So just because someone wants an AID and has one approved in their country doesn’t mean that they will be able to access and/or afford it. Many people with diabetes struggle with the cost of insulin, or the cost of CGM and/or their insulin pump.

Sometimes providers refuse to prescribe devices, based on preconceived notions (and biases) about who might do “well” with new therapies based on past outcomes with different therapies.

For some, open source AID is still the most accessible and affordable option.

And in some places, it is still the ONLY option available to automate insulin delivery.

(And in most places, open source AID is still the most advanced, flexible, and customizable option.)

Understanding the many reasons why someone might choose to use open source automated insulin delivery folds back into the understanding of how someone chooses to use open source automated insulin delivery.

It is tied to the understanding that manual insulin delivery – where someone makes all the decisions themselves and injects or presses buttons manually to deliver insulin – is inherently risky.

Automated insulin delivery reduces risk compared to manual insulin delivery. While some new risk is introduced (as is true of any additional devices), the net risk reduction overall is significantly large compared to manual insulin delivery.

This net risk reduction is important to contextualize.

Without automated insulin delivery, people overdose or underdose on insulin multiple times a day, causing adverse effects and bad outcomes and decreasing their quality of life. Even when they’re doing everything right, this is inevitable because the timing of insulin is so challenging to manage alongside dozens of other variables that at every decision point play a role in influencing the glucose outcomes.

With open source automated insulin delivery, it is not a single point-in-time decision to use the system.

Every moment, every day, people are actively choosing to use their open source automated insulin delivery system because it is better than the alternative of managing diabetes manually without automated insulin delivery.

It is a conscious choice that people make every single day. They could otherwise choose to not use the automated components and “fall back” to manual diabetes care at any moment of the day or night if they so choose. But most don’t, because it is safer and the outcomes are better with automated insulin delivery.

Each individual’s actions to use open source AID on an ongoing basis are data points on the increased safety and efficacy.

However, this paradigm of patient-generated data and patient choice as data contributing toward safety and efficacy is new. There are not many, if any, other examples of patient-developed technology that does not go down the commercial path, so there are not a lot of comparisons for open source AID systems.

As a result, when there were questions about the safety and efficacy of the system (e.g., “how do you know it works for someone else other than you, Dana?”), we began to research as a community to address the questions. We published data at the world’s biggest scientific conference and were peer-reviewed by scientists and accepted to present a poster. We did so. We were cited in a piece in Nature as a result. We then were invited to submit a letter to the editor of a traditional diabetes journal to summarize our findings; we did so and were published.

I then waited for the rest of the research community to pick up this lead and build on the work…but they didn’t. I picked it up again and began facilitating research directly with the community, coordinating efforts to make anonymized pools of data for individuals with open source AID to submit their data to and for years have facilitated access to dozens of researchers to use this data for additional research. This has led to dozens of publications further documenting the efficacy of these solutions.

Yet still, there was concern around safety because the healthcare world didn’t know how to assess these patient-generated data points of choice to use this system because it was better than the alternative every single day.

So finally, as a direct result of presenting this community-based research again at the world’s largest diabetes scientific conference, we were able to collaborate and design a grant proposal that received grant funding from New Zealand’s Health Research Council (the equivalent of the NIH in the US) for a randomized control trial of the OpenAPS algorithm in an open source AID system.

An RCT is often seen as the gold standard in science, so the fact that we received funding for such a study alone was a big milestone.

And this year, in 2022, the RCT was completed and our findings were published in one of the world’s largest medical journals, the New England Journal of Medicine, establishing that the use of the OpenAPS algorithm in an open source AID was found to be safe and effective in children and adults.

No surprises here, though. I’ve been using this system for more than 8 years, and seeing thousands of others choose the OpenAPS algorithm on an ongoing, daily basis for similar reasons.

So today, it is possible that someone could take an open source AID system using the OpenAPS algorithm to the FDA for regulatory approval. It won’t likely be me, though.

Why not? The same reasons apply from 8 years ago: I am not a company, I don’t want to create a company to be able to sell things to end users. The path to regulatory approval primarily matters for those who want to sell commercial products to end users.

Also, regulatory approval of an open source AID (whether with the OpenAPS algorithm or a different algorithm) would not mean it becomes commercially available.

It requires a company that has pumps and CGMs it can sell alongside the AID system OR commercial partnerships ready to go that are able to sell all of the interoperable, approved components to interoperate with the AID system.

So regulatory approval of an AID system (algorithm/mobile controller design) without a commercial partnership plan ready to go is not very meaningful to people with diabetes in and of itself. It sounds cool, but will it actually do anything? In and of itself, no.

Thus, the red herring.

Might it be meaningful eventually? Yes, possibly, especially if we can collectively get insurers to get over themselves and provide coverage for AID systems, given that AID systems all massively improve short-term and long-term outcomes for people with diabetes.

But as I said earlier, regulatory approval does not guarantee access or affordability, so an approved system that’s not available and affordable to people is not a system that can be used by many.

We have a long way to go before commercial AID systems are widely accessible and affordable, let alone available in every single country for people with diabetes worldwide.

Therefore, regulatory approval is only one piece of this puzzle.

And it is not the only way to assess safety and efficacy.

The bigger picture this has shown me over the years is that while systems are created to reduce harm toward people – and this is valid and good – there has been a tendency to assume that those systems are therefore the only way to achieve the goal of harm reduction or to assess safety and efficacy.

They aren’t the only way.

As explained above, FDA approval is one method of creating a rubber stamp as a shorthand for “is this considered to be safe and effective”.

That’s also legally necessary for companies to use if they want to sell products. For situations that aren’t selling products, it’s not the only way to assess safety and efficacy, which we have shown with OpenAPS.

With open source automated insulin delivery systems, individuals have access to every line of code and can test and choose for themselves, not just once, but every single day, whether they consider it to be safer and more effective for them than manual insulin dosing. Instead of blindly trusting a company, they get the choice to evaluate what they’re using in a different way – if they so choose.

So any questions around seeking regulatory approval are red herrings.

A different question might be: What’s the future of the OpenAPS algorithm?

The answer is written in our OpenAPS plain language reference design that we posted in February of 2015. We detailed our vision for individuals like us, researchers, and companies to be able to use it in the future.

And that’s how it’s being used today: 1) by people like me; 2) in research, to improve what we can learn about diabetes itself and to improve AID; and 3) by companies, one of which has already incorporated parts of our safety design as a safety layer in their ML-based AID system, which has CE mark approval and is being sold and used by thousands of people in Europe.

It’s possible that someone will take it for regulatory approval; but that’s not necessary for the thousands of people already using it. That may or may not make it more available for thousands more (see earlier caveats about needing commercial partnerships to be able to interoperate with pumps and CGMs).

And regardless, it is still being used to change the world for thousands of people and help us learn and understand new things about the physiology of diabetes because of the way it was designed.

That’s how it’s been used and that’s the future of how it will continue to be used.

No rubber stamps required.


Convening The Center Paper Describing Our Methods and The Two-Spectrum Framework For Assessing Patient Experience

I’m excited to share that another paper is out that has been in the works for a while. This paper describes the methods we used to design the Convening The Center project, and an artifact we created in the process that we think will be helpful to people with lived experience, traditional researchers, and others who want to partner with patients!

As a quick recap, John Harlow and I (Dana Lewis) collaborated to create Convening The Center (CTC) to bring people (known as “patients” and “carers”, or people with lived experience based on health and healthcare experiences) together, solely to allow them to connect and convene about what they care about. There was no agenda! It’s a bit hard to design an agenda-less meeting, and we put a lot of thought into it. We ended up converting from an in-person gathering in 2020 to a digital experience due to the COVID-19 pandemic, which also required a lot of design in order to achieve a digital space that allowed virtual strangers to feel comfortable connecting and discussing their experiences and perspectives.

One theme that came up throughout the first individual round of discussions (Phase 1) was that there was a spectrum of participation: some people participate and contribute as individuals to other projects and organizations, whereas others choose to create something new, or find themselves in situations that necessitate it. I also saw there were different levels, from individual to community- or system-level creation and contributions.

Thus, the Two-Spectrum Framework for Assessing Patient Experience was created, and we used it to “see” where our 25 participants from CTC fell, based on our Phase 1 discussions, and this helped us group people in Phase 2 (alongside scheduling availability) for smaller group discussions.

Figure 1 from our paper, illustrating the Two-Spectrum Framework for Assessing Patient Experience. It shows a horizontal spectrum with "contributing" on the left and "creating" on the right. The vertical axis has "level 1 - individual" at the bottom; "level 2 - community" in the center, and "level 3 - systems" at the top. Light blue boxes, 25 in total, are arranged across this spectrum to illustrate where CTC participants are.
Figure 1 from our paper, illustrating the Two-Spectrum Framework for Assessing Patient Experience

It was really helpful for thinking about how patients (people with lived experience) do things; not just the labels we are given by others. And so I decided we should try to write it up as a paper so that others could use it as well!

An animated gif showing an individual first on the continuum from contributing to creating; then the various locations on the vertical spectrum (individual to community to systems) where they might be.
An illustrated gif I use to articulate how individuals might see themselves on the Two-Spectrum Framework for Assessing Patient Experience.

As of today, our paper is now out and is open access: “From Individuals to Systems and Contributions to Creations: Novel Framework for Mapping the Efforts of Individuals by Convening The Center of Health and Health Care”.

I encourage you to read it, and in particular the “Principal Findings” section of the discussion, which talks more about the Two-Spectrum Framework for Assessing Patient Experience. Notably, “Rather than making claims about what patients “are,” this framework describes what patients “do,” the often-unseen work of patients, and, importantly, how they do this work”, and the implications of this.

We hope you find something in this paper useful, and we’re excited to see how this framework might be further used in the future!

Huge thanks to our advisors, Liz Salmi and Alicia Staley, who not only advised throughout the project but also co-authored this paper with us. And of course, ongoing respect, admiration, and appreciation to the 25 participants of Convening The Center, as well as our artist collaborator, Rebeka Ryvola, whose beautiful work is represented in this paper!

Continuation Results On 48 Weeks of Use Of Open Source Automated Insulin Delivery From the CREATE Trial: Safety And Efficacy Data

In addition to the primary endpoint results from the CREATE trial, which you can read more about in detail here or as published in the New England Journal of Medicine, there was also a continuation phase study of the CREATE trial. This meant that all participants from the CREATE trial, including those who were randomized to the automated insulin delivery (AID) arm and those who were randomized to sensor-augmented insulin pump therapy (SAPT, which means just a pump and CGM, no algorithm), had the option to continue for another 24 weeks using the open source AID system.

These results were presented by Dr. Mercedes J. Burnside at #EASD2022, and I’ve summarized her presentation and the results below on behalf of the CREATE study team.

What is the “continuation phase”?

The CREATE trial was a multi-site, open-label, randomized, parallel-group, 24-week superiority trial evaluating the efficacy and safety of an open-source AID system using the OpenAPS algorithm in a modified version of AndroidAPS. Our study found that, across children and adults, the percentage of time that the glucose level was in the target range of 3.9-10 mmol/L [70-180 mg/dL] was 14 percentage points higher among those who used the open-source AID system (95% confidence interval [CI], 9.2 to 18.8; P<0.001) compared to those who used sensor-augmented pump therapy; a difference that corresponds to 3 hours 21 minutes more time spent in target range per day. The system did not contribute to any additional hypoglycemia. Glycemic improvements were evident within the first week and were maintained over the 24-week trial. This illustrates that all people with T1D, irrespective of their level of engagement with diabetes self-care and/or previous glycemic outcomes, stand to benefit from AID. This initial study concluded that open-source AID using the OpenAPS algorithm within a modified version of AndroidAPS, a widely used open-source AID solution, is efficacious and safe. These results were from the first 24-week phase, when the two groups were randomized to SAPT and AID, respectively.
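(The conversion from percentage points to hours is simple arithmetic: 14% of a 24-hour day is 0.14 × 24 = 3.36 hours, or about 3 hours 21 minutes.)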

The second 24-week phase is known as the “continuation phase” of the study.

There were 52 participants who were randomized into the SAPT group who chose to continue in the study and used AID for the 24-week continuation phase. We refer to them as the “SAPT-AID” group. There were 42 participants initially randomized into AID who continued to use AID for another 24 weeks (the “AID-AID” group).

One slight change in the continuation phase was that the SAPT-AID group used a different insulin pump than the one used in the primary phase of the study (and 18/42 AID-AID participants also switched to this different pump during the continuation phase), but it was a similar Bluetooth-enabled pump that was interoperable with the AID system (app/algorithm) and CGM used in the primary outcome phase.

All 42 participants in AID-AID completed the continuation phase; 6 participants (out of 52) in the SAPT-AID group withdrew: one due to infusion site issues, three due to pump issues, and two who preferred SAPT.

What are the results from the continuation phase?

In the continuation phase, the SAPT-AID group saw a change in time in range (TIR) from 55±16% to 69±11% when they used AID. In the SAPT-AID group, the percentage of participants who were able to achieve the target goals of TIR >70% and time below range (TBR) <4% increased from 11% of participants during SAPT use to 49% during the 24-week AID use in the continuation phase. As with the AID-AID participants in the primary phase, the SAPT-AID participants saw the greatest treatment effect overnight, with a TIR difference of 20.37% (95% CI, 17.68 to 23.07; p<0.001) versus 9.21% during the day (95% CI, 7.44 to 10.98; p<0.001) during the continuation phase with open source AID.

Those in the AID-AID group, meaning those who continued for a second 24-week period using AID, saw similar TIR outcomes. Prior to AID use at the start of the study, TIR for that group was 61±14%; it increased to 71±12% at the end of the primary outcome phase, and after the next 6 months of the continuation phase, TIR was maintained at 70±12%. In this AID-AID group, the percentage of participants achieving target goals of TIR >70% and TBR <4% was 52% of participants in the first 6 months of AID use and 45% during the continuation phase. Similarly to the primary outcomes phase, in the continuation phase there was also no treatment effect by age interaction (p=0.39).

The TIR outcomes between the two groups (SAPT-AID and AID-AID) were very similar after each group had used AID for 24 weeks (the SAPT-AID group using AID for 24 weeks during the continuation phase, and the AID-AID group for 24 weeks during the initial RCT phase). The adjusted difference in TIR between these groups was 1% (95% CI, -4 to 6; p=0.67). There were no glycemic outcome differences between those using the two different study pumps (n=69, comprising the SAPT-AID group and the 18 AID-AID participants who switched pumps for the continuation phase; and n=25, from the AID-AID group who elected to continue on the pump they used in the primary outcomes phase).

In the initial primary results (the first 24 weeks of the trial, comparing the AID group to the SAPT group), there was a 14 percentage point difference between the groups. In the continuation phase, when everyone used AID, the adjusted mean difference in TIR between AID use and the initial SAPT results was a similar 12.10 percentage points (SD 8.40; p<0.001).

Similar to the primary phase, there was no DKA or severe hypoglycemia. No rare severe adverse events were detected over long-term use (48 weeks, representing 69 person-years).

CREATE results from the full 48 weeks on open source AID with both SAPT (control) and AID (intervention) groups plotted on the graph.

Conclusion of the continuation study from the CREATE trial

In conclusion, the continuation study from the CREATE trial found that open-source AID using the OpenAPS algorithm within a modified version of AndroidAPS is efficacious and safe with various hardware (pumps), and demonstrates sustained glycaemic improvements without additional safety concerns.

Key points to take away:

  • Over the 48 total weeks of the study (6 months/24 weeks in the primary phase and 6 months/24 weeks in the continuation phase), there were 64 person-years of use of open source AID in the study, compared to 59 person-years of use of sensor-augmented pump therapy.
  • A variety of pump hardware options were used in the primary phase of the study among the SAPT group, due to hardware (pump) availability limitations. Different pumps were also used in the SAPT-AID group during the AID continuation phase, compared to the pumps available in the AID-AID group throughout both phases of trial. (Also, 18/42 of AID-AID participants chose to switch to the other pump type during the continuation phase).
  • The similar TIR results (a 14 percentage point difference in the primary phase and a 12 percentage point difference in the continuation phase between AID and SAPT groups) show the durability of the open source AID system and algorithm used, regardless of pump hardware.
  • The SAPT-AID group achieved similar TIR results at the end of their first 6 months of use of AID when compared to the AID-AID group at both their initial 6 months use and their total 12 months/48 weeks of use at the end of the continuation phase.
  • The safety data showed no DKA or severe hypoglycemia in either the primary phase or the continuation phases.
  • Glycemic improvements from this version of open source AID (the OpenAPS algorithm in a modified version of AndroidAPS) are not only immediate but also sustained, and do not increase safety concerns.
CREATE Trial Continuation Results were presented at #EASD2022 on 48 weeks of use of open source AID

Wondering about the “how” rather than the “why” of autoimmune conditions

I’ve been thinking a lot about stigma, per a previous post of mine, and how I generally react to, learn about, and figure out how to deal with new chronic diseases.

I’ve observed a pattern in my experiences. When I suspect an issue, I begin with research. I read medical literature to find out the basics of what is known. I read a high volume of material, over a range of years, to see what is known and the general “ground truth” about what has stayed consistent over the years and where things might have changed. This is true for looking into causal mechanisms as well as diagnosis and then more importantly to me, management/treatment.

I went down a new rabbit hole of research and most articles were publicly accessible

A lot of times with autoimmune related diseases…the causal mechanism is unknown. There are correlations, there are known risk factors, but there’s not always a clear answer of why things happen.

I realize that I am lucky that my first “thing” (type 1 diabetes) was known to be an autoimmune condition, and that has probably framed my response to celiac disease (6 years later); exocrine pancreatic insufficiency (19+ years after diabetes); and now Graves’ disease (19+ years after diabetes). Why do I think that is lucky? Because when I’m diagnosed with an autoimmune condition, it’s not a surprise that it IS an autoimmune condition. When you have a nicely overactive immune system, it interferes with how your body is managing things. In type 1 diabetes, it eventually makes it so the beta cells in your pancreas no longer produce insulin. In celiac, it makes it so the body has an immune reaction to gluten, and the villi in your small intestine freak out at the microscopic, crumb-level presence of gluten (and if you keep eating gluten, it can cause all sorts of damage). In exocrine pancreatic insufficiency, there is possibly either atrophy as a result of the pancreas no longer producing insulin, or other immune-related responses (with similar theories relating EPI to celiac in terms of immune responses). It’s not clear ‘why’ or which mechanism (celiac, T1D, or autoimmune in general) caused my EPI, and not knowing that doesn’t bother me, because it’s clearly linked to autoimmune shenanigans. Now with Graves’ disease, I also know that low TSH and increased thyroid antibodies are causing subclinical hyperthyroidism symptoms (such as an occasional minor tremor and increased resting HR, among others) and Graves’ ophthalmopathy symptoms as a result of the thyroid antibodies. The low TSH and increased thyroid antibodies are a result of my immune system deciding to poke at my thyroid.

All this to say…I typically wonder less about “why” I have gotten these things, in part because the “why” doesn’t change “what” to do; I simply keep gathering new data points that I have an overactive immune system that gives me autoimmune stuff to deal with.

I have contrasted this with a lot of posts I observe in some of the online EPI groups I am a part of. Many people get diagnosed with EPI as a result of ongoing GI issues, which may or may not be related to other conditions (like IBS, which is often a catch-all for GI issues). But there are a lot of posts wondering “why” they’ve gotten it, seemingly out of the blue.

When I do my initial research/learning on a new autoimmune thing, as I mentioned I do look for causal mechanisms to see what is known or not known. But that’s primarily, I think, to rule out if there’s anything else “new” going on in my body that this mechanism would inform me about. But 3/3 times (following type 1 diabetes, where I first learned about autoimmune conditions), it’s primarily confirmed that I have autoimmune things due to a kick-ass overactive immune system.

What I’ve realized that I often focus on, and most others do not, is what comes AFTER diagnosis. It’s the management (or treatment) of, and living with, these conditions that I want to know more about.

And sadly, especially in the latest two experiences (exocrine pancreatic insufficiency and Graves’ disease), there is not enough known about management and optimization of dealing with these conditions.

I’ve previously documented and written quite a bit (see a summary of all my posts here) about EPI, including my frustrations about “titrating” or getting the dose right for the enzymes I need to take every single time I eat something. This is part of the “management” gap I find in research and medical knowledge. It seems like clinicians and researchers spend a lot of time on the “why” and the diagnosis/starting point of telling someone they have a condition. But there is way less research about “how” to live and optimally manage these things.

My fellow patients (people with lived experiences) are probably saying “yeah, duh, and that’s the power of social media and patient advocacy groups to share knowledge”. I agree. I say that a lot, too. But one of the reasons these online social media groups are so powerful in sharing knowledge is because of the black hole or vacuum or utter absence of research in this space.

And it’s frustrating! Social media can be super powerful because you can learn about many n=1 experiences. If you’re like me, you analyze the patterns to see what might be reproducible and what is worth experimenting with in your own n=1. But too often, this knowledge stays in the community: it is not routinely funded, studied, operationalized, and translated in systematic ways back to healthcare providers. When patients are diagnosed, they’re often told the “what” and occasionally the “why” (if it exists), but are sometimes left to fall through the cracks in the “how” of optimally managing the new condition.

(I know, I know. I’m working on that, in diabetes and EPI, and I know dozens of friends, both people with lived experiences and researchers who ARE working on this, from diabetes to brain tumors to Parkinson’s and Alzheimer’s and beyond. And while we are moving the needles here, and making a difference, I’m wanting to highlight the bigger issue to those who haven’t previously been exposed to the issues that cause the gaps we are trying to fill!)

In my newest case, Graves’ disease presented with subclinical hyperthyroidism. As I wrote here, for me that means lower TSH and higher thyroid antibodies but in-range T3 and T4. In discussion with my physician, we decided to try an antithyroid drug to try to lower the antibody levels, because the antibody levels are what cause the related eye symptoms (and they’re quite bothersome). The other primary symptom I have is a higher resting HR, which is also really annoying, so I’m hoping the medication helps with that, too. But the game plan was to start taking this medication every day and get follow-up labs in about 2 months, because it takes ~6 weeks to see the change in thyroid levels.

Let me tell you, that’s a long time. I get that the medication doesn’t act on stored thyroid hormone; it impacts new production only, and that’s why it takes ~6 weeks to show up in the labs: that’s how long it takes to cycle through the stored thyroid stuff in your body (thyroxine’s half-life is roughly a week, so several half-lives have to pass before levels settle).

My hope was that within 2-3 weeks I would see a change in my resting HR levels. I wasn’t sure what else to expect, and whether I’d see any other changes.

But I did.

It was in the course of DAYS, not weeks. It was really surprising! I immediately started to see a change in my resting HR (across two different wearable devices: a ring and a watch). Within a week, my phone’s health app flagged it as a “trend”, too, and pinpointed the day (which it didn’t know) that I had started the new medication, based on the change in the trending HR values.

Additionally, some of my eye symptoms went away. Prior to commencing the new medication, I would wake up and my eyes would hurt. Lubricating them (with eye drops throughout the day and gel before bed) helped some, but didn’t really fix the problem. I also had pretty significant red, patchy spots around the outside corner of one of my eyes, and eyelid swelling that would push on my eyeball. Four days into the new medication, I had my first morning where I woke up without my eyes hurting. The next day the pain returned, and then I had two days without eye pain. Then I had 3-4 days with painful eyes. Then… now I’m going on 2 weeks without the eye pain?! Meanwhile, I’m also tracking the eye swelling. It went down as the eye pain went away, but it comes back periodically. Recently, I commented to Scott that I was starting to observe a pattern: the red/patchy skin at the corner and under my right eye would appear; then the next day the swelling of and above the eyelid would return. After 1-2 days of swelling, it would disappear. Because I’ve been tracking various symptoms, I looked at my data the other day and saw that it’s almost a 6-7 day pattern.
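
As an aside, this kind of before/after comparison is easy to replicate on your own wearable data exports. Here’s a minimal sketch of the check my devices were effectively surfacing; the dates and heart rate values are made up for illustration, and this is not the actual trend-detection logic any health app uses:

```python
from datetime import date
from statistics import mean

# Hypothetical daily resting heart rate log from a wearable export:
# (date, resting heart rate in bpm) - values invented for illustration.
log = [
    (date(2022, 9, 1), 72), (date(2022, 9, 2), 73), (date(2022, 9, 3), 71),
    (date(2022, 9, 4), 74), (date(2022, 9, 5), 68), (date(2022, 9, 6), 65),
    (date(2022, 9, 7), 63), (date(2022, 9, 8), 64),
]

med_start = date(2022, 9, 5)  # hypothetical medication start date

before = [hr for day, hr in log if day < med_start]
after = [hr for day, hr in log if day >= med_start]

# A sustained drop in the "after" average is the kind of shift that
# showed up within days across both of my devices.
print(f"Resting HR change: {mean(after) - mean(before):+.1f} bpm")
```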

Interesting!

Again, the eye stuff is a result of antibody levels. So now I am curious about the production of antibodies and their timeline, and how that differs from TSH and thyroid hormones, and how they’re impacted with this drug.

None of that is information that is easy to get, so I’m deep in the medical literature trying again to find out what is known, whether this type of pattern is known; if it’s common; or if this level of data, like my within-days impact to resting HR change is new information.

Most of the research, sadly, seems to be on pre-diagnosis, or on what happens if you diagnose someone with hyperthyroidism but don’t give them medication. For example, I found this systematic review on HRV and hyperthyroidism and got excited, expecting to learn things that I could use, but found they explicitly removed the 3 studies that involved treating hyperthyroidism and only studied what happens when you don’t treat it.

Sigh.

This is the type of gap that is so frustrating, as a patient or person who’s living with this. It’s the gap I see in EPI, where little is known on optimal titration and people don’t get prescribed enough enzymes and aren’t taught how to match their dosing to what they are eating, the way we are taught in diabetes to match our insulin dosing to what we’re eating.

And it matters! I’m working on writing up data from a community survey of people with EPI, many of whom shared that they don’t feel like they have their enzyme dosing well matched to what they are eating, in some cases 5+ years after their diagnosis. That’s appalling, to me. Many people with EPI and other conditions like this fall through the cracks with their doctors because there’s no plan or discussion on what managing optimally looks like; what to change if it’s not optimal for a person; and what to do or who to talk to if they need help managing.

Thankfully in diabetes, most people are supported and taught that it’s not “just” a shot of insulin, but there are more variables that need tracking and managing in order to optimize wellbeing and glucose levels when living with diabetes. But it took decades to get there in diabetes, I think.

What would it be like if more chronic diseases, like EPI and Graves’ disease (or any other hyper/hypothyroid-related diseases), also had this type of understanding across the majority of healthcare providers who treated and supported managing these conditions?

How much better would and could people feel? How much more energy would they have to live their lives, work, play with their families and friends? How much more would they thrive, instead of just surviving?

That’s what I wonder.

Wondering "how" rather than "why" of autimmune conditions, by @DanaMLewis from DIYPS.org

Findings from the world’s first RCT on open source AID (the CREATE trial) presented at #ADA2022

September 7, 2022 UPDATE: I’m thrilled to share that the paper with the primary outcomes from the CREATE trial is now published. You can find it on the journal site here, or view an author copy here. You can also see a Twitter thread here, if you are interested in sharing the study with your networks.

Example citation:

Burnside, M; Lewis, D; Crocket, H; et al. Open-Source Automated Insulin Delivery in Type 1 Diabetes. N Engl J Med 2022;387:869-81. DOI:10.1056/NEJMoa2203913


(You can also see a previous Twitter thread here summarizing the study results, if you are interested in sharing the study with your networks.)

TLDR: The CREATE Trial was a multi-site, open-label, randomized, parallel-group, 24-week superiority trial evaluating the efficacy and safety of an open-source AID system using the OpenAPS algorithm in a modified version of AndroidAPS. Our study found that across children and adults, the percentage of time that the glucose level was in the target range of 3.9-10mmol/L [70-180mg/dL] was 14 percentage points higher among those who used the open-source AID system (95% confidence interval [CI], 9.2 to 18.8; P<0.001) compared to those who used sensor-augmented pump therapy; a difference that corresponds to 3 hours 21 minutes more time spent in target range per day. The system did not contribute to any additional hypoglycemia. Glycemic improvements were evident within the first week and were maintained over the 24-week trial. This illustrates that all people with T1D, irrespective of their level of engagement with diabetes self-care and/or previous glycemic outcomes, stand to benefit from AID. This study concluded that open-source AID using the OpenAPS algorithm within a modified version of AndroidAPS, a widely used open-source AID solution, is efficacious and safe.

The backstory on this study

We developed the first open source AID in late 2014 and shared it with the world as OpenAPS in February 2015. It went from n=1 to (n=1)*2 and up from there. Over time, there were requests for data to help answer the question “how do you know it works (for anybody else)?”. This led to the first survey in the OpenAPS community (published here), followed by additional retrospective studies such as this one analyzing data donated by the community, prospective studies, and even an in silico study of the algorithm. Thousands of users chose open source AID, first because there was no commercial AID, and later because open source AID such as the OpenAPS algorithm was more advanced, had interoperability features, or offered other benefits such as quality of life improvements that they could not find in commercial AID (or because they were still restricted from being able to access or afford commercial AID options). The pile of evidence kept growing, and each study has shown safety and efficacy matching or surpassing commercial AID systems (such as in this study), yet still, there was always the “but there’s no RCT showing safety!” response.

After Martin de Bock saw me present about OpenAPS and open source AID at ADA Scientific Sessions in 2018, we literally spent an evening at the dinner table drawing the OpenAPS algorithm on a napkin to illustrate how OpenAPS works in fine-grained detail (as much as one can do with napkin drawings!) and dreamed up the idea of an RCT in New Zealand to study the open source AID system so many were using. We sought and were granted funding from New Zealand’s Health Research Council, published our protocol, and commenced the study.

This is my high level summary of the study and some significant aspects of it.

Study Design:

This study was a 24-week, multi-centre randomized controlled trial in children (7–15 years) and adults (16–70 years) with type 1 diabetes comparing open-source AID (using the OpenAPS algorithm within a version of AndroidAPS implemented in a smartphone with the DANA-i™ insulin pump and Dexcom G6® CGM), to sensor augmented pump therapy. The primary outcome was change in the percent of time in target sensor glucose range (3.9-10mmol/L [70-180mg/dL]) from run-in to the last two weeks of the randomized controlled trial.

  • This is a LONG study, designed to look for rare adverse events.
  • This study used the OpenAPS algorithm within a modified version of AndroidAPS, meaning the learning objectives were adapted for the purpose of the study. Participants spent at least 72 hours in “predictive low glucose suspend mode” (known as PLGM), which corrects for hypoglycemia but not hyperglycemia, before proceeding to the next stage of closed loop which also then corrected for hyperglycemia.
  • The full feature set of OpenAPS and AndroidAPS, including “supermicroboluses” (SMB) were able to be used by participants throughout the study.

Results:

Ninety-seven participants (48 children and 49 adults) were randomized.

Among adults, mean time in range (±SD) at study end was 74.5±11.9% using AID (Δ+ 9.6±11.8% from run-in; P<0.001) with 68% achieving a time in range of >70%.

Among children, mean time in range at study end was 67.5±11.5% (Δ+ 9.9±14.9% from run-in; P<0.001) with 50% achieving a time in range of >70%.

Mean time in range at study end for the control arm was 56.5±14.2% and 52.5±17.5% for adults and children respectively, with no improvement from run-in. No severe hypoglycemic or DKA events occurred in either arm. Two participants (one adult and one child) withdrew from AID due to frustrations with hardware issues.

  • The pump used in the study initially had an issue with the battery, and there were lots of pumps that needed refurbishment at the start of the study.
  • Aside from these pump issues, and standard pump site/cannula issues throughout the study (that are not unique to AID), there were no adverse events reported related to the algorithm or automated insulin delivery.
  • Only two participants withdrew from AID, due to frustration with pump hardware.
  • No severe hypoglycemia or DKA events occurred in either study arm!
  • In fact, use of open source AID improved time in range without causing additional hypoglycemia, which has long been a concern of critics of open source (and all types of) AID.
  • Time spent in ‘level 1’ and ‘level 2’ hyperglycemia was significantly lower in the AID group as well compared to the control group.

In the primary analysis, the mean (±SD) percentage of time that the glucose level was in the target range (3.9-10mmol/L [70-180mg/dL]) increased from 61.2±12.3% during run-in to 71.2±12.1% during the final 2 weeks of the trial in the AID group, and decreased from 57.7±14.3% to 54±16% in the control group, with a mean adjusted difference (AID minus control at end of study) of 14.0 percentage points (95% confidence interval [CI], 9.2 to 18.8; P<0.001). No age interaction was detected, which suggests that adults and children benefited from AID similarly.
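
To make the arithmetic concrete: a rough, unadjusted difference-in-differences computed from those reported means lands close to the published adjusted estimate (the trial’s statistical model adjusts for covariates, so the numbers differ slightly; this sketch is my illustration, not the study’s analysis code):

```python
# Unadjusted difference-in-differences from the reported TIR means,
# in percentage points. The published 14.0-point figure is model-adjusted.
aid_run_in, aid_end = 61.2, 71.2
control_run_in, control_end = 57.7, 54.0

aid_change = aid_end - aid_run_in              # +10.0 points
control_change = control_end - control_run_in  # -3.7 points

print(f"Unadjusted estimate: {aid_change - control_change:.1f} points")  # 13.7
```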

  • The CREATE study found that across children and adults, the percentage of time that the glucose level was in the target range of 3.9-10mmol/L [70-180mg/dL] was 14.0 percentage points higher among those who used the open-source AID system compared to those who used sensor augmented pump therapy.
  • This difference reflects 3 hours 21 minutes more time spent in target range per day!
  • Children using AID spent 3 hours 1 minute more time in target range daily (95% CI, 1h 22m to 4h 41m).
  • Adults using AID spent 3 hours 41 minutes more time in target range daily (95% CI, 2h 4m to 5h 18m).
  • Glycemic improvements were evident within the first week and were maintained over the 24-week trial. Meaning: things got better quickly and stayed so through the entire 24-week time period of the trial!
  • AID was most effective at night.
Difference in TIR between the control and AID arms, overall and during day and night separately, for all participants, adults, and kids

One thing I think is worth making note of is that one criticism of previous studies with open source AID regards the self-selection effect. There is a theory that people do better with open source AID because of self-selection and self-motivation. However, the CREATE study recruited a diverse cohort of participants, and the study findings (as described above) match the safety and efficacy outcomes reported in previous studies. The CREATE study also found that the greatest improvements in TIR were seen in participants with the lowest TIR at baseline. This means one major finding of the CREATE study is that all people with T1D, irrespective of their level of engagement with diabetes self-care and/or previous glycemic outcomes, stand to benefit from AID.

This therefore means there should be NO gatekeeping by healthcare providers or the healthcare system to restrict AID technology from people with insulin-requiring diabetes, regardless of their outcomes or experiences with previous diabetes treatment modalities.

There was also no age effect observed in the trial, meaning that the results of the CREATE Trial demonstrated that open-source AID is safe and effective in children and adults with type 1 diabetes. If someone wants to use open source AID, they would likely benefit, regardless of age or past diabetes experiences. If they don’t want to use open source AID or commercial AID… they don’t have to! But the choice should 100% be theirs.

In summary:

  • The CREATE trial was the first RCT to look at open source AID, after years of interest in such a study to complement the dozens of other studies evaluating open source AID.
  • The conclusion of the CREATE trial is that open-source AID using the OpenAPS algorithm within a version of AndroidAPS, a widely used open-source AID solution, appears safe and effective.
  • The CREATE trial found that across children and adults, the percentage of time that the glucose level was in the target range of 3.9-10mmol/L [70-180mg/dL] was 14.0 percentage points higher among those who used the open-source AID system compared to those who used sensor augmented pump therapy; a difference that reflects 3 hours 21 minutes more time spent in target range per day.
  • The study recruited a diverse cohort, yet still produced glycemic outcomes consistent with existing open-source AID literature, and that compare favorably to commercially available AID systems. Therefore, the CREATE Trial indicates that a range of people with type 1 diabetes might benefit from open-source AID solutions.

Huge thanks to each and every participant and their families for their contributions to this study! And ditto, big thanks to the amazing, multidisciplinary CREATE study team for their work on this study.


Note that the continuation phase study results are slated to be presented this fall at another conference!

Findings from the RCT on open source AID, the CREATE Trial, presented at #ADA2022

Looking back at work and accomplishments in 2021

I decided to do a look back at the last year’s worth of work, in part because it was a(nother) weird year in the world and also because, if you’re interested in my work, unless you read every single Tweet, there may have been a few things you missed that are of interest!

In general, I set goals every year that stretch across personal and professional efforts. This includes a daily physical activity streak that coincides with my walking and running lots of miles this year in pursuit of my second marathon and first (50k) ultramarathon. It’s good for my mental and physical health, which is why I post almost daily updates to help keep myself accountable. I also set goals like “do something creative” which could be personal (last year, knitting a new niece a purple baby blanket ticked the box on this goal!) or professional. This year, it was primarily professional creativity that accomplished this goal (more on that below).

Here’s some specifics about goals I accomplished:

RUNNING

  • My initial goal was training ‘consistently and better’ than I did for my first marathon, with 400 miles as my stretch goal if I was successfully training for the marathon. (Otherwise, 200 miles for the year would be the goal without a marathon.) My biggest-ever running year in 2013 with my first marathon was 356 miles, so that was a good big goal for me. I achieved it in June!
  • I completed my second marathon in July, and PR’d by over half an hour.
  • I completed my first-ever ultramarathon, a 50k!
  • I re-set my mileage goal after achieving 400 miles… to 500… 600… etc. I ultimately achieved the biggest mileage goal I’ve ever hit and think I ever will: I ran 1,000 miles in a single year!
  • I wrote lots of details about my methods of running (primarily, run/walking) and running with diabetes here. If you’re looking for someone to cheer you on as you set a goal for daily activity, like walking, or learning to run, or returning to running…DM or @ me on Twitter (@DanaMLewis). I love to cheer people on as they work toward their activity goals! It helps keep me inspired, too, to keep aiming at my own goals.

CREATIVITY

  • My efforts to be creative were primarily on the professional side this year. The “Convening The Center” project ended up containing 2 out of 3 of the things I categorized as being creative. The first was the design of the digital activities and the experience of CTC overall (more about that here). The second was the set of items in the physical “kit” we mailed out to participants: we brainstormed and created custom playing cards and physical custom keychains. They were really fun to make, especially in partnership with our excellent project artist, Rebeka Ryvola, who did the actual design work!
  • My third “creative” endeavor was a presentation, but it was unlike the presentations I usually give. I was tasked with creating a presentation that was “visually engaging” and would not involve showing my face. I’ve linked to the video below in the presentations section, but it was a lot of work to think about how to create a visually and aurally engaging presentation, and I’m proud of how it turned out!

RESEARCH AND PUBLICATIONS

  • This is where the bulk of my professional work sits right now. I continue to be a PI on the CREATE trial, the world’s first randomized controlled trial assessing open-source automated insulin delivery technology, including the algorithm Scott and I dreamed up and that I have been using every day for the past 7 years. The first data from the trial itself is forthcoming in 2022.
  • Convening The Center was also a grant-funded project that we turned into research, with a submitted publication assessing more of what patients “do,” which is typically not assessed by researchers and those looking at patient engagement in research or innovation. Hopefully, the publication of the research article we just submitted will become a 2022 milestone! In the meantime, you can read our report from the project here (https://bit.ly/305iQ1W), as this grant-funded project is now completed.
  • Goal-wise, I aim to generate a few publications every year. I do not work for any organization and I am not an academic. However, I come from a communications background and see the benefit of reaching different audiences where they are, which is why I write blog posts for the patient community and also seek to disseminate knowledge to the research and clinical communities through traditional peer-reviewed literature. You can see past years’ research articulated on my research page (DIYPS.org/research), but here’s a highlight of some of the 2021 publications:
  • Also, although I’m not a traditional academic researcher, I participate in the peer review process and frequently get asked to peer-review articles submitted to a variety of journals. I skimmed my email and it looks like I completed (at least) 13 peer reviews, most of which also included reviewing subsequent revisions of those submitted articles. So it looks like my rate of peer reviewing (currently) matches my rate of publishing. I typically get asked to review articles related to open-source or DIY diabetes technology (OpenAPS, AndroidAPS, Loop, Nightscout, and other efforts), citizen science in healthcare, patient-led research or patient engagement in research, digital health, and diabetes data science. If you’re submitting articles on those topics, you’re welcome to recommend me as a potential reviewer.

PRESENTATIONS

  • I continued to give a lot of virtual presentations this year, such as at conferences like the “Insulin100” celebration conference (you can see the copy I recorded of my conference presentation here). I keynoted at the European Patients Forum Congress as well as at ADA’s Precision Diabetes Medicine 2021; gave an invited talk at ADA Scientific Sessions (session coverage here); presented at the 2021 Federal Wearables Summit (video here); and spoke at the BIH Clinician Scientist Symposium (video here), to name a few (but not all).
  • Additionally, as I mentioned, one of the presentations I’m most proud of was created for the Fall 2021 #DData Exchange event:

OTHER STUFF

I did quite a few other small projects that don’t fit neatly into the above categories.

One final thing I’m excited to share is that also in 2021, Amazon came out with a beta program for producing hardcover/hardback books, alongside the ability to print paperback books on demand (and of course Kindle). So, you can now buy a copy of my book about Automated Insulin Delivery: How artificial pancreas “closed loop” systems can aid you in living with diabetes in paperback, hardback, or on Kindle. (You can also, still, read it 100% for free online via your phone or desktop at ArtificialPancreasBook.com, or download a PDF for free to read on your device of choice. Thousands of people have downloaded the PDF!)

Now available in hardcover, the book about Automated Insulin Delivery by Dana M. Lewis

New Convening The Center Update – Help Us Find People Who Could Use Internet Scholarships to Do Good In Healthcare?

You may have previously read a blog post about Convening The Center, a RWJF-grant-funded project with the aim of bringing together 25 diverse individuals who are working to change healthcare in nontraditional ways. The main part of the CTC project has finished (more about that soon!), but we also realized that we had a little bit of budget left over from the project, and pitched to RWJF a new plan to use the remaining funds.

We want to support individuals working to make a difference in health and healthcare (and the health of their online, geographic, or disease communities) by providing 9 internet scholarships of $1,000 USD each. This is estimated to cover about a year’s worth of internet access for each individual. Individuals who apply should be able to articulate their past, current, or future efforts as they relate to making a difference in health/care.

There are no strings attached to this ‘internet scholarship.’ You don’t have to do anything particular, or commit to any projects if you’re selected, other than write us a few (say, 250 or so) words within the next year to let us know what it meant to you to have your internet paid for. That’s it. This feedback (which can be given privately to us, or posted publicly – your call) is the only requirement for receiving these funds.

Can you help us find people who could use Internet scholarships to do good in healthcare?

Why are we doing this?

We learned (and re-learned) from working with the cohort from the original CTC project that internet access is something many of us take for granted, and that we shouldn’t. Many of us may assume, from a privileged position, that access to high-speed internet is table stakes and that everyone has it, so that anyone invited to take a seat at the table could get there. But that’s not the case.

This is relevant to the space we are working in with CTC, where we are seeking to support patients (people living with diseases) or carers who are working to improve healthcare and their communities, often from under-resourced settings. The cost of high-speed internet access might therefore be a barrier that keeps patients/carers from taking a seat at the table when invited, or from building their own table.

We realize that $9,000 won’t solve all the problems of equitable access and facilitate online participation of everyone who needs it. But it’s a start, and could be the thing that makes a difference for 9 individuals, and it’s the best use we can envision for this remaining budget.

So our ask, if you’re reading this:

  • Please consider nominating someone or applying (self-nominating) for the Convening The Center Internet Scholarship, by filling out this Google form by November 14.
  • Please share this blog post (https://bit.ly/CTC-Internet-Scholarships) with your online and offline networks, including with those you know in rural settings where internet cost may be a bigger barrier.

John and I are excited to facilitate this last use of our CTC project budget. We will close the nomination Google form on November 14; select recipients by the end of November; and aim to provide payments of the CTC Internet Scholarships (administered by Trailhead Institute, our fiscal sponsor) in early December (all in 2021). Within the next year, after we receive feedback from all recipients, we will also share (anonymously, at an aggregate level) the feedback and what we learned from using the remaining budget funds for this purpose with the broader community, to help inform others who are looking to create similar initiatives in the future.

In summary:

  • Who: People who are looking to make a difference in health/care who might benefit from having a year’s worth of internet costs covered
  • What: Up to 9 individuals will receive $1,000 USD, estimated to cover a year’s worth of typical high speed internet plans.
  • How: Fill out this Google form and nominate yourself or someone else. Multiple nominations are welcome; there is no limit.
  • When: Please apply by November 14, and recipients will be selected in November 2021.