IUD insertion or IUD replacement is more manageable with a paracervical block

If you’re someone who is considering an IUD (intrauterine device) or has an IUD and is considering a replacement or the removal process, this post is for you. You should know about this! Feel free to share it with a friend.

I recently decided to replace my IUD. I was dreading it, because I found the insertion process the first time I got one to be the most painful thing I had ever experienced. For context, years later I massively broke my ankle in 3 places. I am now able to articulate that the pain level of an IUD insertion, for me, is like broken bone level pain inside. It “only” lasts for a few minutes at that level, but nevertheless, it is excruciating.

When I was due for my first ever replacement (swap), I asked my doctor’s office if there were any better pain management options than what I experienced for the insertion process. They told me no, the only thing they could offer was an oral medication I could try to soften the cervix. I took it in advance as directed, and also took full doses of ibuprofen and Tylenol, went for the swap process and…it was just as bad as the first insertion, even though I had previously had one. Ugh.

The good news related to IUDs is that the number of years they are approved for birth control efficacy keeps getting extended. Mine went from 5 years of approval to 8, so I was looking forward to having more years in between the terrible experiences. However, this time around, when I reached a little over 5 years (fully expecting to go to 8 years with it), my period bleeding picked back up to a degree that I decided I would go ahead and swap to a new one. (Birth control-wise, they’re approved for 8 years, but the approval indication for heavy bleeding is still at 5 years, so it makes sense that some people who see a reduction in period bleeding on IUDs may see a return after that 5 year timeframe. Not everyone, but some will, and I did.)

So that’s why I was going in to get a replacement, at about 5.5 years from my last one. This time, I had a new provider’s office, but since my last office couldn’t offer me any reasonable pain mitigation, I didn’t bother asking in advance and just went in with an active dose of Tylenol in my system, mentally prepared for the pain.

But then, at my appointment when going over the risks and discussing any questions I had before the procedure, my new provider said “what pain mitigation would you like?”

I said: “What? You’re offering me something?!”

And yes, there are options and she did offer them! She talked about ibuprofen/Tylenol (I already had taken Tylenol), a hot pack for the stomach, or something called a paracervical block. It’s an injection, so she asked how I felt about needles. I laughed and told her I had type 1 diabetes (the implication being, I deal a lot with needles and regardless of what I feel about them, they keep me alive so I am used to dealing with them).

The side effects of this paracervical block include potentially experiencing ringing in the ears and a metal taste in your mouth, plus obviously the potential pain from the injection itself.

I quickly evaluated my thoughts – I didn’t think it would help (because softening the cervix previously didn’t help), and I didn’t love the idea of a block. That’s because my previous ‘nerve block’ type injections, such as when at the dentist, result in a LOT of pain for me for the injection. But, then again, the IUD replacement process is even more painful, so I thought for the small chance that it would help cut down on that pain, it was worth trying at least once. So I said yes.

While she was getting set up, I asked her if this was new (because I was surprised I hadn’t heard of it) and she said no, it’s been around, but early research showed it wasn’t much more effective than placebo, so it didn’t really pick up in clinical practice; later studies DID show efficacy. (I later looked that up and she was right – there’s a 2012 study showing efficacy on pain reduction similar to placebo, aka around 30%, whereas international studies and a later 2018 study with an increased dose DID show pain reduction for more people.) And we all know it takes time for things to translate to clinical practice (see this visual and imagine it as a game of telephone), so knowing this now helps me better understand why in 2015 (my first insertion) and 2020 (my first replacement), this was not an option offered to me by my old clinic. I don’t like it, but I understand the context better.

What the experience of a paracervical block was like

The first step was a numbing spray. Then came the injection. Maybe because of the numbing spray, it didn’t feel like an injection the way I normally experience injections for nerve blocks. I felt a minor pinch and a little bit of pressure from the fluid going in. I was surprised that it took a few injections (it was injected into several areas) and I was borderline slightly uncomfortable, not in the sense that I was going to ask her to stop, but I was ready for that part to be done. (And probably some of that was anticipatory pain for the actual removal/swap process.) But it was done and then I realized: this was nothing like the other injections for nerve blocks, and it was indeed very tolerable.

(Side effect wise, I did not experience ‘ringing’ in my ears, but I did feel like I could hear more easily (e.g. sounds in the room suddenly got louder). Afterward when I got up, I did feel a little odd for about 60 seconds, but that could have also been because I was lying down and then hopped back up (see below) pretty soon after. I didn’t have any taste in my mouth, and the ‘louder sounds’ didn’t persist beyond a few minutes. None of the side effects fazed me nor would influence my decision to get another one.)

Then it was time for the IUD removal. She asked me to cough and I did while it came out. It felt uncomfortable like a pinch with friction, but it wasn’t stabbing excruciating pain. It wasn’t “sharp” feeling like pain. I breathed a bit while she got ready to do the insertion of the new IUD and she asked me to take a few deep breaths. I did, and the IUD was inserted. Like the removal, it felt slightly uncomfortable, but again more like friction, and it was less than the removal.

No excruciating, stabbing pain!!!

She was done, and I immediately sat up and told her the paracervical block helped, I was so glad I had done it, and now I wasn’t going to dread my next swap.

Previously, for my first insertion and my subsequent first swap, it took me a minute or two of lying there, breathing deeply, to recover from the intense, excruciating pain. I would be able to get up and get dressed and leave on my own, but it definitely was an intense full body experience that required a few minutes, and then I would feel like I had to recover from it (psychologically) the rest of the day. And obviously carry that experience 5 years forward.

In contrast, I immediately sat up and was ready to get dressed and go. I didn’t need to recover. I left and drove home in great spirits, then started texting everyone I know who has an IUD that it was a jaw-dropping, wildly different experience, and that they should look up paracervical blocks; when it’s time for an IUD swap/replacement, ask in advance if their doctor/clinic offers it, and shop around for somewhere that does if not. It is THAT wildly different of an experience. I hate making phone calls, but I will 100% make as many phone calls as it takes in the future to make sure I always have this option. It took the painful experience from broken bone level pain (e.g. 9-10/10 excruciating pain) to a tolerable discomfort with only a little bit of pain (e.g. a 2-3/10 experience). I say that as someone who was told by an ER doc, while he was setting my ankle broken in 3 places, that I have a high pain tolerance.

The other benefit of the paracervical block is she said it helps reduce cramping for up to an hour. And it did! I made it home before I started to feel cramping (like strong period cramps), almost exactly an hour after the injection. I continued to alternate between Tylenol and ibuprofen the rest of the day, but this was like managing a period, and I didn’t have any pain hangover from the injection or the IUD replacement process. (Again, previously it felt like it took me hours to recover from the experience, when I had it without the paracervical block).

IUD insertion (or IUD replacement) is more manageable with a paracervical block, a blog post from Dana M. Lewis on DIYPS.org

Not everyone finds IUD insertions or replacement to be excruciating. If you don’t, I’m so glad for you. But my experience was that it’s the most painful thing I’ve ever experienced. Over half the people I talk to with personal experience also say it is incredibly painful. So if you are one of the people, like me, who find IUD insertions or IUD replacements to be a terrible, painful experience…ask about a paracervical block. It makes an incredible difference and I’m now not dreading the replacement or future removal.

(And if you have any other questions about the experience that I can answer, happy to do so – leave a comment below.)

The data we leave behind in clinical trials and why it matters for clinical care and healthcare research in the future with AI

Every time I hear that all health conditions will be cured and fixed in 5 years with AI, I cringe. I know too much to believe in this possibility. But this is not an uninformed opinion or a disbelief in the trajectory of AI takeoff: this is grounded in the very real reality of the nature of clinical trials reporting and publication of data and the limitations we have in current datasets today.

The sad reality is, we leave so much important data behind in clinical trials today. (And in every clinical trial done before today.) An example of this is how we report “positive” results for a lot of tests or conditions, using binary cutoffs and summary reporting without reporting average titres (levels) within subgroups. This affects our ability to understand and characterize conditions, to compare overlapping conditions with similar results, and to use this information clinically alongside symptoms and presentations of a condition. It’s not just a problem for research, it’s a problem for delivering healthcare. I have some ideas of things you (yes, you!) can do starting today to help fix this problem. It’s a great opportunity to do something now in order to fix the future (and today’s healthcare delivery gaps), not just complain that it’s someone else’s problem. If you contribute to clinical trials, you can help solve this!

What’s an example of this? Imagine an autoantibody test result, where values >20 are considered positive. That means values of 21, 58, or 82 are all considered positive. But…that’s a wide range, and a much wider spread than is possible with “negative” values, which could be 19, 8, or 3.

When this test is reported by labs, they give suggested cutoffs to interpret “weak”, “moderate”, or “strong” positives. In this example, a value of 20-40 is a “weak” positive, a value between 40-80 is a “moderate” positive, and a value above 80 is a strong positive. In our example list, the positives fall at barely a weak positive (21), a solidly moderate positive in the middle of that range (58), and a strong positive just above that cutoff (82). The weak positive could be interpreted as a negative, given variance in the test of 10% or so. But the problem lies in the moderate positive range. Clinicians are prone to say it’s not a strong positive and therefore should be considered possibly negative, treating it more like the 21 value than the 82 value. And because there are no studies with actual titres, it’s unclear whether the average or median “positive” reported is actually above the “strong” (>80) cutoff or falls in the moderate positive category.

Also imagine the scenario where some other conditions occasionally have positive levels of this antibody level but again the titres aren’t actually published.

Today’s experience and how clinicians in the real world are interpreting this data:

  • 21: positive, but within 10% of the cutoff, which doesn’t necessarily mean true positivity
  • 53: moderate positive, but it’s not strong and we don’t have median data of positives, so clinicians lean toward treating it as negative and/or an artifact of a co-condition given 10% prevalence in the other condition
  • 82: strong positive, above the cutoff, easy to treat as positive

Now imagine these values with studies that have reported that the median titre in the “positive” >20 group is actually a value of 58 for the people with the true condition.

  • 21: would still be interpreted as likely negative even though it’s technically above the positive cutoff (>20), again because of the 10% error and how far it is below the median
  • 53: moderate positive, but within 10% of the median positive value. Even though it’s not above the “strong” cutoff, it’s more likely to be perceived as a true positive
  • 82: still a strong positive, above the cutoff, no change in perception

And what if the titres in the co-condition have a median value of 28? This makes it even more likely that if we know the co-condition value is 28 and the true condition value is 58, then a test result of 53 will be more correctly interpreted as the true condition rather than providing a false negative interpretation because it’s not above the >80 strong cutoff.
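To make the reasoning above concrete, here is a minimal sketch in Python. The cutoffs (>20 positive, >80 strong), the medians (58 for the true condition, 28 for the co-condition), and the 10% assay error are the hypothetical values from this post, not any real assay:

```python
def interpret_titre(value, positive_cutoff=20, strong_cutoff=80,
                    condition_median=58, co_condition_median=28,
                    assay_error=0.10):
    """Roughly interpret an autoantibody titre given published medians."""
    # Within ~10% of the positive cutoff: could be assay noise.
    if value <= positive_cutoff * (1 + assay_error):
        return "likely negative (within assay error of the cutoff)"
    if value > strong_cutoff:
        return "strong positive"
    # For moderate values, knowing each condition's median titre lets us
    # say which group the result is closer to.
    if abs(value - condition_median) <= abs(value - co_condition_median):
        return "moderate positive, closer to the true-condition median"
    return "moderate positive, closer to the co-condition median"

for v in (21, 53, 82):
    print(v, "->", interpret_titre(v))
```

The point of the sketch is that the moderate value (53) only becomes interpretable once the medians exist in the published literature; without them, the last two branches are impossible to write.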

Why does this matter in the real world? Imagine a patient with a constellation of confusing symptoms and their positive antibody test (which would indicate a diagnosis for a disease) is interpreted as negative. This may result in a missed diagnosis, even if this is the correct diagnosis, given the absence of other definitive testing for the condition. This may mean lack of effective treatment, ineligibility to enroll in clinical trials, impacted quality of life, and possibly negatively impacting their survival and lifespan.

If you think I’m cherry picking a single example, you’re wrong. This has played out again and again in my last few years of researching conditions and autoantibody data. Another real-world scenario is where I had a slight positive (e.g. above a cutoff of 20) value, for a test that the lab reported is correlated with condition X. My doctor was puzzled because I have no signs of this condition X. I looked up the sensitivity and specificity data for this test: it only has 30% sensitivity and 80% specificity, whereas 20% of people with condition Y (which I do have) also have this antibody. There is no data on the median value of positivity in either condition X or condition Y. In the context of these two pieces of information we do have, it’s easier to interpret and guess that this value is not meaningful as a diagnostic for condition X given the lack of matching symptoms, yet the lab reports the association with condition X only, even though it’s only slightly more probable for condition X to have this autoantibody compared to condition Y and several other conditions. I went looking for research data on raw levels of this autoantibody, to see where the median value is for positives with condition X and Y, and again, like the above example, there is no raw data, so it can’t be used for interpretation. Instead, it’s summaries of summaries built on a simple binary cutoff (>20), which makes clinical interpretation really hard to do and makes it impossible to research and meta-analyze the data to support individual interpretation.

And this is a key problem or limitation I see with the future of AI in healthcare that we need to focus on fixing. For diseases that are really well defined and characterized and we have in vitro or mouse models etc to use for testing diagnostics and therapies – sure, I can foresee huge breakthroughs in the next 5 years. However, for so many autoimmune conditions, they are not well characterized or defined, and the existing data we DO have is based on summaries of cutoff data like the examples above, so we can’t use them as endpoints to compare diagnostics or therapeutic targets. We need to re-do a lot of these studies and record and store the actual data so AI *can* do all of the amazing things we hear about the potential for.

But right now, for a lot of things, we can’t.

So what can we do? Right now, we actually CAN make a difference on this problem. If you’re gnashing your teeth about the changes in the research funding landscape, you can take action right now by re-evaluating your current and retrospective datasets and your current studies to figure out:

  • Where are you summarizing data, and where does raw data need to be cleaned, tagged, and stored so we can use AI with it in the future to do all these amazing things?
  • What data could you tag and archive now that would be impossible or expensive to regenerate later?
  • Are you cleaning and storing values in formats that AI models could work with in the future (e.g. structured tables, CSVs, or JSON files)?
  • Most simply: how are you naming and storing the files with data so you can easily find them in the future? “Results.csv” or “results.xlsx” is maybe not ideal for helping you or your tools find this data later. How about “autoantibody_test-X_results_May-2025.csv” or similar?
  • Where are you reporting data? Can you report more data, as an associated supplementary file or a repository you can cite in your paper?
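As one sketch of what the cleaning, tagging, and storing above might look like in practice (the values, filenames, and metadata fields here are hypothetical, just illustrating the pattern of descriptive filenames plus a self-describing sidecar):

```python
import csv
import json
from pathlib import Path

# Hypothetical raw titre values (the example numbers used in this post).
raw_titres = [21, 26, 53, 58, 60, 82, 92]

# Descriptive filename: what was measured, which test, and when.
csv_path = Path("autoantibody_test-X_results_May-2025.csv")
with csv_path.open("w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["participant_id", "titre"])
    for participant_id, titre in enumerate(raw_titres, start=1):
        writer.writerow([participant_id, titre])

# A JSON sidecar with assay context makes the data self-describing
# for future reuse by humans or AI tools.
meta_path = Path("autoantibody_test-X_metadata_May-2025.json")
meta_path.write_text(json.dumps({
    "assay": "autoantibody test X (hypothetical)",
    "positive_cutoff": 20,
    "collected": "2025-05",
}, indent=2))
```

Even a lightweight convention like this means the raw values (not just the binary positive/negative summary) survive, findably, alongside the context needed to interpret them.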

You should also ask yourself whether you’re even measuring the right things at the right time, and whether your inclusion and exclusion criteria are too strict, excluding the bulk of the population you should be studying.

An example of this is in exocrine pancreatic insufficiency (EPI), where studies often don’t look at all of the symptoms that correlate with EPI; they include or allow only co-conditions that represent a tiny fraction of the likely EPI population; and they study the treatment (pancreatic enzyme replacement therapy) without the context of food intake, which is as useful as studying whether insulin works in type 1 diabetes without the context of how many carbohydrates someone is consuming.

You can be part of the solution, starting right now. Don’t just think about how you report data for a published paper (although there are opportunities there, too): think about the long term use of this data by humans (researchers and clinicians like yourself) AND by AI (capabilities and insights we can’t do yet but technology will be able to do in 3-5+ years).

A simple litmus test for you can be: if an interested researcher or patient reached out to me as the author of my study, and asked for the data to understand what the mean or median values were of a reported cohort with “positive” values…could I provide this data to them as an array of values?

For example, if you report that 65% of people with condition Y have positive autoantibody levels, you should also be able to say:

  • The mean value of the positive cohort (>20) is 58.
  • The mean value of the negative cohort (<20) is 13.
  • The full distribution (e.g. [21, 26, 53, 58, 60, 82, 92…]) is available in a supplemental file or data repository.
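As a sketch of what passing that litmus test looks like computationally, using the hypothetical values from this post (the exact percentages and means here come from this toy dataset, not any real study):

```python
from statistics import mean, median

# Hypothetical raw titres for a small cohort (example values from this post).
titres = [21, 26, 53, 58, 60, 82, 92, 13, 8, 3, 19]
cutoff = 20

positives = sorted(t for t in titres if t > cutoff)
negatives = sorted(t for t in titres if t <= cutoff)

print(f"{len(positives) / len(titres):.0%} of this cohort is positive")
print("positive cohort mean:", round(mean(positives), 1),
      "| median:", median(positives))
print("negative cohort mean:", round(mean(negatives), 1))
# The full distribution is what belongs in a supplemental file:
print("full positive distribution:", positives)
```

If you can produce this from your data, reporting it costs almost nothing; if you can’t, the raw values are probably already lost.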

That makes a world of difference in characterizing many of these conditions, for developing future models, testing treatments or comparative diagnostic approaches, or even getting people correctly diagnosed after previous missed diagnoses due to the lack of available data to correctly interpret lab results.

Maybe you’re already doing this. If so, thanks. But I also challenge you to do more:

  • Ask for this type of data via peer review, either to be reported in the manuscript and/or included in supplementary material.
  • Push for more supplemental data publication with papers, in terms of code and datasets where possible.
  • Talk with your team, colleagues, and institution about long-term storage, accessibility, and formatting of datasets.
  • Better yet, publish your anonymized dataset, either with the supplementary appendix or in a repository online.
  • Take a step back and consider whether you’re studying the right things in the right population at the right time.

The data we leave behind in clinical trials (why it matters for clinical care, healthcare research, and the future with AI), a blog post by Dana M. Lewis from DIYPS.org

These are actionable, doable, practical things we can all be doing, today, and not just gnashing our teeth. The sooner we course correct with improved data availability, the better off we’ll all be in the future, whether that’s tomorrow with better clinical care or in years with AI-facilitated diagnoses, treatments, and cures.

We should be thinking about:

  • How should we design data gathering and data generation in clinical trials not only for the current status quo (humans juggling data and collecting only minimal data), but also for a potential future of machines as the primary viewers of the data?
  • What data would be worth accepting, collecting, and seeking as part of trials?
  • What burdens would that add (and how might we reduce those) now while preparing for that future?

The best time to collect the data we need was yesterday. The second best time is today (and tomorrow).

Exhausting and familiar, the experience of living with autoimmune diseases

A new autoimmune disease is exhausting and foreign, until it becomes exhausting and familiar.

Exhausting is such a good word for it. Sometimes, it’s the disease process itself that is exhausting and causes fatigue physically. Other times, it’s the coping and figuring out how to wrangle your life into a pretzel around it that is exhausting. It’s exhausting continually finding new things that are changing, out of your control, that you have to adapt to both physically and mentally. Sometimes, it’s exhausting trying to explain to others how it affects you and what you need support-wise, especially when it doesn’t come with a clear name, a neat bow, and an easily explainable narrative about what the trajectory will look like. Because you don’t know. You don’t have answers, and it’s exhausting to deal with the unknown and uncertainty.

On the flip side, it’s also exhausting when you don’t talk about it. It’s exhausting to be dealing with it, struggling to adjust, and not talking about it. Because of stigma, because of concern about how other people will treat you differently (even if well-intentioned), because you don’t have answers or a name and can’t fully articulate what is going on in a way other people will understand, because you worry about how it impacts the people you love and whether you’re an anchor holding them back.

Sometimes that also means it’s exhausting to articulate to your healthcare providers. I am lucky that my healthcare providers listened to me and believed me, even when I was coming from a high state of health and physical fitness (e.g. cross country skiing for 6 hours, run/walking ultramarathons, exercising every day, etc.), and respected that when I said “I can’t run and press off through my ankle and now it feels weird in my thigh and to lift my hip, and my hands are now weak”, it was indeed a problem, even when my bloodwork and other biomarkers and clinical exams were mostly normal. (Except for my lungs, the canary in the coal mine for me, and a few sporadic blood biomarkers showing immune shenanigans, most of my data makes me look like the healthiest horse standing outside of the glue factory. I look sick compared to healthy horses, but I look too healthy compared to the horses going into the glue factory. So to speak.)

It’s exhausting to not gaslight myself, coming out of doctor’s appointment after doctor’s appointment where they repeat “you remain a mystery” while also doing everything they can to help to try to diagnose the exhausting, now-familiar thing that evades naming, evades mechanistic understanding, and evades effective curative treatment. They’re doing everything they can despite the fact that they can’t provide answers. Nor can I. I have to hold on to my data (experiential, lived, wearable, and the few lab results and pulmonary function testing that clearly show the level of the problem) tightly and push back against gaslighting myself.

But while it’s all exhausting, it has slowly shifted from foreign to familiar.

I am grateful for the familiarity in a way, because with familiarity comes a newly developed language to put words to the indescribable; a reinforced skill for adaptation and new ‘hacks’ and discovered instruments of freedom; and a commitment to find the glimmers of joy buried under all the exhaustion.

My newly developed language is evolving, because I constantly test this new language on my family and friends in the know. I have to differentiate how this impacts my muscles, especially because I come from a land of ultrarunners who specifically train in discomfort for the purpose of being able to tolerate discomfort in endurance activities. When I say something bothers me, it means it’s intolerable and not what a healthy body would experience in terms of discomfort. I know what tired, sore muscles feel like (hello, 82 miles of run/walking or 6 hours of cross country skiing 50 kilometers/31 miles) and what acute muscle damage feels like from physical activity. This is not that. It’s struggling to initiate a muscle movement, but still being able to move, even though it progressively feels like moving through jello. It’s not something that anyone I know, or I when healthy, experienced, and so I have to figure out and evolve the ways to describe it. Mostly, I found success in describing the consequences of what is happening, when I can’t run and I find it more challenging to walk and hike, even though I can still do those things. Those are things that people can understand, and understand that it’s important to me that I can’t do them and/or that these activities are immensely harder to accomplish, even if they don’t understand the sensation causing that outcome. I can generally describe having an autoimmune disease that affects my muscles, and that’s enough (people understand autoimmune diseases) to be understood.

Like Icarus flying too close to the sun (analogy at top of mind from recently reading to my nephew a Percy Jackson book…), it’s like my muscles are melting, but not to the point of my falling into the ocean. And that’s where healthcare providers most usually see patients, when they’re about to or have hit the ocean when their wings (muscles) fully stop working. I am still flying, but less high, a little melty, a little wonky. I know something is clearly wrong and I see the ocean and where things will go without a solution. But even by flying lower (aka, doing less, stopping running, etc), I’m still melting – it’s still happening, and limiting my physical activity or activities of daily living doesn’t change the trajectory.

Thus, the reinforced skills for adaptations. I learned a lot from my decades of type 1 diabetes and having to “DIY” things myself outside of the healthcare solution. I’m quicker to go from “ugh, this is a problem” to wondering about possible solutions. This is everything from changing how I sit (different chairs, with cushions) or lie down to work (with my laptop on a stand to reduce neck muscle strain and a separate bluetooth keyboard and trackpad so I can iterate positions as needed) to bracing early and often (ankle braces, a back brace, a foam neck brace) to a variety of things to lower the challenge to my hands. This includes using little slide tins for meds instead of flip top containers, because even the easy to open containers sometimes bother my hands. I also found a ski carrying strap so I can carry my cross country skis in the winter over my shoulder with the strap, rather than holding them in my hands. Sometimes I ask Scott to make my dinner or prep things (like cutting fruit in advance) so I don’t make my food choices based on not wanting to use up my hand energy to prep the food rather than for eating it. (Ongoing shout out to Scott, who epitomizes the ultimate supportive partner/husband; if he gets annoyed at anything, it’s my occasional hesitation or resistance to asking him to do more, because I worry about asking him to do too much. He recently spontaneously reminded me that I am a sail…and that always helps.)

It’s important for me to also remember that every bit helps and it doesn’t have to cure but that doesn’t mean it won’t help. Because the help adds up. But it requires pushing back the mental knee-jerk response of saying “that won’t help” because it’s not a cure for the root of the problem. Nope, no cures here. But that doesn’t mean a little bit of help for a little corner of the problem isn’t worth trying. Usually, 2 out of 3 times that little bit of help is a huge relief and reduces the physical burden, even if it’s ‘small’ like something for my hands. Sometimes it’s only a little bit of a help, and so we keep looking for better solutions for that particular challenge. Sometimes I can’t adapt a solution and I adapt my behavior. But my success rate has gone up, and that is great knowing that I can adapt solutions to fit my needs, even though sometimes it gets overwhelming to think about the volume of adaptations, especially when comparing it to a few years ago of how I lived and locomoted and worked.

Because the adaptations mean I can continue to find the glimmers of joy in life. No, I can’t run and I hike and walk more slowly, but I can still get out into the trees and under the blue skies and sunshine and feel the breeze on my face as I move through space. On days I choose to rest, I can sit out on the porch, sometimes braced, sometimes reclined in a chair with pillows, and watch the kittens sun themselves or jump up and stretch out on my legs. I can still spend time with family and friends, enjoying what I can still do with my ten niblings (nieces and nephews) and honorary niblings in my life. I can remind myself that while it feels like I’m falling out of the sky and I am dipping down, I have (hopefully) miles to go before I hit the ocean and stop flying at all. The delta in altitude sucks, especially in comparison to what I could do before, but with less comparison I can find more glimmers of joy both now and on the horizon. There is still a lot I can (and do) do, even as the list of things I can’t do or must adapt grows.

Exhausting and familiar, the experience of living with autoimmune diseases, a blog by Dana M. Lewis on DIYPS.org

If you find yourself in the exhausting-but-foreign space of a new or suspected autoimmune disease, it sucks. I’m sorry. I wish I could help. (And if I can help you, let me know.) But I hope it helps you to know that you are not alone. That yes, it does suck, but there is some solace when it turns from completely foreign to somewhat familiar, even if that doesn’t mean that it got easier or got better. But maybe it’ll be more known, maybe you’ll find more adaptations, and maybe you’ll still be able to find some glimmers of joy.

I did (I have), and I hope you do, too.

Passive Impact (You Can Optimize For That)

One of the popular use cases of AI agents is building custom software for other people and selling it cheaper than big corporate software, while still getting a really good return on your time. A lot of people are quickly making good chunks of money doing that, and automating their workflows so they can make money while they sleep, so to speak. It’s often referred to as passive income, whether that’s investments or a product or business that can generate income for you without you having to do everything.

Some of us, though, are taking a different approach. It’s not a new one, although AI is helping us scale our efforts. This type of approach has been common in open source software for decades. The idea is that you can build something that will help other people…including, yes, while you sleep. Instead of passive income, it’s passive impact.

We can think about using our time, skills, and the output of our work not for financial profit but for people’s benefit, reaching more people and being useful to people we don’t even know. The goal isn’t to make money; it’s to help more people.

Some people do pursue financial income to make an impact. I’m not saying not to do that, if that’s your path of choice. You can maximize your income, then donate to causes you care about. It’s a great strategy for a lot of people who have lucrative careers, and there are a lot of causes (like Life For a Child, which we estimate is the most cost-effective way to help people with diabetes worldwide) that can scale their work with your help through financial donations.

Other people may not have the same income-earning potential, or have chosen less financially lucrative careers where their work makes a difference, or they volunteer their time and elbow grease outside of work to make a difference.

Both of those are great. Yet I’m saying “yes, and”: there’s a third option we should talk about more in general society: scaling impact asynchronously and building things for passive impact.

These are things that don’t just solve a one-time problem for one person. They can solve a one-time problem for a lot of people, or a recurring problem for other people.

They scale.

They persist.

What are some examples of passive impact? Think about things that can run without you.

They don’t require a calendar invite or a Zoom link. They don’t need a customer service rep. They just… work. For someone. Somewhere. Every day.

Note that these tools don’t have to work for EVERYONE. Most of my stuff is considered “niche”. I was asked countless times early on why I thought OpenAPS would work for everyone with diabetes. People were surprised when I said I didn’t think it would work for everyone, but it would probably work for everyone who wanted it. (Then we did a lot of research and proved that). Not everyone needed or wanted it, but that doesn’t mean that it wasn’t worthwhile to build for me, or me plus the first dozen, or me plus the now tens of thousands of people using the OpenAPS algorithm in their AID system. But that’s “just” a “small fraction” of “only people with type 1 diabetes” who are “a tiny part of the population of the world”. It doesn’t matter; those people still matter and they (and I!) benefitted from that work, so it was worth doing. The impact scaled.

(You don’t have to quantify it all, but some metrics are helpful)

As someone who builds a lot of things for passive impact, I find it helpful to have data or some kind of analytics to see use of what I built over time. It’s useful for identifying areas to improve, such as where people get stuck using a feature or hit a bug, and also for reinforcing the scale of your impact.

I love waking up and seeing the volume of meal data that’s been processed through Carb Pilot or PERT Pilot (measured via API use metrics), to know that while I was sleeping on the west coast of the US, several people woke up (likely in Europe or elsewhere) and used one of my apps to estimate what they had for breakfast. It’s great reinforcement, and you can also see whether you’re gaining exponential growth (in terms of overall usage or new users) over time, and perhaps consider whether you should do a little more sharing about what you’ve built so it can reach the right people before it takes off on its own. Again, not for profit, but for helping.

(But passive impact doesn’t mean passive effort)

Creating something helpful is not passive in the beginning. It takes work, and elbow grease, to understand the problem you are (or someone else is) facing. To be able to determine a useful solution. To build, or write, something that other people can use, without needing to explain it every time. To deploy or share or host it in a way that is accessible and usable long-term. Of course, there are increasing numbers of tools (like LLMs – here are some of my tips for getting started if you’re new to prompting) that can help you get started more quickly, or find a fix to a bug or project blocker, or try something new you didn’t know how to do before.

But once it’s live, the math changes. One hour of your time can help hundreds of people, without requiring hundreds of hours. Often it takes more than one hour, but nowadays it’s a lot less time than it would have been otherwise. And more likely, you may find yourself spending multiple hours building something and getting frustrated (well, I often get frustrated) at “how long it’s taking,” then realizing that if you don’t build it, no one will, and it wouldn’t get built at all. So it’s worth the time it’s taking to build, even if it’s longer than expected.

That’s how a lot of my projects started: I needed something, I built it, and then I realized others needed it too. So I built (or wrote) it and shared it.

That’s passive impact, and it adds up.

Passive impact: Creating and building to help people you don’t know in ways that persist without always requiring your time or presence

I’d love to see more of this in the world. And I’d love to see an understanding that this IS the goal, not financial outcomes, and that it’s a valid and celebratable goal. (This comment is motivated by someone asking me in recent weeks how much in royalties I’m getting from the open source code we released 10 years ago, intentionally free, with the goal of companies using it!) But preferably not followed with “I could never do that”, because, of course, you could. You can. You…should? Maybe, maybe not. But hopefully you’ll think about it in the future. It’s not “either or” with financial income, either. You can do both! Society spends a lot of time talking about how to earn money passively, but not nearly enough time thinking about how we can create value for others passively. Especially in health, technology, and research spaces (fields where gaps are common and timely help matters), this way of thinking can change not just how we build, but who we build for. We can bring more people into using or building or doing, whether it’s active or passive. And we all win as a result.

Try, Try Again with AI

If you’ve scoffed at, dismissed, or tried using AI and felt disappointed in the past, you’re not alone. Maybe the result wasn’t quite right, or it missed the mark entirely. It’s easy to walk away thinking, “AI just doesn’t work.” But like learning any new tool, getting good results from AI takes a little persistence, a bit of creativity, and the willingness to try again. Plus an understanding that “AI” is not a single thing.

AI is not magic or a mind reader. AI is a tool. A powerful one, but it depends entirely on how you use it. I find it helpful to think of it as a coworker or intern that’s new to your field. It’s generally smart and able to do some things, but it needs clear requests and directions on what to do. When it misses the mark, it needs feedback, or for you to circle around and try again with fresh instructions.

If your first attempt doesn’t go perfectly, it doesn’t mean the technology is useless, just like your brand new coworker isn’t completely useless.

Imperfect Doesn’t Mean Impossible

One way to think of AI is as a new kitchen gadget. Imagine that you get a new mini blender or food processor. You’ve never made a smoothie before, but you want to. You toss in a bunch of ingredients and out comes…yuck.

Are you going to immediately throw away the blender? Probably not. You’re likely to try again, with some tweaks. You’ll try different ingredients, more or less liquid, and modify and try again.

I had that experience when I broke my ankle and needed to incorporate more protein in my diet. I got a protein powder and tried stirring it into chocolate milk. Gross. I figured out that putting it in a tupperware container and shaking it thoroughly, then leaving it overnight, turned out ok. Eventually, when I got a blender, I found it did even better. But the perfect recipe for me ended up being chocolate milk, protein powder, and frozen bananas. Yum, it made it like a chocolate milkshake texture and I couldn’t tell there was powder in it. But I still had to tweak things: shoving in large pieces of frozen banana didn’t work well with my mini blender. I figured out slices worked ok, and eventually Scott and I zeroed in on the most efficient approach: slice the banana before putting it in the freezer, so I had ready-to-go, right-sized frozen banana chunks to mix in.

I had some other flops, too. I had found a few other recipes I liked without protein powder: frozen raspberries or frozen pineapple + a Crystal Light lemonade packet + water are two of my hot-weather favorites. But one time it occurred to me to try the pineapple recipe with protein powder in it… ew. That protein powder did not go well with citrus. So I didn’t make that one again.

AI is like that blender. If the result isn’t what you wanted, you should try:

  • Reword your prompt. Try different words, or give it more context.
  • Give it more detail or clearer instructions. “Make a smoothie” is a little vague; “blend chocolate milk, protein powder, and frozen banana” gives more direction about what you want.
  • Try a different tool. The underlying models differ between LLMs, and the setup is different for every tool. How you use ChatGPT to do something may end up being different from using Gemini or Midjourney.

Sometimes, small tweaks make a big difference.

If It Doesn’t Work Today, Try Again Tomorrow (or sometime in the future)

Some tasks are still on the edge of what AI can do in general, or a particular model at that time. That doesn’t mean they’ll always be unable to do that task. AI is improving constantly, and quickly. What didn’t work a few months ago might work today, either in the same model or a new model/tool.

[Flowchart: “Try a task with AI.” If the result is not quite right: reword your prompt, give it more instructions, or try the prompt with a different model/tool. Still didn’t work? Park the project on a “try again later” list and try a different task or project, then loop back and try again. If the result is pretty good: keep going and use AI for other tasks and projects, and keep trying new tasks. Either way, keep experimenting.]

I’ve started making a list of projects or tasks I want to work on where the AI isn’t quite there yet and/or I haven’t figured out a good setup, the right tool, etc. A good example of this was when I wanted to make an Android version of PERT Pilot. It took me *four tries* over the course of an entire year before I made progress to a workable prototype. Ugh. I knew it wasn’t impossible, so I kept coming back to the project periodically and starting fresh with a new chat and new instructions to try to get going. Over the course of that year, the models changed several times, and the latest models were even better at coding. Plus, through practice, I got better at both prompting and troubleshooting when the output of the LLM wasn’t quite what I wanted. All of that added up over time, and I finally have an Android version of PERT Pilot (out on the Play Store now, too!) to match the iOS version of PERT Pilot. (AI also helped me quickly take the AI meal estimation feature from PERT Pilot, which is an app for people with EPI, and turn it into a general purpose iOS app called Carb Pilot. If you’re interested in getting macronutrient (fat, protein, carb, and/or calorie) counts for meals, you might be interested in Carb Pilot.)

Try different tasks and projects

You don’t have to start with complex projects. In fact, it’s better if you don’t. Start with tasks you already know how to do, but maybe want to see how the AI does. This could be summarizing text, writing or rewriting an email, or changing formats of information (e.g., JSON to CSV, or raw text into a table formatted so you can easily copy/paste it elsewhere).
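For instance, the JSON-to-CSV conversion mentioned above is the kind of task where you can easily sanity-check the AI’s output yourself. Here’s a minimal sketch in Python of what such a conversion does (the meal-log field names are hypothetical, just for illustration):

```python
import csv
import io
import json

def json_to_csv(json_text: str) -> str:
    """Convert a JSON array of flat objects into CSV text."""
    records = json.loads(json_text)
    if not records:
        return ""
    out = io.StringIO()
    # Use the first record's keys as the CSV header row.
    writer = csv.DictWriter(out, fieldnames=list(records[0].keys()))
    writer.writeheader()
    writer.writerows(records)
    return out.getvalue()

# Hypothetical meal-log data, just for illustration:
meals = '[{"meal": "breakfast", "carbs_g": 45}, {"meal": "lunch", "carbs_g": 60}]'
print(json_to_csv(meals))
```

Whether you write this yourself or ask an LLM to do the conversion, having a mental model of the expected output (a header row plus one line per record) makes it easy to spot when the result is off.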

Then branch out. Try something new you don’t know how to do, or tackle a challenge you’ve been avoiding.

There are two good categories of tasks you can try with AI:

  • Tasks you already do, but want to do more efficiently
  • Tasks you want to do, but aren’t sure how to begin

AI is a Skill, and Skills Take Practice

Using AI well is a skill. And like any skill, it improves with practice. It’s probably like managing an intern or a new coworker who’s new to your organization or field. The first time you managed someone, it probably wasn’t as good as after you had 5 years of practice managing people or helping interns get up to speed quickly. Over time, you figure out how to right-size tasks; repeat instructions or give them differently to meet people’s learning or communication styles; and circle back when needed when it’s clear your instructions may have been misunderstood or they’re heading off in a slightly unexpected direction.

Don’t let one bad experience with AI close the door. The people who are getting the most out of AI right now are the ones who keep trying. We experimented, failed, re-tried, and learned. That can be you, too.

If AI didn’t wow you the first time for the first task you tried, don’t quit. Rephrase your prompt. Try another model/tool. (Some people like ChatGPT; some people like Claude; some people like Gemini…etc.) You can also ask for help. (You can ask the LLM itself for help! Or ask a friendly human. I’m a friendly human you can try asking, for example, if you’re reading this post: DM or email me and tell me what you’re stuck on. If I can make suggestions, I will!)

Come back in a week. Try a new type of task. Try the same task again, with a fresh prompt.

But most importantly: keep trying. The more you do, the better it gets.

iOS and Android development experience for newbies

Vibe coding apps is one thing, but what about deploying and distributing them? That still requires some elbow grease, and I’ve described my experiences with both Apple and Google below for my first apps on each platform.

(I’m writing this from the perspective of someone familiar with coding primarily through bash scripts, JavaScript, Python, and various other languages, but with no prior IDE or mobile app development experience when I got started, as I typically work in vim through the terminal. I was brand new to IDEs and app development for both iOS and Android when I got started. For context, I have an iOS personal device.)

Being new to iOS app development

First, some notes on iOS development. If you only want to test your app on your own phone, it’s free: you can build the app in Xcode and, with a cable, deploy it directly to your phone. However, if you wish to distribute apps via TestFlight (digitally) to yourself or other users, Apple requires a paid developer account at $99 per year. (This cost can be annoying for people working on free apps who are doing this as not-a-business.) Initially, figuring out the process to move an app from Xcode to TestFlight or the App Store is somewhat challenging. However, once you understand that archiving the app opens a popup to distribute it, the process becomes seamless. Sometimes there are errors if Apple has new developer agreements for you to sign in the web interface, but the errors from the process just say your account is wrong. (So check the developer page in your account for things to sign, then try again once you’ve done that.) TestFlight itself is intuitive even for newcomers, whether that’s yourself or a friend or colleague you ask to test your app.

Submitting an app to the App Store through the web interface is relatively straightforward. Once you’ve got your app into TestFlight, you can go to app distribution, create a version and listing for your app, and add the build you put into TestFlight. Note that Apple is particular about promotional app screenshots and requires specific image sizes. Although there are free web-based tools to generate these images from your screenshots, if you use a tool without an account login, it becomes difficult to replicate the exact layout later. To simplify updates, I eventually switched to creating visuals manually using PowerPoint. This method made updating images easier when I had design changes to showcase, making me more likely to keep visuals current. Remember, you must generate screenshots for both iPhone and iPad, so don’t neglect testing your app on iPad, even if usage might seem minimal.

When submitting an app for the first time, the review process can take several days before beginning. My initial submission encountered bugs discovered by the reviewer and was rejected. After fixing the issues and resubmitting, the process was straightforward and quicker than expected. Subsequent submissions for new versions have been faster than the very first review (usually 1-3 days max, sometimes same-day), and evaluation by App Store reviewers seems more minimal for revisions versus new apps.

The main challenge I have faced with App Store reviews involved my second app, Carb Pilot. I had integrated an AI meal estimation feature into PERT Pilot and created Carb Pilot specifically for AI-based meal estimation and custom macronutrient tracking. Same feature, but plucked out to its own app. While this feature was approved swiftly in PERT Pilot as an app revision, Carb Pilot repeatedly faced rejections due to the reviewer testing it with non-food items. Same code as PERT Pilot, but obviously a different reviewer and this was the first version submitted. Eventually, I implemented enough additional error handling to ensure the user (or reviewer, in this case) entered valid meal information, including a meal name and a relevant description. If incorrect data was entered (identified by the API returning zero macronutrient counts), the app would alert users. After addressing these edge cases through several rounds of revisions, the app was finally approved. It might have been faster with a different reviewer, but it did ultimately make the app more resilient to unintended or unexpected user inputs.

Other than this instance, submitting to the App Store was straightforward: it was always clear what stage the process was at, and the reviewer feedback was reasonable.

(Note that some features like HealthKit or audio have to be tested on physical devices, because these features aren’t available in the simulator, so depending on your app functionality, you’ll want to test both with the simulator and with physical iOS devices to test those. Otherwise, you don’t have to have access to test on a physical device.)

Being new to Android app development

In contrast, developing for Android was more challenging. I decided to create an Android version of PERT Pilot after receiving several requests. However, this effort took nearly two years and four separate attempts to even get a test version built. I flopped at the same stage three times in a row, even with LLM (AI) assistance in trying to debug the problem.

Despite assistance from language models (LLMs), I initially struggled to create a functional Android app from scratch. Android Studio uses multiple nested folder structures with Kotlin (.kt) files and separate XML files. The XML files handle layout design, while Kotlin files manage functionality and logic, unlike iOS development, which primarily consolidates both into fewer files or at least consistently uses a single language. Determining when and where to code specific features was confusing. (This is probably easier in 2025 with the advent of agent and IDE-integrated LLM tools! My attempts were with chat-based LLMs that could not access my code directly or see my IDE, circa 2023 and 2024.)

Additionally, Android development involves a project-wide Gradle build file that handles various settings. Changes made to this file require manually triggering a synchronization process. Experienced Android developers might find this trivial, but it is unintuitive for newcomers to locate both the synchronization warnings and the sync button. If synchronization isn’t performed, changes can’t be tested, which blocks development.

Dependency management also posed difficulties, and that, plus the Gradle confusion, is what caused my issues on three different attempts. Initially, dependencies provided by the LLM were formatted incorrectly, breaking the build. Eventually (fourth time was the charm!), I discovered there are two separate Gradle files, and pasting dependencies correctly and synchronizing appropriately resolved these issues. This was partly user error: I kept thrashing around with the LLM trying to solve the dependency formatting, and on the fourth attempt I finally realized it was giving me syntax for a different Gradle language than the default in my Android Studio project, even though I had set up the project to match the LLM’s approach. It was like giving Android Studio Chinese characters to work with when it was expecting French. Regardless, this issue significantly impacted my development experience, and it was not intuitive to resolve within Android Studio even with LLM help. But I finally got past that to a basic working prototype that could build in the emulator!
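For anyone who hits the same wall, here’s a sketch of the kind of mismatch I mean, assuming the LLM emits Groovy DSL syntax while the project uses the Kotlin DSL (the dependency coordinate below is just a placeholder, not a real library I used):

```kotlin
// Module-level build.gradle.kts (Kotlin DSL): function-call syntax, double quotes
dependencies {
    implementation("com.example:some-library:1.2.3")
}

// The equivalent line in a Groovy DSL build.gradle file looks like:
//     implementation 'com.example:some-library:1.2.3'
// Pasting the Groovy form into a .kts file (or vice versa) breaks the build.
```

And either way, after editing a Gradle file, Android Studio still needs a Gradle sync (the “Sync Now” banner) before the change takes effect.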

I know Android has different features than iOS, so I then had to do some research to figure out which gestures differ (since I’m not an Android user), as well as some user research. We switched from swiping to long-pressing on things to show menu options for repeating/editing/deleting meals, etc. That was pretty easy to swap out, as were most of the other cosmetic aspects of building PERT Pilot for Android.

Most of the heartache came down to the setup of the project and then the exporting and deploying to get it to the Play Store for testing and distribution.

Setting up a Google Play developer account was quick and straightforward, despite needing to upload identification documents for approval, which took a day to be verified. There’s a one-time $25 fee for creating the developer account, which is a lot cheaper than Apple’s yearly $99 fee. But remember, above and below, that you pay with your time instead of money, in terms of a less intuitive IDE and web interface for moving forward with testing and deploying to production.

Also, you have to have hands-on access to a physical Android device. I had an old phone I was able to use for this purpose. You only have to do this once, during the account creation/approval process, so you may be able to use a friend’s device (it involves scanning a QR code and being logged in), but this is a bit of a pain if you don’t have access to a modern physical Android device.

I found navigating the Play Store developer console more complicated than Apple’s, specifically when determining the correct processes for uploading test versions and managing testers. Google requires at least 12 testers over a two-week testing period before allowing production access. Interestingly, it’s apparently pretty common to get denied production access even after you meet that stated minimum of 12 users, and the rejection reason was uninformative. Once denied, you then have a mandatory 14-day wait before you can apply again. I did some research and found that it’s probably because they want to see a lot of active use in that time frame, although they don’t say that anywhere. Instead of chasing other testers (people who would test for the sake of testing but not be people with EPI), I waited the 14 days, applied again, made it clear that people wouldn’t be using the app every day, and otherwise left my answers the same…and this time lucked into approval. This meant I was allowed to submit for review for production access to the Play Store. I submitted…and was rejected, because there are rules that medical and medical education apps can only be distributed by developers tied to organizations that have a business number and have been approved. What?!

Apparently Google has a policy that medical “education” apps must be distributed by organizations with approved business credentials. The screenshots sent back to me seemed to flag the button I had on the home screen that described PERT, dosing PERT, and information about the app. I am an individual (not an organization or a nonprofit or a company) making this app available for free to help people, so I didn’t want to have to go chase a nonprofit that might have Android developer credentials to tie my app to.

What I tried next was removing the button with the ‘education’ info, changing my app’s category from ‘medical’ to ‘health & fitness’, and resubmitting. No other changes.

This time…it was accepted!

Phew.

[Image: iOS or Android: which was easier? A newbie's perspective on iOS and Android development and app deployment, a blog by Dana M. Lewis from DIYPS.org]

TL;DR: As more and more people vibe code their way to having Android and/or iOS apps, it’s very feasible for people with less experience to build for both and to distribute apps on both platforms (the iOS App Store and the Google Play Store for Android). However, iOS has a higher up-front cost ($99/year) but a slightly easier, more intuitive experience for deploying your apps and getting them reviewed and approved. Conversely, Android development, despite its lower entry cost ($25 once), involves navigating a more complicated development environment, less intuitive deployment processes, and opaque requirements for app approval. You pay with your time, but if you plan to eventually build multiple apps, once you figure it out you can repeat the process more easily. Both are viable paths for app distribution if you’re building iOS and Android apps in the LLM era of assisted coding, but don’t be surprised if you hit bumps in the road when deploying for testing or production.

Which should you choose for your first app, iOS or Android? It depends on whether you have a fondness for either ecosystem; whether one is closer to development languages you already know; or whether one is easier to integrate with your LLM of choice. (I now have both working with Cursor, and both can also be pulled into the ChatGPT app.) Cost may be an issue if $99/year is out of reach as a recurring cost, but keep in mind that you’ll pay with your time for Android development even though it’s a one-time $25 developer account fee. You may also want to think about whether your first app is a one-off or whether you might build more apps in the future, which may change the context for paying the Apple developer fee yearly. Given the requirement to test with a certain number of users for Play Store access, it’s easier to go from testing to production/store publication with Apple than with Google, which might factor into subsequent app and platform decisions, too.

| | iOS | Android |
|---|---|---|
| Creating a developer account | Better | Takes more time (ID verification), one-time $25 fee, requires physical device access |
| Fees/costs | $99/year | Better: one-time $25 fee for account creation |
| IDE | Better | More challenging, with different languages/files, and requires Gradle syncing |
| Physical device access required | No (unless you need to test integrations like HealthKit, audio input, exporting files, or sending emails) | Yes, as part of the account setup, but you could borrow someone’s phone to accomplish this |
| Getting your app to the web for testing | Pretty clear once you realize you have to “archive” your app from Xcode, which pops up a window that guides you through sending to TestFlight. (Whether or not you actually test in TestFlight, you can then submit it for review.) Hiccups occasionally if Apple requires you to sign new agreements in the web interface (watch for email notifications; if you get errors about your account not being correct and you haven’t changed which account you’re logged into in Xcode, check the Apple developer account page on the web, accept agreements, then archive again in Xcode and it should clear the error and proceed). | A little more complicated: generating signed bundles, finding where that file was saved on your computer, then dragging and dropping or attaching it and submitting for testing. Also more challenging to manage adding testers and facilitate access to test. |
| Submitting for approval/production access | Better; easy to see what stage of review your app is in | Challenging to navigate where/how to do this in the web interface the first time, and Google has obtuse, unstated requirements about app usage during testing. Expect to be rejected the first time (or more) and have to wait 14 days to resubmit. |
| Distribution once live on the store | Same | Same |


Piecing together your priorities when your pieces keep changing

When dealing with chronic illnesses, it sometimes feels like you have less energy or time in the day to work with than someone without chronic diseases. The “spoon theory” is a helpful analogy to illustrate this. In spoon theory, each person has a certain number of “spoons” representing their daily energy available for tasks including activities of daily living, recreation, work, etc. For example, an average person might have 10 spoons per day and use just one spoon for daily living tasks. However, someone with chronic illness may start with only 8 spoons and require 2-3 spoons for those same daily tasks, leaving them with fewer spoons for other activities.
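The arithmetic behind the analogy is simple but stark. A minimal sketch, using only the illustrative numbers above:

```python
def spoons_remaining(daily_budget: int, daily_task_cost: int) -> int:
    """Spoons left for work, recreation, etc. after activities of daily living."""
    return daily_budget - daily_task_cost

# The example from spoon theory above:
print(spoons_remaining(10, 1))  # average person: 9 spoons left
print(spoons_remaining(8, 3))   # chronic illness day: 5 spoons left
```

Same day, same tasks, but nearly half the remaining capacity for everything else.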

I’ve been thinking about this differently lately. My priorities on a daily basis are mixed between activities of daily living (which includes things like eating, managing diabetes stuff like changing pump site or CGM, etc); exercise or physical activity like walking or cross-country skiing (in winter) or hiking (at other times of the year); and “work”. (“Work” for me is a mix of funded projects and my ongoing history of unfunded projects of things that move the needle, such as developing the world’s first app for exocrine pancreatic insufficiency or developing a symptom score and validating it through research or OpenAPS, to name a few.)

[Image: A raccoon juggles three spoons]

As things change in my body (I have several autoimmune diseases and have gained more over the years), my ‘budget’ on any given day has changed, and so have my priorities. During times when I feel like I’m struggling to get everything done that I want to prioritize, it sometimes feels like I don’t have enough energy to do it all, compared to other times when I’ve had sufficient energy for the same amount of daily activities, with extra energy left over. (Sometimes I feel like a raccoon juggling three spoons of different weights.)

In my head, I can think about how the relative amounts of energy and time (these are not always identical variables) are shaped differently or take up different amounts of space in a given day, which has only 24 hours. It’s a fixed budget.

I visualize activities of daily living as the smallest amount of time, but it’s not insignificant. It’s less than the amount of time I want to spend on work/projects, and my physical activity/recreation also takes up quite a bit of space. (Note: this isn’t going to be true for everyone, but remember for me I like ultrarunning for context!)

ADLs are green, work/projects are purple, and physical activity is blue:

Example of two blocks stacked on each other (green), four blocks in an l shape (purple), three blocks in a corner shape (blue)

They almost look like Tetris pieces, don’t they? Imagine all the ways they can fit together. But we have a fixed budget, remember – only 24 hours in the day – so to me they become Tangram puzzle pieces and it’s a question every day of how I’m going to construct my day to fit everything in as best as possible.

Preferably, I want to fit EVERYTHING in. I want to use up all available time and perfectly match my energy to it. Luckily, there are a number of ways these pieces fit together. For example, check out these different variations:

8 squares with different color combinations with a double block, an l shaped block, and a corner (three pieces) block. All squares are completely full, but in different combinations/layouts of the blocks

But sometimes even this feels impossible, and I’m left feeling like I can’t quite perfectly line everything up and things are getting dropped.

Example of a square where the blocks don't all fit inside the square

It’s important to remember that even if the total amount of time is “a lot”, it doesn’t have to be done all at once. Historically, a lot of us might work 8 hour days (or longer). For those of us with desk jobs, we sometimes have options to split this up: for example, working a few hours, then taking a lunch break or going for a walk / hitting the gym, then returning to work. Instead of a static 9-5, it may look like 8-11:30, 1:30-4:30, 8-9:30.

The same is true for other blocks of time, too, such as activities of daily living: they’re usually not all in one block of time, but often at least two (waking up and going to bed) plus sprinkled throughout the day.

In other words, it’s helpful to recognize that these big “blocks” can be broken down into smaller subunits:

Tangram puzzle pieces in different, smaller shapes (closeup)

And from there… we have a lot more possibilities for how we might fit “everything” (or our biggest priorities) into a day:

Showing full blocks filled with individual blocks, sometimes linked but in different shapes than the L and corner shapes from before.

For me, these new blocks are more common. Sometimes I have my most typical day, with a solid block of exercise and work just how I’d prefer them (top left). Other times, I have less exercise and several work blocks in a day (top right). Some days, I don’t have energy for physical activity, activities of daily living take more energy (or I have more tasks to do), and I also don’t have quite as much time for longer work sections (bottom left). There are also non-work days, where I prioritize getting as much activity as I can in a day (bottom right!). But in general, the point of this is that instead of thinking about the way we USED to do things or the way we think we SHOULD do things, we should think about what needs to be done; the minimum of how it needs to be done; and think creatively about how we CAN accomplish these tasks, goals, and priorities.

A useful trigger phrase to check is if you find yourself saying “I should ______”. Stop and ask yourself: should, according to what/who? Is it actually a requirement? Is the requirement about exactly how you do it, or is it about the end state?

“I should work 8 hours a day” doesn’t mean (in all cases) that you have to do it in 8 straight hours, with only a lunch break in between.

If you find yourself should-ing, try changing the wording of your sentence, from “I should do X” to “I want to do X because Y”. It helps you figure out what you’re trying to do and why (Y), which may help you realize that there are more ways (X or Z or A) to achieve it, so “X” isn’t the requirement you thought it was.

If you find yourself overwhelmed because it feels like you have a big block task that you need to do, this is also helpful then to break it down into steps. Start small, as small as opening a document and writing what you need to do.

My recent favorite trick that is working well for me is putting the item of “start writing prompt for (project X)” on my to-do list. I don’t have to run the prompt; I don’t have to read the output then; I don’t have to do the next steps after that…but only start writing the prompt. It turns out that writing the prompt for an LLM helps me organize my thoughts in a way that it then makes the subsequent next steps easier and clearer, and I often then bridge into completing several of those follow up tasks! (More tips about starting that one small step here.)

The TL;DR, perhaps, is that while we might yearn to fit everything in perfectly and optimize it all, it’s not always going to turn out like that. Our priorities change, our energy availability changes (due to health or kids’ schedules or other life priorities), and if we strive to be more flexible, we will find more options for fitting it all in.

Sometimes we can’t, but sometimes breaking things down can help us get closer.

Showing how the blocks on the left have fixed shapes and have certain combinations, then an arrow to the right with example blocks using the individual unit blocks rather than the fixed shapes, so the blocks look very different but are all filled, also.

What bends and what breaks and the importance of knowing the difference as a patient

As a patient, navigating healthcare often feels like decoding a complex rulebook. There are rules for everything: medication dosages, timing protocols, follow-up intervals. Some of these rules matter a lot, for either short term or longer term safety or health outcomes. But at other times… the rules seem senseless and are applied differently based on different healthcare providers within the same specialty, let alone across different specialities. As a patient, it’s easy to initially want to try to follow all rules perfectly, but feel unable to because the rules don’t make sense in a personal context. Over time, it can be hard to resist the conclusion that the rules don’t matter or don’t apply to you. The reality is somewhere in between. And it’s the in-between part that can be a challenging balance to figure out. Learning to navigate this balance requires understanding which rules are flexible and which aren’t.

I’ve learned there’s enormous value in digging into the “why” behind medical recommendations, when I can. Take acetaminophen (Tylenol), for example. There’s a clear, non-negotiable daily limit on the bottle because exceeding it is dangerous. The over-the-counter recommendation for Extra Strength acetaminophen (500 mg tablets) is no more than two tablets every six hours, not exceeding six tablets in 24 hours. Taken as two tablets per dose, that works out to three doses per day, even though every-six-hours spacing would technically allow four. The maximum daily limit (no more than six tablets) is set close to the safety threshold; exceeding it (e.g., eight tablets in 24 hours) increases the risk of severe liver damage.

Understanding this daily limit provides flexibility within safe boundaries (with the obvious caveat that I’m not a doctor and you should always talk to your own doctor). The “every 6 hours” recommendation ensures stable bioavailability of acetaminophen throughout the day and keeps total intake over the course of 24 hours safely below the maximum dosage line. Slight deviations in timing, such as taking a dose at 5 hours and 30 minutes instead of precisely 6 hours because you’re about to go to sleep, do not inherently cause harm, as long as the total intake remains within the safe daily limit. This is an example where a compliance-oriented guideline is designed primarily for optimal adherence at the population level, rather than marking an absolute safety threshold for each individual dose.
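To make the “daily budget, not per-dose timer” distinction concrete, here is a minimal sketch (not medical advice; the function and schedule format are hypothetical, with the numbers taken from the OTC Extra Strength label discussed above):

```python
# Hypothetical sketch (NOT medical advice): check a day's acetaminophen
# schedule against the OTC Extra Strength label limits described above.
TABLET_MG = 500
MAX_TABLETS_PER_DOSE = 2
MAX_TABLETS_PER_DAY = 6  # i.e., 3,000 mg in 24 hours

def within_daily_limit(doses):
    """doses: list of (hour_taken, tablets) tuples for one 24-hour day.
    Returns True if the schedule respects both the per-dose cap and
    the total daily budget."""
    total = sum(tablets for _, tablets in doses)
    per_dose_ok = all(tablets <= MAX_TABLETS_PER_DOSE for _, tablets in doses)
    return per_dose_ok and total <= MAX_TABLETS_PER_DAY

# Taking the last dose 30 minutes "early" (5.5h spacing) still fits the budget:
print(within_daily_limit([(8, 2), (13.5, 2), (21, 2)]))          # True
# A fourth full dose would blow past the daily limit:
print(within_daily_limit([(6, 2), (12, 2), (18, 2), (24, 2)]))   # False
```

The point of the sketch is that the hard constraint is the daily total, while the spacing guidance is about keeping levels stable, which is exactly the safety-versus-compliance distinction above.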

There are a lot of things like this in healthcare, but it’s not always explained to patients and patients may not always think to stop and question the why – or have the time and resources to do so – and figure it out from first principles to decide whether a deviation on the timing or amount is risky, or not.

But many healthcare rules aren’t as clearly defined by safety, as is the case of the acetaminophen example. Other rules are shaped by convenience, compliance, and practical constraints of research protocols.

Timelines like “two weeks,” “one month,” or “six months” for follow-up visits or medication titration points often reflect research convenience more than physiological necessity or even the ideal best practice. These intervals might mark study endpoints, or convenience to the healthcare system, but they don’t necessarily pinpoint the best timeline overall or the right timeline for an individual patient. It can be hard as a patient to decide if your experience is deviating from the typical timeline in a beneficial or non-optimal way, and if and when to speak up and try to better adjust to the system or adjust the system to meet your needs (such as scheduling an earlier appointment rather than waiting for a mythical 4 month follow up when it’s clear by months 2-3 that there is no benefit to a treatment because any impact should have been observed by then, even if it wasn’t significant).

As a patient, understanding when rules reflect safety versus when they’re crafted primarily for convenience is crucial, but hard. Compliance-driven rules can sometimes be thoughtfully bent. They might be able to be adjusted to better fit individual circumstances without compromising safety. For instance, a medication schedule set strictly every eight hours might be modified slightly based on daily activities or sleep patterns, provided the change remains within safe therapeutic boundaries over the course of 24 hours. (And patients should be able to discuss this with their doctors! But time availability or access may influence the ability to have these conversations up front or over time as conflicts or issues arise.)

Yet, bending rules requires confidence, critical thinking, and often significant resources, whether those are educational, emotional, health itself, or financial. It means feeling secure enough to question a provider’s advice or advocate for adjustments tailored to individual needs. It’s not always even questioning the advice itself, but checking your understanding and interpretation of how to apply it to your own life. Most providers understand that, and have no problem confirming your understanding. Other times, though, it can unintentionally cause conflict, if providers perceive it as questioning their judgement.

I’ve tripped into that situation at least once before, when I had a follow-up appointment with a non-MD clinical provider who wasn’t my main doctor at the practice, whom I was seeing for an acute short-term issue. She was describing a recommendation for an rx, specifically because I have diabetes. In the past, I have received over-treatment from most providers because of having type 1 diabetes, because many recommendations for non-diabetes management that have guidance for people with diabetes are based on an assumption of non-optimal healing and non-optimal glucose management. Given that at the time I was already using OpenAPS, with ideal glucose outcomes for years and no issues ever with reduced healing, I asked if the prescription recommendation would be given to the same type of patient without diabetes. I was trying to make an informed decision about whether the rx was appropriate for me: if it was recommended just because I had diabetes, it warranted additional discussion. It wasn’t about her clinical judgement per se, but about a shared decision-making process to right-size the next steps to my individual situation, rather than assuming that population-based recommendations for people with diabetes were automatically appropriate. From experience, I know that sometimes they are and sometimes they are not, so I’ve learned to ask these questions. However, some combination of the lack of an existing relationship with this provider, perhaps a poorly worded question, and other factors made the provider act defensive. I got the information I needed, decided the rx was appropriate for me and I would use it, and went about my business. But I later got a follow-up call from another MD (again, not my MD) who was defensive and checking why I was questioning this non-MD provider, as if I had been questioning her simply because the provider was a non-MD, which was not the issue at all!
It was about me and my care and making sure I understood the root of the recommendation: whether it was because of the health situation itself, or simply because I had diabetes. (It was the former, although it was initially articulated as being the latter.)

This situation has colored all future encounters with healthcare providers for me. Seeing new providers who I don’t have a longstanding relationship with makes me nervous, from learned, lived experience about how some of these one-off encounters have gone in the past, like the ones above.

Unfortunately, patients who push back against compliance-driven rules or simply ask questions to facilitate their understanding risk being labeled “non-compliant” or “non-adherent”, and sometimes we get labels on our chart for asking questions and being misunderstood, despite our good intentions. Such labels can have lasting impacts, influencing how future providers perceive our reliability and credibility and can cause subsequent issues for receiving or even being granted access to healthcare.

This creates a profound dilemma for patients: follow all rules precisely, without question, but potentially sacrificing optimal care, or thoughtfully question to bend them and risk being misunderstood or penalized for trying to optimize your individual outcomes when the one-size-fits-all approach doesn’t actually fit.

Breaking compliance-oriented rules isn’t about defiance. At least, it’s never been that way for me. It’s about personalization and achieving the best possible outcomes. But not every patient has the luxury of confidently navigating these nuances, and even when they do, as described above, it can still sometimes turn out not so well. Many patients don’t have the time, energy, resources, or privilege required to safely challenge or reinterpret guidelines. Or they’ve been penalized for doing so. Consequently, they may remain strictly compliant, potentially missing opportunities for better individual outcomes and higher quality of life.

Healthcare needs to provide clarity around which rules are absolute safety boundaries and which are recommendations optimized primarily for convenience or broad adherence for the safe general public use. Patients deserve transparency and support in discerning between what’s bendable for individual benefit and what’s non-negotiable for safety.

What bends, what breaks and the importance of understanding the difference in healthcare. A blog post by Dana M. Lewis from DIYPS.org

And: patients should not be punished for asking questions in order to better understand or check their understanding.

Knowing the difference on what bends and what breaks matters. But many patients remain caught in the delicate balance between bending and breaking, carefully evaluating risks and rewards, often alone.

Just Do Something (Permission Granted)

Just do it. And by it, I mean anything. You don’t need permission, but if you want permission, you have it.

If you’ve ever found yourself feeling stuck, overwhelmed (by uncertainty or the status of the world), and not sure what to do, I’ll tell you what to do.

Do something, no matter how small. Just don’t wait, go do it now.

Let’s imagine you have a grand vision for a project, but it’s something you feel like you need funding for, or partners for, or other people to work on, or any number of things that leave you feeling frozen and unable to do anything to get started. Or it’s something you want the world to have but it’s something that requires expertise to build or do, and you don’t have that expertise.

The reality is…you don’t need those things to get started.

You can get started RIGHT NOW.

The trick is to start small. As small as opening up a document and writing one sentence. But first, tell yourself, “I am not going to write an entire plan for Z”. Nope, you’re not going to do that. But what you are going to do is open the document and write down what the document is for. “This document is where I will keep notes about Plan Z”. If you have some ideas so far, write them down. Don’t make them pretty! Typos are great. You can even use voice dictation and verbalize your notes. For example “develop overall strategy, prompt an LLM for an outline of steps, write an email to person A about their interest in project Z”.

Thanks to advances in technology, you now have a helper to get started or tackle the next step, no matter how big or small. You can come back later and say “I’m not going to do all of this, but I am going to write the prompt for my LLM to create an outline of steps to develop the strategy for project Z”. That’s all you have to do: write the prompt. But you may find yourself wanting to go ahead and paste the prompt and hit run on the LLM. You don’t have to read the output yet, but it’s there for next time. Then next time, you can copy and paste the output into your doc and review it. Maybe there will be some steps you feel like taking then, or maybe you generate follow up prompts. Maybe your next step is to ask the LLM to write an email to person A about the project Z, based on the outline it generated. (Some other tips for prompting and getting started with LLMs here, if you want them.)

The beauty of starting small is that once you have something, anything, then you are making forward progress! You need progress to make a snowball, not just a snowflake in the air. Everything you do adds to the snowball. And the more you do, the easier it will get because you will have practice breaking things down into the smallest possible next step. Every time you find yourself procrastinating or saying “I can’t do thing B”, get in the habit of catching yourself and saying: 1) what could I do next? And write that down, even if you don’t do it then, and 2) ask an LLM “is it possible” or “how might I do thing B?” and break it down further and further until there’s steps you think you could take, even if you don’t take them then.

I’ve seen posts suggesting that increasingly funders (such as VCs, but I imagine it applies to other types of funders too) are going to be less likely to take projects seriously that don’t have a working prototype or an MVP or something in the works. It’s now easier than ever to build things, thanks to LLMs, and that means it’s easier for YOU to build things, too.

Yes, you. Even if you’re “not technical”, even if you “don’t know how to code”, or even if you’re “not a computer person”. Your excuses are gone. If you don’t do it, it’s because you don’t WANT to do it. Not knowing how to do it is no longer valid. Sure, maybe you don’t have time or don’t want to prioritize it – fine. But if it’s important to you to get other people involved (with funding or applications for funding or recruiting developers), then you should invest some of your time first and do something, anything, to get it started and figure out how to get things going. It doesn’t have to be perfect, it just has to be started. The more progress you make, the easier it is to share and the more people can discover your vision and jump on board with helping you move faster.

Another trigger you can watch for is finding yourself thinking or saying “I wish someone would do Y” or “I wish someone would make Z”. Stop and ask yourself “what would it take to build Y or Z?” and consider prompting an LLM to lay out what it would take. You might decide not to do it, but information is power, and you can make a more informed decision about whether this is something that’s important enough for you to prioritize doing.

And maybe you don’t have an idea for a project yet, but if you’re stewing with uncertainty these days, you can still make an impact by taking action, no matter how small. Remember, small adds up. Doing something for someone else is better than anything you could do for yourself, and I can say from experience it feels really good to make even small actions, whether it’s at the global level or down to the neighborhood level.

You probably know more what your local community needs, but to start you brainstorming, things you can do include:

  • Go sign up for a session to volunteer at a local food bank
  • Take groceries to the local food bank
  • Ask the local food bank if they have specific needs related to allergies etc, such as whether they need donations of gluten-free food for people with celiac
  • Go take books and deposit them at the free little libraries around your neighborhood
  • Sign up for a shift or get involved at a community garden
  • Paint rocks and go put them out along your local walking trails for people to discover
  • Write a social media post about your favorite charity and why you support it, and post it online or email it to your friends
  • Do a cost-effective analysis for your favorite nonprofit and share it with them (you may need some data from them first) and also post it publicly

Just-do-something-you-have-permission-DanaMLewis

I’ve learned from experience that waiting rarely creates better outcomes. It only delays impact.

Progress doesn’t require permission: it requires action.

What are you waiting for? Go do something.

The Cost-Effectiveness of Life for a Child – A Deep Dive into DALY Estimates and the 2025 Funding Gap

Life for a Child is an international non-profit organization that supports children with diabetes by providing insulin, test strips, and essential diabetes care to over 60,000 children in low-income countries who would otherwise have little to no access to treatment.

Without access to supplies and skilled medical care, children with type 1 diabetes (T1D) often die quickly, and with only intermittent access may die within a few years of diagnosis. In some countries, limited amounts and types of older insulins may be provided by the health system. In these ‘luckier’ countries, test strips are still not usually provided. Without regular blood glucose testing, children may survive into early adulthood, yet still experience early mortality due to long-term complications such as blindness, kidney failure, or amputations.

Life for a Child (LFAC) offers a lifeline, extending life expectancy and improving the quality of life for children at a remarkably low cost. Life for a Child also does incredibly critical work in improving care delivery infrastructures in each of these countries that they support. They work directly with local healthcare providers to co-develop critical education materials for young people living with diabetes. Further, they provide a support network to local healthcare providers and some governments. This is all to help improve sustainability of access to services, medications, and support for people with diabetes in the long run.

Scott and I have been supporting Life for a Child as our charity of choice for many years. As we wrote in our analysis here in 2017:

“Life for a Child seems like a fairly effective charity, spending about $200-$300/yr for each person they serve (thanks in part to in-kind donations from pharmaceutical firms). If we assume that providing insulin and other diabetes supplies to one individual (and hopefully keeping them alive) for 40 years is approximately the equivalent of preventing a death from malaria, that would mean that Life for a Child might be about half as effective as AMF, which is quite good compared to the far lower effectiveness of most charities, especially those that work in first world countries.”

We used some of GiveWell’s analyses to assess effective giving, especially comparing options like GiveDirectly or more specific charity options like AMF:

“For example, the Against Malaria Foundation, the recommended charity with the most transparent and straightforward impact on people’s lives, can buy and distribute an insecticide-treated bed net for about $5. Distributing about 600-1000 such nets results in one child living who otherwise would have died, and prevents dozens of cases of malaria. As such, donating 10% of a typical American household’s income to AMF will save the lives of 1-2 African kids *every year*.”

(Note: In addition to donations, I have also supported Life for a Child with my time, serving on the US-based Life for a Child US board as well as serving as the US representative on the international steering committee for Life for a Child.)

However, in 2025, Life for a Child faces an immediate and unexpected $300,000 funding shortfall, because a previously committed donor is no longer able to provide this donation. This funding was for test strips; without it, the number of strips provided per child will drop from three to two per day.

Further, Life for a Child has additional funding needs to continue expanding to support more children who are otherwise unsupported and going without critical supplies. (The room for funding is several orders of magnitude above this year’s funding gap.)

In order to assess how we (in a general sense, speaking of all of us) can fill this funding gap, and to understand whether this is still a cost-effective way to support people with diabetes, we wanted to revisit our analysis of how cost-effective Life for a Child is.

For background, I asked Graham Ogle, head of LFAC, for some numbers. These include:

  • Life for a Child currently supports 60,000 children (as of 2025)
  • The original expansion plan has a goal of supporting 100,000 children or more by 2030
  • The estimated spend per child is about $150 USD per year (slightly less than what Scott and I had estimated in 2017), or $160 USD if you incorporate indirect costs.

We used these numbers below to estimate the cost-effectiveness of Life for a Child’s interventions.

Estimating Life for a Child’s Cost per Disability-Adjusted Life Year (DALY)

The Disability-Adjusted Life Year (DALY) is the most commonly used metric in global health to capture both the years of life lost (YLL) due to premature death and the years lived with disability (YLD) due to a health condition, such as type 1 diabetes.

The goal of Life for a Child’s work is to reduce both of these by providing insulin and glucose monitoring as well as improved care necessary for improved health outcomes.

  1. Life for a Child support reduces Years of Life Lost (YLL) 

To estimate YLL reduction, we calculate the difference between the expected age at death for a child with T1D who receives no care versus a child receiving LFAC support:

  • Without Life for a Child :
    • In the worst-case scenario, children with T1D may die within 1-2 years due to lack of insulin, meaning an early death by age 10 instead of the typical life expectancy of 60 years in some of these countries. This results in 50 YLLs (60 – 10 = 50).
    • In countries where insulin is available but costly and/or glucose monitoring is not affordable and readily available, children may survive into their late 20s or 30s, but still experience significant complications, reducing life expectancy. In this scenario (minimal access to insulin, glucose monitoring, etc), we make a rough assumption that children with diabetes may survive into their mid to late 30s, therefore 25 YLLs is a reasonable estimate (60 – 35 = 25).
  • With Life for a Child :
    • Life for a Child’s program significantly improves both short-term and long-term survival. We assume that children supported by Life for a Child have the potential to live to an average life expectancy of 50-60 years (instead of dying prematurely due to untreated T1D), even when considering that LFAC only supports children into early adulthood (e.g. 25-30 years of age).

If we assume the average life expectancy for children newly diagnosed with T1D increases from 15-35 years to 50-60 years with standard Life for a Child support, that gives a savings of 25-35 YLLs (DALYs) per child, accounting for most of the uncertainty in our lifespan estimates above.
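The YLL arithmetic above can be sketched in a few lines (the assumptions and scenario pairings are this post's rough estimates, not Life for a Child's own model; the `yll` helper is hypothetical):

```python
# Rough sketch of the YLL estimates above, using this post's assumptions.
LIFE_EXPECTANCY = 60  # assumed typical life expectancy in these countries

def yll(age_at_death):
    """Years of life lost for a death at a given age (floored at zero)."""
    return max(LIFE_EXPECTANCY - age_at_death, 0)

# Without support: death by ~age 10 (no insulin) or ~age 35 (minimal access)
print(yll(10))  # 50 YLLs
print(yll(35))  # 25 YLLs

# YLL savings = YLL without support minus YLL with support.
# Pairing the scenarios as in the post gives the 25-35 DALY range:
print(yll(35) - yll(60))  # 25 DALYs saved (minimal access -> living to 60)
print(yll(15) - yll(50))  # 35 DALYs saved (early death -> living to 50)
```

Note that the savings reduce to the difference in age at death whenever both ages fall below the assumed life expectancy, which is why the exact life expectancy figure matters less than the survival gain itself.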

  2. Years Lived with Disability (YLD) Reduction

T1D also causes significant disability when people with T1D don’t have access to insulin and/or sufficient glucose monitoring and monitoring for early signs of complications, especially due to complications like blindness, kidney failure, and amputations. Each of these conditions brings about substantial life impairment.

  • Without Life for a Child:
    • Children with poorly supported T1D face a high likelihood of severe complications as they age. We estimate the disability weight (DW) for this scenario at 0.20, reflecting significant disability as a result of some of those complications.
  • With Life for a Child:
    • Access to insulin and glucose monitoring and healthcare monitoring drastically reduces the risk of complications. We estimate a DW of 0.05, which represents a much lower level of disability, especially in terms of future complications.

With these DWs, the reduction in YLD before premature death (20% – 5% = 15% over 5-30 years = 1-4 DALYs) and the 5% reduction in the YLL benefit (5% * 25-35 = 1-2 DALYs) partially cancel out, and don’t change the end result much. The net gain of 1-2 DALYs due to YLD reduction is smaller than the uncertainty range on the YLL benefit.

So for purposes of cost-effectiveness calculations, we’ll ignore YLD in the rest of this post and continue using the 25-35 DALYs per child figure.

  3. Total DALYs and Cost per DALY

For this section, we’ll assume the total impact of Life for a Child’s intervention per child from the calculations above is 25-35 DALYs.

Life for a Child’s cost per child in 2025 is approximately $150 per year (or $160 including indirect costs). If we estimate that most children receive support for about 10-15 years, the total cost per child is roughly $1,500–$2,250 over that period (or $1,600–$2,400 with indirect costs).

Thus, the cost per DALY for Life for a Child can be estimated as:

(Cost per child) / (DALYs saved per child)

Here are a variety of estimates for varying cost levels using the lower bound of 25 DALYs saved per child supported:

  • With $1,500 per lifetime per child ($150/year for 10 years) and 25 DALYs saved, that estimates $60 per DALY ($64 with indirect costs)
  • With $2,250 per lifetime per child ($150/year for 15 years) and 25 DALYs saved, that estimates $90 per DALY ($96 with indirect costs)
  • With slightly higher costs to assume the cost will rise over time of $175/year for 15 years, this is a higher estimated $2,625 per lifetime per child and 25 DALYs saved, estimating $105 per DALY.
  • With slightly higher costs to assume the cost will rise over time of $175/year for 20 years, this is a higher estimated $3,500 per lifetime per child and 25 DALYs saved, estimating $140 per DALY.
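The four estimates above can be reproduced with a small sketch (same assumptions as the bullets: the lower-bound 25 DALYs saved per child, and the stated annual costs and years of support; the `cost_per_daly` helper is hypothetical):

```python
# Sketch reproducing the cost-per-DALY estimates above.
DALYS_SAVED = 25  # lower-bound estimate of DALYs saved per child supported

def cost_per_daly(annual_cost, years_of_support):
    """Lifetime program cost per child divided by DALYs saved."""
    lifetime_cost = annual_cost * years_of_support
    return lifetime_cost / DALYS_SAVED

for annual, years in [(150, 10), (150, 15), (175, 15), (175, 20)]:
    print(f"${annual}/yr x {years} yrs -> ${cost_per_daly(annual, years):.0f} per DALY")
# $150/yr x 10 yrs -> $60 per DALY
# $150/yr x 15 yrs -> $90 per DALY
# $175/yr x 15 yrs -> $105 per DALY
# $175/yr x 20 yrs -> $140 per DALY
```

Using the upper-bound 35 DALYs saved would shrink every figure by a further ~30%, so these are conservative in that respect.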

Under conservative estimates, this places Life for a Child’s cost per DALY in the range of $60–$90, a remarkably cost-effective intervention. Even the higher estimates of $105–$140, which assume rising costs and more years of support, compare favorably to the most effective global health programs, including those recommended by GiveWell.

How did we come to this conclusion?

  • GiveWell estimates cash transfers through GiveDirectly result in roughly $1,000/DALY, based on welfare gains rather than direct health outcomes (so apples and oranges), but even comparing apples to oranges, we can estimate Life for a Child is more cost-effective than cash giving by at least a single-digit (e.g., 1-9x) factor.
  • GiveWell’s top charities are around $50-$100/DALY. Given that we were estimating $60-$140 across a wide swathe of assumptions, Life for a Child aligns with some of GiveWell’s top charities in terms of cost per DALY and thus “compares favorably” in our analysis.

Why You Should Donate to Life for a Child

The point of this post was for Scott and me to reassess the statement we have been making since ~2017: that Life for a Child is a remarkably cost-effective charity overall, and likely one of the most cost-effective charities for supporting people living with diabetes around the world who otherwise won’t have access (or regular access) to insulin and blood glucose testing.

Life for a Child has a DALY cost in the range of $60-$140 (reflecting current versus future cost increases), depending on which input variables you use, which makes it one of the best uses of global health funding available today.

Because of this reassessment, we also hope if you’ve read this far that you, too, will consider making a life-saving and life-changing donation for people with diabetes by donating to Life for a Child.

If you’re feeling overwhelmed with world events and want to make a tangible difference in people’s lives in a measurable way, consider donating to Life for a Child.

If you want to support people with diabetes in the most cost-effective way, so that your donation dollars make the biggest impact? Donate to Life for a Child.

Your donation saves – and changes – lives.

Life for a Child is a cost-effective charity supporting people with diabetes that needs your help. A blog post from Dana M. Lewis at DIYPS.org

(Thank you).

PS – feel free to reach out to me (Dana@OpenAPS.org) and/or Scott (Scott@OpenAPS.org) if you want to chat through any of the estimates or numbers in more detail and how we consider donations.