AI is often an accessibility tool, even if you don’t use it that way

Talking about AI (artificial intelligence) often veers conversations toward lofty, futuristic scenarios. But there’s a quieter, more fundamental way AI is making a big difference today: serving as an accessibility tool that helps many of us accomplish tasks more efficiently and comfortably than would otherwise be possible. Often, it enables us to complete tasks we might otherwise avoid or be unable to do at all.

One way to think about AI is as the ultimate translator. But I don’t just mean between languages: I mean between ways of interacting with the world.

Imagine you’re someone dealing with a repetitive stress injury like carpal tunnel syndrome, making prolonged typing painful or even impossible. Traditionally, you might use dictation software to turn spoken words into text, alleviating physical strain. No issues with that, right? But somehow, suggesting people use AI tools to do the same thing (dictation plus cleanup of the dictated text) prompts skepticism about “cheating” the “correct” way of doing things. In the carpal tunnel scenario, that reaction is less likely, but in many other situations the idea of people using AI draws knee-jerk outrage and disgust.

In reality, there are three ways to accomplish a note-taking task:

  • A human types notes
  • A human speaks notes to a voice dictation tool
  • A human speaks notes to an AI-based dictation tool that, when prompted, can also clean up and transform the notes into different formats

All three introduce the possibility of errors. The difference is how we perceive and tolerate those errors: the perception often reflects bias rather than logic.

For example, the focus in the third case is disproportionately on errors, where errors might not even come up as a concern in the other two. OMG, the AI might do something wrong! It might hallucinate an error! Well, yes, it might. But so does the dictation software. There was similar outrage years ago when voice dictation software became common for doctors to dictate their chart notes. And yes, there were and are errors there, too. And guess what? Humans typing notes? ALSO RESULTS IN ERRORS. The important thing here is that all three cases (human alone, human plus basic tech, human plus AI) result in the possibility of errors.

(I actually see this frequently: of three different providers I see, one uses voice dictation to write my chart notes, introducing errors; one uses AI-assisted notetaking, occasionally introducing errors; and one manually types all of their notes…still occasionally introducing errors. They’re typically different types of errors, but the result is the same: errors!)

This is more about cultural change than it is about the errors in and of themselves. If people actually cared about the errors, we would be creating pathways for humans to fix them, such as enabling wiki-style editing requests of medical charts so that patients and providers can collaboratively keep medical records and chart notes free of errors before they propagate over time. This almost never happens: chart notes can only be corrected by providers, and patients often have to use scarce visit time if they care enough to request a correction. Instead, most discussions focus on where theoretical errors came from rather than on practical approaches to fixing real-world errors.

Back to AI specifically:

Note-taking is a simple example of what AI can be useful for, but there are more examples of transformation, such as converting data into different formats. Converting data from JSON to CSV or vice versa is a task that can be tedious or impossible for some people. Sure, it could be done manually; or with hand-written scripts for transforming the data; or by having an AI write the scripts to transform that data; or by having the AI both write and execute the scripts. AI can often do all of these steps quickly and efficiently, triggered by a plain-language request (either typed or dictated by voice).
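As a sketch of what that kind of transformation looks like, here is a minimal Python example (standard library only; the field names and data are hypothetical) of the sort of script an AI might write, or write and run for you, to turn a JSON array of records into CSV:

```python
import csv
import io
import json

def json_to_csv(json_text: str) -> str:
    """Convert a JSON array of flat objects into CSV text.

    Assumes every record shares the keys of the first record;
    real-world data usually needs more cleanup than this.
    """
    records = json.loads(json_text)
    out = io.StringIO()
    writer = csv.DictWriter(out, fieldnames=list(records[0].keys()))
    writer.writeheader()
    writer.writerows(records)
    return out.getvalue()

# Hypothetical example data:
print(json_to_csv('[{"hike": "Rattlesnake Ledge", "miles": 4.0}]'))
```

The point isn’t the code itself: it’s that a plain-language request can now stand in for writing (or even knowing how to write) this kind of script.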

Here are other examples where AI can be an accessibility tool:

  • A visually impaired user has AI describe images, generate ALT text, and/or convert unreadable PDFs into something their screen reader can use. They might also have the AI summarize the text first, to see whether they want to spend the time screen-reading all of it.
  • Individuals with mobility limitations can use AI to control their home or work environments, pairing together tools in ways that weren’t possible before, and can brainstorm solutions to problems they previously didn’t know how to solve or didn’t have the tools to solve or build.
  • People in a country where they don’t speak the language and need to access the healthcare system can benefit from real-time AI translation when no medical interpreter services are available, if they bring their own AI translator. US healthcare providers are generally prohibited from using such tools and are forced to forego translation entirely when human translators are not available.
  • People with disabilities (whether mental or physical) can use AI to help them understand and act on important healthcare or insurance forms and paperwork.

Personally, I keep finding endless ways where AI is an accessibility tool for me, in large and small ways. And the small ways often add up to a lot of time saved.

One frequent example where I keep using it is finding and customizing hikes. Last year, I had to change my exercise strategy, which included hiking more instead of running. Increasingly since then, though, I have also had to modify which hikes I’m able to do, including factoring in the terrain. (Super rocky or loose-rock terrain is now challenging, whereas it used not to be a limitation.) I used to spend a lot of time researching hikes by location, then round-trip distance, then elevation gain, then reading trail descriptions and trail reports from recent weeks and months to ensure a hike would be a good candidate for me. This took quite a bit of time to do manually (for context, we did 61 hikes last year!).

But with AI, I can give an LLM the parameters of geography (e.g., hikes along the I-90 corridor, or less than two hours from Seattle), round-trip mileage and elevation limits, *and* ask it to search and exclude any hikes with long sections of loose, rocky, or technical terrain. I can also say things like “find hikes similar to the terrain of Rattlesnake Ledge”, which is a smooth-terrain hike. This quickly produces a short list of hikes that already meet all my criteria, so I can spend my time picking between them and confirming the AI’s assessment with my own quick read of the trail description and trail reviews.

It’s a great use of AI to get burdensome tasks done more quickly, and it has actually found several great hikes that I wouldn’t have found by manual searching, expanding my ‘horizons’ even when it feels like I’m being limited by the increasing number of restrictions/criteria I need to plan around. Which is awesome. As hiking itself gets harder, the effort it takes to find doable hikes with my new criteria is actually much less. That means the cost-effort ratio of finding and doing things continues to evolve, so hiking remains something I do rather than something I give up completely (which would drastically reduce my physical activity levels).

Whenever I see knee jerk reactions along the lines of “AI is bad!” and “you shouldn’t use it that way!” it often comes from a place of projecting the way people “should” do things (in a perfect world). But the reality is, a lot of times people can’t do things the same way, because of a disability or otherwise.

AI often gives us new capabilities to do these things, even if it’s different from the way someone might do it manually or without the disability. And for us, it’s often not a choice of “do it manually or do it differently” but a choice of “do it with AI, or don’t do it at all because it’s not possible”. Accessibility can be about creating equitable opportunities, and it can also be about preserving energy, reducing pain, enhancing dignity, and improving quality of life in the face of living with a disability (or multiple disabilities). AI can amplify our existing capabilities and super powers, but it can also level the playing field and allow us to do more than we could before, more easily, with fewer barriers.

Remember, AI helps us do more – and it also helps more of us do things at all.

The data we leave behind in clinical trials and why it matters for clinical care and healthcare research in the future with AI

Every time I hear that all health conditions will be cured and fixed in 5 years with AI, I cringe. I know too much to believe in this possibility. But this is not an uninformed opinion or a disbelief in the trajectory of AI takeoff: it is grounded in the reality of how clinical trial data is reported and published, and the limitations of the datasets we have today.

The sad reality is, we leave so much important data behind in clinical trials today. (And in every clinical trial done before today.) An example of this is how we report “positive” results for a lot of tests or conditions, using binary cutoffs and summary reporting without reporting average titres (levels) within subgroups. This affects our ability to understand and characterize conditions, to compare overlapping conditions with similar results, and to use this information clinically alongside symptoms and presentations of a condition. It’s not just a problem for research; it’s a problem for delivering healthcare. I have some ideas of things you (yes, you!) can do starting today to help fix this problem. It’s a great opportunity to do something now to fix the future (and today’s healthcare delivery gaps), not just complain that it’s someone else’s problem. If you contribute to clinical trials, you can help solve this!

What’s an example of this? Imagine an autoantibody test result, where values >20 are considered positive. That means values of 21, 53, or 82 are all considered positive. But…that’s a wide range, and a much wider spread than is possible with “negative” values, which could be 19, 8, or 3.

When this test is reported by labs, they give suggested cutoffs to interpret “weak”, “moderate”, or “strong” positives. In this example, a value of 20-40 is a “weak” positive, a value between 40-80 is a “moderate” positive, and a value above 80 is a strong positive. In our example list, the positives fall at barely a weak positive (21), a solidly moderate positive in the middle of that range (53), and a strong positive just above that cutoff (82). The weak positive could be interpreted as a negative, given variance in the test of 10% or so. But the problem lies in the moderate positive range. Clinicians are prone to say it’s not a strong positive, therefore it should be considered possibly negative, treating it more like the 21 value than the 82 value. And because there are no studies with actual titres, it’s unclear whether the average or median “positive” reported is actually above the “strong” (>80) cutoff or falls in the moderate positive category.
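To make the cutoff scheme concrete, here’s a small Python sketch of the illustrative thresholds in this scenario (>20 positive; 40 and 80 dividing weak, moderate, and strong). These are the example cutoffs from this hypothetical test, not any particular lab’s real values:

```python
def classify_titre(value: float) -> str:
    """Bucket an autoantibody titre using the example cutoffs
    from the scenario above (>20 positive; 40 and 80 splitting
    weak/moderate/strong). Real assays publish their own cutoffs."""
    if value <= 20:
        return "negative"
    if value <= 40:
        return "weak positive"
    if value <= 80:
        return "moderate positive"
    return "strong positive"

for v in (19, 21, 53, 82):
    print(v, classify_titre(v))
```

Note how much information the bucketing throws away: 21 and 82 are both simply “positive” in a binary report, which is exactly the problem described here.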

Also imagine the scenario where some other conditions occasionally have positive levels of this antibody, but again the titres aren’t actually published.

Here’s how clinicians in the real world interpret this data today:

  • 21: positive, but within 10% of the cutoff, so it doesn’t necessarily mean true positivity
  • 53: moderate positive, but it’s not strong, and we don’t have median data for positives, so clinicians lean toward treating it as negative and/or as an artifact of a co-condition, given 10% prevalence in the other condition
  • 82: strong positive, above the cutoff, easy to treat as positive

Now imagine these values with studies that have reported that the median titre in the “positive” >20 group is actually a value of 58 for the people with the true condition.

  • 21: would still be interpreted as likely negative, even though it’s technically above the positive cutoff (>20), again because of the 10% error and how far it is below the median
  • 53: moderate positive, but within 10% of the median positive value. Even though it’s not above the “strong” cutoff, it’s more likely to be perceived as a true positive
  • 82: still a strong positive, above the cutoff, no change in perception

And what if the titres in the co-condition have a median value of 28? This makes it even more likely that if we know the co-condition value is 28 and the true condition value is 58, then a test result of 53 will be more correctly interpreted as the true condition rather than providing a false negative interpretation because it’s not above the >80 strong cutoff.

Why does this matter in the real world? Imagine a patient with a constellation of confusing symptoms and their positive antibody test (which would indicate a diagnosis for a disease) is interpreted as negative. This may result in a missed diagnosis, even if this is the correct diagnosis, given the absence of other definitive testing for the condition. This may mean lack of effective treatment, ineligibility to enroll in clinical trials, impacted quality of life, and possibly negatively impacting their survival and lifespan.

If you think I’m cherry-picking a single example, you’re wrong. This has played out again and again in my last few years of researching conditions and autoantibody data. Another real-world scenario: I had a slight positive value (e.g., just above a cutoff of 20) for a test that the lab reported is correlated with condition X. My doctor was puzzled, because I have no signs of condition X. I looked up the sensitivity and specificity data for this test: it has only 30% sensitivity and 80% specificity, whereas 20% of people with condition Y (which I do have) also have this antibody. There is no data on the median value of positivity in either condition X or condition Y. In the context of the two pieces of information we do have, it’s easier to interpret and guess that this value is not meaningful as a diagnostic for condition X, given the lack of matching symptoms. Yet the lab reports the association with condition X only, even though it’s only slightly more probable for condition X to have this autoantibody compared to condition Y and several other conditions. I went looking for research data on raw levels of this autoantibody, to see where the median value is for positives with condition X and Y, and again, like the above example, there is no raw data, so it can’t be used for interpretation. Instead, it’s summaries of summaries built on a simple binary cutoff (>20), which makes clinical interpretation really hard and makes it impossible to research and meta-analyze the data to support individual interpretation.

And this is a key problem or limitation I see with the future of AI in healthcare that we need to focus on fixing. For diseases that are really well defined and characterized, where we have in vitro or mouse models (etc.) to use for testing diagnostics and therapies: sure, I can foresee huge breakthroughs in the next 5 years. However, many autoimmune conditions are not well characterized or defined, and the existing data we DO have is based on summaries of cutoff data like the examples above, so we can’t use them as endpoints to compare diagnostics or therapeutic targets. We need to re-do a lot of these studies and record and store the actual data so AI *can* do all of the amazing things we hear about.

But right now, for a lot of things, we can’t.

So what can we do? Right now, we actually CAN make a difference on this problem. Gnashing your teeth about the change in the research funding landscape? You can take action right now by re-evaluating your current and retrospective datasets and your current studies to figure out:

  • Where you’re summarizing data, and where raw data needs to be cleaned, tagged, and stored so we can use AI with it in the future to do all these amazing things
  • What data you could tag and archive now that would be impossible or expensive to regenerate later
  • Whether you’re cleaning and storing values in formats that AI models could work with in the future (e.g., structured tables, CSVs, or JSON files)
  • Most simply: how you’re naming and storing the files with data so you can easily find them in the future. “Results.csv” or “results.xlsx” is maybe not ideal for helping you or your tools find this data later. How about “autoantibody_test-X_results_May-2025.csv” or similar?
  • Where are you reporting data? Can you report more data, as an associated supplementary file or a repository you can cite in your paper?
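As one small, hypothetical sketch of the tagging-and-naming ideas above, here’s a Python snippet that writes raw titre values to a descriptively named, dated CSV. The naming pattern, column name, and values are all illustrative, not a standard:

```python
import csv
from datetime import date

def archive_titres(test_name: str, values: list[float], folder: str = ".") -> str:
    """Write raw titre values to a descriptively named, dated CSV
    so the data stays findable and machine-readable later.
    (The filename pattern here is just one reasonable option.)"""
    stamp = date.today().strftime("%Y-%m")
    path = f"{folder}/autoantibody_{test_name}_results_{stamp}.csv"
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["titre"])          # one labeled column of raw values
        writer.writerows([[v] for v in values])
    return path

# Hypothetical raw values from a study cohort:
print(archive_titres("test-X", [21, 26, 53, 58, 60, 82, 92]))
```

A few lines like this at the end of an analysis pipeline are all it takes to make the raw values available to future humans and tools, instead of leaving only the summary behind.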

You should also ask yourself whether you’re even measuring the right things at the right time, and whether your inclusion and exclusion criteria are too strict, excluding the bulk of the population that you should be studying.

An example of this is in exocrine pancreatic insufficiency (EPI), where studies often don’t look at all of the symptoms that correlate with EPI; they include or allow only co-conditions that represent a tiny fraction of the likely EPI population; and they study the treatment (pancreatic enzyme replacement therapy) without the context of food intake, which is as useful as studying whether insulin works in type 1 diabetes without knowing how many carbohydrates someone is consuming.

You can be part of the solution, starting right now. Don’t just think about how you report data for a published paper (although there are opportunities there, too): think about the long term use of this data by humans (researchers and clinicians like yourself) AND by AI (capabilities and insights we can’t do yet but technology will be able to do in 3-5+ years).

A simple litmus test: if an interested researcher or patient reached out to you as the author of your study and asked for the data to understand what the mean or median values were of a reported cohort with “positive” values…could you provide this data to them as an array of values?

For example, if you report that 65% of people with condition Y have positive autoantibody levels, you should also be able to say:

  • The mean value of the positive cohort (>20) is 58.
  • The mean value of the negative cohort (<20) is 13.
  • The full distribution (e.g. [21, 26, 53, 58, 60, 82, 92…]) is available in a supplemental file or data repository.
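As a minimal sketch of that litmus test, here’s how the cohort summaries and the full distribution could be computed from an array of raw values (the titres below are hypothetical example data, not from any real study):

```python
from statistics import mean, median

# Hypothetical raw titres for a study cohort (cutoff for "positive" is >20):
titres = [3, 8, 13, 19, 21, 26, 53, 58, 60, 82, 92]

positives = sorted(t for t in titres if t > 20)
negatives = sorted(t for t in titres if t <= 20)

print("positive mean:", round(mean(positives), 1))
print("positive median:", median(positives))
print("negative mean:", round(mean(negatives), 1))
print("full distribution:", positives)  # this belongs in a supplement, not just the summary
```

If you can run something like this, you can report it; the hard part is keeping the raw array around in the first place.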

That would make a huge difference in characterizing many of these conditions, for developing future models, testing treatments or comparative diagnostic approaches, or even getting people correctly diagnosed after previous missed diagnoses caused by a lack of data to correctly interpret lab results.

Maybe you’re already doing this. If so, thanks. But I also challenge you to do more:

  • Ask for this type of data via peer review, either to be reported in the manuscript and/or included in supplementary material.
  • Push for more supplemental data publication with papers, in terms of code and datasets where possible.
  • Talk with your team, colleagues, and institution about long-term storage, accessibility, and formatting of datasets.
  • Better yet, publish your anonymized dataset either with the supplementary appendix or in a repository online.
  • Take a step back and consider whether you’re studying the right things in the right population at the right time

These are actionable, doable, practical things we can all be doing, today, and not just gnashing our teeth. The sooner we course correct with improved data availability, the better off we’ll all be in the future, whether that’s tomorrow with better clinical care or in years with AI-facilitated diagnoses, treatments, and cures.

We should be thinking about:

  • What if we designed data gathering and data generation in clinical trials not only for the current status quo (humans juggling data and collecting only minimal data), but also for a potential future of machines as the primary viewers of the data?
  • What data would be worth accepting, collecting, and seeking as part of trials?
  • What burdens would that add (and how might we reduce those) now while preparing for that future?

The best time to collect the data we need was yesterday. The second best time is today (and tomorrow).

Try, Try Again with AI

If you’ve scoffed at AI, dismissed it, or tried using it and felt disappointed, you’re not alone. Maybe the result wasn’t quite right, or it missed the mark entirely. It’s easy to walk away thinking, “AI just doesn’t work.” But like learning any new tool, getting good results from AI takes a little persistence, a bit of creativity, and a willingness to try again. Plus an understanding that “AI” is not a single thing.

AI is not magic or a mind reader. AI is a tool. A powerful one, but it depends entirely on how you use it. I find it helpful to think of it as a coworker or intern that’s new to your field. It’s generally smart and able to do some things, but it needs clear requests and directions on what to do. When it misses the mark, it needs feedback, or for you to circle around and try again with fresh instructions.

If your first attempt doesn’t go perfectly, it doesn’t mean the technology is useless, just like your brand new coworker isn’t completely useless.

Imperfect Doesn’t Mean Impossible

One way to think of AI is as a new kitchen gadget. Imagine you get a new mini blender or food processor. You’ve never made a smoothie before, but you want to. You toss in a bunch of ingredients and out comes…yuck.

Are you going to immediately throw away the blender? Probably not. You’re likely to try again, with some tweaks. You’ll try different ingredients, more or less liquid, and modify and try again.

I had that experience when I broke my ankle and needed to incorporate more protein in my diet. I got a protein powder and tried stirring it into chocolate milk. Gross. I figured out that putting it in a tupperware container, shaking it thoroughly, and leaving it overnight turned out ok. Eventually, when I got a blender, I found it did even better. But the perfect recipe for me ended up being chocolate milk, protein powder, and frozen bananas. Yum: it had a chocolate-milkshake texture and I couldn’t tell there was powder in it. But I still had to tweak things: shoving large pieces of frozen banana into my mini blender didn’t work well. I figured out slices worked ok, and eventually Scott and I zeroed in on slicing the banana before putting it into the freezer, so I had ready-to-go, right-sized frozen banana chunks to mix in.

I had some other flops, too. I had found a few other recipes I liked to do without protein powder. Frozen raspberry or frozen pineapple + a crystal light lemonade packet + water are two of my hot weather favorites. But one time it occurred to me to try the pineapple recipe with protein powder in it… ew. That protein powder did not go well with citrus. So I didn’t make that one again.

AI is like that blender. If the result isn’t what you wanted, you should try:

  • Rewording your prompt. Try different words, or try giving it more context.
  • Giving it more detail or clearer instructions. “Make a smoothie” is a little vague; “blend chocolate milk, protein powder, and frozen banana” gives it more direction about what you want.
  • Trying a different tool. Every LLM’s models and setup are different. How you’d use ChatGPT to do something might end up being different from how you’d use Gemini or MidJourney.

Sometimes, small tweaks make a big difference.

If It Doesn’t Work Today, Try Again Tomorrow (or sometime in the future)

Some tasks are still on the edge of what AI in general, or a particular model at a particular time, can do. That doesn’t mean AI will always be unable to do that task. AI is improving constantly, and quickly. What didn’t work a few months ago might work today, either in the same model or a new model/tool.

(Flowchart: “Try a task with AI.” If the result is not quite right, reword your prompt, give it more instructions, or try the prompt with a different model/tool. Still didn’t work? Park the project on a “try again later” list and try a different task or project, then loop back and try again. If the result is pretty good, keep going and use AI for other tasks and projects. Either way, keep experimenting.)

I’ve started making a list of projects or tasks I want to work on where the AI isn’t quite there yet and/or I haven’t figured out a good setup, the right tool, etc. A good example of this was when I wanted to make an Android version of PERT Pilot. It took me *four tries* over the course of an entire year before I made progress to a workable prototype. Ugh. I knew it wasn’t impossible, so I kept coming back to the project periodically, starting fresh with a new chat and new instructions. Over that year, the models changed several times, and the latest models were even better at coding. Plus, through practice, I got better at both prompting and troubleshooting when the output of the LLM wasn’t quite what I wanted. All of that added up over time, and I finally have an Android version of PERT Pilot (and it’s out on the Play Store now, too!) to match the iOS version of PERT Pilot. (AI also helped me quickly take the AI meal estimation feature from PERT Pilot, which is an app for people with EPI, and turn it into a general-purpose app for iOS called Carb Pilot. If you’re interested in getting macronutrient (fat, protein, carb, and/or calorie) counts for meals, you might be interested in Carb Pilot.)

Try different tasks and projects

You don’t have to start with complex projects. In fact, it’s better if you don’t. Start with tasks you already know how to do, but want to see how the AI does. This could be summarizing text, writing or rewriting an email, or changing the format of information (e.g., JSON to CSV, or raw text into a formatted table you can easily copy/paste elsewhere).

Then branch out. Try something new you don’t know how to do, or tackle a challenge you’ve been avoiding.

There are two good categories of tasks you can try with AI:

  • Tasks you already do, but want to do more efficiently
  • Tasks you want to do, but aren’t sure how to begin

AI is a Skill, and Skills Take Practice

Using AI well is a skill. And like any skill, it improves with practice. It’s a lot like managing an intern or a coworker who’s new to your organization or field. The first time you managed someone, you probably weren’t as good at it as after five years of practice managing people or helping interns get up to speed quickly. Over time, you figure out how to right-size tasks; repeat instructions or give them differently to match people’s learning or communication styles; and circle back when it’s clear your instructions were misunderstood or they’re heading off in a slightly unexpected direction.

Don’t let one bad experience with AI close the door. The people who are getting the most out of AI right now are the ones who keep trying. We experimented, failed, re-tried, and learned. That can be you, too.

If AI didn’t wow you the first time, for the first task you tried, don’t quit. Rephrase your prompt. Try another model/tool. (Some people like ChatGPT; some people like Claude; some people like Gemini…etc.) You can also ask for help. You can ask the LLM itself for help! Or ask a friendly human. (I’m a friendly human you can try asking, for example, if you’re reading this post. DM or email me and tell me what you’re stuck on. If I can make suggestions, I will!)

Come back in a week. Try a new type of task. Try the same task again, with a fresh prompt.

But most importantly: keep trying. The more you do, the better it gets.

iOS and Android development experience for newbies

Vibe coding apps is one thing, but what about deploying and distributing them? That still requires some elbow grease, and I’ve described my experiences with both Apple and Google below for my first apps on each platform.

(I’m writing this from the perspective of someone familiar with coding primarily through bash scripts, JavaScript, Python, and various other languages, but with no prior IDE or mobile app development experience: I typically work in vim through the terminal, and I was brand new to IDEs and app development for both iOS and Android when I got started. For context, I have an iOS personal device.)

Being new to iOS app development

First, some notes on iOS development. If you only want to test your app on your own phone, it’s free: you can build the app in Xcode and, with a cord, deploy it directly onto your phone. However, if you wish to distribute apps via TestFlight (digitally) to yourself or other users, Apple requires a paid developer account at $99 per year. (This cost can be annoying for people working on free apps who are doing this as not-a-business.) Initially, figuring out the process to move an app from Xcode to TestFlight or the App Store is somewhat challenging. However, once you understand that archiving the app opens a popup to distribute it, the process becomes seamless. Sometimes there are errors if Apple has new developer agreements for you to sign in the web interface, but the errors from the process just say your account is wrong. (So check the developer page in your account for things to sign, then try again once you’ve done that.) TestFlight itself is intuitive even for newcomers, whether that’s yourself or a friend or colleague you ask to test your app.

Submitting an app to the App Store through the web interface is relatively straightforward. Once you’ve got your app into TestFlight, you can go to app distribution, create a version and listing for your app, and add the build you put into TestFlight. Note that Apple is particular about promotional app screenshots and requires specific image sizes. Although there are free web-based tools to generate these images from your screenshots, if you use a tool without an account login, it becomes difficult to replicate the exact layout later. To simplify updates, I eventually switched to creating visuals manually in PowerPoint. This method made updating images easier when I had design changes to showcase, making me more likely to keep visuals current. Remember, you must generate screenshots for both iPhone and iPad, so don’t neglect testing your app on iPad, even if usage might seem minimal.

When submitting an app for the first time, the review process can take several days to begin. My initial submission was rejected after the reviewer discovered bugs. After fixing the issues and resubmitting, the process was straightforward and quicker than expected. Subsequent submissions for new versions have been faster than the very first review (usually 1-3 days max, sometimes same-day), and App Store review seems lighter for revisions than for new apps.

The main challenge I have faced with App Store reviews involved my second app, Carb Pilot. I had integrated an AI meal estimation feature into PERT Pilot and created Carb Pilot specifically for AI-based meal estimation and custom macronutrient tracking. Same feature, but plucked out to its own app. While this feature was approved swiftly in PERT Pilot as an app revision, Carb Pilot repeatedly faced rejections due to the reviewer testing it with non-food items. Same code as PERT Pilot, but obviously a different reviewer and this was the first version submitted. Eventually, I implemented enough additional error handling to ensure the user (or reviewer, in this case) entered valid meal information, including a meal name and a relevant description. If incorrect data was entered (identified by the API returning zero macronutrient counts), the app would alert users. After addressing these edge cases through several rounds of revisions, the app was finally approved. It might have been faster with a different reviewer, but it did ultimately make the app more resilient to unintended or unexpected user inputs.
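
The kind of validation described above can be sketched roughly as follows (in Python for illustration – the actual app is written in Swift, and the function name, field names, and alert messages here are hypothetical):

```python
def validate_meal_estimate(name: str, description: str, macros: dict) -> tuple[bool, str]:
    """Hypothetical sketch of the error handling described above: reject entries
    with no meal name or description, or where the AI/API returned all-zero
    macronutrient counts (a sign the input wasn't actually food)."""
    if not name.strip():
        return False, "Please enter a meal name."
    if not description.strip():
        return False, "Please describe the meal so it can be estimated."
    if all(macros.get(key, 0) == 0 for key in ("carbs", "protein", "fat")):
        return False, "That doesn't look like a food item; please try again."
    return True, ""
```

The zero-macronutrient check is the interesting part: rather than trying to classify “is this food?” directly, it uses the API’s own output as the signal that the input was invalid.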

Other than this instance, submitting to the App Store was straightforward: it was always clear what stage the process was at, and the reviewer feedback was reasonable.

(Note that some features, like HealthKit or audio, aren’t available in the simulator and have to be tested on physical devices. Depending on your app’s functionality, you’ll want to test with both the simulator and a physical iOS device; otherwise, you don’t need access to a physical device.)

Being new to Android app development

In contrast, developing for Android was more challenging. I decided to create an Android version of PERT Pilot after receiving several requests. However, this effort took nearly two years and four separate attempts to even get a test version built. I flopped at the same stage three times in a row, even with LLM (AI) assistance in trying to debug the problem.

Despite assistance from large language models (LLMs), I initially struggled to create a functional Android app from scratch. Android Studio uses multiple nested folder structures, with Kotlin (.kt) files and separate XML files: the XML files handle layout design, while the Kotlin files manage functionality and logic. This is unlike iOS development, which consolidates both into fewer files, or at least consistently uses a single language. Determining when and where to code specific features was confusing. (This is probably easier in 2025 with the advent of agent and IDE-integrated LLM tools! My attempts were with chat-based LLMs that could not access my code directly or see my IDE, circa 2023 and 2024.)

Additionally, Android development involves a project-wide Gradle file that handles various settings. Changes made to this file require manually triggering a synchronization process. Experienced Android developers might find this trivial, but it is unintuitive for newcomers to locate both the synchronization warnings and the sync button. If synchronization isn’t performed, changes cannot be tested, which blocks development.

Dependency management also posed difficulties, and that plus the Gradle confusion is what caused my issues on three different attempts. Initially, dependencies provided by the LLM were formatted incorrectly, breaking the build. Eventually (fourth time was the charm!), I discovered there are two separate Gradle files, and pasting dependencies correctly and synchronizing appropriately resolved these issues. This was partly user error: I kept thrashing around with the LLM trying to solve the dependency formatting, and on the fourth attempt finally realized it was giving me dependencies in a different language than my project’s default Gradle file used, even though I had set up Android Studio’s project to match the LLM’s approach. It was like giving Android Studio Chinese characters to work with when it was expecting French. Still, this issue significantly impacted my development experience, and it was not intuitive to resolve within Android Studio even with LLM help. But I finally got past it to a basic working prototype that could build in the simulator!
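
For context, Gradle build scripts come in two different syntaxes, and a dependency line written for one won’t parse in the other – which is likely the kind of mismatch described above (a hedged guess on my part, since I can’t be certain these were the exact two formats involved):

```
// build.gradle (Groovy DSL): method call without parentheses, single quotes allowed
implementation 'com.squareup.retrofit2:retrofit:2.9.0'

// build.gradle.kts (Kotlin DSL): explicit function-call syntax, double quotes required
implementation("com.squareup.retrofit2:retrofit:2.9.0")
```

Pasting the Groovy form into a build.gradle.kts file (or vice versa) breaks the build with errors that aren’t obviously about syntax, which matches the “Chinese characters vs. French” experience.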

I know Android has different conventions than iOS, so I then had to do some research (including user research) to figure out which gestures were different, since I’m not an Android user. We switched from swiping to long-pressing on items to show menu options for repeating/editing/deleting meals, etc. That was pretty easy to swap out, as were most of the other cosmetic aspects of building PERT Pilot for Android.

Most of the heartache came down to the setup of the project and then the exporting and deploying to get it to the Play Store for testing and distribution.

Setting up a Google Play developer account was quick and straightforward, despite needing to upload identification documents for approval, which took a day to verify. There’s a one-time $25 cost for creating the developer account, which is a lot cheaper than Apple’s $99/year fee. But remember, above and below, that you’re paying with your time instead of money, in the form of a less intuitive IDE and web interface for testing and deploying to production.

Also, you have to have hands-on access to a physical Android device. I have an old phone that I was able to use for this purpose. You only have to do this once, during the account creation/approval process, so you may be able to use a friend’s device (it involves scanning a QR code while logged in), but this is a bit of a pain if you don’t have access to a modern physical Android device.

I found navigating the Play Store developer console more complicated than Apple’s, specifically when determining the correct processes for uploading test versions and managing testers. Google requires at least 12 testers over a two-week testing period before allowing production access. Interestingly, it’s apparently pretty common to get denied production access even after you meet that stated minimum; the rejection reason is uninformative, and from my research it’s probably an unstated requirement about how actively the app is used during the test period. Once denied, you have a mandatory 14-day wait before you can apply again. Instead of chasing additional testers (people who would test for the sake of testing but not be people with EPI), I waited the 14 days, applied again, made it clear that people wouldn’t be using the app every day, and otherwise left my answers the same…and this time lucked into approval. This meant I was allowed to submit for review for production access to the Play Store. I submitted…and was rejected, because there are rules that medical and medical education apps can only be distributed by developers tied to organizations that have a business number and have been approved. What?!

Apparently Google has a policy that medical “education” apps must be distributed by organizations with approved business credentials. The screenshots sent back to me seemed to flag the button I had on the home screen describing PERT, PERT dosing, and information about the app. I am an individual (not an organization, nonprofit, or company) making this app available for free to help people, so I didn’t want to have to chase down a nonprofit with Android developer credentials to tie my app to.

What I tried next was removing the button with the ‘education’ info, changing the tags on my app to fall under health & fitness rather than ‘medical’, and resubmitting. No other changes.

This time…it was accepted!

Phew.

TL;DR: as more and more people vibe code their way to having Android and/or iOS apps, it’s very feasible for people with less experience to build for both and to distribute apps on both platforms (iOS App Store and Google Play Store for Android). However, there’s a higher up-front cost for iOS ($99/year) but a slightly easier, more intuitive experience for deploying your apps and getting them reviewed and approved. Conversely, Android development, despite its lower entry cost ($25 once), involves navigating a more complicated development environment, less intuitive deployment processes, and opaque requirements for app approval. You pay with your time; but if you plan to eventually build multiple apps, once you figure it out you can repeat the process more easily. Both are viable paths for app distribution if you’re building iOS and Android apps in the LLM era of assisted coding, but don’t be surprised if you hit bumps in the road when deploying for testing or production.

Which should you choose for your first app, iOS or Android? It depends on whether you have a fondness for either ecosystem; whether one is closer to development languages you already know; or whether one is easier to integrate with your LLM of choice. (I now have both working with Cursor, and both can also be pulled into the ChatGPT app.) Cost may be an issue if $99/year is out of reach as a recurring cost, but keep in mind you’ll pay with your time for Android development even though it’s a one-time $25 developer account fee. You may also want to think about whether your first app is a one-off or whether you might build more apps in the future, which changes the context for paying the Apple developer fee yearly. Given the requirement to test with a certain number of users for Play Store access, it’s easier to go from testing to production/store publication on Apple than on Google, which might factor into subsequent app and platform decisions, too.

Creating a developer account
  • iOS: Better/easier
  • Android: Takes more time (ID verification), one-time $25 fee, requires physical device access

Fees/costs
  • iOS: $99/year
  • Android: Better: one-time $25 fee for account creation

IDE
  • iOS: Better
  • Android: More challenging, with different languages/files, and requires Gradle syncing

Physical device access required
  • iOS: No (unless you need to test integrations like HealthKit, audio input, exporting files, or sending emails)
  • Android: Yes, as part of the account setup, though you could borrow someone’s phone to accomplish this

Getting your app to the web for testing
  • iOS: Pretty clear once you realize you have to “archive” your app from Xcode, which pops up a window that guides you through sending to TestFlight. (Whether or not you actually test in TestFlight, you can then submit for review.) Hiccups occasionally if Apple requires you to sign new agreements in the web interface: watch for email notifications, and if you get errors about your account not being correct (and you haven’t changed which account Xcode is logged into), check the Apple developer account page on the web, accept agreements, and try archiving again – that should clear the error.
  • Android: A little more complicated: generate a signed bundle, find where that file was saved on your computer, then drag and drop (or attach) it and submit for testing. Also more challenging to manage adding testers and facilitate access to test.

Submitting for approval/production access
  • iOS: Better; easy to see what stage of review your app is in.
  • Android: Challenging to navigate where/how to do this in the web interface the first time, and Google has obtuse, unstated requirements about app usage during testing. Expect to be rejected the first time (or more) and to wait 14 days to resubmit.

Distribution once live on the store
  • iOS: Same
  • Android: Same

 

Piecing together your priorities when your pieces keep changing

When dealing with chronic illness, it sometimes feels like you have less energy or time in the day to work with than someone without chronic disease. The “spoon theory” is a helpful analogy to illustrate this. In spoon theory, each person has a certain number of “spoons” representing their daily energy available for tasks: activities of daily living, physical activity or recreation, work, and so on. For example, an average person might have 10 spoons per day, using just one spoon for daily tasks. However, someone with chronic illness may start with only 8 spoons and require 2-3 spoons for the same daily tasks, leaving fewer spoons for everything else.

I’ve been thinking about this differently lately. My priorities on a daily basis are mixed between activities of daily living (which includes things like eating, managing diabetes stuff like changing pump site or CGM, etc); exercise or physical activity like walking or cross-country skiing (in winter) or hiking (at other times of the year); and “work”. (“Work” for me is a mix of funded projects and my ongoing history of unfunded projects of things that move the needle, such as developing the world’s first app for exocrine pancreatic insufficiency or developing a symptom score and validating it through research or OpenAPS, to name a few.)

A raccoon juggles three spoons
As things change in my body (I have several autoimmune diseases and have gained more over the years), my ‘budget’ on any given day has changed, and so have my priorities. During times when I feel like I’m struggling to get everything done that I want to prioritize, it sometimes feels like I don’t have enough energy to do it all, compared to other times when I’ve had sufficient energy for the same amount of daily activities, with extra energy left over. (Sometimes I feel like a raccoon juggling three spoons of different weights.)

In my head, I can think about how the relative amount of energy or time (these are not always identical variables) are shaped differently or take up different amounts of space in a given day, which only has 24 hours. It’s a fixed budget.

I visualize activities of daily living as the smallest amount of time, but it’s not insignificant. It’s less than the amount of time I want to spend on work/projects, and my physical activity/recreation also takes up quite a bit of space. (Note: this isn’t going to be true for everyone, but remember for me I like ultrarunning for context!)

ADLs are green, work/projects are purple, and physical activity is blue:

Example of two blocks stacked on each other (green), four blocks in an l shape (purple), three blocks in a corner shape (blue)

They almost look like Tetris pieces, don’t they? Imagine all the ways they can fit together. But we have a fixed budget, remember – only 24 hours in the day – so to me they become Tangram puzzle pieces and it’s a question every day of how I’m going to construct my day to fit everything in as best as possible.

Preferably, I want to fit EVERYTHING in. I want to use up all available time and perfectly match my energy to it. Luckily, there are a number of ways these pieces fit together. For example, check out these different variations:

8 squares with different color combinations with a double block, an l shaped block, and a corner (three pieces) block. All squares are completely full, but in different combinations/layouts of the blocks

But sometimes even this feels impossible, and I’m left feeling like I can’t quite perfectly line everything up and things are getting dropped.

Example of a square where the blocks don't all fit inside the square
It’s important to remember that even if the total amount of time is “a lot”, it doesn’t have to be done all at once. Historically, a lot of us might work 8-hour days (or longer). For those of us with desk jobs, we sometimes have options to split this up: for example, working a few hours, then taking a lunch break, going for a walk, or hitting the gym, then returning to work. Instead of a static 9-5, it may look like 8-11:30, 1:30-4:30, 8-9:30.

The same is true for other blocks of time, too, such as activities of daily living: they’re usually not all in one block of time, but often at least two (waking up and going to bed) plus sprinkled throughout the day.

In other words, it’s helpful to recognize that these big “blocks” can be broken down into smaller subunits:

Tangram-puzzle-pieces-different-shapes-closeup-DanaMLewis

And from there… we have a lot more possibilities for how we might fit “everything” (or our biggest priorities) into a day:

Showing full blocks filled with individual blocks, sometimes linked but in different shapes than the L and corner shapes from before.

For me, these new blocks are more common. Sometimes I have my most typical day, with a solid block of exercise and work just how I’d prefer them (top left). Other times, I have less exercise and several work blocks in a day (top right). Other days, I don’t have energy for physical activity, activities of daily living take more energy or involve more tasks, and I don’t have quite as much time for longer work sections (bottom left). There are also non-work days, where I prioritize getting as much activity in as I can (bottom right!). But in general, the point is that instead of thinking about the way we USED to do things or the way we think we SHOULD do things, we should think about what needs to be done; the minimum of how it needs to be done; and think creatively about how we CAN accomplish these tasks, goals, and priorities.

A useful trigger phrase to check is if you find yourself saying “I should ______”. Stop and ask yourself: should, according to what/who? Is it actually a requirement? Is the requirement about exactly how you do it, or is it about the end state?

“I should work 8 hours a day” doesn’t mean (in all cases) that you have to do it 8 straight hours in a row, other than a lunch break.

If you find yourself should-ing, try changing the wording of your sentence, from “I should do X” to “I want to do X because Y”. It helps you figure out what you’re trying to do and why (Y), which may help you realize that there are more ways (X or Z or A) to achieve it, so “X” isn’t the requirement you thought it was.

If you find yourself overwhelmed because it feels like you have a big block task that you need to do, this is also helpful then to break it down into steps. Start small, as small as opening a document and writing what you need to do.

My recent favorite trick that is working well for me is putting the item of “start writing prompt for (project X)” on my to-do list. I don’t have to run the prompt; I don’t have to read the output then; I don’t have to do the next steps after that…but only start writing the prompt. It turns out that writing the prompt for an LLM helps me organize my thoughts in a way that it then makes the subsequent next steps easier and clearer, and I often then bridge into completing several of those follow up tasks! (More tips about starting that one small step here.)

The TL;DR, perhaps, is that while we might yearn to fit everything in perfectly and optimize it all, it’s not always going to turn out like that. Our priorities change, our energy availability changes (due to health or kids’ schedules or other life priorities), and if we strive to be more flexible, we will find more options for fitting it all in.

Sometimes we can’t, but sometimes breaking things down can help us get closer.

Showing how the blocks on the left have fixed shapes and have certain combinations, then an arrow to the right with example blocks using the individual unit blocks rather than the fixed shapes, so the blocks look very different but are all filled, also.

Scale yourself

One of the things I wish people would consider more often when thinking about AI is how they can use it to scale themselves. What are some time-consuming things they currently have to do themselves that AI could do for them, streamlining their output and increasing their productivity? Productivity that gives them more time to do the things only they can do, the things they want to do, or the things they love to do. (And that helps them stop procrastinating on the things they have to do.)

I have a habit of trying to scale myself. These days, it’s often related to EPI (exocrine pancreatic insufficiency, which some areas of the world know by the acronym PEI). I developed a strong knowledge base first from personal experience, then by doing research – including a systematic review where I read hundreds, plural, of research papers on key topics related to design protocols and guidelines. As a result of both personal and research experience, I have a lot of knowledge. It gets tapped almost daily in the EPI support groups that I’m a part of.

Whenever I notice myself answering the same question repeatedly, I make a mental note of it. Eventually, if a topic comes up often enough, I turn my response into a blog post. This way, I can provide a well-structured, comprehensive answer with more time and context than a quick comment on social media allows – and with the ability to give the same, high quality answer to multiple people (and in some cases, hundreds or thousands of people rather than the few who might see the comment buried in a response thread).

A few examples of this include:

One of my favorite things about this approach is seeing other people begin to share links to my longer-form content to help answer common questions. Writing things down in a shareable way also enables and supports other people in scaling your work, by making it easy to share. This has started to happen more and more with the elastase blog post, in part because there are so few resources that cover this information all in one place.

For me, I lean toward writing, but for other people that could be videos, podcast/audio recording, or other formats that can capture things you know and make them shareable, thus scaling yourself.

For me, this approach of “scaling myself” and thinking about longer form content to post online instead of re-typing similar answers over and over again isn’t unique to EPI.

I have been doing this for over a decade. I developed this pattern early, after we built and shared OpenAPS (the first open source automated insulin delivery algorithm) with the world. Early on, I found myself answering the same technical questions repeatedly in online discussions, with the same answers. Typing out explanations on my phone was inefficient, and if one person had a question, others likely had the same one. Instead of repeating myself, I would pause, write up the information in the documentation, and share that instead. Finding and sharing a link was easier and quicker than retyping responses, so I was more willing to do it than if I had to interrupt what I was doing in real life to type out a long answer to an already-answered question. Over time, I had to do less one-off typing on my phone (and could save that time and energy for truly unique questions) and could share links with a lot more information more easily.

How do I use AI to scale this type of work?

A lot of the above tasks are related to writing. There are different ways you can use AI for writing without having it write something completely. You can give it notes – whether you type or voice dictate them – and have it clean them up, so you can focus on thinking and not on typing or fixing typos that break your flow. You can have it convert the notes into full sentences. You can ask it to write a paragraph or an article based on the notes. You can ask it to suggest wording for a particular sentence you want to clarify for your audience.

If you think about the AI as an intern and/or a partner/collaborator who you would ask to review or edit for you, you’ll likely find even more ways to integrate AI into different parts of your writing process, even if it’s not doing the full writing for you.

I have also tried to task the AI with writing for me, with mixed results. This doesn’t mean I don’t use it, but I’ve been practicing and learning where it generates usable content and where it doesn’t.

A lot of it depends on the prompt and the topic, as much as on the desired output in terms of style, length, intended audience, etc.

If it’s a topic that’s “known”, it can write more content that I can take, edit, and transform, as opposed to when I am trying to write about a concept that is far from the current knowledge base. (I mean far for both humans and AI – a lot of my work is bleeding edge, pushing fields toward new developments and leading humans there.) Sometimes I ask it to write something and end up using none of the content, but by saying “ugh, no”, my brain jumps to “it should really say…”, and I am able to more quickly springboard into manually writing the content I was previously slow on. In other words, it can be a brainstorming tool in the opposite sense, showing me what I do not want to say on a topic! And on some of my frontier/bleeding-edge topics, it reflects what is commonly ‘known’; when what is known is now wrong (for example, as always, how it’s commonly but incorrectly reported that chronic pancreatitis is the most common cause of EPI), it helps me more clearly distinguish the new content from the old, wrong, or misinformed.

(Also, it’s worth reminding you of what I have to remind myself: AI is changing constantly, and new tools override what is known about which tasks they do and don’t do well! For example, between writing this and posting it, OpenAI released GPT-4.5, which is reportedly better at writing-related tasks than GPT-4o and other older models. I’ll have to test it and see if that’s true, and for what kinds of writing tasks!)

This isn’t the only way you can scale yourself with AI, though. Scaling yourself doesn’t have to be limited to writing and documentation style tasks. AI and other tools can help with many tasks (more examples here and here), such as:

  • Cleaning and transforming data into different formats
  • Converting a CSV file into a more readable table
  • Writing code to automate tedious data processing
  • Drafting plain-language instructions for engineers or programmers
  • Checking whether instructions or explanations are clear and understandable, and identifying any gaps in logic that you missed on your first pass
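
As a concrete instance of the CSV-to-table idea from the list above, a few lines of ordinary Python (the kind an LLM will happily draft for you) are enough; the function name and sample data here are just for illustration:

```python
import csv
import io

def csv_to_table(csv_text: str) -> str:
    """Render CSV text as a plain-text table with aligned columns."""
    rows = list(csv.reader(io.StringIO(csv_text)))
    # Width of each column is the widest cell in that column.
    widths = [max(len(row[i]) for row in rows) for i in range(len(rows[0]))]
    lines = ["  ".join(cell.ljust(w) for cell, w in zip(row, widths)) for row in rows]
    return "\n".join(lines)

print(csv_to_table("meal,carbs_g\noatmeal,27\nbanana,23"))
```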

By leveraging AI and other automation tools, you can free up time and energy for higher-value work: the things you are uniquely suited to do in the world, and the things that you want or love to do. And do them more easily!

Pro tip: if you find yourself procrastinating on a task, that may be a good sign that you could use AI for some of it.

I’m trying to use noticing procrastination as a trigger for considering AI for a task.

An example of this is an upcoming post with a bunch of math and meaty cost analysis that I originally did by hand. I needed (wanted) to re-do these estimates with different numbers, but procrastinated a bit because carefully re-doing all the estimates and replacing them throughout the blog post seemed tedious. So, I took the blog post and dumped it in with a prompt asking for Jupyter Notebook code to replicate the analyses explained in the plain-language post, with the ability to adjust all input variables and see the results in a table so I could compare the original and updated numbers. It took less than a minute to generate this code and about 5 minutes for me to copy/paste it, update the numbers, run it, evaluate the output, and decide what to update in the post. Manually, this would’ve taken 30-60 minutes due to needing to check my work and trace it throughout the post. Instead, this automated the tedious bit and will result in the new post coming out next week rather than weeks from now. (Read about it here – it’s an analysis of how cost-effective Life for a Child is, a charity supporting people living with diabetes in low- and middle-income countries that can use your help to save lives.)
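
A generic sketch of what that kind of re-runnable notebook code can look like – the function, scenario labels, and all numbers below are made-up placeholders, not the actual analysis from the post:

```python
def cost_per_outcome(total_cost: float, people_supported: int, years: float) -> dict:
    """Wrap the arithmetic in a function so inputs can be changed and re-run.
    All numbers passed in below are illustrative placeholders."""
    per_person_year = total_cost / (people_supported * years)
    return {
        "total_cost": total_cost,
        "people_supported": people_supported,
        "cost_per_person_year": round(per_person_year, 2),
    }

# Compare an 'original' and an 'updated' scenario side by side.
scenarios = {
    "original": cost_per_outcome(100_000, 500, 1),
    "updated": cost_per_outcome(120_000, 650, 1),
}
for label, result in scenarios.items():
    print(label, result)
```

The point is not the arithmetic itself but that changing one input and re-running the cell replaces 30-60 minutes of hand-tracing numbers through prose.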

I encourage you to think about scaling yourself and identifying a task or series of tasks where you can get in the habit of leveraging these tools. Like most things, the first time or two might take a little more time. But once you figure out which tasks or projects are suited for this, the time savings escalate – just like learning how to use any new software, tool, or approach. A little bit of invested time up front will likely save you a lot of time in the future.

You Can Create Your Own Icons (and animated gifs)

Over the years, I’ve experimented with different tools for making visuals. Some of them are just images, but in the last several years I’ve made more animations, too.

But not with any fancy design program or purpose built tool. Instead, I use PowerPoint.

Making animated gifs

I first started using PowerPoint to create gifs around 2018 or 2019. At the time, PowerPoint didn’t have a built-in option to export directly to GIF format, so I had to export animations as a movie file first and then use an online converter to turn them into a GIF. Fortunately, in recent years, PowerPoint has added a direct “Export as GIF” feature.

The process of making an animated GIF in PowerPoint is similar to adding animations or transitions in a slide deck for a presentation. I’ve used this for various projects, including:

Am I especially trained? No. Do I feel like I have design skills? No.

Elbow grease and determination to try is what I have, with the goal of trying to use visuals to convey information as a summary or to illustrate a key point to accompany written text. (I also have a tendency to want to be a perfectionist, and I have to consciously let that go and let “anything is better than nothing” guide my attempts.)

Making icons is possible, too

Beyond animations, I’ve also used PowerPoint to create icons and simple logo designs.

I ended up making the logos for Carb Pilot (a free iOS app that enables you to track the macronutrients of your choice) and PERT Pilot (a free iOS app that enables people with exocrine pancreatic insufficiency, known as EPI or PEI, to track their enzyme intake) using PowerPoint.

This, and ongoing use of LLMs to help me with coding projects like these apps, is what led me to the realization that I can now make icons, too.

I was working to add a widget to Carb Pilot, so that users can have a widget on the home screen to more quickly enter meals without having to open the app first; this saves a tap every time. I went from a single button to 4 buttons simulating the Carb Pilot home screen. For the “saved meals” button, I wanted a list icon, to indicate the list of previous meals. I went to SF Symbols, Apple’s icon library, picked out the list icon I wanted to use, and referenced it in Xcode. It worked, but it lacked something.

A light purple iOS widget with four buttons - top left is blue and says AI; top right is purple with a white microphone icon; bottom left is periwinkle blue with a white plus sign icon; bottom right is bright green with a custom list icon, where instead of bullets the three items are an apple, cupcake, and banana mini-icons.
It occurred to me that maybe I could tweak it somehow and make the bullets of the list represent food items. I wasn’t sure how, so I asked the LLM if it was possible. Because I’d done my other ‘design’ work in PowerPoint, I went there, quickly dropped in some shapes and lines to simulate the icon, and spent a few minutes tweaking and exporting versions of it. It turns out you can export from PowerPoint as SVG, but the way I had designed it wasn’t really suited for SVG use: when I dropped the SVG into Xcode, it didn’t show up. I asked the LLM again, and it suggested trying PNG format. I exported the icon from PowerPoint as PNG, dropped it into Xcode, and it worked!

(That was a good reminder that even when you use the “right” format, you may need to experiment to see what actually works in practice with whatever tools you’re using, and not let the first failure be a sign that it can’t work.)

Use What Works

There’s a theme you’ll be hearing from me: try and see what works. Just try. You don’t know if you don’t try. With LLMs and other types of AI, we have more opportunities to try new and different things that we may not have known how to do before. From coding your own apps to doing data science to designing custom icons, these are all things I didn’t know how to do before but now I can. A good approach is to experiment, try different things (and different prompts), and not be afraid to use “nontraditional” tools for projects, creative or otherwise. If it works, it works!

The prompt matters when using Large Language Models (LLMs) and AI in healthcare

I see more and more research papers coming out these days about different uses of large language models (LLMs, a type of AI) in healthcare. There are papers evaluating them for supporting clinicians in decision-making, aiding in note-taking and improving clinical documentation, and enhancing patient education. But I see a widespread trend in the titles and conclusions of these papers, exacerbated by media headlines: sweeping claims about the performance of one model versus another. I challenge everyone to pause and consider a critical fact that is less obvious: the prompt matters just as much as the model.

As an example of this, I will link to a recent pre-print of a research article I worked on with Liz Salmi (published article here; pre-print here).

Liz nerd-sniped me with an idea for a study: have a patient and a neuro-oncologist evaluate LLM responses to patient-generated queries about a chart note (or visit note, open note, or clinical note, whatever you prefer to call it). I say nerd-sniped because I got very interested in designing the methods of the study, including making sure we used the APIs to model these ‘chat’ sessions so that the prompts were not influenced by custom instructions, ‘memory’ features within the account or chat sessions, etc. I also wanted to test something I’ve observed anecdotally from personal LLM use across other topics: with 2024-era models, the prompt matters a lot for what type of output you get. So that’s the study we designed, and wrote with Jennifer Clarke, Zhiyong Dong, Rudy Fischmann, Emily McIntosh, Chethan Sarabu, and Catherine (Cait) DesRoches. I encourage you to check out the pre-print of the article here and enjoy the methods section, which is critical for understanding the point I’m trying to make here.

In this study, the data showed that when LLM outputs were evaluated for a healthcare task, the results varied significantly depending not just on the model but also on how the task was presented (the prompt). Specifically, persona-based prompts—designed to reflect the perspectives of different end users like clinicians and patients—yielded better results, as independently graded by both an oncologist and a patient.
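To make the setup concrete, here is an illustrative sketch (not the study’s actual code; model name, persona text, and parameters are placeholders) of how each query can be sent as a single, stateless API request, so that account-level memory or custom instructions cannot influence the output:

```python
# Illustrative sketch: one self-contained request per evaluation item,
# so nothing carries over between queries. All names/values are
# placeholders, not the study's actual configuration.

def build_request(model, persona, query):
    """Build a stateless chat-style request: no prior turns included."""
    messages = [
        {"role": "system", "content": persona},
        {"role": "user", "content": query},
    ]
    return {
        "model": model,
        "messages": messages,   # fresh context every time
        "temperature": 0,       # reduce run-to-run variation
    }

req = build_request(
    model="example-model",
    persona="You are a patient reading your own clinic visit note.",
    query="What does 'stable post-operative changes' mean here?",
)
```

Because every request starts from scratch, each evaluation is reproducible and the only variables are the model and the prompt.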

The Myth of the “Best Model for the Job”

Many research papers conclude with simplified takeaways: Model A is better than Model B for healthcare tasks. While performance benchmarking is important, this approach often oversimplifies reality. Healthcare tasks are rarely monolithic. There’s a difference between summarizing patient education materials, drafting clinical notes, or assisting with complex differential diagnosis tasks.

But even within a single task, the way you frame the prompt makes a profound difference.

Consider these three prompts for the same task:

  • “Explain the treatment options for early-stage breast cancer.”
  • “You’re an oncologist. Explain the treatment options for early-stage breast cancer.”
  • “You’re an oncologist. Explain the treatment options for early-stage breast cancer as you would to a newly diagnosed patient with no medical background.”

The second and third prompts likely result in more accessible and tailored responses. If a study only tests general prompts (e.g. prompt one), it may fail to capture how much more effective an LLM can be with task-specific guidance.
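Written out as chat-style message payloads (a sketch using the common system/user message convention, not code from any particular paper), the three framings differ only in the context wrapped around the same task:

```python
# Illustrative sketch: the same task under three prompt framings.
TASK = "Explain the treatment options for early-stage breast cancer."

prompts = {
    "general": [
        {"role": "user", "content": TASK},
    ],
    "persona": [
        {"role": "system", "content": "You're an oncologist."},
        {"role": "user", "content": TASK},
    ],
    "persona_audience": [
        {"role": "system", "content": "You're an oncologist."},
        {"role": "user", "content": TASK + " Explain as you would to a "
         "newly diagnosed patient with no medical background."},
    ],
}
# A study that only submits prompts["general"] never observes what the
# other two framings would have produced.
```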

Why Prompting Matters in Healthcare Tasks

Prompting shapes how the model interprets the task and generates its output. Here’s why it matters:

  • Precision and Clarity: A vague prompt may yield vague results. A precise prompt clarifies the goal and the speaker (e.g. in prompt 2), and also often the audience (e.g. in prompt 3).
  • Task Alignment: Complex medical topics often require different approaches depending on the user—whether it’s a clinician, a patient, or a researcher.
  • Bias and Quality Control: Poorly constructed prompts can inadvertently introduce biases or skew the quality of the output.

Selecting a Model for a Task? Test Multiple Prompts

When evaluating LLMs for healthcare tasks—or applying insights from a research paper—consider these principles:

  1. Prompt Variation Matters: If an LLM fails on a task, it may not be the model’s fault. Try adjusting your prompts before concluding the model is ineffective, and avoid broad sweeping claims about a field or topic that aren’t supported by the test you are running.
  2. Multiple Dimensions of Performance: Look beyond binary “good” vs. “bad” evaluations. Consider dimensions like readability, clinical accuracy, and alignment with user needs, as an example when thinking about performance in healthcare. In our paper, we saw some cases where a patient and provider overlapped in ratings, and other places where the ratings were different.
  3. Reproducibility and Transparency: If a study doesn’t disclose how prompts were designed or varied, its conclusions may lack context. Reproducibility in AI studies depends not just on the model, but on the interaction between the task, model, and prompt design. You should be looking for these kinds of details when reading or peer reviewing papers. Take results and conclusions with a grain of salt if these methods are not detailed in the paper.
  4. Involve Stakeholders in Evaluation: As shown in the preprint mentioned earlier, involving both clinical experts and patients in evaluating LLM outputs adds critical perspectives often missing in standard evaluations, especially as we evolve to focus research on supporting patient needs and not simply focusing on clinician and healthcare system usage of AI.
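Principle 1 can be made systematic with a simple evaluation grid, sketched below with placeholder model and framing names: cross every model with every prompt framing, so conclusions hold per-framing rather than per-model only.

```python
# Minimal sketch of a fair evaluation grid (names are placeholders).
from itertools import product

models = ["model-a", "model-b"]
framings = ["general", "persona", "persona_audience"]

# Every model sees every framing: 2 models x 3 framings = 6 runs.
evaluation_grid = list(product(models, framings))

# Grade each of the 6 outputs separately (e.g. by both a clinician
# and a patient) before drawing "model A beats model B" conclusions.
```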

What This Means for Healthcare Providers, Researchers, and Patients

  • For healthcare providers, understand that the way you frame a question can improve the usefulness of AI tools in practice. A carefully constructed prompt, adding a persona or requesting information for a specific audience, can change the output.
  • For researchers, especially those developing or evaluating AI models, it’s essential to test prompts across different task types and end-user needs. Transparent reporting on prompt strategies strengthens the reliability of your findings.
  • For patients, recognize that AI-generated health information is shaped by both the model and the prompt; this awareness can support critical thinking when interpreting AI-driven health advice. Remember that LLMs can be biased, but so can the humans in healthcare. The same approach for assessing bias and evaluating experiences in healthcare should be used for LLM output as well as human output. Everyone (humans) and everything (LLMs) are capable of bias or errors in healthcare.

Prompts matter, so consider model type as well as the prompt as a factor in assessing LLMs in healthcare. Blog by Dana M. Lewis.

TLDR: Instead of asking “Which model is best?”, a better question might be:

“How do we design and evaluate prompts that lead to the most reliable, useful results for this specific task and audience?”

I’ve observed, and this study adds evidence, that prompt interaction with the model matters.

A Slackbot for using Slack to access and use a chat-based LLM in public

I’ve been thinking a lot about how to help my family, friends, and colleagues use LLMs to power their work. (As I’ve written about here, and more recently here with lots of tips on prompting and effectively using LLMs for different kinds of projects). 

Scott has been on the same page, especially thinking about how to help colleagues use LLMs effectively, but taking a slightly different approach: he built a Slackbot (a bot for Slack) which uses GPT-3.5 and GPT-4 to answer questions. This uses the GPT API but presents it to the user in Slack instead of having to use ChatGPT as the chat interface. So, it’s an LLM chatbot, different from ChatGPT (because it’s a different chat interface), but it uses the same AI (GPT-3.5 and GPT-4 from OpenAI). You could implement the same idea (a chat-based bot in Slack) using different AIs/LLMs, of course.

Using a Slack-based bot for an LLM achieves a few things:

  1. More people can try GPT-4 and compare it to GPT-3.5 to get a taste for prompting and responses, without having to pay $20/month for a ChatGPT Plus account to get access to GPT-4.
  2. If you spend a lot of time in Slack for work, you don’t have to switch windows to access GPT.
  3. If your employer doesn’t want you to use the ChatGPT consumer product for work due to security concerns, but is more comfortable with OpenAI’s confidentiality guarantees for their API, a Slack bot interface provides a more secure alternative.
  4. You can see “in public” how other people in your Slack workspace are using it, and learn from their prompts and interactions new tricks and ideas (in addition to reading my long post here with lots of tips and suggestions).
  5. You can privately (in DM by yourself) talk to the bot and ask it your “dumb” questions (which aren’t dumb), and you can reason through things at your own speed or learn how to do things that you haven’t done before, and do so in private just like if you had your own ChatGPT account. It doesn’t burn social capital or take other people’s time, which you can save for more nuanced questions.
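The core flow of a bot like this can be sketched roughly as follows (function and variable names here are illustrative, not from the actual open-source code): recent channel history is folded into a chat-style prompt, and the model’s reply is posted back into Slack.

```python
# Illustrative sketch of the Slackbot flow: Slack thread history is
# converted into chat-API style messages. Names are hypothetical.

def to_chat_messages(history, new_text, system_prompt):
    """Fold (author, text) history, oldest first, into chat messages."""
    messages = [{"role": "system", "content": system_prompt}]
    for author, text in history:
        role = "assistant" if author == "bot" else "user"
        messages.append({"role": role, "content": text})
    messages.append({"role": "user", "content": new_text})
    return messages

msgs = to_chat_messages(
    history=[("alice", "What is an API?"), ("bot", "An API is...")],
    new_text="Can you give an example?",
    system_prompt="You are a helpful assistant for this Slack channel.",
)
# msgs would be sent to the GPT API; the response text would then be
# posted back into the same Slack channel or DM.
```

Including the history is what lets the bot answer follow-up questions in context, just like a ChatGPT session would.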

Scott has implemented it for work, but also came up with another fun use for it: we created a family-focused Slack channel for kids in our lives (we have ten (!) nieces and nephews – you can see how I used GPT to help with special custom books for some of them recently) so they and their parents can access GPT in a safe environment, too. ChatGPT (and OpenAI’s GPT API) already have some reasonable safeguards built in to protect kids, but more usefully, this way we can provide custom instructions to keep it always at a kid-focused level. Plus this way we can share the experience with them, too.

I asked the bot to summarize itself for a lay audience not familiar with the project, and this was the output:

### Summary of the SlackAskBot

The SlackAskBot is a smart assistant designed to operate within Slack, a popular communication platform used by teams for messaging and collaboration. Imagine having a helpful robot in your group chats or direct messages that can understand your questions and provide informative responses. That’s what the SlackAskBot does. It’s like adding a knowledgeable team member to your Slack workspace who is always ready to assist.

#### How It Works:

  1. **Integration with Slack:**

– The bot is integrated into Slack, meaning it can read messages in channels it’s added to or when directly messaged. It’s like inviting a friend into your group chat who can listen and respond to conversations.

  2. **Understanding Messages:**

– When you type a message that includes the bot (either by mentioning it with an @ symbol in a channel or by sending it a direct message), the bot pays attention. It’s designed to understand that you’re talking to it and that you’re expecting some form of response.

  3. **Processing Your Requests:**

– Once the bot receives a message, it doesn’t just look at the latest message you’ve sent. It considers the context by looking at the conversation history. This way, it can provide responses that make sense within the ongoing discussion. It’s akin to a human joining a conversation, where understanding the previous discussion points is crucial for making relevant comments.

  4. **Generating Responses:**

– The bot uses a powerful language model (think of it as a very advanced AI that understands and generates human-like text) to come up with responses to your messages. It’s as if the bot thinks about what you’ve asked and then crafts a reply that aims to be helpful and informative.

  5. **Interacting Back:**

– After thinking through your message and generating a response, the bot then sends this response back into the Slack channel or direct message. You’ll see its reply just like you would from any other team member, providing the information or assistance you requested.

  6. **Special Features:**

– The bot is designed with specific settings for different channels or direct messages. This means it can tailor its responses based on where it’s talking. For example, it might respond differently in a casual chat compared to a more formal project discussion channel.

  7. **Respecting Privacy and Permissions:**

– The bot is programmed to respect Slack’s rules and the privacy of your conversations. It only listens and responds to messages where it’s explicitly mentioned or directly messaged, ensuring that it doesn’t intrude on private discussions.

#### In Summary:

The SlackAskBot acts as a virtual assistant within your Slack workspace, ready to provide information and answers to your questions. It’s designed to understand the context of conversations, generate helpful responses, and interact with users in a way that enhances team communication and productivity. Whether you’re asking for quick facts, seeking advice on a project, or just interacting for fun, the SlackAskBot is there to make your Slack experience more informative and engaging.


The SlackAskBot is open source – you can check out the code and instructions for how to implement it in your own Slack environment in the repository here.

We are using Slack as a chat interface for LLMs like GPT, making the user experience similar to ChatGPT

Effective Pair Programming and Coding and Prompt Engineering and Writing with LLMs like ChatGPT and other AI tools

I’ve been puzzled when I see people online say that LLMs “don’t write good code”. In my experience, they do. But given that most of these LLMs are used in chatbot mode – meaning you chat and give it instructions to generate the code – that might be where the disconnect lies. To get good code, you need effective prompting, and to do so, you need clear thinking and ideas on what you are trying to achieve and how.

My recipe and understanding is:

Clear thinking + clear communication of ideas/request = effective prompting => effective code and other outputs

It also involves understanding what these systems can and can’t do. For example, as I’ve written about before, they can’t “know” things (although they can increasingly look things up) and they can’t do “mental” math. But, they can generally repeat patterns of words to help you see what is known about a topic and they can write code that you can execute (or it can execute, depending on settings) to solve a math problem.
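As a toy example of that last point: rather than trusting an LLM’s “mental” arithmetic, you can ask it to emit a few lines of code like the following (hypothetical values) and run them yourself, getting exact, deterministic math:

```python
# Toy example: exact arithmetic via executed code instead of an
# LLM's "mental" math. The item values are made up for illustration.
items = [12.99, 4.50, 7.25]
subtotal = round(sum(items), 2)
print(subtotal)  # prints 24.74
```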

What the system does well is help code small chunks, walk you through processes to link these sections of code up, and help you implement them (if you ask for it). The smaller the task (ask), the more effective it is – and the easier it is for you to see when it completes the task and when it hasn’t been able to finish, due to limitations like response length limits; information falling out of the context window (what it knows from what you’ve told it); unclear prompting; and/or because you’re asking it to do things for which it doesn’t have expertise. Some of the last part – lack of expertise – can be improved with specific prompting techniques, and that’s also true for right-sizing the task it’s focusing on.

Right-size the task by giving a clear ask

If I were to ask an LLM to write me code for an iOS app to do XYZ, it could write me some code, but it certainly wouldn’t (at this point in history, written in February 2024) write all the code and give me a downloadable file that includes it all and the ability to simply run it. What it can do is start writing chunks and snippets of code for bits and pieces of files that I can take and place and build upon.

How do I know this? Because I made that mistake when trying to build my first iOS apps in April and May 2023 (last year). It can’t do that (and still can’t today; I repeated the experiment). I had no idea how to build an iOS app; I had a sense that it involved XCode and pushing to the Apple iOS App Store, and that I needed “Swift” as the programming language. Luckily, though, I had a much stronger sense of how I wanted to structure the app user experience and what the app needed to do.

I followed these steps:

  1. First, I initiated a chat as a complete novice app builder. I told it I was new to building iOS apps and wanted to use XCode. I had XCode downloaded, but that was it. I told it to give me step by step instructions for opening XCode and setting up a project. Success! That was effective.
  2. I opened a different chat window after that, to start a new chat. I told it that it was an expert in iOS programming using Swift and XCode. Then I described the app that I wanted to build, said where I was in the process (e.g. had opened and started a project in XCode but had no code yet), and asked it for code to put on the home screen so I could build and open the app and it would have content on the home screen. Success!
  3. From there, I was able to stay in the same chat window and ask it for pieces at a time. I wanted to have a new user complete an onboarding flow the very first time they opened the app. I explained the number of screens and content I wanted on those screens; the chat was able to generate code, tell me how to create that in a file, and how to write code that would trigger this only for new users. Success!
  4. I was able to then add buttons to the home screen; have those buttons open new screens of the app; add navigation back to the home; etc. Success!
  5. (Rinse and repeat, continuing until all of the functionality was built out a step at a time).

To someone with familiarity building and programming things, this probably follows a logical process of how you might build apps. If you’ve built iOS apps before and are an expert in Swift programming, you’re either not reading this blog post or are thinking I (the human) am dumb and inexperienced.

Inexperienced, yes, I was (in April 2023). But what I am trying to show here is for someone new to a process and language, this is how we need to break down steps and work with LLMs to give it small tasks to help us understand and implement the code it produces before moving forward with a new task (ask). It takes these small building block tasks in order to build up to a complete app with all the functionality that we want. Nowadays, even though I can now whip up a prototype project and iOS app and deploy it to my phone within an hour (by working with an LLM as described above, but skipping some of the introductory set-up steps now that I have experience in those), I still follow the same general process to give the LLM the big picture and efficiently ask it to code pieces of the puzzle I want to create.

As the human, you need to be able to keep the big picture – full app purpose and functionality – in mind while subcontracting with the LLM to generate code for specific chunks of code to help achieve new functionality in our project.

In my experience, this is very much like pair programming with a human. In fact, this is exactly what we did when we built DIYPS over ten years ago (wow) and then OpenAPS within the following year. I’ve talked endlessly about how Scott and I would discuss an idea and agree on the big picture task; then I would direct sub-tasks and asks that he, then also Ben and others would be coding on (at first, because I didn’t have as much experience coding and this was 10 years ago without LLMs; I gradually took on more of those coding steps and roles as well). I was in charge of the big picture project and process and end goal; it didn’t matter who wrote which code or how; we worked together to achieve the intended end result. (And it worked amazingly well; here I am 10 years later still using DIYPS and OpenAPS; and tens of thousands of people globally are all using open source AID systems spun off of the algorithm we built through this process!)

Two purple boxes. The one on the left says "big picture project idea" and has a bunch of smaller size boxes within labeled LLM, attempting to show how an LLM can do small-size tasks within the scope of a bigger project that you direct it to do. On the right, the box simply says "finished project". Today, I would say the same is true. It doesn’t matter – for my types of projects – if a human or an LLM “wrote” the code. What matters is: does it work as intended? Does it achieve the goal? Does it contribute to the goal of the project?

Coding can be done – often by anyone (human with relevant coding expertise) or anything (LLM with effective prompting) – for any purpose. The critical key is knowing what the purpose is of the project and keeping the coding heading in the direction of serving that purpose.

Tips for right-sizing the ask

  1. Consider using different chat windows for different purposes, rather than trying to do it all in one. Yes, context windows are getting bigger, but you’ll still likely benefit from giving different prompts in different windows (more on effective prompting below). Start with one window for getting started with setting up a project (e.g. how to get XCode on a Mac and start a project; what file structure to use for an app/project that will do XYZ; how to start a Jupyter notebook for doing data science with Python; etc.) and for brainstorming ideas to scope your project; then use a separate window for starting a series of coding sub-tasks (e.g. write code for the home page screen for your app; add a button that allows voice entry functionality; add in HealthKit permission functionality; etc.) that serve the big picture goal.
  2. Make a list for yourself of the steps needed to build a new piece of functionality for your project. If you know what the steps are, you can specifically ask the LLM for that. Again, use a separate window if you need to. For example, if you want to add in the ability to save data to HealthKit from your app, you may start a new chat window that asks the LLM generally: how does one add HealthKit functionality for an app? It’ll describe the process of certain settings that need to be done in XCode for the project; adding code that prompts the user with correct permissions; and then code that actually does the saving/revising to HealthKit.

    Make your list (by yourself or with help), then you can go ask the LLM to do those things in your coding/task window for your specific project. You can go set the settings in XCode yourself, and skip to asking it for the task you need it to do, e.g. “write code to prompt the user with HealthKit permissions when button X is clicked”.

    (Sure, you can do the ask for help in outlining steps in the same window that you’ve been prompting for coding sub-tasks, just be aware that the more you do this, the more quickly you’ll burn through your context window. Sometimes that’s ok, and you’ll get a feel for when to do a separate window with the more experience you get.)

  3. Pay attention as you go and see how much code it can generate and when it falls short of an ask. This will help you improve the rate at which you successfully ask and it fully completes a task for future asks. I observe that when I don’t know – due to my lack of expertise – the right size of a task, it’s more prone to give me ½-⅔ of the code and solution, needing additional prompting after that. Sometimes I ask it to continue where it cut off; other times I start implementing/working with the bits of code (the first ⅔) it gave me, and keep a mental or written note that this did not completely generate all steps/code for the functionality and that I need to come back. Part of why it is sometimes effective to get started with ⅔ of the code is because you’ll likely need to debug/test the first bit of code, anyway. Sometimes when you paste in code it’s using methods that don’t match the version you’re targeting (e.g. functionality that is outdated as of iOS 15, for example, when you’re targeting iOS 17 and newer) and it’ll flag a warning or block it from working until you fix it.

    Once you’ve debugged/tested as much as you can of the original ⅔ of code it gave you, you can prompt it to say “Ok, I’ve done X and Y. We were trying to (repeat initial instructions/prompt) – what are the remaining next steps? Please code that.” to go back and finish the remaining pieces of that functionality.

    (Note that saying “please code that” isn’t necessarily good prompt technique, see below).

    Again, much of this is paying attention to how the sub-task is getting done in service of the overall big picture goal of your project; or the chunk that you’ve been working on if you’re building new functionality. Keeping track with whatever method you prefer – in your head, a physical written list, a checklist digitally, or notes showing what you’ve done/not done – is helpful.

Most of the above I used for coding examples, but I follow the same general process when writing research papers, blog posts, research protocols, etc. My point is that this works for all types of projects that you’d work on with an LLM, whether the output generation intended is code or human-focused language that you’d write or speak.

But, coding or writing language, the other thing that makes a difference in addition to right-sizing the task is effective prompting. I’ve intuitively noticed that this has made the biggest difference in my projects for getting output that matches what I’m looking for. Conversely, I have actually peer reviewed papers for medical journals that do a horrifying job with prompting. You’ll hear people talk about “prompt engineering” and this is what it is referring to: how do you engineer (write) a prompt to get the ideal response from the LLM?

Tips for effective prompting with an LLM

    1. Personas and roles can make a difference, both for you and for the LLM. What do I mean by this? Start your prompt by telling the LLM what perspective you want it to take. Without it, you’re going to make it guess what information and style of response you’re looking for. Here’s an example: if you ask it what causes cancer, it’s going to default to safety and give you a general public answer about causes of cancer in very plain, lay language. Which may be fine. But if you’re looking to generate a better understanding of the causal mechanisms of cancer – what is known and what is not known – you will get better results if you prompt it with “You are an experienced medical oncologist” so it speaks from the generated perspective of that role. Similarly, you can tell it your role. Follow it with “Please describe the causal mechanisms of cancer and what is known and not known” and/or “I am also an experienced medical researcher, although not an oncologist” to help contextualize that you want a deeper, technical approach to the answer and not high level plain language in the response.

      Compare and contrast when you prompt the following:

      A. “What causes cancer?”

      B. “You are an experienced medical oncologist. What causes cancer? How would you explain this differently in lay language to a patient, and how would you explain this to another doctor who is not an oncologist?”

      C. “You are an experienced medical oncologist. Please describe the causal mechanisms of cancer and what is known and not known. I am also an experienced medical researcher, although not an oncologist.”

      You’ll likely get different types of answers, with some overlap between A and the first part of answer B. Ditto for a tiny bit of overlap between the latter half of answer B and for C.

      I do the same kind of prompting with technical projects where I want code. Often, I will say “You are an expert data scientist with experience writing code in Python for a Jupyter Notebook” or “You are an AI programming assistant with expertise in building iOS apps using XCode and SwiftUI”. Those will then be followed with a brief description of my project (more on why this is brief below) and the first task I’m giving it.

      The same also goes for writing-related tasks; the persona I give it and/or the role I reference for myself makes a sizable difference in getting the quality of the output to match the style and quality I was seeking in a response.

  2. Be specific. Saying “please code that” or “please write that” might work, sometimes, but more often than not will get a less effective output than if you provide a more specific prompt. I am a literal person, so this is something I think about a lot: I’m always parsing and mentally reviewing what people say to me, because my instinct is to take their words literally, and I have to think through the likelihood that those words were intended literally or if there is context that should be used to filter those words to be less literal. Sometimes, you’ll be thinking about something and start talking to someone about it, and they have no idea what on earth you’re talking about because the last part of your out-loud conversation with them was about a completely different topic!

    LLMs are the same as the confused conversational partner who doesn’t know what you’re thinking about. An LLM only knows what you’ve recently told it (and it will ‘forget’ what you told it about a project more quickly than a human would). Remember the above tips about brainstorming and making a list of tasks for a project? Providing a description of the task along with the ask (e.g. “we are doing X related to the purpose of achieving Y, please code X”) will get you better output, more closely matching what you wanted, than saying “please code that”, where the LLM might code something else to achieve Y if you didn’t tell it you wanted to focus on X.

    I find this even more necessary with writing related projects. I often find I need to give it the persona “You are an expert medical researcher”, the project “we are writing a research paper for a medical journal”, the task “we need to write the methods section of the paper”, and a clear ask “please review the code and analyses and make an outline of the steps that we have completed in this process, with sufficient detail that we could later write a methods section of a research paper”. A follow up ask is then “please take this list and draft it into the methods section”. That process with all of that specific context gives better results than “write a methods section” or “write the methods” etc.

  3. Be willing to start over with a new window/chat. Sometimes the LLM can get itself lost in solving a sub-task and lose sight (via a lost context window) of the big picture of a project, and you’ll find yourself having to repeat over and over again what you’re asking it to do. Don’t be afraid to cut your losses and start a new chat for a sub-task that you’ve been stuck on. You may be able to eventually come back to the same window as before, or the new window might become your new ‘home’ for the project…or sometimes a third, fourth, or fifth window will.
  4. Try, try again.
    I may hold the record for the longest-running bug that I (and the LLM) could. Not. Solve. This was so, so annoying. No users apparently noticed it, but I knew about it and it bugged me for months and months. Every few weeks I would go to an old window and also start a new window, describe the problem, paste the code in, and ask for help to solve it. I asked it to identify problems with the code; I asked it to explain the code and any unexpected/unintended functionality from it; I asked it what types of general things would be likely to cause that type of bug. It couldn’t find the problem. I couldn’t find the problem. Finally, one day, I did all of the above, but then also started pasting in every single file from my project and asking if it was likely to include code that could be related to the problem. By forcing myself to review all my code files with this problem in mind, even though the files weren’t related at all to the file/bug… I finally spotted the problem myself. I pasted the code in, asked if it was a possibility that it was related to the problem, the LLM said yes, I tried a change, and… voila! Bug solved on January 16 after plaguing me since November 8. (And it probably existed before then, but I didn’t have the functionality built until November 8, when I realized it was a problem.) I was beating myself up about it and posted to Twitter about finally solving the bug (but very much with the mindset of feeling very stupid about it). Someone replied and said “congrats! sounds like it was a tough one!”. Which I realized was a very kind framing and one that I liked, because it was a tough one; and also I am doing a tough thing that no one else is doing, and one I would not have been willing to attempt without an LLM to support me.

    Similarly, just this last week on Tuesday I spent about 3 hours working on a sub-task for a new project. It took 3 hours to do something that on a previous project took me about 40 minutes, so I was hyper-aware of the time mismatch and perceived 3 hours as a long time to spend on the task. I vented to Scott quite a bit on Tuesday night, and he reminded me that sure, it took “3 hours,” but I did something in 3 hours that would otherwise take 3 years, because no one else would do (or is doing) the project I’m working on. Then on Wednesday I spent an hour on another part of the project, and on Thursday I whipped through another hour and a half of huge chunks of work that went much faster than they otherwise would have, in part because the “three hours” on Tuesday wasn’t just about the code: it was about organizing my thinking, scoping the project and research protocol, and doing a huge portion of other work to organize my thinking so I could effectively prompt the LLM to do the sub-task (which itself probably did take closer to the ~40 minutes, similar to the prior project).

    All this to say: LLMs have become pair programmers and collaborators and writers that are helping me achieve tasks and projects that no one else in the world is working on yet. (It reminds me very much of my early work with DIYPS and OpenAPS where we did the work, quietly, and people eventually took notice and paid attention, albeit slower than we wished but years faster than had we not done that work. I’m doing the same thing in a new field/project space now.) Sometimes, the first attempt to delegate a sub-task doesn’t work. It may be because I haven’t organized my thinking enough, and the lack of ideal output shows that I have not prompted effectively yet. Sometimes I can quickly fix the prompt to be effective; but sometimes it highlights that my thinking is not yet clear; my ability to communicate the project/task/big picture is not yet sufficient; and the process of achieving the clarity of thinking and translating to the LLM takes time (e.g. “that took 3 hours when it should have taken 40 minutes”) but ultimately still moves me forward to solving the problem or achieving the tasks and sub-tasks that I wanted to do. Remember what I said at the beginning:

    Clear thinking + clear communication of ideas/request = effective prompting => effective code and other outputs


  • Try it anyway.
    I am trying to get out of the habit of saying “I can’t do X,” like “I can’t code/program an iOS app”…because now I can. I’ve in fact built and shipped/launched/made available multiple iOS apps (check out Carb Pilot if you’re interested in macronutrient estimates for any reason; you can customize it so you only see the one(s) you care about; or if you have EPI, check out PERT Pilot, the world’s first and only app for tracking pancreatic enzyme replacement therapy, which has the same AI feature for generating macronutrient estimates to aid in adjusting enzyme dosing). I’ve also made really cool, 100% custom-to-me niche apps that serve a personal purpose and save me tons of time and energy. I can do those things because I tried. I flopped a bunch along the way – it took me several hours to solve a simple iOS programming error related to home screen navigation in my first few apps – but in the process I learned how to do those things, and now I can build apps. I’ve coded and developed for OpenAPS and other open source projects, including a data-conversion tool that no one else in the world had built. Yet my brain still tries to tell me I can’t code/program/etc. (and to be fair, humans try to tell me that sometimes, too).

    I bring that up to contextualize that I’m working on – and I wish others would work on, too – addressing the reflexive thoughts about what we can and can’t do, based on prior knowledge. The world is different now, and tools like LLMs make it possible to learn new things and build new projects that maybe we didn’t have the time/energy to do before (not that we couldn’t). The bar to entry – to starting and trying – is so much lower than it was even a year ago. It really comes down to a willingness to try and see, which I recognize is hard: I have those thought patterns too of “I can’t do X,” but I’m trying to notice when I have them and shift my thinking to “I used to not be able to do X; I wonder if it’s possible to work with an LLM to do part of X, or to learn how to do Y, so that I could try to do X.”

    A recent real example for me is power calculations and sample size estimates for future clinical trials. That’s something I can’t do; it requires a statistician and specialized software and expertise.

    Or…does it?

    I asked my LLM how power calculations are done. It explained. I asked if it was possible to do one using Python code in a Jupyter notebook, and what information would be needed to do so. It walked me through the decisions I needed to make about power and significance, and highlighted the variables I needed to define/collect to put into the calculation. I had generated the data from a previous study, so I had all the pieces (variables) I needed. I asked it to write code for me to run in a Jupyter notebook, and it did. I tweaked the code, input my variables, ran it…and got the result. I had run a power calculation! (Shocked face here.) But then I got imposter syndrome again and reached out to a statistician I had previously worked with on a research project. I shared my code and asked if it was a correct or acceptable approach and whether I was interpreting the result correctly. His response? It was correct, and “I couldn’t have done it better myself”.

    (I’m still shocked about this).

    He also kindly took my variables, put them into the specialized software he uses, and confirmed that its results matched my code’s output. He then pointed out a consideration that taught me something for future projects that might be different (where the data is or isn’t normally distributed), although it didn’t influence the result of my calculation for this project.

    What I learned from this was a) this statistician is amazing (which I already knew from working with him in the past) and kind to support my learning like this; b) I can do pieces of projects that I previously thought were far beyond my expertise; c) the blocker is truly in my head, and the more we break out of or identify the patterns stopping us from trying, the farther we will get.
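    To give a sense of how approachable this kind of calculation can be, here is a generic sketch (not the exact code from my project) of a two-sample power calculation using the standard normal approximation, with only the Python standard library. The function name and the example effect size are illustrative assumptions:

```python
from math import ceil
from statistics import NormalDist

def sample_size_per_group(effect_size: float, alpha: float = 0.05,
                          power: float = 0.8) -> int:
    """Approximate sample size per group for a two-sample comparison
    of means, using the normal approximation (effect_size = Cohen's d)."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)  # two-sided significance threshold
    z_beta = z.inv_cdf(power)           # quantile corresponding to desired power
    n = 2 * ((z_alpha + z_beta) / effect_size) ** 2
    return ceil(n)  # round up: you can't enroll a fraction of a participant

# e.g. a medium effect (d = 0.5), alpha = 0.05, 80% power:
print(sample_size_per_group(0.5))  # → 63 per group
```

    Specialized software (and libraries like statsmodels) typically use the t distribution instead, which gives a slightly larger n; that difference is exactly the kind of nuance a statistician can help confirm.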

    “Try it anyway” also refers to trying things again over time. The LLMs are improving every few months and often have new capabilities that they didn’t have before. Much of my work is done with GPT-4, and the more nuanced, advanced technical tasks are way more efficient than when using GPT-3.5. That being said, some tasks can absolutely be done with GPT-3.5-level AI. Something you can’t quite figure out now could be something you sort out in a few weeks/months (see above about my three-month bug); it could be easier to do once you advance your own thinking; or it could be more efficiently done with the next model of the LLM you’re working with.

  • Test whether custom instructions help. Be aware, though, that too many instructions can conflict with one another and also take up some of your context window. Plus, if you forget what instructions you gave it, you might get seemingly unexpected responses in future chats. (You can always change the custom instructions and/or turn them on and off.)

I’m hoping this helps give people the confidence or context to try things with LLMs that they were not willing to try before; helps them get in the habit of remembering to try things with LLMs; and helps them get the best possible output for the projects they’re working on.

Remember:

  • Right-size the task by making a clear ask.
  • You can use different chat windows for different levels of the same project.
  • Use a list to help you, the human, keep track of all the pieces that contribute to the bigger picture of the project.
  • Try giving the LLM a persona for an ask; and test whether you also need to assign yourself a persona or not for a particular type of request.
  • Be specific; think of the LLM as a conversational partner that can’t read your mind.
  • Don’t be afraid to start over with a new context window/chat.
  • Things that were hard a year ago might be easier with an LLM; you should try again.
  • You can do more, partnering with an LLM, than you can on your own, and likely can do things you didn’t realize were possible for you to do!

Clear thinking + clear communication of ideas/request = effective prompting => effective code and other outputs

Have any tips to help others get more effective output from LLMs? I’d love to hear them, please comment below and share your tips as well!

Tips for prompting LLMs like ChatGPT, written by Dana M. Lewis and available from DIYPS.org