Pain and translation and using AI to improve healthcare at an individual level

I think differently from most people. Sometimes, this is a strength; and sometimes this is a challenge. This is noticeable when I approach healthcare encounters in particular: the way I perceive signals from my body is different from a typical person. I didn’t know this for the longest time, but it’s something I have been becoming more aware of over the years.

The most noticeable incident that brought me to this realization came when I pitched head first off a mountain trail in New Zealand over five years ago. I remember yelling – in flight – "help, I broke my ankle, help." When I had arrested my fall and clung on, and the human daisy chain was pulling me back up onto the trail, I yelped and stopped because I could not use my right ankle to help me climb up the trail. I had to reposition my knee to help move me up. When we got up to the trail and had me sitting on a rock, resting, I felt a wave of nausea crest over me. People suggested that it was dehydration and I should drink. I didn't feel dehydrated, but ok. Then, because I was able to gently rest my foot on the ground at a normal perpendicular angle, the trail guides hypothesized that it was not broken, just sprained. It wasn't swollen enough to look like a fracture, either. I felt like it hurt really badly, worse than I'd ever hurt an ankle before, and it didn't feel like a sprain, but I had never broken a bone before, so maybe it was the trauma of the incident contributing to how I was feeling. We taped it and I tried walking. Nope. Too-strong pain. We made a new goal of having me use poles as crutches to get me to a nearby stream a half mile away, to try to ice my ankle. Nope: I could not use poles as crutches; even partial weight bearing was undoable. I ended up doing a mix of hopping, holding on to Scott and one of the guides. That got exhausting on my other leg pretty quickly, so I also got down on all fours (with my right knee on the ground but lifting my foot and ankle in the air behind me) to crawl some. Eventually, we realized I wasn't going to be able to make it to the stream, and the trail guides decided to call for a helicopter evacuation. The medics, too, when they arrived via helicopter, thought it likely wasn't broken. I got flown to an ER and taken to X-ray. When the technician came out, I asked her if she saw anything obvious and whether it looked broken or not. She laughed and said oh yes, there's a break. When the ER doc came in to talk to me, he said "you must have a really high pain tolerance" and I said "oh really? So it's definitely broken?" and he looked at me like I was crazy, saying "it's broken in 3 different places". (And then he gave me extra pain meds before setting my ankle and putting the cast on, to compensate for the fact that I have a high pain tolerance and/or don't communicate pain levels in quite the typical way.)

A week later, when I was trying not to fall on my broken ankle and broke my toe, I knew instantly that I had broken my toe, both by the pain and the nausea that followed. Years later when I smashed another toe on another chair, I again knew that my toe was broken because of the pain + following wave of nausea. Nausea, for me, is apparently a response to very high level pain. And this is something I’ve carried forward to help me identify and communicate when my pain levels are significant, because otherwise my pain tolerance is such that I don’t feel like I’m taken seriously because my pain scale is so different from other people’s pain scales.

Flash forward to the last few weeks. I have an autoimmune disease causing issues with multiple areas of my body. I have some slight, progressive muscle weakness that began to concern me, especially as it spread to multiple limbs and areas of my body. This was followed by pain in different parts of my spine, which has escalated. Last weekend, riding in the car, I started to get nauseous from the pain and had to take anti-nausea medicine (which thankfully helped) as well as pain medicine (OTC, and thankfully it also helped lower the pain to manageable levels). This has happened several other times.

Some of the symptoms are concerning to my healthcare provider and she agreed I should probably have an MRI and a consult with neurology. Sadly, the first available new-patient appointment with the neurologist I was assigned to was in late September. Gulp. I was admittedly nervous about my symptom progression, my pain levels (intermittent as they are), and how bad things might get if we are not able to take any action between now and September. I also, admittedly, was not quite sure how I would cope with the level of pain I have been experiencing at those peak moments that cause nausea.

I had last spoken to my provider a week prior, before the spine pain started. I reached out to give her an update, confirm that my specialist appointment was not until September, and express my concern about the progression and timeline. She too was concerned and I ended up going in for imaging sooner.

Over the last week, because I've been having these progressive symptoms, I used Katie McCurdy's free templates from Pictal Health to help visualize and show the progression of symptoms over time. I wasn't planning on sending my visuals to my doctor, but it helped me concretely articulate my symptoms and confirm that I was including everything that I thought was meaningful for my healthcare providers to know. I also shared them with Scott to confirm he didn't think I had missed anything. The icons were helpful in some cases, but in other cases didn't quite match how I was experiencing pain, so I modified them somewhat to better reflect how I saw the pain I was experiencing.

(PS – check out Katie’s templates here, you can make a copy in Google Drive and use them yourself!)

As I spoke with the nurse who was recording my information at intake for imaging, she asked me to characterize the pain. I did, and explained that it was usually around a 7/10 at that point but periodically got stronger, to the point of causing nausea, which for me is a broken-bone-level pain response. She then asked me to characterize the type of pain – was it burning, tingling…? None of the words she said matched how it feels. It's strong pain; it sometimes gets worse. But it's not any of the words she mentioned.

When the nurse asked if it was “sharp”, Scott spoke up and explained the icon that I had used on my visual, saying maybe it was “sharp” pain. I thought about it and agreed that it was probably the closest word (at least, it wasn’t a hard no like the words burning, tingling, etc. were), and the nurse wrote it down. That became the word I was able to use as the closest approximation to how the pain felt, but again with the emphasis of it periodically reaching nausea-inducing levels equivalent to broken bone pain, because I felt saying “sharp” pain alone did not characterize it fully.

This, then, is one of the areas where I feel that artificial intelligence (AI) gives me a huge helping hand. I often start working with an LLM (a large language model) by describing symptoms. Sometimes I give it a persona to respond as (different healthcare provider roles); sometimes I clarify my own role as a patient, or sometimes as a similar provider-expert role. I use different words and phrases in different questions and follow-ups; I then study the language it uses in response.

If you’re not familiar with LLMs, you should know it is not human intelligence; there is no brain that “knows things”. It’s not an encyclopedia. It’s a tool that’s been trained on a bajillion words, and it learns patterns of words as a result, and records “weights” that are basically cues about how those patterns of words relate to each other. When you ask it a question, it’s basically autocompleting the next word based on the likelihood of it being the next word in a similar pattern. It can therefore be wildly wrong; it can also still be wildly useful in a lot of ways, including this context.
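If it helps to picture that, here is a toy sketch of the "autocomplete from weights" idea. This is nothing like how a real LLM is actually implemented (real models use billions of learned numbers, not a lookup table); it is only meant to illustrate the concept of picking the statistically likeliest next word:

```python
# Toy illustration only – not how a real LLM works internally.
# Here the "weights" are just counts of which word most often followed a phrase
# in some imaginary training text.
weights = {
    "the pain is": {"sharp": 8, "burning": 5, "dull": 3},
    "pain is sharp": {"and": 6, "when": 4},
}

def autocomplete(phrase: str) -> str:
    """Return the most likely next word for a phrase, based on the stored weights."""
    options = weights.get(phrase, {})
    if not options:
        return "(no pattern learned for this phrase)"
    return max(options, key=options.get)

print(autocomplete("the pain is"))  # prints "sharp" – the likeliest continuation, not a "known fact"
```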

What I am doing in these situations is not looking for factual information. Again, it's not an encyclopedia. Instead, I am observing the LLM using a pattern of words so that I am in turn building my own set of "weights" – meaning, building an understanding of the patterns of words it uses – to figure out a general outline of what is commonly known by doctors and the medical literature; the terminology doctors are likely using to take in information and output recommendations; and basically to build a list of things that do and do not match my scenario or symptoms or words, or whatever it is I am seeking to learn about.

I can then learn (from the LLM as well as from in-person clinical encounters) that doctors and other providers typically ask about burning, tingling, etc., and I can make it clear that none of those words match at all. I can then accept an alternative suggestion from them (or from Scott, or use a word I learned from an LLM) where I'm not quite sure it's a perfect match, but it's not absolutely wrong and is therefore ok to use to describe something of the sensation I am experiencing.

The LLM and AI, basically, have become a translator for me. Again, notice that I’m not asking it to describe my pain for me; it would make up words based on patterns that have nothing to do with me. But when I observe the words it uses I can then use my own experience to rule things in/out and decide what best fits and whether and when to use any of those, if they are appropriate.

Often, I can do this in advance of a live healthcare encounter. And that's really helpful because it makes me a better historian (to use clinical terms, meaning I'm able to report the symptoms and chronology and characterization more succinctly, without them having to play 20 questions to draw it out of me), and it saves me and the clinicians time, so we can move on to other things.

At this imaging appointment, this was incredibly helpful. I had the necessary imaging done and had the results at my fingertips, and was able to begin exploring and discussing the raw data with my LLM. When I then spoke with the clinician, I was able to better characterize my symptoms in the context of the imaging results and ask questions that I felt were more aligned with what I was experiencing, and it made for a more efficient but still effective conversation with the clinician about what our working hypothesis was; what the next short-term and long-term pathways looked like; etc.

This is often how I use LLMs overall. If you ask an LLM if it knows who Dana Lewis is, it "does" know. It'll tell you things about me that are mostly correct. If you ask it to write a bio about me, it will solidly make up ⅓ of it that is fully inaccurate. Again, remember it is not an encyclopedia and does not "know things". When you remember that the LLM is autocompleting words based on the likelihood that they match the previous words – and think about how much information is on the internet and how many weights (patterns of words) it's been able to build about a topic – you can then get a better spidey-sense about when things are slightly more or less accurate at a general level. I have actually used part of an LLM-written bio, but not by asking it to write a bio. That doesn't work, because of made-up facts. I have instead asked it to describe my work, and it does a pretty decent job. This is due to the number of articles I have written; the number of articles describing my work; and the number of bios I've actually written and posted online for conferences and such. So it has a lot of "weights" probably tied to the types of things I work on, and having it describe the type of work I do or am known for gets pretty accurate results, because it's writing at a general, high level without enough detail to get anything "wrong" like a fact about an award, etc.

This is how I recommend others use LLMs, too, especially those of us who are patients or working in healthcare. LLMs pattern-match on words in their training, and they output likely patterns of words. We in turn as humans can observe and learn from the patterns, while recognizing these are PATTERNS of connected words that can in fact be wrong. Systemic bias is baked into human behavior and medical literature, and this then has been pattern-matched by the LLM. (Note I didn't say "learned"; they've created weights based on the patterns they observe over and over again.) You can't necessarily course-correct the LLM (it'll pretend to apologize and maybe for a short while adjust its word patterns, but in a new chat it's prone to make the same mistakes because the training has not been updated based on your feedback, so it reverts to using the 'weights' (patterns) it was trained on); instead, we need to create more of the correct/right information and have it voluminously available for LLMs to train on in the future. At an individual level, then, we can let go of the obvious not-right things it's saying and focus on what we can benefit from in the patterns of words it gives us.

And for people like me, with a high (or different type of) pain tolerance and a different vocabulary for what my body is feeling like, this has become a critical tool in my toolbox for optimizing my healthcare encounters. Do I have to do this to get adequate care? No. But I’m an optimizer, and I want to give the best inputs to the healthcare system (providers and my medical records) in order to increase my chances of getting the best possible outputs from the healthcare system to help me maintain and improve and save my health when these things are needed.

TLDR: LLMs can be powerful tools in the hands of patients, including for real-time or ahead-of-time translation and creating shared, understandable language for improving communication between patients and providers. Just as you shouldn’t tell a patient not to use Dr. Google, you should similarly avoid falling into the trap of telling a patient not to use LLMs because they’re “wrong”. Being wrong in some cases and some ways does not mean LLMs are useless or should not be used by patients. Each of these tools has limitations but a lot of upside and benefits; restricting patients or trying to limit use of tools is like limiting the use of other accessibility tools. I spotted a quote from Dr. Wes Ely that is relevant: “Maleficence can be created with beneficent intent”. In simple words, he is pointing out that harm can happen even with good intent.

Don’t do harm by restricting or recommending avoiding tools like LLMs.

A Slackbot for using Slack to access and use a chat-based LLM in public

I’ve been thinking a lot about how to help my family, friends, and colleagues use LLMs to power their work. (As I’ve written about here, and more recently here with lots of tips on prompting and effectively using LLMs for different kinds of projects). 

Scott has been on the same page, especially thinking about how to help colleagues use LLMs effectively, but taking a slightly different approach: he built a Slackbot (a bot for Slack) which uses GPT-3.5 and GPT-4 to answer questions. This uses the GPT API but presents it to the user in Slack, instead of having to use ChatGPT as the chat interface. So, it's an LLM chatbot, different from ChatGPT (because it's a different chat interface), but it uses the same AI (GPT-3.5 and GPT-4 from OpenAI). You could implement the same idea (a chat-based bot in Slack) using different AIs/LLMs, of course.
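(If you're curious what the general pattern looks like in code, here is a rough sketch – not Scott's actual SlackAskBot implementation, which is linked further below – of a minimal Slack bot that forwards an @-mention to the OpenAI API and posts the reply back. It assumes a Slack app with Socket Mode enabled and the usual bot/app tokens plus an OpenAI API key in the environment.)

```python
import os

from openai import OpenAI
from slack_bolt import App
from slack_bolt.adapter.socket_mode import SocketModeHandler

# Assumes SLACK_BOT_TOKEN, SLACK_APP_TOKEN, and OPENAI_API_KEY are set in the environment.
app = App(token=os.environ["SLACK_BOT_TOKEN"])
openai_client = OpenAI()  # reads OPENAI_API_KEY automatically


@app.event("app_mention")
def answer_mention(event, say):
    """When the bot is @-mentioned, send the message text to the LLM and post the reply in a thread."""
    question = event["text"]
    response = openai_client.chat.completions.create(
        model="gpt-4",  # or "gpt-3.5-turbo" to compare responses
        messages=[
            {"role": "system", "content": "You are a helpful assistant for our Slack workspace."},
            {"role": "user", "content": question},
        ],
    )
    say(text=response.choices[0].message.content, thread_ts=event["ts"])


if __name__ == "__main__":
    SocketModeHandler(app, os.environ["SLACK_APP_TOKEN"]).start()
```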

Using a slack-based bot for an LLM achieves a couple of things:

  1. More people can try GPT-4 and compare it to GPT-3.5 to get a taste for prompting and responses, without having to pay $20/month for a ChatGPT Plus account to get access to GPT-4.
  2. If you spend a lot of time in Slack for work, you don’t have to switch windows to access GPT.
  3. If your employer doesn’t want you to use the ChatGPT consumer product for work due to security concerns, but is more comfortable with OpenAI’s confidentiality guarantees for their API, a Slack bot interface provides a more secure alternative.
  4. You can see “in public” how other people in your Slack workspace are using it, and learn from their prompts and interactions new tricks and ideas (in addition to reading my long post here with lots of tips and suggestions).
  5. You can privately (in DM by yourself) talk to the bot and ask it your “dumb” questions (which aren’t dumb), and you can reason through things at your own speed or learn how to do things that you haven’t done before, and do so in private just like if you had your own ChatGPT account. It doesn’t burn social capital or take other people’s time, which you can save for more nuanced questions.

Scott has implemented it for work, but also came up with another fun use for it: we created a family-focused Slack channel for the kids in our lives (we have ten (!) nieces and nephews – you can see how I used GPT to help with special custom books for some of them recently) so they and their parents can access GPT in a safe environment, too. ChatGPT (and OpenAI's GPT API) already have some reasonable safeguards built in to protect kids, but more usefully we want to be able to provide custom instructions to keep it always at a kid-focused level. Plus, this way we can share the experience with them, too.

I asked the bot to summarize itself for a lay audience not familiar with the project, and this was the output:

### Summary of the SlackAskBot

The SlackAskBot is a smart assistant designed to operate within Slack, a popular communication platform used by teams for messaging and collaboration. Imagine having a helpful robot in your group chats or direct messages that can understand your questions and provide informative responses. That’s what the SlackAskBot does. It’s like adding a knowledgeable team member to your Slack workspace who is always ready to assist.

#### How It Works:

  1. **Integration with Slack:**

– The bot is integrated into Slack, meaning it can read messages in channels it’s added to or when directly messaged. It’s like inviting a friend into your group chat who can listen and respond to conversations.

  2. **Understanding Messages:**

– When you type a message that includes the bot (either by mentioning it with an @ symbol in a channel or by sending it a direct message), the bot pays attention. It’s designed to understand that you’re talking to it and that you’re expecting some form of response.

  3. **Processing Your Requests:**

– Once the bot receives a message, it doesn’t just look at the latest message you’ve sent. It considers the context by looking at the conversation history. This way, it can provide responses that make sense within the ongoing discussion. It’s akin to a human joining a conversation, where understanding the previous discussion points is crucial for making relevant comments.

  4. **Generating Responses:**

– The bot uses a powerful language model (think of it as a very advanced AI that understands and generates human-like text) to come up with responses to your messages. It’s as if the bot thinks about what you’ve asked and then crafts a reply that aims to be helpful and informative.

  5. **Interacting Back:**

– After thinking through your message and generating a response, the bot then sends this response back into the Slack channel or direct message. You’ll see its reply just like you would from any other team member, providing the information or assistance you requested.

  6. **Special Features:**

– The bot is designed with specific settings for different channels or direct messages. This means it can tailor its responses based on where it’s talking. For example, it might respond differently in a casual chat compared to a more formal project discussion channel.

  7. **Respecting Privacy and Permissions:**

– The bot is programmed to respect Slack’s rules and the privacy of your conversations. It only listens and responds to messages where it’s explicitly mentioned or directly messaged, ensuring that it doesn’t intrude on private discussions.

#### In Summary:

The SlackAskBot acts as a virtual assistant within your Slack workspace, ready to provide information and answers to your questions. It’s designed to understand the context of conversations, generate helpful responses, and interact with users in a way that enhances team communication and productivity. Whether you’re asking for quick facts, seeking advice on a project, or just interacting for fun, the SlackAskBot is there to make your Slack experience more informative and engaging.


The SlackAskBot is open source – you can check out the code and instructions for how to implement it in your own Slack environment in the repository here.
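(The repository has the real implementation details. Purely as a conceptual sketch of the "considers the conversation history" and "specific settings for different channels" behaviors described above – this is not the SlackAskBot code itself, and the channel IDs and instructions are made up – the step that builds the LLM request might look roughly like this:)

```python
# Conceptual sketch only – not the actual SlackAskBot implementation.
# Maps prior Slack thread messages plus a per-channel system prompt into the
# messages list that a chat-style LLM API expects.

CHANNEL_INSTRUCTIONS = {  # hypothetical channel IDs and per-channel instructions
    "C_FAMILY_KIDS": "Answer at a kid-friendly reading level and keep all content family-safe.",
    "C_WORK_GENERAL": "You are a helpful assistant for our team's technical questions.",
}


def build_messages(channel_id, thread_messages, bot_user_id):
    """thread_messages: Slack message dicts (each with 'user' and 'text'), oldest first,
    e.g. as returned by the Slack conversations_replies API."""
    system_prompt = CHANNEL_INSTRUCTIONS.get(channel_id, "You are a helpful assistant.")
    messages = [{"role": "system", "content": system_prompt}]
    for msg in thread_messages:
        # The bot's own earlier replies become "assistant" turns; everything else is a "user" turn.
        role = "assistant" if msg.get("user") == bot_user_id else "user"
        messages.append({"role": role, "content": msg["text"]})
    return messages
```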

We are using Slack as a chat interface for LLMs like GPT, making the user experience similar to ChatGPT

Effective Pair Programming and Coding and Prompt Engineering and Writing with LLMs like ChatGPT and other AI tools

I’ve been puzzled when I see people online say that LLMs “don’t write good code”. In my experience, they do. But given that most of these LLMs are used in chatbot mode – meaning you chat and give it instructions to generate the code – that might be where the disconnect lies. To get good code, you need effective prompting, and to do that, you need clear thinking and clear ideas about what you are trying to achieve and how.

My recipe and understanding is:

Clear thinking + clear communication of ideas/request = effective prompting => effective code and other outputs

It also involves understanding what these systems can and can’t do. For example, as I’ve written about before, they can’t “know” things (although they can increasingly look things up) and they can’t do “mental” math. But, they can generally repeat patterns of words to help you see what is known about a topic and they can write code that you can execute (or it can execute, depending on settings) to solve a math problem.
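As a trivial illustration of that last point: rather than trusting an LLM's "mental" arithmetic, you can ask it for a few lines of code like the following, which you (or it, with code execution enabled) can actually run to get an exact answer.

```python
# The kind of arithmetic an LLM often fumbles "in its head",
# but gets right every time once it writes code that actually runs.
print(348_712 * 92_057)  # exact product, no rounding or guessing

principal, rate, years = 5_000, 0.07, 30
print(round(principal * (1 + rate) ** years, 2))  # compound growth, computed exactly
```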

What the system does well is help code small chunks, walk you through processes to link these sections of code up, and help you implement them (if you ask for it). The smaller the task (ask), the more effective it is – and the easier it is for you to see when it completes the task and when it hasn’t been able to finish, due to limitations like response length limits; information falling out of the context window (what it knows from what you’ve told it); unclear prompting; and/or because you’re asking it to do things for which it doesn’t have expertise. Some of the last part – lack of expertise – can be improved with specific prompting techniques, and that’s also true for right-sizing the task it’s focusing on.

Right-size the task by giving a clear ask

If I were to ask an LLM to write me code for an iOS app to do XYZ, it could write me some code, but it certainly wouldn’t (at this point in history, written in February 2024) write all the code and give me a downloadable file that includes it all, with the ability to simply run it. What it can do is start writing chunks and snippets of code for bits and pieces of files that I can take and place and build upon.

How do I know this? Because I made that mistake when trying to build my first iOS apps in April and May 2023 (last year). It can’t do that (and still can’t today; I repeated the experiment). I had zero idea how to build an iOS app; I had a sense that it involved XCode and pushing to the Apple iOS App Store, and that I needed “Swift” as the programming language. Luckily, though, I had a much stronger sense of how I wanted to structure the app user experience and what the app needed to do.

I followed these steps:

  1. First, I initiated a chat as a complete novice app builder. I told it I was new to building iOS apps and wanted to use XCode. I had XCode downloaded, but that was it. I told it to give me step by step instructions for opening XCode and setting up a project. Success! That was effective.
  2. I opened a different chat window after that, to start a new chat. I told it that it was an expert in iOS programming using Swift and XCode. Then I described the app that I wanted to build, said where I was in the process (e.g. had opened and started a project in XCode but had no code yet), and asked it for code to put on the home screen so I could build and open the app and it would have content on the home screen. Success!
  3. From there, I was able to stay in the same chat window and ask it for pieces at a time. I wanted to have a new user complete an onboarding flow the very first time they opened the app. I explained the number of screens and content I wanted on those screens; the chat was able to generate code, tell me how to create that in a file, and how to write code that would trigger this only for new users. Success!
  4. I was able to then add buttons to the home screen; have those buttons open new screens of the app; add navigation back to the home; etc. Success!
  5. (Rinse and repeat, continuing until all of the functionality was built out a step at a time).

To someone with familiarity building and programming things, this probably follows a logical process of how you might build apps. If you’ve built iOS apps before and are an expert in Swift programming, you’re either not reading this blog post or are thinking I (the human) am dumb and inexperienced.

Inexperienced, yes, I was (in April 2023). But what I am trying to show here is that for someone new to a process and language, this is how we need to break down steps and work with LLMs: give it small tasks to help us understand and implement the code it produces before moving forward with a new task (ask). It takes these small building block tasks to build up to a complete app with all the functionality that we want. Nowadays, even though I can whip up a prototype project and iOS app and deploy it to my phone within an hour (by working with an LLM as described above, but skipping some of the introductory set-up steps now that I have experience in those), I still follow the same general process to give the LLM the big picture and efficiently ask it to code pieces of the puzzle I want to create.

As the human, you need to be able to keep the big picture – full app purpose and functionality – in mind while subcontracting specific chunks of code to the LLM to help achieve new functionality in your project.

In my experience, this is very much like pair programming with a human. In fact, this is exactly what we did when we built DIYPS over ten years ago (wow) and then OpenAPS within the following year. I’ve talked endlessly about how Scott and I would discuss an idea and agree on the big picture task; then I would direct the sub-tasks and asks that he, and later Ben and others, would code (at first because I didn’t have as much experience coding, and this was 10 years ago, without LLMs; I gradually took on more of those coding steps and roles as well). I was in charge of the big picture project and process and end goal; it didn’t matter who wrote which code or how; we worked together to achieve the intended end result. (And it worked amazingly well; here I am 10 years later still using DIYPS and OpenAPS, and tens of thousands of people globally are all using open source AID systems spun off of the algorithm we built through this process!)

Two purple boxes. The one on the left says "big picture project idea" and has a bunch of smaller size boxes within labeled LLM, attempting to show how an LLM can do small-size tasks within the scope of a bigger project that you direct it to do. On the right, the box simply says "finished project".

Today, I would say the same is true. It doesn’t matter – for my types of projects – if a human or an LLM “wrote” the code. What matters is: does it work as intended? Does it achieve the goal? Does it contribute to the goal of the project?

Coding can be done – often by anyone (human with relevant coding expertise) or anything (LLM with effective prompting) – for any purpose. The critical key is knowing what the purpose is of the project and keeping the coding heading in the direction of serving that purpose.

Tips for right-sizing the ask

  1. Consider using different chat windows for different purposes, rather than trying to do it all in one. Yes, context windows are getting bigger, but you’ll still likely benefit from giving different prompts in different windows (more on effective prompting below). Start with one window for getting started with setting up a project (e.g. how to get XCode on a Mac and start a project; what file structure to use for an app/project that will do XYZ; how to start a Jupyter notebook for doing data science with Python; etc.) and for brainstorming ideas to scope your project; then use a separate window for starting a series of coding sub-tasks (e.g. write code for the home page screen for your app; add a button that allows voice entry functionality; add in HealthKit permission functionality; etc.) that serve the big picture goal.
  2. Make a list for yourself of the steps needed to build a new piece of functionality for your project. If you know what the steps are, you can specifically ask the LLM for that. Again, use a separate window if you need to. For example, if you want to add in the ability to save data to HealthKit from your app, you may start a new chat window that asks the LLM generally how one adds HealthKit functionality to an app. It’ll describe the process: certain settings that need to be changed in XCode for the project; adding code that prompts the user with the correct permissions; and then code that actually does the saving/revising to HealthKit.

    Make your list (by yourself or with help), then you can go ask the LLM to do those things in your coding/task window for your specific project. You can go set the settings in XCode yourself, and skip to asking it for the task you need it to do, e.g. “write code to prompt the user with HealthKit permissions when button X is clicked”.

    (Sure, you can ask for help outlining steps in the same window where you’ve been prompting for coding sub-tasks; just be aware that the more you do this, the more quickly you’ll burn through your context window. Sometimes that’s ok, and you’ll get a feel for when to use a separate window as you gain more experience.)

  3. Pay attention as you go to how much code it can generate and when it falls short of an ask. This will help you improve the rate at which it fully completes future asks. I observe that when I don’t know – due to my lack of expertise – the right size of a task, it’s more prone to give me ½-⅔ of the code and solution and to need additional prompting after that. Sometimes I ask it to continue where it cut off; other times I start implementing/working with the bits of code (the first ⅔) it gave me, and make a mental or written note that it did not completely generate all steps/code for the functionality and that I need to come back. Part of why it is sometimes effective to get started with ⅔ of the code is that you’ll likely need to debug/test the first bit of code anyway. Sometimes when you paste in code it’s using methods that don’t match the version you’re targeting (e.g. functionality that is outdated as of iOS 15 when you’re targeting iOS 17 and newer) and it’ll flag a warning or block it from working until you fix it.

    Once you’ve debugged/tested as much as you can of the original ⅔ of code it gave you, you can prompt it to say “Ok, I’ve done X and Y. We were trying to (repeat initial instructions/prompt) – what are the remaining next steps? Please code that.” to go back and finish the remaining pieces of that functionality.

    (Note that saying “please code that” isn’t necessarily good prompt technique, see below).

    Again, much of this is paying attention to how the sub-task is getting done in service of the overall big picture goal of your project; or the chunk that you’ve been working on if you’re building new functionality. Keeping track with whatever method you prefer – in your head, a physical written list, a checklist digitally, or notes showing what you’ve done/not done – is helpful.

Most of the above I used for coding examples, but I follow the same general process when writing research papers, blog posts, research protocols, etc. My point is that this works for all types of projects that you’d work on with an LLM, whether the output generation intended is code or human-focused language that you’d write or speak.

But, whether coding or writing language, the other thing that makes a difference, in addition to right-sizing the task, is effective prompting. I’ve noticed that it has made the biggest difference in my projects for getting output that matches my expertise. Conversely, I have actually peer-reviewed papers for medical journals that do a horrifying job with prompting. You’ll hear people talk about “prompt engineering”, and this is what it is referring to: how do you engineer (write) a prompt to get the ideal response from the LLM?

Tips for effective prompting with an LLM

    1. Personas and roles can make a difference, both for you and for the LLM. What do I mean by this? Start your prompt by telling the LLM what perspective you want it to take. Without it, you’re going to make it guess what information and style of response you’re looking for. Here’s an example: if you ask it what causes cancer, it’s going to default to safety and give you a general-public answer about causes of cancer in very plain, lay language. Which may be fine. But if you’re looking to generate a better understanding of the causal mechanism of cancer; what is known; and what is not known, you will get better results if you prompt it with “You are an experienced medical oncologist” so it speaks from the generated perspective of that role. Similarly, you can tell it your role. Follow it with “Please describe the causal mechanisms of cancer and what is known and not known” and/or “I am also an experienced medical researcher, although not an oncologist” to help contextualize that you want a deeper, technical approach to the answer and not high-level plain language in the response.

      Compare and contrast when you prompt the following:

      A. “What causes cancer?”

      B. “You are an experienced medical oncologist. What causes cancer? How would you explain this differently in lay language to a patient, and how would you explain this to another doctor who is not an oncologist?”

      C. “You are an experienced medical oncologist. Please describe the causal mechanisms of cancer and what is known and not known. I am also an experienced medical researcher, although not an oncologist.”

      You’ll likely get different types of answers, with some overlap between A and the first part of answer B. Ditto for a tiny bit of overlap between the latter half of answer B and answer C.

      I do the same kind of prompting with technical projects where I want code. Often, I will say “You are an expert data scientist with experience writing code in Python for a Jupyter Notebook” or “You are an AI programming assistant with expertise in building iOS apps using XCode and SwiftUI”. Those will then be followed with a brief description of my project (more on why this is brief below) and the first task I’m giving it. (If you work with an LLM through an API rather than a chat interface, the same idea applies; see the first sketch after this list for what the persona and task look like as system and user messages.)

      The same also goes for writing-related tasks; the persona I give it and/or the role I reference for myself makes a sizable difference in getting the quality of the output to match the style and quality I was seeking in a response.

  2. Be specific. Saying “please code that” or “please write that” might work sometimes, but more often than not it will get a less effective output than if you provide a more specific prompt. I am a literal person, so this is something I think about a lot: I’m always parsing and mentally reviewing what people say to me, because my instinct is to take their words literally, and I have to think through the likelihood that those words were intended literally or whether there is context that should be used to filter those words to be less literal. Sometimes, you’ll be thinking about something and start talking to someone about it, and they have no idea what on earth you’re talking about, because the last part of your out-loud conversation with them was about a completely different topic!

    LLMs are like that confused conversational partner who doesn’t know what you’re thinking about. LLMs only know what you’ve last/recently told them (and they will ‘forget’ what you told them about a project more quickly than a human would). Remember the above tips about brainstorming and making a list of tasks for a project? Providing a description of the task along with the ask (e.g. we are doing X related to the purpose of achieving Y; please code X) will get you better output, more closely matching what you wanted, than saying “please code that”, where the LLM might code something else to achieve Y if you didn’t tell it you wanted to focus on X.

    I find this even more necessary with writing related projects. I often find I need to give it the persona “You are an expert medical researcher”, the project “we are writing a research paper for a medical journal”, the task “we need to write the methods section of the paper”, and a clear ask “please review the code and analyses and make an outline of the steps that we have completed in this process, with sufficient detail that we could later write a methods section of a research paper”. A follow up ask is then “please take this list and draft it into the methods section”. That process with all of that specific context gives better results than “write a methods section” or “write the methods” etc.

  3. Be willing to start over with a new window/chat. Sometimes the LLM can get itself lost in solving a sub-task and lose sight (via lost context window) of the big picture of a project, and you’ll find yourself having to repeat over and over again what you’re asking it to do. Don’t be afraid to cut your losses and start a new chat for a sub-task that you’ve been stuck on. You may be able to eventually come back to the same window as before, or the new window might become your new ‘home’ for the project…or sometimes a third, fourth, or fifth window will.
  4. Try, try again.
    I may hold the record for the longest-running bug that I (and the LLM) could. Not. Solve. This was so, so annoying. No users apparently noticed it, but I knew about it and it bugged me for months and months. Every few weeks I would go to an old window and also start a new window, describe the problem, paste the code in, and ask for help to solve it. I asked it to identify problems with the code; I asked it to explain the code and any unexpected/unintended functionality from it; I asked it what types of general things would be likely to cause that type of bug. It couldn’t find the problem. I couldn’t find the problem. Finally, one day, I did all of the above, but then also started pasting every single file from my project and asking if it was likely to include code that could be related to the problem. By forcing myself to review all my code files with this problem in mind, even though the files weren’t related at all to the file/bug… I finally spotted the problem myself. I pasted the code in, asked if it was a possibility that it was related to the problem, the LLM said yes, I tried a change and… voila! Bug solved on January 16 after plaguing me since November 8. (And it probably existed before then, but I didn’t have the functionality built until November 8, when I realized it was a problem.) I was beating myself up about it and posted to Twitter about finally solving the bug (but very much with the mindset of feeling very stupid about it). Someone replied and said “congrats! sounds like it was a tough one!”. Which I realized was a very kind framing, and one that I liked, because it was a tough one; and also I am doing a tough thing that no one else is doing and would not have been willing to try to do without an LLM to support me.

    Similarly, just this last week on Tuesday I spent about 3 hours working on a sub-task for a new project. It took 3 hours to do something that on a previous project took me about 40 minutes, so I was hyper-aware of the time mismatch and perceived that 3 hours was a long time to spend on the task. I vented to Scott quite a bit on Tuesday night, and he reminded me that sure, it took “3 hours”, but I did something in 3 hours that would otherwise take 3 years, because no one else would do (or is doing) the project that I’m working on. Then on Wednesday I spent an hour doing another part of the project, and on Thursday I whipped through another hour and a half of huge chunks of work that ended up being highly efficient and much faster than they would otherwise have been, in part because the “three hours” on Tuesday wasn’t just about the code; it was about organizing my thinking, scoping the project and research protocol, etc., and doing a huge portion of other work so that I could effectively prompt the LLM to do the sub-task (which probably did take closer to the ~40 minutes, similar to the prior project).

    All this to say: LLMs have become pair programmers and collaborators and writers that are helping me achieve tasks and projects that no one else in the world is working on yet. (It reminds me very much of my early work with DIYPS and OpenAPS where we did the work, quietly, and people eventually took notice and paid attention, albeit slower than we wished but years faster than had we not done that work. I’m doing the same thing in a new field/project space now.) Sometimes, the first attempt to delegate a sub-task doesn’t work. It may be because I haven’t organized my thinking enough, and the lack of ideal output shows that I have not prompted effectively yet. Sometimes I can quickly fix the prompt to be effective; but sometimes it highlights that my thinking is not yet clear; my ability to communicate the project/task/big picture is not yet sufficient; and the process of achieving the clarity of thinking and translating to the LLM takes time (e.g. “that took 3 hours when it should have taken 40 minutes”) but ultimately still moves me forward to solving the problem or achieving the tasks and sub-tasks that I wanted to do. Remember what I said at the beginning:

    Clear thinking + clear communication of ideas/request = effective prompting => effective code and other outputs

 

  5. Try it anyway.
    I am trying to get out of the habit of saying “I can’t do X”, like “I can’t code/program an iOS app”…because now I can. I’ve in fact built and shipped/launched/made available multiple iOS apps (check out Carb Pilot if you’re interested in macronutrient estimates for any reason; you can customize so you only see the one(s) you care about; or if you have EPI, check out PERT Pilot, which is the world’s first and only app for tracking pancreatic enzyme replacement therapy and has the same AI feature for generating macronutrient estimates to aid in adjusting enzyme dosing for EPI.) I’ve also made really cool, 100% custom-to-me niche apps to serve a personal purpose that save me tons of time and energy. I can do those things, because I tried. I flopped a bunch along the way – it took me several hours to solve a simple iOS programming error related to home screen navigation in my first few apps – but in the process I learned how to do those things and now I can build apps. I’ve coded and developed for OpenAPS and other open source projects, including a tool for data conversion that no one else in the world had built. Yet, my brain still tries to tell me I can’t code/program/etc (and to be fair, humans try to tell me that sometimes, too).

    I bring that up to contextualize that I’m working on – and I wish others would work on too – trying to address the reflexive thoughts of what we can and can’t do, based on prior knowledge. The world is different now, and tools like LLMs make it possible to learn new things and build new projects that maybe we didn’t have time/energy to do before (not that we couldn’t). The bar to entry and the bar to starting and trying are so much lower than they were even a year ago. It really comes down to willingness to try and see, which I recognize is hard: I have those thought patterns too of “I can’t do X”, but I’m trying to notice when I have those patterns and shift my thinking to “I used to not be able to do X; I wonder if it is possible to work with an LLM to do part of X, or learn how to do Y, so that I could try to do X”.

    A recent real example for me is power calculations and sample size estimates for future clinical trials. That’s something I can’t do; it requires a statistician and specialized software and expertise.

    Or…does it?

    I asked my LLM how power calculations are done. It explained. I asked if it was possible to do it using Python code in a Jupyter notebook. I asked what information would be needed to do so. It walked me through the decisions I needed to make about power and significance, and highlighted variables I needed to define/collect to put into the calculation. I had generated the data from a previous study, so I had all the pieces (variables) I needed. I asked it to write code for me to run in a Jupyter notebook, and it did. I tweaked the code, input my variables, ran it… and got the result. I had run a power calculation! (Shocked face here.) (A rough sketch of what that kind of code can look like is included after this list.) But then I got imposter syndrome again and reached out to a statistician who I had previously worked with on a research project. I shared my code and asked if that was the correct or an acceptable approach and if I was interpreting it correctly. His response? It was correct, and “I couldn’t have done it better myself”.

    (I’m still shocked about this).

    He also kindly took my variables and put them in the specialized software he uses and confirmed that the output matched what my code did, then pointed out something that taught me something for future projects where things might be different (where the data is/isn’t normally distributed), although it didn’t influence the output of my calculation for this project.

    What I learned from this was a) this statistician is amazing (which I already knew from working with him in the past) and kind to support my learning like this; b) I can do pieces of projects that I previously thought were far beyond my expertise; c) the blocker is truly in my head, and the more we break out of or identify the patterns stopping us from trying, the farther we will get.

    “Try it anyway” also refers to trying things over time. The LLMs are improving every few months and often have new capabilities that they didn’t have before. Much of my work is done with GPT-4, and the more nuanced, advanced technical tasks are way more efficient than when using GPT-3.5. That being said, some tasks can absolutely be done with GPT-3.5-level AI. Doing something now and not quite figuring it out could be something that you sort out in a few weeks/months (see above about my three-month bug); it could be something that is easier to do once you advance your thinking; or it could be more efficiently done with the next model of the LLM you’re working with.

  6. Test whether custom instructions help. Be aware, though, that sometimes too many instructions can conflict and also take up some of your context window. Plus, if you forget what instructions you gave it, you might get seemingly unexpected responses in future chats. (You can always change the custom instructions and/or turn them on and off.)
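Two concrete sketches to close out these tips. First, if you work with an LLM through its API rather than a chat window, the persona and role advice from tip #1 maps directly onto the system and user messages. Here is a rough sketch using the OpenAI Python library; the project description and task are made up for illustration, and you would swap in your own:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# The persona goes in the system message; your own role plus a specific,
# right-sized ask go in the user message (tips #1 and #2 above).
response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {
            "role": "system",
            "content": "You are an expert data scientist with experience writing "
                       "code in Python for a Jupyter Notebook.",
        },
        {
            "role": "user",
            "content": "I am a researcher analyzing symptom-tracking data. We are building "
                       "a notebook to visualize symptom frequency over time. First task: "
                       "write code to load a CSV with columns 'date' and 'symptom' and "
                       "plot a weekly count of each symptom.",
        },
    ],
)
print(response.choices[0].message.content)
```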
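Second, for the power calculation story in tip #5: the code below is not the code from my project, just a minimal sketch of what a power/sample-size calculation in a Jupyter notebook can look like, assuming a two-sample t-test design and using statsmodels. The effect size, alpha, and power values are placeholders you would replace based on your own data and design decisions (and, ideally, a statistician's review).

```python
from statsmodels.stats.power import TTestIndPower

# Placeholder inputs – replace with values from your own pilot data and design decisions.
effect_size = 0.5   # standardized difference between groups (Cohen's d)
alpha = 0.05        # significance level
power = 0.80        # desired probability of detecting the effect if it exists

analysis = TTestIndPower()
n_per_group = analysis.solve_power(effect_size=effect_size, alpha=alpha, power=power)
print(f"Approximate sample size needed per group: {n_per_group:.1f}")
```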

I’m hoping this helps give people confidence or context to try things with LLMs that they were not willing to try before; or to help get in the habit of remembering to try things with LLMs; and to get the best possible output for the project that they’re working on.

Remember:

  • Right-size the task by making a clear ask.
  • You can use different chat windows for different levels of the same project.
  • Use a list to help you, the human, keep track of all the pieces that contribute to the bigger picture of the project.
  • Try giving the LLM a persona for an ask; and test whether you also need to assign yourself a persona or not for a particular type of request.
  • Be specific; think of the LLM as a conversational partner that can’t read your mind.
  • Don’t be afraid to start over with a new context window/chat.
  • Things that were hard a year ago might be easier with an LLM; you should try again.
  • You can do more, partnering with an LLM, than you can on your own, and likely can do things you didn’t realize were possible for you to do!

Clear thinking + clear communication of ideas/request = effective prompting => effective code and other outputs

Have any tips to help others get more effective output from LLMs? I’d love to hear them, please comment below and share your tips as well!

Tips for prompting LLMs like ChatGPT, written by Dana M. Lewis and available from DIYPS.org

Personalized Story Prompts for Kids Books and Early Reader Books

For the holidays this year, I decided to try my hand at creating another set of custom, illustrated stories for my nieces and nephews (and bonus nieces and nephews). I have a few that are very advanced readers and/or too old for this, but I ended up with a list of 8 kids in my life from not-yet-reading to beginning reading to early 2nd grade reading level. I wanted to write stories that would appeal to each kid, include them as the main character, be appropriate for their reading (or read-to) level, and also include some of their interests.

Their interests were varied, which made it quite a challenge! Here’s the list I worked from:

  • 2nd grade reading level, Minecraft
  • early 2nd grade reading level: soccer, stunt biking, parkour, ninja, Minecraft
  • beginning reading level: soccer, stunt biking, ninja, Spiderman
  • beginning reading level: Peppa Pig, moko jumbies
  • (read to younger child): Minnie Mouse, Peppa Pig, Bluey, and tea parties
  • (read to younger child): Bluey, Olaf, Elsa, & Anna
  • (read to younger child): cars/vehicles

I enlisted ChatGPT, an LLM, and ended up creating stories for each kid, matching their grade levels and interests, then illustrating them.

But illustrating them was (still) actually a challenge: trying to create images with similar characters that would be on every page of the story, and similar enough throughout that they read as the “same” character.

Illustration challenges and how I got successful prompts:

My first pass on images wasn’t very good. I could get basic details to repeat, but often had images that looked like this – slightly different style and character throughout:

8 different illustrations in slightly different styles and almost different characters of a girl with blonde, shoulder length hair and a purple dress in an enchanted forest

Different styles throughout make it look like a different character, even though it’s the same character in the whole story. This was a book to read to a <3-year-old, though, and I thought she wouldn’t mind the different styles, so I left it as is. I also battled with adding, for personal use, the characters that most interested her: Peppa Pig and Minnie Mouse.

Interestingly, if I described with a prompt to illustrate a scene including a character “inspired by, but distinct from, Peppa Pig”…it essentially drew Peppa Pig or a character from it. No problems.

But if you gave the same prompt “inspired by, but distinct from, Minnie Mouse”? No go. No image at all: ChatGPT would block it for copyright reasons and wouldn’t draw any of the image. I riffed a bunch of times and finally was able to prompt a good enough mouse with round ears and a red dress with white polka dots. I had to ultimately illustrate the mouse character alone with the human character, because if I tried to get a Peppa-inspired character and then separately a mouse character, it wanted to draw the mouse with a pig-style face in the correct dress! I could never work around that effectively for the time I had available (and all the other books I was trying to illustrate!) so I stopped with what I had.

This was true for other characters, too, with copyright issues. It won’t draw anything from or like Bluey – or Frozen, when prompted. But I could get it to draw “an ethereal but warm, tall female adult with icy blonde hair, blue eyes, in an icy blue dress”, which you can see in the fourth image on the top row here:

Another series of illustrations with slightly different characters but closer in style throughout. there's one image showing a Frozen-inspired female character that I got by not prompting with Frozen.

I also managed to get slightly closer matching characters throughout this, but still quite a bit of variability. Again, for a young being-read-to-child, it was good enough for my purposes. (I never could get it to draw a Bluey-like character, even when I stopped referencing Bluey by name and described the shape and character, so I gave up on that.)

I tried a variety of prompts and series of prompts for each book. Sometimes, I would give it the story and prompt it with each page’s text, asking for an illustration and to keep it in the same style and the same character as the previous image. That didn’t work well, even when I told it in every prompt to use the same style and character plus the actual image prompt. I then tried to create a “custom” GPT, with the GPT’s instructions to use the same style throughout. That started to give me slightly better results, but I still had to remind it constantly to use the same style.

I also played around with taking an image that I liked, starting a new chat, and asking it to describe that image. Then I’d use that prompt to create a new prompt, describing the character in the same way. That started to get me slightly better results, especially when I did so using the custom GPT I had designed (you can try using this GPT here). I started to get better, more consistent characters:

A series of images of a young cartoon-drawn boy with wavy blonde hair riding a bike through an enchanted forest.

 

A series of drawings of a cartoon-like character with spiky blonde hair, blue eyes, and various outfits including a ninja costume

Those two had some variability, but were much improved beyond the first several books. They are for the beginning and second-grade reading levels, too, so those are older kids with more attention to detail, and it was worth the extra effort to try to get their books to be more consistent.

The last one with the ninja and ninja outfits is another one that ran into copyright issues. I tried to have it illustrate a character inspired by, but distinct from, Spiderman – nope, no illustration at all. I asked it to illustrate the first picture in the soccer park with a spider strand looping in the corner of the image, like Spiderman had swung by but was out of sight and not pictured – NOPE. You can’t even get an image that has Spiderman in the prompt at all, even if Spiderman isn’t in the picture! (I gave up and moved on without illustrating spiderwebs, even though Spiderman is described in the story.)

My other favorite and pretty consistent one was two more of the early reader ones:

A series of images showing a young cartoon boy with wavy brown hair at a car fair

The hard part from that book was actually trying to do the cars consistently, rather than the human character. The human character was fairly consistent (although in different outfits, despite clear outfit prompts – argh) throughout, because I had learned from the previous images and prompt processes and used the Custom GPT, but the cars varied more. But, for a younger reader, hopefully that doesn’t matter.

The other, more-consistent character one for an early reader had some variations in style but did a better job matching the character throughout even when the style changed.

Another example with a mostly consistent young cartoon drawn girl with whispy blonde pigtails and big blue eyes, plus moko jumbies and peppa pig

How I wrote each story:

I also found some processes for building better stories. Again, see the above list of very varied interests for each kid. Some prompts were straightforward (Minecraft) and others were about really different characters or activities (moko jumbies and Peppa Pig? Minnie Mouse and Peppa Pig? soccer ninja and Minecraft?).

What I ended up doing for each:

  1. In a new ChatGPT window (not the custom GPT for illustrating): Describe the reading level; the name of the character(s); and the interests. Ask it to brainstorm story ideas based on these interests.
  2. It usually gave 3 story ideas in a few sentences each, including a title. Sometimes, I would pick one and move on. Other times, I would take one of the ideas and tweak it a bit and ask for more ideas based on that. Or, I’d have it try again generally, asking for 3 more ideas.
  3. Once I had an idea that I liked, I would ask it to outline the story, based on the chosen story idea and the grade level we were targeting. Sometimes I would tweak the title and other times I would take the title as-is.
  4. Once it had the outline, I could have it write the entire story (especially for the younger, beginner-reader or read-to levels that are so short), but for the “chapter” books at the early-2nd and 2nd grade reading levels, I had it give me a chapter at a time, based on the outline. As each chapter was generated, I edited and tweaked it and took the text over to where I was building the book. Sometimes, I would re-write the whole chapter myself, then give it back the chapter text and ask it to write the next one. If you don’t give it back, it won’t know what the chapter ended up as, so this is an important step whenever you’re making more than minor sentence-construction changes. (A rough scripted sketch of this loop appears right after this list.)
  5. Because I know my audience(s) well, I tweaked the stories heavily as I went, incorporating their interests. For example, in the second set of images I showed above, there’s a dancing dog. It’s their actual dog, with the dog named in the story along with them as characters. Or in the chapter book for the character with the bike, the story described running up a big mountain on a quest and being tired. I tossed in an Aunt-Dana reference, reminding the character about run-walking as a way to keep moving forward and cover the distance that needs to be covered, without stopping. I also tweaked the stories to include character traits (like kindness) that each child has, and/or behaviors that their family prioritizes.
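Here is the rough scripted sketch mentioned in step 4: the brainstorm → outline → chapter-at-a-time loop, written against the OpenAI Python library instead of the ChatGPT app I actually used. The model name, character name, interests, reading level, and chapter count are all illustrative placeholders, not my exact prompts.

```python
# A rough sketch of the brainstorm -> outline -> chapter-at-a-time loop described above.
# Everything named here (model, character, interests) is an illustrative assumption.
from openai import OpenAI

client = OpenAI()   # assumes OPENAI_API_KEY is set in your environment
MODEL = "gpt-4o"    # assumption: any capable chat model would work


def ask(history, prompt):
    """Append a user prompt, send the whole conversation so far, and return the reply text."""
    history.append({"role": "user", "content": prompt})
    resp = client.chat.completions.create(model=MODEL, messages=history)
    reply = resp.choices[0].message.content
    history.append({"role": "assistant", "content": reply})
    return reply


history = []

# Steps 1-2: brainstorm ideas, then pick (or tweak) one.
print(ask(history, "Brainstorm 3 story ideas, a few sentences each with a title, for an "
                   "early-2nd-grade chapter book about a kid named Alex who loves soccer, "
                   "ninjas, and Minecraft."))

# Step 3: outline the chosen idea at the target reading level.
print(ask(history, "Outline story idea #2 as 5 short chapters for that reading level."))

# Step 4: one chapter at a time, feeding the edited version back in before the next chapter,
# so the model knows what each chapter actually ended up being.
for chapter in range(1, 6):
    draft = ask(history, f"Write chapter {chapter}, based on the outline.")
    edited = input(f"Paste your edited chapter {chapter} (or press Enter to keep the draft): ") or draft
    history.append({"role": "user",
                    "content": f"Here is the final text of chapter {chapter}:\n{edited}"})
```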

I described the image process first and the story writing second in this blog post, but for each book I actually did the opposite. I would write (brainstorm, outline, write, edit, write) the entire book, then go start a new chat window (eventually solely using my custom GPT) and ask for illustrations. Sometimes, I would give it a page of the story’s text and ask it to illustrate it. That’s helpful when you don’t know what to illustrate, and it did fairly well for some of the images (especially the Minecraft-inspired ones!). Ultimately, though, I would often get an image, ask what the prompt was for that image, tweak the prompt, and give it back to better match the story or what I wanted to illustrate. Once I was regularly asking for the image prompts, I realized that repeating the character details for every image helped with consistency. So I would include those ad-nauseam details myself in a longer prompt, which resulted in better images throughout, and I spent more of my energy deciding what to illustrate to best match the story.
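To make the “repeat the character details in every prompt” idea concrete, here’s a minimal sketch assuming the OpenAI Python library and DALL·E 3. The character description and page scenes are made-up examples; in practice I did this through the ChatGPT app and my custom GPT rather than the API.

```python
# A minimal sketch: prepend the same full character description to every page's
# scene so the character stays (more) consistent across illustrations.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in your environment

CHARACTER = ("a young cartoon-drawn boy with wavy blonde hair and blue eyes, wearing a "
             "green t-shirt and jeans, in a soft watercolor storybook style")

scenes = [
    "riding his bike into an enchanted forest at sunrise",
    "stopping at a glowing stream to talk to a friendly fox",
    # ...one entry per page of the story
]

for page, scene in enumerate(scenes, start=1):
    result = client.images.generate(
        model="dall-e-3",
        prompt=(f"Children's book illustration of {CHARACTER}, {scene}. "
                "Keep the same character and style as the other illustrations. "
                "No text in the image."),
        size="1024x1024",
        n=1,
    )
    print(f"Page {page}: {result.data[0].url}")
```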

All in all, I made 7 custom books (and 8 copies: I copied one of the Minecraft books and converted it to a differently named character for a friend’s child!). Between writing, editing, and illustrating, I probably spent an average of one hour per book! That’s a lot of time, but it got more efficient as I went, and in some cases the hour included completely starting over and re-working the images in the book for consistency, compared to the version I had before. The next books I create will probably take less time, both because I’ve figured out the above processes and because, hopefully, DALL·E and other illustration tools will get better at illustrating the same character consistently across the multiple prompts needed to illustrate a story.

How other people can use this to create stories – and why:

I have been so excited about this project. I love, love, love to read, and I love reading with my nieces and nephews (and bonus kids in my life) and finding books that match their interests and help spark or maintain their love of reading. That’s why I did this project, and I have been bursting for WEEKS waiting to be able to give everyone their books! I wanted it to be a surprise for their parents, too, which meant that I couldn’t tell two-thirds of my closest circles about my cool project.

One of my friends without young kids that I finally told about my project loved the idea: she works as staff at an elementary school, supporting some nonverbal students who are working on their reading skills. She thought it would be cool to make a book for one student in particular, and described some of her interests: violins, drums, raspberries, and unicorns. I was in the car when she told me this, and I was able to follow the same process as above in the mobile ChatGPT app: list the interests, then ask for a brainstorm of story ideas for a beginning-reading-level book with some repetitive text that uses the interests to aid in reading. It created a story about a unicorn who gathers other animals in the forest to play in an orchestra (with drums and violins) and eat raspberries. I had it illustrate the story, and it did so (with slightly different unicorns throughout). I only had to have it re-draw one image, because it put text in one of the last images that didn’t need to be there.

Illustrations from a quick story about a unicorn, drums, violin, and an orchestra, plus raspberries

It was quick and easy, and my friend and her student LOVED it, and the other teachers and staff at the school are now working on personalized books for a lot of other students to help them with reading skills!

It really is an efficient and relatively easy way to generate personalized content; it can do so at different reading levels (especially when a teacher or someone who knows the student can tweak it to better match the reading level or sounds and words they are working on next); and you can generate pretty good matching illustrations too.

The hardest part is consistent characters; but when you don’t need consistency throughout a whole book, the time it takes drops to five or so minutes to write, tweak, and illustrate an entire story.

Illustrations require a paid ChatGPT account, but if you have one and want to try out the custom GPT I built for (slightly more consistent) illustrations of stories, you can check it out here.

Custom stories: prompting and effective illustrating with ChatGPT, a blog post by Dana M. Lewis from DIYPS.org

What Do You See When You See (Or Think Of) Diabetes?

What do you see when you see (or think of) diabetes?

In my house, I see small piles of low treatments (for hypoglycemia) in every place that I hang out. On my desk next to my computer. In my bedside table. On the counter next to the door where I grab them before heading out for a run or a walk. On the edge of the bathtub in my shower, because low blood sugars happen everywhere.

Sometimes, one of my nephews spots them in a translucent pocket on my shorts. His brain sees candy at first, not a medical treatment. Which is fine – he’s young. He’s learning that for Aunt Dana, they’re not “candy” or a “treat” – they’re a medical treatment.

All of the nieces and nephews have learned or are learning that Aunt Dana has “robot parts”, which is how they see my pump clipped to my pocket or waist band or the hard lump (CGM sensor) they feel or see on my arm.

What I hope people see, though, is that diabetes is not a death sentence. Thanks to improvements in insulin, insulin delivery, and blood glucose measuring, it’s no longer visibly tied to possible complications of diabetes, like amputations, kidney dialysis, or loss of vision. That is what I saw when I was diagnosed with diabetes in 2002, and what was presented to me.

I hope instead that people see people with diabetes like me living our lives: running 82-mile ultramarathons (for those of us who wish to do that), experiencing pregnancy (for those who wish to do that), achieving our career goals, living life in whatever ways we want to live our lives. Just like everyone else.

It’s worth noting that when typing this, autocorrect in my first sentence suggested “treat” instead of “treatment”.

That’s how computers “see” diabetes, too: sugar and carbs treated as equivalent to diabetes. Despite the fact that medical research shows diabetes is a complicated combination of genetics, immune system shenanigans (my words), and numerous other factors not in a person’s control, humans haven’t gotten that message. People are still stigmatized and joked about.

So computers learn that. And that’s what they see.

When I was testing Stable Diffusion (an open source AI tool for generating images) recently, I learned about a site, Lexica, that shows you what other people have generated with similar keywords. I thought it would be interesting to get ideas for better images to visualize concepts in posts about diabetes, so I searched “diabetes”.

A screenshot of search results in Lexica for the term "diabetes". Primarily it is images of people portrayed as very overweight and many images of a lot of food.

I should’ve known better. Humans say and think “diabetes” in response to seeing pictures of carbohydrates, so that’s what computers learn.

AI doesn’t know any better because humans haven’t taught themselves any better.

Sadly, “insulin pump” as a key word is disheartening in a different way.

A screenshot of image results from Lexica for the term "insulin pump", which mostly shows a mix of devices that look like blood glucose meters or pulse oximeters.

There are so few existing visuals and images of people with insulin pumps that the images generated by AI are a mix of weird hybrid old-school computer components, blood glucose monitors, and other medical devices.

“Hypoglycemia” mostly generates cartoons in foreign languages or made up languages that I’m guessing are jokes by people without diabetes about having low blood sugar and using it as an excuse for various things. “Hyperglycemia” brings a mix of the hypoglycemia-style cartoons and the diabetes-style images of carbs and how the AI thinks people with diabetes all look.

I’ve noticed this with AI-writing tools, too. AI is good at completing your sentence or writing a few sentences based on well known concepts and topics that already exist today. It’s not yet good at helping you write content about new concepts or building on existing content.

It’s trained on the content of today and the past, which means all of the biases, stereotypes, and stigmatizing content that aren’t good today are also extrapolated into our future with AI.

I don’t have all the answers or solutions (I wish I did), but I want to flag this as a problem. We can’t expect AI to do better trained on what we have and do today, because what we do today (stigmatize, stereotype, and harm people living with chronic diseases) is not ok and not good enough.

We need to change today and train AI with different inputs in order to get different outputs.

That starts with us changing our behavior today. As I wrote a few days ago, please speak up when you see chronic diseases being used as a “joke”, when you see people being stereotyped, or when you see racism occurring.

It’s hard, it’s uncomfortable – both to speak up, and to be corrected.

I’ve been corrected before, on verbal patterns and phrases I learned from society that I didn’t realize were harmful and stigmatizing to other people.

I’m working on learning to say “I’m sorry, you’re right, and let me learn from this” and trying to do better in the future, living up to my statement that I’m going to learn from that moment.

It can absolutely be done. It desperately needs to be done, by all of us.

We can course-correct, whether it’s in a one-on-one conversation, something we see in a small social network on social media, or even in a large room at a conference.

I still remember, and greatly appreciate, what happened when I flagged a diabetes joke made on stage at a conference over four years ago. When the joke was told, I noted that half the room laughed; and that it wasn’t ok. So I spoke up on Twitter, because I was live tweeting from the conference. I didn’t think much would come of it. But it did. Amazingly, it did.

John Wilbanks saw my tweet, realized it wasn’t ok, and instead of tweeting support or agreement (which also would have been great), took an amazing, colossally huge and unexpected step. He literally got up from his seat, went to the microphone, and interrupted the panel that had moved on to other topics. He called out the fact that diabetes was used as a joke a few minutes prior and that it wasn’t ok.

He put on a master class for how to speak up and how to use his power to intervene.

It was incredibly powerful because although the “joke” had gone over most people’s heads and they didn’t think it was a big deal, he brought attention to the fact that it had happened, was hurtful and harmful, and created a moment for reflection for the entire room of hundreds of people.

We need more of this.

When someone flags that they are being stereotyped, stigmatized, being discriminated against – we need to speak up. We need to support them.

It matters not just for today (although it matters incredibly much for today, too) but also for the future.

AI (artificial intelligence) learns from what we teach it, much like our children learn from what we teach and show them. I don’t have kids, but I know what I do and how I behave matters to my nieces and nephews and how they see the future.

We need to understand that AI is learning from what we are doing today, and what we do today matters. It should be enough to want not to be racist, discriminatory, stereotyping, and harmful to other people today. But it’s not enough.

The loudest voices are often the ones establishing “normal” for our culture, our children, and the AI systems that may be running much of the world before our children graduate college. We need to speak up to help shape the conversation today, because what we are doing today is teaching our children and our technology, and it is what we’ll get in the future, ten-fold.

And I want the future to look different and be better, for all of us.

What do you see when you think of diabetes? And what are we teaching our children and our technology?