AI’s Role in Healthcare – Episode 1: New Opportunities and Ethical Challenges – with Max Gordon and Cyrus Brodén

Two of the MAIVAN Mavens share their insights

Swedish original: AI:ns roll i vården – Avsnitt 1: Nya möjligheter och etiska utmaningar

How is generative AI used today, and how might it impact patient care in the future? Hear experts’ thoughts on technology as a tool to enhance patient benefits.

In 2023, Max Gordon, lecturer and senior orthopedic consultant, as well as the head of operations at the Clinical AI Research Lab at Danderyd Hospital, was named ‘AI Swede of the Year’ for his groundbreaking research in healthcare. Cyrus Brodén, a physician, researcher, and head of innovation in orthopedics at Uppsala University Hospital, has specialized in AI for medical administration and diagnostic tools. They now share their insights on new opportunities and ethical challenges in the miniseries AI’s Role in Healthcare.

© Johnson & Johnson jnj.gallery.video (in Swedish)

English transcript:

“We need to achieve more with the resources we have,
and AI is an innovation that can be of great help.”

“My name is Max Gordon. I’m an orthopedic surgeon. I’m also a researcher at the Karolinska Institute and the ‘AI Swede of the Year.’ I began researching AI in 2014, so I’ve been involved in it for a few years, and I’m also the head of operations for the clinical AI lab we have at Danderyd Hospital. So, a fairly broad range of work across many fields.

My name is Cyrus Brodén. I am a doctor, researcher, and head of innovation at the orthopedics department at Uppsala University Hospital. I work clinically every day in orthopedics, and I also conduct research focused on technical solutions to healthcare challenges. Artificial intelligence, at its core, is a very broad concept. Today, we’ve narrowed it somewhat, and we talk more about advanced statistical models that involve heavy computation. These are simply models that can process a wide variety of data types. Previously, we could only analyze, say, blood samples, age, and gender. Today, we can input an entire X-ray image and analyze it. That is much richer than what we could do before.

There’s no established definition of what AI really is, but by my definition, it refers to tasks that require human intelligence but that machines can perform, such as reasoning or analyzing images. Some tasks have become quite intuitive for our computers to perform, and we no longer consider them AI. When we talk about AI, it almost sounds like we’re discussing futuristic things, tasks that have been difficult to accomplish until now. But we use AI in our daily lives all the time: on the internet, when you google something or watch YouTube. And now we see a wave where AI is beginning to be used in healthcare, among other areas.

I believe collaboration between academia, industry, and hospitals is extremely important. I think we need each other. Looking at generative and language models, for example, which we now want to use in healthcare, these are often developed by industry, by a few companies that invest hundreds of millions of kronor in developing them. But then we have to think about how we can use these language models to solve our problems. That’s where hospitals and clinics come in, fine-tuning these language models with our patient data or Swedish medical data. This is where we step in. Academia, of course, pushes AI development forward, algorithms and so on, which is important. Academia is also essential for evaluating and stress-testing these tools. How well do these language models perform? What biases do they contain?

The visions for the future—if you project five to ten years ahead—are, of course, grand in nature. But I think we’ll work entirely differently with text. We won’t need to read through a lot of notes but will have them summarized for us. We’ll be able to get our conversations summarized directly into the medical record. That technology already exists today but will become more widespread. We’ll be able to conduct deeper analyses, see which patients respond to which medications. But we’ll also have many new medications because we see this technology enables us to narrow down our search. Previously, when searching through a million different molecules, it was hard to know where to start. But now, we can reduce that to maybe 400, making it easier to review and find new ones. This opens up immense opportunities in medicine for new treatments, better workflows, and the ability to communicate with patients through these tools. We’ll be able to monitor patients and provide a better overall care experience. One could feed in the entire complexity of a patient’s treatment.

In the future, we’ll develop the third wave of AI, called perceptual AI, where we use a number of sensors to gather data from patients and then fuse that data to gain an understanding of how the patient is feeling. Based on that, diagnostic assessments can be made, and treatment options can be set up.

We can finally get a more objective assessment of an X-ray or a photograph. For instance, if there’s a classification for joint changes in the hand, that will be the same whether you’re in Stockholm, Östersund, or Sundsvall, as it’ll use the same algorithm, eliminating discrepancies. We’ll also be able to monitor patients over time. Work is underway on digital twins, where all lab samples and everything taken can be fed into an algorithm, continuously adapting to your trends and adjusting medications accordingly. This is also on the way. I believe that all this data collection that currently burdens healthcare staff when we need to enter it into registries—all of that will be fully automated because there’s no need for a human to read a note and enter it into the next system. Much of that will change.

One thing to remember, though, is that there are risks with an AI trained to predict how well you’ll do if you take a particular medication: it’s not very good at handling new medications. If a new medication appears that wasn’t part of the training data, the AI has no reference for how to handle it. So we’ll constantly need to work on these models, continually developing and adapting them. It’s not a finished product that you adopt once and that then stays the same for 10–20 years; we must be vigilant about what’s changing. Have our patients suddenly started riding electric scooters, and how does that affect our metrics? If we measure mobility and the patient uses a scooter, that measure no longer reflects their hip or knee pain, because they can still travel far. These are things we must be cautious about, not just new medications but the patient’s whole environment.

The ability to summarize information from patient meetings and assist with medical record documentation is a way to streamline care and primarily help us be more present in conversations. Often, when we meet with a patient today, we sit at a computer, and the eye contact is often with the computer and not always with the patient. Hopefully, AI will paradoxically make us a bit more human, allowing us to focus on being in the moment with the patient and more empathetic, comforting in our conversations with patients.

The doctor-patient connection is one of the most exciting areas for generative AI, as it can take on roles. You could ask it to explain things as if to a 7-year-old or a 12-year-old, adjusting the information to the appropriate level. It has the ability to understand what you want to say. However, what should be said is still something we need to decide: what is important in this particular situation? Sometimes that can be difficult. I think we’ll find it much easier to reach our patients. When we opened up our records, everyone could read their notes, but I wonder how many actually understand their records. What does this really mean? So, it could act as a translator between what we write and what patients need to hear. Patients could also use it to find out what their symptoms are related to. There are tools that can work through symptoms and combine everything to give a better assessment of the issue before the patient even sees a doctor, preparing them better.

AI can truly tailor treatments to the patient and account for many different factors, including gender, ethnicity, age—all those things that we’ve struggled with, where some groups are forgotten. But with that said, when training the model, these groups must be included from the start. Otherwise, it’s hard to adjust for it, as we’ve seen in generative contexts that it often takes on the form it was trained in. For example, CEO positions were often depicted as white men. This can be corrected, but it’s easy to overcorrect. The best approach is to ensure the data includes CEOs who aren’t just white men but a range of individuals. The same applies to medicine. Make sure we cover the full spectrum. Toward the edges, AI becomes less reliable, so we should remain vigilant.

The language models that perform best are created by industry, but it’s unclear which data they were trained on. They’ve pulled data from the internet. It’s a bit unclear what data, as they don’t reveal much. The results generated reflect that internet data, so there are certainly risks of bias here. Researchers and clinicians should critically examine the results, conduct studies on biases—do we get different answers if we change the gender in a medical note? Do we get different answers if we include certain diseases? What do these language models do with a rare disease they might not be familiar with in Europe or the West—do they ignore it? These are questions that researchers can explore. If we train these AI algorithms with representative data without human biases, we can compare our assessments with AI’s, potentially becoming aware of our own biases. This could support a more equal healthcare system for all.

Technology in education and application is challenging, as it’s a moving target. However, technology is becoming easier to use. Today, we hardly need a manual when we get a new phone—it’s intuitive enough. Generative AI, which was revolutionary because anyone could write a text and interact with it, is accessible to everyone. I think we should expose students to it. They should understand the limits and problems that arise in extreme cases, where AI sometimes makes faulty assumptions. It’s very useful but must be monitored, knowing that we’re in the driver’s seat.

Universities must prioritize AI as part of education. When I talk about AI to my colleagues, many feel intimidated—it sounds technical, it sounds difficult. It’s important to have people who can explain things simply. Courses are essential, but we must explain what’s relevant. A clinician or doctor doesn’t need to know the exact model structure but should understand the principles. Education should cater to the students. When I teach medical students, I use language models to generate a patient case, like an ACL injury, and then ask the students if this represents the typical patient group. I introduce them to the AI tool and encourage critical thinking, questioning whether everything the tool produces is entirely accurate.”
