Podcast: Exploring the promise of AI in medicine
Transcript
Dr. Blair Bigham: I'm Blair Bigham.
Dr. Mojola Omole: I'm Mojola Omole. This is a CMAJ Podcast. This episode, we're going to look at the increasing role of AI in healthcare delivery. We're going to start our discussion with a practice paper that was published in CMAJ. It looked at the use of AI in the detection and diagnosis of colorectal cancer through the detection of polyps.
Dr. Blair Bigham: So Jola, you're a colorectal endoscopist. You put cameras into colons.
Dr. Mojola Omole: Yes, I do.
Dr. Blair Bigham: I don't think colorectal endoscopist is the right term for that, but you're a general surgeon who does colonoscopies.
Dr. Mojola Omole: Yes, I do.
Dr. Blair Bigham: Do you want to give us a summary of this paper?
Dr. Mojola Omole: Yeah, for sure. It was a really interesting paper because they were looking at polyp detection, and basically their goal was: can we get to the point where we can look at the polyp using this AI and say, "You know what? It's less than five millimeters. It's most likely not going to be significant." Then we can just throw it into the trash instead of sending it to pathology.
Dr. Blair Bigham: So for listeners like me who don't put cameras into people's colons, what's the process like? What do you do? You have a camera up there. You're looking at your screen. What goes through your mind?
Dr. Mojola Omole: When I'm looking at a polyp?
Dr. Blair Bigham: Yeah.
Dr. Mojola Omole: "Oh, this looks like a polyp." Then everyone tilts their head sideways in the room. "Is it a polyp? Is it big enough? Let's just take it anyway." So those are for the small ones, and then the obvious ones are really, "Oh, okay. There's a polyp," and you remove it, so this is really talking about those very small polyps that the human eye misses. We know that every 1% increase in adenoma detection rate is also associated with a 3% decrease in colon cancer mortality, so having this technology perfected is going to be really a game-changer in terms of colorectal cancer detection and prevention.
Dr. Blair Bigham: We're using the colorectal cancer detection paper here as an anecdote for the broader question of how artificial intelligence can augment human performance and perhaps help our struggling healthcare system. So after we speak to the author of this paper, we're going to get into a really interesting conversation, I suspect, with a futurist. I've never spoken to a futurist before.
Dr. Mojola Omole: We're going to be speaking to Zayna Khayat about where AI is going to have the biggest impact in medicine, so that's coming up.
Dr. Blair Bigham: Dr. Michael Byrne is a co-author of a practice paper in CMAJ entitled "Artificial intelligence in colorectal cancer screening." Mike, thanks for joining us.
Dr. Michael Byrne: Thank you very much, Blair, Jola.
Dr. Blair Bigham: Let's jump right into things. Your paper examines two uses of artificial intelligence in colorectal cancer screening: detection and diagnosis. So let's start with detection. Tell me, how good is AI at detecting worrisome polyps?
Dr. Michael Byrne: I think a fairly obvious opening statement is that there are already studies showing that it's a lot better than many physicians, which should hopefully be an alarm ringing in our heads telling us that, for years, we've been a bit complacent, thinking we're doing pretty well when actually we're not doing that well. I think if you look at most of the studies, Blair, in the last two or three years, they're suggesting that AI can increase the polyp pickup rate for most doctors by at least 10 to 15%, and that's a big gain.
Dr. Blair Bigham: You said the word most twice. I'm curious. Are some physicians able to beat the computer? Are there superstars out there, or is the computer better than everyone?
Dr. Michael Byrne: Hopefully, ultimately the computer will be better than everybody, and that's about dealing with the inaccuracies of the human eye and brain, the distraction, the fatigue, the limitations. But for now, definitely, what we want machines to do is put an expert on your shoulder. There are people out there who are finding and diagnosing lots and lots of polyps; we'd rather have that performance replicated in more regular settings. So I think, for now, the machine is getting to the point of what the true global expert is doing.
Dr. Blair Bigham: So give us a view of what this looks like with that guide on your shoulder. When you are looking at your endoscopy screen, are you seeing… does a little red dot show up?
Dr. Michael Byrne: It depends, so there's no one user interface that is the norm right now. It's still being ironed out by the different manufacturers and people who make these solutions, but essentially what it does is it puts a little bounding box around the polyp, and it may or may not give an audible alert. I'm doing my procedure and on the same screen, if a polyp shows up, I may or may not have seen it, but the machine puts a little green or yellow box around the polyp, and it draws my attention on the screen.
Dr. Blair Bigham: All of this in real-time?
Dr. Michael Byrne: All of this in real-time. Yeah.
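For a sense of what that looks like in software terms, here is a minimal sketch of such a real-time overlay loop; `detect_polyps` is a hypothetical stand-in for a trained detection model, not any vendor's actual API:

```python
# Minimal sketch of the real-time bounding-box overlay Dr. Byrne describes.
# OpenCV handles video capture and drawing; detect_polyps() is a hypothetical
# placeholder for a trained detector returning boxes with confidence scores.
import cv2

def detect_polyps(frame):
    """Hypothetical model call: returns a list of (x, y, w, h, confidence)."""
    return []  # stub; a real system would run a trained network here

cap = cv2.VideoCapture(0)  # the live endoscopy video feed
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    for (x, y, w, h, conf) in detect_polyps(frame):
        if conf >= 0.5:  # alert threshold trades sensitivity against false alarms
            cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("endoscopy + AI overlay", frame)
    if cv2.waitKey(1) == 27:  # Esc exits
        break
cap.release()
cv2.destroyAllWindows()
```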
Dr. Mojola Omole: I'm a general surgeon and I also scope, so how much faster or slower does it make your scoping?
Dr. Michael Byrne: Yeah, that's a very good point you make, Jola, because one of the barriers that we're having with AI is implementation and user acceptance. It's confidence in the accuracy of the tools, but it's also, is there a nuisance factor? It may make me better, but is it also going to improve efficiency? As you and I both know as endoscopists, if it's slowing us down, a lot of people are going to go, "I don't want to be slowed down; I want to be at the same pace or actually made more efficient and quicker so I can do more procedures, hopefully to serve my patients," and also for the selfish practice things that you and I know about in the real world. So, for now, hopefully it doesn't really slow things down. I think it will take a little bit of time for physicians to get used to the new "distraction on the screen" of a new tool telling you to have a look over there, but I think most studies suggest that it has little impact on the overall timing of the procedure.
Dr. Mojola Omole: What's the false positive rate for it?
Dr. Michael Byrne: So, the original models' rates were pretty high. They would beep on everything. If it saw a little air bubble, a different fold on the bowel wall, or a piece of stool, it would put a box around it, which is very frustrating. Most tools now are getting a false positive rate in the range of maybe 3 to 5%. And as you and I know as endoscopists, when we look at the screen, we question ourselves all the time, with our nurse, saying, "Is that a polyp? I'm not sure." So yeah, we have to remember the inefficiencies that we human physicians have every day and not expect machines to be perfect.
Dr. Blair Bigham: So that's the detection side of things. Tell us about the diagnostic side of things. So either a machine or a human eye detects something that looks worrisome and then you take a biopsy. How does AI play into the diagnosis of that biopsy?
Dr. Michael Byrne: So I'm particularly excited about this aspect of AI. It's a field I've been involved with for several years. Doing a so-called optical biopsy means that the human eye looks at something and, without doing a physical biopsy, you make a determination as to what kind of tissue it is; in the colon, it's, "Is it a precancerous polyp or not?" The accuracy of the human eye for most people, and I'm sure Jola will agree with this, is not that great. Whereas the machine, the studies are suggesting, is very accurate at telling you in real time, in less than a second, what that polyp is. That's going to change practice, because right now every single polyp that we see, we take off and send to the pathology lab.
What AI tools should allow is what's called resect-and-discard. So you use the machine, it tells you what it is, and then you take it off and you throw it in the garbage because you've already done your virtual pathology. Hugely cost-saving and efficiency-gaining. Why is that important? Lots of reasons. We should all be custodians of the appropriate use of healthcare dollars in a day when healthcare overall is very expensive. If, for example, you can obviate the need for pathology to be involved with these very small and common lesions, you might save the US system, for example, a billion dollars a year. That's one metric that's out there.
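That billion-dollar figure is easy to sanity-check with rough arithmetic; every input below is an illustrative assumption, not data from the paper:

```python
# Rough sanity check of the resect-and-discard savings claim.
# All inputs are illustrative assumptions, not figures from the paper.
colonoscopies_per_year = 15_000_000   # assumed annual US volume
diminutive_polyps_per_case = 0.7      # assumed small polyps sent to pathology, on average
pathology_cost_per_specimen = 100     # assumed cost of one histology read, USD

savings = colonoscopies_per_year * diminutive_polyps_per_case * pathology_cost_per_specimen
print(f"~${savings / 1e9:.1f} billion per year")  # ~$1.1B with these inputs
```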
Dr. Mojola Omole: When you're looking at the polyp, it's not just how it looks that you're assessing, whether it's adenomatous versus hyperplastic. Does the AI algorithm also take in what the polyp looks like when you're there with the biopsy forceps, or is it strictly looking at its visual characteristics?
Dr. Michael Byrne: It's generally looking at the visual characteristics for now, and again, I'm sure you know this. The human physician, if we are trying to do our own optical biopsy, in other words a virtual pathology, which is possible, is asked to look for three things: colour, the shape of the pits (the little round structures on the polyp surface), and the vascular or blood vessel pattern. One such scheme is called the NICE classification. Most physicians are not that accurate at that. If you look at what an AI algorithm doing an optical biopsy sees, it's looking at at least 1,000 features per polyp, things that we can't even conceptualize as humans. It's looking at many things on the surface and below the surface that the human eye often just can't appreciate.
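To make "at least 1,000 features" concrete, here is a toy sketch of the kind of convolutional classifier such a system might use, distinguishing adenomatous from hyperplastic crops; the architecture and dimensions are illustrative, not Dr. Byrne's actual model:

```python
# Toy sketch of an optical-biopsy classifier: a convolutional network
# mapping a polyp image crop to adenoma-vs-hyperplastic probabilities.
# The 1,024-dimensional internal feature vector is the machine analogue
# of the "at least 1,000 features per polyp" mentioned above. This is
# an illustrative architecture, not any deployed clinical model.
import torch
import torch.nn as nn

class OpticalBiopsyNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(        # learned visual features: colour,
            nn.Conv2d(3, 32, 3, padding=1),   # pit pattern, vessel pattern, and
            nn.ReLU(), nn.MaxPool2d(2),       # many cues with no human name
            nn.Conv2d(32, 64, 3, padding=1),
            nn.ReLU(), nn.AdaptiveAvgPool2d(4),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),            # 64 channels x 4 x 4 = 1,024 features
            nn.Linear(1024, 2),      # two classes: adenoma vs hyperplastic
        )

    def forward(self, x):
        return self.classifier(self.features(x))

model = OpticalBiopsyNet()
crop = torch.randn(1, 3, 224, 224)         # one polyp image crop
probs = torch.softmax(model(crop), dim=1)  # class probabilities (untrained here)
```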
Dr. Mojola Omole: We're talking about smaller polyps, right? Because my worry is, what are the chances of it missing a malignant polyp?
Dr. Michael Byrne: Yeah. For now, we're definitely talking in this space about smaller polyps, to bring resect-and-discard into practice. In other words, make a diagnosis, take it off and throw it in the garbage. You're absolutely right, and an important point is that for larger polyps, let's say 5 or 10 millimeters and bigger, the chance of cancerous involvement is higher, as you know. That doesn't mean that AI doesn't have a role there. It just means that's a different kind of tool, and lots of groups, including mine and others, are looking at that as well. Not only how we can improve how we manage polyps overall, detect more and do an optical biopsy on them, but how we can deal with the larger, more obviously precancerous lesions. How do we take them off? Do we send them to people like you as a surgeon to cut them out surgically, or can we remove them with endoscopic surgery? AI will be able to tell us depth of invasion, chance of lymph node involvement, et cetera, and so risk-stratify the patient in real time as well, but we're not quite there yet.
Dr. Mojola Omole: That would be great.
Dr. Blair Bigham: Michael, you run a business developing this type of AI. When you talk about it to your colleagues, how receptive are they? Are they excited? Are they nervous? Are they skeptical?
Dr. Michael Byrne: I think a few years ago there was a huge amount of skepticism, as with any technology. I hope and I believe a lot of that has gone away. I think the concern about performance has mostly gone away, although we've still got a ways to go to get user confidence. We need physician confidence to get patient confidence, but I think many of the studies are so promising that physicians, even those who know little or nothing about AI and see this black box, are accepting that machines and technology can and should and will help our performance. I think the ongoing skepticism is also concern about turf battles. So, for example, in the space we've just been speaking about for the last five, 10 minutes, the practice of optically biopsying a polyp and putting it in the garbage rather than sending it to the lab takes away the need for a pathologist to look at those small polyps. That is a real territory infringement, and similarly in radiology, where a lot of diagnostic imaging may go down the path of being read by a machine, that definitely frightens people.
It shouldn't, because I think there are a lot of things we can have where it's basically hybrid intelligence, the human physician and the machine working in perfect harmony, and we take away all of the things that we don't like doing, the things that we're inefficient at, the fatiguing things, the administrative work, the scheduling, the billing, the reporting, and focus much more on the patient. So it shouldn't be seen, I think, as a threat. There's a slide I use in some of my talks which finishes by saying AI won't replace physicians, but physicians who use AI will replace those who do not. I think that's probably the answer for now: we're getting to the point where people will embrace it, but we need to help them overcome their concerns about a territory or a turf or a practice.
Dr. Blair Bigham: Aside from those cultural, deeply ingrained things in medicine, are there any other obstacles that are pressing right now that are preventing the acceleration of this particular technology?
Dr. Michael Byrne: Yes, many. I mean, I wouldn't say we're at the beginning; I think we're at a very exciting time in medicine where machine performance is starting to outperform humans in many aspects, but we're just crossing that intersection. We've got concerns around introducing bias, the ethical use of data, regulatory barriers, who's responsible if the machines make a mistake, and the robustness of models, because right now most AI models are trained on data sets with somewhat limited variability. Oftentimes-
Dr. Blair Bigham: Tell me more about that, Michael, because we've been talking a lot in the last couple of months about how guidelines and data sets are sometimes not inclusive of the general population, particularly when guidelines from elsewhere might be used. How representative of the general population is the group used to generate this machine learning algorithm?
Dr. Michael Byrne: So in the work we're talking about in terms of the paper in the journal on polyps, it's good, but it's absolutely not perfect. I think it's very clear to most people in this space that we really need to now have much more generalizable tools that are developed from having data not just from teaching hospitals but from community settings, from rural hospitals and from all sorts of jurisdictions, ethnicities, ages, whatever. We need that entire mix to make these models robust.
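One concrete way teams probe that generalizability is to score the same model separately on each data source; a minimal sketch, with hypothetical held-out labels and model scores:

```python
# Minimal sketch of a per-site performance audit, one way to surface the
# generalizability gap described above. Labels and scores are hypothetical.
from sklearn.metrics import roc_auc_score

held_out_by_site = {
    "teaching_hospital": ([1, 0, 1, 0], [0.9, 0.2, 0.8, 0.3]),
    "community_clinic":  ([1, 0, 1, 0], [0.6, 0.7, 0.5, 0.4]),
}
for site, (y_true, y_score) in held_out_by_site.items():
    print(site, roc_auc_score(y_true, y_score))
# A model that looks strong at the teaching hospital but weak at the
# community site is not yet ready for broad deployment.
```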
Dr. Mojola Omole: I think ultimately what we would love to get to is where patients don't have to undergo an invasive procedure. Is there a role for AI in virtual colonoscopies?
Dr. Michael Byrne: Absolutely. There's a role for AI in all of these things. There's a role for AI in analyzing all the data that come out of the liquid and stool assays, like FIT testing. There's a role for AI in radiology, as you say, in a CT or virtual colonoscopy picking up those polyps. Maybe the AI can also tell from the X-ray image what kind of polyp it is. There's a whole field of radiomics, or deep radiomics, where you can not just see a lesion but, hopefully, also characterize it. Yeah, I strongly believe that AI will help in all aspects of imaging, whether they're visible or invisible to the human eye.
Dr. Blair Bigham: This is totally fascinating. We have a very bright future. Thank you so much, Michael.
Dr. Michael Byrne: My pleasure. Thank you very much for inviting me.
Dr. Blair Bigham: Dr. Michael Byrne is a gastroenterologist in Vancouver and the Chairman and CEO of Satisfai Health Inc. As exciting as it is to hear about the use of AI to support diagnostics and detection, our next guest does not think this is where AI is going to have its greatest impact on medicine. Zayna Khayat is a future strategist and Vice President of Growth & Client Success at digital health solutions firm, Teladoc Health. She's also adjunct faculty at the Rotman School of Business. Zayna, thanks for joining us.
Zayna Khayat: Thanks for having me.
Dr. Mojola Omole: So Zayna, where do you think AI will actually have the greatest impact in healthcare?
Zayna Khayat: Yeah, I mean, everybody wants to go right away to what we imagine in our heads, that a machine will replace these highly trained diagnosticians called physicians, but the reality is that's going to be an impact, I don't know how quickly. The majority of the impact of AI right now is just doing what we, as humans, do very poorly with reams of data, whether because we don't have the capacity, the computing power, or the speed. So doing what we do smarter, faster, better, and cheaper. I would suggest that the two biggest areas will be, one, just reducing total costs. Healthcare is about 30% inefficient.
Dr. Mojola Omole: That's it?
Zayna Khayat: 30%, right, so there's 30 cents of waste on every dollar.
Dr. Mojola Omole: It feels like more.
Zayna Khayat: Yeah, but that's huge. No business can operate at a 30 to 40% inefficiency, yet we tolerate it in healthcare. A big driver is that we use a very analog way of managing information and making decisions, and that's without ever touching patient care. I think AI can help us reduce inefficiency and therefore liberate capital. The second is increasing throughput, which is a different way of using your resources smarter, because you can move people through a clinic faster or triage a lot better and therefore get the right people to the right place. None of that affects how a decision is made about a diagnosis that impacts care. It's just the organization of the care, and I think that's where 99% of the value will come from for at least the next decade.
Dr. Blair Bigham: Zayna, give me a few other examples of how AI can get us out of the analog.
Zayna Khayat: Just basic things. I was telling this story the other day. I finally signed up for the bivalent fourth dose of my vaccine. In theory, my primary care should have let me know, "Hey, you're due. Have you done it?" Of course, I heard nothing from my primary care, because all they do is reactive care, but the local pharmacy knew that I was due because I did my third dose there. They sent me a text and an email, gave me a way to self-schedule my dose, and gave me all my reminders and prep work beforehand. Then I showed up, and they gave me a clipboard to fill out three pages of forms. I was like, "What?" It's just such a simple example: imagine if all those clipboards with those forms were eliminated from healthcare. We really don't need them.
Now you've got two ways value gets created there. One is, without any AI, you've digitized this analog thing, so you can take all the human steps of touching that form out of the equation: my human steps of filling the stupid thing out, and the clinician's human steps of making it, printing it, putting it on the clipboard, giving it to me, and then manually typing in the information afterwards. Bizarre. But then the utility of that information is in a data set where maybe Zayna is pair-matched to the 80,000 others like me in Canada who were also about to get their fourth dose, and maybe there'll be some patterns they can see or predict to make that whole process faster, smarter, better, cheaper. That's the power of all these exchanges in healthcare. Once they become digital and data-based, you can be so much smarter about all of it at every step. And we are missing that entire black box with our analog approach to care organization.
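Once the clipboard form is structured data, the pair-matching Zayna describes is a few lines of data work; a minimal sketch using pandas, with hypothetical columns and criteria:

```python
# Minimal sketch of "match me to the patients like me," once intake forms
# are structured records. The table, columns, and criteria are hypothetical.
import pandas as pd

# Hypothetical digitized intake forms; in practice this would come from
# the pharmacy's records rather than a hand-built table.
patients = pd.DataFrame({
    "age_band": ["40-49", "40-49", "30-39"],
    "doses_received": [3, 3, 2],
    "province": ["ON", "ON", "BC"],
})
me = {"age_band": "40-49", "doses_received": 3, "province": "ON"}

cohort = patients[
    (patients["age_band"] == me["age_band"])
    & (patients["doses_received"] == me["doses_received"])
    & (patients["province"] == me["province"])
]
print(len(cohort), "patients like me, due for the next dose")
```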
Dr. Blair Bigham: There are a lot of obstacles, obviously, to digitizing, but one concern that comes up often for Canadians is data privacy. How concerned should people be about this idea of their health information being digitized and available on a cloud?
Zayna Khayat: I don't think there should be concern, because if people knew how their health information is available right now, not on the cloud, they would be rioting in the streets. Just the other day, when I was getting my bivalent vaccine, the pharmacist was asking the patient questions, and we all heard all of it: their address, their phone number, why they were there that day. That's pretty private information that we were all privy to. Or the fact that, I think, 15% of faxes go to the wrong fax number. CIBC for a while was getting hundreds of thousands of referral requests because one number was wrong and propagated over and over on a fax. I just think, what are we talking about? We often hold the next way of doing things up to a very high standard and don't compare it to the absolutely terrible way we do things now, in analog, which somehow we treat as the gold standard.
Dr. Blair Bigham: A lot of physicians complain about the workload that is added when they go and digitize. I certainly experienced this in the United States where the digital record was far more comprehensive in a bad way. It was annoying and a waste of time often, and it changed medical education, certainly in the States.
Zayna Khayat: I hear that over and over again, so I think we have to really think about segmentation. The place where your Epics of the world have embedded is pretty much hospital-based care. That is a part of healthcare, but it is not the entire system. It's just what we obsess about, particularly in Canada, so we count it as the thing that everyone does, but it's not. It's hospital-based documentation, and it's just very bad design.
Zayna Khayat: Yeah, but it doesn't mean you shouldn't have documentation that's digital. To me, to say that because we've added burden because of terrible design and implementation, that means tech is bad, I think is a wrong framing, completely wrong framing.
Dr. Blair Bigham: So let's just say that we took a quantum leap and we were digitized. We were computerized. We got rid of all this paper redundancy and we had this nice streamlined electronic medical record with all of our data in it. It didn't take Jola and me forever to input at all. Maybe it just automatically populated. Somehow there's a box in the room listening to my conversations, and it just knows what to put where. What then is the future of AI in healthcare when it comes to diagnosis or really saving lives, outside of system efficiency?
Zayna Khayat: Yeah, so that future is already here in other jurisdictions that have already waded through the mucky stage we're in in Canada of just digitizing all our analog stuff. My framework is that over the next five years, most generally agree on the order of operations for where AI will help unlock new value that humans literally cannot deliver with the capacity we have: first and foremost, like I said, operations and logistics. None of that affects direct patient care or decisions about care.
The second is around prediction and triage, so a little bit of what you've been talking about with your colonoscopy diagnostics: that step of a doctor's process of predicting, based on evidence, what might be going on. That's what AI does really well. It replaces that step and does it better, smarter, faster, cheaper. That, I think, will apply to a couple of areas. Dermatology, for sure; that's the easy one, very easy to train AI, and it's in practice. Radiology, as we've been discussing, and all types of imaging.
Then there are two other clinical areas. One is psychiatry, and that's different. It's not images, but there's so much data. We are working with some partners where, in an interview, let's say to predict autism or dementia, you have a camera facing the patient and a camera from the side, and there's so much data in intonation, face, voice, and tone that can predict things no human will ever catch with that accuracy. Cardiology is the other big area. Then of course there's drug discovery. That's more about the statistical odds of finding the right match of molecule to target instead of what pharma has been doing for the last 100 years, which is to throw a bunch of spaghetti strands against the wall; one might stick, and then it costs a billion dollars to develop it.
Dr. Blair Bigham: Just how far behind the eight ball is Canada, and how many years will it take before you think we've achieved a catch-up where we've digitized and now we're really focused on using AI to supplement or enhance a physician's day-to-day work?
Zayna Khayat: Yeah, so first, there's a great quote from Anthony Chang, one of the clinical experts in this area, I think with an academic medical center in Boston. I love his quote. He said, "AI will do to humanity what electricity did." Some think AI is the last human invention, because every invention from now on will only happen because of AI, so it's a really interesting framing. But he said, "If AI is electricity for humanity, healthcare is like a little light bulb in a hut right now." So even in places that are already building these tools into pretty low-hanging fruit, including diagnostics and predicting risk of disease or what have you, which obviously then affects medical care, we're just infants. We're just scratching the surface. So although Canada is very behind, I think the catch-up will be fast.
I often say digital came to healthcare around 2008, and if you think about that, that's already 10 years after the rest of the world got transformed by digital, the eCommerce revolution of 1999, 2000, so we were already 10 years behind. Then I generally add a 15-year tax for Canada. So we are just getting started on realizing that we cannot keep delivering care using labour. It's impossible. You will never have enough hands on the patient no matter what you try to do, no matter how hard you try to poach them from other countries, no matter how many medical schools you build.
So, technology replaces labour; that's what it does. Labour is either physical work or cognitive work, and AI will help with both, so I don't think it's bad that we're behind. I just think we need to place our focus where the most value can be and walk before we run. I find AI and all the big fancy tech capture so much imagination that big funding goes to big centers, and I'd rather we fund housing and food security with that money first, so we don't have such demand on healthcare that we ask formal medical care to mop it up.
Dr. Blair Bigham: Zayna, thank you so much for joining us. We could go on right into the next century talking about this type of stuff. Hopefully, Canada is able to catch up and digitize without crushing physicians with keystrokes. Thanks so much for joining us. Zayna Khayat is a future strategist and Vice President of Growth & Client Success at digital healthcare solutions firm, Teladoc Health. She's also adjunct faculty at the Rotman School of Business.
So, Jola, let's start with the anecdote. Do you think AI would make you a better endoscopist?
Dr. Mojola Omole: Well, my pride tells me that I'm as perfect as possible, but reality says that, yes, of course. I do think that this has the ability to really increase the adenoma detection rate. The one part that made me go, "Huh," is that, at the end of the day, it only sees whatever you show it, so if you are not taking the time to go slowly on your withdrawal, it won't see it anyway. Right?
Dr. Blair Bigham: Right.
Dr. Mojola Omole: I think it's only as good as the endoscopists themselves in terms of having patience and doing the greater-than-10-minute withdrawal time. So that was the one thing that really stuck out to me. This is, I think, really phenomenal, especially when we talk about fatigue during a procedure, but at the end of the day, we give it the input.
Dr. Blair Bigham: Right. And then, widening the lens and going up to 30,000 feet, I wasn't overly surprised to hear Zayna talk about how computerization and digitization might make us more efficient, but I'm always skeptical that we can do that without throwing some of that administrative burden onto clinicians like nurses and physicians, who are trying to spend as much time with patients as they can.
Dr. Mojola Omole: I think 100% we do. I love electronic medical records; I think it's ridiculous that we only just started using them, and I do think it's ridiculous that they're not connected across different healthcare systems. UHN in Toronto uses Epic. We at Scarborough use Epic, but if a patient is seen at UHN, I don't automatically see the notes of what happened to them. I have to make a request to actually access those notes.
Dr. Blair Bigham: It would be nice if we could leverage more technology to reduce the human data entry. I know at Stanford we had all the bells and whistles. There were fancy lights and sensors that would detect urine output and chest tube drainage and things like that, so it would just automatically end up in the chart, and nurses had less cognitive workload, less time in front of the computer, and could spend more time doing patient care.
Dr. Mojola Omole: That is amazing. You see, that to me is great technology, because right now, at the end of the day, they're still writing on paper, then going to the computer to enter it, and going to the computer to check the orders and all of those things. So that's great.
Dr. Blair Bigham: If video cameras and microphones could better understand human behaviour and human action and just auto-document, it would save all of us a lot of frustration and allow us to do patient care.
Dr. Mojola Omole: For sure.
Dr. Blair Bigham: But I mean, I don't want AI to get too far ahead of us. Otherwise, you and I might be out of a job podcasting.
Dr. Mojola Omole: Well, yeah, they definitely can take our podcasting job soon, but they'll always need us as physicians.
Dr. Blair Bigham: I hope so.
Dr. Mojola Omole: This was a great episode, and there's really a lot to think about.
Dr. Blair Bigham: That's it for our episode this week on the CMAJ Podcast. Please remember to share our podcast wherever you download your audio from. I'm Blair Bigham.
Dr. Mojola Omole: I'm Mojola Omole. Until next time, be well.
Dr. Blair Bigham: Signing off.