A Roundtable on A.I. in Health Care

Each year the news division hosts the WNYC Health Convening, with support from the Alfred P. Sloan Foundation, as an opportunity for healthcare experts and practitioners to inform WNYC's health reporting. This year's guests: Siddhartha Mukherjee, M.D., associate professor of medicine at Columbia University, cancer researcher, co-founder of MANAS.Ai, a new company that integrates AI and medicine, and author of several books, most recently The Song of the Cell: An Exploration of Medicine and the New Human (Scribner, 2022); Shinjini Kundu, M.D., PhD, physician and computer scientist at The Johns Hopkins Hospital; and Paul Friedman, M.D., chair of the Department of Cardiovascular Medicine at the Mayo Clinic in Rochester. They discuss how artificial intelligence is currently being used in healthcare, including AI's role in diagnosing diseases, discovering the building blocks for medication, and concerns related to patient privacy and algorithmic bias.
[MUSIC]
Brian Lehrer: It's The Brian Lehrer Show on WNYC. Good morning again, everyone. Each year WNYC's local news division hosts a health convening with support from the Alfred P. Sloan Foundation. The convening has typically been an opportunity for healthcare experts and practitioners to inform WNYC's health reporting in an off-the-air briefing and interaction. Well, this year we're bringing it onto the air, as everyone on all sides of medicine, patients, health professionals, hospital administrators, researchers, businesses of various types, equity advocates, and others, is trying to figure out what the opportunities and risks are of deploying AI as the technology develops.
In a little while we'll invite you to call in, listeners, with your AI in medicine questions and experiences. Some of the exciting developments in the field include AI's role in diagnosing diseases early and discovering the building blocks for medication. We'll get into some really interesting and hope-inspiring specifics and advances in certain areas. Patient billing is another rising use. There are also concerns we'll get into related to patient privacy, algorithmic bias, and what happens when AI gets things wrong. As in so many fields, AI threatens human employment as well.
Let me introduce our guests. Siddhartha Mukherjee, M.D., associate professor of medicine at Columbia University, cancer researcher, and author of several books, most recently, The Song of the Cell: An Exploration of Medicine and the New Human, published in 2022. He was here for a book interview for that then, and you may know him from the Pulitzer Prize-winning earlier book about cancer called The Emperor of All Maladies.
Also, Dr. Shinjini Kundu is here. She has the unusual combination of being a physician and a computer scientist at Johns Hopkins University. She specializes in radiology with a PhD in artificial intelligence as well. Her breakthrough led to recognition as a Forbes 30 Under 30 in healthcare, a young innovator as named by the MIT Technology Review, and a World Economic Forum Young Global Leader, among other awards.
Also with us, Dr. Paul Friedman, M.D., Chair of the Department of Cardiovascular Medicine at the Mayo Clinic. He helped develop software to allow doctors and researchers to derive data and information using AI from electrocardiograms. Dr. Kundu, Dr. Friedman, Dr. Mukherjee, thank you all so much for giving us your time for this discussion today. Welcome to WNYC.
Dr. Paul Friedman: Thank you.
Dr. Siddhartha Mukherjee: Thank you very much. Thank you.
Brian Lehrer: Let me dive in right away on one of the most exciting areas of AI research, and one that cuts across all your fields, and that is early detection of disease, in some cases very early, for the best chance to prevent progression. Dr. Kundu, as a radiologist, I see you're working in the very different fields of early osteoarthritis risk detection and brain imaging relating to autism. I want to start with your research on detecting osteoarthritis, the most common form of arthritis, because I think this will be very straightforward for listeners. Traditionally, it's diagnosed, I gather, after bone damage is already apparent to doctors, but you created a program that looked at cartilage density to try to predict the disease three years before the onset of symptoms. Can you tell our listeners a bit about that program, and what the role of AI was that a human doctor couldn't do?
Dr. Shinjini Kundu: First of all, Brian, thank you so much for having me on the show. It's really great to be here with all of you. As you mentioned, Brian, osteoarthritis is a fairly common condition. It's a leading cause of disability in the United States, and about one in ten people have osteoarthritis, particularly of the knee. It can be debilitating, interfering with daily tasks like walking and climbing stairs and with social and recreational activities, which can significantly impact quality of life. As you mentioned, Brian, the problem is that we can't detect osteoarthritis today until the patient develops pain and the doctor can see the bone damage on the X-ray. By this point, unfortunately, it's a late stage in the disease, and there's really no way to reverse the damage.
The current treatments, like pain medications and physical therapy, can help with symptoms a little bit, but unfortunately, a lot of patients end up needing joint replacement surgery as the eventual outcome. We're really detecting osteoarthritis too late here. The goal of the project was to see if we could detect osteoarthritis at a much earlier stage, certainly much earlier than patients are feeling it and earlier than doctors can see it. What we did was look at the knee cartilage images in healthy people, patients who didn't have any symptoms and whose bones looked okay on the X-rays. By following them up over time, we knew that about half of this group went on to develop clinically obvious osteoarthritis, with the pain and the bone damage, but the other group hadn't. The other group still remained healthy.
The goal was to be able to predict and develop a model, really, to look at the knee cartilage and to predict accurately who would go on to develop osteoarthritis at a future time point and who would stay healthy. To accomplish this, we developed a new AI technique called 3D Transport-Based Morphometry, or TBM for short. TBM was able to look at the cartilage images and find hidden patterns in the cartilage images that were undetectable to humans, and also to other algorithms for that matter. More importantly, this was before humans would be able to find those patterns.
Not only was it able to find those hidden patterns associated with future osteoarthritis, but given a patient it hadn't seen before, if that person had that pattern of damage in their knee cartilage, the model could predict that they would go on to develop osteoarthritis in the future with 78% accuracy. What I find exciting about this work is not just the accuracy, and not just that we may be able to detect osteoarthritis before symptoms develop, which is exciting in and of itself. What I find exciting is that this TBM technique was able to explain its logic. It was able to actually show us the exact patterns that it was using to make its determination, in contrast to a lot of current machine learning algorithms, which can't explain their logic. They're called black boxes.
In this way, TBM actually showed us that there was a difference in the water content and the water distribution in patients who went on to develop osteoarthritis. That was very exciting for us because it's the idea that not only can we detect disease early, but we can maybe even start to see disease earlier than we're currently seeing it.
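[Technical sidebar: below is a minimal, one-dimensional sketch of the cumulative distribution transform (CDT), the optimal-transport idea that transport-based morphometry generalizes to 3D images. The signals, grid, and shift here are synthetic illustrations for readers curious about the mechanics, not Dr. Kundu's cartilage data or her full 3D method.]

```python
import numpy as np

def cdt_displacement(signal, template, grid):
    """Displacement field of the transport map taking `template` to `signal`.

    The map f satisfies CDF_signal(f(x)) = CDF_template(x); the field
    f(x) - x is the CDT-style feature representation used downstream."""
    cdf_s = np.cumsum(signal) / np.sum(signal)     # empirical CDFs
    cdf_t = np.cumsum(template) / np.sum(template)
    f = np.interp(cdf_t, cdf_s, grid)              # f = CDF_s^{-1} o CDF_t
    return f - grid

grid = np.linspace(-5, 5, 500)
template = np.exp(-grid**2)             # reference "anatomy"
signal = np.exp(-((grid - 1.0) ** 2))   # the same shape, shifted by 1
print(cdt_displacement(signal, template, grid)[200:300].round(2))  # ~ 1.0
```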
Brian Lehrer: Really, really interesting. Before we bring in our other guests, let me ask you this two-part follow-up question. If this was done on people without any symptoms, how would the humans in the field of medicine decide who should get screened? Screening costs money in each case, right, for individuals, insurance companies, or the government. Would there need to be a risk profile to even do this screening from a cost perspective? And what could a patient do differently at that early, three-years-in-advance, asymptomatic stage to actually have this information be truly preventative and not just anxiety-producing?
Dr. Shinjini Kundu: Yes, absolutely. I think those are excellent questions. The first question is about who should get screened. I absolutely think that we need good risk stratification and to select the patients appropriately who should get screened in the first place. The other thing is that early detection is one thing, but as you mentioned, if you know you're going to develop a disease, but you can't do anything about it, it's only just going to provoke anxiety. The other piece of this is in developing new therapies that actually have the potential to slow down the disease or to delay it. Coupling early detection with new therapies, that could be a potent combination to profoundly change the course of a disease.
I'll end on this note. This technology is not unique to osteoarthritis. I'm a neuroradiologist. As a radiologist, my specialization is in diseases of the brain, spine, head, and neck. We have many unsolved problems in neuroradiology. There are many conditions, for example, concussion, mild traumatic brain injury, autism, where patients experience the symptoms of these conditions, but we're not always able to find the changes in the brain that are responsible for producing the symptoms that patients are experiencing. The diagnosis is based on interviews and the experience of the symptoms and behaviors, but we don't really have another reliable way to detect a lot of conditions apart from that. For the past few years, I've been working on a lot of conditions in the brain, trying to better uncover the biology behind conditions like autism and mild traumatic brain injury. Once we can uncover that biology better, we can pair it with the development of new treatments that target that biology. In the future, we might be able to offer patients more therapeutic options if they are interested in them. I think that is the potential in the future for TBM.
Brian Lehrer: Very interesting. Dr. Mukherjee, on The Emperor of All Maladies, as your book calls it, cancer: we're all concerned about cancer. If we haven't had it ourselves, everyone knows someone who has. It's our number two killer behind heart disease in this country. Dr. Friedman, we'll get to you as a cardiologist too. One example of AI and cancer detection that I read about, and I'll pass this along to our listeners, is a Swedish study of 80,000 women that showed a single radiologist working with AI detected 20% more cancers in mammograms than two human radiologists working without the technology.
Dr. Mukherjee, are you familiar with that study or others like it, and what it might suggest about the promise of AI in this area?
Dr. Siddhartha Mukherjee: Absolutely. First of all, thank you for having me on the show. Incidentally, I'm going to start with osteoarthritis, since that's also something that we work on. I'll come back to cancer in a second. I'm actually very familiar with the work Dr. Kundu raised, the TBM study, because our laboratory published, just two months ago, what I think is an important paper in osteoarthritis, which will be relevant to cancer in a second.
The question that Dr. Kundu asked is: it's great to do early detection, but if you don't have anything to do about it, then you just cause anxiety and you end up with no real change in the trajectory of a human with disease. Well, it turns out that our laboratory discovered stem cells in cartilage. We showed in animal models that if you affect those stem cells, you can either make more of them in cartilage, and thereby rescue early osteoarthritis, or you can kill them, and if you kill them, you cause osteoarthritis. They're marked with a particular genetic marker, so you can follow them in time.
Now, why am I saying this? I'm saying this as a model for osteoarthritis, as a model for cancer, as a model for cardiovascular disease. If you can combine early detection with an understanding of the pathophysiology of a disease, what's going on, what's wrong, then you can ask the question: how can we treat that disease early? Paradigmatically in medicine, that's always been a formula for success.
One of the problems that has plagued the field of early detection, in every sphere of early detection, is that early detection is often not coupled with treatment. There are two problems, I should say. One problem is that early detection often picks up false positives. There are statistical biases of early detection; we can talk about those. That's one set of problems. The second set of problems is early detection without anything to do. I call this the "all dressed up but nowhere to go" problem. Both problems have plagued the field of early detection.
The remarkable thing about AI is not only can it help early detection as Dr. Kundu just pointed out, but also, it can help the development of drugs either by finding the pathways, and I gave you one example, in osteoarthritis by activating the skeletal or cartilage forming stem cells, or by creating mechanisms by which particular patients are stratified to a particular medical therapy so that you're not putting everyone into the study and getting relatively poor outcomes.
It is the combination of early detection and then the possibility of drug discovery, or I would say, broadly, therapeutic discovery, it doesn't have to be a drug, aided by AI, that's really one of the promises of what AI can do to transform medicine. I'll leave it to others to talk about the workflow that's required, clinical trials, physician visits, and the other workflows that can also be empowered by AI. As far as early detection--
Brian Lehrer: Well, let me just ask you one follow-up question on that, because I see that you work with AI in drug development as well. Designing a drug and getting through clinical trials to final approval is very expensive and time-consuming. The New York Times reports a cost per product of somewhere around a billion dollars on average, a timeline of 10 to 15 years, and that nearly 90% of the candidate drugs that enter human clinical trials fail. Wow. Do you think AI has the potential to speed things up significantly, both for medical advances, and again, to save money?
Dr. Siddhartha Mukherjee: I think both. Everyone knows that the cost of pharmaceuticals is really scandalous at times and beyond the capacity of most economies to afford. One of the ways that AI can speed this up is by stratifying patients, identifying patients who are really at risk, taking those patients, and running much smaller, smarter, cheaper clinical trials, and also by decreasing the failure rate of drugs so that you don't have this vicious loop in which a pharmaceutical company says, "Look, I have to recover my R&D costs for the 90 failed drugs for every 10 that are successful."
In both ways, by making smarter, cheaper, cleverer clinical trials, and by making smarter, cheaper, cleverer drugs for clinical trials, AI can speed up the whole process, and hopefully, make drugs cheaper and more accessible.
Brian Lehrer: What's an example of speeding up the clinical trials because people might think, "Well, how can a computer speed up the process of having human beings take something versus taking a placebo and giving it the right amount of time to have meaningful results?"
Dr. Siddhartha Mukherjee: There are many examples; I'll give you a quick one. In fact, I'll just borrow from Dr. Kundu's example. Let's say you have 100 patients who are at risk for osteoarthritis, and you put them on a clinical trial for a drug, but 50 of those patients were never going to develop osteoarthritis in the first place. Basically, you recruited 100 patients at whatever cost you recruited them at, and 50 of them are not even eligible, or won't even respond, because they're not even going to have the disease.
In contrast, if you could use AI, and this is one example of many, to eliminate the 50 who are not even going to develop the disease and focus your attention on the 50 who are, then already you've saved half the cost of the clinical trial. That's one example, and I can give you dozens like it.
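[Technical sidebar: a toy sketch of the enrichment idea Dr. Mukherjee describes, screening candidates with a predictive model before enrollment. The risk scores, threshold, and rates below are invented purely for illustration, not from any real trial.]

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100
will_progress = rng.random(n) < 0.5   # ~50 of 100 would truly develop disease
# A hypothetical AI risk score: noisy, but correlated with true progression.
risk = np.clip(0.6 * will_progress + rng.normal(0.3, 0.15, n), 0, 1)

enrolled = risk > 0.5                 # enroll only model-flagged high-risk candidates
print(f"enrolled {enrolled.sum()}/{n}; "
      f"progressors among enrolled: {will_progress[enrolled].mean():.0%} "
      f"(vs. {will_progress.mean():.0%} without screening)")
```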
Brian Lehrer: Interesting. All right. We're going to take a short break. We'll bring Dr. Friedman, the cardiologist, into this right after that, and we'll put out a listener invitation for some of you to call in as we talk about AI in medicine: risks, benefits, opportunities, costs. Stay with us.
[MUSIC]
Brian Lehrer: Brian Lehrer on WNYC. We're talking about AI and medicine with Dr. Siddhartha Mukherjee, associate professor of medicine at Columbia University, cancer researcher, author of books including The Song of the Cell and The Emperor of All Maladies; Dr. Shinjini Kundu, physician and computer scientist at Johns Hopkins University, who specializes in radiology and has a PhD in artificial intelligence; and Dr. Paul Friedman, whom we'll bring into the conversation now, chair of the Department of Cardiovascular Medicine at the Mayo Clinic, who helped develop software to allow doctors and researchers to derive data and information using AI from electrocardiograms.
Dr. Friedman, as a cardiologist who obviously treats heart disease, the leading cause of death in the United States, tell us about the interaction between AI and people's ECGs or EKGs, or electrocardiograms that you've been working on.
Dr. Paul Friedman: Sure. Well, first, thank you very much for having me. It's a pleasure to be here, and I've enjoyed listening to the other panelists. The goal of this project when we first started out was to detect the presence of heart disease that people and their doctors don't know they have. As you pointed out, it's the number one cause of death in this country. What we did when we first started was to take an electrocardiogram, a standard ECG, that is, where you lay down and the doctor puts patches on your chest and records the electrical signals of your heart, and see whether we could get the kind of information that you would normally need a CT scan or an echocardiogram or an MRI scan for: whether there's a weak heart pump present. There's a reason we started with that particular condition. First, it's present in about 2% of all people, or 9% of people over the age of 60; it impacts seven million Americans. Second, if you know the condition is there, there are powerful, effective treatments: medications like beta blockers or ACE inhibitors, implantable devices like defibrillators, things that prevent bad outcomes, death and hospitalization.
The problem is detecting the disease has been difficult and expensive even in resource-rich environments like the United States. We wondered if we could use artificial intelligence. The specific technical form we used was convolutional neural networks, a form that was designed to mimic the way the human cortex sees and processes images, to see if it could detect subtle patterns in ECGs that even expert humans can't identify.
Along the lines of clinical trials and how we did it, here's how we first started. We did it retrospectively, meaning we took charts from roughly 50,000 people who had the gold standard test, an echocardiogram, and trained a network by showing it pattern after pattern after pattern: here's an ECG, is there a weak heart pump? Then we tested it. What we found was that the computer's ability to identify a weak heart pump was a very powerful test.
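[Technical sidebar: a minimal sketch, in PyTorch, of the kind of 1D convolutional network that can be trained on ECGs labeled against an echocardiogram gold standard. The layer sizes, sampling rate, and training details are illustrative assumptions, not the Mayo Clinic model.]

```python
import torch
import torch.nn as nn

class ECGNet(nn.Module):
    """Binary classifier: does this 12-lead ECG suggest a weak heart pump?"""
    def __init__(self, leads: int = 12):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(leads, 32, kernel_size=7, padding=3), nn.ReLU(),
            nn.MaxPool1d(4),
            nn.Conv1d(32, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),          # collapse the time axis
        )
        self.classifier = nn.Linear(64, 1)    # one logit per ECG

    def forward(self, x):                     # x: (batch, leads, samples)
        return self.classifier(self.features(x).squeeze(-1))

model = ECGNet()
ecg = torch.randn(8, 12, 5000)                # eight fake 10 s, 500 Hz ECGs
loss = nn.BCEWithLogitsLoss()(model(ecg), torch.zeros(8, 1))  # one training step's loss
```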
Normally, we measure the strength of a test with what's called an AUC: a perfect test is a one, a coin toss is a 0.5, and an exercise treadmill test, commonly done to identify heart disease, is a 0.85. This test, the computer's ability to identify a weak heart pump, was a 0.92, so a powerful test. Then we thought, well, that's only from the ECG, but we know that heart disease is a function of age and sex. We thought maybe if we tell the computer the age and sex of the person whose ECG it is, it will perform even better. We tried it, and it made no difference. We thought, how is that even possible?
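[Technical sidebar: the AUC Dr. Friedman cites can be computed directly from gold-standard labels and model scores; the handful of values below are made up purely to show the calculation.]

```python
from sklearn.metrics import roc_auc_score

weak_pump = [0, 0, 1, 1, 0, 1]                   # echocardiogram gold-standard labels
ai_score = [0.10, 0.40, 0.80, 0.70, 0.20, 0.90]  # model output per ECG
print(roc_auc_score(weak_pump, ai_score))        # 1.0 = perfect, 0.5 = coin toss
```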
Then we asked the computer. We created another network and said, "Can you tell me the sex of this person from their ECG?" The area under the curve was 0.97, almost perfect. A computer reading an ECG is better able to determine someone's sex than most people are by looking at them walking down the street. What was also interesting was looking at the age: we could estimate a person's age, but it turns out the network was determining their biological age. That is, if the computer said your age was 40 but you were actually, say, 70, and we looked at the charts, those were people who were fit and exercising and generally well. The converse was also true: people who were 40 but whose ECG the computer called 70 had multiple conditions like previous heart attacks and diabetes and high blood pressure. In short, it was measuring biological age.
Once we did that retrospective study, we looked at charts and saw it looked promising. We then did two prospective studies. You had asked about clinical trials, so I'll just take a quick second to tell you about those. In one, we made this tool available to primary care doctors. There were no cardiologists, no computer scientists; these were family practitioners, often in rural clinics, and nurse practitioners. In one group of clinics, we turned on the algorithm and said, whenever this practitioner orders an ECG, tell them whether or not there's a weak heart pump. In the other group, we ran it in the background and didn't tell them.
By the way, since it's software-based, we could deploy it rapidly: in the middle of the pandemic, over eight months, we enrolled over 23,000 people. This was the addition of artificial intelligence to a standard test that's already integrated into workflows, so we didn't need to teach anyone how to order it, and we didn't need to teach technicians how to do it, which is an important consideration because healthcare is so complex.
We found that the addition of this test increased new diagnoses of a weak heart pump, a potentially life-threatening condition that neither the patient nor the doctor was aware of, by 33%. What was really interesting was that the people who were most likely to make the new diagnosis were the nurse practitioners and physician assistants, because they were twice as likely to follow the advice of the AI, showing that people can practice at the top of their license. Now, in fairness, many of the internists probably had more complex patient panels.
Now, we did two additional steps that I'll briefly mention. One was we said, "What if we wanted to be able to detect this from home?" We used Apple Watches only because the raw ECG data was made available in HealthKit. We did this without support from the company. For people who had Apple Watches who recorded ECGs, we thought, could we determine if there's a weak heart pump present?
Brian Lehrer: People may not even realize this, but that's one feature of an Apple Watch. You can open the app--
Dr. Paul Friedman: Thank you. Yes, you record an ECG.
Brian Lehrer: Put your finger on the side, and your heartbeat comes up, that kind of thing?
Dr. Paul Friedman: That's right. It's recording the electrical signals of your heart. Your arms are working essentially as volume conductors, like wires, and they record the electricity the heart is generating on the watch, which sends it to the phone. We created an app that would allow anyone who had previously been seen at Mayo Clinic to enroll in the study if they elected to. To underscore just how powerful these digital tools are for transforming the acquisition of new knowledge and advancing science: usually, for a trial to enroll 2,500 people, it's tens of millions of dollars. Here, one part-time study coordinator enrolled roughly 2,500 people from 46 states and 11 countries in 5 months, and we obtained 125,000 ECGs. Think about how powerful that is. What we found was that those ECGs, once we retrained the AI to read the filtered watch signal, which is a little different from the standard one, were very powerful.
It wasn't quite as powerful as a 12-lead, but the AUC was 0.89, so a very powerful test for identifying heart disease. Now, a couple of important caveats. First, people stayed engaged in the study, and these were people ranging in age from 18 to 93. Second, you could say, well, what about health equity? We want healthcare to be available to everyone, and a watch is expensive. On the flip side, putting an echo machine or a CT scanner or an MRI scanner in a clinic in an under-resourced area, for example, is very difficult, but you could buy one watch relatively inexpensively and have anyone coming through be screened. I think these underscore the power of these technologies to help us identify disease.
Brian Lehrer: Fascinating. Now, listeners, we invite you to participate in this health convening segment on artificial intelligence in medicine. Any doctors or other health professionals with personal experience in using AI in medicine, we'd love to hear from you on anything, upsides or downsides, or tell us a story. 212-433-WNYC. Anyone more on the computer science end of this want to share any stories or experiences? 212-433-9692. Patients, you too, of course. Anyone listening now ever have a doctor tell you yet that AI discovered something medical about you? What was that like? Or anyone with any question for our guests? 212-433-WNYC. 212-433-9692.
Now, we spent the first half talking about all these opportunities and all these medical and research advances using AI. I want to turn now to bias and privacy concerns. On bias: back in 2019, scientists published a landmark study in the journal Science and found that an algorithm used to predict the healthcare needs of around 100 million people was biased against Black patients. The data set that the algorithm used was based on healthcare bills, which are easier to pull data from than patient charts.
According to NPR, the algorithm relied on healthcare spending to predict future health needs, but with less access to care, historically, Black patients often spent less, and as a result, Black patients had to be much sicker to be recommended for extra care under the algorithm. I'll give you another one. Speaking to our colleagues at the public radio show On Point, Yolonda Wilson, professor of healthcare ethics at Saint Louis University, said that bias can enter and could provide "cover" for doctors and healthcare systems for lapses in care. Here's 30 seconds of Dr. Wilson on On Point.
Yolonda Wilson: We already see those kinds of lapses in care for particularly Black patients and Latino patients. In terms of pain management, in terms of certain kinds of treatment options, we already see disparities there. In some ways, this kind of information could provide cover for those biases. Oh, I'm not biased. I'm just going where the data leads me to go.
Brian Lehrer: Dr. Mukherjee, let me ask you to weigh in on this. How do you think about the bias that may be baked into some of the exciting data that you've all been describing? Have you seen any other examples, besides the two we just mentioned, of how that could impact care?
Dr. Siddhartha Mukherjee: The quick answer is: absolutely. These algorithms can be biased. It's, I wouldn't say garbage in, garbage out; it's Data In, Data Out, DIDO, that leads to bias. If you happen to load your data with non-African American patients, non-Latino patients, you will get DIDO: the data that you send in will generate the data out. This is actually an incredibly important problem. It's been recognized widely now by several studies.
One of the things that we have to do as a community of people working in AI is to prevent that and to make sure that that doesn't happen. There are several initiatives to do exactly that. They're often privately funded. I would encourage a national conversation about this data, one that is already beginning to happen. I hope that all these efforts in the end prevent, or really blockade, that capacity for bias. This is a human project. Everyone talks about artificial intelligence as if it descended from Mars. No, we invented it. Because we invented it, we have all the reason to protect its sanctity, its sanity, its capacity to deliver accessible care.
You asked me to give you another example; I'll give you one more. Patients are very diverse in their presentation. In cancer, we know already from previous studies, long predating artificial intelligence, that if an African American patient and a non-African American patient walked into the same hospital, imagine there are two doors to the same hospital, with the same diagnosis at the same stage of cancer, they would often, depending on their health insurance and their socioeconomic status, get very different care and potentially have very different outcomes.
Now, AI is just an amplifying device for this. We know this for prostate cancer, we know this for other forms of cancer. AI is just a device to amplify this. The idea here is to watch out for it, be particularly careful in analyzing data, and when you analyze the data, be totally honest and transparent about how the data is stratified so that other researchers can understand whether there's been a bias or not, and point it out to you so that next time you do this study, there isn't that bias.
Brian Lehrer: I'll give one other example that we saw reported in prepping ourselves for this segment. It takes longer to order blood tests for Hispanic kids, either because of doctors' bias or something like, in some cases, it taking longer to find an interpreter if the family only speaks Spanish. This, from what I read, tells the algorithm that it takes longer for Hispanic kids to get sepsis, as if they were more resistant to it, but it's really about historic delays in treatment that the algorithm could misunderstand as resistance.
A follow-up for you, Dr. Mukherjee, as a cancer specialist. A listener wrote this text message a few minutes ago: "Earlier this year, I was given the opportunity to have my mammogram reviewed with the help of AI to better detect any anomalies. The cost was $40." It sounds like they got a physician-only screening that was included from the start, and if they paid another $40 upfront, they could get an additional AI screening. Is this something you know about, and does it imply financial barriers to effective screening?
Dr. Siddhartha Mukherjee: I do know about it. I think the model of upping the cost through AI is really a bad model. In fact, in medicine, I think the term artificial intelligence, applied to detection and the delivery of medicine, is really the wrong use of the word. It really should be augmented medicine or some other word that you can choose, because ultimately it's a human decision. We shouldn't take away from that. I'm not talking about drug discovery here, I'm not talking about the studies that cardiologists might do, I'm talking about the actual delivery and practice of medicine.
In that case, you can imagine AI as your benign whisperer into your ear saying, "Oh look, are you sure about that? Did you miss that upper left corner in that mammogram?" This cost upping is a very bad trend. I think that's the last place we should go, but rather the right-- Go ahead.
Brian Lehrer: Go ahead, rather? Go ahead. You can finish that comparison.
Dr. Siddhartha Mukherjee: Just to finish it. Rather, we should think about it as an augmentation of human skills, and the real delivery still happens through humans.
Brian Lehrer: Dr. Kundu, for you, a Pew study last year found that there is "wide concern about AI's potential impact on the personal connection between a patient and a healthcare provider." It says about 57% of respondents said the use of artificial intelligence to do things like diagnose disease and recommend treatment would make the patient-provider relationship worse. How do we ensure that it doesn't?
Dr. Shinjini Kundu: I think that's a great question. I would argue that the use of artificial intelligence at the point of care actually has the potential to make the patient-physician relationship better. If we think about a few trends that have been happening in medicine: first of all, one trend is that medical knowledge is growing at a remarkable pace. If you just look at the data from a few years ago, it would take 29 hours of studying per day for a medical student to keep abreast of all of the developments in primary care. That number just keeps getting larger and larger.
As you can see, the sheer volume of information available is overwhelming any one physician's capacity to keep on top of it or to know it all. I think that is an opportunity for artificial intelligence to curate some of the evidence-based best practices for physicians to keep up to date.
A second area is that we have a lot of data as we practice medicine: we have lab values, we have imaging tests, we have DNA tests, and the list could go on and on. Today, physicians are poring over all of this data and trying to make inferences. I think AI could be used intelligently to automate some of that, without taking away, of course, the art of being a physician, but to help the physician get through the sheer volume of data.
I can give you another example: paperwork. The burdens on a physician today, in terms of charting, talking to insurance companies, and insurance paperwork, have just been growing. I think that is a leading cause of burnout among physicians today, because most physicians went into medicine to make patients better and to help the patients, not necessarily to be in the back office doing paperwork.
I think if AI could help with some of that, actually it would free up a lot of the physician's time to do what the physician went into medicine and took the Hippocratic oath to do, which is to form a therapeutic healing relationship with the patient and to better understand the patient's concerns and to really spend more time at the bedside to take care of the patient. I think that would really be the true potential of artificial intelligence in healthcare delivery. That's a future that I hope that we work towards.
Brian Lehrer: We'll continue in a minute with this segment about AI in medicine, and it looks like we have some very interesting callers to add to the conversation. Callers, we're going to go right to you as we continue. Brian Lehrer on WNYC.
[MUSIC]
Brian Lehrer: Brian Lehrer on WNYC with our special one hour health convening on AI in medicine with Dr. Siddhartha Mukherjee, associate professor of medicine at Columbia University, cancer researcher, author of books including Song of the Cell and The Emperor of All Maladies. Dr. Shinjini Kundu, a physician and computer scientist at Johns Hopkins University. She specializes in radiology and has a PhD in artificial intelligence, and Dr. Paul Friedman, chair of the Department of Cardiovascular Medicine at the Mayo Clinic in Rochester, Minnesota, who helped develop software to allow doctors and researchers to derive data and information using AI from electrocardiograms as he described a little earlier. Beth in Princeton, you're on WNYC. Hello Beth. Thank you for calling in.
Beth: Hi, thanks for taking my call. What I want to talk about is a company that I've invested in that has a remote patient monitoring device that's currently being used for dialysis patients. It measures, in a bloodless way, potassium, hemoglobin, and hematocrit, among other biometrics. Patients who are on dialysis are at risk for catastrophic health problems if their levels of these biometrics get thrown off. It looks like a bandage. It's already out in the market, but it is still a startup company, and it's using AI to revolutionize the care for these patients on dialysis.
They don't have to go get regular blood draws. The data is being transmitted to their doctors or to their electronic medical records, letting the doctor know if there's a problem so there can be an intervention before the patient has to be hospitalized, saving a lot of money and a lot of grief.
Brian Lehrer: Very interesting. Beth, thank you for telling us about it. Maureen in Rye, you're on WNYC. Hello, Maureen.
Maureen: Oh, hello. Hi. Thank you for taking this call. It's been a long time. I'm actually a speech scientist, with a PhD from the City University Graduate Center in Speech and Hearing Sciences. I completed work on measuring motor control in the late 20th century and patented a method for rating motor dysfunction in collaboration with Mount Sinai. What's the AI connection? The method is acoustics. Acoustics is the measurement of sound, and the research had to do with looking at cerebellar patients, with imaging of the cerebellum, measuring the atrophy in the cerebellum, and correlating it to a mathematical [unintelligible 00:42:24]. It eventually became a mathematical algorithm that calculates the loss of motor function in patients with neurodegenerative conditions.
The research is powerful, and my most recent work has to do with monitoring motor control in pediatric care. The reason that we need to monitor motor control across neurodegenerative as well as neurodevelopmental conditions is that, especially in pediatric care, there's a need for children who have a neurodevelopmental problem to be monitored. That can be done through a mathematical algorithm that measures the strength of the sound signal in speech.
Brian Lehrer: When you say neurodevelopmental problems in children, that's autism, for example?
Maureen: Yes. Autism is overused. It's a term that leaves us without an understanding of what's going on in the actual child. We need that to have much more rigorous differential diagnosis so that children can be treated.
Brian Lehrer: Yes. I just wanted to be clear for the listeners that that was in an area whose terminology they may be more familiar with. Maureen, thank you very much. Now a follow-up question for Dr. Kundu, because I know you've been focusing on the brain as well. In some of your research, you're looking, from what I've read, at the link between genes, behavior, and brain structure in people with autism. You recently published a study on your findings. What would you add to Maureen's call?
Dr. Shinjini Kundu: Yes, absolutely. I 100% agree that autism is a broad term that encompasses a lot of different symptoms and a lot of different severities of symptoms, and we need a more precise way to better understand what's going on with the person. I'd like to talk a little bit about this study that you mentioned. This study was just published in Science Advances last week; it's hot off the press. The idea is that the same AI tool that we used for osteoarthritis could also be used to study conditions of the brain. We looked at common genes associated with autism, and we tried to find the correlation between perturbations in these particular genes, the changes in the brain, and the behaviors. The cool thing is that we found a strong correlation between perturbations in the gene and changes in the brain.
We were able to actually visualize what these changes are in the brain to better understand what's going on. Bear in mind that today, for most people looking at brain images, like myself, neuroradiologists, other types of physicians, you generally can't tell who has autism and who doesn't. You generally can't tell who has a genetic mutation and who doesn't. The fact that this AI model that we had developed using TBM was able to find that strong relationship and actually show us visually what the changes in the brain are is really exciting. It starts to break down a condition that is very heterogeneous and better characterize the biology behind it.
I think that would be exciting because that might help us come up with new therapies that target those biological pathways.
Brian Lehrer: It's really interesting. I want to close in our last five minutes or so on the topic of oversight. I've read that one of the biggest barriers to implementation of, we'll call it augmented medicine like you called it, Dr. Mukherjee, that was you, right?
Dr. Siddhartha Mukherjee: Yes.
Brian Lehrer: Rather than just AI. But one of the biggest barriers is a lack of trust stemming from a lack of transparency. Many doctors and other clinicians, from what I've read, are not yet confident that AI systems can be trusted, especially if they weren't part of building that system themselves. I guess my question, and Dr. Friedman, I'm going to go to you on this since you're at the Mayo Clinic: who is mostly building healthcare AI besides places like the Mayo Clinic? Are they mostly private tech companies, and is that a cause for concern?
Dr. Paul Friedman: There's really a mix of interested parties in this space. The tools are built by tech companies, and academic centers are heavily engaged, as you've heard, in developing the tools. They typically work under the safeguards of institutional review boards that review any research that involves humans or human data. It's a pressing question, I think, as a society. Mayo Clinic has participated in the creation of the Coalition for Health AI, or CHAI, a public-private partnership that has multiple stakeholders working together to define core principles for health AI developers, users, and healthcare organizations, with the goal of really addressing these issues. I think there is a need for non-profit coalitions to review it.
Additionally, I think all of us involved in this space recognize the importance of having a broad consensus of stakeholders and engaging them, and that can take multiple forms. If we do this incorrectly, the hazards you mentioned before, the risk of bias, of getting wrong answers, of it not being widely applicable, of augmenting disparities, are very real. On the other hand, if we do this right, because AI learns from what it's trained on, and if we involve studies across multiple countries and populations, it stands the opportunity to bring us together as a community. AI done properly will let humans focus on these human connections and improve all of our skills. This need for broader coalitions is important.
Brian Lehrer: Dr. Mukherjee, I'll give you the last word, on the related question of FDA approval. The New York Times reports that the FDA has approved many new programs that use AI, but doctors are skeptical that the tools really improve care or are backed by solid research. It says large health systems like Stanford, Mayo Clinic, and Duke, as well as health insurers, can build their own AI tools that affect care and coverage decisions for thousands of patients with little or no direct government oversight; that, from The New York Times. What kinds of oversight or vetting of AI programs do you think is taking place or needs to take place more? For that matter, since part of the purpose of this healthcare convening is to better inform ourselves so that we in the press can act as watchdogs, as is our role, what needs to take place?
Dr. Siddhartha Mukherjee: I'll take the press question first. I think there's an important role for the press to first understand what AI systems are being used; there are a variety of them. The first thing is to have an informed discussion and to really bring the press, which acts as one of the checks and balances of a system, to understand the system. The questions you might want to ask are: what data was used to generate the system? How wide, how large was that data set?
In AI, the most typical approach is to use a training data set and then a test data set. You train on something, just like humans: you train on something, and then you take an exam, and that exam is the test.
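[Technical sidebar: the train/exam split Dr. Mukherjee describes, in toy form; the synthetic data and the 80/20 split below are conventions chosen for illustration.]

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, random_state=0)  # synthetic "patients"
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)       # hold out 20% as the "exam"

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(model.score(X_test, y_test))             # graded only on unseen data
```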
Brian Lehrer: We have 20 seconds. Go ahead.
Dr. Siddhartha Mukherjee: That's the press. Then as far as the FDA is concerned, it's a similar story, similar questions: educate the FDA on what the limits and benefits of AI are and allow them to regulate these multiple companies to make sure that the outcomes are correct.
Brian Lehrer: Listeners, that concludes this Brian Lehrer Show special, our convening on AI in medicine. Health coverage on WNYC, including this, is supported in part by the Alfred P. Sloan Foundation; we thank them. We thank our guests one more time: Dr. Mukherjee, Dr. Kundu, Dr. Friedman, thank you so much for donating some of your valuable time today.
Dr. Siddhartha Mukherjee: Thank you very much.
Dr. Paul Friedman: Thank you.
Dr. Shinjini Kundu: Thank you.
Brian Lehrer: We thank Brian Lehrer Show producer Amina Srna, who did most of the research for this segment.
[music]
That's The Brian Lehrer Show for today, which is produced by Mary Croke, Lisa Allison, Amina Srna, Carl Boisrond and Esperanza Rosenbaum. Zach Gottehrer-Cohen edits our national politics podcast. Megan Ryan is the head of live radio. We had Shayna Sengstock and Milton Ruiz at the audio controls today. Stay tuned for All Of It.
Copyright © 2024 New York Public Radio. All rights reserved. Visit our website terms of use at www.wnyc.org for further information.
New York Public Radio transcripts are created on a rush deadline, often by contractors. This text may not be in its final form and may be updated or revised in the future. Accuracy and availability may vary. The authoritative record of New York Public Radio’s programming is the audio record.