AI’s Potential in Healthcare: A Double-Edged Sword

What if your doctor was a robot?

Not long ago, I attended the 2023 Atlantic Festival. Knowing it to be a gathering of great minds who think differently, I approached the festival excitedly. I come from a background in medicine and science, and my curiosity had been ignited by the rapid development of artificial intelligence (AI). I was ready to learn how these fields in particular would interact in the coming years.

What I did not expect was the profound focus on AI’s use in healthcare throughout the festival. In nearly every panel or presentation we saw, questions about the implications of AI advancement for healthcare persisted. The discussions left me in awe of the power of machine learning, my head buzzing with the prospects of advancements in diagnosis, care access, therapies, and more. I was also quaking with concern over who was managing AI implementation in healthcare and its unintended consequences.

AI Revolutionizes Healthcare

Numerous new and incredible applications of AI to the healthcare setting were presented, each seeming to be greater than the last. AI is being used to assist doctors during colonoscopies, acting as a kind of “auto-correct” and identifying possible areas of pathology the physician may have otherwise missed. Similarly, in robotic and laparoscopic surgery, AI can highlight significant anatomical features and perform minor tasks.

We are even making leaps and bounds in the space of cancer diagnosis and treatment. Companies like GRAIL are harnessing the power of AI to develop multi-cancer early detection (MCED) tests that use a single blood draw. This makes the future of cancer diagnosis look incredibly efficient and effective, saving lives in the process. 

Representation of a cancer cell

Meanwhile, Genentech is harnessing AI to discover new cancer-targeting antibody drugs, antibiotics, and other pharmaceuticals. Such drug development requires vast amounts of complicated genetic and protein data and traditionally takes years, and often luck, but AI can plow through that data in a fraction of the time, leading to previously inaccessible drug discoveries.

Less flashy but equally important, AI also holds promise for more efficiently assessing individual patient needs, risks, and outcomes, bolstering the development of personalized care. This has the potential to tailor care to each patient’s specific health needs to a far greater extent than any healthcare professional could manage alone.

This is particularly powerful when combined with the use of telemedicine, allowing for individualized access everywhere. Utilizing AI-powered chatbots and virtual assistants continues to expand healthcare access dramatically. All of these exciting discoveries provoke notable optimism regarding the improvements to healthcare that AI will bring.

History Repeats Itself

A parallel motif at the festival was a comparison of AI’s drivers to those who pioneered the internet and social media. In that case, people like Zuckerberg created tools of immediate global access and believed them to be a good thing. Whether or not you believe that is true overall, there have still been many unintended consequences: think mental health, radicalism, and misinformation. By the same token, those who developed the first nuclear chain reaction did not intend to create a bomb.

Reflecting on chain reactions of the past

Right now, the sheer number of AI-driven changes feels overwhelming and impossible to keep up with. In the opening of his book, “The Coming Wave,” panelist Mustafa Suleyman, co-founder of DeepMind and Inflection AI, compares this massive proliferation of technology to a tsunami: intimidating and unstoppable. He also notes that for past technologies with the same propensity to advance humankind’s general abilities, like fire, printing, engines, and the internet, such proliferation is the default.

We are about to be swept by an unstoppable wave of changes, a Cambrian explosion of innovation and solutions. Yet, as we have learned throughout history, new solutions tend to uncover new problems.

The Other Side of the Coin

With every solution comes new problems to solve. Hospitals already have a habit of overbilling. AI could be used to reduce the associated costs, or it could be used to make them even higher and more difficult to dispute. AI is now being used to predict recovery times for some hospital patients, as well as individualized health risks that the doctor may not have otherwise prepared for. We even have medical wearables that can predict a Covid diagnosis days ahead of time, as one panelist testified from firsthand experience. These are fantastic abilities to have medically, but companies, organizations, and individuals could take unethical advantage of them. While AI brings a myriad of benefits to the medical space, there is an opening for misuse.

Besides the strings pulled by hospitals and insurance companies, AI and related technology have the potential to increase rates of overdiagnosis and overtreatment. Typically, this occurs when patients receive unnecessary tests that uncover subclinical abnormalities, leading to follow-up tests, increased costs, and extra stress for the patient and their family over a technical abnormality that would never have affected the patient in any noticeable way.

Between broad diagnostic tests, AI-analyzed data from wearable devices, and AI predictions of patient risk factors and outcomes, healthcare workers could have access to more health data than ever. While it is better to overdiagnose than underdiagnose in most cases, the potential extra costs and mental strain associated with excess health data are worth being mindful of.

There is also the potential for harmful bias in AI. An AI model is limited by the data it receives. In the case of medical history, many publications until recent years have poorly represented demographic diversity, often significantly overrepresenting individuals of Caucasian descent. Other pivotal medical experiments were rooted in racism and were extremely unethical, such as the Tuskegee syphilis experiment. Meanwhile, the strong majority of published medical research was conducted by white men.

In the programming community, they say “garbage in, garbage out” in reference to the nature of machine learning: if you train an AI on bad data, it will produce results of similar quality. Though I wouldn’t call the aforementioned studies garbage, since many had pivotal impacts on the field of medicine throughout history, they have obvious and sometimes egregious faults that could influence AI in unforeseeable ways down the line.
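To make “garbage in, garbage out” concrete, here is a minimal sketch using entirely hypothetical toy data (the group labels, outcomes, and 95/5 split are my own illustration, not from any real study). A naive model trained on a dataset that overrepresents one demographic group simply learns that group’s pattern and applies it to everyone:

```python
from collections import Counter

# Hypothetical toy data: each record is (demographic_group, outcome).
# Group "A" makes up 95% of the training set, mirroring the kind of
# skewed representation found in historical medical literature.
train = [("A", "low_risk")] * 95 + [("B", "high_risk")] * 5

def fit_majority(records):
    """'Train' by memorizing the single most common outcome overall."""
    counts = Counter(outcome for _, outcome in records)
    return counts.most_common(1)[0][0]

prediction = fit_majority(train)

# The model predicts the majority outcome for every patient, including
# members of underrepresented group "B", whose actual pattern it never
# learned -- biased data in, biased predictions out.
print(prediction)  # low_risk
```

Real medical AI is of course far more sophisticated than this majority-vote toy, but the underlying failure mode is the same: a model can only reflect the population it was trained on.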

AI has a number of innate limitations too. It can’t complete tasks that require real-time reaction, so it won’t be replacing surgeons or the physical exam soon. In fact, the American College of Surgeons (ACS) says that AI is primarily used in diagnostic specialties for the time being. Even radiologists are using it only to enhance their abilities rather than replace them. Still, the ACS points out that a big issue now is that AI has advanced beyond the medical infrastructure needed to support it. With this in mind, it may only be a matter of time before some of these limitations change, but for now it seems the medical community can take a breath regarding their job status.

Coming to agreement over the use of AI in healthcare

A Call For Control

The tone of the panelists was generally one of hope and excitement for the improvements AI will bring to the field of medicine and beyond. This was quickly followed by an echo of concern over AI’s potential to worsen disparities, create more problems, and drift away from its intended purpose. To address this concern, we need to control what we feed the AI so that it remains true to positive human values. Though it’s not certain this proliferation can even be controlled, regulations and corresponding enforcement are vital. An AI Bill of Rights issued by the White House provides guidance for moderating the use of AI to maintain its safety and morality; however, this document is not yet law. Individuals and organizations may still use AI problematically without repercussions.

AI’s benefits and detriments depend on how it’s trained, who controls it, and how they use it. Legislation can direct AI use toward positive outcomes, but the government tends to move slowly while AI continues to hurtle forward. This leaves who uses AI, and how, up for grabs, with more users every day. Moving forward, AI will only become more involved in the healthcare setting. Though AI will not replace the value and trust gained from face-to-face patient care, even your own doctor may soon be using it in the office to assist them, if not already.

Final Thoughts

AI is now everywhere, from businesses to doctors’ offices to the phone in your own pocket. With it comes a wealth of data and improvements in efficiency and quality, but also a fair share of risks. In healthcare, AI has an opportunity to cut costs, increase access and personalization, and improve the healthcare experience on both sides, yet it has equal ability to do the opposite. Right now AI regulations remain weak, and the dialogue surrounding AI seems to mimic that of previous waves of technology. Though most of us will be unable to alter the course of AI governance at the individual level, knowing AI’s capabilities and how it is used in personally relevant environments is a good way to be prepared for the technological changes we are witnessing every day.

Written by Calvin Floyd.

Images produced using Midjourney.