
AI in Healthcare

By Ann Constantino


Artificial Intelligence (AI) is a rapidly growing branch of computing that simulates the problem-solving abilities of the human brain. Computers can be trained to think the way we do: by taking in and analyzing data and using it to make decisions. AI in many forms is spreading rapidly through the software that powers electronic products, e-commerce, education, fraud prevention, facial recognition, customer service, and more.

You may have heard about the pushback from creative professionals who do not want to be replaced by AI as the source of an image or a piece of writing, creations once believed to require purely human talent, experience, and training.

You may also have heard that AI falls short when it comes to ethics: a human sense of right and wrong is very difficult to put into terms computers can quantify and analyze the way your 243 clicks on smoothie recipes lead to a slew of blender advertisements in your social media feed the next day.

You may have also heard of the potential dilemma of a self-driving car given a choice between braking to avoid running over someone in the crosswalk, and thereby getting rear-ended by an out-of-control drunk driver behind you, or not braking and hitting the pedestrian. Whom does the car decide to sacrifice?

A matter of life and death

AI can be trained to make hospitals more efficient and speed up complex diagnoses.

It’s estimated that over 35% of organizations use some form of AI to conduct their affairs, a 270% increase over the past four years. The market value of AI software is expected to reach $22 billion within the next four years.

Healthcare is one of the fields beginning to see an increase in AI technology, and not everyone is enamored of the development.

In certain medical settings, AI is undeniably helpful. It can be trained to make hospitals more efficient, speed up complex diagnoses, read and analyze scans and lab reports, and even perform parts of surgeries with robotics.

During a time when healthcare staffing is stretched and fragmented, these functions of AI are arguably capable of saving lives. In the US, human error in healthcare causes an estimated 400,000 preventable cases of harm and 100,000 deaths each year. AI has the potential to reduce those numbers significantly.

Patient trust

However, it has long been known that trust between provider and patient is a huge part of the success of a treatment, as well as of a patient’s adherence to suggested healthy lifestyle changes, and overall positive feelings about one’s health. This attitude of mutual respect lends itself to a patient staying longer under a certain provider’s care, and that kind of stable history is another factor in positive outcomes.

The build-up of two-way trust over time leads to more collaborative decision-making, a shared sense of responsibility, and more patient-centered care, all ideals that have gained a lot of traction since more holistic medicine began to replace the doctor-as-God model half a century ago.

A recent study by the Pew Research Center revealed that most Americans are not ready to trust AI when it comes to their healthcare. Sixty percent of the roughly 11,000 Americans surveyed said they would be uncomfortable going to a provider who relied on AI to diagnose a disease or recommend treatment. Fifty-seven percent said their relationship with their provider would suffer, and only 38% believed AI use would lead to better outcomes, while 33% believed it would lead to worse ones.

Sixty percent would not want AI-powered robots to participate in their surgery, and almost 80% would not want AI involved in their mental healthcare. There is also fear of security breaches, a concern across many fields beyond healthcare, as AI’s presence throughout our world accelerates.

Proponents of AI

AI is blind to color and class, social factors that can lead to lower-quality care.

AI’s proponents attribute this public resistance to AI in healthcare to ignorance of, or lack of familiarity with, how AI works. That opinion is supported by survey respondents’ approval of AI for specific applications such as detecting skin cancer, a practice dermatologists already use with some success.

Survey respondents also like the fact that AI is blind to color and class, social factors that studies have shown lead to lower-quality care for people of color and low-income patients.

Perhaps most significant is the number of survey respondents who simply want the adoption of AI in healthcare to slow down, and this is where intangibles such as trust and personal rapport between provider and patient come into play. Patients believe they have a right to be a part of their healthcare decision-making, and as the last three years of pandemic mismanagement have demonstrated, the erosion of trust can create divisive rifts among the many layers of healthcare infrastructure, including government agencies and lawmakers. Patients feel left out of a process that often seems to regard them as mere statistics to be digested by faceless, profit-driven entities dictating their care.

It behooves the forces in charge of the implementation of AI in healthcare settings to be mindful of public resistance rather than dismissive of it. A 2019 report out of Duke University states: “The idea of viewing physician-patient relationships as a core element of quality health care is not something new. Effective physician-patient communication has been shown to positively influence health outcomes by increasing patient satisfaction, leading to greater patient understanding of health problems and treatments available, contributing to better adherence to treatment plans, and providing support and reassurance to patients. Collaborative decision-making enables physicians and patients to work as partners in order to achieve a mutual health goal. Trust within all areas of the physician-patient relationship is a critical factor [in positive outcomes].”

Cautionary terms

More cautionary terms come from the well-respected British medical journal The Lancet: “AI does not have voluntary agency and cannot be said to have motives or character. Promulgating trust in AI could erode a deeper, moral sense of trust. Embracing trust in AI as if AI were a moral agent also unwittingly fosters diffusion of responsibility.” There is a further concern that the issue of accountability, or the lack thereof in the case of AI, may leave patients without recourse when errors occur in their treatment or care plans. The report continues: “Absolving physicians of blame in times of error while muting praise for wise decisions takes medicine in the wrong direction. Although AI, like a faulty surgical instrument, might be causally implicated, we cannot rightly assign moral responsibility to it. Whether future versions of AI can be regarded as moral agents is only a matter of speculation.” The report asserts that the trust patients place in their providers must not be compromised, in order to preserve the relationship known to be crucial to successful care: trust voluntarily extended by the patient to the provider in willing vulnerability, and trust voluntarily accepted by a provider willingly accountable for giving the best care she can.

We humans are all in this together. If we are to benefit from AI and not be enslaved by it, deciding when and where it can be helpful in healthcare must involve our conscience and ethical compass, the human tools we use every day to guide our actions. Removing these precious human elements from the field of healthcare would rob us of what matters most in healing: connection, relationships, and shared responsibility.

Ann Constantino, submitted on behalf of SoHum Health’s Outreach department.
