Will Doctors Ever Trust AI?

Even before the COVID-19 pandemic, doctors around the country were burning out. Worker burnout is an issue in many industries, but it can have serious consequences in healthcare in particular. Overwhelmed, overworked, and under-supported workers run the risk of making errors on the job. And when your job involves administering complicated medical treatments and saving patients’ lives, the stakes are that much higher.

With so much on the line, you might think healthcare professionals would be eager for workload-reducing, time-saving technology. But, in fact, the medical field’s relationship with tech is a bit more complicated than that.

In theory, technology could do prognostic and diagnostic work more quickly than a doctor, and just as accurately, which could inform triage and help direct doctors' attention to where it is needed most.

One ophthalmologist spoke with CNN recently about what AI could do for the overworked doctors in his field. He partnered with an AI research center to develop an algorithm that can diagnose eye diseases in around 30 seconds. In testing, the algorithm's accuracy matched that of the doctors themselves, returning the correct diagnosis 94.5% of the time.

The above is just one example of how AI could lessen the burden on overworked physicians, bettering both their own health and the well-being of their patients. But hospitals — and doctors — are still much slower to adopt this kind of technology than professionals in other industries.

Why Doctors Are Skeptical of Automation

Healthcare automation technology faces a number of hurdles before it can be implemented and relied upon, but the most difficult for many solutions to overcome is the doctors themselves. In a 2019 Medscape survey, nearly half of physicians in the U.S. reported being uncomfortable with or anxious about AI-powered software. Many don’t trust it to be accurate enough. Others worry that it’s here to replace them.

But many AI researchers recognize the immense potential new technology has for bettering the healthcare system. How can they convince doctors to see it that way, too?

Building Trust in Automated Systems

When dealing with broad skepticism of an entire industry, in this case healthcare AI technology, it becomes very important for companies to differentiate themselves from the rest of the field. Not all AI systems are created equal, and each one should be judged on its own merits. Companies need to build consumer trust in their own specific AI systems in the following ways:

1. Undersell and overdeliver.

When introducing any new technology, it’s important to properly manage expectations and not overhype the solution’s capabilities. IBM’s Watson Oncology AI was marketed on exaggerated claims that sold it as a revolution in cancer diagnosis, but reports showed that it consistently returned inaccurate results that could be unsafe. Tesla made a similar mistake in a different industry by overselling its self-driving cars as fully autonomous, until a number of accidents made it clear they weren’t.

In both cases, bold, attention-grabbing claims about a technology’s abilities drew even greater negative attention when the technology fell short: the bigger the promise that proves false, the bigger the backlash.

Getting buy-in for an AI initiative is important, and it can be tempting to make lofty claims and hype up a new product. However, to start solving the long-term trust problem, AI companies and healthcare organizations should take care to verify all claims and err on the side of underselling. It’s better to exceed expectations than fail to live up to them.

2. Be clear that humans are still in control.

No existing AI system runs so independently that it never needs human oversight or review, especially not in the healthcare space. Yet doctors often say that fear of giving complete control of their patients' wellbeing over to a machine is a key factor in their apprehension regarding AI. Clear communication regarding the nature of the AI tool as it relates to doctors’ and other healthcare professionals’ day-to-day work is essential to dispelling this mindset.

The fact is, even the most advanced and capable AI systems are no replacement for the value of human doctors and medical teams. Accurate diagnoses are only part of a dynamic relationship between patients and their healthcare providers. When used properly, AI streamlines those processes to promote deeper healthcare relationships. Be sure to emphasize the continued role of human safeguards and compassion in healthcare AI to put doctors’ minds at ease.

3. Highlight the things AI can handle.

Of course, you don’t want to make the AI system sound so limited in scope that it seems not worth the investment. There are plenty of AI success stories in which the technology has proven faster and more accurate than the people who rely on it.

For example, Google’s LYNA (LYmph Node Assistant) has proven able to analyze pathology slides with greater precision than human pathologists. When distinguishing between slides that exhibited cancerous cells and slides that didn’t, LYNA was correct 99% of the time.

The best way to market any AI solution is to emphasize its capabilities in a way that promotes — but does not exaggerate — its specific value to human users. It shouldn’t be marketed as a catch-all, cure-all solution, but rather a means of completing a particular task. This helps limit the hype, while still making the case that the system adds real value and improves doctors’ ability to do their jobs.

4. Use big, accurate datasets.

Diagnostic AI is based on machine learning and pattern recognition, which means the bigger and better the dataset it’s trained on, the more accurate its conclusions will be.

For instance, researchers at the University of California, San Diego trained an AI system on records from more than 1.3 million patient visits to a medical center in China, including highly specific doctors’ notes and lab test results. When put to the test, the AI returned diagnoses of glandular fever, roseola, chicken pox, hand-foot-and-mouth disease, and different strains of the flu, maintaining an accuracy rate of 90-97%.
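
To make the data-size point concrete, here is a minimal, illustrative sketch in Python. It uses scikit-learn and its bundled breast-cancer dataset purely as a stand-in for real clinical records (it does not reproduce any of the systems described above); it simply shows how the same model tends to become more accurate as the amount of training data grows:

```python
# Illustrative sketch only: more high-quality training data usually means
# better diagnostic accuracy. The dataset and model here are stand-ins,
# not any of the clinical systems discussed in this article.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Load a small, public diagnostic dataset (benign vs. malignant tumors).
X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0, stratify=y
)

# Train the same model on progressively larger slices of the training data
# and measure accuracy on the same held-out test set.
for n in (50, 150, len(X_train)):
    model = RandomForestClassifier(n_estimators=200, random_state=0)
    model.fit(X_train[:n], y_train[:n])
    acc = accuracy_score(y_test, model.predict(X_test))
    print(f"trained on {n:3d} records -> test accuracy {acc:.3f}")
```

In this toy setting, accuracy typically climbs as the training slice grows, which is the same dynamic that makes large, carefully curated clinical datasets so valuable for diagnostic AI.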

Final Thoughts on AI in the Healthcare System

The bottom line is, when it comes to AI, data matters. Make sure your systems are trained on as much high-quality data as possible, and use that as a major selling point for your solution.

AI systems have improved efficiency, productivity, and accuracy in a wide variety of industries, and can offer the same benefits to healthcare. On top of that, wider adoption of AI-powered healthcare systems will fuel development of even more powerful healthcare technologies. Assuaging doctors’ concerns and demonstrating the real value of AI systems is the first step in launching healthcare into a new and exciting future.