Smile-on News

20 June 2018

AI in healthcare and diagnostics

The purpose of artificial intelligence (AI) is to have a machine mimic human cognitive function with a high degree of precision and accuracy. Today, AI is part of everyday life, used in areas such as smart personal assistants, transportation and web browsing, and its scope has greatly increased over time. At its 2011 Customer 360 Summit, Gartner predicted that by 2020, 85% of customer interactions will be managed without a human.

AI has also made an impact in healthcare, where its use is becoming more prominent due to the increased availability of healthcare data and rapid progress in analytical techniques. Common ways in which AI has changed the healthcare industry include data management, repetitive jobs such as analysing X-rays (which can now be carried out by robots), treatment design, and even virtual medical assistants. For example, the start-up company Sense.ly has developed Molly, a virtual medical assistant that monitors a patient’s condition and follows up with treatments between doctor visits; it has created a similar assistant, known as Olivia, for the NHS. In another example, Boston Children’s Hospital has developed an app for Amazon Alexa-enabled devices that gives basic health information and advice to parents of ill children. The app answers questions and judges whether symptoms require a doctor’s visit.

Greater precision in complex surgery

Experts have stated that robotic or robot-assisted surgery allows doctors to perform complicated operations with greater precision, thanks to improved visualization and enhanced dexterity. Compared with open surgery, robotic surgery is minimally invasive, allowing surgeons to perform complex tasks through small incisions. Surgical robots are self-powered and can be programmed to aid in the positioning and manipulation of surgical instruments, allowing for better flexibility, control and accuracy.

The da Vinci surgical system, from Intuitive Surgical Inc, is considered to be one of the world’s most advanced surgical robots. It has robotic limbs with surgical instruments attached and provides a high-definition, magnified, 3D view of the surgical site. The surgeon controls the machine’s arms from a computer console near the operating table, which makes it possible to operate successfully in tight spaces and reduces the margin for error.

Robotics in dentistry

More recently, AI has also emerged in the dental field. A robot developed by Beihang University in Beijing and the Fourth Military Medical University’s Stomatological Hospital carried out a dental operation without human aid for the first time in September 2017. Medical staff were present during the one-hour surgery in Xian, China, but did not play an active role. In the procedure, two new teeth created by 3D printing were implanted into a woman’s mouth. Dr Zhao Yimin, who works at the hospital, stated that the robot was designed to carry out dental procedures while avoiding human error.

The robot was built to help address China’s shortage of qualified dentists; the number of people in need of new teeth in China has been reported to be around 400 million. Experts have warned that Hong Kong and Singapore are also facing a shortage of dentists. Patients are also at risk of poor surgery carried out by unqualified dentists, which can lead to further complications.

Benefits of robotic surgery for patients include a reduced risk of infection due to smaller incisions, reduced blood loss, less pain, minimal scarring, shorter hospitalization and faster recovery. Even so, the safety of these operations is a matter of concern to some, as is the prospect of healthcare professionals losing their jobs to machines.

The challenges of AI

Since AI is fairly new, its reliability can be questionable. This was addressed in the BBC article ‘The Real Risk of Artificial Intelligence’. Here is an excerpt from the article:

“Take a system trained to learn which patients with pneumonia had a higher risk of death, so that they might be admitted to hospital. It inadvertently classified patients with asthma as being at lower risk. This was because in normal situations, people with pneumonia and a history of asthma go straight to intensive care and therefore get the kind of treatment that significantly reduces their risk of dying. The machine learning took this to mean that asthma + pneumonia = lower risk of death.”
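The confounding described in this excerpt can be illustrated with a toy simulation. The numbers below are invented for illustration and are not taken from the study: asthmatic pneumonia patients are routed to intensive care, so their recorded death rate ends up lower, and a model trained only on outcomes would score asthma as protective.

```python
import random

random.seed(0)

def simulate_patient():
    """One synthetic pneumonia patient: (has_asthma, died)."""
    asthma = random.random() < 0.2
    treated_in_icu = asthma          # care policy confounds the data
    base_risk = 0.15
    # Aggressive ICU treatment cuts the true risk of death.
    risk = base_risk * (0.3 if treated_in_icu else 1.0)
    died = random.random() < risk
    return asthma, died

cohort = [simulate_patient() for _ in range(100_000)]

def death_rate(group):
    return sum(died for _, died in group) / len(group)

asthmatics = [p for p in cohort if p[0]]
others = [p for p in cohort if not p[0]]

# The recorded death rate for asthmatics is lower, because the
# treatment that produced it is invisible in the outcome data.
print(f"death rate with asthma:    {death_rate(asthmatics):.3f}")
print(f"death rate without asthma: {death_rate(others):.3f}")
```

A learner trained on this cohort would see exactly what the article describes: asthma + pneumonia = lower risk of death, an artefact of the care pathway rather than of the disease.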

On another note, AI must be reliable enough to keep sensitive information, such as finances and addresses, secure. AI also has to be kept up to date with new health cases and innovations, which requires further programming and upgrades, although this is not too dissimilar to humans undertaking further training courses. The major difference is that AI cannot currently learn and adapt freely.

An adaptable human workforce

Due to these concerns, one of the challenges AI faces is widespread clinical adoption. To realize the value of AI, the healthcare industry needs to create a workforce that is knowledgeable and comfortable in using AI technology. Having a workforce in place would also address the challenge of training doctors and patients to use AI, as some may find it difficult or may not be open to accepting information given to them by a machine.

Adhering to regulations is also a challenge for AI in healthcare. In the US, FDA approval is needed before an AI device or program can be used, especially while the technology is at an early stage. The existing approval process deals with hardware more than with data, so new regulations may be required to cover the data AI produces.

According to a 2015 study of death rates, robotic surgery was involved in the deaths of 144 people between 2000 and 2013. Some forms of robotic surgery are much riskier than others: the death rate for head, neck and cardiothoracic surgery is about 10 times higher than for other forms.

Robotic surgery has grown greatly in recent years. Between 2007 and 2013, patients underwent more than 1.7 million robotic procedures in the US, the vast majority in urology and gynaecology. Yet Jai Raman of Rush University Medical Center in Chicago states that no comprehensive study of the safety and reliability of surgical robots has been performed.

In a perspective piece from The New England Journal of Medicine published in March 2015, the authors acknowledged the remarkable benefits machine learning can have on healthcare, but they also said these benefits can’t be realised without considering the ethical consequences.

‘Because of the many potential benefits, there’s a strong desire in society to have these tools piloted and implemented into health care,’ said the lead author, Danton Char, MD, assistant professor of anesthesiology, perioperative and pain medicine. ‘But we have begun to notice, from implementations in non-health care areas, that there can be ethical problems with algorithmic learning when it’s deployed at a large scale.’

They went on to say:

‘If clinicians always withdraw care in patients with certain findings (extreme prematurity or a brain injury, for example), machine-learning systems may conclude that such findings are always fatal. On the other hand, it’s also possible that machine learning, when properly deployed, could help resolve disparities in healthcare delivery if algorithms could be built to compensate for known biases or identify areas of needed research.’

The authors considered that data used to create algorithms can contain bias that is reflected in the algorithms and in the clinical recommendations they generate. They also mentioned that algorithms might be designed to skew results, depending on who is developing them or healthcare systems deploying them.

Physicians must also adequately understand how algorithms are created, critically assess the source of the data used to build the statistical models designed to predict outcomes, understand how the models function, and guard against becoming overly dependent on them. There are also concerns that machine-learning-based clinical guidance may introduce a third party into the physician-patient relationship, challenging the dynamics of responsibility in the relationship and the expectation of confidentiality.

There are also concerns about unethical, cheating algorithms, such as Uber’s Greyball, which was used to predict which potential passengers might be undercover law enforcement officers. Algorithms can also be used to cheat systems, as when Volkswagen’s software enabled vehicles to pass emissions tests by reducing nitrogen oxide emissions only during testing.

Similarly, developers of AI for healthcare applications may have values that are not always aligned with the values of clinicians, according to Char and Stanford researchers Nigam Shah and David Magnus. There could be temptations to guide systems towards clinical actions that would improve quality metrics but not necessarily patient care, or to skew data being reviewed by potential hospital regulators.

It’s also possible to programme clinical decision-support systems in a manner that would generate increased profits for their designers or purchasers, such as by recommending tests, drugs, or devices in which they hold a stake, or by altering referral patterns.

“The motivations of profit versus best patient outcomes may at times conflict,” Char told AuntMinnie.com.

In addition, the limitations of AI systems are not fully understood by physicians, creating the chance for blind faith or scepticism.

“Treating them as black boxes may lead physicians to over-rely or under-rely on AI systems,” Char said.

The authors noted, though, that physicians who use machine-learning systems can become more educated about how these algorithms are constructed, the datasets they were trained from, and their limitations.

“Remaining ignorant about the construction of machine-learning systems or allowing them to be constructed as black boxes could lead to ethically problematic outcomes,” they wrote.

Value-added AI

Although there are quite a few ethical concerns around AI in healthcare, it could soon prove to be a self-running growth engine for the sector. A report from Accenture analysed the ‘near-term value’ of AI applications in healthcare to determine how the potential impact of the technology stacks up against the upfront costs of implementation. The report estimated that AI applications could save the US healthcare economy up to $150 billion annually by 2026.

The report looked at 10 AI applications with the potential to have a near-term impact in medicine and analysed each application to derive an associated estimated value. Researchers considered the impact of each application, likelihood of adoption, and value to the health economy. These applications included robot assisted surgery, virtual nursing assistants and administrative workflow assistance.

Robot-assisted surgery has an estimated value of $40 billion. Cognitive surgical robotics combines information taken from surgical experiences to improve surgical techniques. In this type of procedure, medical teams integrate the data from pre-op medical records with real-time operating metrics to improve surgical outcomes. The technique enhances the physician’s instrument precision and can lead to a 21 percent reduction in a patient’s length of hospital stay post-operation.

The virtual nursing assistant has an estimated value of $20 billion. Virtual nursing assistants could reduce unnecessary hospital visits and lessen the burden on medical professionals. According to Syneos Health Communications, 64% of patients reported they would be comfortable with AI virtual nurse assistants, listing the benefits of 24/7 access to answers and support, round-the-clock monitoring, and the ability to get quick answers to questions about medications.

Administrative workflow assistance has an estimated value of $18 billion. Automating administrative workflow ensures that care providers can prioritize urgent matters and helps doctors, nurses and assistants save time on routine tasks. Applications on the administrative side of healthcare include voice-to-text transcription, which automates non-patient-care activities such as writing chart notes, prescribing medications and ordering tests.

The UK government aims to ‘make the UK a world-leader in healthcare innovation’ to ensure that people throughout the country have access to world-class care. Despite these intentions, George Freeman MP has highlighted that ‘there is a gap between our ability to innovate within the UK and turn these innovations into health benefits for the population’. The NHS recognises the value of AI but lacks clarity about both the strategic direction to take and where to start. A wave of early adopters has, however, resulted in haphazard applications with sporadic benefits.

Conclusion

While advancements in AI for healthcare can reduce human error and boost overall outcomes and consumer trust, many still question its practical applicability, safety and ethical implications. Patients and caregivers alike fear that lack of human oversight and the potential for machine errors can lead to mismanagement of care, while data privacy remains one of the biggest challenges to AI-dependent health care.

Despite such concerns, the growing involvement of AI in healthcare is inevitable, and the potential benefits could possibly outweigh the risks.


References

10 common applications of artificial intelligence in healthcare - Novatio. (2018). Retrieved from http://novatiosolutions.com/10-common-applications-artificial-intelligence-healthcare/

AI a ‘weapon’ against avoidable cancer deaths. (2018). Retrieved from http://www.nationalhealthexecutive.com/Robot-News/ai-a-weapon-against-avoidable-cancer-deaths

Harwich, E., & Laycock, K. (2018). Thinking on its own: AI in the NHS [Ebook] (p. 14). Retrieved from http://www.reform.uk/wp-content/uploads/2018/01/AI-in-Healthcare-report_.pdf

Is artificial intelligence ethical in healthcare? – Physics World. (2018). Retrieved from https://physicsworld.com/a/is-artificial-intelligence-ethical-in-healthcare/

Medicine, A. (2018). Artificial Intelligence in Medicine. Retrieved from https://www.journals.elsevier.com/artificial-intelligence-in-medicine/

Narula, G. (2018). Everyday Examples of Artificial Intelligence and Machine Learning. Retrieved from https://www.techemergence.com/everyday-examples-of-ai/

Parkin, S. (2018). Would you trust your medical diagnosis to a robot? You may soon get the chance to find out. Retrieved from https://www.technologyreview.com/s/600868/the-artificially-intelligent-doctor-will-hear-you-now/

Researchers say use of artificial intelligence in medicine raises ethical questions. (2018). Retrieved from https://med.stanford.edu/news/all-news/2018/03/researchers-say-use-of-ai-in-medicine-raises-ethical-questions.html

Your future doctor may not be human. This is the rise of AI in medicine. (2018). Retrieved from https://futurism.com/ai-medicine-doctor/

Zaidi, D. (2018). The 3 most valuable applications of AI in health care. Retrieved from https://venturebeat.com/2018/04/22/the-3-most-valuable-applications-of-ai-in-health-care/
