ZDNET's key takeaways

- Endoscopists who use AI may see their cancer-detection skill degrade.
- Prolonged exposure to AI is diminishing doctors' focus and motivation.
- Favorable studies of AI in medicine may be corrupted by the study design.

It's important to get a colonoscopy, especially past a certain age: colorectal cancer is the third-most common cancer in the world and one of the leading causes of cancer death, according to the World Health Organization.

Most patients probably don't ask their endoscopist about their adenoma detection rate (ADR), the standard measure of a doctor's skill at spotting potential cancers. It might be worth asking in the future, however, because artificial intelligence could decrease your endoscopist's skill level, according to a new study in the scholarly journal "The Lancet Gastroenterology & Hepatology."

The deskilling phenomenon

Lead author Krzysztof Budzyń and collaborators at the Department of Gastroenterology of Poland's Academy of Silesia, along with multiple partner institutions, describe a phenomenon called "deskilling": the use of AI as a clinical tool may reduce physicians' competence, in this case by lowering the endoscopist's ADR.

"We found that routine exposure to AI in colonoscopy might reduce the ADR of standard, non-AI-assisted colonoscopy," wrote Budzyń and team. "To our knowledge, this is the first study that suggests AI exposure might have a negative impact on patient-relevant endpoints in medicine in general."

Budzyń and team started from a simple premise: prior studies in which endoscopists used AI showed improvements in ADR, meaning more cancers detected.
AI is part of a broader trend of computer assistance in colonoscopy, through tools known as computer-assisted polyp detection systems. What hasn't been known is the effect such tools have on the physicians who use them. "[…] Ongoing exposure to AI might change behaviour in different ways," wrote Budzyń and team, "positively, by training clinicians, or negatively, through a deskilling effect, whereby automation use leads to a decay in cognitive skills."

An experiment to test AI's effect

To test what effects there might be, Budzyń and team conducted a randomized trial at four endoscopy centers in Poland where 1,443 patients were given colonoscopies both before and after AI was introduced into the centers toward the end of 2021. They then compared the quality of the colonoscopies performed before AI came into use with those performed after.

As they described it, "We evaluated changes in the quality of all diagnostic, non-AI-assisted colonoscopies between Sept 8, 2021, and March 9, 2022, by comparing two different phases: the period approximately 3 months before AI implementation in clinical practice versus the period 3 months after AI implementation in clinical practice."

The AI software, an application called CADe, runs on a dedicated endoscopy CAD system and analyzes the live video feed from the endoscope during the procedure. The software and hardware are made by Olympus of Japan, a company better known for its digital cameras. CADe "uses artificial intelligence (AI) to suggest the potential presence of lesions," said Olympus, and the company cites studies showing that CADe can boost detection rates.
A big drop in reliability

Budzyń and team measured the change in ADR in the non-AI-assisted colonoscopies from before CADe was put into service to after. ADR is defined as "the proportion of colonoscopies in which one or more adenomas [pre-cancerous lesions] are detected"; a related metric, adenomas per colonoscopy (APC), counts the average number of adenomas found per procedure. ADR is "a widely accepted indicator of colonoscopist performance, with a higher ADR associated with a greater cancer prevention effect."

They found a noticeable drop in ADR after CADe had been introduced. "ADR at standard, non-AI assisted colonoscopies decreased significantly from 28·4% (226 of 795) before AI exposure to 22·4% (145 of 648) after AI exposure, corresponding to an absolute difference of –6·0% (95% CI –10·5 to –1·6, p=0·0089)," wrote Budzyń and team.

Moreover, of the 19 endoscopists evaluated, each of whom had performed more than 2,000 procedures, all but four "had a lower ADR when performing standard colonoscopies after AI exposure than before," which the authors interpret as "suggesting a detrimental effect on endoscopist capability."

Doctors losing focus because of AI

Budzyń and team attach several caveats to the findings. They warn that "Interpretation of these data is challenging" because of statistical confounders and possible "selection bias." They also concede that the study doesn't include enough colonoscopies per endoscopist to reliably assess each individual's ADR; more procedures per doctor would need to be observed. They caution that further studies are needed. But they offer a hypothesis as to what is happening: human abilities are being degraded by reliance on the machine.
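For readers who want to sanity-check the reported figures, the absolute difference and confidence interval above can be reproduced from the raw counts with a standard two-proportion comparison. This is a back-of-the-envelope Wald interval, not necessarily the exact statistical method the paper used:

```python
from math import sqrt

# Counts reported in the study (non-AI-assisted colonoscopies)
before_hits, before_n = 226, 795   # adenomas detected / procedures, pre-AI
after_hits, after_n = 145, 648    # adenomas detected / procedures, post-AI

p_before = before_hits / before_n  # ~0.284
p_after = after_hits / after_n     # ~0.224
diff = p_after - p_before          # absolute change in ADR

# Wald 95% confidence interval for a difference of two proportions
se = sqrt(p_before * (1 - p_before) / before_n
          + p_after * (1 - p_after) / after_n)
lo, hi = diff - 1.96 * se, diff + 1.96 * se

print(f"ADR before: {p_before:.1%}, after: {p_after:.1%}")
print(f"difference: {diff:+.1%} (95% CI {lo:+.1%} to {hi:+.1%})")
# -> ADR before: 28.4%, after: 22.4%
# -> difference: -6.0% (95% CI -10.5% to -1.6%)
```

The result matches the paper's reported –6.0% drop and its –10.5 to –1.6 confidence interval, so the headline numbers follow directly from the published counts.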
"We assume that continuous exposure to decision support systems such as AI might lead to the natural human tendency to over-rely on their recommendations," they write, "leading to clinicians becoming less motivated, less focused, and less responsible when making cognitive decisions without AI assistance."

There have already been studies suggesting such a degradation, they noted, including a 2019 study "that showed reduced eye movements during colonoscopy when using AI for polyp detection, indicating a risk of overdependence." A study of AI in breast cancer mammography found that "physicians' detection capability decreased significantly when AI support was expected."

The authors also note that the single Olympus CADe system studied might not be representative of all AI medical applications. However, they offer that "Given the general nature of the AI tools and the tendency for humans to over-rely on them, we do not think the study results apply only to this specific AI."