NubiSoft’s Post


Implementing AI in medicine is not a piece of cake 🍰 Some projects can only be described as failures (see IBM Watson). What conclusions can we draw? We studied reports from several pilot implementations of AI tools in hospitals. These reports highlighted several factors that influence how readily medical professionals adopt AI tools. Here are a few that we found most interesting:

⭐ Transparency and comprehensibility: Doctors need to understand how AI tools operate, specifically what data the tools use in their decisions, and they want to see the reasoning process. For instance, doctors at Sentara and UC San Diego Health stress that AI tools must be transparent and that physicians should be actively engaged in AI-driven decision-making. Understandably, doctors want to be sure the program takes all relevant factors into account.

💡 Protip for you: transparency and involvement are key to effective AI integration in healthcare. Avoid creating "black box" solutions. To gain physicians' trust, show what information the algorithm uses to reach one decision rather than another.

⭐ Integration with clinical workflows: Healthcare professionals often find an AI tool that runs as a separate, third-party program cumbersome to use, which leads directly to low adoption rates. Proper integration, both with the physician's workflow and with the hospital's existing IT systems, results in AI tools supporting, rather than hindering, clinical operations.

💡 Protip for you: integrate your AI tools with existing workflows and with the IT systems the hospital has already deployed.

⭐ Human oversight: There is a strong preference for retaining the human factor in AI-enabled processes. For example, at UC San Diego Health, AI-generated responses in electronic health records (EHRs) must be reviewed and edited by clinicians before being sent to patients, so a human is always in the loop.
💡 Protip for you: design your tools so that chatbot results are always checked by a competent human. You can use the limited trust framework for this purpose, which we wrote about here: https://lnkd.in/dx3-7grw

If you want more of this kind of information, be sure to read the latest article on our blog. 👇👇

💡 Based on our experience and market information, we've written about what doctors expect from AI tools. In the article, you'll find answers to questions such as:
👉 How do doctors feel about using AI in their practice?
👉 What worries do doctors have about AI in medicine?
👉 What features do doctors hope to see in AI tools?

Here is the link - enjoy reading! 📚 https://lnkd.in/d4AWBDqW
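The human-oversight pattern above can be sketched as a simple review gate: AI-drafted replies are queued as pending and nothing reaches the patient until a clinician approves (and optionally edits) the draft. This is a minimal illustrative sketch, not the API of any real EHR system or of our limited trust framework; every class and field name here is hypothetical.

```python
# Hypothetical human-in-the-loop review gate: AI drafts are queued
# for clinician approval instead of being sent automatically.
# All names are illustrative, not from any real EHR API.
from dataclasses import dataclass
from enum import Enum
from typing import Optional


class Status(Enum):
    PENDING = "pending"
    APPROVED = "approved"
    REJECTED = "rejected"


@dataclass
class DraftReply:
    patient_id: str
    text: str
    status: Status = Status.PENDING
    reviewer: Optional[str] = None


class ReviewQueue:
    def __init__(self) -> None:
        self._drafts: list[DraftReply] = []

    def submit(self, draft: DraftReply) -> None:
        # AI output always enters as PENDING; nothing is sent here.
        self._drafts.append(draft)

    def approve(self, draft: DraftReply, reviewer: str,
                edited_text: Optional[str] = None) -> str:
        # The clinician may edit the draft before it is released.
        if edited_text is not None:
            draft.text = edited_text
        draft.status = Status.APPROVED
        draft.reviewer = reviewer
        return draft.text  # only the approved text leaves the gate

    def reject(self, draft: DraftReply, reviewer: str) -> None:
        # Rejected drafts are never sent; the clinician writes a reply instead.
        draft.status = Status.REJECTED
        draft.reviewer = reviewer
```

The key design choice is that the send path only exists behind `approve()`, so "a human is always in the loop" is enforced by structure rather than by convention.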
