Implementing AI in medicine is not a piece of cake 🍰 Some projects can only be described as failures -> see IBM Watson. What conclusions can we draw? We studied reports from several pilot implementations of AI tools in hospitals. These reports highlighted several factors that influence how readily medical professionals adopt AI tools. Here are the ones we found most interesting:

⭐ Transparency and comprehensibility: Doctors need to understand how AI tools operate, specifically what data these tools use in their decisions. They also want to see the reasoning process. For instance, doctors at Sentara and UC San Diego Health stress that AI tools must be clear and that doctors should be actively engaged in AI-driven decision-making. Understandably, a doctor wants to be sure the program takes all relevant factors into account.
💡 Protip for you: transparency and involvement are key to effective AI integration in healthcare. Avoid creating "black box" solutions. To gain a physician's trust, show what information the algorithm used to reach one decision rather than another.

⭐ Integration with clinical workflows: An AI tool that runs as a separate, third-party program is often seen by healthcare professionals as cumbersome, which leads directly to low adoption rates. Proper integration - with the physician's workflow on the one hand, and with the hospital's existing IT systems on the other - results in AI tools supporting, rather than hindering, clinical operations.
💡 Protip for you: integrate your AI tools with existing workflows and with the IT systems the hospital already runs.

⭐ Human oversight: There is a strong preference for keeping humans in AI-enabled processes. For example, at UC San Diego Health, AI-generated responses in electronic health records (EHRs) must be reviewed and edited by clinicians before being sent to patients, so a human is always in the loop.
💡 Protip for you: design your tools so that chatbot output is always checked by a competent human. You can use the limited trust framework for this purpose, which we wrote about here: https://lnkd.in/dx3-7grw

If you want more of this kind of information, be sure to read the latest article on our blog. 👇👇 Based on our experience and market research, we've written about what doctors expect from AI tools. In the article, you'll find answers to questions such as:
👉 How do doctors feel about using AI in their practice?
👉 What worries do doctors have about AI in medicine?
👉 What features are doctors hoping to see in AI tools?
Here is the link - enjoy reading! 📚 https://lnkd.in/d4AWBDqW
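The "human always in the loop" pattern above can be sketched in code: AI-drafted patient messages start in a pending state, carry the evidence the model used (the transparency point), and can only be sent after a clinician approves or edits them. This is a minimal, hypothetical Python sketch; the class and function names are our own illustration, not the API of any deployment mentioned in the post.

```python
from dataclasses import dataclass
from enum import Enum, auto
from typing import Callable, Optional

class DraftStatus(Enum):
    PENDING_REVIEW = auto()
    APPROVED = auto()
    REJECTED = auto()

@dataclass
class AIDraft:
    """An AI-generated reply that must be reviewed before it reaches a patient."""
    patient_id: str
    text: str
    evidence: list[str]  # inputs the model used -- shown to the reviewer, not hidden
    status: DraftStatus = DraftStatus.PENDING_REVIEW
    final_text: Optional[str] = None

def review(draft: AIDraft,
           clinician_decision: Callable[[AIDraft], Optional[str]]) -> AIDraft:
    """Only a clinician's decision moves a draft out of PENDING_REVIEW.

    `clinician_decision` returns the (possibly edited) text to send,
    or None to reject the draft outright.
    """
    edited = clinician_decision(draft)
    if edited is None:
        draft.status = DraftStatus.REJECTED
    else:
        draft.status = DraftStatus.APPROVED
        draft.final_text = edited
    return draft

def send_to_patient(draft: AIDraft) -> str:
    """Refuses to send anything that has not been explicitly approved."""
    if draft.status is not DraftStatus.APPROVED or draft.final_text is None:
        raise PermissionError("Draft not approved by a clinician; refusing to send.")
    return f"to {draft.patient_id}: {draft.final_text}"
```

The design choice worth copying is that the send path checks the approval state itself, so skipping review is a hard error rather than a policy violation nobody notices.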
NubiSoft’s Post
More Relevant Posts
-
What do you mean, it's not enough to put an #AI sticker on the product to make it an absolute success? Well... not really. 🤷♀ If artificial intelligence software is to have any chance of acceptance in healthcare, it must first and foremost be #useful, not necessarily high-tech. Some projects with machine learning, large language models, etc. have had very low #adoption rates because they were designed without end-user preferences in mind. That's why we at NubiSoft recently published a nice article on what to avoid when building products using artificial intelligence in medicine. It's a must-read for anyone who implements such systems and doesn't want to make the same mistakes twice ;) Here's the link: https://lnkd.in/dYH2adDD
-
How to successfully implement #AI in digital health applications?
-
Great open-access article on AI implementation in clinical/hospital settings. The "Generalizable insights" in the Modeling assumptions, Stakeholder inclusion, and Organizational structure sections are good cautionary tales for any product manager who has to drive a similar AI implementation use case. https://lnkd.in/eax9MZvB Boag, W., Hasan, A., Kim, J.Y. et al. The algorithm journey map: a tangible approach to implementing AI solutions in healthcare. npj Digit. Med. 7, 87 (2024).
-
How Does AI Work For Medical Note Taking and Risk Scoring? (Augmedix, HDAI) — Faces of Digital Health

AI in Medical Note Taking and Risk Scoring
Augmedix, in collaboration with HDAI, is using artificial intelligence (AI) to revolutionize medical note taking and risk scoring in healthcare. By leveraging AI, healthcare providers can improve efficiency, accuracy, and patient outcomes.

The Role of AI in Medical Note Taking
AI-powered medical note taking systems, such as Augmedix, use natural language processing (NLP) algorithms to transcribe and summarize patient encounters in real time. This allows healthcare providers to focus more on patient care instead of spending excessive time on documentation.

Benefits of AI in Medical Note Taking
The use of AI in medical note taking offers several advantages: it reduces administrative burden, improves accuracy, and more (continued at ai.mediformatica.com).

#data #model #augmedix #health #medical #healthcare #risk #documentation #naibchamoun #prediction #this #training #digitalhealth #healthit #healthtech #healthcaretechnology @MediFormatica (https://buff.ly/49jaiTT)
How Does AI Work For Medical Note Taking and Risk Scoring? (Augmedix, HDAI) — Faces of digital health
facesofdigitalhealth.com
-
Have you ever wondered what it takes to build an AI solution in healthcare? Who is involved in approving its use? Do you want to see the details of the process behind the first deep learning model successfully integrated into clinical care in the United States? Thrilled to share our latest interdisciplinary manuscript, 'The algorithm journey map: a tangible approach to implementing AI solutions in healthcare.' It was a big interdisciplinary effort led by William Boag, Jee Young Kim, PhD, and Alifia Hasan, in which we tried to unpack the design, development, integration, and maintenance of the sociotechnical system called Sepsis Watch: https://lnkd.in/e-r3V2Bt This is a special paper, because it is so simple, yet so radical. At Duke Institute for Health Innovation, we often build and do things that have never been done before. For this paper, we rigorously retraced our steps over 20 months to build the first-of-its-kind Algorithm Journey Map. It's an intersection of user research (customer journey mapping), quality improvement (process mapping), and sociotechnical research in AI. Learn from our mistakes! We sure are :-)
The algorithm journey map: a tangible approach to implementing AI solutions in healthcare - npj Digital Medicine
link.springer.com
-
Discover how AI and ML are revolutionizing healthcare! Our latest blog delves into the integration of advanced technologies in clinical practices, enhancing diagnostic accuracy and patient management. Read more: The Future of Healthcare https://lnkd.in/ghUzDUiY #HealthTech #AI #MachineLearning #HealthcareInnovation
The Future of Healthcare: Integrating Advanced AI and Machine Learning into Clinical Practices
https://healthficial.com
-
Visiting Professor MMU; Vice President, CILIP; KM Consultant; Coach; Facilitator. Chair Health Literacy Partnership. UKMCBK Steering Group. HILJ Editorial Adv Bd. Former CKO NHS (England). Topol Review. Walford Award
Eyes wide open! Thank you again, Prof. Spencer Dorn. Your post illustrates perfectly why I'm not looking for a data-driven health service, but rather for an intelligent health service, in which technological knowledge management systems underpin medicine as an art informed by science.

Healthcare is a knowledge-intensive sector. Harnessing technology offers great promise, and some perils, as we respond to the expansion of medical knowledge, increasing demand for healthcare, and a global workforce shortage. Advances in clinical decision support systems, twinned with the electronic patient record and using machine learning, will enable practitioners to draw on evidence from multiple sources: research, data, and experience. Together these different forms of evidence can fuel and inform learning health systems, augmenting the work of practitioners, and AI will bring new intelligence, detecting patterns that we humans lack the processing power to reveal. Yet all of it needs to be understood in context so that medicine can be practised at its best - as an art informed by science.

#KM #KnowledgeManagement #InformationManagement #ClinicalDecisionSupport #Medicine #Evidence #EvidenceBasedPractice Tom Foley Philip Scott Jeremy Wyatt Mark Salmon Geoff Walton Louise Goswami Hatim Abdulhussein Chris Jones CILIP: The library and information association The Federation for Informatics Professionals BCS, The Chartered Institute for IT
Of the near-term possibilities for AI in healthcare, I’m most bullish on using it to synthesize and summarize information. But we must appreciate the risks. AI that distills clinical information too finely will strip it of context – and sometimes meaning.

As Harvard's Isaac Kohane has explained, "Medicine is fundamentally an information- and data-processing discipline." The challenge is that clinicians struggle to process endless volumes of clinical data and information scattered across myriad locations (e.g., notes, results, messages, images, lists, codes, scanned documents, and many more). For example, the days before I’m in clinic, I block 90 minutes to scan the “charts” of the next day’s patients, pull out the tiny fraction of relevant information, and add it to a note.

With some effort, today’s AI could generate curated, role/specialty-specific summaries of relevant diagnostic tests (e.g., GI procedures, abdominal imaging studies, and specific laboratory tests), medication trials (GI-related), and other key data points (e.g., body weights over time). This would save me time and energy. It could also surface information I may have otherwise overlooked.

The tradeoff is that I would see the information out of context. As the famed sociologist Charles Tilly explained, context is necessary for valid answers. Was the colonoscopy repeated in response to a CT scan that showed colonic wall thickening? Was the 24-hour pH study performed because she had ongoing heartburn despite taking high-dose Omeprazole? Was Linzess started during a flare, and why did he stop it?

Earlier this week, in a post lamenting the downsides of digital technologies, the academic Adam Kotsko explained, “Computers cannot give us the information because they cannot, and likely never will, understand meaning and context. Only we can — though we must take the time to learn and cultivate the skills necessary.”

Of course, it’s not like today’s physicians are perfect information consumers. (Our tech is not usable enough, we don’t have enough time and energy, and, like all humans, we’re imperfect.) And in many straightforward clinical scenarios, it’s OK to interpret information without deep context. The point is that there’s no free lunch. As we work to harness new tools, we must keep our eyes wide open, develop nuanced viewpoints, and work to find the right balance between automated and manual work (or, in this case, distilling and contextualizing). #healthcareai #clinicalinformatics #healthcareonlinkedin
-
Exciting news from the healthcare sector! Researchers have introduced a "sociotechnical" framework for healthcare AI, blending medical knowledge with ethical values. This innovative approach promises to make AI tools in healthcare more responsible and aligned with human needs. 💭 At EON Reality, we applaud this step towards ethical tech integration. How do you see this impacting healthcare and technology? #HealthcareAI #EthicalTechnology #Innovation #EONReality #ai #tech https://lnkd.in/eidNQrqj
New Healthcare AI Framework Incorporates Medical Knowledge, Values
healthitanalytics.com
-
Innovative Digital and Population Healthcare Clinician | Family and Preventive Medicine Board Certification | 4 years Health Tech Product Development | VBC contracts | genAI & Machine Learning
National Academy of Medicine has convened key stakeholders today to explore guideposts for the use of AI/LLMs in healthcare and medicine. https://lnkd.in/gVQMWJHE

Shorthand summary of final remarks from Thomas Maddox:

What problems exist in healthcare that generative AI can support?
- We have more info than we can handle
- We have more people to treat and fewer clinicians to treat them
- We have fewer resources than we need
- Our care is more inequitable than it should be

How do we think about responsible oversight?
- Oversight should align with the development of AI models
- Ensure oversight and interoperability of the data through government
- Ensure evidence standards are developed by research funders and academics
- Develop post-deployment monitoring, rather than only pre-deployment
- Ensure we emphasize decreasing biases not only in AI but also in the communities and systems that train it
- Construct a learning healthcare system with an agile framework

What are the next steps?
- Near-term deployment should focus on decreasing excess workload
- Later deployment of AI for diagnosis and treatment
- Think about trends in the populations and ensure we have the right data

Looking forward to the forthcoming guidance.
Generative AI & LLMs in Health & Medicine - National Academy of Medicine
https://nam.edu
-
Director - Cardiac Vascular & Thoracic Surgery, Accord Superspeciality Hospital | Founder - MHF | Project Director-Ladakh(Accord) | ISB Hyderabad - Public Policy | Johns Hopkins -Telehealth | Stanford Univ - ML/AI
The Critical Need for a Chief Health AI Officer: Bridging Clinical Medicine and Artificial Intelligence

Navigating the AI Revolution in Healthcare
AI, particularly generative AI (genAI) systems like ChatGPT, is reshaping healthcare by streamlining operations, enhancing clinical decision-making, and improving patient outcomes. The number of FDA-approved AI-enabled medical devices surged by 30% in 2023, highlighting the growing impact of AI in healthcare. However, the integration of AI into healthcare systems is fraught with challenges, including ensuring the reliability, ethical use, and security of AI tools, as well as maintaining compliance with rapidly evolving regulations.

The Role of the Chief Health AI Officer
The CHAIO is tasked with developing a comprehensive AI strategy aligned with the organization's goals, identifying high-impact use cases, and ensuring effective implementation. This role requires a deep understanding of both clinical medicine and the latest advancements in AI, machine learning, and data science. The CHAIO must work closely with leaders in clinical informatics, data management, bioethics, and hospital operations to ensure the seamless integration of AI into existing health informatics streams.

Strategic Leadership and Cross-Functional Collaboration
One of the CHAIO’s primary responsibilities is to guide the organization in achieving a return on AI investments by identifying high-impact use cases and recommending technologies that are fit for purpose. Collaboration with AI companies, research institutions, and healthcare organizations is crucial for accessing state-of-the-art technology and expertise.

Governance and Ethical Application
The CHAIO must establish and expand governance structures and policies to ensure the responsible application of AI-enabled technologies. This involves rigorous testing and validation processes, continuous safety monitoring, and staying abreast of regulatory changes and standards. Ensuring the trustworthiness of AI tools, whether procured from a vendor or developed internally, is essential for maintaining patient safety and delivering value.

The Future of AI in Healthcare
As AI continues to advance, the CHAIO’s role will become increasingly critical. Treating AI as merely another technology tool risks squandering resources and even causing patient harm. Therefore, healthcare organizations must formulate a clear AI strategy encompassing data infrastructure, talent development, technology selection, and safe practices.

Conclusion
The establishment of the CHAIO role is a crucial step toward data-driven and AI-optimized healthcare systems. The CHAIO, with their deep understanding of clinical medicine and AI, is uniquely positioned to lead this transformation, ensuring that AI's benefits are harnessed while its risks are mitigated.

#AI #healthcare #newhorizons Credit: NEJM AI (https://lnkd.in/gPeu4dD4)