Unite.AI, January 8
Addressing AI Skepticism in Healthcare: Overcoming Obstacles To Secure Communication

The healthcare industry's use of artificial intelligence is growing, with the goals of increasing efficiency and improving patient experiences. Trust in AI, however, still has room to grow, especially around data security and accuracy. Even so, most healthcare leaders plan to increase their AI budgets in 2025. This article offers recommendations for overcoming the obstacles to AI adoption, including training AI with reliable medical sources, ensuring HIPAA-compliant data practices, designing interfaces that improve workflows, providing proper employee training, and using AI to catch errors. These measures are meant to help healthcare organizations use AI safely and effectively and raise the quality of care they deliver.

⚕️ Train AI with reliable medical sources: Healthcare organizations should ensure that AI training data comes from credible sources, such as electronic health records (EHRs), and is updated regularly to improve the AI's accuracy and reliability.

🔒 Ensure HIPAA-compliant data practices: When handling patient health information (PHI), organizations must strictly comply with HIPAA regulations, including minimizing data collection, authorizing access, using encryption, and signing business associate agreements (BAAs).

🖥️ Design interfaces that improve workflows: When selecting an AI platform, consider its user-friendliness and customizability so it adapts to existing workflows, avoids adding to clinicians' burden, and facilitates information exchange between systems.

👨‍🏫 Provide proper employee training: Healthcare leaders need to understand AI's capabilities and limitations and ensure staff receive appropriate training so they can provide human oversight and intervention when necessary, bridging the gap between AI and human physicians.

✅ Use AI to catch errors: AI can serve as a supplementary tool that helps physicians find and correct errors in medical records, such as miscalculated BMI values or missing test information, improving care quality and patient safety.

Healthcare leaders are keen to embrace AI, partly to keep pace with competitors and other industries, but, more importantly, to increase efficiency and improve patient experiences. However, only 77% of healthcare leaders actually trust AI to benefit their business.

While AI chatbots excel at handling routine tasks, processing data, and summarizing information, the highly regulated healthcare industry worries most about the reliability and accuracy of the data that is fed into and interpreted by these tools. Without proper usage and employee training, data breaches become additional pressing threats.

Even so, 95% of healthcare leaders plan to increase AI budgets by up to 30% in 2025, with large language models (LLMs) emerging as one of the most trusted tools. As LLMs mature, 53% of healthcare leaders have already implemented formal policies to help their teams adapt to them, and another 39% plan to implement policies soon.

For healthcare providers who want to streamline communication services with AI but are still wary of doing so, here are some recommendations for overcoming the most common obstacles.

1.   Train AI With Reliable Medical Sources

While healthcare leaders may not be directly involved in AI training, they must play a pivotal role in overseeing its implementation. They should ensure that chatbot providers are training and regularly updating their AI with credible sources.

The rich, structured data captured by mandatory electronic health records (EHRs) offer vast repositories of digital health data that can now serve as the foundation for training AI algorithms. Advanced LLMs can comprehend medical research, technical analysis, literature reviews, and critical assessments. However, rather than training these tools with all the data at once, new evidence shows that focusing on a smaller number of intersections maximizes AI performance while keeping the training cost low.
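To make the vetting step concrete, here is a minimal sketch of what source-based filtering of training records could look like. The record fields ("source", "last_updated", "text"), the trusted-source list, and the one-year freshness window are all illustrative assumptions, not any vendor's actual schema:

```python
# A minimal sketch of source vetting for AI training data.
# Field names, trusted sources, and the freshness window are assumptions.
from datetime import date, timedelta

TRUSTED_SOURCES = {"ehr_export", "peer_reviewed_journal", "clinical_guideline"}
MAX_AGE = timedelta(days=365)  # require records refreshed within the last year

def select_training_records(records: list[dict]) -> list[dict]:
    """Keep only records from credible sources that are recent enough."""
    today = date.today()
    return [
        r for r in records
        if r["source"] in TRUSTED_SOURCES
        and today - r["last_updated"] <= MAX_AGE
        and r["text"].strip()  # drop empty documents
    ]

corpus = [
    {"source": "ehr_export", "last_updated": date.today() - timedelta(days=30), "text": "..."},
    {"source": "health_forum", "last_updated": date.today() - timedelta(days=900), "text": "..."},
]
print(len(select_training_records(corpus)))  # -> 1 (only the vetted, recent record)
```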

2.   Ensure HIPAA-Compliant Data Practices

The Health Insurance Portability and Accountability Act (HIPAA) outlines standards for protecting sensitive patient health information (PHI). To align with these regulations, healthcare leaders should ensure third-party vendors minimize data collection, restrict PHI access to authorized personnel, encrypt data in transit and at rest, and sign a business associate agreement (BAA).

Healthcare leaders using these tools should regularly check access reports—a step that is also easy to automate with AI—and send alerts to management if unusual activity occurs.
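As a sketch of what that automated review might look like, the snippet below counts per-user record accesses in a hypothetical audit log and flags anyone whose daily volume exceeds a threshold. The log format and the threshold are assumptions for illustration, not a real EHR vendor's audit API:

```python
# A minimal sketch of automated access-report review.
# The log schema and the alert threshold are illustrative assumptions.
from collections import Counter

ALERT_THRESHOLD = 50  # flag any user who opens more than 50 records per day

def flag_unusual_access(access_log: list[dict]) -> list[str]:
    """Return user IDs whose daily record-access count looks anomalous."""
    counts = Counter((entry["user_id"], entry["date"]) for entry in access_log)
    return sorted({user for (user, _day), n in counts.items() if n > ALERT_THRESHOLD})

log = [{"user_id": "dr_lee", "date": "2025-01-08", "record_id": i} for i in range(60)]
for user in flag_unusual_access(log):
    print(f"ALERT: unusual access volume for {user}")  # e.g., notify management
```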

Moreover, they must obtain clear and informed consent from patients before collecting and using their PHI. When requesting consent, communicate how patient data will be used and protected.

3.   Well-Designed Interfaces That Improve Workflows

One of the biggest obstacles when transitioning to mandatory EHRs was the usability of the technology. Physicians were dissatisfied with the amount of time spent on clerical tasks as they adjusted to complicated workflows, which increased their risk of professional burnout and the chance of making mistakes that could affect patient treatment.

When working with third-party vendors, request a demo and a second opinion before selecting an AI platform or software solution. Don’t forget to ask if their product allows customization that adapts to current programs so that you can integrate the ready-to-use features that best suit your workflows.

User-centered design and standardized data formats and protocols will help facilitate seamless information exchange across healthcare technology and AI platforms. With these standards in place, AI algorithms can be meaningfully integrated into clinical care across various healthcare settings. Established protocols also help these tools perform better by facilitating interoperability and enabling access to larger, more diverse datasets.
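The article doesn't name a specific standard, but HL7 FHIR is one widely adopted format for this kind of exchange. The sketch below queries a public FHIR test server for a patient's observations; the endpoint and patient ID are placeholders for illustration only, and the snippet requires the third-party requests library:

```python
# A sketch of standards-based data exchange using HL7 FHIR (one widely
# adopted protocol; the article does not name a specific standard).
# The endpoint is the public HAPI FHIR test server, used purely for
# illustration, and "example" is a placeholder patient ID.
import requests  # pip install requests

BASE_URL = "https://hapi.fhir.org/baseR4"  # assumption: a FHIR R4 endpoint

def fetch_patient_observations(patient_id: str) -> list[dict]:
    """Fetch a patient's Observation resources in the standard FHIR format."""
    resp = requests.get(
        f"{BASE_URL}/Observation",
        params={"patient": patient_id, "_count": 10},
        timeout=10,
    )
    resp.raise_for_status()
    bundle = resp.json()
    return [entry["resource"] for entry in bundle.get("entry", [])]

for obs in fetch_patient_observations("example"):
    print(obs.get("code", {}).get("text"), obs.get("valueQuantity", {}).get("value"))
```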

4.   Proper Usage and Employee Training

A 2024 study found that medical advice provided by ‘human physicians and AI’ was, in fact, more comprehensive but less empathic than that provided by ‘human physicians’ alone. To bridge the gap, healthcare leaders must understand AI’s capabilities and limitations and ensure proper human oversight and intervention.

Healthcare leaders can embed chatbots in their websites and patient apps to offer users instant access to medical information, assisting in self-diagnosis and health education. These tools can send timely reminders to patients to refill their prescriptions, helping patients adhere to treatment plans. They can also help classify patients based on the severity of their condition, assisting healthcare providers in prioritizing cases and allocating resources efficiently.
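As an illustration of severity-based classification, here is a deliberately simple rule-based sketch; a production triage system would use validated clinical criteria and, per the caveats below, human oversight. The keyword lists are invented for the example:

```python
# A toy rule-based sketch of severity triage. The keyword lists are
# invented for illustration; real systems use validated clinical criteria.
SEVERITY_KEYWORDS = {
    "emergency": ["chest pain", "difficulty breathing", "unconscious"],
    "urgent": ["high fever", "severe pain", "persistent vomiting"],
}

def triage(symptom_description: str) -> str:
    """Classify a free-text symptom report into a coarse severity tier."""
    text = symptom_description.lower()
    for level, keywords in SEVERITY_KEYWORDS.items():
        if any(k in text for k in keywords):
            return level
    return "routine"

print(triage("Sudden chest pain radiating to the left arm"))  # -> emergency
print(triage("Mild cough for two days"))                      # -> routine
```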

Nevertheless, these tools can still hallucinate, and it’s imperative that a human validator be involved in complex tasks. Work with third-party experts to define your vision for AI communication tools and create your desired workflows. Once you agree on your use cases, operational and cultural change management processes—like Kotter’s 8-step change process—offer a roadmap for onboarding employees, ultimately enhancing patient outcomes.
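One way to operationalize that human oversight is to route low-confidence or complex AI outputs to a clinician review queue instead of auto-accepting them. This sketch assumes hypothetical "confidence" and "complex" fields on the AI result; real systems would derive these from the model and the task type:

```python
# A minimal sketch of human-in-the-loop validation: AI output below a
# confidence threshold, or from a complex task, is queued for clinician
# review instead of being auto-accepted. Fields and threshold are assumptions.
CONFIDENCE_THRESHOLD = 0.85

review_queue: list[dict] = []

def route(ai_result: dict) -> str:
    """Auto-accept high-confidence routine results; queue the rest."""
    if ai_result["confidence"] >= CONFIDENCE_THRESHOLD and not ai_result["complex"]:
        return "auto-accepted"
    review_queue.append(ai_result)
    return "sent to clinician review"

print(route({"summary": "...", "confidence": 0.97, "complex": False}))  # auto-accepted
print(route({"summary": "...", "confidence": 0.60, "complex": True}))   # queued
print(len(review_queue))  # -> 1
```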

5.   Ask the Chatbot To Catch Mistakes

No business leader wants to make mistakes, but the healthcare industry is a high-stakes environment where even minor oversights can lead to severe repercussions. Yet, even the best clinicians aren’t immune to medical errors. AI can be a powerful tool to improve patient care by catching errors and filling in the gaps.

A 2023 investigation that used GPT-4 to transcribe and summarize a conversation between a patient and a clinician later employed the chatbot to review the conversation for errors. During this validation, it caught a mistake in the patient's body mass index (BMI). The chatbot also noticed that the patient notes mentioned neither the blood tests that were ordered nor the rationale for ordering them.

This example indicates that AI can be used as a supplement that helps doctors catch hallucinations, omissions, and errors; those corrections can, in turn, be used to train and improve AI applications.
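A toy sketch of the kind of consistency check described in this anecdote: recompute BMI from the height and weight recorded in a note and flag a disagreement with the BMI the note states. The note fields and the tolerance are illustrative assumptions:

```python
# A sketch of a BMI consistency check, echoing the anecdote above.
# The note schema and tolerance are illustrative assumptions.
def bmi(weight_kg: float, height_m: float) -> float:
    return round(weight_kg / height_m**2, 1)

def check_bmi_entry(note: dict, tolerance: float = 0.5) -> str | None:
    """Warn if the BMI written in the note disagrees with the value
    computed from the note's own height and weight."""
    computed = bmi(note["weight_kg"], note["height_m"])
    if abs(computed - note["recorded_bmi"]) > tolerance:
        return f"BMI mismatch: recorded {note['recorded_bmi']}, computed {computed}"
    return None

note = {"weight_kg": 82.0, "height_m": 1.75, "recorded_bmi": 31.4}
print(check_bmi_entry(note))  # -> BMI mismatch: recorded 31.4, computed 26.8
```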

Healthcare AI exists to support doctors and nurses, simplify workflows, improve patients' access to care, and minimize oversights. While these tools can't fully replace the empathy, intuition, and real-world experience that human healthcare providers bring to the table, they offer excellent analytical and time-saving benefits. When healthcare leaders take the time to ensure careful adherence to HIPAA regulations, transparent communication with patients, and proper employee training, they can implement these tools safely and confidently.

