"
幻觉检测
" 相关文章
Using AI Hallucinations to Evaluate Image Realism
Unite.AI
2025-03-25T12:27:59.000000Z
Zhang Xiangzheng: Security Research and Practice for Large Models
36氪 - Tech Channel
2025-03-11T10:31:34.000000Z
RAG-Check: A Novel AI Framework for Hallucination Detection in Multi-Modal Retrieval-Augmented Generation Systems
MarkTechPost@AI
2025-01-12T06:30:50.000000Z
WACK: Advancing Hallucination Detection by Identifying Knowledge-Based Errors in Language Models Through Model-Specific, High-Precision Datasets and Prompting Techniques
MarkTechPost@AI
2024-11-01T12:05:44.000000Z
Meta AI Researchers Introduce Token-Level Detective Reward Model (TLDR) to Provide Fine-Grained Annotations for Large Vision Language Models
MarkTechPost@AI
2024-10-26T09:38:20.000000Z
Symposium on Large Model Evaluation Technology and Second Working Group Meeting of International Standard IEEE P3419 Successfully Held
智源研究院
2024-10-24T17:00:57.000000Z
Haize Labs Introduced Sphynx: A Cutting-Edge Solution for AI Hallucination Detection with Dynamic Testing and Fuzzing Techniques
MarkTechPost@AI
2024-08-06T13:19:51.000000Z
Patronus AI Releases Lynx v1.1: An 8B State-of-the-Art RAG Hallucination Detection Model
MarkTechPost@AI
2024-08-01T16:19:47.000000Z
Symposium on Large Model Evaluation Technology and Second Working Group Meeting of International Standard IEEE P3419 Successfully Held
智源社区
2024-07-19T10:51:33.000000Z
OpenAI's Lilian Weng on "Extrinsic Hallucination" in Large Models: A Long-Form Blog Post Detailing Mitigation Methods, Causes of Hallucination, and Detection Approaches
IT之家
2024-07-13T15:23:21.000000Z
Patronus AI Introduces Lynx: A SOTA Hallucination Detection LLM that Outperforms GPT-4o and All State-of-the-Art LLMs on RAG Hallucination Tasks
MarkTechPost@AI
2024-07-13T02:46:18.000000Z
Galileo Introduces Luna: An Evaluation Foundation Model to Catch Language Model Hallucinations with High Accuracy and Low Cost
MarkTechPost@AI
2024-06-15T05:01:51.000000Z
Deciphering Doubt: Navigating Uncertainty in LLM Responses
MarkTechPost@AI
2024-06-09T05:00:57.000000Z