"
幻觉检测
" 相关文章
VeriTrail: Detecting hallucination and tracing provenance in multi-step AI workflows
智源社区
2025-08-05T17:09:14.000000Z
Counterfactual Probing for Hallucination Detection and Mitigation in Large Language Models
cs.AI updates on arXiv.org
2025-08-05T17:08:39.000000Z
DHCP: Detecting Hallucinations by Cross-modal Attention Pattern in Large Vision-Language Models
cs.AI updates on arXiv.org
2025-08-01T04:08:31.000000Z
Enhancing Hallucination Detection via Future Context
cs.AI updates on arXiv.org
2025-07-29T04:22:26.000000Z
First Hallucination Tokens Are Different from Conditional Ones
cs.AI updates on arXiv.org
2025-07-29T04:21:37.000000Z
ICR Probe: Tracking Hidden State Dynamics for Reliable Hallucination Detection in LLMs
cs.AI updates on arXiv.org
2025-07-23T04:03:27.000000Z
Towards Mitigation of Hallucination for LLM-empowered Agents: Progressive Generalization Bound Exploration and Watchdog Monitor
cs.AI updates on arXiv.org
2025-07-23T04:03:14.000000Z
Cleanse: Uncertainty Estimation Approach Using Clustering-based Semantic Consistency in LLMs
cs.AI updates on arXiv.org
2025-07-22T04:44:38.000000Z
Hallucination Detox: Sensitivity Dropout (SenD) for Large Language Model Training
cs.AI updates on arXiv.org
2025-07-17T04:14:38.000000Z
Explained in One Article: Why Do Large Models Hallucinate? From Causes to Mitigation
掘金 人工智能
2025-07-15T06:19:08.000000Z
CodeMirage: Hallucinations in Code Generated by Large Language Models
cs.AI updates on arXiv.org
2025-07-10T04:06:06.000000Z
Explained in One Article: Why Do Large Models Hallucinate? From Causes to Mitigation
安全客
2025-07-09T05:51:20.000000Z
KEA Explain: Explanations of Hallucinations using Graph Kernel Analysis
cs.AI updates on arXiv.org
2025-07-08T06:58:14.000000Z
TUM-MiKaNi at SemEval-2025 Task 3: Towards Multilingual and Knowledge-Aware Non-factual Hallucination Identification
cs.AI updates on arXiv.org
2025-07-02T22:33:35.000000Z
Is Automated Hallucination Detection in LLMs Feasible? A Theoretical and Empirical Investigation
MarkTechPost@AI
2025-05-07T04:10:42.000000Z
Using AI Hallucinations to Evaluate Image Realism
Unite.AI
2025-03-25T12:27:59.000000Z
Zhang Xiangzheng: Large Model Safety Research and Practice
36氪 - 科技频道
2025-03-11T10:31:34.000000Z
RAG-Check: A Novel AI Framework for Hallucination Detection in Multi-Modal Retrieval-Augmented Generation Systems
MarkTechPost@AI
2025-01-12T06:30:50.000000Z
WACK: Advancing Hallucination Detection by Identifying Knowledge-Based Errors in Language Models Through Model-Specific, High-Precision Datasets and Prompting Techniques
MarkTechPost@AI
2024-11-01T12:05:44.000000Z
Meta AI Researchers Introduce Token-Level Detective Reward Model (TLDR) to Provide Fine-Grained Annotations for Large Vision Language Models
MarkTechPost@AI
2024-10-26T09:38:20.000000Z