MarkTechPost@AI · January 28
Leveraging Hallucinations in Large Language Models to Enhance Drug Discovery

The study notes that LLMs produce plausible-looking but inaccurate content, yet in fields such as drug discovery these hallucinations hold potential value. Experiments with seven LLMs show that hallucinations can improve LLM performance on drug discovery tasks; for example, Llama-3.1-8B's ROC-AUC improved by 18.35%.

💊 LLM hallucinations have potential value in drug discovery and can spark new ideas

🔬 Experiments were run with seven LLMs, including GPT-4o and Llama-3.1-8B

📈 Incorporating hallucinated descriptions into prompts improves LLM performance, e.g. a ROC-AUC gain for Llama-3.1-8B

🧪 Hallucinations produced by different LLMs affect performance on molecular property prediction tasks differently

Researchers have raised concerns about hallucinations in LLMs, which generate plausible but inaccurate or unrelated content. However, these hallucinations hold potential in creativity-driven fields like drug discovery, where innovation is essential. LLMs have been widely applied in scientific domains, such as materials science, biology, and chemistry, aiding tasks like molecular description and drug design. While traditional models like MolT5 offer domain-specific accuracy, LLMs often produce hallucinated outputs when not fine-tuned. Despite their lack of factual consistency, such outputs can provide valuable insights, such as high-level molecular descriptions and potential compound applications, thereby supporting exploratory processes in drug discovery.

Drug discovery, a costly and time-intensive process, involves evaluating vast chemical spaces and identifying novel solutions to biological challenges. Previous studies have used machine learning and generative models to assist in this field, with researchers exploring the integration of LLMs for molecule design, dataset curation, and prediction tasks. Hallucinations in LLMs, often viewed as a drawback, can mimic creative processes by recombining knowledge to generate novel ideas. This perspective aligns with creativity’s role in innovation, exemplified by groundbreaking accidental discoveries like penicillin. By leveraging hallucinated insights, LLMs could advance drug discovery by identifying molecules with unique properties and fostering high-level innovation.

ScaDS.AI and Dresden University of Technology researchers hypothesize that hallucinations can enhance LLM performance in drug discovery. Using seven instruction-tuned LLMs, including GPT-4o and Llama-3.1-8B, they incorporated hallucinated natural language descriptions of molecules' SMILES strings into prompts for classification tasks. The results confirmed their hypothesis, with Llama-3.1-8B achieving an 18.35% ROC-AUC improvement over the baseline. Larger models and hallucinations generated in Chinese demonstrated the greatest gains. Analyses revealed that hallucinated text provides unrelated yet insightful information, aiding predictions. This study highlights hallucinations' potential in pharmaceutical research and offers new perspectives on leveraging LLMs for innovative drug discovery.

To generate hallucinations, SMILES strings of molecules are translated into natural language using a standardized prompt where the system is defined as an “expert in drug discovery.” The generated descriptions are evaluated for factual consistency using the HHM-2.1-Open Model, with MolT5-generated text as the reference. Results show low factual consistency across LLMs, with ChemLLM scoring 20.89% and others averaging 7.42–13.58%. Drug discovery tasks are formulated as binary classification problems, predicting specific molecular properties via next-token prediction. Prompts include SMILES, descriptions, and task instructions, with models constrained to output “Yes” or “No” based on the highest probability.
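The classification step can be pictured with a short sketch. This is a minimal illustration, not the authors' code: it assumes a Hugging Face causal LM, and the prompt template, checkpoint name, and instruction wording are placeholders. The idea the paper describes is that the prediction is read off the next-token probabilities of "Yes" versus "No".

```python
# Minimal sketch of prompt-based Yes/No property prediction via next-token
# probabilities. Hypothetical template and checkpoint; not the authors' code.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "meta-llama/Llama-3.1-8B-Instruct"  # one of the seven models studied
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME, torch_dtype=torch.bfloat16)
model.eval()

def predict_property(smiles: str, description: str, task_instruction: str) -> float:
    """Return the probability mass on 'Yes' among {'Yes', 'No'} as the next token."""
    prompt = (
        "You are an expert in drug discovery.\n"   # system framing noted in the paper
        f"SMILES: {smiles}\n"
        f"Description: {description}\n"            # hallucinated description goes here
        f"{task_instruction} Answer with Yes or No.\n"
        "Answer:"
    )
    inputs = tokenizer(prompt, return_tensors="pt")
    with torch.no_grad():
        next_token_logits = model(**inputs).logits[0, -1]  # logits for the next token
    # First sub-token of each answer word; a common approximation for such scoring.
    yes_id = tokenizer.encode(" Yes", add_special_tokens=False)[0]
    no_id = tokenizer.encode(" No", add_special_tokens=False)[0]
    probs = torch.softmax(next_token_logits[[yes_id, no_id]], dim=-1)
    return probs[0].item()
```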

The study examines how hallucinations generated by different LLMs impact performance in molecular property prediction tasks. Experiments use a standardized prompt format to compare predictions based on SMILES strings alone, SMILES with MolT5-generated descriptions, and hallucinated descriptions from various LLMs. Five MoleculeNet datasets were analyzed using ROC-AUC scores. Results show that hallucinations generally improve performance over SMILES or MolT5 baselines, with GPT-4o achieving the highest gains. Larger models benefit more from hallucinations, but improvements plateau beyond 8 billion parameters. Temperature settings influence hallucination quality, with intermediate values yielding the best performance enhancements.
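As a sketch of how such a comparison might be scored, the snippet below computes ROC-AUC over a labeled dataset for different description sources using scikit-learn. `predict_property` is the function sketched above, and the dataset variables, description dictionaries, and task question are illustrative stand-ins for the MoleculeNet tasks, not the authors' setup.

```python
# Hypothetical evaluation loop in the spirit of the paper's comparison:
# SMILES only vs. MolT5 descriptions vs. hallucinated LLM descriptions.
from sklearn.metrics import roc_auc_score

def evaluate(dataset, task_instruction, descriptions=None):
    """dataset: iterable of (smiles, label); descriptions: optional dict SMILES -> text."""
    labels, scores = [], []
    for smiles, label in dataset:
        desc = descriptions.get(smiles, "") if descriptions else ""
        scores.append(predict_property(smiles, desc, task_instruction))
        labels.append(label)
    return roc_auc_score(labels, scores)  # threshold-free ranking metric

# Illustrative usage on a BBBP-style task (datasets/descriptions are placeholders):
# for name, descs in [("SMILES only", None), ("MolT5", molt5_descs), ("GPT-4o", gpt4o_descs)]:
#     auc = evaluate(bbbp_test, "Does this molecule cross the blood-brain barrier?", descs)
#     print(name, auc)
```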

In conclusion, the study explores the potential benefits of hallucinations in LLMs for drug discovery tasks. By hypothesizing that hallucinations can enhance performance, the research evaluates seven LLMs across five datasets using hallucinated molecule descriptions integrated into prompts. Results confirm that hallucinations improve LLM performance compared to baseline prompts without hallucinations. Notably, Llama-3.1-8B achieved an 18.35% ROC-AUC gain. GPT-4o-generated hallucinations provided consistent improvements across models. Findings reveal that larger model sizes generally benefit more from hallucinations, while factors like generation temperature have minimal impact. The study highlights hallucinations’ creative potential in AI and encourages further exploration of drug discovery applications.


Check out the Paper. All credit for this research goes to the researchers of this project.


Related tags

LLM hallucinations · Drug discovery · Performance improvement · Experimental research