TechCrunch News · March 5
Experts don’t think AI is ready to be a ‘co-scientist’

Tech giants such as Google have been rolling out AI research assistants that they claim can accelerate scientific research, particularly in literature-dense fields like biomedicine. Many researchers are skeptical, however, seeing these tools as more hype than substance and lacking supporting empirical data. AI's role in scientific discovery remains unclear; it may be better suited to narrowing down a range of possibilities than to solving breakthrough problems. AI shows promise for automating tedious tasks, but it cannot replace human intuition and perseverance. In addition, AI's technical shortcomings and risks, such as its tendency to hallucinate, and the low-quality research it can produce, leave scientists cautious about relying on it.

🤖 Google's AI co-scientist has been criticized for lacking detail and reproducibility; its description of potential in drug repurposing for acute myeloid leukemia is too vague for scientists to take seriously.

🧪 Scientists worry that AI tools could flood the field with misleading research and overwhelm the peer-review process. One study found that AI-fabricated "junk science" is already pouring into Google Scholar.

💡 AI's value in scientific research lies in automating technically difficult or tedious tasks, such as summarizing new academic literature or formatting work to meet specific requirements, rather than generating hypotheses.

🔬 Current AI systems are limited in designing and implementing the studies and analyses needed to verify or disprove hypotheses; they cannot carry out physical experiments and perform poorly on problems with very limited data.

🌍 The way AI is trained and the energy it consumes also raise ethical concerns, and even if all of those ethical issues were resolved, today's AI is still not reliable enough to serve as the foundation for a scientist's work.

Last month, Google announced the “AI co-scientist,” an AI the company said was designed to aid scientists in creating hypotheses and research plans. Google pitched it as a way to uncover new knowledge, but experts think it — and tools like it — fall well short of PR promises.

“This preliminary tool, while interesting, doesn’t seem likely to be seriously used,” Sarah Beery, a computer vision researcher at MIT, told TechCrunch. “I’m not sure that there is demand for this type of hypothesis-generation system from the scientific community.”

Google is the latest tech giant to advance the notion that AI will dramatically speed up scientific research someday, particularly in literature-dense areas such as biomedicine. In an essay earlier this year, OpenAI CEO Sam Altman said that “superintelligent” AI tools could “massively accelerate scientific discovery and innovation.” Similarly, Anthropic CEO Dario Amodei has boldly predicted that AI could help formulate cures for most cancers.

But many researchers don’t consider AI today to be especially useful in guiding the scientific process. Applications like Google’s AI co-scientist appear to be more hype than anything, they say, unsupported by empirical data.

For example, in its blog post describing the AI co-scientist, Google said the tool had already demonstrated potential in areas such as drug repurposing for acute myeloid leukemia, a type of blood cancer that affects bone marrow. Yet the results are so vague that “no legitimate scientist would take [them] seriously,” said Favia Dubyk, a pathologist affiliated with Northwest Medical Center-Tucson in Arizona.

“This could be used as a good starting point for researchers, but […] the lack of detail is worrisome and doesn’t lend me to trust it,” Dubyk told TechCrunch. “The lack of information provided makes it really hard to understand if this can truly be helpful.”

It’s not the first time Google has been criticized by the scientific community for trumpeting a supposed AI breakthrough without providing a means to reproduce the results.

In 2020, Google claimed one of its AI systems trained to detect breast tumors achieved better results than human radiologists. Researchers from Harvard and Stanford published a rebuttal in the journal Nature, saying the lack of detailed methods and code in Google’s research “undermine[d] its scientific value.”

Scientists have also chided Google for glossing over the limitations of its AI tools aimed at scientific disciplines such as materials engineering. In 2023, the company said around 40 “new materials” had been synthesized with the help of one of its AI systems, called GNoME. Yet, an outside analysis found not a single one of the materials was, in fact, net new.

“We won’t truly understand the strengths and limitations of tools like Google’s ‘co-scientist’ until they undergo rigorous, independent evaluation across diverse scientific disciplines,” Ashique KhudaBukhsh, an assistant professor of software engineering at Rochester Institute of Technology, told TechCrunch. “AI often performs well in controlled environments but may fail when applied at scale.”

Part of the challenge in developing AI tools to aid in scientific discovery is anticipating the untold number of confounding factors. AI might come in handy in areas where broad exploration is needed, like narrowing down a vast list of possibilities. But it’s less clear whether AI is capable of the kind of out-of-the-box problem-solving that leads to scientific breakthroughs.

“We’ve seen throughout history that some of the most important scientific advancements, like the development of mRNA vaccines, were driven by human intuition and perseverance in the face of skepticism,” KhudaBukhsh said. “AI, as it stands today, may not be well-suited to replicate that.”

Lana Sinapayen, an AI researcher at Sony Computer Science Laboratories in Japan, believes that tools such as Google’s AI co-scientist focus on the wrong kind of scientific legwork.

Sinapayen sees a genuine value in AI that could automate technically difficult or tedious tasks, like summarizing new academic literature or formatting work to fit a grant application’s requirements. But there isn’t much demand within the scientific community for an AI co-scientist that generates hypotheses, she says — a task from which many researchers derive intellectual fulfillment.

“For many scientists, myself included, generating hypotheses is the most fun part of the job,” Sinapayen told TechCrunch. “Why would I want to outsource my fun to a computer, and then be left with only the hard work to do myself? In general, many generative AI researchers seem to misunderstand why humans do what they do, and we end up with proposals for products that automate the very part that we get joy from.”

Beery noted that often the hardest step in the scientific process is designing and implementing the studies and analyses to verify or disprove a hypothesis — which isn’t necessarily within reach of current AI systems. AI can’t use physical tools to carry out experiments, of course, and it often performs worse on problems for which extremely limited data exists.

“Most science isn’t possible to do entirely virtually — there is frequently a significant component of the scientific process that is physical, like collecting new data and conducting experiments in the lab,” Beery said. “One big limitation of systems [like Google’s AI co-scientist] relative to the actual scientific process, which definitely limits its usability, is context about the lab and researcher using the system and their specific research goals, their past work, their skillset, and the resources they have access to.”

AI’s technical shortcomings and risks — such as its tendency to hallucinate — also make scientists wary of endorsing it for serious work.

KhudaBukhsh fears AI tools could simply end up generating noise in the scientific literature, not elevating progress.

It’s already a problem. A recent study found that AI-fabricated “junk science” is flooding Google Scholar, Google’s free search engine for scholarly literature.

“AI-generated research, if not carefully monitored, could flood the scientific field with lower-quality or even misleading studies, overwhelming the peer-review process,” KhudaBukhsh said. “An overwhelmed peer-review process is already a challenge in fields like computer science, where top conferences have seen an exponential rise in submissions.”

Even well-designed studies could end up being tainted by misbehaving AI, Sinapayen said. While she likes the idea of a tool that could assist with literature review and synthesis, Sinapayen said she wouldn’t trust AI today to execute that work reliably.

“Those are things that various existing tools are claiming to do, but those are not jobs that I would personally leave up to current AI,” Sinapayen said, adding that she takes issue with the way many AI systems are trained and the amount of energy they consume, as well. “Even if all the ethical issues […] were solved, current AI is just not reliable enough for me to base my work on their output one way or another.”
