MIT News - Artificial intelligence, December 21, 2024
Ecologists find computer vision models’ blind spots in retrieving wildlife images

Researchers are exploring multimodal vision language models (VLMs) as a way to help ecologists retrieve information from massive nature image datasets. Tests on the INQUIRE dataset show that large VLMs perform well on simple queries, such as identifying debris on a reef, but fall short on queries requiring expert knowledge, such as identifying specific biological conditions or behaviors. For example, VLMs can find jellyfish on a beach but struggle to recognize axanthism in a green frog. The findings indicate that VLMs need much more domain-specific training data to handle complex queries. INQUIRE was built to track how well VLMs understand scientific terminology and, ultimately, to help researchers automatically find the exact images they need, providing an important tool for biodiversity and climate change analysis.

🔍 Large VLMs perform well on simple image retrieval tasks, such as identifying debris on a reef, but struggle with complex queries that require expert knowledge, such as identifying specific biological conditions or behaviors.

🧪 The INQUIRE dataset was built to test how well VLMs understand complex queries from ecology and environmental science; it contains 5 million wildlife images and 250 search prompts supplied by ecologists.

📈 Experiments show that larger models are generally more effective on both simple and complex searches thanks to their broader training data, yet even a large model like GPT-4o reached only 59.6 percent precision when re-ranking relevant images.

🌱 INQUIRE was created through discussions with ecologists, biologists, and other experts about the types of images they need, including animals' unique physical conditions and behaviors, such as rare events like "a hermit crab using plastic waste as its shell."

🎯 The researchers are working with iNaturalist to develop a query system that lets users filter searches by species, and to improve the re-ranking system, helping scientists and anyone interested in nature find the images they want more quickly.

Try taking a picture of each of North America's roughly 11,000 tree species, and you’ll have a mere fraction of the millions of photos within nature image datasets. These massive collections of snapshots — ranging from butterflies to humpback whales — are a great research tool for ecologists because they provide evidence of organisms’ unique behaviors, rare conditions, migration patterns, and responses to pollution and other forms of climate change.

While comprehensive, nature image datasets aren’t yet as useful as they could be. It’s time-consuming to search these databases and retrieve the images most relevant to your hypothesis. You’d be better off with an automated research assistant — or perhaps artificial intelligence systems called multimodal vision language models (VLMs). They’re trained on both text and images, making it easier for them to pinpoint finer details, like the specific trees in the background of a photo.

But just how well can VLMs assist nature researchers with image retrieval? A team from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL), University College London, iNaturalist, and elsewhere designed a performance test to find out. Each VLM’s task: locate and reorganize the most relevant results within the team’s “INQUIRE” dataset, composed of 5 million wildlife pictures and 250 search prompts from ecologists and other biodiversity experts. 

Looking for that special frog

In these evaluations, the researchers found that larger, more advanced VLMs, which are trained on far more data, can sometimes get researchers the results they want to see. The models performed reasonably well on straightforward queries about visual content, like identifying debris on a reef, but struggled significantly with queries requiring expert knowledge, like identifying specific biological conditions or behaviors. For example, VLMs somewhat easily uncovered examples of jellyfish on the beach, but struggled with more technical prompts like “axanthism in a green frog,” a condition that limits a frog’s ability to make its skin yellow.

Their findings indicate that the models need much more domain-specific training data to process difficult queries. MIT PhD student Edward Vendrow, a CSAIL affiliate who co-led work on the dataset in a new paper, believes that by becoming familiar with more informative data, the VLMs could one day be great research assistants. “We want to build retrieval systems that find the exact results scientists seek when monitoring biodiversity and analyzing climate change,” says Vendrow. “Multimodal models don’t quite understand more complex scientific language yet, but we believe that INQUIRE will be an important benchmark for tracking how they improve in comprehending scientific terminology and ultimately helping researchers automatically find the exact images they need.”

The team’s experiments illustrated that larger models tended to be more effective for both simpler and more intricate searches due to their expansive training data. They first used the INQUIRE dataset to test if VLMs could narrow a pool of 5 million images to the top 100 most-relevant results (also known as “ranking”). For straightforward search queries like “a reef with manmade structures and debris,” relatively large models like “SigLIP” found matching images, while smaller-sized CLIP models struggled. According to Vendrow, larger VLMs are “only starting to be useful” at ranking tougher queries.
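To illustrate what this “ranking” step involves, here is a minimal sketch (not the team’s actual pipeline; the checkpoint name, query wording, and image file paths are illustrative assumptions) of zero-shot text-to-image retrieval with a CLIP-style model via the open_clip library: the text query and each image are embedded into a shared space, and the images with the highest cosine similarity to the query are kept.

```python
# Sketch of CLIP-style ranking: embed the query and images, keep the closest matches.
import torch
import open_clip
from PIL import Image

# Load an example CLIP checkpoint (placeholder choice, not the paper's models).
model, _, preprocess = open_clip.create_model_and_transforms(
    "ViT-B-32", pretrained="laion2b_s34b_b79k")
tokenizer = open_clip.get_tokenizer("ViT-B-32")
model.eval()

query = "a reef with manmade structures and debris"   # prompt quoted in the article
image_paths = ["reef_001.jpg", "reef_002.jpg"]         # placeholder file names

with torch.no_grad():
    # Embed and normalize the text query.
    text_feat = model.encode_text(tokenizer([query]))
    text_feat = text_feat / text_feat.norm(dim=-1, keepdim=True)

    # Embed and normalize every candidate image.
    feats = []
    for path in image_paths:
        img = preprocess(Image.open(path)).unsqueeze(0)
        feat = model.encode_image(img)
        feats.append(feat / feat.norm(dim=-1, keepdim=True))
    image_feats = torch.cat(feats)

# Cosine similarity between the query and each image; higher means more relevant.
scores = (image_feats @ text_feat.T).squeeze(1)
top = scores.topk(k=min(100, len(image_paths)))        # "ranking": keep the top 100
for idx in top.indices.tolist():
    print(image_paths[idx], round(float(scores[idx]), 3))
```

In practice the image embeddings for a 5-million-image collection would be precomputed and indexed once, so that each new query only requires one text embedding and a nearest-neighbor lookup.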

Vendrow and his colleagues also evaluated how well multimodal models could re-rank those 100 results, reorganizing which images were most pertinent to a search. In these tests, even huge models trained on more curated data, like GPT-4o, struggled: GPT-4o’s precision score was only 59.6 percent, the highest score achieved by any model.
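For context on what re-ranking with a multimodal LLM can look like, here is a hedged sketch (not the paper’s evaluation code; the prompt, 0-10 scoring scale, and file names are assumptions) that asks a model such as GPT-4o to rate how well each candidate image matches the query, then sorts the candidates by that rating.

```python
# Sketch of LLM-based re-ranking: score each shortlisted image against the query.
import base64
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def relevance_score(query: str, image_path: str) -> float:
    """Ask the model for a 0-10 rating of how well one image matches the query."""
    with open(image_path, "rb") as f:
        b64 = base64.b64encode(f.read()).decode()
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{
            "role": "user",
            "content": [
                {"type": "text",
                 "text": (f"On a scale of 0-10, how well does this image match "
                          f"the query '{query}'? Reply with only the number.")},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/jpeg;base64,{b64}"}},
            ],
        }],
    )
    try:
        return float(response.choices[0].message.content.strip())
    except (TypeError, ValueError):
        return 0.0  # treat unparseable replies as irrelevant

query = "axanthism in a green frog"            # prompt quoted in the article
candidates = ["frog_001.jpg", "frog_002.jpg"]  # e.g. the top-ranked images from step one
reranked = sorted(candidates, key=lambda p: relevance_score(query, p), reverse=True)
print(reranked)
```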

The researchers presented these results at the Conference on Neural Information Processing Systems (NeurIPS) earlier this month.

Inquiring for INQUIRE

The INQUIRE dataset includes search queries based on discussions with ecologists, biologists, oceanographers, and other experts about the types of images they’d look for, including animals’ unique physical conditions and behaviors. A team of annotators then spent 180 hours searching the iNaturalist dataset with these prompts, carefully combing through roughly 200,000 results to label 33,000 matches that fit the prompts.

For instance, the annotators used queries like “a hermit crab using plastic waste as its shell” and “a California condor tagged with a green ‘26’” to identify the subsets of the larger image dataset that depict these specific, rare events.

Then, the researchers used the same search queries to see how well VLMs could retrieve iNaturalist images. The annotators’ labels revealed when the models struggled to understand scientists’ keywords, as their results included images previously tagged as irrelevant to the search. For example, VLMs’ results for “redwood trees with fire scars” sometimes included images of trees without any markings.

“This is careful curation of data, with a focus on capturing real examples of scientific inquiries across research areas in ecology and environmental science,” says Sara Beery, the Homer A. Burnell Career Development Assistant Professor at MIT, CSAIL principal investigator, and co-senior author of the work. “It’s proved vital to expanding our understanding of the current capabilities of VLMs in these potentially impactful scientific settings. It has also outlined gaps in current research that we can now work to address, particularly for complex compositional queries, technical terminology, and the fine-grained, subtle differences that delineate categories of interest for our collaborators.”

“Our findings imply that some vision models are already precise enough to aid wildlife scientists with retrieving some images, but many tasks are still too difficult for even the largest, best-performing models,” says Vendrow. “Although INQUIRE is focused on ecology and biodiversity monitoring, the wide variety of its queries means that VLMs that perform well on INQUIRE are likely to excel at analyzing large image collections in other observation-intensive fields.”

Inquiring minds want to see

Taking their project further, the researchers are working with iNaturalist to develop a query system to better help scientists and other curious minds find the images they actually want to see. Their working demo allows users to filter searches by species, enabling quicker discovery of relevant results like, say, the diverse eye colors of cats. Vendrow and co-lead author Omiros Pantazis, who recently received his PhD from University College London, also aim to improve the re-ranking system by augmenting current models to provide better results.

University of Pittsburgh Associate Professor Justin Kitzes highlights INQUIRE’s ability to uncover secondary data. “Biodiversity datasets are rapidly becoming too large for any individual scientist to review,” says Kitzes, who wasn’t involved in the research. “This paper draws attention to a difficult and unsolved problem, which is how to effectively search through such data with questions that go beyond simply ‘who is here’ to ask instead about individual characteristics, behavior, and species interactions. Being able to efficiently and accurately uncover these more complex phenomena in biodiversity image data will be critical to fundamental science and real-world impacts in ecology and conservation.”

Vendrow, Pantazis, and Beery wrote the paper with iNaturalist software engineer Alexander Shepard, University College London professors Gabriel Brostow and Kate Jones, University of Edinburgh associate professor and co-senior author Oisin Mac Aodha, and University of Massachusetts at Amherst Assistant Professor Grant Van Horn, who served as co-senior author. Their work was supported, in part, by the Generative AI Laboratory at the University of Edinburgh, the U.S. National Science Foundation/Natural Sciences and Engineering Research Council of Canada Global Center on AI and Biodiversity Change, a Royal Society Research Grant, and the Biome Health Project funded by the World Wildlife Fund United Kingdom.
