AI News, October 25, 2024
AI is helping brands avoid controversial influencer partnerships  

Lightricks has launched SafeCollab, an AI-powered influencer vetting module designed to help brands spot potential risks by analyzing influencers' social media content, so they can avoid partnering with creators who could damage their image. The tool uses large language models to assess an influencer's risk quickly, saving brands time and helping keep collaborations safe. For example, the brand Boys Lie was drawn into controversy over its collaboration with Brooke Schofield, who had made racist remarks in the past. SafeCollab is intended to help brands avoid incidents like this and keep their partnerships on track.

👨‍💻 SafeCollab is an AI-powered influencer vetting module that helps brands spot potential risks by analyzing influencers' social media content, so they can avoid partnering with creators who could damage their image.

🔍 SafeCollab uses large language models to assess an influencer's risk quickly, saving brands time and helping keep collaborations safe. It can analyze an influencer's video, audio, image, and text content to identify potential risk factors such as racist remarks, violent content, and inappropriate images.

💡 SafeCollab can help brands avoid incidents like Boys Lie's collaboration with Brooke Schofield, who had made racist remarks in the past. The tool is meant to surface these risks before a partnership begins so the collaboration can proceed smoothly.

📈 SafeCollab helps brands quickly identify controversial influencers and avoid working with them, protecting brand image and reputation.

Influencer partnerships can be great for brands looking to pump out content that promotes their products and services in an authentic way. These types of engagements can yield significant brand awareness and brand sentiment lift, but they can be risky too. Social media stars are unpredictable at the best of times, with many deliberately chasing controversy to increase their fame. 

These antics don’t always reflect well on the brands that collaborate with especially attention-hungry influencers, leaving marketers no choice but to conduct careful due diligence on the individuals they work with. Luckily, that task can be made much easier thanks to the evolving utility of AI.  

Lightricks, a software company best known for its AI-powered video and image editing tools, is once again expanding the AI capabilities of its suite with this week’s announcement of SafeCollab. An AI-powered influencer vetting module that lives within the company’s Popular Pays creator collaboration platform, SafeCollab automates the vetting process for marketers.

Traditionally, marketers have had no choice but to spend hours researching the backgrounds of influencers, looking through years’ worth of video uploads and social media posts. It’s a lengthy, manual process that can only be automated with intelligent tools. 

SafeCollab provides that intelligence with its underlying large language models, which do the job of investigating influencers to ensure the image they portray is consistent with brand values. The LLMs perform what amounts to a risk assessment of creators’ content across multiple social media channels in minutes, searching through hours of videos, audio uploads, images and text.  

In doing this, SafeCollab significantly reduces the time it takes for brand marketers to perform due diligence on the social media influencers they’re considering partnering with. Likewise, when creators opt in to SafeCollab, they make it easier for marketers to understand the brand safety implications of working together, reducing friction from campaign lifecycles. 
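Lightricks hasn’t published how SafeCollab is built, but the workflow described above can be sketched in rough terms: gather the text for each post (captions, transcripts of audio and video, extracted image text) and ask a language model to flag matches against a set of risk categories. The Python sketch below is purely illustrative; the `Post` structure, the `call_llm` placeholder, and the category list are assumptions for the example, not part of SafeCollab or Popular Pays.

```python
# Hypothetical sketch of LLM-based content risk screening.
# `call_llm` stands in for any chat-completion API; it is not a real SafeCollab function.
from dataclasses import dataclass

RISK_CATEGORIES = ["hate speech", "graphic violence",
                   "drug or alcohol promotion", "explicit content"]

@dataclass
class Post:
    platform: str   # e.g. "tiktok", "instagram", "youtube"
    text: str       # caption, transcript of audio/video, or text extracted from images

def call_llm(prompt: str) -> str:
    """Placeholder for a large language model call."""
    raise NotImplementedError("wire this up to an LLM provider of your choice")

def assess_post(post: Post) -> str:
    """Ask the model which risk categories, if any, a single post matches."""
    prompt = (
        "You are a brand-safety reviewer. Given the following social media post, "
        f"list any of these risk categories it matches: {', '.join(RISK_CATEGORIES)}. "
        "Answer 'none' if the post is safe.\n\n"
        f"Platform: {post.platform}\nContent: {post.text}"
    )
    return call_llm(prompt)

def assess_creator(posts: list[Post]) -> list[tuple[Post, str]]:
    """Return every post the model flags, paired with the flagged categories."""
    flagged = []
    for post in posts:
        verdict = assess_post(post)
        if verdict.strip().lower() != "none":
            flagged.append((post, verdict))
    return flagged
```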

Brands can’t take chances 

The idea here is to empower brand marketers to avoid working with creators whose content is not aligned with the brand’s values – as well as those who have a tendency to kick up a storm.  

Such due diligence is vital, for even the most innocuous influencers can have some skeletons in their closets. A case in point is the popular lifestyle influencer Brooke Schofield, who has more than 2.2 million followers on TikTok and co-hosts the “Canceled” podcast on YouTube. With her large following, good looks and keen sense of fashion, Schofield looked like a great fit for the clothing brand Boys Lie, which collaborated with her on an exclusive capsule collection called “Bless His Heart.” 

However, Boys Lie quickly came to regret its collaboration with Schofield when a scandal erupted in April after fans unearthed a number of years-old social media posts where she expressed racist views.  

The posts, which were uploaded on X between 2012 and 2015 when Schofield was a teenager, contained a string of racist profanities and insulting jokes about Black people’s hairstyles. In one post, she vigorously defended George Zimmerman, a white American who was controversially acquitted of the murder of the Black teenager Trayvon Martin.  

Schofield apologized profusely for her posts, admitting that they were “very hurtful” while stressing that she’s a changed person, having had time to “learn and grow and formulate my own opinions.”  

However, Boys Lie decided it had no option but to drop its association with Schofield. After posting a statement on Instagram saying it was “working on a solution,” the company quietly withdrew the clothing collection the two had collaborated on.

Accelerating due diligence  

If the marketing team at Boys Lie had access to a tool like SafeCollab, they likely would have uncovered Schofield’s controversial posts long before commissioning the collaboration. The tool, which is a part of Lightricks’ influencer marketing platform Popular Pays, is all about helping brands to automate their due diligence processes when working with social media creators.  

By analyzing years of creators’ posting histories across platforms like Instagram, TikTok, and YouTube, it can check everything they’ve posted online to make sure there’s nothing that might reflect badly on a brand.

Brands can define their risk parameters, and the tool will quickly generate a risk assessment against them, so they can confidently choose the influencers they want to work with, safe in the knowledge that their partnerships are unlikely to spark any backlash.
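The article doesn’t say what those risk parameters look like, but one plausible shape is a per-category tolerance that the generated assessment is scored against. The snippet below is a hypothetical illustration under that assumption; the category names, scoring scale, and `passes_screening` helper are invented for the example, not SafeCollab’s real configuration.

```python
# Hypothetical brand risk parameters: tolerance per category on a 0-1 scale,
# where 0 means zero tolerance and 1 means the category is not screened at all.
brand_risk_parameters = {
    "hate_speech": 0.0,          # any detection is disqualifying
    "graphic_violence": 0.1,
    "drug_or_alcohol_use": 0.3,  # incidental mentions tolerated
    "profanity": 0.5,
}

def passes_screening(category_scores: dict[str, float],
                     parameters: dict[str, float]) -> bool:
    """A creator passes only if every category score stays within the brand's tolerance."""
    return all(score <= parameters.get(category, 1.0)
               for category, score in category_scores.items())
```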

Without a platform like SafeCollab, the task of performing all of this due diligence falls on the shoulders of marketers, and that means spending hours trawling through each influencer’s profiles, checking everything and anything they’ve ever said or done to ensure there’s nothing in their past that the brand would rather not be associated with.  

When we consider that the scope of work might include audio voiceovers, extensive comment threads and frame-by-frame analyses of video content, it’s a painstaking process that never really ends. After all, the top influencers have a habit of churning out fresh content every day. Careful marketers have no choice but to continuously monitor what they’re posting.  

Beyond initial history scans, SafeCollab’s real-time monitoring algorithms take over, generating instant alerts for any problematic content, such as posts that contain graphic language or inappropriate images, promote violence or drug and alcohol use, or touch on whatever else the brand deems unsavory. 
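Continuous monitoring of this kind typically amounts to polling connected platforms for new posts, re-running the same risk check, and alerting when something crosses the brand’s thresholds. The sketch below shows that loop under those assumptions; `fetch_new_posts` and `send_alert` are hypothetical placeholders, and the risk check is passed in (for instance, the `assess_post` function from the earlier sketch) rather than being a real SafeCollab API.

```python
import time
from typing import Callable

def fetch_new_posts(creator_id: str, since: float) -> list:
    """Placeholder: pull posts published after `since` from each connected platform."""
    raise NotImplementedError

def send_alert(creator_id: str, post, verdict: str) -> None:
    """Placeholder: notify the brand's marketing team (email, Slack, dashboard, etc.)."""
    raise NotImplementedError

def monitor(creator_id: str,
            risk_check: Callable[[object], str],
            poll_interval_seconds: int = 3600) -> None:
    """Poll for new content and alert on anything the risk check flags.

    `risk_check` could be the `assess_post` function from the earlier sketch.
    """
    last_checked = time.time()
    while True:
        for post in fetch_new_posts(creator_id, since=last_checked):
            verdict = risk_check(post)
            if verdict.strip().lower() != "none":
                send_alert(creator_id, post, verdict)
        last_checked = time.time()
        time.sleep(poll_interval_seconds)
```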

AI’s expanding applications 

With the launch of SafeCollab, Lightricks is demonstrating yet another use case for generative AI. The company first made a name for itself as a developer of AI-powered video and image editing apps, including Photoleap, Facetune and Videoleap.  

The latter app incorporates AI-powered video filters and text-to-video generative AI functionalities. It also boasts an AI Effects feature, where users can apply specialized AI art styles to achieve the desired vibe for each video they create.  

Lightricks is also the company behind LTX Studio, which is a comprehensive platform that helps advertising production firms and filmmakers to create storyboards and asset-rich pitch decks for their video projects using text-to-video generative AI.  

With all of Lightricks’ AI apps, the primary benefit is that they save users time by automating manual work and bringing creative visions to life, and SafeCollab is a great example of that. By automating the due diligence process from start to finish, marketers can quickly identify controversial influencers they’d rather steer clear of, without spending hours conducting exhaustive research.  


