<section class="blog-post-content lb-rtxt"><p>Earlier this year, we <a href="https://variety.com/2025/film/news/ai-twelvelabs-amazon-bedrock-1236361144/">preannounced</a> that TwelveLabs video understanding models were coming to <a href="https://aws.amazon.com/bedrock/">Amazon Bedrock</a>. Today, we’re announcing that the models are now available for searching through videos, classifying scenes, summarizing, and extracting insights with precision and reliability.</p><p><a href="https://www.twelvelabs.io/">TwelveLabs</a> has introduced Marengo, a video embedding model proficient at tasks such as search and classification, and Pegasus, a video language model that can generate text based on video data. These models are <a href="https://press.aboutamazon.com/2024/12/generative-ai-startup-twelve-labs-works-with-aws-to-make-videos-as-searchable-as-text">trained on Amazon SageMaker HyperPod</a> to deliver groundbreaking video analysis that provides text summaries, metadata generation, and creative optimization.</p><p>With the TwelveLabs models in Amazon Bedrock, you can find specific moments using natural language video search queries like “show me the first touchdown of the game” or “find the scene where the main characters first meet” and instantly jump to those exact moments. You can also build applications that understand video content by generating descriptive text such as titles, topics, hashtags, summaries, chapters, or highlights for discovering insights and connections without requiring predefined labels or categories.</p><p>For example, you can find recurring themes in customer feedback or spot product usage patterns that weren’t obvious before. Whether you have hundreds or thousands of hours of video content, you can now transform that entire library into a searchable knowledge resource while maintaining enterprise-grade security and performance.</p><p>Let’s take a look at the <a href="https://www.youtube.com/watch?v=LWUh5wDUzAY">Marengo</a> and <a href="https://www.youtube.com/watch?v=XQDlLnCC_8M">Pegasus</a> videos that TwelveLabs has published.</p><p>You can transform video workflows with these models across industries. Media producers and editors can instantly locate specific scenes or dialogue, so they can focus on storytelling rather than sifting through hours of footage. Marketing teams are streamlining their advertising workflows by quickly personalizing content to resonate with various audiences, while security teams are using the technology to proactively identify potential risks by spotting patterns across multiple video feeds.</p><p><strong class="c6">Getting started with TwelveLabs models in Amazon Bedrock</strong><br />Before getting started, if you’re new to using TwelveLabs models, go to the <a href="https://us-west-2.console.aws.amazon.com/bedrock/home?region=us-west-2#modelaccess&amp;trk=769a1a2b-8c19-4976-9c45-b6b1226c7d20&amp;sc_channel=el">Amazon Bedrock console</a> and choose <strong>Model access</strong> in the bottom left navigation pane. To access the latest TwelveLabs models, request access for <strong>Marengo Embed 2.7</strong> and <strong>Pegasus 1.2</strong> under <strong>TwelveLabs</strong>.</p><p><img class="aligncenter size-full wp-image-98164 c7" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/07/11/2025-twelvelabs-bedrock-1-model-access.jpg" alt="" width="2214" height="445" /></p>
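<p>If you prefer to check availability programmatically, you can list the TwelveLabs models with the AWS SDK for Python (Boto3). This is a minimal sketch, and the <code>byProvider</code> filter value is an assumption; confirm the provider name returned in your account:</p><pre class="lang-python">import boto3

# The "bedrock" client exposes control-plane operations such as listing models.
bedrock = boto3.client("bedrock", region_name="us-west-2")

# Filter the foundation model catalog to TwelveLabs models
# (provider name assumed to be "TwelveLabs").
response = bedrock.list_foundation_models(byProvider="TwelveLabs")

for model in response["modelSummaries"]:
    print(model["modelId"], "-", model["modelName"])</pre>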
<p>To use TwelveLabs models in Amazon Bedrock, choose <strong>Chat/Text Playground</strong> under <strong>Test</strong> in the left navigation pane. Choose <strong>Select model</strong>, select <strong>TwelveLabs</strong> as the category and <strong>Pegasus</strong> as the model, and then choose <strong>Apply</strong>.</p><p><img class="aligncenter size-full wp-image-98165 c7" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/07/11/2025-twelvelabs-bedrock-2-chat-playground-1.jpg" alt="" width="2420" height="1192" /></p><p>To search through or generate text from your video, upload the video to an <a href="https://aws.amazon.com/s3/?trk=769a1a2b-8c19-4976-9c45-b6b1226c7d20&amp;sc_channel=el">Amazon Simple Storage Service (Amazon S3)</a> bucket or provide it as a Base64-encoded video string.</p>
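<p>If your video isn’t in Amazon S3 yet, you can upload it with a few lines of Boto3. A minimal sketch, where the bucket name and object key are placeholders for your own:</p><pre class="lang-python">import boto3

s3 = boto3.client("s3")

# Placeholders: replace with your own bucket and object key.
bucket_name = "your-video-bucket"
object_key = "videos/sample.mp4"

# upload_file handles multipart uploads automatically for large videos.
s3.upload_file("sample.mp4", bucket_name, object_key)

print(f"Uploaded to s3://{bucket_name}/{object_key}")</pre>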
<p>I’ll use a <a href="https://www.youtube.com/watch?v=XUt7TuMv7yQ">sample video</a> generated with Amazon Nova Reel with the prompt: “A cute raccoon playing guitar underwater.”</p><p>Enter the <strong>S3 URI</strong> of your uploaded video and the <strong>S3 Bucket Owner</strong>, and then run your prompt: “Tell me about the video by timeline”.</p><p><img class="aligncenter size-full wp-image-98168 c7" src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2025/07/11/2025-twelvelabs-bedrock-2-chat-playground-2.png" alt="" width="2274" height="1372" /></p><p>You get the following model response:</p><p><code>The video opens with an underwater scene featuring a sandy seabed and swaying seaweed, creating a serene atmosphere. At around 5 seconds, a raccoon appears, swimming towards the camera with a curious expression. By 10 seconds, the raccoon is seen swimming near the seabed. The focus then shifts to a group of koi fish swimming gracefully near the seabed from 11 to 13 seconds.</code></p><p><code>At 14 seconds, the scene transitions to an ukulele resting on the seabed, with the camera zooming in to highlight its details. The raccoon reappears at 20 seconds, swimming among the koi fish, and by 23 seconds, it begins to play the ukulele. The raccoon continues to play the ukulele from 29 to 41 seconds, with the koi fish swimming around it, adding a lively and colorful touch to the underwater setting.</code></p><p><code>From 42 to 49 seconds, the raccoon is shown playing the ukulele with the koi fish swimming around it. The video concludes with a screen displaying "Video generated with Amazon Nova Reel 1.1" and "Available in Amazon Bedrock" from 50 to 54 seconds, followed by the AWS logo from 55 to 57 seconds.</code></p><p>The TwelveLabs models can be easily integrated into your applications using the <a href="https://docs.aws.amazon.com/bedrock/latest/userguide/conversation-inference.html">Amazon Bedrock Converse API</a>, which provides a unified interface for conversational AI interactions.</p><p>Here’s an example of how to use the <a href="https://aws.amazon.com/sdk-for-python/">AWS SDK for Python (Boto3)</a> with the TwelveLabs Pegasus model:</p><pre class="lang-python">import boto3

AWS_REGION = "us-east-1"
# Cross-Region inference profile ID for Pegasus 1.2; the "us." prefix
# routes requests to the US Regions where the model is hosted.
MODEL_ID = "us.twelvelabs.pegasus-1-2-v1:0"
VIDEO_PATH = "sample.mp4"

def read_file(file_path: str) -> bytes:
    """Read a file in binary mode."""
    try:
        with open(file_path, "rb") as file:
            return file.read()
    except Exception as e:
        raise Exception(f"Error reading file {file_path}: {str(e)}")

bedrock_runtime = boto3.client(
    service_name="bedrock-runtime",
    region_name=AWS_REGION
)

request_body = {
    "messages": [
        {
            "role": "user",
            "content": [
                {"text": "tell me about the video"},
                {
                    "video": {
                        "format": "mp4",
                        "source": {"bytes": read_file(VIDEO_PATH)}
                    }
                },
            ],
        }
    ]
}

response = bedrock_runtime.converse(
    modelId=MODEL_ID,
    messages=request_body["messages"]
)

print(response["output"]["message"]["content"][-1]["text"])</pre><p>The TwelveLabs Marengo Embed 2.7 model generates vector embeddings from video, text, audio, or image inputs. These embeddings can be used for similarity search, clustering, and other machine learning (ML) tasks. The model supports asynchronous inference through the <a href="https://docs.aws.amazon.com/bedrock/latest/APIReference/API_runtime_StartAsyncInvoke.html">Bedrock AsyncInvokeModel API</a>.</p><p>For a video source, you can send a request in JSON format to the TwelveLabs Marengo Embed 2.7 model using the <code>AsyncInvokeModel</code> API:</p><pre class="lang-json">{
    "modelId": "twelvelabs.marengo-embed-2-7-v1:0",
    "modelInput": {
        "inputType": "video",
        "mediaSource": {
            "s3Location": {
                "uri": "s3://your-video-object-s3-path",
                "bucketOwner": "your-video-object-s3-bucket-owner-account"
            }
        }
    },
    "outputDataConfig": {
        "s3OutputDataConfig": {
            "s3Uri": "s3://your-bucket-name"
        }
    }
}</pre><p>The response is delivered to the specified S3 location:</p><pre class="lang-json">{
    "embedding": [0.345, -0.678, 0.901, ...],
    "embeddingOption": "visual-text",
    "startSec": 0.0,
    "endSec": 5.0
}</pre>
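<p>Here’s a minimal sketch of submitting that request with Boto3 and polling until the asynchronous job completes. The S3 locations are placeholders, and error handling is omitted for brevity:</p><pre class="lang-python">import time

import boto3

AWS_REGION = "us-east-1"
MODEL_ID = "twelvelabs.marengo-embed-2-7-v1:0"

bedrock_runtime = boto3.client("bedrock-runtime", region_name=AWS_REGION)

# Submit the asynchronous embedding job; output is written to S3.
invocation = bedrock_runtime.start_async_invoke(
    modelId=MODEL_ID,
    modelInput={
        "inputType": "video",
        "mediaSource": {
            "s3Location": {
                "uri": "s3://your-video-object-s3-path",  # placeholder
                "bucketOwner": "your-video-object-s3-bucket-owner-account"  # placeholder
            }
        }
    },
    outputDataConfig={
        "s3OutputDataConfig": {"s3Uri": "s3://your-bucket-name"}  # placeholder
    }
)

invocation_arn = invocation["invocationArn"]

# Poll the job status; embeddings land in the output bucket when it completes.
while True:
    job = bedrock_runtime.get_async_invoke(invocationArn=invocation_arn)
    status = job["status"]
    if status != "InProgress":
        break
    time.sleep(10)

print(f"Job finished with status: {status}")</pre>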
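<p>Once the embeddings are in hand, similarity search comes down to comparing vectors. A minimal sketch using cosine similarity, with illustrative vectors standing in for real Marengo output:</p><pre class="lang-python">import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two equal-length embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Illustrative vectors; in practice these come from Marengo output segments.
query_embedding = [0.12, -0.45, 0.83]
segment_embedding = [0.10, -0.40, 0.90]

print(f"Similarity: {cosine_similarity(query_embedding, segment_embedding):.3f}")</pre>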
<p>To help you get started, check out a <a href="https://docs.aws.amazon.com/bedrock/latest/userguide/service_code_examples.html">broad range of code examples</a> for multiple use cases and a variety of programming languages. To learn more, visit <a href="https://docs.aws.amazon.com/bedrock/latest/userguide/model-parameters-pegasus.html">TwelveLabs Pegasus 1.2</a> and <a href="https://docs.aws.amazon.com/bedrock/latest/userguide/model-parameters-marengo.html">TwelveLabs Marengo Embed 2.7</a> in the AWS Documentation.</p><p><strong class="c6">Now available</strong><br />TwelveLabs models are generally available today in Amazon Bedrock: the Marengo model in the US East (N. Virginia), Europe (Ireland), and Asia Pacific (Seoul) Regions, and the Pegasus model in the US West (Oregon) and Europe (Ireland) Regions, accessible with cross-Region inference from US and Europe Regions. Check the <a href="https://docs.aws.amazon.com/bedrock/latest/userguide/models-regions.html?trk=769a1a2b-8c19-4976-9c45-b6b1226c7d20&amp;sc_channel=el">full Region list</a> for future updates. To learn more, visit the <a href="https://aws.amazon.com/bedrock/twelvelabs/?trk=769a1a2b-8c19-4976-9c45-b6b1226c7d20&amp;sc_channel=el">TwelveLabs in Amazon Bedrock</a> product page and the <a href="https://aws.amazon.com/bedrock/pricing/?trk=769a1a2b-8c19-4976-9c45-b6b1226c7d20&amp;sc_channel=el">Amazon Bedrock pricing</a> page.</p><p>Give TwelveLabs models a try on the <a href="https://console.aws.amazon.com/bedrock?trk=769a1a2b-8c19-4976-9c45-b6b1226c7d20&amp;sc_channel=el">Amazon Bedrock console</a> today, and send feedback to <a href="https://repost.aws/tags/TAQeKlaPaNRQ2tWB6P7KrMag/amazon-bedrock">AWS re:Post for Amazon Bedrock</a> or through your usual AWS Support contacts.</p><p>— <a href="https://twitter.com/channyun">Channy</a></p></section>