AWS Blogs · March 31, 17:08
Get insights from multimodal content with Amazon Bedrock Data Automation, now generally available

 

Amazon has launched Bedrock Data Automation, which extracts valuable insights from multimodal content such as documents, images, audio, and video, reducing development time and effort. It is now generally available and supports cross-region inference endpoints.

🎯 Amazon Bedrock Data Automation extracts insights from multimodal content, reducing development cost

💻 Available as a standalone feature or as a parser for Amazon Bedrock Knowledge Bases

📋 Supports two output modes: standard output and custom output

🌐 Now generally available with support for cross-region inference endpoints

*March 21, 2025: Updated with a link to a [new post focused on use cases for Amazon Bedrock Data Automation](https://aws.amazon.com/blogs/machine-learning/unleashing-the-multimodal-power-of-amazon-bedrock-data-automation-to-transform-unstructured-data-into-actionable-insights/).*

Many applications need to interact with content available through different modalities. Some of these applications process complex documents, such as insurance claims and medical bills. Mobile apps need to analyze user-generated media. Organizations need to build a semantic index on top of their digital assets that include documents, images, audio, and video files. However, getting insights from unstructured multimodal content is not easy to set up: you have to implement processing pipelines for the different data formats and go through multiple steps to get the information you need. That usually means having multiple models in production for which you have to handle cost optimizations (through fine-tuning and prompt engineering), safeguards (for example, against hallucinations), integrations with the target applications (including data formats), and model updates.

To make this process easier, we [introduced in preview during AWS re:Invent](https://aws.amazon.com/blogs/aws/new-amazon-bedrock-capabilities-enhance-data-processing-and-retrieval/) [Amazon Bedrock Data Automation](https://aws.amazon.com/bedrock/bda/), a capability of [Amazon Bedrock](https://aws.amazon.com/bedrock/) that streamlines the generation of valuable insights from unstructured, multimodal content such as documents, images, audio, and videos. With Bedrock Data Automation, you can reduce the development time and effort needed to build intelligent document processing, media analysis, and other multimodal data-centric automation solutions.

You can use Bedrock Data Automation as a standalone feature or as a parser for [Amazon Bedrock Knowledge Bases](https://aws.amazon.com/bedrock/knowledge-bases/) to index insights from multimodal content and provide more relevant responses for [Retrieval-Augmented Generation (RAG)](https://aws.amazon.com/what-is/retrieval-augmented-generation/).

Today, Bedrock Data Automation is generally available with support for cross-region inference endpoints, so that it is available in more [AWS Regions](https://aws.amazon.com/about-aws/global-infrastructure/regions_az/) and can seamlessly use compute across different locations.
Based on your feedback during the preview, we also improved accuracy and added support for logo recognition in images and videos.

Let's have a look at how this works in practice.

**Using Amazon Bedrock Data Automation with cross-region inference endpoints**
The [blog post published for the Bedrock Data Automation preview](https://aws.amazon.com/blogs/aws/new-amazon-bedrock-capabilities-enhance-data-processing-and-retrieval/) shows how to use the visual demo in the [Amazon Bedrock console](https://console.aws.amazon.com/bedrock) to extract information from documents and videos. I recommend you go through the console demo experience to understand how this capability works and what you can do to customize it. For this post, I focus more on how Bedrock Data Automation works in your applications, starting with a few steps in the console and following with code samples.

The **Data Automation** section of the [Amazon Bedrock console](https://console.aws.amazon.com/bedrock) now asks for confirmation to enable cross-region support the first time you access it. For example:

[Console screenshot]

From an API perspective, the `InvokeDataAutomationAsync` operation now **requires** an additional parameter (`dataAutomationProfileArn`) to specify the data automation profile to use. The value for this parameter depends on the Region and your AWS account ID:

`arn:aws:bedrock:<REGION>:<ACCOUNT_ID>:data-automation-profile/us.data-automation-v1`

Also, the `dataAutomationArn` parameter has been renamed to `dataAutomationProjectArn` to better reflect that it contains the project [Amazon Resource Name (ARN)](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference-arns.html). When invoking Bedrock Data Automation, you now need to specify a project or a blueprint to use. If you pass in blueprints, you get custom output. To continue to get the standard default output, set the `DataAutomationProjectArn` parameter to `arn:aws:bedrock:<REGION>:aws:data-automation-project/public-default`.

As the name suggests, the `InvokeDataAutomationAsync` operation is asynchronous. You pass the input and output configuration and, when the result is ready, it's written to an [Amazon Simple Storage Service (Amazon S3)](https://aws.amazon.com/s3/) bucket as specified in the output configuration. You can also receive an [Amazon EventBridge](https://aws.amazon.com/eventbridge) notification from Bedrock Data Automation using the `notificationConfiguration` parameter.
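To make these parameter changes concrete, here's a minimal sketch (distilled from the full script later in this post) of an asynchronous invocation that requests the standard default output. The Region, account ID, and S3 URIs are placeholders to replace with your own values:

```python
import boto3

# Placeholders: replace with your Region, account ID, and S3 locations.
region = 'us-east-1'
account_id = '123456789012'

bda = boto3.client('bedrock-data-automation-runtime', region_name=region)

response = bda.invoke_data_automation_async(
    inputConfiguration={'s3Uri': 's3://amzn-s3-demo-bucket/BDA/Input/document.pdf'},
    outputConfiguration={'s3Uri': 's3://amzn-s3-demo-bucket/BDA/Output'},
    # Use the public default project for standard output;
    # pass your own project (or blueprints) for custom output.
    dataAutomationConfiguration={
        'dataAutomationProjectArn': f'arn:aws:bedrock:{region}:aws:data-automation-project/public-default'
    },
    # New required parameter: the cross-region data automation profile.
    dataAutomationProfileArn=f'arn:aws:bedrock:{region}:{account_id}:data-automation-profile/us.data-automation-v1'
)

invocation_arn = response['invocationArn']
```

The returned `invocationArn` is what you poll with `GetDataAutomationStatus`, as the full script below does.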
With Bedrock Data Automation, you can configure outputs in two ways:

- **Standard output** delivers predefined insights relevant to a data type, such as document semantics, video chapter summaries, and audio transcripts. With standard outputs, you can set up your desired insights in just a few steps.
- **Custom output** lets you specify extraction needs using blueprints for more tailored insights.

To see the new capabilities in action, I create a project and customize the standard output settings. For documents, I choose plain text instead of markdown. Note that you can automate these configuration steps using the Bedrock Data Automation API.

[Console screenshot]

For videos, I want a full audio transcript and a summary of the entire video. I also ask for a summary of each chapter.

[Console screenshot]

To configure a blueprint, I choose **Custom output setup** in the **Data automation** section of the Amazon Bedrock console navigation pane. There, I search for the **US-Driver-License** sample blueprint. You can browse other sample blueprints for more examples and ideas.

Sample blueprints can't be edited, so I use the **Actions** menu to duplicate the blueprint and add it to my project. There, I can fine-tune the data to be extracted by modifying the blueprint and adding custom fields that can use [generative AI](https://aws.amazon.com/ai/generative-ai/) to extract or compute data in the format I need.

[Console screenshot]

I upload the image of a US driver's license to an S3 bucket.
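If you prefer to script that upload step as well, here's a minimal sketch using the AWS SDK for Python (Boto3). The bucket name and file name are placeholders, and the `BDA/Input` prefix matches the `INPUT_PATH` used by the script below:

```python
import boto3

s3 = boto3.client('s3')

# Placeholders: the bucket and key prefix must match the script's BUCKET_NAME and INPUT_PATH.
bucket_name = '<BUCKET>'
local_file = 'driver-license.jpg'

# Upload the driver's license image so Bedrock Data Automation can read it from S3.
s3.upload_file(local_file, bucket_name, f'BDA/Input/{local_file}')
```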
Then, I use this sample Python script, which calls Bedrock Data Automation through the [AWS SDK for Python (Boto3)](https://aws.amazon.com/sdk-for-python/), to extract text information from the image:

```python
import json
import sys
import time

import boto3

DEBUG = False

AWS_REGION = '<REGION>'
BUCKET_NAME = '<BUCKET>'
INPUT_PATH = 'BDA/Input'
OUTPUT_PATH = 'BDA/Output'

PROJECT_ID = '<PROJECT_ID>'
BLUEPRINT_NAME = 'US-Driver-License-demo'

# Fields to display
BLUEPRINT_FIELDS = [
    'NAME_DETAILS/FIRST_NAME',
    'NAME_DETAILS/MIDDLE_NAME',
    'NAME_DETAILS/LAST_NAME',
    'DATE_OF_BIRTH',
    'DATE_OF_ISSUE',
    'EXPIRATION_DATE'
]

# AWS SDK for Python (Boto3) clients
bda = boto3.client('bedrock-data-automation-runtime', region_name=AWS_REGION)
s3 = boto3.client('s3', region_name=AWS_REGION)
sts = boto3.client('sts')


def log(data):
    if DEBUG:
        if type(data) is dict:
            text = json.dumps(data, indent=4)
        else:
            text = str(data)
        print(text)


def get_aws_account_id() -> str:
    return sts.get_caller_identity().get('Account')


def get_json_object_from_s3_uri(s3_uri) -> dict:
    s3_uri_split = s3_uri.split('/')
    bucket = s3_uri_split[2]
    key = '/'.join(s3_uri_split[3:])
    object_content = s3.get_object(Bucket=bucket, Key=key)['Body'].read()
    return json.loads(object_content)


def invoke_data_automation(input_s3_uri, output_s3_uri, data_automation_arn, aws_account_id) -> dict:
    params = {
        'inputConfiguration': {
            's3Uri': input_s3_uri
        },
        'outputConfiguration': {
            's3Uri': output_s3_uri
        },
        'dataAutomationConfiguration': {
            'dataAutomationProjectArn': data_automation_arn
        },
        'dataAutomationProfileArn': f"arn:aws:bedrock:{AWS_REGION}:{aws_account_id}:data-automation-profile/us.data-automation-v1"
    }

    response = bda.invoke_data_automation_async(**params)
    log(response)

    return response


def wait_for_data_automation_to_complete(invocation_arn, loop_time_in_seconds=1) -> dict:
    while True:
        response = bda.get_data_automation_status(
            invocationArn=invocation_arn
        )
        status = response['status']
        if status not in ['Created', 'InProgress']:
            print(f" {status}")
            return response
        print(".", end='', flush=True)
        time.sleep(loop_time_in_seconds)


def print_document_results(standard_output_result):
    print(f"Number of pages: {standard_output_result['metadata']['number_of_pages']}")
    for page in standard_output_result['pages']:
        print(f"- Page {page['page_index']}")
        if 'text' in page['representation']:
            print(f"{page['representation']['text']}")
        if 'markdown' in page['representation']:
            print(f"{page['representation']['markdown']}")


def print_video_results(standard_output_result):
    print(f"Duration: {standard_output_result['metadata']['duration_millis']} ms")
    print(f"Summary: {standard_output_result['video']['summary']}")
    statistics = standard_output_result['statistics']
    print("Statistics:")
    print(f"- Speaker count: {statistics['speaker_count']}")
    print(f"- Chapter count: {statistics['chapter_count']}")
    print(f"- Shot count: {statistics['shot_count']}")
    for chapter in standard_output_result['chapters']:
        print(f"Chapter {chapter['chapter_index']} {chapter['start_timecode_smpte']}-{chapter['end_timecode_smpte']} ({chapter['duration_millis']} ms)")
        if 'summary' in chapter:
            print(f"- Chapter summary: {chapter['summary']}")


def print_custom_results(custom_output_result):
    matched_blueprint_name = custom_output_result['matched_blueprint']['name']
    log(custom_output_result)
    print('\n- Custom output')
    print(f"Matched blueprint: {matched_blueprint_name} Confidence: {custom_output_result['matched_blueprint']['confidence']}")
    print(f"Document class: {custom_output_result['document_class']['type']}")
    if matched_blueprint_name == BLUEPRINT_NAME:
        print('\n- Fields')
        for field_with_group in BLUEPRINT_FIELDS:
            print_field(field_with_group, custom_output_result)


def print_results(job_metadata_s3_uri) -> None:
    job_metadata = get_json_object_from_s3_uri(job_metadata_s3_uri)
    log(job_metadata)

    for segment in job_metadata['output_metadata']:
        asset_id = segment['asset_id']
        print(f'\nAsset ID: {asset_id}')

        for segment_metadata in segment['segment_metadata']:

            # Standard output
            standard_output_path = segment_metadata['standard_output_path']
            standard_output_result = get_json_object_from_s3_uri(standard_output_path)
            log(standard_output_result)
            print('\n- Standard output')
            semantic_modality = standard_output_result['metadata']['semantic_modality']
            print(f"Semantic modality: {semantic_modality}")
            match semantic_modality:
                case 'DOCUMENT':
                    print_document_results(standard_output_result)
                case 'VIDEO':
                    print_video_results(standard_output_result)

            # Custom output
            if 'custom_output_status' in segment_metadata and segment_metadata['custom_output_status'] == 'MATCH':
                custom_output_path = segment_metadata['custom_output_path']
                custom_output_result = get_json_object_from_s3_uri(custom_output_path)
                print_custom_results(custom_output_result)


def print_field(field_with_group, custom_output_result) -> None:
    inference_result = custom_output_result['inference_result']
    explainability_info = custom_output_result['explainability_info'][0]
    if '/' in field_with_group:
        # For fields that are part of a group
        (group, field) = field_with_group.split('/')
        inference_result = inference_result[group]
        explainability_info = explainability_info[group]
    else:
        field = field_with_group
    value = inference_result[field]
    confidence = explainability_info[field]['confidence']
    print(f"{field}: {value or '<EMPTY>'} Confidence: {confidence}")


def main() -> None:
    if len(sys.argv) < 2:
        print("Please provide a filename as command line argument")
        sys.exit(1)

    file_name = sys.argv[1]

    aws_account_id = get_aws_account_id()
    input_s3_uri = f"s3://{BUCKET_NAME}/{INPUT_PATH}/{file_name}"  # File
    output_s3_uri = f"s3://{BUCKET_NAME}/{OUTPUT_PATH}"  # Folder
    data_automation_arn = f"arn:aws:bedrock:{AWS_REGION}:{aws_account_id}:data-automation-project/{PROJECT_ID}"

    print(f"Invoking Bedrock Data Automation for '{file_name}'", end='', flush=True)

    data_automation_response = invoke_data_automation(input_s3_uri, output_s3_uri, data_automation_arn, aws_account_id)
    data_automation_status = wait_for_data_automation_to_complete(data_automation_response['invocationArn'])

    if data_automation_status['status'] == 'Success':
        job_metadata_s3_uri = data_automation_status['outputConfiguration']['s3Uri']
        print_results(job_metadata_s3_uri)


if __name__ == "__main__":
    main()
```

The initial configuration in the script includes the name of the S3 bucket to use for input and output, the location of the input file in the bucket, the output path for the results, the project ID used to get custom output from Bedrock Data Automation, and the blueprint fields to show in the output.

I run the script, passing the name of the input file.
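The custom output also carries a confidence score for each field, which is what `print_field` reads from `explainability_info`. As a hedged sketch (not part of the original script), here is one way you might flag low-confidence fields for manual review, reusing the same result structure; the 0.8 threshold is an arbitrary assumption to tune for your use case:

```python
CONFIDENCE_THRESHOLD = 0.8  # assumption: adjust for your workload


def low_confidence_fields(custom_output_result, fields, threshold=CONFIDENCE_THRESHOLD):
    """Return (field, value, confidence) tuples whose confidence is below the threshold."""
    flagged = []
    for field_with_group in fields:
        inference_result = custom_output_result['inference_result']
        explainability_info = custom_output_result['explainability_info'][0]
        if '/' in field_with_group:
            # For fields that are part of a group, drill into the group first
            group, field = field_with_group.split('/')
            inference_result = inference_result[group]
            explainability_info = explainability_info[group]
        else:
            field = field_with_group
        confidence = explainability_info[field]['confidence']
        if confidence < threshold:
            flagged.append((field_with_group, inference_result[field], confidence))
    return flagged


# Example usage: review = low_confidence_fields(custom_output_result, BLUEPRINT_FIELDS)
```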
In the output, I see the information extracted by Bedrock Data Automation. The **US-Driver-License** blueprint is a match, and the name and dates in the driver's license are printed in the output.

As expected, I see in the output the information I selected from the blueprint associated with the Bedrock Data Automation project.

Similarly, I run the same script on a [video file](https://www.youtube.com/watch?v=WYTZELB8JdU) from my colleague [Mike Chambers](https://www.linkedin.com/in/mikegchambers). To keep the output small, I don't print the full audio transcript or the text displayed in the video.

```
python bda.py mike-video.mp4
Invoking Bedrock Data Automation for 'mike-video.mp4'.......................................................................................................................................................................................................................................................................... Success

Asset ID: 0

- Standard output
Semantic modality: VIDEO
Duration: 810476 ms
Summary: In this comprehensive demonstration, a technical expert explores the capabilities and limitations of Large Language Models (LLMs) while showcasing a practical application using AWS services. He begins by addressing a common misconception about LLMs, explaining that while they possess general world knowledge from their training data, they lack current, real-time information unless connected to external data sources.

To illustrate this concept, he demonstrates an "Outfit Planner" application that provides clothing recommendations based on location and weather conditions. Using Brisbane, Australia as an example, the application combines LLM capabilities with real-time weather data to suggest appropriate attire like lightweight linen shirts, shorts, and hats for the tropical climate.

The demonstration then shifts to the Amazon Bedrock platform, which enables users to build and scale generative AI applications using foundation models. The speaker showcases the "OutfitAssistantAgent," explaining how it accesses real-time weather data to make informed clothing recommendations. Through the platform's "Show Trace" feature, he reveals the agent's decision-making process and how it retrieves and processes location and weather information.

The technical implementation details are explored as the speaker configures the OutfitAssistant using Amazon Bedrock. The agent's workflow is designed to be fully serverless and managed within the Amazon Bedrock service.

Further diving into the technical aspects, the presentation covers the AWS Lambda console integration, showing how to create action group functions that connect to external services like the OpenWeatherMap API. The speaker emphasizes that LLMs become truly useful when connected to tools providing relevant data sources, whether databases, text files, or external APIs.

The presentation concludes with the speaker encouraging viewers to explore more AWS developer content and engage with the channel through likes and subscriptions, reinforcing the practical value of combining LLMs with external data sources for creating powerful, context-aware applications.
Statistics:
- Speaker count: 1
- Chapter count: 6
- Shot count: 48
Chapter 0 00:00:00:00-00:01:32:01 (92025 ms)
- Chapter summary: A man with a beard and glasses, wearing a gray hooded sweatshirt with various logos and text, is sitting at a desk in front of a colorful background. He discusses the frequent release of new large language models (LLMs) and how people often test these models by asking questions like "Who won the World Series?" The man explains that LLMs are trained on general data from the internet, so they may have information about past events but not current ones. He then poses the question of what he wants from an LLM, stating that he desires general world knowledge, such as understanding basic concepts like "up is up" and "down is down," but does not need specific factual knowledge. The man suggests that he can attach other systems to the LLM to access current factual data relevant to his needs. He emphasizes the importance of having general world knowledge and the ability to use tools and be linked into agentic workflows, which he refers to as "agentic workflows." The man encourages the audience to add this term to their spell checkers, as it will likely become commonly used.
Chapter 1 00:01:32:01-00:03:38:18 (126560 ms)
- Chapter summary: The video showcases a man with a beard and glasses demonstrating an "Outfit Planner" application on his laptop. The application allows users to input their location, such as Brisbane, Australia, and receive recommendations for appropriate outfits based on the weather conditions. The man explains that the application generates these recommendations using large language models, which can sometimes provide inaccurate or hallucinated information since they lack direct access to real-world data sources. The man walks through the process of using the Outfit Planner, entering Brisbane as the location and receiving weather details like temperature, humidity, and cloud cover. He then shows how the application suggests outfit options, including a lightweight linen shirt, shorts, sandals, and a hat, along with an image of a woman wearing a similar outfit in a tropical setting. Throughout the demonstration, the man points out the limitations of current language models in providing accurate and up-to-date information without external data connections. He also highlights the need to edit prompts and adjust settings within the application to refine the output and improve the accuracy of the generated recommendations.
Chapter 2 00:03:38:18-00:07:19:06 (220620 ms)
- Chapter summary: The video demonstrates the Amazon Bedrock platform, which allows users to build and scale generative AI applications using foundation models (FMs). [speaker_0] introduces the platform's overview, highlighting its key features like managing FMs from AWS, integrating with custom models, and providing access to leading AI startups. The video showcases the Amazon Bedrock console interface, where [speaker_0] navigates to the "Agents" section and selects the "OutfitAssistantAgent" agent. [speaker_0] tests the OutfitAssistantAgent by asking it for outfit recommendations in Brisbane, Australia. The agent provides a suggestion of wearing a light jacket or sweater due to cool, misty weather conditions. To verify the accuracy of the recommendation, [speaker_0] clicks on the "Show Trace" button, which reveals the agent's workflow and the steps it took to retrieve the current location details and weather information for Brisbane. The video explains that the agent uses an orchestration and knowledge base system to determine the appropriate response based on the user's query and the retrieved data. It highlights the agent's ability to access real-time information like location and weather data, which is crucial for generating accurate and relevant responses.
Chapter 3 00:07:19:06-00:11:26:13 (247214 ms)
- Chapter summary: The video demonstrates the process of configuring an AI assistant agent called "OutfitAssistant" using Amazon Bedrock. [speaker_0] introduces the agent's purpose, which is to provide outfit recommendations based on the current time and weather conditions. The configuration interface allows selecting a language model from Anthropic, in this case the Claud 3 Haiku model, and defining natural language instructions for the agent's behavior. [speaker_0] explains that action groups are groups of tools or actions that will interact with the outside world. The OutfitAssistant agent uses Lambda functions as its tools, making it fully serverless and managed within the Amazon Bedrock service. [speaker_0] defines two action groups: "get coordinates" to retrieve latitude and longitude coordinates from a place name, and "get current time" to determine the current time based on the location. The "get current weather" action requires calling the "get coordinates" action first to obtain the location coordinates, then using those coordinates to retrieve the current weather information. This demonstrates the agent's workflow and how it utilizes the defined actions to generate outfit recommendations. Throughout the video, [speaker_0] provides details on the agent's configuration, including its name, description, model selection, instructions, and action groups. The interface displays various options and settings related to these aspects, allowing [speaker_0] to customize the agent's behavior and functionality.
Chapter 4 00:11:26:13-00:13:00:17 (94160 ms)
- Chapter summary: The video showcases a presentation by [speaker_0] on the AWS Lambda console and its integration with machine learning models for building powerful agents. [speaker_0] demonstrates how to create an action group function using AWS Lambda, which can be used to generate text responses based on input parameters like location, time, and weather data. The Lambda function code is shown, utilizing external services like OpenWeatherMap API for fetching weather information. [speaker_0] explains that for a large language model to be useful, it needs to connect to tools providing relevant data sources, such as databases, text files, or external APIs. The presentation covers the process of defining actions, setting up Lambda functions, and leveraging various tools within the AWS environment to build intelligent agents capable of generating context-aware responses.
Chapter 5 00:13:00:17-00:13:28:10 (27761 ms)
- Chapter summary: A man with a beard and glasses, wearing a gray hoodie with various logos and text, is sitting at a desk in front of a colorful background. He is using a laptop computer that has stickers and logos on it, including the AWS logo. The man appears to be presenting or speaking about AWS (Amazon Web Services) and its services, such as Lambda functions and large language models. He mentions that if a Lambda function can do something, then it can be used to augment a large language model. The man concludes by expressing hope that the viewer found the video useful and insightful, and encourages them to check out other videos on the AWS developers channel. He also asks viewers to like the video, subscribe to the channel, and watch other videos.
```

**Things to know**
[Amazon Bedrock Data Automation](https://aws.amazon.com/bedrock/bda/) is now available via cross-region inference in the following two AWS Regions: US East (N. Virginia) and US West (Oregon). When using Bedrock Data Automation from those Regions, data can be processed using cross-region inference in any of these four Regions: US East (Ohio, N. Virginia) and US West (N. California, Oregon). All these Regions are in the US, so data is processed within the same geography. We're working to add support for more Regions in Europe and Asia later in 2025.

There's no change in pricing compared to the preview, and pricing is the same when using cross-region inference. For more information, visit [Amazon Bedrock pricing](https://aws.amazon.com/bedrock/pricing/).

Bedrock Data Automation now also includes a number of security, governance, and manageability capabilities, such as [AWS Key Management Service (AWS KMS)](https://aws.amazon.com/kms/) [customer managed keys](https://docs.aws.amazon.com/kms/latest/developerguide/concepts.html#customer-cmk) support for granular encryption control, [AWS PrivateLink](https://aws.amazon.com/privatelink/) to connect directly to the Bedrock Data Automation APIs in your virtual private cloud (VPC) instead of connecting over the internet, and tagging of Bedrock Data Automation resources and jobs to track costs and enforce tag-based access policies in [AWS Identity and Access Management (IAM)](https://aws.amazon.com/iam/).

I used Python in this blog post, but Bedrock Data Automation is available with any of the [AWS SDKs](https://aws.amazon.com/tools/). For example, you can use Java, .NET, or Rust for a backend document processing application; JavaScript for a web app that processes images, videos, or audio files; and Swift for a native mobile app that processes content provided by end users. It's never been so easy to get insights from multimodal data.

Here are a few reading suggestions to learn more (including code samples):

– [Danilo](https://x.com/danilop)

