AWS Machine Learning Blog
Long-running execution flows now supported in Amazon Bedrock Flows in public preview

Amazon Bedrock Flows now supports long-running (asynchronous) flows, letting users build and scale predefined generative AI workflows. The new capability extends flow execution time from 5 minutes (synchronous) to 24 hours (asynchronous), so users can process large datasets, run complex workflows, and integrate multiple systems while still delivering a smooth user experience. Organizations such as Dentsu can use it to process entire books, transform documents, and run multi-step AI workflows, improving efficiency and scalability.

🚀 Amazon Bedrock Flows introduces long-running (asynchronous) execution, extending workflow execution time from 5 minutes to 24 hours to support large datasets and complex workflows.

💡 The capability handles large payloads and resource-intensive tasks, supports multi-step, decision-making generative AI workflows, and integrates with multiple external systems.

🛠️ Users can create and manage long-running flows through the Amazon Bedrock API and console, and monitor execution status and results with built-in execution tracing.

📚 Organizations such as Dentsu use the capability to convert books into easy-to-read formats, handling large inputs and complex tasks while improving the efficiency of their generative AI applications.

Today, we announce the public preview of long-running execution (asynchronous) flow support within Amazon Bedrock Flows. With Amazon Bedrock Flows, you can link foundation models (FMs), Amazon Bedrock Prompt Management, Amazon Bedrock Agents, Amazon Bedrock Knowledge Bases, Amazon Bedrock Guardrails, and other AWS services together to build and scale predefined generative AI workflows.

As customers across industries build increasingly sophisticated applications, they’ve shared feedback about needing to process larger datasets and run complex workflows that take longer than a few minutes to complete. Many customers told us they want to transform entire books, process massive documents, and orchestrate multi-step AI workflows without worrying about runtime limits, highlighting the need for a solution that can handle long-running background tasks. To address those concerns, Amazon Bedrock Flows introduces a new feature in public preview that extends workflow execution time from 5 minutes (synchronous) to 24 hours (asynchronous).

With Amazon Bedrock long-running execution flows (asynchronous), you can chain together multiple prompts, AI services, and Amazon Bedrock components into complex, long-running workflows (up to 24 hours asynchronously). The new capabilities include built-in execution tracing directly using the AWS Management Console and Amazon Bedrock Flow API for observability. These enhancements significantly streamline workflow development and management in Amazon Bedrock Flows, helping you focus on building and deploying your generative AI applications.

By decoupling the workflow execution (asynchronously through long-running flows that can run for up to 24 hours) from the user’s immediate interaction, you can now build applications that can handle large payloads that take longer than 5 minutes to process, perform resource-intensive tasks, apply multiple rules for decision-making, and even run the flow in the background while integrating with multiple systems—while providing your users with a seamless and responsive experience.

Solution overview

Organizations using Amazon Bedrock Flows can now use long-running execution flow capabilities to design and deploy extended workflows, building more scalable and efficient generative AI applications.

Dentsu, a leading advertising agency and creative powerhouse, needs to handle complex, multi-step generative AI use cases that require longer execution time. One use case is their Easy Reading application, which converts books with many chapters and illustrations into easily readable formats so that people with intellectual disabilities can access literature. With Amazon Bedrock long-running execution flows, Dentsu can now process these books end to end in a single execution.

“Amazon Bedrock has been amazing to work with and demonstrate value to our clients,” says Victoria Aiello, Innovation Director, Dentsu Creative Brazil. “Using traces and flows, we are able to show how processing happens behind the scenes of the work AI is performing, giving us better visibility and accuracy on what’s to be produced. For the Easy Reading use case, long-running execution flows will allow for processing of the entire book in one go, taking advantage of the 24-hour flow execution time instead of writing custom code to manage multiple sections of the book separately. This saves us time when producing new books or even integrating with different models; we can test different results according to the needs or content of each book.”

Let’s explore how the new long-running execution flow capability in Amazon Bedrock Flows enables Dentsu to build a more efficient and long-running book processing generative AI application. The following diagram illustrates the end-to-end flow of Dentsu’s book processing application. The process begins when a client uploads a book to Amazon Simple Storage Service (Amazon S3), triggering a flow that processes multiple chapters, where each chapter undergoes accessibility transformations and formatting according to specific user requirements. The transformed chapters are then collected, combined with a table of contents, and stored back in Amazon S3 as a final accessible document. This long-running execution (asynchronous) flow can handle large books efficiently, processing them within the 24-hour execution window while providing status updates and traceability throughout the transformation process.

In the following sections, we demonstrate how to create a long-running execution flow in Amazon Bedrock Flows using Dentsu’s real-world use case of books transformation.

Prerequisites

Before implementing the new capabilities, make sure you have the following:

After these components are in place, you can implement Amazon Bedrock long-running execution flow capabilities in your generative AI use case.

Create a long-running execution flow

Complete the following steps to create your long-running execution flow:

    On the Amazon Bedrock console, in the navigation pane under Builder tools, choose Flows. Choose Create a flow. Provide a name for your new flow, for example, easy-read-long-running-flow.

For detailed instructions on creating a flow, see Amazon Bedrock Flows is now generally available with enhanced safety and traceability. Amazon Bedrock provides different node types to build your prompt flow.
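Flows can also be created programmatically. The following Python sketch uses boto3's bedrock-agent client to assemble a minimal passthrough flow definition; the role ARN and node/connection names are illustrative, and real flows like Dentsu's would add prompt, iterator, collector, and Lambda nodes per the CreateFlow API reference.

```python
def build_flow_request(name, role_arn):
    """Assemble a minimal CreateFlow request body: an input node wired
    straight to an output node. Node and connection names are illustrative."""
    return {
        "name": name,
        "executionRoleArn": role_arn,  # hypothetical IAM role for the flow
        "definition": {
            "nodes": [
                {"name": "FlowInput", "type": "Input",
                 "configuration": {"input": {}},
                 "outputs": [{"name": "document", "type": "String"}]},
                {"name": "FlowOutput", "type": "Output",
                 "configuration": {"output": {}},
                 "inputs": [{"name": "document", "type": "String",
                             "expression": "$.data"}]},
            ],
            "connections": [
                {"name": "InputToOutput", "type": "Data",
                 "source": "FlowInput", "target": "FlowOutput",
                 "configuration": {"data": {"sourceOutput": "document",
                                            "targetInput": "document"}}},
            ],
        },
    }

def create_flow(request):
    """Submit the request to Amazon Bedrock (requires AWS credentials)."""
    import boto3  # deferred so build_flow_request stays testable offline
    client = boto3.client("bedrock-agent")
    return client.create_flow(**request)
```

After creation, the flow still has to be prepared (and aliased) before it can be executed, as the console steps above do implicitly.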

The following screenshot shows the high-level flow of Dentsu’s book conversion generative AI-powered application. The workflow demonstrates a sequential process from input handling through content transformation to final storage and delivery.

The following table outlines the core components and nodes within the preceding workflow, designed for document processing and accessibility transformation.

Node | Purpose
Flow Input | Entry point accepting an array of S3 prefixes (chapters) and an accessibility profile
Iterator | Processes each chapter (prefix) individually
S3 Retrieval | Downloads chapter content from the specified Amazon S3 location
Easifier | Applies accessibility transformation rules to the chapter content
HTML Formatter | Formats the transformed content with the appropriate HTML structure
Collector | Assembles the transformed chapters while maintaining their order
Lambda Function | Combines the chapters into a single document with a table of contents
S3 Storage | Stores the final transformed document in Amazon S3
Flow Output | Returns the Amazon S3 location of the transformed book, with metadata

Test the book processing flow

We are now ready to test the flow through the Amazon Bedrock console or API. We use a fictional book called “Beyond Earth: Humanity’s Journey to the Stars.” This book tells the story of humanity’s greatest adventure beyond our home planet, tracing our journey from the first satellites and moonwalks to space stations and robotic explorers that continue to unveil the mysteries of our solar system.

    On the Amazon Bedrock console, choose Flows in the navigation pane. Choose the flow (easy-read-long-running-flow) and choose Create execution.

The flow must be in the Prepared state before creating an execution.

The Execution tab shows the previous executions for the selected flow.

    Provide the following input:

Test input (dyslexia accessibility profile):

{
  "chapterPrefixes": [
    "books/beyond-earth/chapter_1.txt",
    "books/beyond-earth/chapter_2.txt",
    "books/beyond-earth/chapter_3.txt"
  ],
  "metadata": {
    "accessibilityProfile": "dyslexia",
    "bookId": "beyond-earth-002",
    "bookTitle": "Beyond Earth: Humanity's Journey to the Stars"
  }
}

These are the different chapters of our book that need to be transformed.

    Choose Create.

Amazon Bedrock Flows initiates the long-running execution (asynchronous) flow of our workflow. The dashboard displays the executions of our flow with their respective statuses (Running, Succeeded, Failed, TimedOut, Aborted). When an execution is marked as Succeeded, the results become available in our designated S3 bucket.
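The same execution can be started and monitored programmatically. The sketch below assumes the preview StartFlowExecution and GetFlowExecution operations on boto3's bedrock-agent-runtime client (operation, node, and field names may differ as the preview evolves); it submits the book input and polls until a terminal status is reached.

```python
import time

# The test input from the console walkthrough, as a Python document.
BOOK_INPUT = {
    "chapterPrefixes": [
        "books/beyond-earth/chapter_1.txt",
        "books/beyond-earth/chapter_2.txt",
        "books/beyond-earth/chapter_3.txt",
    ],
    "metadata": {
        "accessibilityProfile": "dyslexia",
        "bookId": "beyond-earth-002",
        "bookTitle": "Beyond Earth: Humanity's Journey to the Stars",
    },
}

TERMINAL = {"SUCCEEDED", "FAILED", "TIMED_OUT", "ABORTED"}

def run_flow_async(flow_id, alias_id, document, poll_seconds=60):
    """Start a long-running execution and poll until a terminal state.
    Assumes the preview API shape; requires AWS credentials to run."""
    import boto3  # deferred so the payload above stays testable offline
    rt = boto3.client("bedrock-agent-runtime")
    start = rt.start_flow_execution(
        flowIdentifier=flow_id,
        flowAliasIdentifier=alias_id,
        inputs=[{
            "content": {"document": document},
            "nodeName": "FlowInput",       # illustrative input node name
            "nodeOutputName": "document",
        }],
    )
    execution_arn = start["executionArn"]
    while True:
        state = rt.get_flow_execution(
            flowIdentifier=flow_id,
            flowAliasIdentifier=alias_id,
            executionIdentifier=execution_arn,
        )
        if state["status"] in TERMINAL:
            return state
        time.sleep(poll_seconds)
```

Because the call returns immediately with an execution ARN, a caller could equally persist the ARN and check status later instead of polling in-process.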

Choosing an execution takes you to the summary page containing its details. The Overview section displays start and end times, plus the execution Amazon Resource Name (ARN)—a unique identifier that’s essential for troubleshooting specific executions later.

When you select a node in the flow builder, its configuration details appear. For instance, choosing the Easifier node reveals the prompt used, the selected model (here it’s Amazon Nova Lite), and additional configuration parameters. This is essential information for understanding how that specific component is set up.

The system also provides access to execution traces, offering detailed insights into each processing step, tracking real-time performance metrics, and highlighting issues that occurred during the flow’s execution. Traces can be enabled using the API and delivered to Amazon CloudWatch Logs. In the API, set the enableTrace field to true in an InvokeFlow request. Each flowOutputEvent in the response is then returned alongside a flowTraceEvent.
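For the synchronous path, trace events interleave with output events in the InvokeFlow response stream. A minimal sketch of enabling tracing and separating the two event types (the input node names are illustrative):

```python
def split_stream(events):
    """Separate flowOutputEvent payloads from flowTraceEvent traces."""
    outputs = [e["flowOutputEvent"]["content"]["document"]
               for e in events if "flowOutputEvent" in e]
    traces = [e["flowTraceEvent"]["trace"]
              for e in events if "flowTraceEvent" in e]
    return outputs, traces

def invoke_with_traces(flow_id, alias_id, document):
    """Call InvokeFlow with enableTrace=True and collect outputs and traces.
    Requires AWS credentials to run."""
    import boto3  # deferred so split_stream stays testable offline
    rt = boto3.client("bedrock-agent-runtime")
    response = rt.invoke_flow(
        flowIdentifier=flow_id,
        flowAliasIdentifier=alias_id,
        enableTrace=True,  # emit a flowTraceEvent alongside each flowOutputEvent
        inputs=[{
            "content": {"document": document},
            "nodeName": "FlowInput",       # illustrative input node name
            "nodeOutputName": "document",
        }],
    )
    return split_stream(response["responseStream"])
```

The collected traces carry per-node timing and input/output details, which is what the console’s trace view renders.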

We have now successfully created and executed a long-running execution flow. You can also use Amazon Bedrock APIs to programmatically start, stop, list, and get flow executions. For more details on how to configure flows with enhanced safety and traceability, refer to Amazon Bedrock Flows is now generally available with enhanced safety and traceability.
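Those lifecycle operations can be combined into simple housekeeping, for example stopping executions that have run longer than an internal budget. This sketch assumes preview ListFlowExecutions and StopFlowExecution shapes (a flowExecutionSummaries list whose entries carry status, createdAt, and executionArn fields); treat those field names as assumptions.

```python
import datetime

def is_stale(summary, now, max_seconds):
    """True when an execution is still running past the time budget."""
    age = (now - summary["createdAt"]).total_seconds()
    return summary["status"] == "RUNNING" and age > max_seconds

def abort_stale_executions(flow_id, alias_id, max_seconds=6 * 3600):
    """Stop over-budget executions and return their ARNs.
    Response field names are assumptions about the preview API shape."""
    import boto3  # deferred so is_stale stays testable offline
    rt = boto3.client("bedrock-agent-runtime")
    now = datetime.datetime.now(datetime.timezone.utc)
    stopped = []
    for summary in rt.list_flow_executions(
            flowIdentifier=flow_id)["flowExecutionSummaries"]:
        if is_stale(summary, now, max_seconds):
            rt.stop_flow_execution(
                flowIdentifier=flow_id,
                flowAliasIdentifier=alias_id,
                executionIdentifier=summary["executionArn"],
            )
            stopped.append(summary["executionArn"])
    return stopped
```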

Conclusion

The integration of long-running execution flows in Amazon Bedrock Flows represents a significant advancement in generative AI development. With these capabilities, you can create more efficient AI-powered solutions to automate long-running operations, addressing critical challenges in the rapidly evolving field of AI application development.

Long-running execution flow support in Amazon Bedrock Flows is now available in public preview in AWS Regions where Amazon Bedrock Flows is available, except for the AWS GovCloud (US) Regions. To get started, open the Amazon Bedrock console or APIs to begin building flows with long-running execution flow with Amazon Bedrock Flows. To learn more, see Create your first flow in Amazon Bedrock and Track each step in your flow by viewing its trace in Amazon Bedrock.

We’re excited to see the innovative applications you will build with these new capabilities. As always, we welcome your feedback through AWS re:Post for Amazon Bedrock or your usual AWS contacts. Join the generative AI builder community at community.aws to share your experiences and learn from others.


About the authors

Shubhankar Sumar is a Senior Solutions Architect at AWS, where he specializes in architecting generative AI-powered solutions for enterprise software and SaaS companies across the UK. With a strong background in software engineering, Shubhankar excels at designing secure, scalable, and cost-effective multi-tenant systems on the cloud. His expertise lies in seamlessly integrating cutting-edge generative AI capabilities into existing SaaS applications, helping customers stay at the forefront of technological innovation.

Amit Lulla is a Principal Solutions Architect at AWS, where he architects enterprise-scale generative AI and machine learning solutions for software companies. With over 15 years in software development and architecture, he’s passionate about turning complex AI challenges into bespoke solutions that deliver real business value. When he’s not architecting cutting-edge systems or mentoring fellow architects, you’ll find Amit on the squash court, practicing yoga, or planning his next travel adventure. He also maintains a daily meditation practice, which he credits for keeping him centered in the fast-paced world of AI innovation.

Huong Nguyen is a Principal Product Manager at AWS. She is leading the Amazon Bedrock Flows, with 18 years of experience building customer-centric and data-driven products. She is passionate about democratizing responsible machine learning and generative AI to enable customer experience and business innovation. Outside of work, she enjoys spending time with family and friends, listening to audiobooks, traveling, and gardening.

Christian Kamwangala is an AI/ML and Generative AI Specialist Solutions Architect at AWS, based in Paris, France. He partners with enterprise customers to architect, optimize, and deploy production-grade AI solutions leveraging AWS’s comprehensive machine learning stack. Christian specializes in inference optimization techniques that balance performance, cost, and latency requirements for large-scale deployments. In his spare time, Christian enjoys exploring nature and spending time with family and friends.

Jeremy Bartosiewicz is a Senior Solutions Architect at AWS, with over 15 years of experience working in technology in multiple roles. Coming from a consulting background, Jeremy enjoys working on a multitude of projects that help organizations grow using cloud solutions. He helps support large enterprise customers at AWS and is part of the Advertising and Machine Learning TFCs.
