AWS Machine Learning Blog, October 25, 2024
Create a generative AI-based application builder assistant using Amazon Bedrock Agents

This post describes how to use Amazon Bedrock Agents as a software application builder assistant. It covers how the agents use large language models as a reasoning engine to build dynamic, complex business workflows, accelerate generative AI application development, and can be customized to meet specific project requirements, and it also touches on the solution overview, use cases, prerequisites, implementation steps, agent instructions and user prompts, and cost considerations.

Amazon Bedrock Agents uses the reasoning capability of large language models to break user-requested tasks into multiple steps, invoking company APIs and accessing knowledge bases with RAG to provide a final response to the user. This offers great use case flexibility, reduces development cost, and protects data.

The assistant helps accomplish tasks across all tiers of a three-tier software application. It can generate and explain code snippets for the UI and backend tiers in the language of your choice, recommend software and architecture design best practices based on the AWS Well-Architected Framework, and generate and execute SQL queries from natural language questions.

Using Amazon Bedrock Agents requires completing several prerequisites, such as cloning the GitHub repository, setting up an Amazon SageMaker notebook, and obtaining access to models on Amazon Bedrock; implementing the solution covers multiple learning objectives.

The application builder assistant's agent instruction specifies the three categories of questions it can answer: software application design best practices, SQL query generation, and code generation and explanation. Each user question includes a specific system prompt by default.

In this post, we set up an agent using Amazon Bedrock Agents to act as a software application builder assistant.

Agentic workflows are a fresh approach to building dynamic and complex business use case–based workflows with the help of large language models (LLMs) as their reasoning engine, or brain. These agentic workflows decompose natural language query–based tasks into multiple actionable steps with iterative feedback loops and self-reflection to produce the final result using tools and APIs.

Amazon Bedrock Agents helps you accelerate generative AI application development by orchestrating multistep tasks. Amazon Bedrock Agents uses the reasoning capability of foundation models (FMs) to break down user-requested tasks into multiple steps. Agents use the developer-provided instruction to create an orchestration plan and then carry out the plan by invoking company APIs and accessing knowledge bases using Retrieval Augmented Generation (RAG) to provide a final response to the end user. This offers tremendous use case flexibility, enables dynamic workflows, and reduces development cost. Amazon Bedrock Agents is instrumental in customizing and tailoring apps to help meet specific project requirements while protecting private data and securing your applications. These agents work with AWS managed infrastructure capabilities and Amazon Bedrock, reducing infrastructure management overhead. Additionally, agents streamline workflows and automate repetitive tasks. With the power of AI automation, you can boost productivity and reduce cost.
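
As a concrete illustration of this flow, the following is a minimal sketch of calling an already-deployed agent with the boto3 bedrock-agent-runtime client. The agent ID and alias ID are placeholders rather than values from this post; the agent decides at runtime which knowledge base or action group to use.

    import uuid
    import boto3

    # Runtime client for invoking an already-deployed Amazon Bedrock agent
    agent_runtime = boto3.client("bedrock-agent-runtime")

    def ask_agent(question: str, agent_id: str, agent_alias_id: str) -> str:
        """Send a natural language question to the agent and collect the streamed answer."""
        response = agent_runtime.invoke_agent(
            agentId=agent_id,             # placeholder: your agent ID
            agentAliasId=agent_alias_id,  # placeholder: your agent alias ID
            sessionId=str(uuid.uuid4()),  # one session per conversation
            inputText=question,
        )
        # The answer comes back as an event stream of chunks
        answer = ""
        for event in response["completion"]:
            if "chunk" in event:
                answer += event["chunk"]["bytes"].decode("utf-8")
        return answer

    print(ask_agent("What are the top 5 most expensive products?", "AGENT_ID", "ALIAS_ID"))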

Amazon Bedrock is a fully managed service that offers a choice of high-performing FMs from leading AI companies like AI21 Labs, Anthropic, Cohere, Meta, Mistral AI, Stability AI, and Amazon through a single API, along with a broad set of capabilities to build generative AI applications with security, privacy, and responsible AI.
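
As a minimal sketch of that single API in use (the model ID shown is the publicly documented Claude 3 Sonnet identifier; swap in any model you have enabled in your account):

    import boto3

    # Runtime client for direct foundation model calls on Amazon Bedrock
    bedrock_runtime = boto3.client("bedrock-runtime")

    # The Converse API uses the same request shape across model providers
    response = bedrock_runtime.converse(
        modelId="anthropic.claude-3-sonnet-20240229-v1:0",  # use a model you have access to
        messages=[{"role": "user", "content": [{"text": "Write a Python function that reverses a string."}]}],
        inferenceConfig={"maxTokens": 512, "temperature": 0.2},
    )
    print(response["output"]["message"]["content"][0]["text"])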

Solution overview

Typically, a three-tier software application has a UI interface tier, a middle tier (the backend) for business APIs, and a database tier. The generative AI–based application builder assistant from this post will help you accomplish tasks through all three tiers. It can generate and explain code snippets for UI and backend tiers in the language of your choice to improve developer productivity and facilitate rapid development of use cases. The agent can recommend software and architecture design best practices using the AWS Well-Architected Framework for the overall system design.

For the database tier, the agent can generate SQL queries from natural language questions using a database schema DDL (data definition language for SQL) and execute them against a database instance.
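
The Lambda function behind this step is part of the repository; a heavily simplified sketch of what such an action group handler could look like follows. The event parsing, the parameter name sql_query, the response shape, and the northwind.db path are assumptions for illustration, not the repository's exact code.

    import json
    import sqlite3

    DB_PATH = "/opt/northwind.db"  # hypothetical path to the Northwind SQLite file packaged with the Lambda

    def lambda_handler(event, context):
        """Action group handler: run the SQL generated by the agent and return the rows."""
        # Pull the generated SQL out of the request body (simplified; adapt to your API schema)
        props = event["requestBody"]["content"]["application/json"]["properties"]
        sql_query = next(p["value"] for p in props if p["name"] == "sql_query")

        with sqlite3.connect(DB_PATH) as conn:
            rows = conn.execute(sql_query).fetchall()

        # Return the result in the shape Bedrock Agents expects from an API schema-based action group
        return {
            "messageVersion": "1.0",
            "response": {
                "actionGroup": event["actionGroup"],
                "apiPath": event["apiPath"],
                "httpMethod": event["httpMethod"],
                "httpStatusCode": 200,
                "responseBody": {"application/json": {"body": json.dumps({"rows": rows})}},
            },
        }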

We use Amazon Bedrock Agents with two knowledge bases for this assistant. Amazon Bedrock Knowledge Bases inherently uses the Retrieval Augmented Generation (RAG) technique. A typical RAG implementation consists of two parts: an ingestion workflow that chunks, embeds, and indexes the source documents into a vector store, and a retrieval workflow that fetches the most relevant chunks at query time and passes them to the LLM as context for generating the response.
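
For the retrieval side, a minimal sketch of querying a knowledge base directly with the boto3 bedrock-agent-runtime client (the knowledge base ID is a placeholder):

    import boto3

    agent_runtime = boto3.client("bedrock-agent-runtime")

    # Fetch the chunks most relevant to a design question from the Well-Architected knowledge base
    result = agent_runtime.retrieve(
        knowledgeBaseId="KB_ID_PLACEHOLDER",  # placeholder: your knowledge base ID
        retrievalQuery={"text": "What are best practices for multi-AZ database deployments?"},
        retrievalConfiguration={"vectorSearchConfiguration": {"numberOfResults": 5}},
    )
    for item in result["retrievalResults"]:
        print(item["content"]["text"][:200])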

The following diagram illustrates how our application builder assistant acts as a coding assistant, recommends AWS design best practices, and aids in SQL code generation.

Based on the three workflows in the preceding figure, let’s explore the type of task you need for different use cases:

Prerequisites

To run this solution in your AWS account, complete the following prerequisites:

    Clone the GitHub repository and follow the steps explained in the README.
    Set up an Amazon SageMaker notebook on an ml.t3.medium Amazon Elastic Compute Cloud (Amazon EC2) instance. For this post, we have provided an AWS CloudFormation template, available in the GitHub repository. The CloudFormation template also provides the required AWS Identity and Access Management (IAM) access to set up the vector database, SageMaker resources, and AWS Lambda.
    Acquire access to models hosted on Amazon Bedrock. Choose Manage model access in the navigation pane on the Amazon Bedrock console and choose from the list of available options. We use Anthropic’s Claude v3 (Sonnet) on Amazon Bedrock and Amazon Titan Embeddings Text v2 on Amazon Bedrock for this post (see the sketch after this list for a way to check model availability from code).
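
If you prefer to check from code which models are offered in your Region, a minimal sketch with the boto3 bedrock control-plane client follows; note that access grants themselves are managed on the Manage model access console page mentioned above.

    import boto3

    bedrock = boto3.client("bedrock")

    # List the Anthropic models offered in this Region
    models = bedrock.list_foundation_models(byProvider="Anthropic")
    for model in models["modelSummaries"]:
        print(model["modelId"])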

Implement the solution

In the GitHub repository notebook, we cover the following learning objectives:

    Choose the underlying FM for your agent.
    Write a clear and concise agent instruction to use one of the two knowledge bases and the base agent LLM. (Examples are given later in the post.)
    Create and associate an action group with an API schema and a Lambda function.
    Create, associate, and ingest data into the two knowledge bases.
    Create, invoke, test, and deploy the agent (see the sketch after this list).
    Generate UI and backend code with LLMs.
    Recommend AWS best practices for system design with the AWS Well-Architected Framework guidelines.
    Generate, run, and validate the SQL from natural language understanding using LLMs, few-shot examples, and a database schema as a knowledge base.
    Clean up agent resources and their dependencies using a script.
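
The notebook walks through these steps in detail. As a hedged, minimal sketch of the agent creation and deployment calls with the boto3 bedrock-agent client (the role ARN, instruction text, names, and knowledge base ID are placeholders, not the repository's exact values):

    import boto3

    bedrock_agent = boto3.client("bedrock-agent")

    # 1. Create the agent with a foundation model and an instruction (placeholders shown)
    agent = bedrock_agent.create_agent(
        agentName="app-builder-assistant",
        foundationModel="anthropic.claude-3-sonnet-20240229-v1:0",
        instruction="You are an application builder assistant ...",  # abbreviated placeholder
        agentResourceRoleArn="arn:aws:iam::123456789012:role/AgentRole",  # placeholder role ARN
    )
    agent_id = agent["agent"]["agentId"]

    # 2. Associate an existing knowledge base with the agent's DRAFT version
    bedrock_agent.associate_agent_knowledge_base(
        agentId=agent_id,
        agentVersion="DRAFT",
        knowledgeBaseId="KB_ID_PLACEHOLDER",
        description="AWS Well-Architected best practices",
    )

    # 3. Prepare the DRAFT version, then create an alias to deploy it
    #    (in practice, wait until the agent status is PREPARED before creating the alias)
    bedrock_agent.prepare_agent(agentId=agent_id)
    alias = bedrock_agent.create_agent_alias(agentId=agent_id, agentAliasName="prod")
    print(agent_id, alias["agentAlias"]["agentAliasId"])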

Agent instructions and user prompts

The application builder assistant agent instruction looks like the following.

Hello, I am AI Application Builder Assistant. I am capable of answering the following three categories of questions:
- Best practices for design of software applications using the content inside the AWS best practices and AWS well-architected framework Knowledge Base. I help customers understand AWS best practices for building applications with AWS services.
- Generate a valid SQLite query for the customer using the database schema inside the Northwind DB knowledge base and then execute the query that answers the question based on the [Northwind] dataset. If the Northwind DB Knowledge Base search function result did not contain enough information to construct a full query, try to construct a query to the best of your ability based on the Northwind database schema.
- Generate and Explain code for the customer following standard programming language syntax.

Feel free to ask any questions along those lines!

Each user question to the agent by default includes the following system prompt.

Note: The following system prompt remains the same for each agent invocation; only {user_question_to_agent} is replaced with the user's query.

Question: {user_question_to_agent}

Given an input question, you will use the existing Knowledge Bases on AWS Well-Architected Framework and Northwind DB Knowledge Base.
- For building and designing software applications, you will use the existing Knowledge Base on AWS well-architected framework to generate a response of the most relevant design principles and links to any documents. This Knowledge Base response can then be passed to the functions available to answer the user question. The final response should be the direct answer to the user question. It has to be in markdown format highlighting any text of interest. Remove any backticks in the final response.
- To generate code for a given user question, you can use the default Large Language Model to come up with the response. This response can be in code markdown format. You can optionally provide an explanation for the code.
- To explain code for a given user question, you can use the default Large Language Model to come up with the response.
- For SQL query generation you will ONLY use the existing database schemas in the Northwind DB Knowledge Base to create a syntactically correct SQLite query and then you will EXECUTE the SQL Query using the functions and API provided to answer the question. Make sure to use ONLY existing columns and tables based on the Northwind DB database schema. Make sure to wrap table names with square brackets. Do not use underscore for table names unless that is part of the database schema. Make sure to add a semicolon after the end of the SQL statement generated.

Remove any backticks and any html tags like <table> <th> <tr> in the final response.

Here are a few examples of questions I can help answer by generating and then executing a SQLite query:
- What are the total sales amounts by year?
- What are the top 5 most expensive products?
- What is the total revenue for each employee?
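
A minimal sketch of how that substitution might be performed before each invocation (the template text is abbreviated, and build_agent_input is a hypothetical helper rather than code from the repository):

    # Abbreviated version of the system prompt shown above; only the placeholder changes per call
    SYSTEM_PROMPT_TEMPLATE = (
        "Question: {user_question_to_agent} "
        "Given an input question, you will use the existing Knowledge Bases on AWS "
        "Well-Architected Framework and Northwind DB Knowledge Base. ..."
    )

    def build_agent_input(user_question: str) -> str:
        """Wrap the raw user question in the fixed system prompt before invoking the agent."""
        return SYSTEM_PROMPT_TEMPLATE.format(user_question_to_agent=user_question)

    # The wrapped text is what gets passed as inputText to invoke_agent (see the earlier sketch)
    print(build_agent_input("What are the top 5 most expensive products?"))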

Cost considerations

The following are important cost considerations:

Clean up

To avoid incurring unnecessary costs, the implementation automatically cleans up resources after an entire run of the notebook. See the Clean-up Resources section of the notebook for instructions on how to skip the automatic cleanup and experiment with different prompts.

The order of resource cleanup is as follows:

    1. Disable the action group.
    2. Delete the action group.
    3. Delete the alias.
    4. Delete the agent.
    5. Delete the Lambda function.
    6. Empty the S3 bucket.
    7. Delete the S3 bucket.
    8. Delete IAM roles and policies.
    9. Delete the vector DB collection policies.
    10. Delete the knowledge bases.
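
A hedged sketch of the agent-related deletion calls in that order with the boto3 bedrock-agent client follows; the IDs and the action group name are placeholders, and the repository's cleanup script also covers the Lambda, S3, IAM, and vector store resources.

    import boto3

    bedrock_agent = boto3.client("bedrock-agent")

    AGENT_ID = "AGENT_ID"        # placeholder
    ALIAS_ID = "ALIAS_ID"        # placeholder
    ACTION_GROUP_ID = "AG_ID"    # placeholder
    KNOWLEDGE_BASE_ID = "KB_ID"  # placeholder

    # Disable, then delete the action group on the DRAFT version
    # (depending on the SDK version, update_agent_action_group may also need the original executor and API schema)
    bedrock_agent.update_agent_action_group(
        agentId=AGENT_ID,
        agentVersion="DRAFT",
        actionGroupId=ACTION_GROUP_ID,
        actionGroupName="executesqlquery",  # placeholder action group name
        actionGroupState="DISABLED",
    )
    bedrock_agent.delete_agent_action_group(
        agentId=AGENT_ID, agentVersion="DRAFT", actionGroupId=ACTION_GROUP_ID
    )

    # Delete the alias, then the agent itself
    bedrock_agent.delete_agent_alias(agentId=AGENT_ID, agentAliasId=ALIAS_ID)
    bedrock_agent.delete_agent(agentId=AGENT_ID)

    # Delete the knowledge base once the agent no longer references it
    bedrock_agent.delete_knowledge_base(knowledgeBaseId=KNOWLEDGE_BASE_ID)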

Conclusion

This post demonstrated how to query and integrate workflows with Amazon Bedrock Agents using multiple knowledge bases to create a generative AI–based software application builder assistant that can author and explain code, generate SQL using DDL schemas, and recommend design suggestions using the AWS Well-Architected Framework.

Beyond the code generation and explanation demonstrated in this post, to run and troubleshoot application code in a secure test environment, you can refer to Code Interpreter setup with Amazon Bedrock Agents.

For more information on creating agents to orchestrate workflows, see Amazon Bedrock Agents.

Acknowledgements

The author thanks all the reviewers for their valuable feedback.


About the Author

Shayan Ray is an Applied Scientist at Amazon Web Services. His area of research is all things natural language (like NLP, NLU, NLG). His work has been focused on conversational AI, task-oriented dialogue systems and LLM-based agents. His research publications are on natural language processing, personalization, and reinforcement learning.
