AWS Machine Learning Blog, November 27, 2024
Create a generative AI assistant with Slack and Amazon Bedrock


Seamless integration of customer experience, collaboration tools, and relevant data is the foundation for delivering knowledge-based productivity gains. In this post, we show you how to integrate the popular Slack messaging service with AWS generative AI services to build a natural language assistant where business users can ask questions of an unstructured dataset.

To demonstrate, we create a generative AI-enabled Slack assistant with an integration to Amazon Bedrock Knowledge Bases that can expose the combined knowledge of the AWS Well-Architected Framework while implementing safeguards and responsible AI using Amazon Bedrock Guardrails.

Amazon Bedrock is a fully managed service that offers a choice of high-performing foundation models (FMs) from leading AI companies like AI21 Labs, Anthropic, Cohere, Meta, Stability AI, and Amazon through a single API.

Amazon Bedrock Knowledge Bases provides a fully managed Retrieval Augmented Generation (RAG) workflow, a technique that fetches data from company data sources and enriches the prompt to provide more relevant and accurate responses to natural language queries. This makes Amazon Bedrock Knowledge Bases an attractive option to incorporate advanced generative AI capabilities into products and services without the need for extensive machine learning expertise.

Amazon Bedrock Guardrails enables you to implement safeguards to build and customize safety, privacy, and truthfulness protections for your generative AI applications to align with responsible AI policies. Guardrails can help prevent undesirable content, block prompt injections, and remove sensitive information for privacy, protecting your company’s brand and reputation.

This content builds on posts such as Deploy a Slack gateway for Amazon Bedrock by adding integrations to Amazon Bedrock Knowledge Bases and Amazon Bedrock Guardrails, and the Bolt for Python library to simplify Slack message acknowledgement and authentication requirements.

Solution overview

The code in the accompanying GitHub repo provided in this solution enables an automated deployment of Amazon Bedrock Knowledge Bases, Amazon Bedrock Guardrails, and the required resources to integrate the Amazon Bedrock Knowledge Bases API with a Slack slash command assistant using the Bolt for Python library.

In this example, we ingest the documentation of the AWS Well-Architected Framework into the knowledge base. Then we use the integration to the Amazon Bedrock Knowledge Bases API to provide a Slack assistant that can answer user questions on AWS architecture best practices. You can substitute the example documentation for your enterprise dataset, such as your corporate, HR, IT, or security policies, or equipment user or maintenance guides.

The following diagram illustrates the high-level solution architecture.

In the following sections, we discuss the key components in more detail.

Slack integration

The Slack integration is provided through the Slack Bolt Library for Python running in the Request Processor AWS Lambda function. The Slack Bolt Library handles authentication and permissions to the Slack application we build, and comes with built-in support for asynchronous request handling. Slack Bolt provides a dedicated user guide to deploy and run the library in a Lambda function.
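
The following is a minimal sketch of that pattern, assuming the bot token and signing secret are supplied through the SLACK_BOT_TOKEN and SLACK_SIGNING_SECRET environment variables; query_knowledge_base is a hypothetical placeholder for the knowledge base call, and the handler in the repository may differ in its details.

    import os
    from slack_bolt import App
    from slack_bolt.adapter.aws_lambda import SlackRequestHandler

    # process_before_response=True is required when running Bolt inside Lambda
    app = App(
        token=os.environ["SLACK_BOT_TOKEN"],
        signing_secret=os.environ["SLACK_SIGNING_SECRET"],
        process_before_response=True,
    )

    def query_knowledge_base(question: str) -> str:
        # Placeholder: see the RetrieveAndGenerate sketch later in this post
        return f"(answer for: {question})"

    def acknowledge(ack, body):
        # Slack requires an acknowledgement within 3 seconds
        ack(f"Processing Request: {body.get('text', '')}")

    def handle_question(respond, body):
        # Runs after the acknowledgement has been sent
        respond(query_knowledge_base(body["text"]))

    # Lazy listener: acknowledge immediately, then process the question
    app.command("/ask-aws")(ack=acknowledge, lazy=[handle_question])

    def handler(event, context):
        return SlackRequestHandler(app=app).handle(event, context)

With lazy listeners, the acknowledgement is returned within Slack's 3-second limit while the knowledge base query runs in a separate asynchronous invocation of the Lambda function.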

Retrieval Augmented Generation

Amazon Bedrock Knowledge Bases gives FMs contextual information from your private data sources for RAG to deliver more relevant, accurate, and customized responses.

The RAG workflow consists of two key components: data ingestion and text generation.

Amazon Bedrock Knowledge Bases APIs

Amazon Bedrock Knowledge Bases provides a fully managed RAG workflow that is exposed using two main APIs:

    Retrieve: Returns the relevant document chunks retrieved from the knowledge base so you can build your own generation flow.
    RetrieveAndGenerate: Retrieves the relevant chunks, augments the prompt, and uses the configured FM to generate a natural language response in a single call.

The solution in this post calls the RetrieveAndGenerate API to return the natural language response to the Slack Bolt integration library.
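
The following is a minimal sketch of that call using the boto3 bedrock-agent-runtime client. The environment variable names for the knowledge base ID, model ARN, and guardrail identifiers are assumptions for this example; the Lambda function in the repository may name and wire them differently.

    import os
    import boto3

    bedrock_agent_runtime = boto3.client("bedrock-agent-runtime")

    def query_knowledge_base(question: str) -> str:
        # RetrieveAndGenerate fetches relevant chunks from the knowledge base
        # and asks the configured model for a grounded natural language answer.
        response = bedrock_agent_runtime.retrieve_and_generate(
            input={"text": question},
            retrieveAndGenerateConfiguration={
                "type": "KNOWLEDGE_BASE",
                "knowledgeBaseConfiguration": {
                    "knowledgeBaseId": os.environ["KNOWLEDGE_BASE_ID"],
                    "modelArn": os.environ["RAG_MODEL_ARN"],
                    # Apply the guardrail to the generated response
                    "generationConfiguration": {
                        "guardrailConfiguration": {
                            "guardrailId": os.environ["GUARDRAIL_ID"],
                            "guardrailVersion": os.environ["GUARDRAIL_VERSION"],
                        }
                    },
                },
            },
        )
        return response["output"]["text"]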

Amazon Bedrock Guardrails

Amazon Bedrock Guardrails provides additional customizable safeguards on top of built-in protections offered by FMs, delivering safety features that are among the best in the industry.

In this solution, we configure Amazon Bedrock Guardrails with content filters, sensitive information filters, and word filters.

Content filters help detect and filter harmful user inputs and model-generated outputs across six categories: prompt injections, misconduct, insults, hate, violence, and sexually explicit content. In this solution, we use all six content filter categories.

Sensitive information filters detect sensitive information such as personally identifiable information (PII) in a prompt or model response. To align with your specific use case, you can define custom sensitive information filters using regular expressions (regex).

In this solution, we configure sensitive information filters as follows:

Word filters are used to block words and phrases in input prompts and model responses. In this solution, we have enabled the AWS provided profanity filter. To align with your use case, you can create custom word filters.
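
As an illustration of this configuration, the following sketch creates a comparable guardrail with the boto3 bedrock client. The filter strengths, the PII entities, and the blocked-request messages shown here are assumptions for the example; the CDK stack in the repository defines the exact settings that get deployed.

    import boto3

    bedrock = boto3.client("bedrock")

    # Content filters for all six categories; PROMPT_ATTACK applies to inputs
    # only, so its output strength must be NONE.
    content_filters = [
        {"type": t, "inputStrength": "HIGH", "outputStrength": "HIGH"}
        for t in ["SEXUAL", "VIOLENCE", "HATE", "INSULTS", "MISCONDUCT"]
    ]
    content_filters.append(
        {"type": "PROMPT_ATTACK", "inputStrength": "HIGH", "outputStrength": "NONE"}
    )

    response = bedrock.create_guardrail(
        name="slack-assistant-guardrail",
        blockedInputMessaging="Sorry, I am unable to respond to that request.",
        blockedOutputsMessaging="Sorry, I am unable to respond to that request.",
        contentPolicyConfig={"filtersConfig": content_filters},
        # Example sensitive information filters; adjust to your use case
        sensitiveInformationPolicyConfig={
            "piiEntitiesConfig": [
                {"type": "EMAIL", "action": "ANONYMIZE"},
                {"type": "PHONE", "action": "ANONYMIZE"},
            ]
        },
        # AWS managed profanity word list
        wordPolicyConfig={"managedWordListsConfig": [{"type": "PROFANITY"}]},
    )
    guardrail_id = response["guardrailId"]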

Solution walkthrough

Slack interfaces with a simple REST API, configured with Lambda proxy integration that in turn interacts with Amazon Bedrock Knowledge Bases APIs.

The solution is deployed with the following high-level steps:

    Create a new Slack application.
    Enable third-party model access in Amazon Bedrock.
    Deploy the Slack to Amazon Bedrock integration using the AWS Cloud Development Kit (AWS CDK).
    Ingest the AWS Well-Architected Framework documents to the knowledge base.

Prerequisites

To implement this solution, you need the following prerequisites:

This post assumes a working knowledge of the listed AWS services. Some understanding of vector databases, vectorization, and RAG would be advantageous, but not necessary.

Create a new Slack application

After you have logged in to your Slack workspace, complete the following steps:

    Navigate to your Slack apps and create a new application.
    Choose From scratch when prompted.
    Provide an application name. For this post, we use the name aws-war-bot. Choose your workspace and choose Create App.
    To provide permissions for your Slack application, choose OAuth & Permissions in your Slack application navigation pane.
    In the Scopes section, under Bot Token Scopes, add the following permissions:
      calls:write
      commands
      incoming-webhook

    Under OAuth Tokens for Your Workspace, choose Install to [workspace name].
    Choose a channel that the Slack application will be accessed from. You may want to first create a dedicated channel in Slack for this purpose.
    Choose Allow.
    When the Slack application install is complete, copy the token value generated for Bot User OAuth Token to use in a later step.
    Under Settings in the navigation pane, choose Basic Information. In the App Credentials section, copy the value for Signing Secret and save this to use later.

Enable model access in Amazon Bedrock

Complete the following steps to enable model access in Amazon Bedrock:

    On the Amazon Bedrock console, choose Model access in the navigation pane.
    Choose Modify model access, or Enable specific models if this is the first time using Amazon Bedrock in your account.
    Select the models you want to use for the embeddings and RAG query response. In this post, we use Amazon Titan Text Embeddings V2 as the embeddings model and Anthropic's Claude 3 Sonnet as the RAG query model in the us-east-1 AWS Region.
    Choose Next.
    Review the model selection and choose Submit.

If you’re not using the us-east-1 Region, the models available to request may differ.

When the access request is complete, you will see the model’s status shown as Access granted for the selected models.

Deploy the Slack to Amazon Bedrock integration

In this section, you deploy the companion code to this post to your AWS account, which will deploy an API on API Gateway, a Lambda function, and an Amazon Bedrock knowledge base with OpenSearch Serverless as the vector database.

This section requires AWS CDK and TypeScript to be installed in your local integrated development environment (IDE) and for an AWS account to be bootstrapped. If this has not been done, refer to Getting started with the AWS CDK.

    Clone the code from the GitHub repository:
    git clone https://github.com/aws-samples/amazon-bedrock-knowledgebase-slackbot.git
    Open the amazon-bedrock-knowledgebase-slackbot directory in your preferred IDE and open the lib/amazon-bedrock-knowledgebase-slackbot-stack.ts file. Update the variables if needed (depending on model access and Regional support) for the RAG query and embeddings models:
    const RAG_MODEL_ID = "anthropic.claude-3-sonnet-20240229-v1:0";
    const EMBEDDING_MODEL = "amazon.titan-embed-text-v2:0";
    Save the changes after all updates are complete.
    From the root of your repository, run the command npm install.
    Run the command cdk synth to perform basic validation of the AWS CDK code. This generates a CloudFormation template from the AWS CDK stack, which can be reviewed in the cdk.out directory created in the root of the repository.
    To deploy the application stack, run the following command, replacing the values with the token and the signing secret you created earlier:
    cdk deploy --context slackBotToken=%slackBotToken% --context slackSigningSecret=%slackSigningSecret%

The AWS CDK will deploy the stack as a CloudFormation template. You can monitor the progress of the deployment on the AWS CloudFormation console.

Additionally, AWS CDK will attempt to deploy the application stack to the default account and Region using the default credentials file profile. To change profiles, add the profile flag. For example:

cdk deploy --profile [my-profile]

When the deployment is complete, you will see an output similar to the following screenshot, which details the API endpoint that has just been deployed.

    Copy the API endpoint URL for later use.

You can also retrieve this URL on the Outputs tab of the CloudFormation stack AmazonBedrockKnowledgebaseSlackbotStack that was run to deploy this solution.

    Switch back to the Slack API page. Under the Slack application you created, choose Slash Commands in the navigation pane and then choose Create New Command.
    Provide the following information (make sure to include the Region and API ID that have been deployed):
      For Command, enter /ask-aws.
      For Request URL, enter https://[AWS-URL]/slack/[command]. For example, https://ab12cd3efg.execute-api.us-east-1.amazonaws.com/prod/slack/ask-aws.
      For Short Description, enter a description (for example, AWS WAR Bot).

    Choose Save. Reinstall the Slack application to your workspace in the Install App section by choosing Reinstall next to the workspace name.
    Choose the channel where the Slack app will be deployed and choose Allow.

In the Slack channel, you will see a message like the one in the following screenshot, indicating that an integration with the channel has been added.

Populate the Amazon Bedrock knowledge base

Complete the following steps to populate the Amazon Bedrock knowledge base with the combined information of the AWS Well-Architected Framework:

    Download the following AWS Well-Architected Framework documents:

You can also include any Well-Architected Lenses that are relevant to your organization by downloading from AWS Whitepapers and Guides.

    On the Amazon Bedrock console, choose Knowledge bases in the navigation pane. Choose the knowledge base you deployed (slack-bedrock-kb).
    In the Data source section under Source link, choose the S3 bucket link that is displayed.

This will open the S3 bucket that is being used by the Amazon Bedrock knowledge base as the data source.

    In the S3 bucket, choose Upload then Add files, and select all of the downloaded AWS Well-Architected documents from the previous step.
    When the documents have completed uploading, switch back to the Knowledge bases page on the Amazon Bedrock console.
    Select the data source name and choose Sync.

This will sync the documents from the S3 bucket to the OpenSearch Serverless vector database. The process can take over 10 minutes.

When the sync is complete, the data source will show a Status of Available.
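
If you prefer to trigger the sync programmatically rather than from the console, an ingestion job can be started with the boto3 bedrock-agent client, as in the following sketch; the knowledge base and data source ID placeholders are assumptions you would replace with the values from your deployment (for example, from the CloudFormation outputs).

    import boto3

    bedrock_agent = boto3.client("bedrock-agent")

    # Start an ingestion job to (re)index the documents in the S3 data source
    job = bedrock_agent.start_ingestion_job(
        knowledgeBaseId="<knowledge-base-id>",
        dataSourceId="<data-source-id>",
    )

    # Check the job status; it moves to COMPLETE when indexing has finished
    status = bedrock_agent.get_ingestion_job(
        knowledgeBaseId="<knowledge-base-id>",
        dataSourceId="<data-source-id>",
        ingestionJobId=job["ingestionJob"]["ingestionJobId"],
    )["ingestionJob"]["status"]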

Test the Slack application integration with Amazon Bedrock

Complete the following steps to test the integration:

    Open the Slack channel selected in the previous steps and enter /ask-aws.

The Slack application will be displayed.

    Choose the Slack application and enter your prompt. For this test, we use the prompt “Tell me about the AWS Well-Architected Framework.”

The Slack application will respond with Processing Request and a copy of the entered prompt. The application will then provide a response to the prompt.

    To test that the guardrails are working as required, write a prompt that will invoke a guardrail intervention.

When an intervention occurs, you will receive the following predefined message as your response.

Clean up

Complete the following steps to clean up your resources:

    From your terminal, run the following command, replacing the values with the token and the signing secret created earlier:
    cdk destroy --context slackBotToken=%slackBotToken% --context slackSigningSecret=%slackSigningSecret%
    When prompted, enter y to confirm the deletion of the deployed stack.

Conclusion

In this post, we implemented a solution that integrates an Amazon Bedrock knowledge base with a Slack chat channel to allow business users to ask natural language questions of an unstructured dataset from a familiar interface. You can use this solution for multiple use cases by configuring it to different Slack applications and populating the knowledge base with the relevant dataset.

To get started, clone the GitHub repo and enhance your customers’ interactions with Amazon Bedrock. For more information about Amazon Bedrock, see Getting started with Amazon Bedrock.


About the Authors

Barry Conway is an Enterprise Solutions Architect at AWS with 20 years of experience in the technology industry, bridging the gap between business and technology. Barry has helped banking, manufacturing, logistics, and retail organizations realize their business goals.

Dean Colcott is an AWS Senior GenAI/ML Specialist Solution Architect and SME for Amazon Bedrock. He has areas of depth in integrating generative AI outcomes into enterprise applications, full stack development, video analytics, and computer vision and enterprise data platforms.
