AWS Machine Learning Blog, September 28, 2024
Architecture to AWS CloudFormation code using Anthropic’s Claude 3 on Amazon Bedrock

Anthropic’s Claude 3 models offer multimodal capabilities for processing images and text, making it possible to turn architecture diagrams into CloudFormation templates and accelerate the move from architecture to the prototype stage.

🎯 Anthropic’s Claude 3 models can analyze images and text together, opening new avenues for image understanding, such as identifying the objects in an image and their relative positions, or interpreting the data presented in charts.

💻 With Anthropic’s Claude 3’s image analysis capabilities, an architecture diagram can be passed as input to generate an AWS CloudFormation template for rapid prototyping and deployment of the architecture.

🛠️ The model uses few-shot prompting, with several CloudFormation templates provided as reference examples, so it can learn a team’s coding, naming, and organizational patterns and understand how to automate the deployment and management of AWS resources.

📋 The generated CloudFormation templates are intended for inspiration only; developers must test and verify them according to security guidelines, and the preset inference parameters can be changed as needed.

The Anthropic’s Claude 3 family of models, available on Amazon Bedrock, offers multimodal capabilities that enable the processing of images and text. This capability opens up innovative avenues for image understanding, wherein Anthropic’s Claude 3 models can analyze visual information in conjunction with textual data, facilitating more comprehensive and contextual interpretations. By taking advantage of its multimodal prowess, we can ask the model questions like “What objects are in the image, and how are they relatively positioned to each other?” We can also gain an understanding of data presented in charts and graphs by asking questions related to business intelligence (BI) tasks, such as “What is the sales trend for 2023 for company A in the enterprise market?” These are just some examples of the additional richness Anthropic’s Claude 3 brings to generative artificial intelligence (AI) interactions.

Architecting specific AWS Cloud solutions involves creating diagrams that show relationships and interactions between different services. Instead of building the code manually, you can use Anthropic’s Claude 3’s image analysis capabilities to generate AWS CloudFormation templates by passing an architecture diagram as input.

In this post, we explore some ways you can use Anthropic’s Claude 3 Sonnet’s vision capabilities to accelerate the process of moving from architecture to the prototype stage of a solution.

Use cases for architecture to code

The following are relevant use cases for this solution:

Solution overview

To demonstrate the solution, we use Streamlit to provide an interface for diagrams and prompts. Amazon Bedrock invokes the Anthropic’s Claude 3 Sonnet model, which provides multimodal capabilities. AWS Fargate is the compute engine for the web application. The following diagram illustrates the step-by-step process.
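The web application layer itself can stay small. The following is a minimal Streamlit sketch of the interface, assuming hypothetical helper functions (explain_architecture, generate_template, and update_template) that wrap the Amazon Bedrock calls; the complete application is available in the GitHub repo.

# app.py -- minimal sketch of the Streamlit front end; helper names are hypothetical
import streamlit as st

from bedrock_helpers import explain_architecture, generate_template, update_template  # hypothetical module

st.title("Architecture diagram to AWS CloudFormation")

uploaded = st.file_uploader("Upload an architecture diagram", type=["png", "jpg", "jpeg"])

if uploaded is not None and "template" not in st.session_state:
    image_bytes = uploaded.read()
    # Steps 1 and 2 run once per uploaded diagram
    explanation = explain_architecture(image_bytes, media_type=uploaded.type)
    st.markdown(explanation)
    st.session_state["template"] = generate_template(explanation)

if "template" in st.session_state:
    # Step 3: free-form update instructions through the chat interface
    instruction = st.chat_input("Describe the changes you want to make")
    if instruction:
        st.session_state["template"] = update_template(st.session_state["template"], instruction)
    st.code(st.session_state["template"], language="yaml")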

The workflow consists of the following steps:

    1. The user uploads an architecture image (JPEG or PNG) on the Streamlit application, invoking the Amazon Bedrock API to generate a step-by-step explanation of the architecture using the Anthropic’s Claude 3 Sonnet model.
    2. The Anthropic’s Claude 3 Sonnet model is invoked using the step-by-step explanation and few-shot learning examples to generate the initial CloudFormation code. The few-shot learning example consists of three CloudFormation templates; this helps the model understand writing practices associated with CloudFormation code.
    3. The user manually provides instructions using the chat interface to update the initial CloudFormation code.

*Steps 1 and 2 are executed once when the architecture diagram is uploaded. To trigger changes to the CloudFormation code (step 3), provide update instructions from the Streamlit app.
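As a rough illustration of steps 1 and 2, the sketch below calls the Anthropic’s Claude 3 Sonnet model through the Amazon Bedrock Messages API with boto3. The prompt wording and AWS Region are assumptions; the application’s actual prompts are in the GitHub repo.

# Sketch of step 1: ask Claude 3 Sonnet on Amazon Bedrock to explain an uploaded diagram.
import base64
import json

import boto3

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")
MODEL_ID = "anthropic.claude-3-sonnet-20240229-v1:0"

def explain_architecture(image_bytes, media_type="image/png"):
    body = {
        "anthropic_version": "bedrock-2023-05-31",
        "max_tokens": 2048,
        "temperature": 0,
        "messages": [{
            "role": "user",
            "content": [
                {"type": "image", "source": {
                    "type": "base64",
                    "media_type": media_type,
                    "data": base64.b64encode(image_bytes).decode("utf-8")}},
                {"type": "text", "text": "Explain this AWS architecture diagram step by step, "
                                         "listing each service and how the services interact."},
            ],
        }],
    }
    response = bedrock.invoke_model(modelId=MODEL_ID, body=json.dumps(body))
    return json.loads(response["body"].read())["content"][0]["text"]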

The CloudFormation templates generated by the web application are intended for inspiration purposes and not for production-level applications. It is the responsibility of a developer to test and verify the CloudFormation template according to security guidelines.

Few-shot prompting

To help Anthropic’s Claude 3 Sonnet understand the practices of writing CloudFormation code, we use few-shot prompting by providing three CloudFormation templates as reference examples in the prompt. Exposing Anthropic’s Claude 3 Sonnet to multiple CloudFormation templates will allow it to analyze and learn from the structure, resource definitions, parameter configurations, and other essential elements consistently implemented across your organization’s templates. This enables Anthropic’s Claude 3 Sonnet to grasp your team’s coding conventions, naming conventions, and organizational patterns when generating CloudFormation templates. The following examples used for few-shot learning can be found in the GitHub repo.

Few-shot prompting example 1

Few-shot prompting example 2

Few-shot prompting example 3

Furthermore, Anthropic’s Claude 3 Sonnet can observe how different resources and services are configured and integrated within the CloudFormation templates through few-shot prompting. It will gain insights into how to automate the deployment and management of various AWS resources, such as Amazon Simple Storage Service (Amazon S3), AWS Lambda, Amazon DynamoDB, and AWS Step Functions.
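A simple way to wire this up is to concatenate the three reference templates into the prompt ahead of the step-by-step explanation. The sketch below shows the idea; the file names, XML-style tags, and wording are assumptions, and the actual examples are in the GitHub repo.

# Sketch of assembling a few-shot prompt from reference CloudFormation templates.
from pathlib import Path

EXAMPLE_FILES = ["example1.yaml", "example2.yaml", "example3.yaml"]  # hypothetical file names

def build_few_shot_prompt(step_by_step_explanation: str) -> str:
    examples = []
    for i, name in enumerate(EXAMPLE_FILES, start=1):
        examples.append(f"<example_{i}>\n{Path(name).read_text()}\n</example_{i}>")
    return (
        "You write AWS CloudFormation templates. Follow the structure, naming conventions, "
        "and organizational patterns shown in these reference templates:\n\n"
        + "\n\n".join(examples)
        + "\n\nNow write a CloudFormation template for the following architecture:\n"
        + step_by_step_explanation
    )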

Inference parameters are preset, but they can be changed from the web application if desired. We recommend experimenting with various combinations of these parameters. By default, we set the temperature to zero to reduce the variability of outputs and create focused, syntactically correct code.
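The sketch below shows how these parameters might map onto the request body used earlier. Only the temperature value of zero reflects the default described above; the remaining values are assumptions and can be adjusted from the web application.

# Default inference parameters (temperature 0 per the text; other values are assumptions).
DEFAULT_INFERENCE_PARAMS = {
    "temperature": 0,    # focused, syntactically correct code with low variability
    "top_p": 0.999,
    "top_k": 250,
    "max_tokens": 4096,
}

# Merged into the Anthropic request body shown earlier, for example:
# body = {"anthropic_version": "bedrock-2023-05-31", **DEFAULT_INFERENCE_PARAMS, "messages": [...]}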

Prerequisites

To access the Anthropic’s Claude 3 Sonnet foundation model (FM), you must request access through the Amazon Bedrock console. For instructions, see Manage access to Amazon Bedrock foundation models. After requesting access to Anthropic’s Claude 3 Sonnet, you can deploy the following development.yaml CloudFormation template to provision the infrastructure for the demo. For instructions on how to deploy this sample, refer to the GitHub repo. Use the following table to launch the CloudFormation template to quickly deploy the sample in either us-east-1 or us-west-2.

Region Stack
us-east-1 development.yaml
us-west-2 development.yaml

When deploying the template, you have the option to specify the Amazon Bedrock model ID you want to use for inference. This flexibility allows you to choose the model that best suits your needs. By default, the template uses the Anthropic’s Claude 3 Sonnet model, renowned for its exceptional performance. However, if you prefer to use a different model, you can seamlessly pass its Amazon Bedrock model ID as a parameter during deployment. Verify that you have requested access to the desired model beforehand and that the model possesses the necessary vision capabilities required for your specific use case.
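If you prefer to script the deployment rather than use the launch links, the following boto3 sketch passes a model ID parameter. The stack name and the parameter key BedrockModelId are assumptions; check the Parameters section of development.yaml for the actual key.

# Sketch: deploy development.yaml with a custom Amazon Bedrock model ID.
import boto3

cloudformation = boto3.client("cloudformation", region_name="us-east-1")

with open("development.yaml") as f:
    template_body = f.read()

cloudformation.create_stack(
    StackName="architecture-to-code-demo",           # hypothetical stack name
    TemplateBody=template_body,
    Parameters=[{
        "ParameterKey": "BedrockModelId",            # hypothetical parameter key
        "ParameterValue": "anthropic.claude-3-sonnet-20240229-v1:0",
    }],
    Capabilities=["CAPABILITY_NAMED_IAM"],           # the stack creates IAM roles
)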

After you launch the CloudFormation stack, navigate to the stack’s Outputs tab on the AWS CloudFormation console and collect the Amazon CloudFront URL. Enter the URL in your browser to view the web application.
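You can also read the URL programmatically from the stack outputs, as in the short sketch below; the stack name and output key are assumptions, so match them to what appears on the Outputs tab.

# Sketch: read the CloudFront URL from the stack outputs.
import boto3

cloudformation = boto3.client("cloudformation", region_name="us-east-1")
stack = cloudformation.describe_stacks(StackName="architecture-to-code-demo")["Stacks"][0]
outputs = {o["OutputKey"]: o["OutputValue"] for o in stack["Outputs"]}
print(outputs.get("CloudFrontURL"))  # hypothetical output key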

In this post, we discuss CloudFormation template generation for three different samples. You can find the sample architecture diagrams in the GitHub repo. These samples are similar to the few-shot learning examples, which is intentional. As an enhancement to this architecture, you can employ a Retrieval Augmented Generation (RAG)-based approach to retrieve relevant CloudFormation templates from a knowledge base to dynamically augment the prompt.

Due to the non-deterministic behavior of the large language model (LLM), you might not get the same response as shown in this post.

Let’s generate CloudFormation templates for the following sample architecture diagram.

Uploading the preceding architecture diagram to the web application generates a step-by-step explanation of the diagram using Anthropic’s Claude 3 Sonnet’s vision capabilities.

Let’s analyze the step-by-step explanation. The generated response is divided into three parts:

    1. The context explains what the architecture diagram depicts.
    2. The architecture diagram’s flow gives the order in which AWS services are invoked and their relationship with each other.
    3. We get a summary of the entire generated response.

In the following step-by-step explanation, we see a few highlighted errors.

The step-by-step explanation is augmented with few-shot learning examples to develop an initial CloudFormation template. Let’s analyze the initial CloudFormation template:

AWSTemplateFormatVersion: '2010-09-09'
Description: >
  This CloudFormation stack sets up a serverless data processing pipeline triggered by file uploads to an S3 bucket.
  It uses AWS Lambda to process the uploaded files, and Amazon SNS to send notifications upon successful processing.
  This template is not production ready and should only be used for inspiration
Parameters:
  S3BucketName:
    Type: String
    Description: Name of the S3 bucket for file uploads
    AllowedPattern: ^[a-z0-9][a-z0-9-]*[a-z0-9]$
    MinLength: 1
    MaxLength: 63
  EmailAddress:
    Type: String
    Description: Email address to receive notifications
    AllowedPattern: ^[a-zA-Z0-9._%+-]+@[a-zA-Z0-9.-]+\.[a-zA-Z]{2,}$
Resources:
  # S3 Bucket
  S3Bucket:
    Type: AWS::S3::Bucket
    Properties:
      BucketName: !Ref S3BucketName
      NotificationConfiguration:
        LambdaConfigurations:
          - Event: 's3:ObjectCreated:*'
            Function: !GetAtt ProcessingLambda.Arn
  # SNS Topic
  SNSTopic:
    Type: AWS::SNS::Topic
    Properties:
      Subscription:
        - Endpoint: !Ref EmailAddress
          Protocol: email
  # Lambda Function
  ProcessingLambda:
    Type: AWS::Lambda::Function
    Properties:
      FunctionName: ProcessingLambda
      Runtime: python3.9
      Handler: index.lambda_handler
      Role: !GetAtt LambdaRole.Arn
      Code:
        ZipFile: |
          import boto3
          def lambda_handler(event, context):
              s3 = boto3.client('s3')
              sns = boto3.client('sns')
              # Process the uploaded file
              for record in event['Records']:
                  bucket_name = record['s3']['bucket']['name']
                  object_key = record['s3']['object']['key']
                  # Process the file data
                  # ...
              # Send notification upon successful processing
              sns.publish(
                  TopicArn=!Ref SNSTopic,
                  Message='File processing completed successfully',
                  Subject='Data Processing Notification'
              )
              return {
                  'statusCode': 200,
                  'body': 'File processing completed successfully'
              }
  # Lambda Role
  LambdaRole:
    Type: AWS::IAM::Role
    Properties:
      AssumeRolePolicyDocument:
        Version: '2012-10-17'
        Statement:
          - Effect: Allow
            Principal:
              Service: lambda.amazonaws.com
            Action: 'sts:AssumeRole'
      ManagedPolicyArns:
        - arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole
      Policies:
        - PolicyName: S3Access
          PolicyDocument:
            Version: '2012-10-17'
            Statement:
              - Effect: Allow
                Action:
                  - 's3:GetObject'
                Resource: !Join ['', ['arn:aws:s3:::', !Ref S3BucketName, '/*']]
        - PolicyName: SNSPublish
          PolicyDocument:
            Version: '2012-10-17'
            Statement:
              - Effect: Allow
                Action:
                  - 'sns:Publish'
                Resource: !Ref SNSTopic
  # Lambda Permissions
  LambdaPermission:
    Type: AWS::Lambda::Permission
    Properties:
      FunctionName: !GetAtt ProcessingLambda.Arn
      Action: 'lambda:InvokeFunction'
      Principal: s3.amazonaws.com
      SourceAccount: !Ref AWS::AccountId
      SourceArn: !Join ['', ['arn:aws:s3:::', !Ref S3BucketName]]
Outputs:
  S3BucketName:
    Description: Name of the S3 bucket for file uploads
    Value: !Ref S3Bucket
    Export:
      Name: !Sub '${AWS::StackName}-S3BucketName'
  LambdaFunctionArn:
    Description: ARN of the Lambda function
    Value: !GetAtt ProcessingLambda.Arn
    Export:
      Name: !Sub '${AWS::StackName}-LambdaFunctionArn'
  SNSTopicArn:
    Description: ARN of the SNS topic for notifications
    Value: !Ref SNSTopic
    Export:
      Name: !Sub '${AWS::StackName}-SNSTopicArn'

After analyzing the CloudFormation template, we see that the Lambda code refers to an Amazon Simple Notification Service (Amazon SNS) topic using !Ref SNSTopic, which is not valid. We also want to add additional functionality to the template. First, we want to filter the Amazon S3 notification configuration to invoke Lambda only when *.csv files are uploaded. Second, we want to add metadata to the CloudFormation template. To do this, we use the chat interface to give the following update instructions to the web application:

Make the following updates:
Use environment variables for AWS Lambda to access SNS Topic ARN.
Add filter to S3 notification configuration to only invoke AWS Lambda when *.csv files are uploaded.
Add metadata to CloudFormation template.
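Behind the chat interface, an update like this can be sent as a follow-up turn in the same conversation, with the previously generated template carried along as an assistant message. The following is a minimal sketch; the prompt wording is an assumption.

# Sketch of step 3: send update instructions as a follow-up conversation turn.
import json

import boto3

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")
MODEL_ID = "anthropic.claude-3-sonnet-20240229-v1:0"

def update_template(initial_template: str, instructions: str) -> str:
    body = {
        "anthropic_version": "bedrock-2023-05-31",
        "max_tokens": 4096,
        "temperature": 0,
        "messages": [
            {"role": "user", "content": [{"type": "text",
                "text": "Generate a CloudFormation template for the described architecture."}]},
            {"role": "assistant", "content": [{"type": "text", "text": initial_template}]},
            {"role": "user", "content": [{"type": "text",
                "text": "Update the template as follows:\n" + instructions}]},
        ],
    }
    response = bedrock.invoke_model(modelId=MODEL_ID, body=json.dumps(body))
    return json.loads(response["body"].read())["content"][0]["text"]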

The updated CloudFormation template is as follows:

AWSTemplateFormatVersion: '2010-09-09'
Description: >
  This CloudFormation stack sets up a serverless data processing pipeline triggered by file uploads to an S3 bucket.
  It uses AWS Lambda to process the uploaded files, and Amazon SNS to send notifications upon successful processing.
  This template is not production ready and should only be used for inspiration.
Metadata:
  'AWS::CloudFormation::Interface':
    ParameterGroups:
      - Label:
          default: 'S3 Bucket Configuration'
        Parameters:
          - S3BucketName
      - Label:
          default: 'Notification Configuration'
        Parameters:
          - EmailAddress
Parameters:
  S3BucketName:
    Type: String
    Description: Name of the S3 bucket for file uploads
    AllowedPattern: ^[a-z0-9][a-z0-9-]*[a-z0-9]$
    MinLength: 1
    MaxLength: 63
  EmailAddress:
    Type: String
    Description: Email address to receive notifications
    AllowedPattern: ^[a-zA-Z0-9._%+-]+@[a-zA-Z0-9.-]+\.[a-zA-Z]{2,}$
Resources:
  # S3 Bucket
  S3Bucket:
    Type: AWS::S3::Bucket
    Properties:
      BucketName: !Ref S3BucketName
      NotificationConfiguration:
        LambdaConfigurations:
          - Event: 's3:ObjectCreated:*'
            Function: !GetAtt ProcessingLambda.Arn
            Filter:
              S3Key:
                Rules:
                  - Name: suffix
                    Value: .csv
  # SNS Topic
  SNSTopic:
    Type: AWS::SNS::Topic
    Properties:
      Subscription:
        - Endpoint: !Ref EmailAddress
          Protocol: email
  # Lambda Function
  ProcessingLambda:
    Type: AWS::Lambda::Function
    Properties:
      FunctionName: ProcessingLambda
      Runtime: python3.9
      Handler: index.lambda_handler
      Role: !GetAtt LambdaRole.Arn
      Environment:
        Variables:
          SNS_TOPIC_ARN: !Ref SNSTopic
      Code:
        ZipFile: |
          import boto3
          import os
          def lambda_handler(event, context):
              s3 = boto3.client('s3')
              sns = boto3.client('sns')
              sns_topic_arn = os.environ['SNS_TOPIC_ARN']
              # Process the uploaded file
              for record in event['Records']:
                  bucket_name = record['s3']['bucket']['name']
                  object_key = record['s3']['object']['key']
                  # Process the file data
                  # ...
              # Send notification upon successful processing
              sns.publish(
                  TopicArn=sns_topic_arn,
                  Message='File processing completed successfully',
                  Subject='Data Processing Notification'
              )
              return {
                  'statusCode': 200,
                  'body': 'File processing completed successfully'
              }
  # Lambda Role
  LambdaRole:
    Type: AWS::IAM::Role
    Properties:
      AssumeRolePolicyDocument:
        Version: '2012-10-17'
        Statement:
          - Effect: Allow
            Principal:
              Service: lambda.amazonaws.com
            Action: 'sts:AssumeRole'
      ManagedPolicyArns:
        - arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole
      Policies:
        - PolicyName: S3Access
          PolicyDocument:
            Version: '2012-10-17'
            Statement:
              - Effect: Allow
                Action:
                  - 's3:GetObject'
                Resource: !Join ['', ['arn:aws:s3:::', !Ref S3BucketName, '/*']]
        - PolicyName: SNSPublish
          PolicyDocument:
            Version: '2012-10-17'
            Statement:
              - Effect: Allow
                Action:
                  - 'sns:Publish'
                Resource: !Ref SNSTopic
  # Lambda Permissions
  LambdaPermission:
    Type: AWS::Lambda::Permission
    Properties:
      FunctionName: !GetAtt ProcessingLambda.Arn
      Action: 'lambda:InvokeFunction'
      Principal: s3.amazonaws.com
      SourceAccount: !Ref AWS::AccountId
      SourceArn: !Join ['', ['arn:aws:s3:::', !Ref S3BucketName]]
Outputs:
  S3BucketName:
    Description: Name of the S3 bucket for file uploads
    Value: !Ref S3Bucket
    Export:
      Name: !Sub '${AWS::StackName}-S3BucketName'
  LambdaFunctionArn:
    Description: ARN of the Lambda function
    Value: !GetAtt ProcessingLambda.Arn
    Export:
      Name: !Sub '${AWS::StackName}-LambdaFunctionArn'
  SNSTopicArn:
    Description: ARN of the SNS topic for notifications
    Value: !Ref SNSTopic
    Export:
      Name: !Sub '${AWS::StackName}-SNSTopicArn'

Additional examples

We have provided two more sample diagrams, their associated CloudFormation code generated by Anthropic’s Claude 3 Sonnet, and the prompts used to create them. You can see how diagrams in various forms, from digital to hand-drawn, or some combination, can be used. The end-to-end analysis of these samples can be found at sample 2 and sample 3 on the GitHub repo.

Best practices for architecture to code

In the demonstrated use case, you can observe how well Anthropic’s Claude 3 Sonnet can pull details and relationships between services from an architecture image. The following are some ways you can improve the performance of Anthropic’s Claude in this use case:

Clean up

To clean up the resources used in this demo, complete the following steps:

    1. On the AWS CloudFormation console, choose Stacks in the navigation pane.
    2. Select the deployed development.yaml stack and choose Delete.

Conclusion

With the pattern demonstrated with Anthropic’s Claude 3 Sonnet, developers can effortlessly translate their architectural visions into reality by simply sketching their desired cloud solutions. Anthropic’s Claude 3 Sonnet’s advanced image understanding capabilities will analyze these diagrams and generate boilerplate CloudFormation code, minimizing the need for initial complex coding tasks. This visually driven approach empowers developers from a variety of skill levels, fostering collaboration, rapid prototyping, and accelerated innovation.

You can investigate other patterns, such as including RAG and agentic workflows, to improve the accuracy of code generation. You can also explore customizing the LLM by fine-tuning it to write CloudFormation code with greater flexibility.

Now that you have seen Anthropic’s Claude 3 Sonnet in action, try designing your own architecture diagrams using some of the best practices to take your prototyping to the next level.

For additional resources, refer to the following:


About the Authors

Eashan Kaushik is an Associate Solutions Architect at Amazon Web Services. He is driven by creating cutting-edge generative AI solutions while prioritizing a customer-centric approach to his work. Before this role, he obtained an MS in Computer Science from NYU Tandon School of Engineering. Outside of work, he enjoys sports, lifting, and running marathons.

Chris Pecora is a Generative AI Data Scientist at Amazon Web Services. He is passionate about building innovative products and solutions while also focusing on customer-obsessed science. When not running experiments and keeping up with the latest developments in generative AI, he loves spending time with his kids.
