AWS Machine Learning Blog, October 16, 2024
Bria 2.3, Bria 2.2 HD, and Bria 2.3 Fast are now available in Amazon SageMaker JumpStart


This post is co-written with Bar Fingerman from Bria.

We are thrilled to announce that Bria 2.3, 2.2 HD, and 2.3 Fast text-to-image foundation models (FMs) from Bria AI are now available in Amazon SageMaker JumpStart. Bria models are trained exclusively on commercial-grade licensed data, providing high standards of safety and compliance with full legal indemnity.

These advanced models from Bria AI generate high-quality and contextually relevant visual content that is ready to use in marketing, design, and image generation use cases across industries from ecommerce, media and entertainment, and gaming to consumer-packaged goods and retail.

In this post, we discuss Bria’s family of models, explain the Amazon SageMaker platform, and walk through how to discover, deploy, and run inference on a Bria 2.3 model using SageMaker JumpStart.

Overview of Bria 2.3, Bria 2.2 HD, and Bria 2.3 Fast

Bria AI offers a family of high-quality visual content models. These advanced models represent the cutting edge of generative AI technology for image creation:

- Bria 2.3: The core model delivers high-quality visual content with exceptional photorealism and detail, capable of generating stunning images of complex concepts across a variety of artistic styles, including photorealism.
- Bria 2.2 HD: Optimized for high definition, Bria 2.2 HD delivers high-definition visual content that meets the demanding needs of high-resolution applications, making sure every detail is sharp and clear.
- Bria 2.3 Fast: Optimized for speed, Bria 2.3 Fast generates high-quality visuals at a faster pace, ideal for applications that require quick turnaround times without compromising quality. Using this model on SageMaker g5 instance types provides fast latency and throughput compared to Bria 2.3 and Bria 2.2 HD, and p4d instance types provide twice the latency of the g5 instance.

Overview of SageMaker JumpStart

With SageMaker JumpStart, you can choose from a broad selection of publicly available FMs. ML practitioners can deploy FMs to dedicated SageMaker instances from a network-isolated environment and customize models using SageMaker for model training and deployment. You can now discover and deploy Bria models in Amazon SageMaker Studio or programmatically through the SageMaker Python SDK. Doing so enables you to derive model performance and machine learning operations (MLOps) controls with SageMaker features such as Amazon SageMaker Pipelines, Amazon SageMaker Debugger, or container logs.

The model is deployed in an AWS secure environment and under your virtual private cloud (VPC) controls, helping provide data security. Bria models are available today for deployment and inferencing in SageMaker Studio in the 22 AWS Regions where SageMaker JumpStart is available. Bria models require g5 and p4 instances.

Prerequisites

To try out the Bria models using SageMaker JumpStart, you need the following prerequisites:

Discover Bria models in SageMaker JumpStart

You can access the FMs through SageMaker JumpStart in the SageMaker Studio UI and the SageMaker Python SDK. In this section, we show how to discover the models in SageMaker Studio.

SageMaker Studio is an IDE that provides a single web-based visual interface where you can access purpose-built tools to perform all ML development steps, from preparing data to building, training, and deploying your ML models. For more details on how to get started and set up SageMaker Studio, refer to Amazon SageMaker Studio.

In SageMaker Studio, you can access SageMaker JumpStart by choosing JumpStart in the navigation pane or by choosing JumpStart on the Home page.

On the SageMaker JumpStart landing page, you can find pre-trained models from popular model hubs. You can search for Bria, and the search results will list all the Bria model variants available. For this post, we use the Bria 2.3 Commercial Text-to-image model.

You can choose the model card to view details about the model such as license, data used to train, and how to use the model. You also have two options, Deploy and Preview notebooks, to deploy the model and create an endpoint.

Subscribe to Bria models in AWS Marketplace

When you choose Deploy, if the model wasn’t already subscribed, you first have to subscribe before you can deploy the model. We demonstrate the subscription process for the Bria 2.3 Commercial Text-to-image model. You can repeat the same steps for subscribing to other Bria models.

After you choose Subscribe, you’re redirected to the model overview page, where you can read the model details, pricing, usage, and other information. Choose Continue to Subscribe and accept the offer on the following page to complete the subscription.

Configure and deploy Bria models using AWS Marketplace

The configuration page gives three different launch methods to choose from. For this post, we showcase how you can use the SageMaker console:

    1. For Available launch method, select SageMaker console.
    2. For Region, choose your preferred Region.
    3. Choose View in Amazon SageMaker.
    4. For Model name, enter a name (for example, Model-Bria-v2-3).
    5. For IAM role, choose an existing IAM role or create a new role that has the SageMaker full access IAM policy attached.
    6. Choose Next.

    The recommended instance types for this model endpoint are ml.g5.2xlarge, ml.g5.12xlarge, ml.g5.48xlarge, ml.p4d.24xlarge, and ml.p4de.24xlarge. Make sure you have the account-level service limit for one or more of these instance types to deploy this model. For more information, refer to Requesting a quota increase.

    7. In the Variants section, select any of the recommended instance types provided by Bria (for example, ml.g5.2xlarge).
    8. Choose Create endpoint configuration.

    A success message should appear after the endpoint configuration is successfully created. Choose Next to create an endpoint.
    In the Create endpoint section, enter the endpoint name (for example, Endpoint-Bria-v2-3-Model) and choose Submit.

    After you successfully create the endpoint, it's displayed on the SageMaker endpoints page on the SageMaker console.

Configure and deploy Bria models using SageMaker JumpStart

If you have already subscribed to the Bria models in AWS Marketplace, you can choose Deploy on the model card page to configure the endpoint.

On the endpoint configuration page, SageMaker pre-populates the endpoint name, recommended instance type, instance count, and other details for you. You can modify them based on your requirements and then choose Deploy to create an endpoint.

After you successfully create the endpoint, the status will show as In service.

Run inference in SageMaker Studio

You can test the endpoint by passing a sample inference request payload in SageMaker Studio, or you can use a SageMaker notebook. In this section, we demonstrate using SageMaker Studio:

    In SageMaker Studio, in the navigation pane, choose Endpoints under Deployments. Choose the Bria endpoint you just created.
    On the Test inference tab, test the endpoint by sending a sample request.
    You can see the response on the same page, as shown in the following screenshot.

Text-to-image generation using a SageMaker notebook

You can also use a SageMaker notebook to run inference against the deployed endpoint using the SageMaker Python SDK.

The following code initializes a predictor for the endpoint you created using SageMaker JumpStart:

from sagemaker.predictor import Predictor
from sagemaker.serializers import JSONSerializer
from sagemaker.deserializers import JSONDeserializer

# Use the existing endpoint name
endpoint_name = "XXXXXXXX"  # Replace with your endpoint name

# Create a SageMaker predictor object
bria_predictor = Predictor(
    endpoint_name=endpoint_name,
    serializer=JSONSerializer(),
    deserializer=JSONDeserializer(),
)

bria_predictor.endpoint_name

The model responses are in base64 encoded format. The following function helps decode the base64 encoded image and displays it as an image:

import base64
import io
from PIL import Image

def display_base64_image(base64_string):
    image_bytes = base64.b64decode(base64_string)
    image_stream = io.BytesIO(image_bytes)
    image = Image.open(image_stream)
    # Display the image
    image.show()
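In a headless notebook environment where image.show() can't open a viewer, you can instead write the decoded bytes straight to disk. The following is a minimal stdlib-only sketch; save_base64_image is our own helper name, not part of the Bria response schema:

```python
import base64

def save_base64_image(base64_string, output_path):
    """Decode a base64-encoded image string and write the raw bytes to disk."""
    image_bytes = base64.b64decode(base64_string)
    with open(output_path, "wb") as f:
        f.write(image_bytes)
    return output_path
```

You can then open the saved file with any image viewer, or load it with PIL for further processing.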

The following is a sample payload with a text prompt to generate an image using the Bria model:

payload = {
    "prompt": "a baby riding a bicycle in a field of flowers",
    "num_results": 1,
    "sync": True
}

response = bria_predictor.predict(payload)
artifacts = response['artifacts'][0]
encoded_image = artifacts['image_base64']
display_base64_image(encoded_image)
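If you request more than one image with num_results, the response's artifacts list contains one entry per image. A small helper can collect them all; this is a sketch that assumes the response shape shown above (a dict with an 'artifacts' list whose entries carry an 'image_base64' field), and extract_images is our own name, not part of the Bria API:

```python
def extract_images(response):
    """Return the base64-encoded image strings from a Bria endpoint response.

    Assumes the response shape used in the examples above.
    """
    return [artifact["image_base64"] for artifact in response.get("artifacts", [])]
```

Each returned string can then be passed to display_base64_image in turn.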

Example prompts

You can interact with the Bria 2.3 text-to-image model like any standard image generation model, where the model processes an input prompt and outputs a response. In this section, we provide some example prompts and sample output.

We use the following prompts:

The model generates the following images.

The following code generates an image using one of the preceding text prompts:

payload = {
    "prompt": "Photography, dynamic, in the city, professional male skateboarder, sunglasses, teal and orange hue",
    "num_results": 1,
    "sync": True
}

response = bria_predictor.predict(payload)
artifacts = response['artifacts'][0]
encoded_image = artifacts['image_base64']
display_base64_image(encoded_image)
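To experiment with several prompts, it can help to factor payload construction into a small function. The following is a sketch assuming the payload fields shown in the examples above (prompt, num_results, sync); build_payload is our own helper name, not part of the Bria API:

```python
def build_payload(prompt, num_results=1, sync=True):
    """Build a request payload in the shape the preceding examples send to the endpoint."""
    return {"prompt": prompt, "num_results": num_results, "sync": sync}

# Example: payloads for a batch of prompts, each ready to pass to bria_predictor.predict
prompts = [
    "a baby riding a bicycle in a field of flowers",
    "Photography, dynamic, in the city, professional male skateboarder, sunglasses, teal and orange hue",
]
payloads = [build_payload(p) for p in prompts]
```

Looping over payloads and calling bria_predictor.predict on each one generates an image per prompt.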

Clean up

After you’re done running the notebook, delete all resources that you created in the process so your billing is stopped. Use the following code:

bria_predictor.delete_model()
bria_predictor.delete_endpoint()

Conclusion

With the availability of Bria 2.3, 2.2 HD, and 2.3 Fast in SageMaker JumpStart and AWS Marketplace, enterprises can now use advanced generative AI capabilities to enhance their visual content creation processes. These models provide a balance of quality, speed, and compliance, making them an invaluable asset for any organization looking to stay ahead in the competitive landscape.

Bria’s commitment to responsible AI and the robust security framework of SageMaker provide enterprises with the full package for data privacy, regulatory compliance, and responsible AI models for commercial use. In addition, the integrated experience takes advantage of the capabilities of both platforms to simplify MLOps, data storage, and real-time processing.

For more information about using FMs in SageMaker JumpStart, refer to Train, deploy, and evaluate pretrained models with SageMaker JumpStart, JumpStart Foundation Models, and Getting started with Amazon SageMaker JumpStart.

Explore Bria models in SageMaker JumpStart today and revolutionize your visual content creation process!


About the Authors

Bar Fingerman is the Head of AI/ML Engineering at Bria. He leads the development and optimization of core infrastructure, enabling the company to scale cutting-edge generative AI technologies. With a focus on designing high-performance supercomputers for large-scale AI training, Bar leads the engineering group in deploying, managing, and securing scalable AI/ML cloud solutions. He works closely with leadership and cross-functional teams to align business goals while driving innovation and cost-efficiency.

Supriya Puragundla is a Senior Solutions Architect at AWS. She has over 15 years of IT experience in software development, design, and architecture. She helps key customer accounts on their data, generative AI, and AI/ML journeys. She is passionate about data-driven AI and the area of depth in ML and generative AI.

Rodrigo Merino is a Generative AI Solutions Architect Manager at AWS. With over a decade of experience deploying emerging technologies, ranging from generative AI to IoT, Rodrigo guides customers across various industries to accelerate their AI/ML and generative AI journeys. He specializes in helping organizations train and build models on AWS, as well as operationalize end-to-end ML solutions. Rodrigo’s expertise lies in bridging the gap between cutting-edge technology and practical business applications, enabling companies to harness the full potential of AI and drive innovation in their respective fields.

Eliad Maimon is a Senior Startup Solutions Architect at AWS, focusing on generative AI startups. He helps startups accelerate and scale their AI/ML journeys by guiding them through deep-learning model training and deployment on AWS. With a passion for AI and entrepreneurship, Eliad is committed to driving innovation and growth in the startup ecosystem.
