AWS Machine Learning Blog | May 16, 04:10
Cost-effective AI image generation with PixArt-Sigma inference on AWS Trainium and AWS Inferentia

 

This post describes how to use AWS Trainium and AWS Inferentia to accelerate the PixArt-Sigma model and generate high-quality 4K-resolution images. PixArt-Sigma is a diffusion transformer model that surpasses previous PixArt models in image generation through dataset and architectural improvements. The post details the steps to deploy PixArt-Sigma on AWS Trainium, including environment setup, model download and compilation, and how to use the Neuron compiler to optimize model performance. By using AWS's purpose-built AI chips, you can deploy large generative models cost-effectively and achieve optimal performance and efficiency when running diffusion transformer models such as PixArt-Sigma.

🚀 PixArt-Sigma is a diffusion transformer model that, through improvements to its dataset and architecture, can generate high-quality 4K-resolution images, significantly outperforming previous PixArt models.

⚙️ Deploying PixArt-Sigma on AWS Trainium involves three main steps: first, set up a development environment on a trn1, trn2, or inf2 host; second, download and compile the PixArt-Sigma model; finally, deploy the model on AWS Trainium to generate images.

🧩 The PixArt-Sigma model consists of three components: a text encoder (converts the text prompt into embeddings), a denoising transformer model (iteratively denoises a numerical representation of the compressed image), and a decoder (converts the latents produced by the denoiser into the output image). Each component is compiled so the entire generation pipeline can run on Neuron.

💡 To improve performance, the post shows how to shard PixArt's linear layers. By replacing the linear layers with NeuronX Distributed's RowParallelLinear and ColumnParallelLinear layers, the attention layers can be distributed across multiple devices, improving compute efficiency.

PixArt-Sigma is a diffusion transformer model capable of image generation at 4K resolution. Through dataset and architectural improvements, it shows significant gains over previous-generation PixArt models such as PixArt-Alpha, as well as over other diffusion models. AWS Trainium and AWS Inferentia are purpose-built AI chips that accelerate machine learning (ML) workloads, making them ideal for cost-effective deployment of large generative models. By using these AI chips, you can achieve optimal performance and efficiency when running inference with diffusion transformer models like PixArt-Sigma.

This post is the first in a series in which we will run multiple diffusion transformers on Trainium- and Inferentia-powered instances. In this post, we show how you can deploy PixArt-Sigma to these instances.

Solution overview

Use the following steps to deploy the PixArt-Sigma model on AWS Trainium and run inference on it to generate high-quality images.

Step 1 – Prerequisites and setup

To get started, you will need to set up a development environment on a trn1, trn2, or inf2 host. Complete the following steps:

1. Launch a trn1.32xlarge or trn2.48xlarge instance with a Neuron DLAMI. For instructions on how to get started, refer to Get Started with Neuron on Ubuntu 22 with Neuron Multi-Framework DLAMI.
2. Launch a Jupyter Notebook server. For instructions to set up a Jupyter server, refer to the following user guide.
3. Clone the aws-neuron-samples GitHub repository:

   git clone https://github.com/aws-neuron/aws-neuron-samples.git

4. Navigate to the hf_pretrained_pixart_sigma_1k_latency_optimized.ipynb notebook:

   cd aws-neuron-samples/torch-neuronx/inference

The provided example script is designed to run on a Trn2 instance, but you can adapt it for Trn1 or Inf2 instances with minimal modifications. Specifically, within the notebook and in each of the component files under the neuron_pixart_sigma directory, you will find commented-out changes to accommodate Trn1 or Inf2 configurations.
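To illustrate the pattern (the variable names and values below are hypothetical and not copied from the repository; check the comments in the notebook and component files for the actual switches), the Trn1 or Inf2 alternatives typically appear as commented-out lines next to the active Trn2 settings:

# Active Trn2 configuration (illustrative values only)
tp_degree = 4
compiler_flags = "--target=trn2"

# Uncomment and adjust for Trn1 or Inf2 (illustrative values only)
# tp_degree = 8
# compiler_flags = "--target=trn1"  # or "--target=inf2"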

Step 2 – Download and compile the PixArt-Sigma model for AWS Trainium

This section provides a step-by-step guide to compiling PixArt-Sigma for AWS Trainium.

Download the model

You will find a helper function in cache_hf_model.py in the aforementioned GitHub repository that shows how to download the PixArt-Sigma model from Hugging Face. If you are using PixArt-Sigma in your own workload and opt not to use the script included in this post, you can use the huggingface-cli to download the model instead.
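For example, a minimal huggingface-cli invocation might look like the following; this downloads the 1024-resolution checkpoint used later in this post into the default Hugging Face cache:

huggingface-cli download PixArt-alpha/PixArt-Sigma-XL-2-1024-MS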

The Neuron PixArt-Sigma implementation contains a few scripts and classes. The files and scripts are broken down as follows:

├── compile_latency_optimized.sh                     # Full model compilation script, latency optimized
├── compile_throughput_optimized.sh                  # Full model compilation script, throughput optimized
├── hf_pretrained_pixart_sigma_1k_latency_optimized.ipynb    # Notebook to run latency-optimized PixArt-Sigma
├── hf_pretrained_pixart_sigma_1k_throughput_optimized.ipynb # Notebook to run throughput-optimized PixArt-Sigma
├── neuron_pixart_sigma
│   ├── cache_hf_model.py                            # Model download script
│   ├── compile_decoder.py                           # Decoder compilation script and wrapper class
│   ├── compile_text_encoder.py                      # Text encoder compilation script and wrapper class
│   ├── compile_transformer_latency_optimized.py     # Latency-optimized transformer compilation script and wrapper class
│   ├── compile_transformer_throughput_optimized.py  # Throughput-optimized transformer compilation script and wrapper class
│   ├── neuron_commons.py                            # Base classes and attention implementation
│   └── neuron_parallel_utils.py                     # Sharded attention implementation
└── requirements.txt

This notebook will help you download the model, compile the individual component models, and invoke the generation pipeline to generate an image. Although the notebook can be run as a standalone sample, the next few sections of this post walk through the key implementation details within the component files and scripts to support running PixArt-Sigma on Neuron.

Sharding PixArt linear layers

For each component of PixArt (T5, Transformer, and VAE), the example uses Neuron-specific wrapper classes. These wrapper classes serve two purposes. The first is to allow tracing the models for compilation:

class InferenceTextEncoderWrapper(nn.Module):
    def __init__(self, dtype, t: T5EncoderModel, seqlen: int):
        super().__init__()
        self.dtype = dtype
        self.device = t.device
        self.t = t
    def forward(self, text_input_ids, attention_mask=None):
        return [self.t(text_input_ids, attention_mask)['last_hidden_state'].to(self.dtype)]

Please refer to the neuron_commons.py file for all wrapper modules and classes.

The second is to modify the attention implementation to run on Neuron. Because diffusion models like PixArt are typically compute-bound, you can improve performance by sharding the attention layers across multiple devices. To do this, you replace the linear layers with NeuronX Distributed's RowParallelLinear and ColumnParallelLinear layers:

def shard_t5_self_attention(tp_degree: int, selfAttention: T5Attention):
    orig_inner_dim = selfAttention.q.out_features
    dim_head = orig_inner_dim // selfAttention.n_heads
    original_nheads = selfAttention.n_heads
    selfAttention.n_heads = selfAttention.n_heads // tp_degree
    selfAttention.inner_dim = dim_head * selfAttention.n_heads
    orig_q = selfAttention.q
    selfAttention.q = ColumnParallelLinear(
        selfAttention.q.in_features,
        selfAttention.q.out_features,
        bias=False,
        gather_output=False)
    selfAttention.q.weight.data = get_sharded_data(orig_q.weight.data, 0)
    del(orig_q)
    orig_k = selfAttention.k
    selfAttention.k = ColumnParallelLinear(
        selfAttention.k.in_features,
        selfAttention.k.out_features,
        bias=(selfAttention.k.bias is not None),
        gather_output=False)
    selfAttention.k.weight.data = get_sharded_data(orig_k.weight.data, 0)
    del(orig_k)
    orig_v = selfAttention.v
    selfAttention.v = ColumnParallelLinear(
        selfAttention.v.in_features,
        selfAttention.v.out_features,
        bias=(selfAttention.v.bias is not None),
        gather_output=False)
    selfAttention.v.weight.data = get_sharded_data(orig_v.weight.data, 0)
    del(orig_v)
    orig_out = selfAttention.o
    selfAttention.o = RowParallelLinear(
        selfAttention.o.in_features,
        selfAttention.o.out_features,
        bias=(selfAttention.o.bias is not None),
        input_is_parallel=True)
    selfAttention.o.weight.data = get_sharded_data(orig_out.weight.data, 1)
    del(orig_out)
    return selfAttention

Please refer to the neuron_parallel_utils.py file for more details on parallel attention.

Compile individual sub-models

The PixArt-Sigma model is composed of three components, each of which is compiled so the entire generation pipeline can run on Neuron:

- Text encoder – Converts the text prompt into embeddings
- Denoising transformer – Iteratively denoises a latent (compressed) numerical representation of the image
- Decoder – Converts the latents produced by the denoiser into the output image

Now that the model definition is ready, you need to trace each model to run it on Trainium or Inferentia. The following code block shows how to use the trace() function to compile the decoder component model for PixArt:

compiled_decoder = torch_neuronx.trace(
    decoder,
    sample_inputs,
    compiler_workdir=f"{compiler_workdir}/decoder",
    compiler_args=compiler_flags,
    inline_weights_to_neff=False)

Please refer to the compile_decoder.py file for more on how to instantiate and compile the decoder.

To run models with tensor parallelism, a technique used to split a tensor into chunks across multiple NeuronCores, you need to trace with a pre-specified tp_degree. This tp_degree specifies the number of NeuronCores to shard the model across. The example then uses the parallel_model_trace API to compile the encoder and transformer component models for PixArt:

compiled_text_encoder = neuronx_distributed.trace.parallel_model_trace(
    get_text_encoder_f,
    sample_inputs,
    compiler_workdir=f"{compiler_workdir}/text_encoder",
    compiler_args=compiler_flags,
    tp_degree=tp_degree,
)

Please refer to the compile_text_encoder.py file for more details on tracing the encoder with tensor parallelism.

Lastly, you trace the transformer model with tensor parallelism:

compiled_transformer = neuronx_distributed.trace.parallel_model_trace(
    get_transformer_model_f,
    sample_inputs,
    compiler_workdir=f"{compiler_workdir}/transformer",
    compiler_args=compiler_flags,
    tp_degree=tp_degree,
    inline_weights_to_neff=False,
)

Please refer to the compile_transformer_latency_optimized.py file for more details on tracing the transformer with tensor parallelism.

You will use the compile_latency_optimized.sh script to compile all three models as described in this post, so these functions will be run automatically when you run through the notebook.
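As a sketch of that step (assuming the script is run from the directory containing the files listed earlier and takes no required arguments; check the script itself before running), the compilation amounts to:

sh compile_latency_optimized.sh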

Step 3 – Deploy the model on AWS Trainium to generate images

This section walks through the steps to run PixArt-Sigma inference on AWS Trainium.

Create a diffusers pipeline object

The Hugging Face diffusers library provides pre-trained diffusion models and includes model-specific pipelines that bundle the components (independently trained models, schedulers, and processors) needed to run a diffusion model. The PixArtSigmaPipeline is specific to PixArt-Sigma and is instantiated as follows:

pipe: PixArtSigmaPipeline = PixArtSigmaPipeline.from_pretrained(
    "PixArt-alpha/PixArt-Sigma-XL-2-1024-MS",
    torch_dtype=torch.bfloat16,
    local_files_only=True,
    cache_dir="pixart_sigma_hf_cache_dir_1024")

Please refer to the hf_pretrained_pixart_sigma_1k_latency_optimized.ipynb notebook for details on pipeline execution.

Load compiled component models into the generation pipeline

After each component model has been compiled, load the components into the overall generation pipeline for image generation. The VAE model is loaded with data parallelism, which parallelizes image generation across larger batch sizes or multiple images per prompt. For more details, refer to the hf_pretrained_pixart_sigma_1k_latency_optimized.ipynb notebook.

vae_decoder_wrapper.model = torch_neuronx.DataParallel(
    torch.jit.load(decoder_model_path), [0, 1, 2, 3], False)

text_encoder_wrapper.t = neuronx_distributed.trace.parallel_model_load(
    text_encoder_model_path)

Finally, the loaded models are added to the generation pipeline:

pipe.text_encoder = text_encoder_wrapper
pipe.transformer = transformer_wrapper
pipe.vae.decoder = vae_decoder_wrapper
pipe.vae.post_quant_conv = vae_post_quant_conv_wrapper

Compose a prompt

Now that the model is ready, you can write a prompt to convey what kind of image you want generated. When creating a prompt, be as specific as possible. You can use a positive prompt to convey what you want in the new image, including a subject, action, style, and location, and a negative prompt to indicate features that should be removed.

For example, you can use the following positive and negative prompts to generate a photo of an astronaut riding a horse on Mars without mountains:

# Subject: astronaut
# Action: riding a horse
# Location: Mars
# Style: photo
prompt = "a photo of an astronaut riding a horse on mars"
negative_prompt = "mountains"

Feel free to edit the prompt in your notebook using prompt engineering to generate an image of your choosing.

Generate an image

To generate an image, you pass the prompt to the PixArt model pipeline, and then save the generated image for later reference:

# pipe: variable holding the PixArt generation pipeline with each of
# the compiled component models
images = pipe(
        prompt=prompt,
        negative_prompt=negative_prompt,
        num_images_per_prompt=1,
        height=1024, # number of pixels
        width=1024, # number of pixels
        num_inference_steps=25 # number of passes through the denoising model
    ).images

for idx, img in enumerate(images):
    img.save(f"image_{idx}.png")

Cleanup

To avoid incurring additional costs, stop your EC2 instance using either the AWS Management Console or AWS Command Line Interface (AWS CLI).
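For example, with the AWS CLI (the instance ID below is a placeholder; substitute your own):

aws ec2 stop-instances --instance-ids i-1234567890abcdef0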

Conclusion

In this post, we walked through how to deploy PixArt-Sigma, a state-of-the-art diffusion transformer, on Trainium instances. This post is the first in a series focused on running diffusion transformers for different generation tasks on Neuron. To learn more about running diffusion transformers models with Neuron, refer to Diffusion Transformers.


About the Authors

Achintya Pinninti is a Solutions Architect at Amazon Web Services. He supports public sector customers, enabling them to achieve their objectives using the cloud. He specializes in building data and machine learning solutions to solve complex problems.

Miriam Lebowitz is a Solutions Architect focused on empowering early-stage startups at AWS. She leverages her experience with AI/ML to guide companies to select and implement the right technologies for their business objectives, setting them up for scalable growth and innovation in the competitive startup world.

Sadaf Rasool is a Solutions Architect in Annapurna Labs at AWS. Sadaf collaborates with customers to design machine learning solutions that address their critical business challenges. He helps customers train and deploy machine learning models leveraging AWS Trainium or AWS Inferentia chips to accelerate their innovation journey.

John Gray is a Solutions Architect in Annapurna Labs, AWS, based out of Seattle. In this role, John works with customers on their AI and machine learning use cases, architects solutions to cost-effectively solve their business problems, and helps them build a scalable prototype using AWS AI chips.
