AWS Machine Learning Blog, July 3, 2024
Accelerated PyTorch inference with torch.compile on AWS Graviton processors

The AWS team optimized PyTorch's torch.compile feature for AWS Graviton3 processors. By leveraging Arm Compute Library (ACL) kernels and oneDNN optimizations, it delivers up to 2x better inference performance for Hugging Face models and up to 1.35x better inference performance for TorchBench models.

🤔 **What torch.compile optimizes:** torch.compile synthesizes the operators in a model into a single graph, reducing memory reads and kernel launch overhead and thereby improving performance. The AWS team's goal was to optimize the torch.compile backend so it fully exploits the performance of Graviton3 processors.

🚀 **Performance gains:** The AWS team extended the torch inductor and oneDNN primitives to target Graviton3 processors and used ACL kernels to accelerate torch.compile. Benchmarks on TorchBench and Hugging Face models show that torch.compile on Graviton3 delivers significant gains: up to 2x faster inference for Hugging Face models and up to 1.35x faster inference for TorchBench models.

💻 **How to use the optimized torch.compile:** Starting with PyTorch 2.3.1, these optimizations are included in the torch Python wheels and in the AWS Graviton PyTorch deep learning container (DLC). Users can further improve torch.compile performance by setting environment variables such as DNNL_DEFAULT_FPMATH_MODE and THP_MEM_ALLOC_ENABLE.

📚 **Benchmark methodology:** The article provides code examples for benchmarking TorchBench and Hugging Face models; readers can use these examples to reproduce the performance gains of the optimized torch.compile.

Originally, PyTorch used an eager mode in which each PyTorch operation that forms the model runs independently as soon as it's reached. PyTorch 2.0 introduced torch.compile to speed up PyTorch code over the default eager mode. In contrast to eager mode, torch.compile pre-compiles the entire model into a single graph in a manner that's optimal for running on a given hardware platform. AWS optimized the PyTorch torch.compile feature for AWS Graviton3 processors. This optimization results in up to 2x better performance for Hugging Face model inference (based on the geomean of performance improvement for 33 models) and up to 1.35x better performance for TorchBench model inference (geomean of performance improvement for 45 models) compared to the default eager mode inference across several natural language processing (NLP), computer vision (CV), and recommendation models on AWS Graviton3-based Amazon EC2 instances. Starting with PyTorch 2.3.1, the optimizations are available in the torch Python wheels and in the AWS Graviton PyTorch deep learning container (DLC).

In this blog post, we show how we optimized torch.compile performance on AWS Graviton3-based EC2 instances, how to use the optimizations to improve inference performance, and the resulting speedups.

Why torch.compile and what’s the goal?

In eager mode, operators in a model are run immediately as they are encountered. It's easier to use and more suitable for machine learning (ML) researchers, and hence it is the default mode. However, eager mode incurs runtime overhead because of redundant kernel launches and memory reads. In torch.compile mode, by contrast, operators are first synthesized into a graph, in which one operator is merged with another to reduce and localize memory reads and total kernel launch overhead.
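
To make the contrast concrete, the following is a minimal sketch (a toy model, not one of the benchmarked networks) that runs the same module in eager mode and then through torch.compile with the default inductor backend:

import torch
import torch.nn as nn

# Toy model used only to illustrate the eager vs. compile workflow
model = nn.Sequential(nn.Linear(128, 256), nn.ReLU(), nn.Linear(256, 64)).eval()
x = torch.randn(32, 128)

with torch.no_grad():
    # Eager mode: each operator is dispatched as soon as it is reached
    eager_out = model(x)

    # Compile mode: the first call triggers graph capture and compilation;
    # subsequent calls reuse the compiled graph
    compiled_model = torch.compile(model, backend="inductor")
    compiled_out = compiled_model(x)
    compiled_out = compiled_model(x)

# The two modes should agree within numerical tolerance
print(torch.allclose(eager_out, compiled_out, atol=1e-5))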

The goal for the AWS Graviton team was to optimize torch.compile backend for Graviton3 processors. PyTorch eager mode was already optimized for Graviton3 processors with Arm Compute Library (ACL) kernels using oneDNN (also known as MKLDNN). So, the question was, how to reuse those kernels in torch.compile mode to get the best of graph compilation and the optimized kernel performance together?
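
One way to check whether the oneDNN (and, on Graviton, ACL-backed) kernels are actually being dispatched at runtime is oneDNN's verbose logging. The following is a minimal sketch, not part of the original post; it assumes that setting DNNL_VERBOSE before torch is imported is enough for the logging to take effect, which is how oneDNN normally reads the variable:

# oneDNN prints one "dnnl_verbose" line per primitive it executes,
# including which implementation was selected for the operation.
# Set the variable before importing torch so oneDNN picks it up when it loads.
import os
os.environ["DNNL_VERBOSE"] = "1"

import torch

layer = torch.nn.Linear(256, 256).eval()
with torch.no_grad():
    layer(torch.randn(8, 256))
# Inspect the dnnl_verbose lines on stdout to see which kernel handled the matmul.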

Results

The AWS Graviton team extended the torch inductor and oneDNN primitives that reused the ACL kernels and optimized compile mode performance on Graviton3 processors. Starting with PyTorch 2.3.1, the optimizations are available in the torch Python wheels and AWS Graviton DLC. Please see the Running an inference section that follows for the instructions on installation, runtime configuration, and how to run the tests.

To demonstrate the performance improvements, we used NLP, CV, and recommendation models from TorchBench and the most downloaded NLP models from Hugging Face across Question Answering, Text Classification, Token Classification, Translation, Zero-Shot Classification, Summarization, Feature Extraction, Text Generation, Text2Text Generation, Fill-Mask, and Sentence Similarity tasks to cover a wide variety of customer use cases.

We started by measuring TorchBench model inference latency, in milliseconds (msec), for eager mode, which is marked 1.0 with a red dotted line in the following graph. Then we measured the improvements from torch.compile for the same model inference; the normalized results are plotted in the graph. For the 45 models we benchmarked, there is a 1.35x latency improvement (geomean across the 45 models).

Image 1: PyTorch model inference performance improvement with torch.compile on AWS Graviton3-based c7g instance using TorchBench framework. The reference eager mode performance is marked as 1.0. (higher is better)

Similar to the preceding TorchBench inference performance graph, we started by measuring the Hugging Face NLP model inference latency, in msec, for eager mode, which is marked 1.0 with a red dotted line in the following graph. Then we measured the improvements from torch.compile for the same model inference; the normalized results are plotted in the graph. For the 33 models we benchmarked, there is around a 2x performance improvement (geomean across the 33 models).

Image 2: Hugging Face NLP model inference performance improvement with torch.compile on AWS Graviton3-based c7g instance using Hugging Face example scripts. The reference eager mode performance is marked as 1.0. (higher is better)

Running an inference

Starting with PyTorch 2.3.1, the optimizations are available in the torch Python wheel and in AWS Graviton PyTorch DLC. This section shows how to run inference in eager and torch.compile modes using torch Python wheels and benchmarking scripts from Hugging Face and TorchBench repos.

To successfully run the scripts and reproduce the speedup numbers mentioned in this post, you need an instance from the Graviton3 family (c7g/r7g/m7g/hpc7g) of hardware. For this post, we used the c7g.4xl (16 vcpu) instance. The instance, the AMI details, and the required torch library versions are mentioned in the following snippet.

Instance: c7g.4xl instance
Region: us-west-2
AMI: ami-05cc25bfa725a144a (Ubuntu 22.04/Jammy with 6.5.0-1017-aws kernel)

# Install Python
sudo apt-get update
sudo apt-get install -y python3 python3-pip

# Upgrade pip3 to the latest version
python3 -m pip install --upgrade pip

# Install PyTorch and extensions
python3 -m pip install torch==2.3.1 torchvision==0.18.1 torchaudio==2.3.1
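
As a quick sanity check before benchmarking (a hypothetical helper, not part of the original instructions), you can confirm the wheel version, the host architecture, and that the oneDNN backend is available:

# Confirm the torch wheel and the Graviton (aarch64) host
import platform
import torch

print("torch version :", torch.__version__)    # expect 2.3.1 or later
print("architecture  :", platform.machine())   # expect 'aarch64' on Graviton
print("oneDNN (mkldnn) available:", torch.backends.mkldnn.is_available())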

The generic runtime tunings implemented for eager mode inference are equally applicable to torch.compile mode, so we set the following environment variables to further improve torch.compile performance on AWS Graviton3 processors.

# Enable the fast math GEMM kernels, to accelerate fp32 inference with bfloat16 gemm
export DNNL_DEFAULT_FPMATH_MODE=BF16

# Enable Linux Transparent Huge Page (THP) allocations,
# to reduce the tensor memory allocation latency
export THP_MEM_ALLOC_ENABLE=1

# Set LRU Cache capacity to cache the primitives and avoid redundant
# memory allocations
export LRU_CACHE_CAPACITY=1024

TorchBench benchmarking scripts

TorchBench is a collection of open source benchmarks used to evaluate PyTorch performance. We benchmarked 45 models using the scripts from the TorchBench repo. The following code shows how to run the scripts for eager mode and for compile mode with the inductor backend.

# Set OMP_NUM_THREADS to number of vcpus, 16 for c7g.4xl instance
export OMP_NUM_THREADS=16

# Install the dependencies
sudo apt-get install -y libgl1-mesa-glx
sudo apt-get install -y libpangocairo-1.0-0
python3 -m pip install psutil numpy transformers pynvml numba onnx onnxruntime scikit-learn timm effdet gym doctr opencv-python h5py==3.10.0 python-doctr

# Clone pytorch benchmark repo
git clone https://github.com/pytorch/benchmark.git
cd benchmark

# PyTorch benchmark repo doesn't have any release tags. So,
# listing the commit we used for collecting the performance numbers
git checkout 9a5e4137299741e1b6fb7aa7f5a6a853e5dd2295

# Setup the models
python3 install.py

# Collect eager mode performance using the following command. The results will be
# stored at .userbenchmark/cpu/metric-<timestamp>.json.
python3 run_benchmark.py cpu --model BERT_pytorch,hf_Bert,hf_Bert_large,hf_GPT2,hf_Albert,hf_Bart,hf_BigBird,hf_DistilBert,hf_GPT2_large,dlrm,hf_T5,mnasnet1_0,mobilenet_v2,mobilenet_v3_large,squeezenet1_1,timm_efficientnet,shufflenet_v2_x1_0,timm_regnet,resnet50,soft_actor_critic,phlippe_densenet,resnet152,resnet18,resnext50_32x4d,densenet121,phlippe_resnet,doctr_det_predictor,timm_vovnet,alexnet,doctr_reco_predictor,vgg16,dcgan,yolov3,pytorch_stargan,hf_Longformer,timm_nfnet,timm_vision_transformer,timm_vision_transformer_large,nvidia_deeprecommender,demucs,tts_angular,hf_Reformer,pytorch_CycleGAN_and_pix2pix,functorch_dp_cifar10,pytorch_unet --test eval --metrics="latencies,cpu_peak_mem"

# Collect torch.compile mode performance with inductor backend
# and weights pre-packing enabled. The results will be stored at
# .userbenchmark/cpu/metric-<timestamp>.json
python3 run_benchmark.py cpu --model BERT_pytorch,hf_Bert,hf_Bert_large,hf_GPT2,hf_Albert,hf_Bart,hf_BigBird,hf_DistilBert,hf_GPT2_large,dlrm,hf_T5,mnasnet1_0,mobilenet_v2,mobilenet_v3_large,squeezenet1_1,timm_efficientnet,shufflenet_v2_x1_0,timm_regnet,resnet50,soft_actor_critic,phlippe_densenet,resnet152,resnet18,resnext50_32x4d,densenet121,phlippe_resnet,doctr_det_predictor,timm_vovnet,alexnet,doctr_reco_predictor,vgg16,dcgan,yolov3,pytorch_stargan,hf_Longformer,timm_nfnet,timm_vision_transformer,timm_vision_transformer_large,nvidia_deeprecommender,demucs,tts_angular,hf_Reformer,pytorch_CycleGAN_and_pix2pix,functorch_dp_cifar10,pytorch_unet --test eval --torchdynamo inductor --freeze_prepack_weights --metrics="latencies,cpu_peak_mem"

On successful completion of the inference runs, the script stores the results in JSON format. The following is the sample output:

{"name": "cpu""environ": {"pytorch_git_version": "d44533f9d073df13895333e70b66f81c513c1889"},"metrics": {"BERT_pytorch-eval_latency": 56.3769865,"BERT_pytorch-eval_cmem": 0.4169921875}}

Hugging Face benchmarking scripts

The Google T5 Small Text Translation model is one of the 33 Hugging Face models we benchmarked. We're using it as a sample model to demonstrate how to run inference in eager and compile modes. The additional configurations and APIs required to run it in compile mode are called out in comments in the script. Save the following script as google_t5_small_text_translation.py.

import argparse
from transformers import T5Tokenizer, T5Model
import torch
from torch.profiler import profile, record_function, ProfilerActivity

# Compile mode specific configuration: enable weight pre-packing and freezing
import torch._inductor.config as config
config.cpp.weight_prepack = True
config.freezing = True

def test_inference(mode, num_iter):
    tokenizer = T5Tokenizer.from_pretrained("t5-small")
    model = T5Model.from_pretrained("t5-small")

    input_ids = tokenizer("Studies have been shown that owning a dog is good for you", return_tensors="pt").input_ids  # Batch size 1
    decoder_input_ids = tokenizer("Studies show that", return_tensors="pt").input_ids  # Batch size 1

    # Compile mode specific API: wrap the model with torch.compile
    if (mode == 'compile'):
        model = torch.compile(model)

    with torch.no_grad():
        # Warm-up iterations (also trigger compilation in compile mode)
        for _ in range(50):
            outputs = model(input_ids=input_ids, decoder_input_ids=decoder_input_ids)

        # Profile the measured iterations
        with profile(activities=[ProfilerActivity.CPU]) as prof:
            with record_function("model_inference"):
                for _ in range(num_iter):
                    outputs = model(input_ids=input_ids, decoder_input_ids=decoder_input_ids)

    print(prof.key_averages().table(sort_by="self_cpu_time_total"))

def main() -> None:
    global m, args
    parser = argparse.ArgumentParser(__doc__)
    parser.add_argument(
        "-m",
        "--mode",
        choices=["eager", "compile"],
        default="eager",
        help="Which test to run.",
    )
    parser.add_argument(
        "-n",
        "--number",
        type=int,
        default=100,
        help="How many iterations to run.",
    )
    args = parser.parse_args()
    test_inference(args.mode, args.number)

if __name__ == "__main__":
    main()

Run the script with the following steps.

# Set OMP_NUM_THREADS to 4 because the scripts run inference
# in sequence and don't need a large number of vcpus
export OMP_NUM_THREADS=4

# Install the dependencies
python3 -m pip install transformers

# Run the inference script in eager mode.
# We use 1 iteration here just to show the torch profiler output;
# for the benchmarking, we used 1000 iterations.
python3 google_t5_small_text_translation.py -n 1 -m eager

# Run the inference script in torch.compile mode
python3 google_t5_small_text_translation.py -n 1 -m compile

On successful completion of the inference runs, the script prints the torch profiler output with the latency breakdown for the torch operators. The following is the sample output from torch profiler:

# Torch profiler output for the eager mode run on c7g.xl (4vcpu)
---------------    ------------  -----------  ------------  -----------  ------------  ------------
Name                 Self CPU %   Self CPU     CPU total %   CPU total   CPU time avg    # of Calls
---------------    ------------  -----------  ------------  -----------  ------------  ------------
aten::mm            40.71%         12.502ms       40.71%      12.502ms     130.229us            96
model_inference     26.44%          8.118ms      100.00%      30.708ms      30.708ms             1
aten::bmm            6.85%          2.102ms        9.47%       2.908ms      80.778us            36
aten::matmul         3.73%          1.146ms       57.26%      17.583ms     133.205us           132
aten::select         1.88%        576.000us        1.90%     583.000us       0.998us           584
aten::transpose      1.51%        464.000us        1.83%     563.000us       3.027us           186
---------------    ------------  -----------  ------------  -----------  ------------  ------------
Self CPU time total: 30.708ms

# Torch profiler output for the compile mode run for the same model on the same instance
-------------------------  ----------  -----------  ------------  ------------  ------------  ------------
Name                       Self CPU %    Self CPU    CPU total %    CPU total   CPU time avg   # of Calls
-------------------------  ----------  -----------  ------------  ------------  ------------  ------------
mkldnn::_linear_pointwise    37.98%       5.461ms        45.91%       6.602ms      68.771us            96
Torch-Compiled Region        29.56%       4.251ms        98.53%      14.168ms      14.168ms             1
aten::bmm                    14.90%       2.143ms        21.73%       3.124ms      86.778us            36
aten::select                  4.51%     648.000us         4.62%     665.000us       1.155us           576
aten::view                    3.29%     473.000us         3.29%     473.000us       1.642us           288
aten::empty                   2.53%     364.000us         2.53%     364.000us       3.165us           115
-------------------------  ----------  -----------  ------------  ------------  ------------  ------------
Self CPU time total: 14.379ms

What’s next

Next, we’re extending the torch inductor CPU backend support to compile Llama model, and adding support for fused GEMM kernels to enable torch inductor operator fusion optimization on AWS Graviton3 processors.

Conclusion

In this tutorial, we covered how we optimized torch.compile performance on AWS Graviton3-based EC2 instances, how to use the optimizations to improve PyTorch model inference performance, and demonstrated the resulting speedups. We hope that you will give it a try! If you need any support with ML software on Graviton, please open an issue on the AWS Graviton Technical Guide GitHub.


About the Author

Sunita Nadampalli is a Software Development Manager and AI/ML expert at AWS. She leads AWS Graviton software performance optimizations for AI/ML and HPC workloads. She is passionate about open source software development and delivering high-performance and sustainable software solutions for SoCs based on the Arm ISA.
