AWS Machine Learning Blog | July 8, 04:00
Qwen3 family of reasoning models now available in Amazon Bedrock Marketplace and Amazon SageMaker JumpStart

Qwen3 is the latest generation of large language models in the Qwen family and is now available in Amazon Bedrock Marketplace and Amazon SageMaker JumpStart. This post shows how to deploy Qwen3 models on AWS, including the 0.6B, 4B, 8B, and 32B parameter versions, with detailed steps for both Amazon Bedrock Marketplace and SageMaker JumpStart. Qwen3 brings significant improvements in reasoning, instruction following, agent capabilities, and multilingual support, helping users build and scale generative AI applications.

💡 Qwen3 models offer exceptional reasoning capabilities, excelling at mathematics, code generation, and commonsense logical reasoning, surpassing the earlier QwQ and Qwen2.5 instruct models.

✨ Qwen3 supports seamless switching between thinking and non-thinking modes within a single model, delivering optimal performance across a variety of scenarios.

🌍 Qwen3 supports more than 100 languages and dialects, with strong multilingual instruction-following and translation capabilities.

🚀 You can deploy Qwen3 models through either Amazon Bedrock Marketplace or SageMaker JumpStart: Bedrock Marketplace provides an intuitive interface, while SageMaker JumpStart supports deployment through the UI or programmatically with the SDK.

Today, we are excited to announce that Qwen3, the latest generation of large language models (LLMs) in the Qwen family, is available through Amazon Bedrock Marketplace and Amazon SageMaker JumpStart. With this launch, you can deploy the Qwen3 models—available in 0.6B, 4B, 8B, and 32B parameter sizes—to build, experiment, and responsibly scale your generative AI applications on AWS.

In this post, we demonstrate how to get started with Qwen3 on Amazon Bedrock Marketplace and SageMaker JumpStart. You can follow similar steps to deploy the distilled versions of the models as well.

Solution overview

Qwen3 is the latest generation of LLMs in the Qwen series, offering a comprehensive suite of dense and mixture-of-experts (MoE) models. Qwen3 delivers groundbreaking advancements in reasoning, instruction-following, agent capabilities, and multilingual support, with the following key features:

    Exceptional reasoning in math, code generation, and commonsense logical inference, surpassing the previous QwQ and Qwen2.5 instruct models.
    Seamless switching between thinking and non-thinking modes within a single model, so you can trade depth of reasoning against latency per request.
    Agent capabilities that let the model produce a reasoning trace and a tool call in the same completion.
    Support for more than 100 languages and dialects, with strong multilingual instruction-following and translation.

Prerequisites

To deploy Qwen3 models, make sure you have access to the recommended instance types based on the model size. You can find these instance recommendations on Amazon Bedrock Marketplace or the SageMaker JumpStart console. To verify you have the necessary resources, complete the following steps:

    Open the Service Quotas console.
    Under AWS Services, select Amazon SageMaker.
    Check that you have sufficient quota for the required instance type for endpoint deployment.
    Make sure at least one of these instance types is available in your target AWS Region.

If needed, request a quota increase and contact your AWS account team for support.
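If you prefer to check programmatically, the following is a minimal sketch using the AWS Service Quotas API through boto3. The quota name pattern shown ("ml.g5.12xlarge for endpoint usage") is an assumption; adjust it to the instance type and Region you plan to use.

import boto3

# Sketch: list SageMaker quotas in the target Region and print the endpoint-usage quota
# for the instance type you plan to deploy on (quota name pattern is assumed).
quotas = boto3.client("service-quotas", region_name="us-west-2")

paginator = quotas.get_paginator("list_service_quotas")
for page in paginator.paginate(ServiceCode="sagemaker"):
    for quota in page["Quotas"]:
        if "ml.g5.12xlarge for endpoint usage" in quota["QuotaName"]:
            print(f"{quota['QuotaName']}: {quota['Value']}")

If the printed value is 0, request a quota increase before attempting deployment.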

Deploy Qwen3 in Amazon Bedrock Marketplace

Amazon Bedrock Marketplace gives you access to over 100 popular, emerging, and specialized foundation models (FMs) through Amazon Bedrock. To access Qwen3 in Amazon Bedrock, complete the following steps:

    On the Amazon Bedrock console, in the navigation pane under Foundation models, choose Model catalog.
    Filter for Hugging Face as a provider and choose a Qwen3 model. For this example, we use the Qwen3-32B model.

The model detail page provides essential information about the model’s capabilities, pricing structure, and implementation guidelines. You can find detailed usage instructions, including sample API calls and code snippets for integration.

The page also includes deployment options and licensing information to help you get started with Qwen3-32B in your applications.

    To begin using Qwen3-32B, choose Deploy.

You will be prompted to configure the deployment details for Qwen3-32B. The model ID will be pre-populated.

    For Endpoint name, enter an endpoint name (between 1–50 alphanumeric characters).
    For Number of instances, enter a number of instances (between 1–100).
    For Instance type, choose your instance type. For optimal performance with Qwen3-32B, a GPU-based instance type like ml.g5.12xlarge is recommended.
    To deploy the model, choose Deploy.

When the deployment is complete, you can test Qwen3-32B’s capabilities directly in the Amazon Bedrock playground.

    Choose Open in playground to access an interactive interface where you can experiment with different prompts and adjust model parameters like temperature and maximum length.

This is an excellent way to explore the model’s reasoning and text generation abilities before integrating it into your applications. The playground provides immediate feedback, helping you understand how the model responds to various inputs and letting you fine-tune your prompts for optimal results.

You can quickly test the model in the playground through the UI. However, to invoke the deployed model programmatically with any Amazon Bedrock APIs, you must have the endpoint Amazon Resource Name (ARN).
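The endpoint ARN is shown on the deployment details page in the console. As a sketch, you can also look it up with the Amazon Bedrock Marketplace endpoints API; the operation and response field names below are assumptions about the current boto3 bedrock client, so verify them against your SDK version.

import boto3

# Sketch: list Bedrock Marketplace model endpoints and print each endpoint's ARN and status
# (operation and key names assumed; check your boto3 version).
bedrock = boto3.client("bedrock", region_name="us-west-2")

endpoints = bedrock.list_marketplace_model_endpoints()
for ep in endpoints.get("marketplaceModelEndpoints", []):
    print(ep.get("endpointArn"), ep.get("status"))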

Enable reasoning and non-reasoning responses with Converse API

The following code shows how to turn reasoning on and off with Qwen3 models using the Converse API, depending on your use case. By default, reasoning is left on for Qwen3 models, but you can streamline interactions by using the /no_think command within your prompt. When you add this to the end of your query, reasoning is turned off and the models will provide just the direct answer. This is particularly useful when you need quick information without explanations, are familiar with the topic, or want to maintain a faster conversational flow. At the time of writing, the Converse API doesn’t support tool use for Qwen3 models. Refer to the Invoke_Model API example later in this post to learn how to use reasoning and tools in the same completion.

import boto3
from botocore.exceptions import ClientError

# Create a Bedrock Runtime client in the AWS Region you want to use.
client = boto3.client("bedrock-runtime", region_name="us-west-2")

# Configuration
model_id = ""  # Replace with the Bedrock Marketplace endpoint ARN

# Start a conversation with the user message.
user_message = "hello, what is 1+1 /no_think"  # remove /no_think to leave default reasoning on
conversation = [
    {
        "role": "user",
        "content": [{"text": user_message}],
    }
]

try:
    # Send the message to the model, using a basic inference configuration.
    response = client.converse(
        modelId=model_id,
        messages=conversation,
        inferenceConfig={"maxTokens": 512, "temperature": 0.5, "topP": 0.9},
    )

    # Extract and print the response text.
    # response_text = response["output"]["message"]["content"][0]["text"]
    # reasoning_content = response["output"]["message"]["reasoning_content"][0]["text"]
    # print(response_text, reasoning_content)
    print(response)

except (ClientError, Exception) as e:
    print(f"ERROR: Can't invoke '{model_id}'. Reason: {e}")
    exit(1)

The following is a response from the Converse API with default thinking turned off (using /no_think):

{'ResponseMetadata': {'RequestId': 'f7f3953a-5747-4866-9075-fd4bd1cf49c4', 'HTTPStatusCode': 200, 'HTTPHeaders': {'date': 'Tue, 17 Jun 2025 18:34:47 GMT', 'content-type': 'application/json', 'content-length': '282', 'connection': 'keep-alive', 'x-amzn-requestid': 'f7f3953a-5747-4866-9075-fd4bd1cf49c4'}, 'RetryAttempts': 0}, 'output': {'message': {'role': 'assistant', 'content': [{'text': '\n\nHello! The result of 1 + 1 is **2**. 😊'}, {'reasoningContent': {'reasoningText': {'text': '\n\n'}}}]}}, 'stopReason': 'end_turn', 'usage': {'inputTokens': 20, 'outputTokens': 22, 'totalTokens': 42}, 'metrics': {'latencyMs': 1125}}

The following is an example with default thinking on; the <think> tokens are automatically parsed into the reasoningContent field for the Converse API:

{'ResponseMetadata': {'RequestId': 'b6d2ebbe-89da-4edc-9a3a-7cb3e7ecf066', 'HTTPStatusCode': 200, 'HTTPHeaders': {'date': 'Tue, 17 Jun 2025 18:32:28 GMT', 'content-type': 'application/json', 'content-length': '1019', 'connection': 'keep-alive', 'x-amzn-requestid': 'b6d2ebbe-89da-4edc-9a3a-7cb3e7ecf066'}, 'RetryAttempts': 0}, 'output': {'message': {'role': 'assistant', 'content': [{'text': '\n\nHello! The sum of 1 + 1 is **2**. Let me know if you have any other questions or need further clarification! 😊'}, {'reasoningContent': {'reasoningText': {'text': '\nOkay, the user asked "hello, what is 1+1". Let me start by acknowledging their greeting. They might just be testing the water or actually need help with a basic math problem. Since it\'s 1+1, it\'s a very simple question, but I should make sure to answer clearly. Maybe they\'re a child learning math for the first time, or someone who\'s not confident in their math skills. I should provide the answer in a friendly and encouraging way. Let me confirm that 1+1 equals 2, and maybe add a brief explanation to reinforce their understanding. I can also offer further assistance in case they have more questions. Keeping it conversational and approachable is key here.\n'}}}]}}, 'stopReason': 'end_turn', 'usage': {'inputTokens': 16, 'outputTokens': 182, 'totalTokens': 198}, 'metrics': {'latencyMs': 7805}}
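Rather than printing the raw response, you will typically want to separate the answer text from the reasoning trace. The following minimal sketch walks the content blocks of the Converse response shown above and splits them accordingly:

# Minimal sketch: split a Converse response into answer text and reasoning trace,
# based on the response shape shown in the examples above.
def split_converse_response(response):
    answer_parts, reasoning_parts = [], []
    for block in response["output"]["message"]["content"]:
        if "text" in block:
            answer_parts.append(block["text"])
        elif "reasoningContent" in block:
            reasoning_parts.append(block["reasoningContent"]["reasoningText"]["text"])
    return "".join(answer_parts).strip(), "".join(reasoning_parts).strip()

answer, reasoning = split_converse_response(response)
print("Answer:", answer)
print("Reasoning:", reasoning or "(empty - /no_think or no trace)")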

Perform reasoning and function calls in the same completion using the Invoke_Model API

With Qwen3, you can stream an explicit reasoning trace and the exact JSON tool call in the same completion. Until now, reasoning models have forced you to choose between exposing the chain of thought and calling tools deterministically. The following code shows an example:

import json

# Reuse the bedrock-runtime client and Bedrock Marketplace endpoint ARN (model_id) from the previous example.
body = json.dumps({
    "messages": [
        {
            "role": "user",
            "content": "Hi! How are you doing today?"
        },
        {
            "role": "assistant",
            "content": "I'm doing well! How can I help you?"
        },
        {
            "role": "user",
            "content": "Can you tell me what the temperature will be in Dallas, in Fahrenheit?"
        }
    ],
    "tools": [{
        "type": "function",
        "function": {
            "name": "get_current_weather",
            "description": "Get the current weather in a given location",
            "parameters": {
                "type": "object",
                "properties": {
                    "city": {
                        "type": "string",
                        "description": "The city to find the weather for, e.g. 'San Francisco'"
                    },
                    "state": {
                        "type": "string",
                        "description": "the two-letter abbreviation for the state that the city is in, e.g. 'CA' which would mean 'California'"
                    },
                    "unit": {
                        "type": "string",
                        "description": "The unit to fetch the temperature in",
                        "enum": ["celsius", "fahrenheit"]
                    }
                },
                "required": ["city", "state", "unit"]
            }
        }
    }],
    "tool_choice": "auto"
})

response = client.invoke_model(
    modelId=model_id,
    body=body
)
print(response)

model_output = json.loads(response['body'].read())
print(json.dumps(model_output, indent=2))

Response:

{'ResponseMetadata': {'RequestId': '5da8365d-f4bf-411d-a783-d85eb3966542', 'HTTPStatusCode': 200, 'HTTPHeaders': {'date': 'Tue, 17 Jun 2025 18:57:38 GMT', 'content-type': 'application/json', 'content-length': '1148', 'connection': 'keep-alive', 'x-amzn-requestid': '5da8365d-f4bf-411d-a783-d85eb3966542', 'x-amzn-bedrock-invocation-latency': '6396', 'x-amzn-bedrock-output-token-count': '148', 'x-amzn-bedrock-input-token-count': '198'}, 'RetryAttempts': 0}, 'contentType': 'application/json', 'body': <botocore.response.StreamingBody object at 0x7f7d4a598dc0>}
{
  "id": "chatcmpl-bc60b482436542978d233b13dc347634",
  "object": "chat.completion",
  "created": 1750186651,
  "model": "lmi",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "reasoning_content": "\nOkay, the user is asking about the weather in San Francisco. Let me check the tools available. There's a get_weather function that requires location and unit. The user didn't specify the unit, so I should ask them if they want Celsius or Fahrenheit. Alternatively, maybe I can assume a default, but since the function requires it, I need to include it. I'll have to prompt the user for the unit they prefer.\n",
        "content": "\n\nThe user hasn't specified whether they want the temperature in Celsius or Fahrenheit. I need to ask them to clarify which unit they prefer.\n\n",
        "tool_calls": [
          {
            "id": "chatcmpl-tool-fb2f93f691ed4d8ba94cadc52b57414e",
            "type": "function",
            "function": {
              "name": "get_weather",
              "arguments": "{\"location\": \"San Francisco, CA\", \"unit\": \"celsius\"}"
            }
          }
        ]
      },
      "logprobs": null,
      "finish_reason": "tool_calls",
      "stop_reason": null
    }
  ],
  "usage": {
    "prompt_tokens": 198,
    "total_tokens": 346,
    "completion_tokens": 148,
    "prompt_tokens_details": null
  },
  "prompt_logprobs": null
}
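When finish_reason is tool_calls, your application runs the requested function and sends the result back in a follow-up request. The following sketch shows the general pattern against the response above; get_current_weather here is a hypothetical local stub, not a real weather API, and the argument-key handling accounts for the model returning keys that differ slightly from the tool schema.

import json

def get_current_weather(city, state, unit):
    # Hypothetical stub - replace with a real weather lookup.
    return {"city": city, "state": state, "unit": unit, "temperature": 72}

choice = model_output["choices"][0]
if choice["finish_reason"] == "tool_calls":
    tool_call = choice["message"]["tool_calls"][0]
    args = json.loads(tool_call["function"]["arguments"])
    # The model may return slightly different argument keys than the schema; handle both.
    result = get_current_weather(
        args.get("city", args.get("location", "")),
        args.get("state", ""),
        args.get("unit", "fahrenheit"),
    )
    # Append this result as a new message and call invoke_model again with the updated history.
    print("Tool result to send back:", json.dumps(result))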

Deploy Qwen3-32B with SageMaker JumpStart

SageMaker JumpStart is a machine learning (ML) hub with FMs, built-in algorithms, and prebuilt ML solutions that you can deploy with just a few clicks. With SageMaker JumpStart, you can customize pre-trained models to your use case, with your data, and deploy them into production using either the UI or SDK.

Deploying the Qwen3-32B model through SageMaker JumpStart offers two convenient approaches: using the intuitive SageMaker JumpStart UI or implementing programmatically through the SageMaker Python SDK. Let’s explore both methods to help you choose the approach that best suits your needs.

Deploy Qwen3-32B through SageMaker JumpStart UI

Complete the following steps to deploy Qwen3-32B using SageMaker JumpStart:

    On the SageMaker console, choose Studio in the navigation pane. First-time users will be prompted to create a domain.
    On the SageMaker Studio console, choose JumpStart in the navigation pane.

The model browser displays available models, with details like the provider name and model capabilities.

    Search for Qwen3 to view the Qwen3-32B model card.

Each model card shows key information, including the model name, provider name, and model capabilities.

    Choose the model card to view the model details page.

The model details page includes the model name, provider information, and a Deploy button, along with an About tab.

The About tab includes important details such as the model description, license information, and technical specifications.

Before you deploy the model, it’s recommended to review the model details and license terms to confirm compatibility with your use case.

    Choose Deploy to proceed with deployment.
    For Endpoint name, use the automatically generated name or create a custom one.
    For Instance type, choose an instance type (default: ml.g6.12xlarge).
    For Initial instance count, enter the number of instances (default: 1).

Selecting appropriate instance types and counts is crucial for cost and performance optimization. Monitor your deployment to adjust these settings as needed. Under Inference type, Real-time inference is selected by default. This is optimized for sustained traffic and low latency.

    Review all configurations for accuracy. For this model, we strongly recommend adhering to SageMaker JumpStart default settings and making sure that network isolation remains in place.
    Choose Deploy to deploy the model.

The deployment process can take several minutes to complete.

When deployment is complete, your endpoint status will change to InService. At this point, the model is ready to accept inference requests through the endpoint. You can monitor the deployment progress on the SageMaker console Endpoints page, which will display relevant metrics and status information. When the deployment is complete, you can invoke the model using a SageMaker runtime client and integrate it with your applications.
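For example, a minimal sketch with the boto3 SageMaker Runtime client looks like the following; replace the endpoint name with the one shown on the Endpoints page, and note that the payload format assumes the JumpStart text generation schema used later in this post.

import boto3
import json

# Sketch: invoke the deployed Qwen3-32B endpoint with the SageMaker Runtime client.
runtime = boto3.client("sagemaker-runtime", region_name="us-west-2")

payload = {
    "inputs": "Briefly explain what Amazon SageMaker JumpStart is.",
    "parameters": {"max_new_tokens": 128, "temperature": 0.6, "top_p": 0.9},
}

response = runtime.invoke_endpoint(
    EndpointName="your-qwen3-32b-endpoint-name",  # replace with your endpoint name
    ContentType="application/json",
    Body=json.dumps(payload),
)
print(json.loads(response["Body"].read()))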

Deploy Qwen3-32B using the SageMaker Python SDK

To get started with Qwen3-32B using the SageMaker Python SDK, you must install the SageMaker Python SDK and make sure you have the necessary AWS permissions and environment set up. The following is a step-by-step code example that demonstrates how to deploy and use Qwen3-32B for inference programmatically:

!pip install --force-reinstall --no-cache-dir sagemaker==2.235.2

from sagemaker.serve.builder.model_builder import ModelBuilder
from sagemaker.serve.builder.schema_builder import SchemaBuilder
from sagemaker.jumpstart.model import ModelAccessConfig
from sagemaker.session import Session
import logging

sagemaker_session = Session()
artifacts_bucket_name = sagemaker_session.default_bucket()
execution_role_arn = sagemaker_session.get_caller_identity_arn()

# Changed to Qwen3-32B model
js_model_id = "huggingface-reasoning-qwen3-32b"
gpu_instance_type = "ml.g5.12xlarge"

response = "Hello, I'm a language model, and I'm here to help you with your English."

sample_input = {
    "inputs": "Hello, I'm a language model,",
    "parameters": {
        "max_new_tokens": 128,
        "top_p": 0.9,
        "temperature": 0.6
    }
}
sample_output = [{"generated_text": response}]

schema_builder = SchemaBuilder(sample_input, sample_output)

model_builder = ModelBuilder(
    model=js_model_id,
    schema_builder=schema_builder,
    sagemaker_session=sagemaker_session,
    role_arn=execution_role_arn,
    log_level=logging.ERROR
)

model = model_builder.build()
predictor = model.deploy(
    model_access_configs={js_model_id: ModelAccessConfig(accept_eula=True)},
    accept_eula=True
)

predictor.predict(sample_input)

You can run additional requests against the predictor:

new_input = {
    "inputs": "What is Amazon doing in Generative AI?",
    "parameters": {"max_new_tokens": 64, "top_p": 0.8, "temperature": 0.7},
}
prediction = predictor.predict(new_input)
print(prediction)

The following are some error handling and best practices to enhance deployment code:

# Enhanced deployment code with error handling
import backoff
import botocore
import logging

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)

@backoff.on_exception(backoff.expo,
                      (botocore.exceptions.ClientError,),
                      max_tries=3)
def deploy_model_with_retries(model_builder, model_id):
    try:
        model = model_builder.build()
        predictor = model.deploy(
            model_access_configs={model_id: ModelAccessConfig(accept_eula=True)},
            accept_eula=True
        )
        return predictor
    except Exception as e:
        logger.error(f"Deployment failed: {str(e)}")
        raise

def safe_predict(predictor, input_data):
    try:
        return predictor.predict(input_data)
    except Exception as e:
        logger.error(f"Prediction failed: {str(e)}")
        return None

Clean up

To avoid unwanted charges, complete the steps in this section to clean up your resources.

Delete the Amazon Bedrock Marketplace deployment

If you deployed the model using Amazon Bedrock Marketplace, complete the following steps:

    On the Amazon Bedrock console, under Foundation models in the navigation pane, choose Marketplace deployments.
    In the Managed deployments section, locate the endpoint you want to delete.
    Select the endpoint, and on the Actions menu, choose Delete.
    Verify the endpoint details to make sure you’re deleting the correct deployment:
        Endpoint name
        Model name
        Endpoint status
    Choose Delete to delete the endpoint.
    In the deletion confirmation dialog, review the warning message, enter confirm, and choose Delete to permanently remove the endpoint.
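You can also clean up programmatically with the Amazon Bedrock Marketplace endpoints API. The following is only a sketch: the delete_marketplace_model_endpoint operation, its endpointArn parameter, and the placeholder ARN are assumptions to verify against your boto3 version and your actual deployment.

import boto3

bedrock = boto3.client("bedrock", region_name="us-west-2")

# Sketch: delete a Bedrock Marketplace endpoint by its ARN
# (operation and parameter names assumed; replace the ARN with your endpoint's ARN).
bedrock.delete_marketplace_model_endpoint(
    endpointArn="arn:aws:sagemaker:us-west-2:111122223333:endpoint/your-endpoint-name"
)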

Delete the SageMaker JumpStart predictor

The SageMaker JumpStart model you deployed will incur costs if you leave it running. Use the following code to delete the endpoint if you want to stop incurring charges. For more details, see Delete Endpoints and Resources.

predictor.delete_model()
predictor.delete_endpoint()

Conclusion

In this post, we explored how you can access and deploy the Qwen3 models using Amazon Bedrock Marketplace and SageMaker JumpStart. With support for both the full-parameter models and their distilled versions, you can choose the optimal model size for your specific use case. Visit SageMaker JumpStart in Amazon SageMaker Studio or Amazon Bedrock Marketplace to get started. For more information, refer to Use Amazon Bedrock tooling with Amazon SageMaker JumpStart models, SageMaker JumpStart pretrained models, Amazon SageMaker JumpStart Foundation Models, Amazon Bedrock Marketplace, and Getting started with Amazon SageMaker JumpStart.

The Qwen3 family of LLMs offers exceptional versatility and performance, making it a valuable addition to the AWS foundation model offerings. Whether you’re building applications for content generation, analysis, or complex reasoning tasks, Qwen3’s advanced architecture and extensive context window make it a powerful choice for your generative AI needs.


About the authors

Niithiyn Vijeaswaran is a Generative AI Specialist Solutions Architect with the Third-Party Model Science team at AWS. His area of focus is AWS AI accelerators (AWS Neuron). He holds a Bachelor’s degree in Computer Science and Bioinformatics.

Avan Bala is a Solutions Architect at AWS. His area of focus is AI for DevOps and machine learning. He holds a bachelor’s degree in Computer Science with a minor in Mathematics and Statistics from the University of Maryland. Avan is currently working with the Enterprise Engaged East Team and likes to specialize in projects about emerging AI technologies.

Mohhid Kidwai is a Solutions Architect at AWS. His area of focus is generative AI and machine learning solutions for small-medium businesses. He holds a bachelor’s degree in Computer Science with a minor in Biological Science from North Carolina State University. Mohhid is currently working with the SMB Engaged East Team at AWS.

Yousuf Athar is a Solutions Architect at AWS specializing in generative AI and AI/ML. With a Bachelor’s degree in Information Technology and a concentration in Cloud Computing, he helps customers integrate advanced generative AI capabilities into their systems, driving innovation and competitive edge. Outside of work, Yousuf loves to travel, watch sports, and play football.

John Liu has 15 years of experience as a product executive and 9 years of experience as a portfolio manager. At AWS, John is a Principal Product Manager for Amazon Bedrock. Previously, he was the Head of Product for AWS Web3 / Blockchain. Prior to AWS, John held various product leadership roles at public blockchain protocols, fintech companies and also spent 9 years as a portfolio manager at various hedge funds.

Rohit Talluri is a Generative AI GTM Specialist at Amazon Web Services (AWS). He is partnering with top generative AI model builders, strategic customers, key AI/ML partners, and AWS Service Teams to enable the next generation of artificial intelligence, machine learning, and accelerated computing on AWS. He was previously an Enterprise Solutions Architect and the Global Solutions Lead for AWS Mergers & Acquisitions Advisory.

Varun Morishetty is a Software Engineer with Amazon SageMaker JumpStart and Bedrock Marketplace. Varun received his Bachelor’s degree in Computer Science from Northeastern University. In his free time, he enjoys cooking, baking and exploring New York City.
