AWS Machine Learning Blog, March 20
Amazon Bedrock Guardrails announces IAM Policy-based enforcement to deliver safe AI interactions

As generative AI adoption accelerates in enterprises, Amazon Bedrock Guardrails provides configurable safeguards that help organizations build safe generative AI applications. It supports multiple policy types and now adds IAM policy-based enforcement to ensure organizational safety policies are applied consistently across AI interactions.

🎯 Amazon Bedrock Guardrails provides customizable safeguards for generative AI applications, tailored to specific use cases and responsible AI policies

🛡️ Supports six policy types, including content filters, denied topics, and sensitive information filters

🔒 New IAM policy-based enforcement ensures safety policies are applied consistently across AI interactions

📋 Includes several policy examples, such as enforcing the use of a specific guardrail and version

As generative AI adoption accelerates across enterprises, maintaining safe, responsible, and compliant AI interactions has never been more critical. Amazon Bedrock Guardrails provides configurable safeguards that help organizations build generative AI applications with industry-leading safety protections. With Amazon Bedrock Guardrails, you can implement safeguards in your generative AI applications that are customized to your use cases and responsible AI policies. You can create multiple guardrails tailored to different use cases and apply them across multiple foundation models (FMs), improving user experiences and standardizing safety controls across generative AI applications. Beyond Amazon Bedrock models, the service offers the flexible ApplyGuardrails API that enables you to assess text using your pre-configured guardrails without invoking FMs, allowing you to implement safety controls across generative AI applications—whether running on Amazon Bedrock or on other systems—at both input and output levels.
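To make the standalone-assessment flow concrete, here is a minimal Python sketch of how an ApplyGuardrails request could be assembled. The guardrail ID and version are hypothetical placeholders, and the boto3 call itself is shown only in comments since it requires AWS credentials:

```python
def build_apply_guardrail_request(guardrail_id: str, version: str, text: str) -> dict:
    """Assemble parameters for an ApplyGuardrail call.

    'source' is "INPUT" to assess user text before it reaches a model,
    or "OUTPUT" to assess a model's response.
    """
    return {
        "guardrailIdentifier": guardrail_id,
        "guardrailVersion": version,
        "source": "INPUT",
        "content": [{"text": {"text": text}}],
    }

# With boto3 (assumed, not imported here), the call would look like:
#   client = boto3.client("bedrock-runtime")
#   response = client.apply_guardrail(**build_apply_guardrail_request(...))
# where response["action"] indicates whether the guardrail intervened.

request = build_apply_guardrail_request("exampleguardrail", "1", "Hello")
print(request["source"])
```

Because no foundation model is invoked, the same request shape works for content produced anywhere, on Amazon Bedrock or elsewhere.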

Today, we’re announcing a significant enhancement to Amazon Bedrock Guardrails: AWS Identity and Access Management (IAM) policy-based enforcement. This powerful capability enables security and compliance teams to establish mandatory guardrails for every model inference call, making sure organizational safety policies are consistently enforced across AI interactions. This feature enhances AI governance by enabling centralized control over guardrail implementation.

Challenges with building generative AI applications

Organizations deploying generative AI face critical governance challenges: content appropriateness, where models might produce undesirable responses to problematic prompts; safety concerns, with potential generation of harmful content even from innocent prompts; privacy protection requirements for handling sensitive information; and consistent policy enforcement across AI deployments.

Perhaps most challenging is making sure that appropriate safeguards are applied consistently across AI interactions within an organization, regardless of which team or individual is developing or deploying applications.

Amazon Bedrock Guardrails capabilities

Amazon Bedrock Guardrails enables you to implement safeguards in generative AI applications customized to your specific use cases and responsible AI policies. Guardrails currently supports six types of policies, including content filters, denied topics, and sensitive information filters.

Policy-based enforcement of guardrails

Security teams often have organizational requirements to enforce the use of Amazon Bedrock Guardrails for every inference call to Amazon Bedrock. To support this requirement, Amazon Bedrock Guardrails provides the new IAM condition key bedrock:GuardrailIdentifier, which can be used in IAM policies to enforce the use of a specific guardrail for model inference. The condition key can be applied to model inference APIs such as InvokeModel and InvokeModelWithResponseStream, as shown in the policy examples that follow.

The following diagram illustrates the policy-based enforcement workflow.

If the guardrail configured in your IAM policy doesn’t match the guardrail specified in the request, the request will be rejected with an access denied exception, enforcing compliance with organizational policies.
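When enforcement is in place, application code should pass the required guardrail on every call and be prepared for the access-denied case. The following Python sketch builds the InvokeModel parameters that the condition key inspects; the model ID and guardrail ARN are placeholder assumptions, and the boto3 call is shown only in comments:

```python
from typing import Any, Dict

def invoke_model_params(model_id: str, guardrail_arn: str,
                        version: str, body: str) -> Dict[str, Any]:
    # Parameters for an InvokeModel request. The IAM condition key
    # bedrock:GuardrailIdentifier is evaluated against the guardrail passed
    # here, so omitting it (or passing a different guardrail) means the
    # request is rejected under the enforcement policy.
    return {
        "modelId": model_id,
        "guardrailIdentifier": guardrail_arn,
        "guardrailVersion": version,
        "body": body,
    }

# With boto3 (assumed, not imported here), the enforced call would look like:
#   client = boto3.client("bedrock-runtime")
#   try:
#       client.invoke_model(**invoke_model_params(...))
#   except client.exceptions.AccessDeniedException:
#       ...  # guardrail in the request did not match the one required by IAM

params = invoke_model_params(
    "anthropic.claude-3-sonnet-20240229-v1:0",  # hypothetical model ID
    "arn:aws:bedrock:us-east-1:111122223333:guardrail/exampleguardrail",
    "1",
    '{"messages": []}',
)
print(sorted(params))
```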

Policy examples

In this section, we present several policy examples demonstrating how to enforce guardrails for model inference.

Example 1: Enforce the use of a specific guardrail and its numeric version

The following example illustrates the enforcement of exampleguardrail and its numeric version 1 during model inference:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "InvokeFoundationModelStatement1",
            "Effect": "Allow",
            "Action": [
                "bedrock:InvokeModel",
                "bedrock:InvokeModelWithResponseStream"
            ],
            "Resource": [
                "arn:aws:bedrock:region::foundation-model/*"
            ],
            "Condition": {
                "StringEquals": {
                    "bedrock:GuardrailIdentifier": "arn:aws:bedrock:<region>:<account-id>:guardrail/exampleguardrail:1"
                }
            }
        },
        {
            "Sid": "InvokeFoundationModelStatement2",
            "Effect": "Deny",
            "Action": [
                "bedrock:InvokeModel",
                "bedrock:InvokeModelWithResponseStream"
            ],
            "Resource": [
                "arn:aws:bedrock:region::foundation-model/*"
            ],
            "Condition": {
                "StringNotEquals": {
                    "bedrock:GuardrailIdentifier": "arn:aws:bedrock:<region>:<account-id>:guardrail/exampleguardrail:1"
                }
            }
        },
        {
            "Sid": "ApplyGuardrail",
            "Effect": "Allow",
            "Action": [
                "bedrock:ApplyGuardrail"
            ],
            "Resource": [
                "arn:aws:bedrock:<region>:<account-id>:guardrail/exampleguardrail"
            ]
        }
    ]
}

The explicit deny rejects any request that calls the listed actions with a different GuardrailIdentifier or GuardrailVersion value, irrespective of other permissions the user might have.
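To see why the Allow/Deny pair is airtight, the following illustrative Python sketch (not an AWS API, just a simulation of the condition evaluation) models how a request's guardrail value is checked; the account ID and region in the ARN are placeholders:

```python
from typing import Optional

# Assumed ARN of the enforced guardrail (matches the StringEquals value above).
REQUIRED = "arn:aws:bedrock:us-east-1:111122223333:guardrail/exampleguardrail:1"

def is_allowed(request_guardrail: Optional[str]) -> bool:
    # The Allow statement (StringEquals) matches only an exact guardrail-and-
    # version match. The Deny statement (StringNotEquals) matches every other
    # value -- and, because negated operators also match when the condition key
    # is absent, a request with no guardrail at all is denied too. An explicit
    # Deny always overrides an Allow.
    return request_guardrail == REQUIRED

print(is_allowed(REQUIRED))                    # correct guardrail and version
print(is_allowed(REQUIRED.rsplit(":", 1)[0]))  # draft version: denied
print(is_allowed(None))                        # no guardrail supplied: denied
```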

Example 2: Enforce the use of a specific guardrail and its draft version

The following example illustrates the enforcement of exampleguardrail and its draft version during model inference:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "InvokeFoundationModelStatement1",
            "Effect": "Allow",
            "Action": [
                "bedrock:InvokeModel",
                "bedrock:InvokeModelWithResponseStream"
            ],
            "Resource": [
                "arn:aws:bedrock:region::foundation-model/*"
            ],
            "Condition": {
                "StringEquals": {
                    "bedrock:GuardrailIdentifier": "arn:aws:bedrock:<region>:<account-id>:guardrail/exampleguardrail"
                }
            }
        },
        {
            "Sid": "InvokeFoundationModelStatement2",
            "Effect": "Deny",
            "Action": [
                "bedrock:InvokeModel",
                "bedrock:InvokeModelWithResponseStream"
            ],
            "Resource": [
                "arn:aws:bedrock:region::foundation-model/*"
            ],
            "Condition": {
                "StringNotEquals": {
                    "bedrock:GuardrailIdentifier": "arn:aws:bedrock:<region>:<account-id>:guardrail/exampleguardrail"
                }
            }
        },
        {
            "Sid": "ApplyGuardrail",
            "Effect": "Allow",
            "Action": [
                "bedrock:ApplyGuardrail"
            ],
            "Resource": [
                "arn:aws:bedrock:<region>:<account-id>:guardrail/exampleguardrail"
            ]
        }
    ]
}

Example 3: Enforce the use of a specific guardrail and its numeric versions

The following example illustrates the enforcement of exampleguardrail and its numeric versions during model inference:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "InvokeFoundationModelStatement1",
            "Effect": "Allow",
            "Action": [
                "bedrock:InvokeModel",
                "bedrock:InvokeModelWithResponseStream"
            ],
            "Resource": [
                "arn:aws:bedrock:region::foundation-model/*"
            ],
            "Condition": {
                "StringLike": {
                    "bedrock:GuardrailIdentifier": "arn:aws:bedrock:<region>:<account-id>:guardrail/exampleguardrail:*"
                }
            }
        },
        {
            "Sid": "InvokeFoundationModelStatement2",
            "Effect": "Deny",
            "Action": [
                "bedrock:InvokeModel",
                "bedrock:InvokeModelWithResponseStream"
            ],
            "Resource": [
                "arn:aws:bedrock:region::foundation-model/*"
            ],
            "Condition": {
                "StringNotLike": {
                    "bedrock:GuardrailIdentifier": "arn:aws:bedrock:<region>:<account-id>:guardrail/exampleguardrail:*"
                }
            }
        },
        {
            "Sid": "ApplyGuardrail",
            "Effect": "Allow",
            "Action": [
                "bedrock:ApplyGuardrail"
            ],
            "Resource": [
                "arn:aws:bedrock:<region>:<account-id>:guardrail/exampleguardrail"
            ]
        }
    ]
}

Example 4: Enforce the use of a specific guardrail and its versions, including the draft

The following example illustrates the enforcement of exampleguardrail and its versions, including the draft, during model inference:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "InvokeFoundationModelStatement1",
            "Effect": "Allow",
            "Action": [
                "bedrock:InvokeModel",
                "bedrock:InvokeModelWithResponseStream"
            ],
            "Resource": [
                "arn:aws:bedrock:region::foundation-model/*"
            ],
            "Condition": {
                "StringLike": {
                    "bedrock:GuardrailIdentifier": "arn:aws:bedrock:<region>:<account-id>:guardrail/exampleguardrail*"
                }
            }
        },
        {
            "Sid": "InvokeFoundationModelStatement2",
            "Effect": "Deny",
            "Action": [
                "bedrock:InvokeModel",
                "bedrock:InvokeModelWithResponseStream"
            ],
            "Resource": [
                "arn:aws:bedrock:region::foundation-model/*"
            ],
            "Condition": {
                "StringNotLike": {
                    "bedrock:GuardrailIdentifier": "arn:aws:bedrock:<region>:<account-id>:guardrail/exampleguardrail*"
                }
            }
        },
        {
            "Sid": "ApplyGuardrail",
            "Effect": "Allow",
            "Action": [
                "bedrock:ApplyGuardrail"
            ],
            "Resource": [
                "arn:aws:bedrock:<region>:<account-id>:guardrail/exampleguardrail"
            ]
        }
    ]
}
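The difference between Examples 3 and 4 is a single character in the StringLike pattern: `:*` matches numbered versions only, while a bare `*` after the guardrail name also matches the draft ARN, which carries no version suffix. This can be simulated with Python's fnmatch, whose `*` wildcard behaves like the IAM one here; the region and account in the ARN are placeholders:

```python
from fnmatch import fnmatchcase

# Placeholder region/account; the guardrail name matches the examples above.
PREFIX = "arn:aws:bedrock:us-east-1:111122223333:guardrail/exampleguardrail"

numeric_only = PREFIX + ":*"  # Example 3: requires a ":<version>" suffix
any_version = PREFIX + "*"    # Example 4: also matches the suffix-less draft ARN

draft_arn = PREFIX            # the draft is addressed without a version suffix
v1_arn = PREFIX + ":1"        # numeric version 1

for arn in (draft_arn, v1_arn):
    print(fnmatchcase(arn, numeric_only), fnmatchcase(arn, any_version))
# The draft matches only the Example 4 pattern; version 1 matches both.
# Caveat: a trailing "*" with no colon would also match any other guardrail
# whose name merely starts with "exampleguardrail".
```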

Example 5: Enforce the use of a specific guardrail and version pair from a list of guardrail and version pairs

The following example illustrates the enforcement of exampleguardrail1 and its version 1, or exampleguardrail2 and its version 2, or exampleguardrail3 and its draft version, during model inference:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "InvokeFoundationModelStatement1",
            "Effect": "Allow",
            "Action": [
                "bedrock:InvokeModel",
                "bedrock:InvokeModelWithResponseStream"
            ],
            "Resource": [
                "arn:aws:bedrock:region::foundation-model/*"
            ],
            "Condition": {
                "StringEquals": {
                    "bedrock:GuardrailIdentifier": [
                        "arn:aws:bedrock:<region>:<account-id>:guardrail/exampleguardrail1:1",
                        "arn:aws:bedrock:<region>:<account-id>:guardrail/exampleguardrail2:2",
                        "arn:aws:bedrock:<region>:<account-id>:guardrail/exampleguardrail3"
                    ]
                }
            }
        },
        {
            "Sid": "InvokeFoundationModelStatement2",
            "Effect": "Deny",
            "Action": [
                "bedrock:InvokeModel",
                "bedrock:InvokeModelWithResponseStream"
            ],
            "Resource": [
                "arn:aws:bedrock:region::foundation-model/*"
            ],
            "Condition": {
                "StringNotEquals": {
                    "bedrock:GuardrailIdentifier": [
                        "arn:aws:bedrock:<region>:<account-id>:guardrail/exampleguardrail1:1",
                        "arn:aws:bedrock:<region>:<account-id>:guardrail/exampleguardrail2:2",
                        "arn:aws:bedrock:<region>:<account-id>:guardrail/exampleguardrail3"
                    ]
                }
            }
        },
        {
            "Sid": "ApplyGuardrail",
            "Effect": "Allow",
            "Action": [
                "bedrock:ApplyGuardrail"
            ],
            "Resource": [
                "arn:aws:bedrock:<region>:<account-id>:guardrail/exampleguardrail1",
                "arn:aws:bedrock:<region>:<account-id>:guardrail/exampleguardrail2",
                "arn:aws:bedrock:<region>:<account-id>:guardrail/exampleguardrail3"
            ]
        }
    ]
}

Known limitations

When implementing policy-based guardrail enforcement, be aware of the limitations documented for this feature.

Conclusion

The new IAM policy-based guardrail enforcement in Amazon Bedrock represents a crucial advancement in AI governance as generative AI becomes integrated into business operations. By enabling centralized policy enforcement, security teams can maintain consistent safety controls across AI applications regardless of who develops or deploys them, effectively mitigating risks related to harmful content, privacy violations, and bias. This approach offers significant advantages: it scales efficiently as organizations expand their AI initiatives without creating administrative bottlenecks, helps prevent technical debt by standardizing safety implementations, and enhances the developer experience by allowing teams to focus on innovation rather than compliance mechanics.

This capability demonstrates organizational commitment to responsible AI practices through comprehensive monitoring and audit mechanisms. Organizations can use model invocation logging in Amazon Bedrock to capture complete request and response data in Amazon CloudWatch Logs or Amazon Simple Storage Service (Amazon S3) buckets, including specific guardrail trace documentation showing when and how content was filtered. Combined with AWS CloudTrail integration that records guardrail configurations and policy enforcement actions, businesses can confidently scale their generative AI initiatives with appropriate safety mechanisms protecting their brand, customers, and data—striking the essential balance between innovation and ethical responsibility needed to build trust in AI systems.

Get started today with Amazon Bedrock Guardrails and implement configurable safeguards that balance innovation with responsible AI governance across your organization.


About the Authors

Shyam Srinivasan is on the Amazon Bedrock Guardrails product team. He cares about making the world a better place through technology and loves being part of this journey. In his spare time, Shyam likes to run long distances, travel around the world, and experience new cultures with family and friends.

Antonio Rodriguez is a Principal Generative AI Specialist Solutions Architect at AWS. He helps companies of all sizes solve their challenges, embrace innovation, and create new business opportunities with Amazon Bedrock. Apart from work, he loves to spend time with his family and play sports with his friends.

Satveer Khurpa is a Sr. WW Specialist Solutions Architect, Amazon Bedrock at Amazon Web Services. In this role, he uses his expertise in cloud-based architectures to develop innovative generative AI solutions for clients across diverse industries. Satveer’s deep understanding of generative AI technologies allows him to design scalable, secure, and responsible applications that unlock new business opportunities and drive tangible value.
