MarkTechPost@AI, August 16, 2024
Portkey AI Open-Sourced AI Guardrails Framework to Enhance Real-Time LLM Validation, Ensuring Secure, Compliant, and Reliable AI Operations

 

Portkey AI's Guardrails framework enhances real-time LLM validation, ensuring secure, compliant, and reliable AI operations.

🎯 Guardrails make interactions with large language models more reliable and safe, ensuring that requests and responses are formatted to predefined standards and reducing risk.

💻 Portkey AI provides an integrated, real-time guardrail platform that ensures LLM behavior always passes the prescribed checks, addressing both traditional and subtler failures.

🛡️ The Guardrail system includes many kinds of checks, such as predefined regex matching and JSON schema validation, and also supports LLM-based Guardrails that can detect a wide range of issues.

📋 Putting Guardrails into production takes four steps; users can define actions based on check results, and the system is highly configurable.

📝 Portkey AI logs Guardrail results, which is important for building evaluation datasets and improving AI model quality.

On Portkey AI, Guardrails is a significant component built into the Gateway Framework, designed to make interacting with large language models more reliable and safe. Specifically, Guardrails ensures that requests and responses are formatted according to predefined standards, reducing the risks associated with variable or harmful LLM outputs.

Complementing this, Portkey AI offers an integrated, fully guardrailed platform that works in real time to ensure LLM behavior passes all prescribed checks at all times. This matters because LLMs are inherently brittle, often failing in unexpected ways. Traditional failures surface as API downtime or explicit error codes, such as 400 or 500. More insidious are failures where a response with a 200 status code still disrupts an application's workflow because the output is malformed or wrong. Guardrails on the Gateway Framework are designed to meet these challenges by validating both inputs and outputs against predefined checks.

The Guardrail system includes a set of deterministic checks: predefined regex matching, JSON schema validation, and code detection in languages such as SQL, Python, and TypeScript. Beyond these, Portkey AI also supports LLM-based Guardrails that can detect gibberish or scan for prompt injections, protecting against even more insidious failure modes. More than 20 kinds of Guardrail checks are currently supported, each configurable as needed. The framework also integrates with external Guardrail platforms, including Aporia, SydeLabs, and Pillar Security: by adding the relevant API keys, users can apply those platforms' policies within their Portkey calls.
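The deterministic checks described above can be sketched in a few lines of Python. This is an illustrative sketch only, not Portkey's actual API; the function names, the regex pattern, and the simplified key-presence test standing in for full JSON schema validation are all assumptions.

```python
import re
import json

def regex_check(output: str, pattern: str) -> bool:
    """Pass if the LLM output matches a predefined regex."""
    return re.search(pattern, output) is not None

def json_schema_check(output: str, required_keys: set) -> bool:
    """Pass if the output parses as JSON and contains the required keys
    (a simplified stand-in for full JSON schema validation)."""
    try:
        data = json.loads(output)
    except json.JSONDecodeError:
        return False
    return isinstance(data, dict) and required_keys.issubset(data.keys())

# Example: validate a structured model response before passing it downstream.
response = '{"answer": "42", "confidence": 0.9}'
print(regex_check(response, r'"answer"'))                     # True
print(json_schema_check(response, {"answer", "confidence"}))  # True
```

Checks like these are cheap to run on every request, which is why deterministic validation typically sits in front of the more expensive LLM-based Guardrails.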

Putting Guardrails into production takes four steps: creating Guardrail checks, defining Guardrail actions, enabling the Guardrails through configurations, and attaching those configurations to requests. A user builds a Guardrail by selecting from the available checks and then defining what actions to take based on the results. These actions include logging the result, denying the request, creating an evaluation dataset, falling back to another model, or retrying the request.
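The four-step workflow above might translate into a configuration shaped roughly like the following. This is a hedged sketch: the key names (`checks`, `on_fail`, `fallback_model`, and so on) and the fallback model name are illustrative assumptions, not Portkey's exact schema.

```python
# Illustrative guardrail definition: a set of checks plus the actions to take
# when a check fails. All key names here are assumptions for the sketch.
guardrail_config = {
    "checks": [
        {"type": "regex_match", "pattern": r"^[A-Za-z0-9\s.,!?]+$"},
        {"type": "json_schema", "required_keys": ["answer"]},
    ],
    "on_fail": {
        "log_result": True,               # record the verdict for evaluation datasets
        "deny_request": False,            # let the request through but flag it
        "fallback_model": "some-backup-model",  # hypothetical fallback target
        "retry": {"attempts": 2},
    },
}

# Attaching the config to a request (step four of the workflow) would then be
# a matter of referencing it from the request's configuration.
assert {"checks", "on_fail"} <= guardrail_config.keys()
```

Separating the checks from the actions taken on their verdicts is what lets the same check be reused with different policies, e.g. log-only in staging but deny-on-fail in production.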

The Portkey Guardrail system is built to be highly configurable based on the outcomes of the various checks a Guardrail performs on an application. For example, a configuration can specify that when a check fails, the request either does not proceed at all or proceeds with a particular status code. This flexibility is key for any organization striking a balance between security concerns and operational efficiency.

One of the most potent aspects of Portkey's Guardrails is its relationship to the wider Gateway Framework, which orchestrates request handling. That orchestration takes into account whether a Guardrail is configured to run asynchronously or synchronously. In asynchronous mode, Portkey logs the Guardrail's result without affecting the request; in synchronous mode, the Guardrail's verdict directly determines how the request is handled. For instance, a failed synchronous check can return a specially defined status code, such as 446, indicating that the request should not be processed.
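The synchronous-versus-asynchronous distinction can be sketched as follows. The 446 status code comes from the article; the function shape and the sample check are illustrative assumptions, not the Gateway Framework's actual interface.

```python
import logging

logging.basicConfig(level=logging.INFO)

def handle_request(output: str, check, synchronous: bool) -> dict:
    """Run a guardrail check and decide how it affects the request."""
    passed = check(output)
    if synchronous and not passed:
        # Synchronous mode: the verdict blocks the request outright.
        return {"status": 446, "body": None}
    if not passed:
        # Asynchronous mode: log the failure but let the request proceed.
        logging.info("guardrail failed (async); request unaffected")
    return {"status": 200, "body": output}

# A toy check: reject any output containing digits.
no_digits = lambda text: not any(c.isdigit() for c in text)

print(handle_request("contains 123", no_digits, synchronous=True))   # status 446
print(handle_request("contains 123", no_digits, synchronous=False))  # status 200
```

The trade-off is the usual one: synchronous checks add latency to every request but catch bad outputs before they reach the application, while asynchronous checks cost nothing on the request path but only surface problems after the fact.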

Portkey AI keeps logs of Guardrail results, including the number of checks that pass or fail, how long each check takes, and the feedback provided for each request. This logging capability is important for organizations building evaluation datasets to continuously improve the quality of their AI models and protect them with Guardrails.
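A minimal sketch of that kind of per-request logging, recording pass/fail counts and per-check latency, might look like this. The record format and check names are assumptions for illustration, not Portkey's log schema.

```python
import time

def run_checks_with_logging(output: str, checks: dict) -> dict:
    """Run each named check and record verdict counts and timings."""
    record = {"passed": 0, "failed": 0, "timings_ms": {}}
    for name, check in checks.items():
        start = time.perf_counter()
        ok = check(output)
        record["timings_ms"][name] = (time.perf_counter() - start) * 1000
        record["passed" if ok else "failed"] += 1
    return record

checks = {
    "non_empty": lambda s: bool(s.strip()),
    "under_500_chars": lambda s: len(s) < 500,
}
log_entry = run_checks_with_logging("a short, valid response", checks)
print(log_entry["passed"], log_entry["failed"])  # 2 0
```

Accumulating records like these per request is what makes it possible to later assemble an evaluation dataset of inputs that failed specific checks.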

In conclusion, Guardrails on Portkey AI's Gateway Framework offer a robust solution to the intrinsic risks of running LLMs in a production environment. With comprehensive checks and actions, Portkey ensures that AI applications remain secure, compliant, and reliable in the face of LLMs' unpredictable behavior.


Check out the GitHub and Details. All credit for this research goes to the researchers of this project.



