AWS Machine Learning Blog
AI agents unifying structured and unstructured data: Transforming support analytics and beyond with Amazon Q Plugins

This article explores how adding custom plugins to Amazon Q Business can strengthen its ability to analyze AWS Support data. Traditional RAG (Retrieval-Augmented Generation) architecture excels at understanding natural language queries but is limited when it comes to precise numerical analysis and aggregation. By combining RAG's semantic search with structured data querying, and by using Lambda functions and an Amazon Bedrock LLM to translate natural language requests into SQL queries, the solution delivers more accurate and deeper analytical results. Through several examples, the article shows the new approach's advantages on complex queries involving counts, categorization, and multi-dimensional analysis, helping operations teams extract more valuable insights from AWS Support cases and health events.

📊 **Limitations of RAG and a plugin-based solution**: The article points out that traditional RAG architecture has accuracy problems with queries involving precise numerical calculation and aggregation; for example, counts of support cases can be off. To address this, custom plugins developed for Amazon Q Business combine it with structured data querying, significantly improving analytical precision and enabling deep mining of AWS Support data.

📈 **The key role of structured data querying in analytics**: When a user's query involves aggregation (such as counts, totals, or distributions), precise numerical computation requires structured queries. Vector similarity search alone cannot guarantee accurate numerical aggregation, so extracting and maintaining structured metadata (creation date, category, severity level, service type) is essential for precise filtering and querying, and makes more complex analysis possible.

💡 **Custom plugins extend Amazon Q Business's analytical capabilities**: Building custom plugins for Amazon Q Business extends it to recognize analytical intent and translate natural language queries into precise Amazon Athena SQL statements. These statements then execute against structured metadata tables, returning exact numerical results that are combined with traditional RAG responses to give users a more complete picture.

🛠️ **QSI architecture and metadata processing**: The article details the components of the Amazon Q Support Insights (QSI) solution, including a data collection pipeline (using Lambda and EventBridge to process support cases, health events, and more) and the Amazon Q Business application environment. Extracting structured metadata from support data into queryable tables, together with multi-account aggregation through AWS Organizations, lays the foundation for precise querying.

🔄 **Cross-source analysis and multi-dimensional insights**: Beyond single-source analysis, the enhanced solution correlates across data sources, for example linking support cases with health events, enabling more comprehensive operational reviews and pattern detection. This helps users understand root causes more deeply and make better-informed decisions.

As organizations seek to derive greater value from their AWS Support data, operational teams are looking for ways to transform raw support cases and health events into actionable insights. While traditional analytics tools can provide basic reporting capabilities, teams need more sophisticated solutions that can understand and process natural language queries about their operational data. Retrieval-Augmented Generation (RAG) architecture forms the foundation for optimizing large language model outputs by referencing authoritative knowledge bases outside of their training data before generating responses. This architecture uses the power of semantic search and information retrieval capabilities to enhance accuracy.

In our previous blog post, Derive meaningful and actionable operational insights from AWS Using Amazon Q Business, we introduced a RAG-based solution using Amazon Q Business. However, while this approach excels at semantic search, it can face challenges with precise numerical analysis and aggregations. In this post, we address these limitations by showing how you can enhance Amazon Q with custom plugins to improve support analytics and incident response. Our solution combines the strengths of RAG with structured data querying, enabling more accurate answers to analytical questions about your AWS Support data. Detailed deployment instructions for the solution in this post are available in our AWS Samples GitHub repository.

Understanding RAG’s design for analytics

RAG architecture excels at finding relevant information through context-based matching, enabling natural language interactions with external data. However, the fundamental architecture of vector similarity search, which prioritizes semantic relevance over analytical aggregation capabilities, can introduce significant inaccuracies. Let’s examine this limitation with a simple query:

Give me total count of support cases in Feb 2025

The following RAG-only response shows the system attempting to piece together information from various documents, reporting 190 cases through potentially hallucinated aggregations.

Based on the provided JSON files, there were 190 AWS support cases documented in February 2025, calculated from the following case counts: 17 cases from Feb 1-3, 12 cases from Feb 4-5, 21 cases from Feb 7-8, 21 cases from Feb 8-10, 16 cases from Feb 10-11, 10 cases from Feb 11-12, 23 cases from Feb 19-20, 22 cases from Feb 20-21, 27 cases from Feb 23-24, and 21 cases from Feb 25-26.

However, when enhanced with structured data querying capabilities, the same query yields 958 support cases, the accurate count verified against the actual dataset:

Based on the query results, there were a total of 958 support cases in February 2025.
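The difference is easy to see in miniature. The sketch below uses an in-memory SQLite table with a hypothetical schema (in the real solution, the metadata lives in Athena) to answer the same question with a single SQL aggregation, which is exact by construction rather than stitched together from retrieved passages:

```python
import sqlite3

# Hypothetical metadata table; in the actual solution this lives in Athena.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE support_cases (
        case_id  TEXT,
        created  TEXT,   -- ISO date
        severity TEXT,
        service  TEXT
    )
""")
rows = [
    ("c1", "2025-02-03", "high",   "OpenSearch"),
    ("c2", "2025-02-11", "normal", "RDS"),
    ("c3", "2025-02-27", "urgent", "Connect"),
    ("c4", "2025-03-01", "low",    "EMR"),  # outside February
]
conn.executemany("INSERT INTO support_cases VALUES (?, ?, ?, ?)", rows)

# "Total count of support cases in Feb 2025" as one exact aggregation.
(count,) = conn.execute(
    "SELECT COUNT(*) FROM support_cases "
    "WHERE created >= '2025-02-01' AND created < '2025-03-01'"
).fetchone()
print(count)  # 3
```

The count is deterministic because it is computed over the full table, not over whichever document chunks happened to be retrieved.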

This limitation becomes even more apparent with moderately complex queries that require categorical analysis:

Give me OpenSearch Support cases, distribution based on severity, opened in Jan 2025?

The following is a RAG-only response showing limited or inaccurate categorical breakdown:

Based on the support cases data from January 2025, here is the severity distribution for OpenSearch cases:

Urgent: One case was opened regarding cluster health degradation and production system downtime

Normal: Two cases were identified:
1. A case about OpenSearch RI Exchange
2. A case concerning blue/green deployment upgrade from version 1.3 to 2.17

The cases demonstrate varying levels of urgency, with one critical production issue and two normal severity cases requiring standard response times.

With structured data querying, we get precise severity distributions:

Based on the query results for OpenSearch Support cases opened in January 2025, here's the distribution by severity:
1. Normal severity: 4 cases
2. Critical severity: 4 cases
3. High severity: 3 cases
4. Low severity: 2 cases
5. Urgent severity: 1 case

In total, there were 14 OpenSearch Support cases opened in January 2025. The distribution shows that Normal and Critical severity cases were the most common, followed by High severity cases. There were fewer Low and Urgent severity cases during this period.

The gap widens further with multi-dimensional analysis. Consider the following query, which expects a detailed analysis of account-service combinations and temporal patterns.

Query:

Find accounts with frequent high-priority support cases since January 2024, excluding limit increase requests. For each account and AWS service combination, show the total number of cases and how many different days had issues. Only include results where there were more than 3 cases, and sort results by the highest number of cases first.

Response:

Based on the analysis of high-priority support cases since January 2024, excluding limit increase requests, here are the key findings:

1. The account with the most frequent high-priority cases is 111111111111, with 60 cases for the Connect service.
2. The top 5 account and service combinations with the highest number of cases are:
* Account 111111111111 (Connect): 60 cases
* Account 222222222222 (RDS Oracle): 52 cases
* Account 333333333333 (OpenSearch Service): 38 cases
* Account 444444444444 (Aurora PostgreSQL): 25 cases
* Account 555555555555 (EMR): 23 cases

The services that appear most frequently in the top results are:
* Aurora PostgreSQL
* OpenSearch Service
* RDS Oracle
* EMR
* DynamoDB

Interestingly, the number of total cases matches the number of unique days with cases for each account-service combination. This suggests that these accounts are experiencing issues on different days rather than multiple issues on the same day. There are 89 account-service combinations that have more than 3 high-priority cases since January 2024. Some accounts appear multiple times with different services, indicating they may be experiencing issues across various AWS offerings. For example:
* Account 222222222222 appears with RDS Oracle (52 cases), DynamoDB (18 cases), and Aurora PostgreSQL (6 cases)
* Account 111111111111 appears with Connect (60 cases) and other services

This information can be useful for identifying accounts that may need additional support or proactive engagement to address recurring issues. Would you like to focus on a specific aspect of this data or explore further details about particular accounts or services?

The following is a RAG-only response showing inability to process the same complex query:

Based on the provided data source, I cannot provide a complete analysis of high-priority support cases and their frequency patterns. While there are support cases mentioned in the data from January 2024, the data does not contain sufficient information about case priorities or enough cases per account to identify patterns meeting the threshold of more than 3 cases. Additionally, while some technical and customer service cases are shown, the data is fragmented across different dates without providing a comprehensive view of case volumes or frequencies per account and service.
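The multi-dimensional query above maps naturally onto a single SQL statement. The sketch below (toy data, hypothetical column names) shows the general shape of the SQL a plugin might generate: a GROUP BY over account and service, a HAVING threshold, and a distinct-day count:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE support_cases (
        account  TEXT,
        service  TEXT,
        severity TEXT,
        category TEXT,
        created  TEXT  -- ISO date
    )
""")
# Toy data: account A has 4 high-priority Connect cases on 4 different days.
rows = (
    [("A", "Connect", "high", "technical", f"2024-01-0{d}") for d in range(1, 5)]
    + [("B", "RDS", "high", "limit-increase", "2024-02-01")]   # excluded by category
    + [("C", "EMR", "low", "technical", "2024-03-01")]         # not high-priority
)
conn.executemany("INSERT INTO support_cases VALUES (?, ?, ?, ?, ?)", rows)

# The shape of SQL a plugin might generate for the multi-dimensional query.
sql = """
    SELECT account,
           service,
           COUNT(*)                AS total_cases,
           COUNT(DISTINCT created) AS distinct_days
    FROM support_cases
    WHERE severity IN ('high', 'urgent', 'critical')
      AND category <> 'limit-increase'
      AND created >= '2024-01-01'
    GROUP BY account, service
    HAVING COUNT(*) > 3
    ORDER BY total_cases DESC
"""
results = conn.execute(sql).fetchall()
print(results)  # [('A', 'Connect', 4, 4)]
```

Every filter, threshold, and sort order from the natural language request becomes an explicit clause, which is why the structured path stays accurate as query complexity grows.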

These examples demonstrate how RAG alone struggles with analytical queries of increasing complexity, while our plugin-enhanced solution delivers precise, structured analysis at every level. For scenarios requiring precise numerical analysis, we can enhance RAG's capabilities through the structured approaches described in the following sections.

Enhancing Q Support-Insights with agentic AI

Building on the Q Support-Insights (QSI) solution introduced in Derive meaningful and actionable operational insights from AWS Using Amazon Q Business, we’ll demonstrate how to enhance analytical capabilities through agentic AI by creating custom plugins. This enhancement preserves QSI’s base implementation while adding precise analytical processing through structured metadata querying.

QSI overview

The Amazon Q Support Insights (QSI) solution consists of two main components:

    Data collection pipeline
      Support Collector module built on AWS Lambda functions. The support data consists of AWS Support cases, Health events, and Trusted Advisor checks.
      Amazon EventBridge for automated data collection. The data pipeline enables two synchronization mechanisms:
        Real-time case updates: Processes AWS Support cases through event-based triggers (CreateCase, AddCommunicationToCase, ResolveCase, ReopenCase).
        Historical and daily sync: Performs an initial historical data sync and refreshes AWS Trusted Advisor data daily.
      Stores data in JSON format in a centralized Amazon Simple Storage Service (Amazon S3) bucket.
      Supports multi-account data aggregation through AWS Organizations.
    Amazon Q Business application environment
      Amazon Q Business application deployment
      Amazon S3 connector for data source integration
      Web experience configuration for user interaction
      Authentication through AWS IAM Identity Center
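The real-time sync described above can be pictured as an EventBridge rule pattern. The event names come from the text; the `source` and `detail-type` values below are assumptions to verify against the AWS Support EventBridge documentation:

```python
# Sketch of an EventBridge rule pattern for the real-time case sync. A rule
# with this pattern would target the Support Collector Lambda function, while
# a separate scheduled rule handles the daily Trusted Advisor refresh.
# NOTE: "source" and "detail-type" are assumed values, not confirmed by the text.
support_case_event_pattern = {
    "source": ["aws.support"],
    "detail-type": ["Support Case Update"],
    "detail": {
        "event-name": [
            "CreateCase",
            "AddCommunicationToCase",
            "ResolveCase",
            "ReopenCase",
        ]
    },
}
```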

Enabling query aggregation with custom plugins for Amazon Q Business 

Custom plugins extend Amazon Q Business to combine semantic search with precise analytics capabilities. The following implementation details outline how we’ve augmented the base QSI solution:

Metadata processing

To enable precise querying capabilities, we've implemented a robust metadata processing framework that structures and indexes support data, extracting fields such as creation date, severity, service, and category into queryable tables.
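The extraction step can be sketched as a small flattening function. The input field names below mirror the AWS Support API's case shape, but treat them as assumptions to verify against your collected JSON:

```python
from datetime import datetime, timezone

def extract_case_metadata(case: dict) -> dict:
    """Flatten one support-case JSON document into a queryable metadata row.

    Field names (caseId, timeCreated, severityCode, ...) are assumed to match
    the AWS Support API's DescribeCases output; verify against your data.
    """
    created = datetime.strptime(
        case["timeCreated"], "%Y-%m-%dT%H:%M:%S.%fZ"
    ).replace(tzinfo=timezone.utc)
    return {
        "case_id": case["caseId"],
        "created_date": created.date().isoformat(),
        "severity": case.get("severityCode", "unknown"),
        "service": case.get("serviceCode", "unknown"),
        "category": case.get("categoryCode", "unknown"),
    }

row = extract_case_metadata({
    "caseId": "case-123",
    "timeCreated": "2025-01-15T09:30:00.000Z",
    "severityCode": "high",
    "serviceCode": "opensearch-service",
    "categoryCode": "performance",
})
print(row["created_date"])  # 2025-01-15
```

Rows like this, written to S3 and registered as an Athena table, are what make exact filters and aggregations possible.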

Cross-source analysis

The solution enables comprehensive analysis across multiple data sources through intelligent correlation capabilities, such as relating support cases to health events for more complete operational assessments and pattern detection.

Architecture overview

The following illustration shows how the enhanced Amazon Q Business solution integrates custom plugins with the existing QSI architecture. The diagram demonstrates the flow from user query through analytical processing to response generation, highlighting how structured data querying complements the RAG-based semantic search capabilities.

When a user submits an analytical query like Show me OpenSearch case distribution based on severity, the following process occurs:

    1. The custom plugin detects the analytical intent and triggers a Lambda function.
    2. The Lambda function uses an Amazon Bedrock LLM to transform the natural language query into precise Athena SQL.
    3. The query executes against structured metadata in Athena.
    4. Results are synthesized back into natural language and presented alongside traditional RAG responses.

This approach enables precise analytics while maintaining the conversational interface users expect.
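The front half of this flow can be sketched in a few lines. The keyword list, table name, and prompt wording below are illustrative assumptions, and the actual Amazon Bedrock and Athena calls are omitted:

```python
# Sketch of the plugin's front half: detect analytical intent, then build the
# prompt that a Bedrock model would turn into Athena SQL. Keyword list, table
# name, and prompt wording are assumptions, not the solution's exact values.
AGGREGATION_HINTS = {"count", "total", "distribution", "how many", "average"}

def is_analytical(query: str) -> bool:
    """Heuristic intent check: does the query ask for an aggregation?"""
    q = query.lower()
    return any(hint in q for hint in AGGREGATION_HINTS)

def build_sql_prompt(query: str, table: str = "support_case_metadata") -> str:
    """Build the natural-language-to-SQL prompt for the LLM."""
    return (
        f"Translate this question into a single Athena SQL statement over the "
        f"table `{table}` (columns: case_id, created_date, severity, service, "
        f"category). Return only SQL.\n\nQuestion: {query}"
    )

query = "Show me OpenSearch case distribution based on severity"
if is_analytical(query):
    prompt = build_sql_prompt(query)
```

In the deployed solution, `prompt` would go to a Bedrock model, and the returned SQL would be executed through Athena before the results are summarized back into conversational text.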

Implementation example

Here’s a sample interaction demonstrating the enhanced capabilities. Let’s start with a basic query to understand the overall OpenSearch case volume, using structured data querying:

Query:

Give me OpenSearch cases in Jan 2025

Response:

Based on the query results for OpenSearch cases in January 2025, here's a summary:
1. Total cases: 15
2. Severity breakdown:
Critical: 4 cases
High: 3 cases
Urgent: 1 case
Normal: 3 cases
Low: 2 cases
3. Status: All cases are resolved
4. Most common issues:
Cluster health problems (yellow status)
Blue/Green deployment issues
Performance concerns (high CPU utilization)
5. Services involved:
OpenSearch Service Managed Cluster: 13 cases
OpenSearch Service Ingestion: 2 cases

The cases span various topics including cluster health, deployments, performance, and configuration issues. Is there a specific aspect of these cases you'd like more information on?

Now, using RAG capabilities to analyze unstructured case content for architectural insights:

Query:

Out of these, what are the OpenSearch issues that are related to architecture deficiencies? 

Response:

Based on the available data, several architectural issues have been identified in OpenSearch deployments:

* The shard count per node exceeding recommended limits is a significant architectural concern, with validation errors occurring when scaling up node counts. The recommended limit is 1,000 shards per node and exceeding this requires careful architectural planning.
* Master node scaling is another architectural challenge, where insufficient master node capacity can limit cluster expansion. This requires proper configuration of master node instance types according to the number of nodes and shards in the domain.
* Resource utilization issues, particularly around JVM Memory Pressure and CPU utilization, indicate potential architectural shortcomings. The system architecture should be designed to maintain JVM Memory Pressure below 80% and CPU Utilization below 85% for optimal performance.
* The system's handling of blue/green deployments shows architectural limitations, as these operations cause performance impacts and increased latency during data migration to new nodes.

Continuing with RAG-based analysis of case details to evaluate implementation patterns:

Query:

Given the high shard count, JVM, or other data can we make inferences about whether they are using this correctly?

Response:

Based on the data discussed, there are strong indicators that the OpenSearch system is not being used optimally:

JVM Memory Management Issues:
1. Memory pressure consistently exceeded recommended thresholds (above 78% and eventually over 90%)
2. The 64GB initial heap size configuration proved inadequate for the workload
3. Circuit breaking exceptions and cluster blocks occurred due to memory issues

Shard Management Problems:
1. The system appears to have exceeded the recommended limit of 1,000 shards per node
2. Too many small shards created unnecessary system strain
3. Large shards complicated failure recovery processes

Resource Utilization:
1. Failed to maintain recommended thresholds: JVM Memory Pressure should be below 80% and CPU Utilization below 85%
2. Continuous memory pressure issues
3. Failed snapshots

These indicators strongly suggest improper sizing and resource allocation for the workload, pointing to incorrect implementation or configuration of the OpenSearch system. Better capacity planning and adherence to best practices would be necessary for optimal performance.

Finally, here’s another example of structured data analysis with a complex query demonstrating multi-dimensional aggregation capabilities:

Query:

For high-severity support cases since January 1, 2024, show the number of total cases and after-hours cases for each account and service combination. Define after-hours as before 1 PM UTC, after 1 AM UTC, or times falling within the weekend. Only include results with more than 3 after-hours cases. Sort by the highest number of after-hours cases. Exclude limit increase requests.

Response:

Based on the analysis of high-severity support cases since January 1, 2024, here's a summary of the results, focusing on accounts and services with more than 3 after-hours cases:
1. The account with the highest number of after-hours cases (48) is 111111111111, for the Connect service. They have a total of 66 high-severity cases.
2. The second highest is account 333333333333 with 40 after-hours cases for the OpenSearch Service Managed Cluster, out of 49 total cases.
3. Account 222222222222 has 38 after-hours cases for RDS Oracle, out of 64 total cases.
4. EMR service for account 555555555555 has 19 after-hours cases out of 23 total cases.
5. Bedrock service for account 666666666666 has 17 after-hours cases out of 24 total cases.

The results are sorted by the highest number of after-hours cases, and limit increase requests have been excluded as requested. After-hours cases are defined as those created before 1 PM UTC, after 1 AM UTC, or at times falling within the weekend. Would you like more details on a specific account or service, or do you need further analysis of this data?

The response combines precise numerical analysis with semantic understanding, providing actionable insights for operational reviews.

Benefits and impact

This plugin architecture delivers several key improvements: accurate numerical aggregations, multi-dimensional analysis, and cross-source correlation, all surfaced through the same conversational interface.

Deploy the Amazon Q Business application

The following is a simplified deployment process. For detailed instructions, see the Amazon Q Business application creation module.

Prerequisites

    AWS CloudShell is recommended, since it comes pre-installed with the required libraries and tools. Alternatively, you can use a local machine with the AWS Command Line Interface (AWS CLI) installed and configured with valid credentials.
    Two S3 buckets:
      A support data bucket for storing AWS Support case data.
      A resource bucket as temporary storage for Lambda deployment resources (can be deleted after deployment).
    An IAM Identity Center instance configured.
    AWS Support data collected using the Support Data Pipeline. You can deploy now and add data later, but functionality depends on data availability in your S3 bucket.
    Access to Anthropic's Claude 3.5 Sonnet through Amazon Bedrock. See Add or remove access to Amazon Bedrock foundation models.
    The default database should exist in Athena. If not, you can create it using the Athena query editor.

Deployment steps

You can use the following script to deploy the Q solution. No manual steps are needed; the script handles stack creation and configuration automatically.

# Clone the repository
git clone https://github.com/aws-samples/support-insights-with-amazon-q.git
cd q_application
chmod +x deploy_q_stacks.sh
./deploy_q_stacks.sh

Clean up

To remove the resources, delete the S3 buckets and CloudFormation stacks. Delete the CloudFormation stacks in the following order:

    case-metadata-stack amazon-q-stack custom-plugin-stack

Note that deleting the stacks won’t delete the existing S3 buckets; you must delete those manually.

Conclusion

By combining RAG’s semantic understanding with precise analytical capabilities through plugins, we’ve transformed Amazon Q Business into a powerful operational analytics platform. In the examples in this post, you can see how organizations can use this enhancement to derive more accurate and actionable insights from their AWS Support data, supporting better operational decision-making and proactive issue resolution. While demonstrated through support data analytics for operational improvements, these patterns apply across domains that combine structured and unstructured data sources.

Learn more

For questions and feedback, visit AWS re:Post or contact AWS Support.


About the authors

Chitresh Saxena is a Sr. AI/ML specialist TAM specializing in generative AI solutions and dedicated to helping customers successfully adopt AI/ML on AWS. He excels at understanding customer needs and provides technical guidance to build, launch, and scale AI solutions that solve complex business problems.

Kevin Morgan is a Sr. Enterprise Support Manager at AWS who helps customers accelerate their cloud adoption journey through hands-on leadership and technical guidance. As a member of the NextGen Developer Experience TFC, he specializes in Builder Experience, CloudOps and DevOps. Outside of work, Kevin enjoys being a Game Master for D&D and is a retro computing enthusiast.
