AWS Machine Learning Blog, January 24
Enhance your customer’s omnichannel experience with Amazon Bedrock and Amazon Lex

This post explores how AI technologies, in particular natural language understanding (NLU) and large language models (LLMs), can enhance the omnichannel customer experience. By integrating Amazon Lex with Amazon Bedrock, businesses can deliver smarter customer interactions across channels such as the call center. Amazon Lex handles initial intent classification and information collection, while Amazon Bedrock provides secondary validation when Lex cannot confidently interpret the customer's intent. This optimizes the customer service flow while maintaining security and compliance, reduces agent handle time, and improves customer satisfaction.

🎯**Enhanced intent classification**: The LLM can accurately identify a customer's intent even when their phrasing falls outside the traditional NLU model's training data. For example, when a customer says "my basement is flooded," the LLM recognizes that they want to start a claim.

🧩**Assisted slot filling**: The LLM can interpret vague answers and map them to the correct slot values. For example, it understands "Toyota Tundra" as "truck," or "the whole top of my house was gone" as "roof."

📢**Background noise filtering**: The LLM can distinguish background noise from the customer's actual statements, interpreting intent more accurately. Even on a noisy call, it can pick out key phrases such as "my car" and avoid misinterpretation.

🤝**Cooperative design**: Amazon Lex and Amazon Bedrock work together: Lex handles the initial interaction, and Bedrock handles complex or ambiguous intents, reducing both the risk of exposing an LLM directly to customers and the overall cost.

The rise of AI has opened new avenues for enhancing customer experiences across multiple channels. Technologies like natural language understanding (NLU) are employed to discern customer intents, facilitating efficient self-service actions. Automatic speech recognition (ASR) translates spoken words into text, enabling seamless voice interactions. With Amazon Lex bots, businesses can use conversational AI to integrate these capabilities into their call centers. Amazon Lex uses ASR and NLU to comprehend customer needs, guiding them through their journey. These AI technologies have significantly reduced agent handle times, increased Net Promoter Scores (NPS), and streamlined self-service tasks, such as appointment scheduling.

The advent of generative AI further expands the potential to enhance omnichannel customer experiences. However, concerns about security, compliance, and AI hallucinations often deter businesses from directly exposing customers to large language models (LLMs) through their omnichannel solutions. This is where the integration of Amazon Lex and Amazon Bedrock becomes invaluable. In this setup, Amazon Lex serves as the initial touchpoint, managing intent classification, slot collection, and fulfillment. Meanwhile, Amazon Bedrock acts as a secondary validation layer, intervening when Amazon Lex encounters uncertainties in understanding customer inputs.

In this post, we demonstrate how to integrate LLMs into your omnichannel experience using Amazon Lex and Amazon Bedrock.

Enhancing customer interactions with LLMs

The following are three scenarios illustrating how LLMs can enhance customer interactions:

As demonstrated in these scenarios, the LLM is not controlling the conversation. Instead, it operates within the boundaries defined by intents, intent descriptions, slots, sample slots, and utterances from Amazon Lex. This approach helps guide the customer along the correct path, reducing the risks of hallucination and manipulation of the customer-facing application. Furthermore, this approach reduces cost, because NLU is used when possible, and the LLM acts as a secondary check before re-prompting the customer.
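To make that boundary concrete, the fallback prompt can be assembled from the bot's own intent definitions, so the model can only choose among intents Amazon Lex already knows. The following is a minimal sketch; the intent name and description are illustrative, and the actual prompts used by this solution live in the GitHub repo:

```python
def build_intent_prompt(intents: dict, utterance: str) -> str:
    """Build a classification prompt constrained to the bot's own intents.

    `intents` maps intent name -> description, taken from the Lex bot
    definition, so the model cannot invent an intent the bot lacks.
    """
    lines = [f"- {name}: {desc}" for name, desc in intents.items()]
    return (
        "You are assisting a call-center bot. Classify the caller's "
        "utterance as exactly one of the intents below, or reply NONE "
        "if no intent applies. Reply with the intent name only.\n\n"
        "Intents:\n" + "\n".join(lines) + "\n\n"
        f"Utterance: {utterance}"
    )

prompt = build_intent_prompt(
    {"GatherFNOLInfo": "Start a first-notice-of-loss insurance claim"},
    "My neighbor's tree fell on my garage.",
)
```

Because the model's answer is checked against the intent list before use, a hallucinated intent name simply falls through to a normal re-prompt.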

You can further enhance this AI-driven experience by integrating it with your contact center solution, such as Amazon Connect. By combining the capabilities of Amazon Lex, Amazon Bedrock, and Amazon Connect, you can deliver a seamless and intelligent customer experience across your channels.

When customers reach out, whether through voice or chat, this integrated solution provides a powerful, AI-driven interaction:

    1. Amazon Connect manages the initial customer contact, handling call routing and channel selection.
    2. Amazon Lex processes the customer’s input, using NLU to identify intent and extract relevant information.
    3. In cases where Amazon Lex might not fully understand the customer’s intent, or when a more nuanced interpretation is needed, advanced language models in Amazon Bedrock can be invoked to provide deeper analysis and understanding.
    4. The combined insights from Amazon Lex and Amazon Bedrock guide the conversation flow in Amazon Connect, determining whether to provide automated responses, request more information, or route the customer to a human agent.
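The routing decision at the end of that flow can be sketched as a small function. This is an illustrative sketch, not part of the solution's code; the confidence threshold and return values are assumptions you would tune for your own contact flow:

```python
from typing import Optional

def next_action(nlu_intent: Optional[str], nlu_confidence: float,
                llm_intent: Optional[str], threshold: float = 0.8) -> str:
    """Decide the next step in the contact flow.

    The 0.8 confidence threshold is illustrative; tune it per bot.
    """
    if nlu_intent and nlu_confidence >= threshold:
        return f"fulfill:{nlu_intent}"   # Lex NLU is confident -- use it
    if llm_intent:
        return f"fulfill:{llm_intent}"   # fall back to the LLM's reading
    return "route_to_agent"              # neither source understood the caller
```

The ordering encodes the cost argument from earlier: NLU answers first whenever it can, and the human agent is the last resort.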

Solution overview

In this solution, Amazon Lex connects to Amazon Bedrock through Lambda, invoking an LLM of your choice on Amazon Bedrock when assistance with intent classification or slot resolution is needed during the conversation. For instance, if an ElicitIntent call defaults to the FallbackIntent, the Lambda function runs to have Amazon Bedrock determine whether the user used out-of-band phrases that should be properly mapped. Additionally, we can augment the prompts sent to the model for intent classification and slot resolution with business context to yield more accurate results. Example prompts for intent classification and slot resolution are available in the GitHub repo.
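A stripped-down version of that Lambda function might look like the following. The model ID, intent set, and prompt text here are illustrative placeholders (the real prompts are in the GitHub repo); the Bedrock call uses the `converse` API from `boto3`:

```python
from typing import Optional

# Illustrative model ID -- use whichever Bedrock model your stack selects.
MODEL_ID = "anthropic.claude-3-haiku-20240307-v1:0"

def classify_utterance(prompt: str) -> str:
    """Send the prompt to Amazon Bedrock and return the raw model reply."""
    import boto3  # deferred import so the rest of the sketch reads offline
    bedrock = boto3.client("bedrock-runtime")
    resp = bedrock.converse(
        modelId=MODEL_ID,
        messages=[{"role": "user", "content": [{"text": prompt}]}],
    )
    return resp["output"]["message"]["content"][0]["text"]

def parse_intent(model_reply: str, known_intents: set) -> Optional[str]:
    """Accept the model's answer only if it names an intent the bot defines."""
    candidate = model_reply.strip().splitlines()[0].strip()
    return candidate if candidate in known_intents else None

def lambda_handler(event, context):
    """Runs when Lex NLU lands on FallbackIntent."""
    utterance = event["inputTranscript"]   # Lex V2 event field
    known = {"GatherFNOLInfo"}             # illustrative; read from your bot
    reply = classify_utterance(f"Classify this utterance: {utterance}")
    intent = parse_intent(reply, known)
    # Build a Lex sessionState response from `intent` (None -> re-prompt).
    return intent
```

Validating the reply against `known_intents` before acting on it is what keeps the model from steering the bot into an intent Lex doesn't define.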

The following diagram illustrates the solution architecture:

The workflow consists of the following steps:

    1. Messages are sent to the omnichannel Amazon Lex bot using Amazon Connect (text and voice), messaging apps (text), and third-party contact centers (text and voice).
    2. Amazon Lex NLU maps user utterances to specific intents.
    3. The Lambda function is invoked at phases of the conversation where Amazon Lex NLU didn’t identify the user utterance, such as during the fallback intent or during slot fulfillment.
    4. Lambda calls foundation models (FMs), selected from an AWS CloudFormation template, through Amazon Bedrock to identify the intent, identify the slot, or determine whether the transcribed messages contain background noise.
    5. Amazon Bedrock returns the identified intent or slot, or responds that it is unable to classify the utterance as a related intent or slot.
    6. Lambda sets the state of Amazon Lex to either move forward in the selected intent or re-prompt the user for input.
    7. Amazon Lex continues the conversation by either re-prompting the user or continuing to fulfill the intent.
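The last two steps hinge on the response shape Lambda returns to Amazon Lex. The following sketch shows the two Lex V2 session-state responses involved: delegating back to Lex when the model resolved the intent, and re-eliciting a slot when it did not (the intent and slot names in the usage are illustrative):

```python
def delegate(intent_name: str, slots: dict) -> dict:
    """Hand control back to Lex to continue the chosen intent (Lex V2 shape)."""
    return {
        "sessionState": {
            "dialogAction": {"type": "Delegate"},
            "intent": {"name": intent_name, "state": "InProgress", "slots": slots},
        }
    }

def elicit_slot(intent_name: str, slots: dict, slot_to_elicit: str) -> dict:
    """Ask Lex to re-prompt the user for one specific slot (Lex V2 shape)."""
    return {
        "sessionState": {
            "dialogAction": {"type": "ElicitSlot", "slotToElicit": slot_to_elicit},
            "intent": {"name": intent_name, "state": "InProgress", "slots": slots},
        }
    }
```

Whichever dictionary the Lambda function returns is what determines whether the customer moves forward or hears the prompt again.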

Prerequisites

You should have the following prerequisites:

Deploy the omnichannel Amazon Lex bot

To deploy this solution, complete the following steps:

    1. Choose Launch Stack to launch a CloudFormation stack in us-east-1:
    2. For Stack name, enter a name for your stack. This post uses the name FNOLBot.
    3. In the Parameters section, select the model you want to use.
    4. Review the IAM resource creation and choose Create stack.

After a few minutes, your stack should be complete. The core resources are as follows:

Test the omnichannel bot

To test the bot, navigate to FNOLBot on the Amazon Lex console and open a test window. For more details, see Testing a bot using the console.

Intent classification

Let’s test how, instead of saying “I would like to make a claim,” the customer can ask more complex questions:

    In the test window, enter “My neighbor’s tree fell on my garage. What steps should I take with my insurance company?”
    Choose Inspect.

In the response, the intent has been identified as GatherFNOLInfo.

Background noise mitigation with intent classification

Let’s simulate making a request with background noise:

    Refresh the bot by choosing the refresh icon.
    In the test window, enter “Hi yes I’m calling about yeah yeah one minute um um I need to make a claim.”
    Choose Inspect.

In the response, the intent has been identified as GatherFNOLInfo.

Slot assistance

Let’s test how generative AI can help fill a slot when the customer doesn’t provide an explicit slot value:

    Refresh the bot by choosing the refresh icon.
    Enter “I need to make a claim.”

The Amazon Lex bot will then ask “What portion of the home was damaged?”

    Enter “the whole dang top of my house was gone.”

The bot will then ask “Please describe any injuries that occurred during the incident.”

    Enter “I got a pretty bad cut from the shingles.”
    Choose Inspect.

You will notice that the Damage slot has been filled with “roof” and the PersonalInjury slot has been filled with “laceration.”
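Under the hood, this kind of mapping comes from prompting the model with the slot's allowed values. The sketch below illustrates the idea; the slot name, value list, and prompt wording are assumptions for illustration, and the actual slot-resolution prompts are in the GitHub repo:

```python
def build_slot_prompt(slot_name: str, allowed_values: list, utterance: str) -> str:
    """Prompt the model to map free-form speech onto one allowed slot value."""
    return (
        f"The caller was asked for the '{slot_name}' slot and said:\n"
        f"\"{utterance}\"\n\n"
        "Map the answer to exactly one of these values, ignoring filler "
        "words and background speech, or reply NONE if nothing matches:\n"
        + ", ".join(allowed_values)
    )

prompt = build_slot_prompt(
    "Damage",
    ["roof", "basement", "garage", "siding"],
    "the whole dang top of my house was gone",
)
```

Constraining the answer to the listed values is what lets “the whole dang top of my house was gone” come back as the valid slot value “roof.”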

Background noise mitigation with slot assistance

We now simulate how the solution handles background noise that Amazon Lex’s ASR transcribes along with the customer’s speech. In the first scenario, the user is talking with other people in the room while speaking to the Amazon Lex bot. In the second scenario, a TV in the background is loud enough that it gets transcribed by ASR.

    Refresh the bot by choosing the refresh icon.
    Enter “I need to make a claim.”

The Amazon Lex bot will then ask “What portion of the home was damaged?”

    Enter “yeah i really need that soon um the roof was damaged.”

The bot will then ask “Please describe any injuries that occurred during the incident.”

    Enter “tonight on the nightly news reporters are on the scene um i got a pretty bad cut.”
    Choose Inspect.

You will notice that the Damage slot has been filled with “roof” and the PersonalInjury slot has been filled with “laceration.”

Clean up

To avoid incurring additional charges, delete the CloudFormation stacks you deployed.

Conclusion

In this post, we showed you how to set up Amazon Lex for an omnichannel chatbot experience, with Amazon Bedrock as your secondary validation layer. This allows your customers to provide out-of-band responses at both the intent and slot collection levels without being re-prompted, making for a seamless customer experience. As we demonstrated, whether the user provides a robust description of their intent and slot or uses phrases that are outside of the Amazon Lex NLU training data, the LLM is able to identify the correct intent and slot.

If you have an existing Amazon Lex bot deployed, you can edit the Lambda code to further enhance the bot. Try out the solution from the CloudFormation stack or the code in the GitHub repo, and let us know if you have any questions in the comments.


About the Authors

Michael Cho is a Solutions Architect at AWS, where he works with customers to accelerate their mission on the cloud. He is passionate about architecting and building innovative solutions that empower customers. Lately, he has been dedicating his time to experimenting with Generative AI for solving complex business problems.

Joe Morotti is a Solutions Architect at Amazon Web Services (AWS), working with Financial Services customers across the US. He has held a wide range of technical roles and enjoys showing customers the art of the possible. His passion areas include conversational AI, contact center, and generative AI. In his free time, he enjoys spending quality time with his family exploring new places and overanalyzing his sports team’s performance.

Vikas Shah is an Enterprise Solutions Architect at Amazon Web Services. He is a technology enthusiast who enjoys helping customers find innovative solutions to complex business challenges. His areas of interest are ML, IoT, robotics and storage. In his spare time, Vikas enjoys building robots, hiking, and traveling.
