The Popular Way to Build Trusted Generative AI? RAG

Organizations of all sizes, across all sectors, are rushing to reap the benefits of generative AI, from boosting operational efficiencies to reinventing their businesses. But as they begin to adopt this transformative technology, they’re encountering a common challenge—delivering accurate results.

It’s a critical issue: bias and other inaccuracies reduce trust. And for generative AI applications, trust is everything.

The solution? Customizing large language models (LLMs), the key AI technology powering everything from entry-level chatbots to enterprise-grade AI initiatives.

On their own, LLMs may provide results that are inaccurate or too general to be helpful. To truly build trust among customers and other users of generative AI applications, businesses need to ensure accurate, up-to-date, personalized responses. And that means customizing their LLMs.

But customizing an LLM can be complex, time-consuming, and resource-intensive. It requires expertise, and not every organization has data scientists and machine learning engineers on staff. Even so, more organizations are choosing a proven, cost-effective customization technique that boosts accuracy and relevance while taking full advantage of a resource most organizations already have in abundance: data.

How RAG Drives Accuracy

Retrieval augmented generation (RAG) is emerging as a preferred customization technique for businesses to rapidly build accurate, trusted generative AI applications. RAG is a fast, easy-to-use approach that helps reduce inaccuracies (or “hallucinations”) and increases the relevance of answers. It’s more cost-effective and requires less expertise than such labor-intensive techniques as fine-tuning and continued pre-training of LLMs.

For application builders, RAG offers an efficient way to create trusted generative AI applications. For customers, employees, and other users of these applications, RAG means more accurate, relevant, and complete responses that can cite their sources, building trust through transparency.

Generative AI’s output is only as good as its data, so choosing credible sources is vital to improving responses. RAG augments LLMs by retrieving and applying data and insights from the organization’s own data stores, as well as trustworthy external sources of truth, to deliver more accurate results. Even when a model was trained on older data, RAG can supplement it at query time with current, near-real-time information.
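To make the pattern concrete, here is a minimal, vendor-neutral sketch of the retrieve-then-augment loop. The embed() function, the sample documents, and the prompt wording are all hypothetical placeholders; a real deployment would use an actual embedding model, a vector database, and the organization's own content.

```python
import hashlib
import numpy as np

# Hypothetical stand-in for a real embedding model -- a production system
# would call an embedding API or local model and store vectors in a vector DB.
def embed(text: str) -> np.ndarray:
    seed = int.from_bytes(hashlib.sha256(text.encode()).digest()[:4], "big")
    return np.random.default_rng(seed).standard_normal(384)

# The organization's own documents serve as the trusted source of truth.
documents = [
    "Refunds are processed within 5 business days.",
    "Support is available 8am-8pm local time, seven days a week.",
    "Contractors can update banking details on the account settings page.",
]
doc_vectors = np.array([embed(d) for d in documents])

def retrieve(query: str, k: int = 2) -> list[str]:
    """Return the k documents most similar to the query (cosine similarity)."""
    q = embed(query)
    scores = doc_vectors @ q / (np.linalg.norm(doc_vectors, axis=1) * np.linalg.norm(q))
    return [documents[i] for i in np.argsort(scores)[::-1][:k]]

def build_prompt(question: str) -> str:
    """Augment the user's question with retrieved context before calling the LLM."""
    context = "\n".join(f"- {doc}" for doc in retrieve(question))
    return (
        "Answer using only the context below and cite which lines you used.\n"
        f"Context:\n{context}\n\nQuestion: {question}"
    )

print(build_prompt("How long does a refund take?"))
```

The key point is that the LLM is never asked to answer from memory alone; every prompt carries the freshest relevant context retrieved at query time.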

RAG in Action

Food-delivery company DoorDash applies RAG to its generative AI solution to improve self-service and enhance the experience of its independent contractors (“Dashers”) who submit a high volume of requests for assistance.

DoorDash collaborates with Amazon Web Services (AWS) to supplement its traditional call center with a voice-operated self-service contact center solution. For the core of its generative AI solution, DoorDash uses Anthropic’s Claude models and Amazon Bedrock, an AWS service that helps organizations build and scale generative AI applications quickly and easily.

Using RAG to customize the Claude 3 Haiku model through Bedrock, DoorDash taps a deep, diverse knowledge base drawn from company sources to provide relevant, accurate responses to Dashers, reducing average response time to 2.5 seconds or less. DoorDash’s generative AI-powered contact center now fields hundreds of thousands of calls every day.

Access to this vast database through RAG provided the key to building trust. “We’ve built a solution that gives Dashers reliable access to the information they need, when they need it,” says Chaitanya Hari, contact center product lead at DoorDash.
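As one rough illustration of how such a setup can be wired together (a sketch under stated assumptions, not DoorDash's actual implementation), a retrieval-augmented query against a Bedrock knowledge base can be issued with boto3's retrieve_and_generate call. The region, knowledge base ID, and question below are placeholders, and a knowledge base already populated with the relevant company documents is assumed to exist.

```python
import boto3

# Sketch of a RAG query against an Amazon Bedrock knowledge base.
# Region, knowledge base ID, and the question are placeholders; the
# knowledge base must already be set up with the organization's documents.
client = boto3.client("bedrock-agent-runtime", region_name="us-east-1")

response = client.retrieve_and_generate(
    input={"text": "How do I update my banking details?"},
    retrieveAndGenerateConfiguration={
        "type": "KNOWLEDGE_BASE",
        "knowledgeBaseConfiguration": {
            "knowledgeBaseId": "EXAMPLEKBID",
            "modelArn": (
                "arn:aws:bedrock:us-east-1::foundation-model/"
                "anthropic.claude-3-haiku-20240307-v1:0"
            ),
        },
    },
)

print(response["output"]["text"])        # generated answer
for citation in response["citations"]:   # sources retrieved for the answer
    for ref in citation["retrievedReferences"]:
        print(ref["location"])
```

The response carries both the generated answer and citations pointing back to the retrieved source documents, which is what makes the source-level transparency described above possible.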

The Power of Customization

Customization can significantly improve response accuracy and relevance, especially for use cases that need to tap fresh, real-time data.

RAG isn’t the only customization strategy; fine-tuning and other techniques can play key roles in customizing LLMs and building generative AI applications. But as RAG evolves and its capabilities expand, it will continue to serve as a quick, easy way to get started with generative AI and to ensure better, more accurate responses, building trust among employees, partners, and customers.


Learn more about AWS generative AI.