Redis partners with Amazon Bedrock to elevate generative AI application quality

Integration empowers developers with advanced evaluation tools for LLM-powered systems, enhancing response quality and accelerating AI innovation


LAS VEGAS, Dec. 02, 2024 (GLOBE NEWSWIRE) -- Redis, the world’s fastest data platform, announced deeper integration with Amazon Bedrock to further improve the quality and reliability of generative AI apps. Building on last year’s successful integration of Redis Cloud as a knowledge base for building Retrieval-Augmented Generation (RAG) systems, Redis continues to deliver market-leading vector search performance and remains one of only three software vendors listed in the Amazon Bedrock console.

Amazon Bedrock Knowledge Bases now supports RAG evaluation

Amazon Bedrock’s new RAG evaluation service provides a fast, automated, and cost-effective evaluation tool, natively integrated into the Bedrock platform. Leveraging foundation models from Amazon and other leading AI providers, the service lets developers automate the assessment of LLM-generated responses, improving accuracy and reducing the risk of errors such as hallucinations. By incorporating automated evaluations, developers can more effectively optimize generative AI applications to meet specialized requirements across diverse use cases.
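To give a sense of the idea behind automated RAG evaluation, here is a minimal, self-contained sketch. It is not the Bedrock evaluation API: the function names are hypothetical, and the token-overlap "groundedness" heuristic is a deliberately crude stand-in for the LLM-as-judge scoring that production evaluation services use.

```python
# Illustrative sketch of automated RAG evaluation (hypothetical helper names;
# NOT the Amazon Bedrock evaluation API). A crude "groundedness" score counts
# how many content words of an answer also appear in the retrieved context;
# real evaluation services use judge models rather than token overlap.
import re


def content_words(text: str) -> set[str]:
    """Lowercased words longer than 3 characters, a rough proxy for content terms."""
    return {w for w in re.findall(r"[a-z]+", text.lower()) if len(w) > 3}


def groundedness(answer: str, context: str) -> float:
    """Fraction of the answer's content words that are supported by the context."""
    ans, ctx = content_words(answer), content_words(context)
    return len(ans & ctx) / len(ans) if ans else 1.0


def evaluate(samples: list[dict], threshold: float = 0.5) -> list[dict]:
    """Score each (answer, context) pair and flag likely-hallucinated answers."""
    report = []
    for s in samples:
        score = groundedness(s["answer"], s["context"])
        report.append({**s, "score": score, "flagged": score < threshold})
    return report


if __name__ == "__main__":
    for row in evaluate([
        {"answer": "Redis stores vector embeddings for retrieval.",
         "context": "Redis Cloud stores and retrieves vector embeddings."},
        {"answer": "The database was invented on the moon.",
         "context": "Redis Cloud stores and retrieves vector embeddings."},
    ]):
        print(round(row["score"], 2), row["flagged"])
```

An automated loop like this (with a judge model in place of the heuristic) is what replaces the full-scale human review passes described below.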

Redis and Amazon Bedrock: an ongoing partnership

Retrieval-Augmented Generation is a cutting-edge architecture that combines domain-specific data retrieval with the generative capabilities of LLMs. Redis Cloud serves as a fast and flexible vector database for RAG, efficiently storing and retrieving the vector embeddings that supply LLMs with relevant, up-to-date information. The Redis-Bedrock integration lets developers connect LLMs from the Bedrock console directly to their Redis-powered vector database, streamlining the workflow and reducing complexity.
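The retrieval step described above can be sketched in a few lines. This toy in-memory nearest-neighbor search only illustrates the concept: the documents and embedding vectors are made up, and a real deployment would use an embedding model plus Redis's server-side vector (KNN) queries rather than a Python loop.

```python
# Toy illustration of the RAG retrieval step: rank stored documents by cosine
# similarity of their embeddings to a query embedding. Redis vector search
# performs this lookup server-side over indexed embeddings; the 3-dimensional
# vectors here are hand-made stand-ins for real embedding-model output.
import math


def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)


def top_k(query_vec: list[float], store: list[dict], k: int = 2) -> list[str]:
    """Return the texts of the k documents most similar to the query vector."""
    ranked = sorted(store, key=lambda d: cosine(query_vec, d["vec"]), reverse=True)
    return [d["text"] for d in ranked[:k]]


store = [
    {"text": "Redis Cloud stores vector embeddings.", "vec": [0.9, 0.1, 0.0]},
    {"text": "Bedrock hosts foundation models.", "vec": [0.1, 0.9, 0.0]},
    {"text": "RAG grounds LLM answers in retrieved context.", "vec": [0.6, 0.6, 0.1]},
]

# The retrieved texts would then be passed to the LLM as grounding context.
context = top_k([1.0, 0.2, 0.0], store, k=2)
```

Keeping this lookup inside Redis, as the Bedrock integration does, avoids shipping every embedding to the application just to rank a handful of results.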

Addressing the challenges of evaluating RAG systems

Despite these advancements, evaluating and diagnosing issues within RAG systems remains complex. Developers often face challenges in assessing the impact of various components, such as text chunking strategies, embedding model choices, LLM selection, and prompting techniques. Until now, full-scale human evaluations were often necessary to ensure quality and mitigate issues like model hallucinations, making the process time-consuming and expensive.

“When customers need fast and reliable vector search in production, they turn to us. However, LLMs are still prone to hallucinations,” said Manvinder Singh, VP of AI Product at Redis. “Our expanded partnership with Amazon Bedrock gives devs a powerful tool to create more accurate and trustworthy generative AI apps.”

About Redis
Redis is the world’s fastest data platform. From its open source origins in 2011 to becoming the #1 cited brand for caching solutions, Redis has helped more than 10,000 customers build, scale, and deploy the apps our world runs on. With multi-cloud and on-prem databases for caching, vector search, and more, Redis helps digital businesses set a new standard for app speed.

Headquartered in San Francisco, with offices in Austin, London, and Tel Aviv, Redis is internationally recognized as the leader in building fast apps fast. Learn more at redis.io.

Redis Media Contact
LaunchSquad
Redis@launchsquad.com