Aporia Guardrails Outperforms NeMo, GPT-4o, and GPT-3.5 in AI Hallucination Detection and Latency

New benchmark report demonstrates the capabilities of Aporia's Multi-SLM detection engine, which delivers greater accuracy in detecting AI hallucinations

SAN FRANCISCO, July 17, 2024 (GLOBE NEWSWIRE) -- Aporia, the leading AI control platform, today released its 2024 Guardrails Benchmark report, highlighting Aporia's industry-leading recall rates and markedly lower latency compared with NVIDIA NeMo, GPT-4o, and GPT-3.5. The results reflect Aporia's commitment to advancing AI deployment standards, providing organizations and development teams with a trusted solution for releasing AI applications that are both responsive and secure.

In the ever-evolving landscape of AI-driven applications, minimizing latency and maximizing accuracy are crucial for delivering seamless interactions. Aporia's Guardrails solution has undergone rigorous testing to demonstrate its real-time response capabilities. Notably, Aporia achieves an average latency of 0.34 seconds, with a 90th-percentile latency of 0.43 seconds, showcasing its efficiency in processing AI interactions with minimal delay. Additionally, although hallucinations are inherent in any Large Language Model (LLM)-based application, Aporia's advanced Multi-Small Language Model (SLM) Detection Engine achieves a 98% hallucination detection rate, significantly outperforming NeMo Guardrails and GPT-4o, which achieve 91% and 94%, respectively.

Aporia achieves remarkably low latency through its decentralized strategy, which uses multiple SLMs rather than the industry standard of relying on a single LLM. Each SLM is responsible for enforcing a different policy, such as hallucination or prompt-injection detection, which enables Aporia to distribute the workload across multiple models. In addition to reducing latency, this approach provides unparalleled reliability, as a failure in any individual model will not disrupt the entire system. Further, because SLMs are easier to debug, Aporia's method facilitates transparency and greater trust in the AI's decision-making processes. A simplified sketch of this pattern appears below.
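For illustration only, the following minimal Python sketch shows how a multi-model guardrail layer of the kind described above might dispatch policy checks to separate small models in parallel and isolate per-model failures. The function names and stub classifiers are assumptions made for the example, not Aporia's actual engine or API.

    # Hypothetical sketch of a multi-SLM guardrail dispatcher (not Aporia's API).
    from concurrent.futures import ThreadPoolExecutor

    def check_hallucination(prompt: str, answer: str) -> dict:
        # Placeholder for a small model specialized in hallucination detection.
        return {"policy": "hallucination", "violation": False}

    def check_prompt_injection(prompt: str, answer: str) -> dict:
        # Placeholder for a small model specialized in prompt-injection detection.
        return {"policy": "prompt_injection", "violation": False}

    POLICY_CHECKS = [check_hallucination, check_prompt_injection]

    def run_guardrails(prompt: str, answer: str) -> list:
        """Run each policy check on its own worker so checks execute in parallel
        and a failure in one model does not block the others."""
        results = []
        with ThreadPoolExecutor(max_workers=len(POLICY_CHECKS)) as pool:
            futures = {pool.submit(check, prompt, answer): check for check in POLICY_CHECKS}
            for future, check in futures.items():
                try:
                    results.append(future.result(timeout=1.0))  # keep latency bounded
                except Exception as exc:
                    # Isolate the failure: record it and continue with the other policies.
                    results.append({"policy": check.__name__, "error": str(exc)})
        return results

    if __name__ == "__main__":
        print(run_guardrails("What is our refund policy?", "Refunds are issued within 30 days."))

In this sketch, each policy check runs concurrently on its own worker, so overall latency is bounded by the slowest check rather than the sum of all checks, and an exception in one model is recorded without interrupting the others.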

“Aporia is focused on empowering engineers and organizations to deploy secure and reliable AI applications without compromising on performance,” said Liran Hason, CEO and Co-Founder of Aporia. “We take pride in these benchmark results, which highlight the efficiency and accuracy of our solution and showcase the company’s commitment to enhancing AI reliability. In the ever-evolving AI industry, we acknowledge the ongoing need for improvements and are dedicated to continually raising the bar for AI safety and performance.”

Beyond hallucination detection, Aporia continues to innovate with its Guardrails, integrating advanced security measures to protect sensitive data such as Personally Identifiable Information (PII), prevent prompt injections, and maintain conversation relevance.

To read the full report, please visit: https://www.aporia.com/blog/aporia-releases-2024-guardrail-benchmarks-multi-slm-detection-engine/

To learn more about Aporia Guardrails and to sign up for a free 14-day trial (no credit card needed), please visit https://www.aporia.com/.

About Aporia
Aporia is on a mission to help AI engineers deliver safe and reliable AI through the use of Guardrails. The company created a multi-SLM (Small Language Model) detection engine that provides sub-second latency, ensuring the Guardrails do not slow down the underlying AI application. The company is recognized as a Technology Pioneer by the World Economic Forum for its mission of driving Responsible AI. Trusted by Fortune 500 companies and industry leaders such as Bosch, Lemonade, Levi's, Munich RE, and Sixt, Aporia empowers organizations to deliver AI apps that are reliable, responsible, and fair. To learn more about Aporia, visit www.aporia.com.

Aporia Media Contact:
Mushkie Meyer
Headline Media
mushkie@headline.media
US: +1 914-336-4035