BOSTON, Feb. 13, 2024 (GLOBE NEWSWIRE) -- Legit Security, the leading application security posture management (ASPM) platform that enables secure application delivery, today announced the availability of the cybersecurity industry’s first AI discovery capabilities. With these new capabilities, Legit helps bridge the gap between security and development by enabling CISOs and AppSec teams to understand where and when AI code is used, and to take action to ensure proper security controls are in place without slowing software delivery.
As developers harness the power of AI and large language models (LLMs) to develop and deploy capabilities more quickly, new risks arise. For example, AI-generated code may contain unknown vulnerabilities or flaws that put the entire application at risk. In addition, AI-generated code can raise legal issues if it reproduces code subject to copyright restrictions. Another risk is improper implementation of AI features, which can lead to data exposure, such as customers bypassing prompt protections and extracting sensitive data. Despite all this, security teams rarely understand how developers use AI-generated code, resulting in security blind spots that affect both the organization and the software supply chain.
“There’s still a huge disconnect between what CISOs and their teams believe to be true and what is actually happening on the ground in development. This belief gap is particularly acute when it comes to understanding how, when, and why AI technology is used by developers,” said Dr. Gary McGraw, co-founder of the Berryville Institute of Machine Learning (BIML) and author of Software Security. “In our recent BIML publication ‘An Architectural Risk Analysis of Large Language Models’ we identified 81 LLM risks, including a critical top ten – none of which can be mitigated without a thorough understanding of where AI is used to deliver code.”
Legit’s platform enables security leaders, including CISOs, product security leaders, and security architects, to gain comprehensive visibility into risks across the development pipeline, from the infrastructure to the application layer. With a crystal-clear view of the development lifecycle, customers can ensure the code they deploy is traceable, secure, and compliant. These new AI code discovery capabilities bolster the platform by closing a significant visibility gap, allowing security teams to take preventive action, decrease the risk of legal exposure, and ensure compliance.
“AI offers huge potential to enable developers and organizations to deliver and innovate faster, but it is important to understand whether such decisions introduce risk,” said Liav Caspi, co-founder and chief technology officer at Legit Security. “Our aim is to ensure nothing stops developers from delivering, while giving security teams the confidence that they have visibility into and control over the usage of AI and LLMs. We have already helped some of our customers see where and how AI is used, which was new information for their teams.”
Legit’s AI code discovery capabilities provide a range of benefits to both security and development teams, including:
- Discovery of AI-generated code: Legit provides a full view of the development environment, including code produced by AI code-generation tools (e.g., GitHub Copilot).
- Full visibility of the dev environment: By providing a full view of the application environment, including repositories using LLMs, MLOps services, and code-generation tools, Legit’s platform offers the context necessary to understand and manage an application’s security posture.
- Security policy enforcement: Legit Security detects LLM and GenAI development and enforces organizational security policies, such as ensuring all AI-generated code gets reviewed by a human.
- Real-time notifications of GenAI code: Legit can immediately notify security teams when users install AI code generation tools, providing greater transparency and accountability.
- Protection against releasing vulnerable code: Legit’s platform provides guardrails to prevent the deployment of vulnerable code to production, including code delivered via AI tools.
- Alerts on LLM risks: Legit scans the code of LLM applications for security risks, such as prompt injection and insecure output handling.
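To make the kinds of signals involved more concrete, here is a minimal, generic sketch of how a source scan might flag AI-tooling usage and common LLM risk patterns. This is purely illustrative and does not represent Legit Security’s actual implementation; the marker strings and regular expressions are hypothetical examples chosen for this sketch.

```python
import re

# Hypothetical markers suggesting AI code-generation or LLM tooling in use.
# A real discovery product would use far richer, more reliable signals.
AI_TOOL_MARKERS = [
    r"github\.copilot",      # editor/repo settings enabling Copilot
    r"\bimport openai\b",    # direct LLM SDK usage
    r"\bfrom langchain\b",   # LLM orchestration framework
]

LLM_RISK_PATTERNS = [
    # User input interpolated straight into a prompt: prompt-injection risk.
    r"f[\"'].*\{user_input\}.*[\"']",
    # LLM output passed to eval/exec: insecure output handling.
    r"(eval|exec)\(.*response",
]

def scan_source(text: str) -> dict:
    """Return which AI-tool markers and LLM risk patterns appear in `text`."""
    return {
        "ai_tools": [p for p in AI_TOOL_MARKERS if re.search(p, text)],
        "llm_risks": [p for p in LLM_RISK_PATTERNS if re.search(p, text)],
    }

sample = (
    'import openai\n'
    'prompt = f"Summarize: {user_input}"\n'
    'exec(response.choices[0].text)\n'
)
print(scan_source(sample))
```

A pattern-matching pass like this can only surface candidates; in practice, findings would feed into policy enforcement (e.g., requiring human review of flagged code) rather than serving as a verdict on their own.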
Read a new blog from the Legit research team to learn more about important security considerations associated with GenAI applications. For more information on the importance of AI discovery, please visit the company’s blog. To learn more about the broader Legit Security platform, please visit https://www.legitsecurity.com.
About Legit Security
Legit Security provides an application security posture management platform that secures application delivery from code to cloud and protects an organization's software supply chain from attacks. The platform’s unified application security control plane and automated SDLC discovery and analysis capabilities provide visibility and security control over rapidly changing environments and prioritize security issues based on context and business criticality to improve security team efficiency and effectiveness.