Researchers have identified three critical security flaws in the LangChain and LangGraph frameworks that could allow unauthorized access to sensitive system files and private data. These vulnerabilities impact widely used tools for building AI applications, potentially exposing environment secrets and entire conversation histories to attackers.
Security researchers have recently uncovered a trio of vulnerabilities in the LangChain and LangGraph ecosystems, open-source frameworks widely used to build applications powered by large language models. With download counts in the tens of millions, the potential impact for enterprise users is significant: each of the three flaws gives attackers a distinct path to bypass security controls and extract confidential information from live deployments.
The first vulnerability, identified as CVE-2026-34070, is a path traversal flaw in how the system loads prompt templates. By submitting a specially crafted template path through the API, an attacker can trick the application into reading files on the server that should remain private. This could expose sensitive deployment artifacts such as Dockerfiles, which often reveal details of an application's internal architecture.
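The defense against this class of flaw is to resolve any user-supplied template path and verify it stays inside the template directory before reading it. The sketch below is illustrative only; the function name and directory layout are assumptions, not LangChain's actual loader API.

```python
from pathlib import Path

def is_safe_template_path(root: str, name: str) -> bool:
    """Return True only if `name` resolves to a file inside `root`.

    Hypothetical helper, not LangChain's actual API: it shows why a
    crafted name like "../../etc/passwd" resolves outside the template
    directory and must be rejected before the file is ever read.
    """
    base = Path(root).resolve()
    candidate = (base / name).resolve()
    return candidate.is_relative_to(base)

# A benign template name stays inside the root...
print(is_safe_template_path("/srv/app/templates", "greeting.txt"))      # True
# ...while a traversal payload escapes it and must be rejected.
print(is_safe_template_path("/srv/app/templates", "../../etc/passwd"))  # False
```

Checking the string prefix alone is not enough; resolving the path first (as `Path.resolve()` does) is what defeats `..` segments and symlink tricks.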
A second and more severe issue, CVE-2025-68664, concerns how LangChain deserializes untrusted data. This flaw, previously dubbed LangGrinch, lets an attacker submit crafted data structures that the system reconstructs as if they were trusted internal objects. As a result, an attacker can siphon off API keys and environment secrets, which are often the keys to an organization's broader digital infrastructure.
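The general shape of this vulnerability class is a deserializer that instantiates whatever type tag the payload names. The usual mitigation is an explicit allow-list of reconstructable types. The snippet below is a generic sketch of that pattern, not LangChain's internals; the registry and type names are invented for illustration.

```python
import json

# Allow-list of types the loader is permitted to reconstruct.
# Anything not registered here is rejected, no matter what type
# tag the attacker-controlled payload claims.
REGISTRY = {
    "Greeting": lambda d: f"Hello, {d['name']}!",
}

def safe_load(payload: str):
    """Deserialize JSON, but only rebuild allow-listed types.

    Illustrative only: the danger in the real flaw is that forged
    type tags were interpreted as trusted objects, giving access to
    secrets; an allow-list closes that door.
    """
    obj = json.loads(payload)
    kind = obj.get("type")
    if kind not in REGISTRY:
        raise ValueError(f"type not allowed: {kind!r}")
    return REGISTRY[kind](obj)

print(safe_load('{"type": "Greeting", "name": "Ada"}'))  # Hello, Ada!
```

A payload claiming an unregistered type, such as `{"type": "Secret"}`, raises an error instead of being quietly reconstructed.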
The third vulnerability, CVE-2025-67644, targets LangGraph’s SQLite checkpointing system through an SQL injection vector. Attackers can manipulate metadata filter keys to run unauthorized queries against the database used to store application states. This specific exploit is particularly concerning because it grants access to the conversation histories of users, potentially revealing proprietary or personal information discussed during AI interactions.
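Injection through filter *keys* is notable because SQL placeholders can only bind values, not identifiers, so column names interpolated into a query must be validated separately. The sketch below uses a hypothetical checkpoint table, not LangGraph's actual schema, to show the allow-list-plus-parameters pattern.

```python
import sqlite3

# Hypothetical filter columns for a checkpoint store; the real
# LangGraph schema may differ. Keys outside this set are rejected
# because identifiers cannot be bound as SQL parameters.
ALLOWED_KEYS = {"thread_id", "step"}

def build_filter(filters: dict) -> tuple[str, list]:
    """Turn a metadata filter dict into a WHERE clause and params."""
    clauses, params = [], []
    for key, value in filters.items():
        if key not in ALLOWED_KEYS:  # blocks keys like "1=1; --"
            raise ValueError(f"illegal filter key: {key!r}")
        clauses.append(f"{key} = ?")  # key is allow-listed, value is bound
        params.append(value)
    return " AND ".join(clauses), params

# Demonstration against an in-memory database.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE checkpoints (thread_id TEXT, step INTEGER, state TEXT)")
conn.execute("INSERT INTO checkpoints VALUES ('t-42', 3, 'hello')")
where, params = build_filter({"thread_id": "t-42", "step": 3})
row = conn.execute(f"SELECT state FROM checkpoints WHERE {where}", params).fetchone()
print(row)  # ('hello',)
```

The vulnerable pattern, by contrast, would interpolate attacker-supplied keys directly into the query string, letting a malicious key rewrite the statement.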
The discovery of these flaws highlights the evolving security challenges associated with agentic AI workflows and the frameworks that support them. Organizations using these tools are encouraged to review their deployments and apply necessary patches to protect their filesystem data and secret credentials. Successful exploitation of these vulnerabilities would grant an attacker deep insight into an enterprise’s internal operations and sensitive AI-driven workflows.
Source: https://www.cyera.com/research/langdrained-3-paths-to-your-data-through-the-worlds-most-popular-ai-framework



