A critical vulnerability in the LangChain Core package allows attackers to use prompt injection to smuggle crafted payloads into serialized data, triggering malicious object instantiation when that data is later deserialized. The flaw, dubbed LangGrinch, puts sensitive secrets and system integrity at risk by causing the framework to misinterpret user-controlled data as legitimate internal objects.
LangChain Core serves as the foundational Python library for building applications powered by large language models, providing the essential tools and interfaces used by developers globally. A severe security flaw identified as CVE-2025-68664 was recently discovered within this core framework, carrying a high criticality rating. The issue was brought to light by security researcher Yarden Porat, who found that the library's internal handling of data objects could be tricked into executing unintended actions through a process known as serialization injection.
The technical root of the problem lies in how the framework processes specific data structures through its primary serialization functions. When the system encounters a dictionary containing a specific internal key, it assumes the data is a trusted LangChain object rather than simple text or user input. Because the software failed to properly filter or escape these keys when they originated from external sources, an attacker could craft a malicious input that the system would later treat as a valid instruction. This allows for the creation of unsafe objects when the data is eventually reloaded by the application.
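To make the failure mode concrete, the sketch below reproduces the general pattern in miniature. It does not use LangChain's actual serialization code; the MARKER key, InternalObject class, and naive_dump/naive_load helpers are hypothetical stand-ins for a serializer that writes user dictionaries verbatim and a loader that treats anything carrying the marker key as a trusted constructor.

```python
import json

# Hypothetical stand-ins for illustration only; LangChain's real serializer
# and marker format differ in detail.
MARKER = "lc"

class InternalObject:
    """Stand-in for a framework class the loader is allowed to construct."""
    def __init__(self, **kwargs):
        self.kwargs = kwargs

REGISTRY = {"InternalObject": InternalObject}

def naive_dump(value) -> str:
    # Vulnerable pattern: user-supplied dictionaries are written verbatim,
    # with no escaping of the marker key.
    return json.dumps(value)

def naive_load(blob: str):
    data = json.loads(blob)
    # Anything carrying the marker key is treated as a trusted constructor.
    if isinstance(data, dict) and data.get(MARKER) == 1:
        cls = REGISTRY[data["id"]]
        return cls(**data.get("kwargs", {}))
    return data

# Attacker-controlled "plain data" that mimics the internal envelope...
user_input = {"lc": 1, "id": "InternalObject", "kwargs": {"token": "steal-me"}}

# ...is revived as a live framework object when the data is reloaded.
obj = naive_load(naive_dump(user_input))
print(type(obj).__name__, obj.kwargs)  # InternalObject {'token': 'steal-me'}
```

A correct serializer would escape or wrap the marker key whenever a dictionary came from outside the framework, so the loader could no longer confuse attacker-supplied data with its own output.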
This vulnerability is particularly dangerous because it can be triggered indirectly through prompt injection. An attacker does not need direct access to the underlying code; they only need to supply an input that the language model processes. If that model output is later saved or tracked as metadata within the LangChain environment, the malicious structure becomes embedded in the stored data. Once the framework reads that metadata back, it inadvertently activates the injected object, potentially leading to the exposure of environment variables or the execution of unauthorized templates.
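The indirect path can be sketched in the same spirit. The TemplateRenderer class and the record_run/read_run functions below are hypothetical, not LangChain APIs; they only illustrate how attacker-shaped model output, once persisted as metadata, fires on the read-back path rather than at the point of injection.

```python
import json
import os

class TemplateRenderer:
    """Stand-in for a namespace class whose __init__ has observable side effects."""
    def __init__(self, template: str = ""):
        # e.g. expanding environment variables while rendering a template
        print("rendered:", os.path.expandvars(template))

REGISTRY = {"TemplateRenderer": TemplateRenderer}

def record_run(model_output: dict) -> str:
    # Step 1: structured model output is saved as run metadata, verbatim.
    return json.dumps({"metadata": model_output})

def read_run(blob: str):
    # Step 2: the metadata is later read back by a loader that trusts the marker key.
    def revive(node):
        if isinstance(node, dict):
            if node.get("lc") == 1 and node.get("id") in REGISTRY:
                return REGISTRY[node["id"]](**node.get("kwargs", {}))
            return {key: revive(value) for key, value in node.items()}
        return node
    return revive(json.loads(blob))

# An injected prompt steers the model into emitting this structure as "output"...
attacker_shaped_output = {
    "lc": 1,
    "id": "TemplateRenderer",
    "kwargs": {"template": "leaked: ${HOME}"},
}

# ...and the object is instantiated when the run is read back, not when it was injected.
read_run(record_run(attacker_shaped_output))
```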
Beyond simple data theft, the flaw allows for the instantiation of various classes within the trusted LangChain namespace. While it does not permit the loading of entirely arbitrary external code, it opens up numerous paths for an attacker to manipulate the application’s logic or trigger side effects during the initialization of these classes. This creates a scenario where the very tools meant to manage and orchestrate the behavior of an artificial intelligence can be turned into a vehicle for compromising the entire system.
The scale of this issue is significant because LangChain Core is used in hundreds of millions of installations worldwide. Since the vulnerability exists in the central component of the ecosystem rather than an optional add-on, almost any application built on the framework could be susceptible if it handles model-generated metadata. Security experts urge all developers to move to the patched versions of the software immediately to close this gap. Updates have been released for both the 0.3 and 1.2 branches of the package to resolve the escaping errors and secure the serialization process.
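Until an upgrade lands, a defensive escaping step of the kind the patches introduce can be approximated in application code. The helper below is a hypothetical sketch, not the actual fix shipped in langchain-core: it renames the marker key in any data that originated from users or model output before that data is stored alongside trusted serialized objects.

```python
def escape_untrusted(node):
    """Hypothetical sketch of the escaping idea: rename the marker key in
    untrusted data so a loader cannot mistake it for a framework object."""
    if isinstance(node, dict):
        return {
            ("__escaped_lc__" if key == "lc" else key): escape_untrusted(value)
            for key, value in node.items()
        }
    if isinstance(node, list):
        return [escape_untrusted(item) for item in node]
    return node

# Example: a model-generated dict that mimics the internal envelope is
# neutralized before it is persisted as run metadata.
suspicious = {"lc": 1, "id": ["langchain", "something"], "kwargs": {}}
print(escape_untrusted(suspicious))
# {'__escaped_lc__': 1, 'id': ['langchain', 'something'], 'kwargs': {}}
```

The patched releases apply this kind of escaping inside the serialization functions themselves, which is why upgrading remains the actual remediation rather than application-side workarounds.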
Source: LangChain Core Vulnerability Enables Prompt Injection And Data Exposure