LangChain security flaws expose API keys and enable code execution

By Craig Nash
AI-powered tech writer covering artificial intelligence, chips, and computing.

LangChain security vulnerabilities have exposed a dangerous pattern: three separate high-severity and critical flaws, each targeting a different class of enterprise data, have been discovered and patched in the widely used open-source LLM framework. The framework, which powers over one million users and has accumulated 81,000 GitHub stars, now faces urgent pressure to patch deployments before attackers weaponize these holes at scale.

Key Takeaways

  • Three high-severity and critical LangChain vulnerabilities, the most recent patched in December 2025, expose API keys and database credentials and enable remote code execution.
  • CVE-2025-68664 (“LangGrinch”) affects hundreds of millions of langchain-core installations via unsafe serialization in dumps() and dumpd() functions.
  • Directory traversal flaw CVE-2024-28088 allows attackers to bypass GitHub repository restrictions and access sensitive chain configurations.
  • Prompt injection vectors enable attackers to leak environment variables, LLM API keys, and trigger arbitrary code execution via Jinja2.
  • Fixes shipped in langchain-core versions 0.1.29, 0.3.81, and 1.2.5 require immediate rollout across enterprise environments.

The “LangGrinch” Flaw: Serialization Gone Wrong

CVE-2025-68664, discovered by researcher Yarden Porat at Cyata on December 4, 2025, represents the most dangerous of the three LangChain security vulnerabilities. The flaw carries a CVSS score of 9.3—critical severity—and lurks in the langchain-core Python package’s dumps() and dumpd() serialization functions, which fail to properly escape user-controlled “lc” keys during object serialization and deserialization. The result is catastrophic: attackers can inject malicious code that instantiates unsafe arbitrary objects, potentially triggering multiple attack paths simultaneously.

The mechanics are subtle but devastating. Once an attacker injects an “lc” key into serialized content, the deserialization process treats it as a trusted instruction, bypassing safety checks. Porat explained the attack chain: “So once an attacker is able to make a LangChain orchestration loop serialize and later deserialize content including an ‘lc’ key, they would instantiate an unsafe arbitrary object, potentially triggering many attacker-friendly paths.” This enables prompt injection attacks that leak environment variables, database credentials, and LLM API keys—secrets that sit in memory during normal application execution.
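In simplified terms, the bug class looks like the sketch below. This is not LangChain's actual implementation; it is a hypothetical illustration of how a deserializer that trusts a reserved marker key (here "lc") can be steered once user-controlled data carries the same key. The `SAFE_CONSTRUCTORS` registry and `naive_loads` are invented names for illustration.

```python
import json

# Hypothetical registry of constructors the deserializer is willing to build.
SAFE_CONSTRUCTORS = {"Prompt": lambda kwargs: f"Prompt({kwargs})"}

def naive_loads(payload: str):
    """Deserialize, treating any dict with an 'lc' marker as a trusted
    object descriptor -- the flawed pattern this bug class exploits."""
    def hook(obj):
        if "lc" in obj:  # marker key is trusted without checking provenance
            ctor = SAFE_CONSTRUCTORS.get(obj.get("type"))
            if ctor is None:
                raise ValueError(f"unsafe constructor requested: {obj.get('type')}")
            return ctor(obj.get("kwargs", {}))
        return obj
    return json.loads(payload, object_hook=hook)

# User-controlled text that merely *contains* an 'lc' key becomes
# indistinguishable from a framework-emitted object descriptor once it
# round-trips through serialization, so it reaches the instantiation path.
attacker_input = '{"user_message": {"lc": 1, "type": "Jinja2Template", "kwargs": {}}}'
try:
    naive_loads(attacker_input)
except ValueError as e:
    print(e)  # attacker-controlled data reached the object-instantiation path
```

The fix class is the one the patch takes: escape user-controlled keys during serialization so they can never collide with the framework's reserved marker.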

What makes CVE-2025-68664 particularly alarming is its reach. The langchain-core package reportedly sees hundreds of millions of installs, meaning a single unpatched deployment could expose credentials across entire enterprises. The vulnerability also opens pathways to remote code execution via Jinja2 template injection, allowing attackers to execute arbitrary commands on the host system.

Directory Traversal and Hub Bypass: CVE-2024-28088

Before “LangGrinch” made headlines, CVE-2024-28088 exposed a simpler but equally dangerous flaw in LangChain versions through 0.1.10. The vulnerability exists in the load_chain function’s path parameter, which fails to validate directory traversal sequences like “../”. Attackers exploit this to bypass restrictions on the langchain-hub GitHub repository, accessing chain configurations and API keys that should remain isolated.

The exploit is straightforward: by crafting a path parameter with traversal sequences, an attacker can step outside the intended directory and read arbitrary files. This could lead to API key disclosure or, in some configurations, remote code execution when combined with other weaknesses. The flaw earned an Exploit Prediction Scoring System (EPSS) score of 10.69%, placing it in the 93rd percentile for exploitability—meaning real-world attacks are likely already underway. The patch arrived in langchain-core 0.1.29, but organizations running older versions remain exposed.
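The conventional defense against this flaw class is to resolve the requested path and verify it still sits under the intended base directory. The sketch below uses only the standard library; `resolve_safely` and the base directory are hypothetical names, not LangChain's actual API.

```python
from pathlib import Path

BASE_DIR = Path("/opt/app/hub-configs")  # hypothetical allowed root

def resolve_safely(user_path: str) -> Path:
    """Reject any path that escapes BASE_DIR after resolving '..' segments."""
    candidate = (BASE_DIR / user_path).resolve()
    if not candidate.is_relative_to(BASE_DIR.resolve()):
        raise PermissionError(f"path traversal blocked: {user_path}")
    return candidate

print(resolve_safely("chains/qa.json"))   # allowed: stays under BASE_DIR
try:
    resolve_safely("../../etc/passwd")    # the traversal pattern behind CVE-2024-28088
except PermissionError as e:
    print(e)
```

The key design point is checking the path *after* resolution: validating the raw string for "../" substrings can be bypassed with encodings or symlinks, whereas a resolved-path containment check cannot.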

The Broader Threat: Prompt Injection as a Data Exfiltration Vector

LangChain security vulnerabilities are not isolated incidents—they reflect a systemic risk in how large language model frameworks handle untrusted input. A third critical flaw, CVE-2024-10940, affects langchain-core versions spanning 0.1.17 through 0.3.15 across multiple release branches, allowing unauthorized users to read sensitive data. Combined with the prompt injection vectors in CVE-2025-68664, attackers now have multiple entry points to extract secrets from running applications.

The danger amplifies in production environments where LangChain orchestrates workflows between multiple services—databases, APIs, cloud platforms. A single prompt injection payload can traverse the entire stack, harvesting credentials and configuration data. Researchers have also documented earlier prompt injection flaws (CVE-2023-44467, CVSS 9.8) that enable RCE via unsafe __import__ calls, demonstrating that LangChain’s security model has struggled with input validation for years.
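The Jinja2 risk follows the same logic as any template engine that evaluates attacker-supplied syntax. To keep the sketch dependency-free, the example below illustrates the injection class with Python's built-in `str.format`, where a crafted placeholder can walk object attributes and read data the template author never intended to expose; the `Secret` class is hypothetical.

```python
class Secret:
    """Hypothetical object holding a credential in a 'private' attribute."""
    def __init__(self):
        self._api_key = "sk-live-123456"

def render(template: str, user: Secret) -> str:
    # Rendering an untrusted template string is the core mistake: the
    # template language, not the developer, decides what gets read.
    return template.format(user=user)

# A benign template behaves as expected...
print(render("Hello, your object is {user}", Secret()))
# ...but a malicious one walks attributes to exfiltrate the secret.
print(render("{user._api_key}", Secret()))
```

In Jinja2 the equivalent mitigation is rendering untrusted input only as template *data*, never as the template itself, and using `jinja2.sandbox.SandboxedEnvironment` as defense in depth.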

Patches Available—But Adoption Is the Real Challenge

LangChain maintainers released patches in langchain-core versions 0.1.29, 0.3.81, and 1.2.5, addressing all three vulnerabilities. Since LangChain is open-source and free, the barrier to patching is not cost—it is coordination. Organizations using LangChain in production must identify all deployments, test patches in staging environments, and roll out updates without disrupting AI applications that may be handling sensitive workflows 24/7.

The urgency is real. Unlike traditional software vulnerabilities that require social engineering or network access, these LangChain security vulnerabilities can be triggered by attackers who have only basic knowledge of how the framework serializes and deserializes data. A malicious prompt submitted through a chatbot interface, a poisoned API response, or a crafted integration could activate the exploit chain. Enterprises relying on LangChain for customer-facing AI applications face immediate risk.

How LangChain Compares to the Broader AI Framework Landscape

LangChain’s vulnerabilities highlight a critical gap in how emerging AI frameworks prioritize feature velocity over security. The framework’s popularity—one million users, 81,000 GitHub stars—means these flaws affect a massive surface area. Other LLM orchestration platforms face similar risks, but LangChain’s modular architecture, which encourages integrations with external tools and APIs, creates more opportunities for credential leakage if serialization is not bulletproof. LangSmith, LangChain’s debugging platform, compounds the risk if integrated with unsafe tools, as sensitive data flows through additional layers.

What Should Organizations Do Right Now?

The response is straightforward but demanding. Teams using LangChain must immediately audit their deployments, identify which versions are running, and prioritize patching production systems. The patches are backward-compatible, so upgrading to langchain-core 0.1.29, 0.3.81, or 1.2.5 should not break existing applications. Testing in staging environments is essential to verify compatibility before rolling out to production.
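A first-pass audit can be scripted: compare each deployment's langchain-core version against the patched releases listed above. The helper below is a minimal sketch using plain tuple comparison; the patched-version table mirrors this article, and feeding it versions from your own inventory is assumed.

```python
# Patched floors per release line, as reported for these CVEs.
PATCHED = {(0, 1): (0, 1, 29), (0, 3): (0, 3, 81), (1, 2): (1, 2, 5)}

def parse(version: str) -> tuple:
    # Handles plain "X.Y.Z" versions only; pre-release tags need a real parser.
    return tuple(int(part) for part in version.split("."))

def is_patched(version: str) -> bool:
    """True if the version meets the patched floor for its release line."""
    v = parse(version)
    floor = PATCHED.get(v[:2])
    if floor is None:
        # Release line not in the table: flag for manual review.
        return False
    return v >= floor

for installed in ["0.1.10", "0.3.81", "1.2.4"]:
    print(installed, "patched" if is_patched(installed) else "VULNERABLE")
```

For production use, a library such as `packaging` handles pre-release and post-release version ordering correctly; the sketch deliberately treats unknown release lines as unpatched so nothing slips through silently.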

Beyond patching, organizations should implement input validation and sanitization for all prompts and user-controlled data flowing into LangChain applications. Treat serialized data as untrusted, even when it originates from internal systems. Monitor for suspicious deserialization patterns and unusual object instantiation in application logs. These defenses will not eliminate risk entirely, but they reduce the blast radius if a zero-day emerges before the next patch cycle.
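"Treat serialized data as untrusted" can be made concrete with a pre-deserialization scan. The recursive check below flags any payload carrying the reserved "lc" marker before it reaches a deserializer. It is a hypothetical defense-in-depth sketch, not a substitute for upgrading, and `guarded_loads` is an invented name.

```python
import json

def contains_marker(node, marker: str = "lc") -> bool:
    """Recursively check parsed JSON for a reserved marker key."""
    if isinstance(node, dict):
        return marker in node or any(contains_marker(v, marker) for v in node.values())
    if isinstance(node, list):
        return any(contains_marker(item, marker) for item in node)
    return False

def guarded_loads(payload: str):
    """Parse untrusted JSON, refusing payloads that smuggle the marker key."""
    data = json.loads(payload)
    if contains_marker(data):
        raise ValueError("refusing payload with reserved 'lc' key from untrusted source")
    return data

print(guarded_loads('{"question": "what is retrieval-augmented generation?"}'))
```

A rejection at this boundary is also a natural event to log, which covers the monitoring recommendation above: a spike in refused payloads is an early signal that someone is probing the deserialization path.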

FAQ

What is CVE-2025-68664 and why is it critical?

CVE-2025-68664, also called “LangGrinch,” is a critical flaw (CVSS 9.3) in langchain-core’s serialization functions that allows attackers to inject malicious code via “lc” keys, leaking API keys, database credentials, and enabling remote code execution. It affects hundreds of millions of installations globally.

Which LangChain versions are vulnerable to these flaws?

CVE-2024-28088 affects LangChain through 0.1.10; CVE-2025-68664 affects langchain-core broadly; CVE-2024-10940 affects versions 0.1.17 through 0.3.15. Patches are available in langchain-core 0.1.29, 0.3.81, and 1.2.5.

Do I need to pay for LangChain security patches?

No. LangChain is open-source and free, and patches are available immediately. The cost is in testing and deployment effort, not licensing.

LangChain security vulnerabilities expose a hard truth: frameworks built for speed and flexibility often compromise on security until attackers prove the risks are real. The patches are out. The question now is whether organizations will treat them as urgent or defer updates until the next quarterly maintenance window. History suggests many will choose the latter—and regret it when the first breach notification arrives.

This article was written with AI assistance and editorially reviewed.

Source: TechRadar
