---
title: Critical CVE-2025-68664 Vulnerability in LangChain Core: What You Need to Know
description: Learn about CVE-2025-68664 in LangChain Core, its security risks, and defensive strategies to secure AI applications.
url: https://ziosec.com/blog/critical-cve-2025-68664-vulnerability-in-langchain-core-what-you-need-to-know
category: Feed
publishedAt: 2026-01-05
author: ZioAI
authorRole: Research
tags: CVE-2025-68664, LangChain, AI Security, Cybersecurity, Vulnerabilities, Software Patching, Serialization Issues, Threat Intelligence, Defensive Techniques
---

In December 2025, a critical vulnerability, designated as **CVE-2025-68664**, was identified in **LangChain Core**, a pivotal framework for developing generative AI applications. This vulnerability, discovered by security researcher **Yarden Porat** from Cyata, could have severe implications for the security of AI-driven applications that rely on LangChain Core. The core issue arises from improper handling of serialization and deserialization processes, which could enable attackers to execute arbitrary code under certain conditions.

## Understanding LangChain Core

LangChain Core is the foundational package underpinning LangChain, a widely used framework for building applications with large language models (LLMs). It allows developers to create AI applications that can understand and generate human-like text based on input data. Given its widespread use across AI implementations, the security of LangChain Core is vital, and the discovery of CVE-2025-68664 underscores the need for secure coding practices within the framework.

## Detailing CVE-2025-68664

The **CVE-2025-68664** vulnerability originates in LangChain Core's serialization mechanisms. Specifically, the serialization process mishandles dictionaries containing an `lc` marker, which are used to represent LangChain objects. The vulnerable `dumps()` and `dumpd()` functions fail to escape user-controlled dictionaries that include the reserved `lc` key. This oversight allows attackers to craft malicious dictionaries that, once deserialized, may instantiate arbitrary unsafe objects, leading to serious security breaches.
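To make the marker collision concrete, here is a minimal, self-contained sketch in plain Python. This is not LangChain's actual implementation: the marker name follows the advisory, and the escaping wrapper is a hypothetical illustration of the class of fix.

```python
# Illustrative sketch (not LangChain's code): why an unescaped reserved
# marker is dangerous. Serialized objects are plain dicts tagged with an
# "lc" key; if user data containing that key is dumped without escaping,
# the loader cannot tell data apart from object-construction instructions.

RESERVED_MARKER = "lc"

def looks_like_serialized_object(value):
    """A loader following this convention treats any dict carrying the
    reserved marker as an instruction to instantiate an object."""
    return isinstance(value, dict) and RESERVED_MARKER in value

# Benign-looking user data that happens to use the reserved key:
user_supplied = {"lc": 1, "type": "constructor", "id": ["some", "Class"]}

# Without escaping, a loader would try to *instantiate* this dict:
assert looks_like_serialized_object(user_supplied)

# The fix, in spirit: escape user-controlled dicts on the way in so the
# marker is no longer at the top level (hypothetical escaping scheme):
def escape_user_dict(value):
    if looks_like_serialized_object(value):
        return {"escaped": value}  # wrapper key is illustrative only
    return value

assert not looks_like_serialized_object(escape_user_dict(user_supplied))
```

The point of the sketch is the ambiguity itself: once attacker data and serializer output share the same shape, only explicit escaping restores the boundary between the two.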

## Security Implications and Vulnerabilities

The impact of this vulnerability spans both the vectors through which it can be exploited and the consequences of a successful attack.

### Exploit Scenarios

Attackers can exploit this vulnerability through multiple vectors:

*   **Event Streaming and Logging:** Malicious data can be injected into fields such as `additional_kwargs` or `response_metadata`, which are serialized and later deserialized during normal operations such as event streaming and logging.
*   **Message History and Caches:** Serialized message histories or cached data may be abused to trigger the deserialization of malicious objects, producing unintended behavior.
*   **Prompt Injection:** Attackers may manipulate LLM outputs to influence fields that are serialized and deserialized, increasing the likelihood of executing malicious code.

These exploitation techniques can lead to severe consequences, including:

*   **Secret Extraction:** Unauthorized access to environment variables, especially when deserializing with `secrets_from_env=True`, which was the default setting before this advisory.
*   **Arbitrary Code Execution:** Under certain conditions, LangChain object instantiation can result in arbitrary code execution, posing a significant risk to system integrity.
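The secret-extraction risk can be illustrated with a hedged sketch of a naive loader that, like the pre-advisory `secrets_from_env=True` default, resolves secret nodes from the process environment. The function name and node layout here are illustrative, not LangChain's real API.

```python
# Hedged sketch: a deserializer that resolves "secret" nodes from the
# environment by default. An attacker who controls the serialized payload
# chooses which environment variable gets read back out.
import os

def naive_load(node, secrets_from_env=True):
    if isinstance(node, dict) and node.get("lc") and node.get("type") == "secret":
        key = node["id"][-1]
        if secrets_from_env:
            return os.environ.get(key)  # attacker-chosen env var is read
        raise ValueError(f"secret {key!r} must be provided explicitly")
    return node  # non-secret nodes pass through unchanged

os.environ["OPENAI_API_KEY"] = "sk-demo"  # stand-in secret for the demo
malicious = {"lc": 1, "type": "secret", "id": ["OPENAI_API_KEY"]}

# The permissive default leaks the variable named by the attacker:
assert naive_load(malicious) == "sk-demo"
```

With `secrets_from_env=False`, the same payload raises instead of silently leaking, which is the behavior the advisory's mitigation pushes toward.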

## Threat Intelligence and Indicators

### Recognizing Exploitation Patterns

Indicators of potential exploitation of CVE-2025-68664 include:

*   **Unexpected Deserialization Behavior:** Instances where deserialization processes instantiate objects that are not explicitly allowed by the framework's allowlist.
*   **Unauthorized Access to Secrets:** Detection of unauthorized access patterns to sensitive environment variables or other critical data.
*   **Unusual System Behavior:** Anomalies such as unexpected network calls, file operations, or side effects during object instantiation.
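As a starting point for hunting these indicators, a simple scanner can walk untrusted serialized payloads (for example, logged event-stream data) and flag any dictionary carrying the reserved `lc` marker whose identifier is not explicitly allowlisted. The allowlist entry below is an example, not LangChain's official allowlist.

```python
# Hedged detection sketch: recursively flag dicts tagged with the reserved
# "lc" marker whose "id" path is not on an explicit allowlist.
import json

ALLOWED_IDS = {
    ("langchain_core", "messages", "ai", "AIMessage"),  # example entry
}

def find_suspect_nodes(payload):
    """Return identifier tuples of unexpected "lc"-tagged dicts."""
    suspects = []
    def walk(node):
        if isinstance(node, dict):
            if "lc" in node:
                ident = tuple(node.get("id", []))
                if ident not in ALLOWED_IDS:
                    suspects.append(ident)
            for value in node.values():
                walk(value)
        elif isinstance(node, list):
            for item in node:
                walk(item)
    walk(json.loads(payload) if isinstance(payload, str) else payload)
    return suspects

doc = {"additional_kwargs": {"lc": 1, "type": "constructor",
                             "id": ["os", "system"]}}
assert find_suspect_nodes(doc) == [("os", "system")]
```

A scanner like this will not catch every variant, but an unexpected `lc`-tagged dict inside user-influenced fields is exactly the anomaly the indicators above describe.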

## Defensive Recommendations

To mitigate the risks associated with **CVE-2025-68664**, organizations should implement the following proactive measures:

*   **Immediate Patching:** Upgrade LangChain Core to the latest patched versions (1.2.5 and 0.3.81) to address this vulnerability.
*   **Review Serialization Practices:** Ensure that serialization and deserialization processes properly escape user-controlled data, especially when handling reserved keys like `lc`.
*   **Enhance Input Validation:** Implement robust validation mechanisms to detect and prevent malicious data from influencing serialization processes.
*   **Monitor for Indicators of Compromise:** Regularly audit systems for signs of exploitation, including unauthorized access to secrets or unexplained system behaviors.
*   **Update Security Configurations:** Harden default settings, for example by setting `secrets_from_env=False`, to reduce the risk of secret extraction during deserialization.
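Several of these recommendations can be combined in a small hardening wrapper. The sketch below assumes a loader accepting the `secrets_map` and `secrets_from_env` keywords referenced in the advisory; `fake_loader` is a stand-in for demonstration, not a real LangChain function.

```python
# Hedged hardening sketch: route all deserialization through a wrapper that
# disables environment-based secret resolution and supplies secrets only
# through an explicit map.
def safe_load(payload, load_fn, secrets_map=None):
    return load_fn(
        payload,
        secrets_map=secrets_map or {},   # explicit secrets only
        secrets_from_env=False,          # never read os.environ implicitly
    )

# Stand-in loader used to demonstrate the wrapper's calling convention:
def fake_loader(payload, *, secrets_map, secrets_from_env):
    assert secrets_from_env is False  # the wrapper enforces this
    return payload

assert safe_load({"ok": True}, fake_loader) == {"ok": True}
```

Centralizing deserialization behind one such entry point also gives a single place to plug in allowlist checks and logging as the patched versions evolve.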

## Conclusion

The discovery of **CVE-2025-68664** highlights the paramount importance of secure serialization practices within AI frameworks such as LangChain Core. Organizations utilizing this framework must prioritize implementing comprehensive security measures to mitigate potential exploits. By proactively addressing this vulnerability, enterprises can uphold the integrity and trustworthiness of their AI-driven applications.

For a more in-depth analysis, including technical insights and recommendations, refer to the original advisory published by **Cyata**.

_Note: This article is based on the findings and recommendations provided by Cyata's research team. For the latest information and updates, consult official sources._