CVE-2026-24779
vLLM vulnerable to Server-Side Request Forgery (SSRF) in `MediaConnector`
Description
vLLM is an inference and serving engine for large language models (LLMs). Prior to version 0.14.1, a Server-Side Request Forgery (SSRF) vulnerability exists in the `MediaConnector` class within the vLLM project's multimodal feature set. The `load_from_url` and `load_from_url_async` methods obtain and process media from user-supplied URLs, using different Python parsing libraries when restricting the target host. These two parsing libraries interpret backslashes differently, which allows the hostname restriction to be bypassed and enables an attacker to coerce the vLLM server into making arbitrary requests to internal network resources. This vulnerability is particularly critical in containerized environments like `llm-d`, where a compromised vLLM pod could be used to scan the internal network, interact with other pods, and potentially cause denial of service or access sensitive data. For example, an attacker could make the vLLM pod send malicious requests to an internal `llm-d` management endpoint, leading to system instability by falsely reporting metrics like the KV cache state. Version 0.14.1 contains a patch for the issue.
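The bypass described above hinges on two URL parsers disagreeing about where the host ends once a backslash appears. The sketch below is a minimal, generic illustration of that class of parser differential, using Python's `urllib.parse` for the allowlist check; the allowlist and hostnames are hypothetical, and the exact parsers used by `MediaConnector` are not reproduced here.

```python
from urllib.parse import urlparse

# Hypothetical allowlist, used only for illustration (not vLLM's actual code).
ALLOWED_MEDIA_HOSTS = {"media.example.com"}

# The backslash before "@" is not an authority delimiter for urllib.parse,
# but WHATWG-style URL parsers (browsers and some HTTP client libraries)
# treat "\" like "/" in http(s) URLs and end the authority there.
url = "http://169.254.169.254\\@media.example.com/image.png"

parsed = urlparse(url)
print(parsed.hostname)                         # media.example.com
print(parsed.hostname in ALLOWED_MEDIA_HOSTS)  # True: the restriction check passes

# A WHATWG-style fetcher parses the same string as
#   host = 169.254.169.254, path = /@media.example.com/image.png
# so the actual request is sent to the internal address, bypassing the check.
```

Fixes for this class of bug generally parse the URL once and enforce the host restriction on the same representation the HTTP client will use when making the request.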
INFO
Published Date: Jan. 27, 2026, 10:15 p.m.
Last Modified: Jan. 27, 2026, 10:15 p.m.
Remotely Exploitable: Yes
Source: [email protected]
Affected Products
The following products are affected by the CVE-2026-24779 vulnerability. Even where cvefeed.io is aware of the exact affected versions, that information is not represented in the table below.
No affected product recorded yet.
CVSS Scores
| Score | Version | Severity | Vector | Exploitability Score | Impact Score | Source |
|---|---|---|---|---|---|---|
| | CVSS 3.1 | HIGH | AV:N/AC:L/PR:L/UI:N/S:U/C:H/I:N/A:L | | | [email protected] |
| | CVSS 3.1 | HIGH | | | | MITRE-CVE |
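The numeric base score is not recorded above, but it can be derived from the CVSS v3.1 vector published in the change history (AV:N/AC:L/PR:L/UI:N/S:U/C:H/I:N/A:L) using the standard CVSS v3.1 equations. The sketch below is a simplified calculation for this single vector, not a general CVSS implementation.

```python
import math

# CVSS v3.1 metric weights for AV:N/AC:L/PR:L/UI:N/S:U/C:H/I:N/A:L
# (scope unchanged), taken from the CVSS v3.1 specification.
av, ac, pr, ui = 0.85, 0.77, 0.62, 0.85  # Network / Low / Low (scope unchanged) / None
c, i, a = 0.56, 0.0, 0.22                # Confidentiality High / Integrity None / Availability Low

def roundup(x: float) -> float:
    # Simplified CVSS "Roundup": smallest one-decimal value >= x.
    return math.ceil(x * 10) / 10

iss = 1 - (1 - c) * (1 - i) * (1 - a)    # impact sub-score
impact = 6.42 * iss                      # scope unchanged
exploitability = 8.22 * av * ac * pr * ui
base = 0.0 if impact <= 0 else roundup(min(impact + exploitability, 10.0))

print(round(exploitability, 1), round(impact, 1), base)  # ~2.8  ~4.2  7.1 (HIGH)
```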
Solution
- Update vLLM to version 0.14.1 or newer.
- Review and restrict external media sources (see the sketch after this list).
- Implement network segmentation.
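Upgrading to 0.14.1 or newer is the primary fix; until that is possible, the second recommendation can be enforced in front of any media-fetching code. The sketch below is a generic pre-fetch check under assumed names (it is not vLLM's configuration or implementation, and `ALLOWED_MEDIA_HOSTS` is hypothetical); it rejects backslashes outright so the parser ambiguity shown earlier never reaches the URL parser, and it refuses hosts that resolve to private addresses.

```python
import ipaddress
import socket
from urllib.parse import urlparse

# Hypothetical allowlist of external media hosts.
ALLOWED_MEDIA_HOSTS = {"media.example.com"}

def is_safe_media_url(url: str) -> bool:
    """Generic pre-fetch check: one parse, no backslash ambiguity, no private targets."""
    # Reject characters that WHATWG-style parsers may reinterpret as delimiters.
    if "\\" in url:
        return False
    parsed = urlparse(url)
    if parsed.scheme not in {"http", "https"}:
        return False
    host = parsed.hostname
    if host is None or host not in ALLOWED_MEDIA_HOSTS:
        return False
    # Resolve the host and reject private, loopback, or link-local targets.
    try:
        infos = socket.getaddrinfo(host, None)
    except socket.gaierror:
        return False
    for info in infos:
        addr = ipaddress.ip_address(info[4][0].split("%")[0])
        if addr.is_private or addr.is_loopback or addr.is_link_local:
            return False
    return True

print(is_safe_media_url("http://169.254.169.254\\@media.example.com/img.png"))  # False: backslash rejected
print(is_safe_media_url("https://media.example.com/img.png"))  # True only if the host resolves to a public address
```

Note that the resolution step here is advisory: a fetcher that resolves the name again can still be redirected, so pinning the resolved address or routing media fetches through an egress proxy is a stronger variant of the same idea.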
References to Advisories, Solutions, and Tools
Here, you will find a curated list of external links that provide in-depth information, practical solutions, and valuable tools related to CVE-2026-24779.
- https://github.com/vllm-project/vllm/commit/f46d576c54fb8aeec5fc70560e850bed38ef17d7
- https://github.com/vllm-project/vllm/pull/32746
- https://github.com/vllm-project/vllm/security/advisories/GHSA-qh4c-xf7m-gxfc
CWE - Common Weakness Enumeration
While CVE identifies specific instances of vulnerabilities, CWE categorizes the common flaws or weaknesses that can lead to vulnerabilities. CVE-2026-24779 is associated with the following CWE:
- CWE-918: Server-Side Request Forgery (SSRF)
Common Attack Pattern Enumeration and Classification (CAPEC)
CAPEC stores attack patterns, which are descriptions of the common attributes and approaches employed by adversaries to exploit the CVE-2026-24779 weaknesses.
We scan GitHub repositories to detect new proof-of-concept exploits. The following list is a collection of public exploits and proofs of concept that have been published on GitHub (sorted by most recently updated).
Results are limited to the first 15 repositories due to potential performance issues.
The following list contains news articles that mention the CVE-2026-24779 vulnerability anywhere in the article.
The following table lists the changes that have been made to the
CVE-2026-24779 vulnerability over time.
Vulnerability history details can be useful for understanding the evolution of a vulnerability, and for identifying the most recent changes that may impact the vulnerability's severity, exploitability, or other characteristics.
- New CVE Received by [email protected] (Jan. 27, 2026)

| Action | Type | Old Value | New Value |
|---|---|---|---|
| Added | Description | | (full vulnerability description, as given in the Description section above) |
| Added | CVSS V3.1 | | AV:N/AC:L/PR:L/UI:N/S:U/C:H/I:N/A:L |
| Added | CWE | | CWE-918 |
| Added | Reference | | https://github.com/vllm-project/vllm/commit/f46d576c54fb8aeec5fc70560e850bed38ef17d7 |
| Added | Reference | | https://github.com/vllm-project/vllm/pull/32746 |
| Added | Reference | | https://github.com/vllm-project/vllm/security/advisories/GHSA-qh4c-xf7m-gxfc |