CVE-2025-62372
vLLM vulnerable to DoS with incorrect shape of multimodal embedding inputs
Description
vLLM is an inference and serving engine for large language models (LLMs). From version 0.5.5 to before 0.11.1, users can crash the vLLM engine serving multimodal models by passing multimodal embedding inputs with correct ndim but incorrect shape (e.g. hidden dimension is wrong), regardless of whether the model is intended to support such inputs (as defined in the Supported Models page). This issue has been patched in version 0.11.1.
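To make the failure mode concrete, here is a minimal sketch (assuming PyTorch, with hypothetical tensor sizes) of two multimodal embedding tensors that share the same ndim but differ in shape; the second has the mismatched hidden dimension described above.

```python
import torch

expected_hidden_size = 4096   # assumed hidden size of the served model (hypothetical)
num_image_tokens = 576        # assumed number of multimodal placeholder tokens (hypothetical)

# Well-formed embedding input: shape (num_tokens, hidden_size).
good_embeds = torch.zeros(num_image_tokens, expected_hidden_size)

# Malformed embedding input: same ndim (2), but wrong hidden dimension.
# Per the description above, inputs of this class could crash an unpatched
# engine (versions 0.5.5 up to, but not including, 0.11.1) instead of being rejected.
bad_embeds = torch.zeros(num_image_tokens, expected_hidden_size - 1)

assert good_embeds.ndim == bad_embeds.ndim == 2
assert good_embeds.shape[-1] != bad_embeds.shape[-1]
```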
INFO
Published Date: Nov. 21, 2025, 2:15 a.m.
Last Modified: Nov. 21, 2025, 2:15 a.m.
Remotely Exploitable: Yes
Source: [email protected]
Affected Products
The following products are affected by the CVE-2025-62372 vulnerability. Even if cvefeed.io is aware of the exact versions of the affected products, that information is not represented in the table below.
No affected product recorded yet.
CVSS Scores
| Score | Version | Severity | Vector | Exploitability Score | Impact Score | Source |
|---|---|---|---|---|---|---|
| N/A | CVSS 4.0 | HIGH | CVSS:4.0/AV:N/AC:L/AT:N/PR:L/UI:N/VC:N/VI:N/VA:H/SC:N/SI:N/SA:H | N/A | N/A | [email protected] |
Solution
- Update vLLM to version 0.11.1 or later.
- Ensure input data shape matches model requirements (see the validation sketch below).
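As a defense-in-depth measure alongside upgrading, a gateway or client could validate embedding shapes before they reach the engine. The sketch below is illustrative only; validate_embeds, its signature, and the sizes used are assumptions, not part of the vLLM API.

```python
import torch


def validate_embeds(embeds: torch.Tensor, hidden_size: int) -> torch.Tensor:
    """Reject multimodal embedding tensors whose shape does not match the model.

    Hypothetical helper; the name and signature are illustrative and not part
    of the vLLM API.
    """
    if embeds.ndim != 2:
        raise ValueError(
            f"expected a 2-D (num_tokens, hidden_size) tensor, got ndim={embeds.ndim}"
        )
    if embeds.shape[-1] != hidden_size:
        raise ValueError(
            f"hidden-dimension mismatch: expected {hidden_size}, got {embeds.shape[-1]}"
        )
    return embeds


# A tensor with the correct ndim but the wrong hidden dimension is rejected
# before it is ever handed to the serving engine.
try:
    validate_embeds(torch.zeros(576, 4095), hidden_size=4096)
except ValueError as err:
    print(f"rejected: {err}")
```

Failing fast with a clear error on the caller's side keeps malformed tensors from ever reaching the model runner.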
References to Advisories, Solutions, and Tools
Here, you will find a curated list of external links that provide in-depth information, practical solutions, and valuable tools related to CVE-2025-62372.
- https://github.com/vllm-project/vllm/security/advisories/GHSA-pmqf-x6x8-p7qw
- https://github.com/vllm-project/vllm/commit/58fab50d82838d5014f4a14d991fdb9352c9c84b
- https://github.com/vllm-project/vllm/pull/27204
- https://github.com/vllm-project/vllm/pull/6613
CWE - Common Weakness Enumeration
While CVE identifies specific instances of vulnerabilities, CWE categorizes the common flaws or weaknesses that can lead to vulnerabilities. CVE-2025-62372 is associated with the following CWEs:
- CWE-129: Improper Validation of Array Index
Common Attack Pattern Enumeration and Classification (CAPEC)
Common Attack Pattern Enumeration and Classification (CAPEC) stores attack patterns, which are descriptions of the common attributes and approaches employed by adversaries to exploit the CVE-2025-62372 weaknesses.
We scan GitHub repositories to detect new proof-of-concept exploits. The following list is a collection of public exploits and proof-of-concepts that have been published on GitHub (sorted by the most recently updated).
Results are limited to the first 15 repositories due to potential performance issues.
The following is a list of news articles that mention the CVE-2025-62372 vulnerability anywhere in the article.
The following table lists the changes that have been made to the CVE-2025-62372 vulnerability over time.
Vulnerability history details can be useful for understanding the evolution of a vulnerability, and for identifying the most recent changes that may impact the vulnerability's severity, exploitability, or other characteristics.
- New CVE Received by [email protected] (Nov. 21, 2025)

| Action | Type | Old Value | New Value |
|---|---|---|---|
| Added | Description | | vLLM is an inference and serving engine for large language models (LLMs). From version 0.5.5 to before 0.11.1, users can crash the vLLM engine serving multimodal models by passing multimodal embedding inputs with correct ndim but incorrect shape (e.g. hidden dimension is wrong), regardless of whether the model is intended to support such inputs (as defined in the Supported Models page). This issue has been patched in version 0.11.1. |
| Added | CVSS V4.0 | | AV:N/AC:L/AT:N/PR:L/UI:N/VC:N/VI:N/VA:H/SC:N/SI:N/SA:H/E:X/CR:X/IR:X/AR:X/MAV:X/MAC:X/MAT:X/MPR:X/MUI:X/MVC:X/MVI:X/MVA:X/MSC:X/MSI:X/MSA:X/S:X/AU:X/R:X/V:X/RE:X/U:X |
| Added | CWE | | CWE-129 |
| Added | Reference | | https://github.com/vllm-project/vllm/commit/58fab50d82838d5014f4a14d991fdb9352c9c84b |
| Added | Reference | | https://github.com/vllm-project/vllm/pull/27204 |
| Added | Reference | | https://github.com/vllm-project/vllm/pull/6613 |
| Added | Reference | | https://github.com/vllm-project/vllm/security/advisories/GHSA-pmqf-x6x8-p7qw |