CVE-2023-53076
Amazon Web Services (AWS) EKS Linux Kernel BPF JIT Pool Insufficient Default Limit Vulnerability
Description
Rejected reason: This CVE ID has been rejected or withdrawn by its CVE Numbering Authority.
INFO
Published Date :
May 2, 2025, 4:15 p.m.
Last Modified :
May 5, 2025, 3:15 p.m.
Remotely Exploitable :
No
Source :
416baaa9-dc9f-4396-8d5f-8c081fb06d67
Solution
- No remediation is required for this vulnerability.
We scan GitHub repositories to detect new proof-of-concept exploits. The following is a list of public exploits and proof-of-concepts published on GitHub, sorted by most recently updated.
Results are limited to the first 15 repositories for performance reasons.
The following is a list of news articles that mention the
CVE-2023-53076 vulnerability anywhere in the article.
The following table lists the changes that have been made to the
CVE-2023-53076 vulnerability over time.
Vulnerability history details can be useful for understanding the evolution of a vulnerability, and for identifying the most recent changes that may impact the vulnerability's severity, exploitability, or other characteristics.
- CVE Rejected by 416baaa9-dc9f-4396-8d5f-8c081fb06d67 (May 05, 2025)
- CVE Modified by 416baaa9-dc9f-4396-8d5f-8c081fb06d67 (May 05, 2025)
Changed Description (old value):

In the Linux kernel, the following vulnerability has been resolved:

bpf: Adjust insufficient default bpf_jit_limit

We've seen recent AWS EKS (Kubernetes) user reports like the following:

After upgrading EKS nodes from v20230203 to v20230217 on our 1.24 EKS clusters after a few days a number of the nodes have containers stuck in ContainerCreating state or liveness/readiness probes reporting the following error: Readiness probe errored: rpc error: code = Unknown desc = failed to exec in container: failed to start exec "4a11039f730203ffc003b7[...]": OCI runtime exec failed: exec failed: unable to start container process: unable to init seccomp: error loading seccomp filter into kernel: error loading seccomp filter: errno 524: unknown

However, we had not been seeing this issue on previous AMIs and it only started to occur on v20230217 (following the upgrade from kernel 5.4 to 5.10) with no other changes to the underlying cluster or workloads. We tried the suggestions from that issue (sysctl net.core.bpf_jit_limit=452534528) which helped to immediately allow containers to be created and probes to execute but after approximately a day the issue returned and the value returned by cat /proc/vmallocinfo | grep bpf_jit | awk '{s+=$2} END {print s}' was steadily increasing.

I tested bpf tree to observe bpf_jit_charge_modmem, bpf_jit_uncharge_modmem their sizes passed in as well as bpf_jit_current under tcpdump BPF filter, seccomp BPF and native (e)BPF programs, and the behavior all looks sane and expected, that is nothing "leaking" from an upstream perspective.

The bpf_jit_limit knob was originally added in order to avoid a situation where unprivileged applications loading BPF programs (e.g. seccomp BPF policies) consuming all the module memory space via BPF JIT such that loading of kernel modules would be prevented.

The default limit was defined back in 2018 and while good enough back then, we are generally seeing far more BPF consumers today. Adjust the limit for the BPF JIT pool from originally 1/4 to now 1/2 of the module memory space to better reflect today's needs and avoid more users running into potentially hard to debug issues.

Changed Description (new value):

Rejected reason: This CVE ID has been rejected or withdrawn by its CVE Numbering Authority.

Removed References (kernel.org):
- https://git.kernel.org/stable/c/10ec8ca8ec1a2f04c4ed90897225231c58c124a7
- https://git.kernel.org/stable/c/374ed036309fce73f9db04c3054018a71912d46b
- https://git.kernel.org/stable/c/42049e65d338870e93732b0b80c6c41faf6aa781
- https://git.kernel.org/stable/c/54869daa6a437887614274f65298ba44a3fac63a
- https://git.kernel.org/stable/c/68ed00a37d2d1c932ff7be40be4b90c4bec48c56
- https://git.kernel.org/stable/c/9cda812c76067c8a771eae43bb6943481cc7effc
- https://git.kernel.org/stable/c/a4bbab27c4bf69486f5846d44134eb31c37e9b22
- https://git.kernel.org/stable/c/d69c2ded95b17d51cc6632c7848cbd476381ecd6
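The fix raises the default BPF JIT pool limit from 1/4 to 1/2 of the module memory space. As a rough sketch of that arithmetic (assuming a 4 KiB page size and the kernel's round-up-to-page, cap-at-LONG_MAX behavior; the exact computation lives in the kernel's JIT charge initialization, and the constants here are illustrative):

```python
PAGE_SIZE = 4096  # typical x86-64 page size (assumption)
LONG_MAX = 2**63 - 1

def round_up(x, align):
    """Round x up to the next multiple of align."""
    return (x + align - 1) // align * align

def default_bpf_jit_limit(module_space_bytes, fraction_shift):
    """Sketch of the default-limit computation:
    round_up(module_space >> shift, PAGE_SIZE), capped at LONG_MAX.
    shift=2 gives the old 1/4 default, shift=1 the new 1/2 default."""
    return min(round_up(module_space_bytes >> fraction_shift, PAGE_SIZE), LONG_MAX)

MODULE_SPACE = 1 << 30  # e.g. 1 GiB of module memory space (illustrative)

old_limit = default_bpf_jit_limit(MODULE_SPACE, 2)  # pre-patch: 1/4 of the pool
new_limit = default_bpf_jit_limit(MODULE_SPACE, 1)  # post-patch: 1/2 of the pool
print(old_limit, new_limit)  # 268435456 536870912 (256 MiB -> 512 MiB)
```

With the same module space, the default JIT budget doubles, which is why workloads with many seccomp/BPF consumers stop hitting errno 524 (ENOTSUPP from the charge failure) under the patched kernels.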
- New CVE Received by 416baaa9-dc9f-4396-8d5f-8c081fb06d67 (May 02, 2025)
Added Description:

In the Linux kernel, the following vulnerability has been resolved:

bpf: Adjust insufficient default bpf_jit_limit

We've seen recent AWS EKS (Kubernetes) user reports like the following:

After upgrading EKS nodes from v20230203 to v20230217 on our 1.24 EKS clusters after a few days a number of the nodes have containers stuck in ContainerCreating state or liveness/readiness probes reporting the following error: Readiness probe errored: rpc error: code = Unknown desc = failed to exec in container: failed to start exec "4a11039f730203ffc003b7[...]": OCI runtime exec failed: exec failed: unable to start container process: unable to init seccomp: error loading seccomp filter into kernel: error loading seccomp filter: errno 524: unknown

However, we had not been seeing this issue on previous AMIs and it only started to occur on v20230217 (following the upgrade from kernel 5.4 to 5.10) with no other changes to the underlying cluster or workloads. We tried the suggestions from that issue (sysctl net.core.bpf_jit_limit=452534528) which helped to immediately allow containers to be created and probes to execute but after approximately a day the issue returned and the value returned by cat /proc/vmallocinfo | grep bpf_jit | awk '{s+=$2} END {print s}' was steadily increasing.

I tested bpf tree to observe bpf_jit_charge_modmem, bpf_jit_uncharge_modmem their sizes passed in as well as bpf_jit_current under tcpdump BPF filter, seccomp BPF and native (e)BPF programs, and the behavior all looks sane and expected, that is nothing "leaking" from an upstream perspective.

The bpf_jit_limit knob was originally added in order to avoid a situation where unprivileged applications loading BPF programs (e.g. seccomp BPF policies) consuming all the module memory space via BPF JIT such that loading of kernel modules would be prevented.

The default limit was defined back in 2018 and while good enough back then, we are generally seeing far more BPF consumers today. Adjust the limit for the BPF JIT pool from originally 1/4 to now 1/2 of the module memory space to better reflect today's needs and avoid more users running into potentially hard to debug issues.

Added References:
- https://git.kernel.org/stable/c/10ec8ca8ec1a2f04c4ed90897225231c58c124a7
- https://git.kernel.org/stable/c/374ed036309fce73f9db04c3054018a71912d46b
- https://git.kernel.org/stable/c/42049e65d338870e93732b0b80c6c41faf6aa781
- https://git.kernel.org/stable/c/54869daa6a437887614274f65298ba44a3fac63a
- https://git.kernel.org/stable/c/68ed00a37d2d1c932ff7be40be4b90c4bec48c56
- https://git.kernel.org/stable/c/9cda812c76067c8a771eae43bb6943481cc7effc
- https://git.kernel.org/stable/c/a4bbab27c4bf69486f5846d44134eb31c37e9b22
- https://git.kernel.org/stable/c/d69c2ded95b17d51cc6632c7848cbd476381ecd6
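The shell pipeline quoted in the description (cat /proc/vmallocinfo | grep bpf_jit | awk '{s+=$2} END {print s}') tracks how many bytes the JIT has carved out of the vmalloc area. Reading /proc/vmallocinfo requires root on a live Linux host, so here is a minimal Python equivalent of the same filter-and-sum logic, run over a hypothetical sample of vmallocinfo-style lines (address range, size in bytes, caller):

```python
# Hypothetical /proc/vmallocinfo excerpt; real output requires root on Linux.
sample = """\
0xffffb000-0xffffc000    4096 bpf_jit_alloc_exec+0x10/0x30 pages=1
0xffffd000-0xfffff000    8192 bpf_jit_alloc_exec+0x10/0x30 pages=2
0xaaaa0000-0xaaab0000   65536 load_module+0x100/0x200 pages=16
"""

def bpf_jit_bytes(vmallocinfo_text):
    """Sum the size column (field 2) of lines mentioning bpf_jit,
    mirroring: grep bpf_jit | awk '{s+=$2} END {print s}'."""
    return sum(int(line.split()[1])
               for line in vmallocinfo_text.splitlines()
               if "bpf_jit" in line)

print(bpf_jit_bytes(sample))  # 12288 (only the two bpf_jit allocations count)
```

A steadily increasing total, as the EKS report describes, means newly JITed programs (e.g. seccomp filters from container probes) keep charging against the pool; once the running total reaches net.core.bpf_jit_limit, unprivileged BPF loads start failing.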