CVE-2025-49847

llama.cpp is a C/C++ inference framework for several LLM models. Prior to version b5662, an attacker-supplied GGUF model vocabulary can trigger a buffer overflow in llama.cpp's vocabulary-loading code. Specifically, the helper _try_copy in llama_vocab::impl::token_to_piece() (llama.cpp/src/vocab.cpp) casts a very large size_t token length to int32_t, so the length check if (length < (int32_t) size) is bypassed. memcpy is then still called with the original oversized size, letting a malicious model overwrite memory beyond the intended buffer. This can lead to arbitrary memory corruption and potential code execution. The issue has been patched in version b5662.
Configurations

Configuration 1

cpe:2.3:a:ggml:llama.cpp:*:*:*:*:*:*:*:*

History

No history.

Information

Published : 2025-06-17 20:15

Updated : 2025-08-27 13:48


NVD link : CVE-2025-49847

Mitre link : CVE-2025-49847

CVE.ORG link : CVE-2025-49847



Products Affected

ggml

  • llama.cpp
CWE

CWE-119 : Improper Restriction of Operations within the Bounds of a Memory Buffer

CWE-195 : Signed to Unsigned Conversion Error