CVE-2024-42479 – llama.cpp allows write-what-where in rpc_server::set_tensor
https://notcve.org/view.php?id=CVE-2024-42479
llama.cpp provides LLM inference in C/C++. The unsafe `data` pointer member in the `rpc_tensor` structure can be used to write to arbitrary addresses. This vulnerability is fixed in b3561.
• https://github.com/ggerganov/llama.cpp/commit/b72942fac998672a79a1ae3c03b340f7e629980b
• https://github.com/ggerganov/llama.cpp/security/advisories/GHSA-wcr5-566p-9cwj
• CWE-123: Write-what-where Condition
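The bug boils down to the RPC server trusting the `data` field of a deserialized `rpc_tensor` as a real pointer when `rpc_server::set_tensor` copies client-supplied bytes. The sketch below is illustrative only: the struct is pared down, the names are assumptions, and the hardened variant approximates the kind of bounds check the fix calls for (accept only pointers inside a server-owned buffer), not the actual patched code.

```cpp
#include <cstdint>
#include <cstring>
#include <cstdio>
#include <vector>

// Pared-down stand-in for the wire format; field names are assumptions.
struct rpc_tensor_sketch {
    uint64_t data;   // raw pointer value taken straight off the wire
};

// Vulnerable shape: the client controls both the destination (t.data) and the
// bytes written (payload) - a classic write-what-where.
void set_tensor_unchecked(const rpc_tensor_sketch & t, const std::vector<uint8_t> & payload) {
    std::memcpy(reinterpret_cast<void *>(t.data), payload.data(), payload.size());
}

// Hardened shape (an approximation, not the patched code): only accept pointers
// that fall entirely inside a buffer the server allocated itself.
bool set_tensor_checked(const rpc_tensor_sketch & t, const std::vector<uint8_t> & payload,
                        uint8_t * buf_base, size_t buf_size) {
    const uintptr_t dst  = static_cast<uintptr_t>(t.data);
    const uintptr_t base = reinterpret_cast<uintptr_t>(buf_base);
    if (payload.size() > buf_size || dst < base || dst > base + buf_size - payload.size()) {
        std::fprintf(stderr, "set_tensor: rejected out-of-range data pointer\n");
        return false;
    }
    std::memcpy(reinterpret_cast<void *>(dst), payload.data(), payload.size());
    return true;
}

int main() {
    uint8_t buf[64] = {};
    rpc_tensor_sketch t;
    t.data = reinterpret_cast<uintptr_t>(buf + 8);   // pointer inside the server buffer
    std::vector<uint8_t> payload = {1, 2, 3, 4};
    std::printf("in-range write accepted: %d\n", set_tensor_checked(t, payload, buf, sizeof buf));

    t.data = 0x41414141;                             // attacker-chosen address
    std::printf("out-of-range write accepted: %d\n", set_tensor_checked(t, payload, buf, sizeof buf));
    return 0;
}
```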
CVE-2024-42478 – llama.cpp allows Arbitrary Address Read in rpc_server::get_tensor
https://notcve.org/view.php?id=CVE-2024-42478
llama.cpp provides LLM inference in C/C++. The unsafe `data` pointer member in the `rpc_tensor` structure can be used to read from arbitrary addresses. This vulnerability is fixed in b3561.
• https://github.com/ggerganov/llama.cpp/commit/b72942fac998672a79a1ae3c03b340f7e629980b
• https://github.com/ggerganov/llama.cpp/security/advisories/GHSA-5vm9-p64x-gqw9
• CWE-125: Out-of-bounds Read
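`rpc_server::get_tensor` is the mirror image: the client-controlled `data` pointer becomes the source of the copy sent back to the client, turning the RPC server into an arbitrary-read primitive. Again a hedged sketch with assumed names; the range check mirrors the one in the write-path sketch above and is not the literal patched code.

```cpp
#include <cstdint>
#include <cstring>
#include <cstdio>
#include <vector>

// Simplified stand-in for the wire format; field names are assumptions.
struct rpc_tensor_sketch {
    uint64_t data;   // source address supplied by the client
    uint64_t size;   // number of bytes the client asks to read back
};

// Vulnerable shape: the reply is filled from whatever address the client named,
// so arbitrary server memory can be exfiltrated.
std::vector<uint8_t> get_tensor_unchecked(const rpc_tensor_sketch & t) {
    std::vector<uint8_t> reply(t.size);
    std::memcpy(reply.data(), reinterpret_cast<const void *>(t.data), t.size);
    return reply;
}

// Hardened shape: only serve reads from a region the server owns.
bool get_tensor_checked(const rpc_tensor_sketch & t, const uint8_t * buf_base, size_t buf_size,
                        std::vector<uint8_t> & reply) {
    const uintptr_t src  = static_cast<uintptr_t>(t.data);
    const uintptr_t base = reinterpret_cast<uintptr_t>(buf_base);
    if (t.size > buf_size || src < base || src > base + buf_size - t.size) {
        std::fprintf(stderr, "get_tensor: rejected out-of-range data pointer\n");
        return false;
    }
    reply.assign(reinterpret_cast<const uint8_t *>(src),
                 reinterpret_cast<const uint8_t *>(src) + t.size);
    return true;
}

int main() {
    uint8_t buf[32] = { 7, 7, 7, 7 };
    std::vector<uint8_t> reply;
    rpc_tensor_sketch t;
    t.data = reinterpret_cast<uintptr_t>(buf);
    t.size = 4;
    std::printf("in-range read accepted: %d\n", get_tensor_checked(t, buf, sizeof buf, reply));
    return 0;
}
```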
CVE-2024-42477 – llama.cpp global-buffer-overflow in ggml_type_size
https://notcve.org/view.php?id=CVE-2024-42477
llama.cpp provides LLM inference in C/C++. The unsafe `type` member in the `rpc_tensor` structure can cause a global buffer overflow in `ggml_type_size`, which may leak memory contents. The vulnerability is fixed in b3561.
• https://github.com/ggerganov/llama.cpp/commit/b72942fac998672a79a1ae3c03b340f7e629980b
• https://github.com/ggerganov/llama.cpp/security/advisories/GHSA-mqp6-7pv6-fqjf
• CWE-125: Out-of-bounds Read
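The overflow happens because `ggml_type_size` indexes a global per-type traits table, and the `type` value arrives from the wire unchecked. The sketch below uses invented names (`k_traits`, `TYPE_COUNT`) rather than GGML's real table and shows the general pattern: treat the enum as untrusted input and validate it before using it as an index.

```cpp
#include <cstdint>
#include <cstdio>
#include <cstddef>

// Invented miniature of a per-type traits table.
enum sketch_type : uint32_t { TYPE_F32 = 0, TYPE_F16, TYPE_Q4_0, TYPE_COUNT };

struct type_traits_sketch { const char * name; size_t size; };

static const type_traits_sketch k_traits[TYPE_COUNT] = {
    { "f32",  4  },
    { "f16",  2  },
    { "q4_0", 18 },
};

// Vulnerable shape: rpc_tensor::type is forwarded unchecked, so a value
// >= TYPE_COUNT reads past the end of the global table (global-buffer-overflow)
// and whatever bytes follow it in memory are used as a type size.
size_t type_size_unchecked(uint32_t type) {
    return k_traits[type].size;
}

// Hardened shape: validate the enum while deserializing the tensor.
bool type_size_checked(uint32_t type, size_t * out) {
    if (type >= TYPE_COUNT) {
        std::fprintf(stderr, "rejected invalid tensor type %u\n", (unsigned) type);
        return false;
    }
    *out = k_traits[type].size;
    return true;
}

int main() {
    size_t sz = 0;
    std::printf("type 1 accepted: %d (size %zu)\n", type_size_checked(1, &sz), sz);
    std::printf("type 999 accepted: %d\n", type_size_checked(999, &sz));
    return 0;
}
```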
CVE-2024-41130 – llama.cpp null pointer dereference in gguf_init_from_file
https://notcve.org/view.php?id=CVE-2024-41130
llama.cpp provides LLM inference in C/C++. Prior to b3427, llama.cpp contains a null pointer dereference in `gguf_init_from_file`. This vulnerability is fixed in b3427.
• https://github.com/ggerganov/llama.cpp/commit/07283b1a90e1320aae4762c7e03c879043910252
• https://github.com/ggerganov/llama.cpp/security/advisories/GHSA-49q7-2jmh-92fp
• CWE-476: NULL Pointer Dereference
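A minimal sketch of the defensive pattern this class of bug calls for, not the actual `gguf_init_from_file` implementation or its fix: every pointer derived from an untrusted GGUF file (the `fopen` result, allocations sized from header fields) is checked before it is dereferenced, and callers check the returned context as well. All struct and function names here are illustrative assumptions.

```cpp
#include <cstdio>
#include <cstdint>
#include <cstdlib>

// Invented, heavily simplified loader context.
struct gguf_sketch_ctx {
    uint64_t n_kv;     // key/value count read from the file header
    char **  keys;     // storage allocated from n_kv
};

gguf_sketch_ctx * gguf_sketch_init_from_file(const char * fname) {
    FILE * f = std::fopen(fname, "rb");
    if (f == nullptr) {                                 // check: file may not exist
        return nullptr;
    }

    uint64_t n_kv = 0;
    if (std::fread(&n_kv, sizeof(n_kv), 1, f) != 1) {   // check: truncated/malformed file
        std::fclose(f);
        return nullptr;
    }

    // check: a hostile header can request an absurd allocation that fails;
    // dereferencing the result without this check is exactly a NULL deref.
    char ** keys = static_cast<char **>(std::calloc(n_kv, sizeof(char *)));
    if (n_kv > 0 && keys == nullptr) {
        std::fclose(f);
        return nullptr;
    }

    auto * ctx = static_cast<gguf_sketch_ctx *>(std::calloc(1, sizeof(gguf_sketch_ctx)));
    if (ctx == nullptr) {
        std::free(keys);
        std::fclose(f);
        return nullptr;
    }
    ctx->n_kv = n_kv;
    ctx->keys = keys;
    std::fclose(f);
    return ctx;
}

int main(int argc, char ** argv) {
    const char * fname = argc > 1 ? argv[1] : "model.gguf";
    gguf_sketch_ctx * ctx = gguf_sketch_init_from_file(fname);
    if (ctx == nullptr) {                               // callers must also expect failure
        std::fprintf(stderr, "failed to load %s\n", fname);
        return 1;
    }
    std::printf("header claims %llu key/value pairs\n", (unsigned long long) ctx->n_kv);
    std::free(ctx->keys);
    std::free(ctx);
    return 0;
}
```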
CVE-2024-32878 – Use of Uninitialized Variable Vulnerability in llama.cpp
https://notcve.org/view.php?id=CVE-2024-32878
llama.cpp is LLM inference in C/C++. There is a use of an uninitialized heap variable in `gguf_init_from_file`: the code later frees this uninitialized variable. In a simple PoC this directly causes a crash. If the file is carefully constructed, it may be possible to control the uninitialized value and trigger an arbitrary-address free, which could be exploited further.
• https://github.com/ggerganov/llama.cpp/releases/tag/b2749
• https://github.com/ggerganov/llama.cpp/security/advisories/GHSA-p5mv-gjc5-mwqv
• CWE-456: Missing Initialization of a Variable
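A hedged sketch of the bug class rather than the actual llama.cpp code: a heap-allocated context whose pointer member is still uninitialized when an early error path hands it to `free`, contrasted with a zero-initialized variant where the early `free(nullptr)` is a harmless no-op. All names are invented for illustration.

```cpp
#include <cstdio>
#include <cstdint>
#include <cstdlib>

// Invented stand-in for a parser context with a heap-owned member.
struct kv_sketch_ctx {
    uint64_t n_kv;
    void *   kv_data;   // the member that ends up uninitialized
};

// Buggy shape: malloc leaves kv_data indeterminate; if parsing fails before the
// member is assigned, the cleanup frees whatever garbage is in that memory.
// With a carefully shaped file, that garbage may be attacker-influenced,
// i.e. an arbitrary-address free.
kv_sketch_ctx * ctx_init_buggy(bool parse_fails) {
    auto * ctx = static_cast<kv_sketch_ctx *>(std::malloc(sizeof(kv_sketch_ctx)));
    if (ctx == nullptr) return nullptr;
    if (parse_fails) {
        std::free(ctx->kv_data);   // frees an uninitialized pointer
        std::free(ctx);
        return nullptr;
    }
    ctx->kv_data = std::calloc(16, 1);
    ctx->n_kv    = 16;
    return ctx;
}

// Fixed shape: zero-initialize the whole struct (calloc or = {}) so every
// early free() sees nullptr, which free() ignores by definition.
kv_sketch_ctx * ctx_init_fixed(bool parse_fails) {
    auto * ctx = static_cast<kv_sketch_ctx *>(std::calloc(1, sizeof(kv_sketch_ctx)));
    if (ctx == nullptr) return nullptr;
    if (parse_fails) {
        std::free(ctx->kv_data);   // safe: kv_data is guaranteed to be nullptr
        std::free(ctx);
        return nullptr;
    }
    ctx->kv_data = std::calloc(16, 1);
    ctx->n_kv    = 16;
    return ctx;
}

int main() {
    kv_sketch_ctx * ctx = ctx_init_fixed(/*parse_fails=*/true);
    std::printf("fixed error path returned %p without freeing garbage\n", (void *) ctx);
    return 0;
}
```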