CVE-2024-26714 – interconnect: qcom: sc8180x: Mark CO0 BCM keepalive
https://notcve.org/view.php?id=CVE-2024-26714
In the Linux kernel, the following vulnerability has been resolved: interconnect: qcom: sc8180x: Mark CO0 BCM keepalive. The CO0 BCM needs to be up at all times; otherwise some hardware (such as the UFS controller) loses its connection to the rest of the SoC, resulting in a platform hang accompanied by spectacular logspam. Mark it as keepalive to prevent such cases. • https://git.kernel.org/stable/c/9c8c6bac1ae86f6902baa938101902fb3a0a100b https://git.kernel.org/stable/c/6616d3c4f8284a7b3ef978c916566bd240cea1c7 https://git.kernel.org/stable/c/d8e36ff40cf9dadb135f3a97341c02c9a7afcc43 https://git.kernel.org/stable/c/7a3a70dd08e4b7dffc2f86f2c68fc3812804b9d0 https://git.kernel.org/stable/c/85e985a4f46e462a37f1875cb74ed380e7c0c2e0
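The fix itself is a one-field change in the SC8180X interconnect driver: the CO0 BCM descriptor gains a keepalive flag so the RPMh-managed resource is never voted fully off. A minimal sketch of the pattern, assuming the qcom_icc_bcm descriptor layout used by the qcom icc-rpmh drivers (the node name below is illustrative, not copied from the patch):

    /* drivers/interconnect/qcom/sc8180x.c -- sketch, not the verbatim patch */
    static struct qcom_icc_bcm bcm_co0 = {
            .name = "CO0",
            .keepalive = true,               /* the fix: keep this BCM voted on at all times */
            .num_nodes = 1,
            .nodes = { &slv_example_node },  /* illustrative node pointer */
    };

With .keepalive set, the framework always maintains a minimum vote for the BCM, so consumers such as the UFS controller never lose their path through the NoC.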
CVE-2024-26712 – powerpc/kasan: Fix addr error caused by page alignment
https://notcve.org/view.php?id=CVE-2024-26712
In the Linux kernel, the following vulnerability has been resolved: powerpc/kasan: Fix addr error caused by page alignment. In kasan_init_region(), when k_start is not page aligned, at the beginning of the for loop k_cur = k_start & PAGE_MASK is less than k_start, so `va = block + k_cur - k_start` is less than block. That addr va is invalid, because the memory from va up to block was not allocated by memblock_alloc() and will not be reserved by memblock_reserve() later, so it can be handed out to other users. As a result, memory overwriting occurs. For example: int __init __weak kasan_init_region(void *start, size_t size) { [...] /* if say block(dcd97000) k_start(feef7400) k_end(feeff3fe) */ block = memblock_alloc(k_end - k_start, PAGE_SIZE); [...] for (k_cur = k_start & PAGE_MASK; k_cur < k_end; k_cur += PAGE_SIZE) { /* at the beginning of the for loop * block(dcd97000) va(dcd96c00) k_cur(feef7000) k_start(feef7400) * va(dcd96c00) is less than block(dcd97000), so va is invalid */ void *va = block + k_cur - k_start; [...] } [...] } Therefore, page-align k_start before memblock_alloc() to ensure the validity of the VA address. • https://git.kernel.org/stable/c/663c0c9496a69f80011205ba3194049bcafd681d https://git.kernel.org/stable/c/5ce93076d8ee2a0fac3ad4adbd2e91b6197146db https://git.kernel.org/stable/c/230e89b5ad0a33f530a2a976b3e5e4385cb27882 https://git.kernel.org/stable/c/2738e0aa2fb24a7ab9c878d912dc2b239738c6c6 https://git.kernel.org/stable/c/0c09912dd8387e228afcc5e34ac5d79b1e3a1058 https://git.kernel.org/stable/c/0516c06b19dc64807c10e01bb99b552bdf2d7dbe https://git.kernel.org/stable/c/70ef2ba1f4286b2b73675aeb424b590c92d57b25 https://git.kernel.org/stable/c/4a7aee96200ad281a5cc4cf5c7a2e2a49
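Because the failure mode here is pure pointer arithmetic, it can be reproduced outside the kernel. The following standalone C program (a sketch using the example values from the description above, with userspace stand-ins for PAGE_MASK and the kernel allocator) shows why an unaligned k_start makes the first va fall below block, and why aligning k_start down before sizing the allocation removes the underflow:

    #include <stdio.h>
    #include <stdint.h>

    #define PAGE_SIZE 0x1000UL
    #define PAGE_MASK (~(PAGE_SIZE - 1))

    int main(void)
    {
            /* values taken from the commit message example */
            uintptr_t block   = 0xdcd97000UL;              /* start of the allocated region */
            uintptr_t k_start = 0xfeef7400UL;              /* NOT page aligned */
            uintptr_t k_cur   = k_start & PAGE_MASK;       /* 0xfeef7000, i.e. < k_start */

            uintptr_t va_buggy = block + k_cur - k_start;                 /* 0xdcd96c00, before block */
            uintptr_t va_fixed = block + k_cur - (k_start & PAGE_MASK);   /* == block */

            printf("buggy va = 0x%lx (underflows block 0x%lx by 0x%lx bytes)\n",
                   (unsigned long)va_buggy, (unsigned long)block,
                   (unsigned long)(block - va_buggy));
            printf("fixed va = 0x%lx\n", (unsigned long)va_fixed);
            return 0;
    }

The fix aligns k_start down to a page boundary before memblock_alloc(), so the block is sized and positioned such that block + k_cur - k_start never points outside the allocation.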
CVE-2024-26707 – net: hsr: remove WARN_ONCE() in send_hsr_supervision_frame()
https://notcve.org/view.php?id=CVE-2024-26707
In the Linux kernel, the following vulnerability has been resolved: net: hsr: remove WARN_ONCE() in send_hsr_supervision_frame(). Syzkaller reported [1] hitting a warning after failing to allocate resources for the skb in hsr_init_skb(). Since a WARN_ONCE() call will not help much in this case, it might be prudent to switch to netdev_warn_once(); at the very least it will suppress syzkaller reports such as [1]. Just in case, use netdev_warn_once() in send_prp_supervision_frame() for similar reasons. [1] HSR: Could not send supervision frame WARNING: CPU: 1 PID: 85 at net/hsr/hsr_device.c:294 send_hsr_supervision_frame+0x60a/0x810 net/hsr/hsr_device.c:294 RIP: 0010:send_hsr_supervision_frame+0x60a/0x810 net/hsr/hsr_device.c:294 ... Call Trace: <IRQ> hsr_announce+0x114/0x370 net/hsr/hsr_device.c:382 call_timer_fn+0x193/0x590 kernel/time/timer.c:1700 expire_timers kernel/time/timer.c:1751 [inline] __run_timers+0x764/0xb20 kernel/time/timer.c:2022 run_timer_softirq+0x58/0xd0 kernel/time/timer.c:2035 __do_softirq+0x21a/0x8de kernel/softirq.c:553 invoke_softirq kernel/softirq.c:427 [inline] __irq_exit_rcu kernel/softirq.c:632 [inline] irq_exit_rcu+0xb7/0x120 kernel/softirq.c:644 sysvec_apic_timer_interrupt+0x95/0xb0 arch/x86/kernel/apic/apic.c:1076 </IRQ> <TASK> asm_sysvec_apic_timer_interrupt+0x1a/0x20 arch/x86/include/asm/idtentry.h:649 ... This issue is also found in older kernels (at least up to 5.10). • https://git.kernel.org/stable/c/121c33b07b3127f501b366bc23d2a590e2f2b8ef https://git.kernel.org/stable/c/0d8011a878fdf96123bc0d6a12e2fe7ced5fddfb https://git.kernel.org/stable/c/de769423b2f053182a41317c4db5a927e90622a0 https://git.kernel.org/stable/c/56440799fc4621c279df16176f83a995d056023a https://git.kernel.org/stable/c/923dea2a7ea9e1ef5ac4031fba461c1cc92e32b8 https://git.kernel.org/stable/c/547545e50c913861219947ce490c68a1776b9b51 https://git.kernel.org/stable/c/37e8c97e539015637cb920d3e6f1e404f707a06e https://lists.debian.org/debian-lts-announce/2024/06/
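The change described above is a direct substitution in the skb-allocation failure path of send_hsr_supervision_frame() (and, for the same reason, in send_prp_supervision_frame()). A hedged sketch of the pattern, with the surrounding function body abbreviated and variable names assumed to match the driver:

    /* net/hsr/hsr_device.c -- sketch of the failure path only */
    skb = hsr_init_skb(master);
    if (!skb) {
            /* was: WARN_ONCE(1, "HSR: Could not send supervision frame\n"); */
            netdev_warn_once(master->dev,
                             "HSR: Could not send supervision frame\n");
            return;
    }

netdev_warn_once() prints the message only once, prefixed with the device name, instead of emitting a full WARN splat, which is enough for an allocation failure that syzkaller can trigger at will.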
CVE-2024-26706 – parisc: Fix random data corruption from exception handler
https://notcve.org/view.php?id=CVE-2024-26706
In the Linux kernel, the following vulnerability has been resolved: parisc: Fix random data corruption from exception handler. The current exception handler implementation, which assists when accessing user space memory, may exhibit random data corruption if the compiler decides to use a different register than the specified register %r29 (defined in ASM_EXCEPTIONTABLE_REG) for the error code. If the compiler chooses another register, the fault handler will nevertheless store -EFAULT into %r29 and thus trash whatever this register is used for. Looking at the assembly I found that this happens sometimes in emulate_ldd(). To solve the issue, the easiest solution would be if it were somehow possible to tell the fault handler which register is used to hold the error code. Using %0 or %1 in the inline assembly is not possible, as it will show up as e.g. %r29 (with the "%r" prefix), which the GNU assembler cannot convert to an integer. This patch takes another, better and more flexible approach: we extend the __ex_table (which is out of the execution path) by one 32-bit word. In this word we tell the compiler to insert the assembler instruction "or %r0,%r0,%reg", where %reg references the register which the compiler chose for the error return code. In case of an access failure, the fault handler finds the __ex_table entry and can examine the opcode. The used register is encoded in the lowest 5 bits, and the fault handler can then store -EFAULT into this register. Since we extend the __ex_table to 3 words we can no longer use the BUILDTIME_TABLE_SORT config option. • https://git.kernel.org/stable/c/23027309b099ffc4efca5477009a11dccbdae592 https://git.kernel.org/stable/c/fa69a8063f8b27f3c7434a0d4f464a76a62f24d2 https://git.kernel.org/stable/c/ce31d79aa1f13a2345791f84935281a2c194e003 https://git.kernel.org/stable/c/8b1d72395635af45410b66cc4c4ab37a12c4a831
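To make the recovery step concrete: the extra exception-table word holds an "or %r0,%r0,%reg" opcode whose lowest 5 bits name the register the compiler picked, so the fault handler can write -EFAULT into exactly that register. The sketch below illustrates the decoding; the struct and function names are assumptions for illustration, not the literal parisc code:

    /* Sketch of the decoding described above (illustrative names only). */
    #define SKETCH_EFAULT 14        /* stand-in for the kernel's EFAULT */

    struct ex_table_entry_sketch {
            int insn;               /* offset of the faulting instruction */
            int fixup;              /* offset of the fixup code */
            int err_opcode;         /* new 3rd word: an "or %r0,%r0,%reg" opcode */
    };

    static void store_efault_in_chosen_reg(const struct ex_table_entry_sketch *fix,
                                           unsigned long gr[32])
    {
            unsigned int reg = fix->err_opcode & 0x1f;   /* target reg = lowest 5 bits */

            if (reg)                                     /* %r0 means no register recorded */
                    gr[reg] = (unsigned long)-SKETCH_EFAULT;
    }

As the description notes, this 3-word entry layout is also the reason the BUILDTIME_TABLE_SORT config option can no longer be used on parisc.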
CVE-2024-26704 – ext4: fix double-free of blocks due to wrong extents moved_len
https://notcve.org/view.php?id=CVE-2024-26704
In the Linux kernel, the following vulnerability has been resolved: ext4: fix double-free of blocks due to wrong extents moved_len. In ext4_move_extents(), moved_len is only updated once all moves have executed successfully, and the orig_inode and donor_inode preallocations are only discarded when moved_len is not zero. When the loop exits on a failure after some extents have already been moved, moved_len is not updated and remains 0, so the preallocations are not discarded. If the moved extents overlap with the preallocated extents, the overlapping extents are freed twice in ext4_mb_release_inode_pa() and ext4_process_freed_data() (as described in commit 94d7c16cbbbd ("ext4: Fix double-free of blocks with EXT4_IOC_MOVE_EXT")), and bb_free is incremented twice. Hence, when trim is executed, a zero-division bug is triggered in mb_update_avg_fragment_size() because bb_free is not zero while bb_fragments is zero. Therefore, update moved_len after each extent move to avoid the issue. • https://git.kernel.org/stable/c/fcf6b1b729bcd23f2b49a84fb33ffbb44712ee6a https://git.kernel.org/stable/c/b4fbb89d722cbb16beaaea234b7230faaaf68c71 https://git.kernel.org/stable/c/afbcad9ae7d6d11608399188f03a837451b6b3a1 https://git.kernel.org/stable/c/d033a555d9a1cf53dbf3301af7199cc4a4c8f537 https://git.kernel.org/stable/c/afba9d11320dad5ce222ac8964caf64b7b4bedb1 https://git.kernel.org/stable/c/185eab30486ba3e7bf8b9c2e049c79a06ffd2bc1 https://git.kernel.org/stable/c/2883940b19c38d5884c8626483811acf4d7e148f https://git.kernel.org/stable/c/559ddacb90da1d8786dd8ec4fd76bbfa4 • CWE-415: Double Free
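The shape of the fix, per the description, is to advance moved_len inside the loop rather than computing it only after a fully successful run, so that a partial move still leaves a non-zero moved_len and the preallocation-discard path runs. A simplified sketch of that control-flow change (everything except moved_len is an illustrative stand-in, not the literal ext4_move_extents() code):

    /* Sketch: report progress per iteration so a partial failure is visible. */
    static int move_extents_sketch(unsigned long nr_chunks,
                                   unsigned long long *moved_len)
    {
            *moved_len = 0;

            for (unsigned long i = 0; i < nr_chunks; i++) {
                    unsigned long cur_len = 1;   /* blocks moved in this pass */
                    int err = 0;                 /* stand-in for the per-chunk move */

                    if (err)
                            return err;          /* *moved_len already counts prior moves */

                    *moved_len += cur_len;       /* the fix: update after each move */
            }
            return 0;
    }

With moved_len kept current, the caller sees how much was actually moved even on error and discards the orig_inode/donor_inode preallocations, preventing the later double free of the overlapping blocks.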