
CVSS: 5.9 | EPSS: 0% | CPEs: 8 | EXPL: 0

In the Linux kernel, the following vulnerability has been resolved: speakup: Avoid crash on very long word In case a console is set up really large and contains a really long word (> 256 characters), we have to stop before the length of the word buffer. • https://git.kernel.org/stable/c/c6e3fd22cd538365bfeb82997d5b89562e077d42 https://git.kernel.org/stable/c/756c5cb7c09e537b87b5d3acafcb101b2ccf394f https://git.kernel.org/stable/c/8f6b62125befe1675446923e4171eac2c012959c https://git.kernel.org/stable/c/6401038acfa24cba9c28cce410b7505efadd0222 https://git.kernel.org/stable/c/0d130158db29f5e0b3893154908cf618896450a8 https://git.kernel.org/stable/c/89af25bd4b4bf6a71295f07e07a8ae7dc03c6595 https://git.kernel.org/stable/c/8defb1d22ba0395b81feb963b96e252b097ba76f https://git.kernel.org/stable/c/0efb15c14c493263cb3a5f65f5ddfd460 •
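The fix described above amounts to bounding a copy by the size of the word buffer. As a rough illustration only, here is a minimal standalone C sketch, not the actual speakup code; the buffer size constant, function, and variable names are all hypothetical:

    /* Standalone sketch (not the speakup source): accumulate a word into a
     * fixed-size buffer and stop before the end of the buffer, as the fix
     * describes for words longer than 256 characters. */
    #include <stdio.h>
    #include <string.h>

    #define WORD_BUF_SIZE 256   /* hypothetical buffer size from the description */

    /* Copy at most WORD_BUF_SIZE - 1 characters of the current word, always
     * NUL-terminating, instead of running past the end of 'buf'. */
    static size_t copy_word(char *buf, const char *line, size_t word_len)
    {
        size_t n = word_len;

        if (n >= WORD_BUF_SIZE)     /* clamp: stop before the end of the buffer */
            n = WORD_BUF_SIZE - 1;

        memcpy(buf, line, n);
        buf[n] = '\0';
        return n;
    }

    int main(void)
    {
        char very_long_word[1000];
        char buf[WORD_BUF_SIZE];

        memset(very_long_word, 'a', sizeof(very_long_word) - 1);
        very_long_word[sizeof(very_long_word) - 1] = '\0';

        size_t copied = copy_word(buf, very_long_word, strlen(very_long_word));
        printf("copied %zu of %zu characters\n", copied, strlen(very_long_word));
        return 0;
    }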

CVSS: 5.5 | EPSS: 0% | CPEs: 3 | EXPL: 0

In the Linux kernel, the following vulnerability has been resolved: Squashfs: check the inode number is not the invalid value of zero Syskiller has produced an out of bounds access in fill_meta_index(). That out of bounds access is ultimately caused because the inode has an inode number with the invalid value of zero, which was not checked. The reason this causes the out of bounds access is due to the following sequence of events: 1. Fill_meta_index() is called to allocate (via empty_meta_index()) and fill a metadata index. It however suffers a data read error and aborts, invalidating the newly returned empty metadata index. It does this by setting the inode number of the index to zero, which means unused (zero is not a valid inode number). 2. When fill_meta_index() is subsequently called again on another read operation, locate_meta_index() returns the previous index because it matches the inode number of 0. Because this index has been returned it is expected to have been filled, and because it hasn't been, an out of bounds access is performed. This patch adds a sanity check which checks that the inode number is not zero when the inode is created and returns -EINVAL if it is. [phillip@squashfs.org.uk: whitespace fix] Link: https://lkml.kernel.org/r/20240409204723.446925-1-phillip@squashfs.org.uk • https://git.kernel.org/stable/c/be383effaee3d89034f0828038f95065b518772e https://git.kernel.org/stable/c/7def00ebc9f2d6a581ddf46ce4541f84a10680e5 https://git.kernel.org/stable/c/9253c54e01b6505d348afbc02abaa4d9f8a01395 https://access.redhat.com/security/cve/CVE-2024-26982 https://bugzilla.redhat.com/show_bug.cgi?id=2278337 • CWE-125: Out-of-bounds Read •
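The patch described above is a sanity check at inode-creation time. The following standalone C sketch only illustrates that kind of check, returning -EINVAL for an inode number of zero; the structure and function names are hypothetical, not the real Squashfs ones:

    /* Standalone sketch (not the Squashfs source): reject an inode number of
     * zero at inode-creation time, so a later lookup that uses zero as the
     * "unused" marker can never match a real inode. */
    #include <errno.h>
    #include <stdio.h>

    struct fake_inode {                 /* hypothetical stand-in for the VFS inode */
        unsigned int i_ino;
    };

    static int check_inode_number(struct fake_inode *inode, unsigned int ino)
    {
        if (ino == 0)                   /* zero is reserved to mean "unused" */
            return -EINVAL;

        inode->i_ino = ino;
        return 0;
    }

    int main(void)
    {
        struct fake_inode inode;

        printf("ino=0  -> %d\n", check_inode_number(&inode, 0));  /* -EINVAL */
        printf("ino=42 -> %d\n", check_inode_number(&inode, 42)); /* 0 */
        return 0;
    }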

CVSS: - | EPSS: 0% | CPEs: 8 | EXPL: 0

In the Linux kernel, the following vulnerability has been resolved: nilfs2: fix OOB in nilfs_set_de_type The size of the nilfs_type_by_mode array in the fs/nilfs2/dir.c file is defined as "S_IFMT >> S_SHIFT", but the nilfs_set_de_type() function, which uses this array, specifies the index to read from the array in the same way as "(mode & S_IFMT) >> S_SHIFT".

    static void nilfs_set_de_type(struct nilfs_dir_entry *de, struct inode *inode)
    {
        umode_t mode = inode->i_mode;

        de->file_type = nilfs_type_by_mode[(mode & S_IFMT)>>S_SHIFT]; // oob
    }

However, when the index is determined this way, an out-of-bounds (OOB) error occurs by referring to an index that is 1 larger than the array size when the condition "mode & S_IFMT == S_IFMT" is satisfied. Therefore, a patch to resize the nilfs_type_by_mode array should be applied to prevent OOB errors.

• https://git.kernel.org/stable/c/2ba466d74ed74f073257f86e61519cb8f8f46184 https://git.kernel.org/stable/c/054f29e9ca05be3906544c5f2a2c7321c30a4243 https://git.kernel.org/stable/c/90f43980ea6be4ad903e389be9a27a2a0018f1c8 https://git.kernel.org/stable/c/7061c7efbb9e8f11ce92d6b4646405ea2b0b4de1 https://git.kernel.org/stable/c/bdbe483da21f852c93b22557b146bc4d989260f0 https://git.kernel.org/stable/c/897ac5306bbeb83e90c437326f7044c79a17c611 https://git.kernel.org/stable/c/2382eae66b196c31893984a538908c3eb7506ff9 https://git.kernel.org/stable/c/90823f8d9ecca3d5fa6b102c8e464c62f •
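To make the off-by-one concrete, here is a standalone C sketch, not the nilfs2 source, that assumes S_SHIFT is 12 (as in similar mode-to-type tables) and prints the old array size, the largest index the quoted expression can produce, and the size once the array gets the extra slot the description says the patch adds:

    /* Standalone sketch: why an array sized "S_IFMT >> S_SHIFT" is one element
     * short for an index computed as "(mode & S_IFMT) >> S_SHIFT", and how one
     * extra slot keeps every possible index in range. */
    #include <stdio.h>
    #include <sys/stat.h>           /* S_IFMT */

    #define S_SHIFT 12              /* assumed value, not taken from nilfs2 */

    /* Sized with the extra slot, so even mode & S_IFMT == S_IFMT stays in range. */
    static unsigned char type_by_mode[(S_IFMT >> S_SHIFT) + 1];

    int main(void)
    {
        unsigned int old_size  = S_IFMT >> S_SHIFT;             /* 15 with S_IFMT == 0170000 */
        unsigned int max_index = (S_IFMT & S_IFMT) >> S_SHIFT;  /* also 15 */

        printf("old array size  : %u (valid indices 0..%u)\n", old_size, old_size - 1);
        printf("max index used  : %u -> out of bounds for the old size\n", max_index);
        printf("fixed array size: %zu\n", sizeof(type_by_mode));
        return 0;
    }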

CVSS: 7.0 | EPSS: 0% | CPEs: 9 | EXPL: 0

In the Linux kernel, the following vulnerability has been resolved: KVM: Always flush async #PF workqueue when vCPU is being destroyed

Always flush the per-vCPU async #PF workqueue when a vCPU is clearing its completion queue, e.g. when a VM and all its vCPUs are being destroyed. KVM must ensure that none of its workqueue callbacks is running when the last reference to the KVM _module_ is put. Gifting a reference to the associated VM prevents the workqueue callback from dereferencing freed vCPU/VM memory, but does not prevent the KVM module from being unloaded before the callback completes.

Drop the misguided VM refcount gifting, as calling kvm_put_kvm() from async_pf_execute() if kvm_put_kvm() flushes the async #PF workqueue will result in deadlock. async_pf_execute() can't return until kvm_put_kvm() finishes, and kvm_put_kvm() can't return until async_pf_execute() finishes:

    WARNING: CPU: 8 PID: 251 at virt/kvm/kvm_main.c:1435 kvm_put_kvm+0x2d/0x320 [kvm]
    Modules linked in: vhost_net vhost vhost_iotlb tap kvm_intel kvm irqbypass
    CPU: 8 PID: 251 Comm: kworker/8:1 Tainted: G W 6.6.0-rc1-e7af8d17224a-x86/gmem-vm #119
    Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 0.0.0 02/06/2015
    Workqueue: events async_pf_execute [kvm]
    RIP: 0010:kvm_put_kvm+0x2d/0x320 [kvm]
    Call Trace:
     <TASK>
     async_pf_execute+0x198/0x260 [kvm]
     process_one_work+0x145/0x2d0
     worker_thread+0x27e/0x3a0
     kthread+0xba/0xe0
     ret_from_fork+0x2d/0x50
     ret_from_fork_asm+0x11/0x20
     </TASK>
    ---[ end trace 0000000000000000 ]---
    INFO: task kworker/8:1:251 blocked for more than 120 seconds.
    Tainted: G W 6.6.0-rc1-e7af8d17224a-x86/gmem-vm #119
    "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
    task:kworker/8:1 state:D stack:0 pid:251 ppid:2 flags:0x00004000
    Workqueue: events async_pf_execute [kvm]
    Call Trace:
     <TASK>
     __schedule+0x33f/0xa40
     schedule+0x53/0xc0
     schedule_timeout+0x12a/0x140
     __wait_for_common+0x8d/0x1d0
     __flush_work.isra.0+0x19f/0x2c0
     kvm_clear_async_pf_completion_queue+0x129/0x190 [kvm]
     kvm_arch_destroy_vm+0x78/0x1b0 [kvm]
     kvm_put_kvm+0x1c1/0x320 [kvm]
     async_pf_execute+0x198/0x260 [kvm]
     process_one_work+0x145/0x2d0
     worker_thread+0x27e/0x3a0
     kthread+0xba/0xe0
     ret_from_fork+0x2d/0x50
     ret_from_fork_asm+0x11/0x20
     </TASK>

If kvm_clear_async_pf_completion_queue() actually flushes the workqueue, then there's no need to gift async_pf_execute() a reference because all invocations of async_pf_execute() will be forced to complete before the vCPU and its VM are destroyed/freed. And that in turn fixes the module unloading bug as __fput() won't do module_put() on the last vCPU reference until the vCPU has been freed, e.g. if closing the vCPU file also puts the last reference to the KVM module.

Note that kvm_check_async_pf_completion() may also take the work item off the completion queue and so also needs to flush the work queue, as the work will not be seen by kvm_clear_async_pf_completion_queue(). Waiting on the workqueue could theoretically delay a vCPU due to waiting for the work to complete, but that's a very, very small chance, and likely a very small delay.
• https://git.kernel.org/stable/c/af585b921e5d1e919947c4b1164b59507fe7cd7b https://git.kernel.org/stable/c/ab2c2f5d9576112ad22cfd3798071cb74693b1f5 https://git.kernel.org/stable/c/82e25cc1c2e93c3023da98be282322fc08b61ffb https://git.kernel.org/stable/c/f8730d6335e5f43d09151fca1f0f41922209a264 https://git.kernel.org/stable/c/83d3c5e309611ef593e2fcb78444fc8ceedf9bac https://git.kernel.org/stable/c/b54478d20375874aeee257744dedfd3e413432ff https://git.kernel.org/stable/c/a75afe480d4349c524d9c659b1a5a544dbc39a98 https://git.kernel.org/stable/c/4f3a3bce428fb439c66a578adc447afce • CWE-400: Uncontrolled Resource Consumption •
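The ordering argued for in the description, flush the outstanding async work before freeing the object so the work item no longer has to drop a reference from inside itself, can be sketched with a plain pthread analogue. This is not KVM code; every name in it is hypothetical. (Build with -pthread.)

    /* Standalone pthread sketch: the teardown path waits for ("flushes") the
     * async work before freeing the object, so the work does not need to hold
     * a reference that it would later have to drop from within itself. */
    #include <pthread.h>
    #include <stdio.h>
    #include <stdlib.h>

    struct fake_vm {                    /* hypothetical stand-in for struct kvm */
        int id;
        pthread_t async_pf_worker;
    };

    static void *async_pf_execute(void *arg)
    {
        struct fake_vm *vm = arg;
        /* ... resolve the async page fault using vm ... */
        printf("async #PF work done for vm %d\n", vm->id);
        return NULL;                    /* no reference to drop in the fixed flow */
    }

    static void destroy_vm(struct fake_vm *vm)
    {
        /* Analogue of flushing the async #PF workqueue: wait for the
         * outstanding work to finish before freeing the VM it dereferences. */
        pthread_join(vm->async_pf_worker, NULL);
        free(vm);
    }

    int main(void)
    {
        struct fake_vm *vm = malloc(sizeof(*vm));

        if (!vm)
            return 1;

        vm->id = 1;
        pthread_create(&vm->async_pf_worker, NULL, async_pf_execute, vm);
        destroy_vm(vm);                 /* safe: work is flushed first */
        return 0;
    }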

CVSS: 5.5 | EPSS: 0% | CPEs: 3 | EXPL: 0

In the Linux kernel, the following vulnerability has been resolved: dm-raid456, md/raid456: fix a deadlock for dm-raid456 while io concurrent with reshape

For raid456, if reshape is still in progress, then IO across the reshape position will wait for reshape to make progress. However, for dm-raid, in the following cases reshape will never make progress, hence IO will hang:

1) the array is read-only;
2) MD_RECOVERY_WAIT is set;
3) MD_RECOVERY_FROZEN is set;

After commit c467e97f079f ("md/raid6: use valid sector values to determine if an I/O should wait on the reshape") fixed the problem that IO across the reshape position doesn't wait for reshape, the dm-raid test shell/lvconvert-raid-reshape.sh started to hang:

    [root@fedora ~]# cat /proc/979/stack
    [<0>] wait_woken+0x7d/0x90
    [<0>] raid5_make_request+0x929/0x1d70 [raid456]
    [<0>] md_handle_request+0xc2/0x3b0 [md_mod]
    [<0>] raid_map+0x2c/0x50 [dm_raid]
    [<0>] __map_bio+0x251/0x380 [dm_mod]
    [<0>] dm_submit_bio+0x1f0/0x760 [dm_mod]
    [<0>] __submit_bio+0xc2/0x1c0
    [<0>] submit_bio_noacct_nocheck+0x17f/0x450
    [<0>] submit_bio_noacct+0x2bc/0x780
    [<0>] submit_bio+0x70/0xc0
    [<0>] mpage_readahead+0x169/0x1f0
    [<0>] blkdev_readahead+0x18/0x30
    [<0>] read_pages+0x7c/0x3b0
    [<0>] page_cache_ra_unbounded+0x1ab/0x280
    [<0>] force_page_cache_ra+0x9e/0x130
    [<0>] page_cache_sync_ra+0x3b/0x110
    [<0>] filemap_get_pages+0x143/0xa30
    [<0>] filemap_read+0xdc/0x4b0
    [<0>] blkdev_read_iter+0x75/0x200
    [<0>] vfs_read+0x272/0x460
    [<0>] ksys_read+0x7a/0x170
    [<0>] __x64_sys_read+0x1c/0x30
    [<0>] do_syscall_64+0xc6/0x230
    [<0>] entry_SYSCALL_64_after_hwframe+0x6c/0x74

This is because reshape can't make progress. For md/raid, the problem doesn't exist because registering a new sync_thread doesn't rely on the IO being done any more:

1) If the array is read-only, it can switch to read-write by ioctl/sysfs;
2) md/raid never sets MD_RECOVERY_WAIT;
3) If MD_RECOVERY_FROZEN is set, mddev_suspend() doesn't hold 'reconfig_mutex', hence it can be cleared and reshape can continue via the sysfs api 'sync_action'.

However, I'm not sure yet how to avoid the problem in dm-raid. This patch on the one hand makes sure raid_message() can't change sync_thread() after presuspend(), and on the other hand detects the above 3 cases before waiting for IO to be done in dm_suspend(), and lets dm-raid requeue those IOs.

• https://git.kernel.org/stable/c/5943a34bf6bab5801e08a55f63e1b8d5bc90dae1 https://git.kernel.org/stable/c/a8d249d770cb357d16a2097b548d2e4c1c137304 https://git.kernel.org/stable/c/41425f96d7aa59bc865f60f5dda3d7697b555677 https://access.redhat.com/security/cve/CVE-2024-26962 https://bugzilla.redhat.com/show_bug.cgi?id=2278174 •
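A standalone C sketch of the decision the description outlines: before blocking an IO on reshape progress, check the three conditions under which dm-raid's reshape can never advance and requeue the IO instead. All structure, flag, and function names here are illustrative, not the dm-raid ones:

    /* Standalone sketch: classify an IO as "wait for reshape" or "requeue"
     * based on the three no-progress conditions from the description. */
    #include <stdbool.h>
    #include <stdio.h>

    enum io_action { IO_WAIT_FOR_RESHAPE, IO_REQUEUE };

    struct fake_array {
        bool read_only;
        bool recovery_wait_set;     /* MD_RECOVERY_WAIT   in the description */
        bool recovery_frozen_set;   /* MD_RECOVERY_FROZEN in the description */
    };

    static enum io_action classify_io(const struct fake_array *a)
    {
        /* In any of these states reshape cannot advance, so waiting would hang. */
        if (a->read_only || a->recovery_wait_set || a->recovery_frozen_set)
            return IO_REQUEUE;

        return IO_WAIT_FOR_RESHAPE;
    }

    int main(void)
    {
        struct fake_array frozen = { .recovery_frozen_set = true };
        struct fake_array normal = { 0 };

        printf("frozen array: %s\n",
               classify_io(&frozen) == IO_REQUEUE ? "requeue IO" : "wait");
        printf("normal array: %s\n",
               classify_io(&normal) == IO_REQUEUE ? "requeue IO" : "wait");
        return 0;
    }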