CVE-2023-52589 – media: rkisp1: Fix IRQ disable race issue
https://notcve.org/view.php?id=CVE-2023-52589
In the Linux kernel, the following vulnerability has been resolved: media: rkisp1: Fix IRQ disable race issue. In rkisp1_isp_stop() and rkisp1_csi_disable() the driver masks the interrupts and then apparently assumes that the interrupt handler won't be running, and proceeds with the stop procedure. This is not the case: the interrupt handler can already be running, which would lead to the ISP being disabled while the interrupt handler is handling a captured frame. This brings up two issues: 1) the ISP could be powered off while the interrupt handler is still running and accessing registers, leading to board lockup, and 2) the interrupt handler code and the code that disables the streaming might do things that conflict. It is not clear to me if 2) causes a real issue, but 1) can be seen with a suitable delay (or printk in my case) in the interrupt handler, leading to board lockup.
• https://git.kernel.org/stable/c/bf808f58681cab64c81cd814551814fd34e540fe https://git.kernel.org/stable/c/fab483438342984f2a315fe13c882a80f0f7e545 https://git.kernel.org/stable/c/7bb1a2822aa2c2de4e09bf7c56dd93bd532f1fa7 https://git.kernel.org/stable/c/870565f063a58576e8a4529f122cac4325c6b395
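The general pattern for closing this kind of race is to mask the interrupt sources in hardware and then wait for any handler that is already executing to return before powering the block down. Below is a minimal sketch of that pattern; the struct, register offset, and stop function are hypothetical (not the actual rkisp1 code), while synchronize_irq() is the real kernel API that provides the wait.

    /* Hypothetical stop path: mask the device's interrupt sources, then
     * wait for any handler already executing on another CPU to return
     * before the hardware is powered down.
     */
    #include <linux/interrupt.h>
    #include <linux/io.h>

    #define DEMO_ISP_IMSC 0x0100          /* interrupt mask register (hypothetical) */

    struct demo_isp {
            void __iomem *regs;           /* MMIO registers (hypothetical) */
            int irq;                      /* Linux IRQ number */
    };

    static void demo_isp_stop(struct demo_isp *isp)
    {
            /* 1. Mask all interrupt sources at the device. */
            writel(0, isp->regs + DEMO_ISP_IMSC);

            /* 2. Wait for a handler that may already be running to complete.
             *    Without this step the ISP could be powered off while the
             *    handler is still accessing its registers.
             */
            synchronize_irq(isp->irq);

            /* 3. Only now is it safe to disable clocks / power off the block. */
    }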
CVE-2023-52588 – f2fs: fix to tag gcing flag on page during block migration
https://notcve.org/view.php?id=CVE-2023-52588
In the Linux kernel, the following vulnerability has been resolved: f2fs: fix to tag gcing flag on page during block migration. The missing gcing flag needs to be added on the page during block migration, in order to guarantee that migrated data is persisted during checkpoint; otherwise, out-of-order persistence between data and node may cause data corruption after SPOR. A similar issue was fixed by commit 2d1fe8a86bf5 ("f2fs: fix to tag gcing flag on page during file defragment").
• https://git.kernel.org/stable/c/7ea0f29d9fd84905051be020c0df7d557e286136 https://git.kernel.org/stable/c/7c972c89457511007dfc933814c06786905e515c https://git.kernel.org/stable/c/417b8a91f4e8831cadaf85c3f15c6991c1f54dde https://git.kernel.org/stable/c/b8094c0f1aae329b1c60a275a780d6c2c9ff7aa3 https://git.kernel.org/stable/c/4961acdd65c956e97c1a000c82d91a8c1cdbe44b
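Conceptually, the fix tags each data page touched by the migration with f2fs's "gcing" page-private flag so that the following checkpoint writes the migrated data back before the corresponding node updates reach disk. The fragment below is a rough, simplified sketch of that pattern, assuming a locked data page inside the migration loop; it is not the literal upstream hunk, though the helpers it names (f2fs_wait_on_page_writeback, set_page_private_gcing, set_page_dirty, f2fs_put_page) are existing f2fs/mm functions.

    /* Simplified illustration (not the literal upstream diff): inside the
     * per-page migration loop, after the data page has been locked,
     * tag it as "gcing" so the next checkpoint persists the migrated
     * data before the node updates, avoiding corruption after SPOR.
     */
    f2fs_wait_on_page_writeback(page, DATA, true, true);
    set_page_private_gcing(page);   /* the missing tag this fix adds */
    set_page_dirty(page);
    f2fs_put_page(page, 1);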
CVE-2023-52587 – IB/ipoib: Fix mcast list locking
https://notcve.org/view.php?id=CVE-2023-52587
In the Linux kernel, the following vulnerability has been resolved: IB/ipoib: Fix mcast list locking. Releasing the `priv->lock` while iterating the `priv->multicast_list` in `ipoib_mcast_join_task()` opens a window for `ipoib_mcast_dev_flush()` to remove the items while in the middle of iteration. If the mcast is removed while the lock is dropped, the for loop spins forever, resulting in a hard lockup (as was reported on the RHEL 4.18.0-372.75.1.el8_6 kernel):

    Task A (kworker/u72:2 below)        | Task B (kworker/u72:0 below)
    ------------------------------------+-----------------------------------
    ipoib_mcast_join_task(work)         | ipoib_ib_dev_flush_light(work)
      spin_lock_irq(&priv->lock)        | __ipoib_ib_dev_flush(priv, ...)
      list_for_each_entry(mcast,        |   ipoib_mcast_dev_flush(dev = priv->dev)
        &priv->multicast_list, list)    |
        ipoib_mcast_join(dev, mcast)    |
          spin_unlock_irq(&priv->lock)  |
                                        |   spin_lock_irqsave(&priv->lock, flags)
                                        |   list_for_each_entry_safe(mcast, tmcast,
                                        |       &priv->multicast_list, list)
                                        |     list_del(&mcast->list);
                                        |     list_add_tail(&mcast->list, &remove_list)
                                        |   spin_unlock_irqrestore(&priv->lock, flags)
          spin_lock_irq(&priv->lock)    |
                                        |   ipoib_mcast_remove_list(&remove_list)
    (Here, `mcast` is no longer on the  |     list_for_each_entry_safe(mcast, tmcast,
     `priv->multicast_list` and we keep |         remove_list, list)
     spinning on the `remove_list` of   | >>>   wait_for_completion(&mcast->done)
     the other thread which is blocked  |
     and the list is still valid on     |
     its stack.)                        |

Fix this by keeping the lock held and changing to GFP_ATOMIC to prevent eventual sleeps. Unfortunately we could not reproduce the lockup and confirm this fix, but based on the code review I think this fix should address such lockups.

    crash> bc 31
    PID: 747     TASK: ff1c6a1a007e8000  CPU: 31   COMMAND: "kworker/u72:2"
    --
    [exception RIP: ipoib_mcast_join_task+0x1b1]
    RIP: ffffffffc0944ac1  RSP: ff646f199a8c7e00  RFLAGS: 00000002
    RAX: 0000000000000000  RBX: ff1c6a1a04dc82f8  RCX: 0000000000000000
                           work (&priv->mcast_task{,.work})
    RDX: ff1c6a192d60ac68  RSI: 0000000000000286  RDI: ff1c6a1a04dc8000
         &mcast->list
    RBP: ff646f199a8c7e90   R8: ff1c699980019420   R9: ff1c6a1920c9a000
    R10: ff646f199a8c7e00  R11: ff1c6a191a7d9800  R12: ff1c6a192d60ac00
                                                  mcast
    R13: ff1c6a1d82200000  R14: ff1c6a1a04dc8000  R15: ff1c6a1a04dc82d8
                           dev priv (&priv->lock) &priv->multicast_list (aka head)
    ORIG_RAX: ffffffffffffffff  CS: 0010  SS: 0018
    --- <NMI exception stack> ---
     #5 [ff646f199a8c7e00] ipoib_mcast_join_task+0x1b1 at ffffffffc0944ac1 [ib_ipoib]
     #6 [ff646f199a8c7e98] process_one_work+0x1a7 at ffffffff9bf10967
    crash> rx ff646f199a8c7e68
    ff646f199a8c7e68: ff1c6a1a04dc82f8 <<< work = &priv->mcast_task.work
    crash> list -hO ipoib_dev_priv.multicast_list ff1c6a1a04dc8000
    (empty)
    crash> ipoib_dev_priv.mcast_task.work.func,mcast_mutex.owner.counter ff1c6a1a04dc8000
    mcast_task.work.func = 0xffffffffc0944910 <ipoib_mcast_join_task>,
    mcast_mutex.owner.counter = 0xff1c69998efec000
    crash> b 8
    PID: 8       TASK: ff1c69998efec000  CPU: 33   COMMAND: "kworker/u72:0"
    --
     #3 [ff646f1980153d50] wait_for_completion+0x96 at ffffffff9c7d7646
     #4 [ff646f1980153d90] ipoib_mcast_remove_list+0x56 at ffffffffc0944dc6 [ib_ipoib]
     #5 [ff646f1980153de8] ipoib_mcast_dev_flush+0x1a7 at ffffffffc09455a7 [ib_ipoib]
     #6 [ff646f1980153e58] __ipoib_ib_dev_flush+0x1a4 at ffffffffc09431a4 [ib_ipoib]
     #7 [ff ---truncated---

• https://git.kernel.org/stable/c/4c8922ae8eb8dcc1e4b7d1059d97a8334288d825 https://git.kernel.org/stable/c/615e3adc2042b7be4ad122a043fc9135e6342c90 https://git.kernel.org/stable/c/ac2630fd3c90ffec34a0bfc4d413668538b0e8f2 https://git.kernel.org/stable/c/ed790bd0903ed3352ebf7f650d910f49b7319b34 https://git.kernel.org/stable/c/5108a2dc2db5630fb6cd58b8be80a0c134bc310a https://git.kernel.org/stable/c/342258fb46d66c1b4c7e2c3717ac01e10c03cf18 https://git.kernel.org/stable/c/7c7bd4d561e9dc6f5b7df9e184974915f6701a89 https://git.kernel.org/stable/c/4f973e211b3b1c6d36f7c6a19239d2588
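The shape of the fix is generic: do not drop the spinlock in the middle of the list walk, and make any allocation performed under it atomic so nothing sleeps while the lock is held. The following is a self-contained sketch of that pattern using hypothetical names (demo_priv, demo_group, demo_join_all), not the ipoib code itself.

    /* Generic illustration of the fixed pattern: iterate a driver list
     * without dropping the protecting spinlock, and use GFP_ATOMIC for
     * any allocation made while the lock is held.
     */
    #include <linux/list.h>
    #include <linux/slab.h>
    #include <linux/spinlock.h>

    struct demo_group {
            struct list_head list;
            void *record;
    };

    struct demo_priv {
            spinlock_t lock;
            struct list_head multicast_list;
    };

    static void demo_join_all(struct demo_priv *priv)
    {
            struct demo_group *grp;

            spin_lock_irq(&priv->lock);
            list_for_each_entry(grp, &priv->multicast_list, list) {
                    /* An allocation under a spinlock must not sleep, hence
                     * GFP_ATOMIC; dropping the lock here instead is exactly
                     * what opened the race with the flush path.
                     */
                    grp->record = kzalloc(64, GFP_ATOMIC);
                    if (!grp->record)
                            break;
            }
            spin_unlock_irq(&priv->lock);
    }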
CVE-2023-52586 – drm/msm/dpu: Add mutex lock in control vblank irq
https://notcve.org/view.php?id=CVE-2023-52586
In the Linux kernel, the following vulnerability has been resolved: drm/msm/dpu: Add mutex lock in control vblank irq. Add a mutex lock to the control-vblank-irq path to synchronize vblank enable/disable operations happening from different threads, preventing race conditions while registering/unregistering the vblank irq callback.
v4: removed vblank_ctl_lock from dpu_encoder_virt, so it is only a parameter of dpu_encoder_phys; switched from an atomic refcnt to a simple int counter now that the mutex has been added.
v3: mistakenly did not change the wording in the last version; it is done now.
v2: slightly changed the wording of the commit message.
Patchwork: https://patchwork.freedesktop.org/patch/571854/
• https://git.kernel.org/stable/c/14f109bf74dd67e1d0469fed859c8e506b0df53f https://git.kernel.org/stable/c/45284ff733e4caf6c118aae5131eb7e7cf3eea5a
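The fix follows a common pattern: serialize the vblank refcount and the irq callback registration/unregistration behind a single mutex, after which a plain int counter is sufficient. Below is a small illustrative sketch using hypothetical names (demo_phys_enc, demo_vblank_irq_control), not the actual dpu_encoder_phys code.

    /* Serialize vblank enable/disable from different threads with one
     * mutex, so the refcount and the callback (un)registration cannot
     * race against each other.
     */
    #include <linux/mutex.h>
    #include <linux/types.h>

    struct demo_phys_enc {
            struct mutex vblank_ctl_lock;  /* guards the fields below */
            int vblank_refcount;           /* plain int is enough under the mutex */
    };

    static void demo_vblank_irq_control(struct demo_phys_enc *phys, bool enable)
    {
            mutex_lock(&phys->vblank_ctl_lock);
            if (enable) {
                    if (phys->vblank_refcount++ == 0) {
                            /* first user: register the vblank irq callback here */
                    }
            } else {
                    if (--phys->vblank_refcount == 0) {
                            /* last user: unregister the vblank irq callback here */
                    }
            }
            mutex_unlock(&phys->vblank_ctl_lock);
    }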
CVE-2023-52585 – drm/amdgpu: Fix possible NULL dereference in amdgpu_ras_query_error_status_helper()
https://notcve.org/view.php?id=CVE-2023-52585
In the Linux kernel, the following vulnerability has been resolved: drm/amdgpu: Fix possible NULL dereference in amdgpu_ras_query_error_status_helper(). Return the error code -EINVAL for an invalid block id. Fixes the below: drivers/gpu/drm/amd/amdgpu/amdgpu_ras.c:1183 amdgpu_ras_query_error_status_helper() error: we previously assumed 'info' could be null (see line 1176)
• https://git.kernel.org/stable/c/467139546f3fb93913de064461b1a43a212d7626 https://git.kernel.org/stable/c/0eb296233f86750102aa43b97879b8d8311f249a https://git.kernel.org/stable/c/7e6d6f27522bcd037856234b720ff607b9c4a09b https://git.kernel.org/stable/c/92cb363d16ac1e41c9764cdb513d0e89a6ff4915 https://git.kernel.org/stable/c/c364e7a34c85c2154fb2e47561965d5b5a0b69b1 https://git.kernel.org/stable/c/195a6289282e039024ad30ba66e6f94a4d0fbe49 https://git.kernel.org/stable/c/b8d55a90fd55b767c25687747e2b24abd1ef8680 https://lists.debian.org/debian-lts-announce/2024/06/
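The fix boils down to validating the argument before it is dereferenced and returning -EINVAL early. Below is a self-contained sketch of that shape using hypothetical names (demo_dev, demo_query_if, demo_query_error_status_helper), not the amdgpu helper itself.

    /* Validate the query argument up front instead of dereferencing a
     * possibly-NULL pointer (or an out-of-range block id) further down.
     */
    #include <linux/errno.h>

    #define DEMO_BLOCK_COUNT 16            /* hypothetical number of RAS blocks */

    struct demo_dev {
            int unused;
    };

    struct demo_query_if {
            unsigned int block_id;
    };

    static int demo_query_error_status_helper(struct demo_dev *dev,
                                              struct demo_query_if *info)
    {
            if (!info)
                    return -EINVAL;        /* bail out before any dereference */

            if (info->block_id >= DEMO_BLOCK_COUNT)
                    return -EINVAL;        /* invalid block id, as the fix describes */

            /* safe to use info->block_id from here on */
            return 0;
    }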