
CVSS: - | EPSS: 0% | CPEs: 12 | EXPL: 0

In the Linux kernel, the following vulnerability has been resolved: soc: fsl: qbman: Use raw spinlock for cgr_lock. smp_call_function always runs its callback in hard IRQ context, even on PREEMPT_RT, where spinlocks can sleep. So we need to use a raw spinlock for cgr_lock to ensure we aren't waiting on a sleeping task. Although this bug has existed for a while, it was not apparent until commit ef2a8d5478b9 ("net: dpaa: Adjust queue depth on rate change"), which invokes smp_call_function_single via qman_update_cgr_safe every time a link goes up or down.

https://git.kernel.org/stable/c/96f413f47677366e0ae03797409bfcc4151dbf9e
https://git.kernel.org/stable/c/a85c525bbff4d7467d7f0ab6fed8e2f787b073d6
https://git.kernel.org/stable/c/29cd9c2d1f428c281962135ea046a9d7bda88d14
https://git.kernel.org/stable/c/5b10a404419f0532ef3ba990c12bebe118adb6d7
https://git.kernel.org/stable/c/2b3fede8225133671ce837c0d284804aa3bc7a02
https://git.kernel.org/stable/c/ff50716b7d5b7985979a5b21163cd79fb3d21d59
https://git.kernel.org/stable/c/32edca2f03a6cc42c650ddc3ad83d086e3f365d1
https://git.kernel.org/stable/c/9a3ca8292ce9fdcce122706c28c3f07bc
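The fix described above is a lock-type conversion. Below is a minimal sketch (hypothetical structure and function names, not the upstream qbman patch) showing why a raw spinlock is needed: the callback passed to smp_call_function_single() runs in hard IRQ context, so the lock it takes must never sleep, even on PREEMPT_RT.

#include <linux/types.h>
#include <linux/spinlock.h>
#include <linux/smp.h>

/* Hypothetical stand-in for the qman portal state guarded by cgr_lock. */
struct cgr_state_example {
	raw_spinlock_t cgr_lock;	/* was: spinlock_t cgr_lock; */
	u32 cgr_mask;
};

/*
 * Callback run via smp_call_function_single(); it executes with
 * interrupts disabled, so only a raw spinlock (which never sleeps,
 * even on PREEMPT_RT) may be taken here.
 */
static void cgr_update_on_cpu(void *data)
{
	struct cgr_state_example *s = data;
	unsigned long flags;

	raw_spin_lock_irqsave(&s->cgr_lock, flags);
	s->cgr_mask |= 1;		/* placeholder for the real CGR update */
	raw_spin_unlock_irqrestore(&s->cgr_lock, flags);
}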

CVSS: - | EPSS: 0% | CPEs: 5 | EXPL: 0

In the Linux kernel, the following vulnerability has been resolved: LoongArch: Define the __io_aw() hook as mmiowb(). Commit fb24ea52f78e0d595852e ("drivers: Remove explicit invocations of mmiowb()") removed all mmiowb() calls in drivers, but it says: "NOTE: mmiowb() has only ever guaranteed ordering in conjunction with spin_unlock(). However, pairing each mmiowb() removal in this patch with the corresponding call to spin_unlock() is not at all trivial, so there is a small chance that this change may regress any drivers incorrectly relying on mmiowb() to order MMIO writes between CPUs using lock-free synchronisation." The MMIO in radeon_ring_commit() is protected by a mutex rather than a spinlock, but in the mutex fastpath it behaves similarly to a spinlock. We could add mmiowb() calls in the radeon driver, but the maintainer does not like such a workaround, and radeon is not the only example of mutex-protected MMIO. We would therefore have to extend the mmiowb tracking system from spinlocks to mutexes, and maybe to other locking primitives, which is neither easy nor robust. So we solve it in the architectural code instead, by simply defining the __io_aw() hook as mmiowb().

https://git.kernel.org/stable/c/97cd43ba824aec764f5ea2790d0c0a318f885167
https://git.kernel.org/stable/c/d7d7c6cdea875be3b241d7d39873bb431db7154d
https://git.kernel.org/stable/c/0b61a7dc6712b78799b3949997e8a5e94db5c4b0
https://git.kernel.org/stable/c/9adec248bba33b1503252caf8e59d81febfc5ceb
https://git.kernel.org/stable/c/9c68ece8b2a5c5ff9b2fcaea923dd73efeb174cd
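A minimal sketch of the architectural approach described above, assuming the asm-generic MMIO accessors (writel() and friends) invoke the __io_aw() hook after each write; this is illustrative, not a verbatim copy of the LoongArch change, and the writel_sketch() helper is simplified.

/*
 * In the architecture's asm/io.h, define the "after write" hook to be a
 * write barrier before pulling in the generic accessors, so every
 * writel()/writew()/writeb() is followed by mmiowb() automatically:
 */
#define __io_aw()	mmiowb()

#include <asm-generic/io.h>

/*
 * Simplified view of what a generic accessor then does; the driver no
 * longer needs an explicit mmiowb() after its MMIO writes, whether the
 * surrounding lock is a spinlock or a mutex.
 */
static inline void writel_sketch(u32 value, volatile void __iomem *addr)
{
	__raw_writel(value, addr);
	__io_aw();		/* now expands to mmiowb() on this architecture */
}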

CVSS: 5.5 | EPSS: 0% | CPEs: 6 | EXPL: 0

In the Linux kernel, the following vulnerability has been resolved: drm/amdgpu: amdgpu_ttm_gart_bind set gtt bound flag. Otherwise, after the GTT bo is released, the GTT and GART space is freed, but amdgpu_ttm_backend_unbind will not clear the GART page-table entry, leaving a valid mapping entry pointing to the stale system page. If the GPU then mistakenly accesses that GART address, it reads an undefined value instead of taking a page fault, making the real issue harder to debug and reproduce.

https://git.kernel.org/stable/c/5d5f1a7f3b1039925f79c7894f153c2a905201fb
https://git.kernel.org/stable/c/589c414138a1bed98e652c905937d8f790804efe
https://git.kernel.org/stable/c/6fcd12cb90888ef2d8af8d4c04e913252eee4ef3
https://git.kernel.org/stable/c/e8d27caef2c829a306e1f762fb95f06e8ec676f6
https://git.kernel.org/stable/c/5cdce3dda3b3dacde902f63a8ee72c2b7f91912d
https://git.kernel.org/stable/c/6c6064cbe58b43533e3451ad6a8ba9736c109ac3
https://access.redhat.com/security/cve/CVE-2024-35817
https://bugzilla.redhat.com/show_bug.cgi?id=2281202
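The change described above amounts to keeping a "bound" flag in sync with the GART mapping so the unbind path knows it has page-table entries to clear. A minimal standalone sketch with hypothetical names (not the amdgpu code) is shown below.

#include <stdbool.h>

/* Hypothetical stand-in for the GTT object's binding state. */
struct gtt_example {
	bool bound;
};

static void gart_bind_example(struct gtt_example *gtt)
{
	/* ... program the GART page-table entries ... */
	gtt->bound = true;	/* the flag the fix sets after binding */
}

static void backend_unbind_example(struct gtt_example *gtt)
{
	if (!gtt->bound)
		return;		/* without the flag set, stale PTEs were left behind */
	/* ... clear the GART page-table entries ... */
	gtt->bound = false;
}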

CVSS: - | EPSS: 0% | CPEs: 7 | EXPL: 0

In the Linux kernel, the following vulnerability has been resolved: fs/aio: Check IOCB_AIO_RW before the struct aio_kiocb conversion. The first kiocb_set_cancel_fn() argument may point at a struct kiocb that is not embedded inside struct aio_kiocb. With the current code, depending on the compiler, the req->ki_ctx read happens either before or after the IOCB_AIO_RW test. Move the req->ki_ctx read such that the IOCB_AIO_RW test is guaranteed to happen first.

https://git.kernel.org/stable/c/337b543e274fe7a8f47df3c8293cc6686ffa620f
https://git.kernel.org/stable/c/b4eea7a05ee0ab5ab0514421e6ba8c5d249cf942
https://git.kernel.org/stable/c/ea1cd64d59f22d6d13f367d62ec6e27b9344695f
https://git.kernel.org/stable/c/d7b6fa97ec894edd02f64b83e5e72e1aa352f353
https://git.kernel.org/stable/c/18f614369def2a11a52f569fe0f910b199d13487
https://git.kernel.org/stable/c/e7e23fc5d5fe422827c9a43ecb579448f73876c7
https://git.kernel.org/stable/c/1dc7d74fe456944a9b1c57bd776280249f441ac6
https://git.kernel.org/stable/c/10ca82aff58434e122c7c757cf0497c33
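A sketch of the reordering described above, assuming the fs/aio.c-internal types (struct aio_kiocb, struct kioctx, kiocb_cancel_fn) and field names; it illustrates the structure of the fix, not the exact upstream function body. The key point is that the IOCB_AIO_RW test comes before anything derived from the container_of() conversion is dereferenced.

void kiocb_set_cancel_fn_sketch(struct kiocb *iocb, kiocb_cancel_fn *cancel)
{
	struct aio_kiocb *req;
	struct kioctx *ctx;
	unsigned long flags;

	/*
	 * Bail out first: only AIO read/write requests are embedded in
	 * struct aio_kiocb, so the conversion below is only valid when
	 * IOCB_AIO_RW is set.
	 */
	if (!(iocb->ki_flags & IOCB_AIO_RW))
		return;

	req = container_of(iocb, struct aio_kiocb, rw);
	ctx = req->ki_ctx;	/* now guaranteed to follow the flag test */

	spin_lock_irqsave(&ctx->ctx_lock, flags);
	req->ki_cancel = cancel;
	spin_unlock_irqrestore(&ctx->ctx_lock, flags);
}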

CVSS: - | EPSS: 0% | CPEs: 8 | EXPL: 0

In the Linux kernel, the following vulnerability has been resolved: mmc: core: Avoid negative index with array access. Commit 4d0c8d0aef63 ("mmc: core: Use mrq.sbc in close-ended ffu") assigns prev_idata = idatas[i - 1], but doesn't check that the iterator i is greater than zero. Fix this by adding a check.

https://git.kernel.org/stable/c/f49f9e802785291149bdc9c824414de4604226b4
https://git.kernel.org/stable/c/59020bf0999ff7da8aedcd00ef8f0d75d93b6d20
https://git.kernel.org/stable/c/50b8b7a22e90bab9f1949b64a88ff17ab10913ec
https://git.kernel.org/stable/c/c4edcd134bb72b3b0acc884612d624e48c9d057f
https://git.kernel.org/stable/c/1653a8102868264f3488c298a9f20af2add9a288
https://git.kernel.org/stable/c/eed9119f8f8e8fbf225c08abdbb58597fba807e0
https://git.kernel.org/stable/c/4d0c8d0aef6355660b6775d57ccd5d4ea2e15802
https://git.kernel.org/stable/c/b9a7339ae403035ffe7fc37cb034b3694
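The fix described above is a simple bounds check before reading the previous array element. Below is a minimal standalone sketch with hypothetical names (not the mmc core code) showing the pattern.

#include <stddef.h>

struct idata_example {
	int opcode;
};

static void process_idatas_example(struct idata_example **idatas, size_t count)
{
	size_t i;

	for (i = 0; i < count; i++) {
		struct idata_example *prev_idata = NULL;

		/*
		 * The fix: only read idatas[i - 1] when i is greater than
		 * zero, so the array is never indexed before its start.
		 */
		if (i > 0)
			prev_idata = idatas[i - 1];

		/* ... use idatas[i] and, when non-NULL, prev_idata ... */
		(void)prev_idata;
	}
}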