
CVSS: 5.5 | EPSS: 0% | CPEs: 4 | EXPL: 0

In the Linux kernel, the following vulnerability has been resolved: mm: cachestat: fix two shmem bugs When cachestat on shmem races with swapping and invalidation, there are two possible bugs: 1) A swapin error can have resulted in a poisoned swap entry in the shmem inode's xarray. Calling get_shadow_from_swap_cache() on it will result in an out-of-bounds access to swapper_spaces[]. Validate the entry with non_swap_entry() before going further. 2) When we find a valid swap entry in the shmem inode, the shadow entry in the swapcache might not exist yet (swap IO is still in progress and we're before __remove_mapping), or it might already be gone (swapin, invalidation, or swapoff removed the shadow from the swapcache after we saw the shmem swap entry). Either way, a NULL gets passed to workingset_test_recent(). The latter purely operates on pointer bits, so it won't crash - node 0, memcg ID 0, eviction timestamp 0, etc. are all valid inputs - but it's a bogus test. In theory that could result in a false "recently evicted" count. Such a false positive wouldn't be the end of the world. But for code clarity and (future) robustness, be explicit about this case: bail when get_shadow_from_swap_cache() returns NULL. • https://git.kernel.org/stable/c/cf264e1329fb0307e044f7675849f9f38b44c11a https://git.kernel.org/stable/c/b79f9e1ff27c994a4c452235ba09e672ec698e23 https://git.kernel.org/stable/c/d962f6c583458037dc7e529659b2b02b9dd3d94b https://git.kernel.org/stable/c/24a0e73d544439bb9329fbbafac44299e548a677 https://git.kernel.org/stable/c/d5d39c707a4cf0bcc84680178677b97aa2cb2627 https://access.redhat.com/security/cve/CVE-2024-35797 https://bugzilla.redhat.com/show_bug.cgi?id=2281151 •
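The two guards described above can be summarized in a minimal sketch. This is not the upstream mm/filemap.c diff; it assumes a cachestat()-style walk over the shmem mapping's xarray in which value entries are shmem swap entries, and the variables folio and shadow and the resched label are assumed context from that surrounding loop.

```c
/* Illustrative sketch only, not the upstream patch. */
if (xa_is_value(folio)) {
	/* Value entries in a shmem mapping are swap entries. */
	swp_entry_t swp = radix_to_swp_entry(folio);

	/* Bug 1: a poisoned swapin-error entry must not be used to
	 * index swapper_spaces[]; validate it first. */
	if (non_swap_entry(swp))
		goto resched;

	/* Bug 2: the shadow may not exist yet (swap IO still in flight)
	 * or may already be gone; bail out on NULL instead of feeding
	 * it to workingset_test_recent(). */
	shadow = get_shadow_from_swap_cache(swp);
	if (!shadow)
		goto resched;

	/* ... shadow is now safe to test for recent eviction ... */
}
```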

CVSS: - | EPSS: 0% | CPEs: 8 | EXPL: 0

In the Linux kernel, the following vulnerability has been resolved: net: ll_temac: platform_get_resource replaced by wrong function The function platform_get_resource was replaced with devm_platform_ioremap_resource_byname and is called using 0 as the name. This eventually reaches platform_get_resource_byname in the call stack, where it causes a NULL pointer dereference in strcmp: if (type == resource_type(r) && !strcmp(r->name, name)) It should have been replaced with devm_platform_ioremap_resource instead. • https://git.kernel.org/stable/c/bd69058f50d5ffa659423bcfa6fe6280ce9c760a https://git.kernel.org/stable/c/77c8cfdf808410be84be56aff7e0e186b8c5a879 https://git.kernel.org/stable/c/6d9395ba7f85bdb7af0b93272e537484ecbeff48 https://git.kernel.org/stable/c/553d294db94b5f139378022df480a9fb6c3ae39e https://git.kernel.org/stable/c/46efbdbc95a30951c2579caf97b6df2ee2b3bef3 https://git.kernel.org/stable/c/476eed5f1c22034774902a980aa48dc4662cb39a https://git.kernel.org/stable/c/7e9edb569fd9f688d887e36db8170f6e22bafbc8 https://git.kernel.org/stable/c/92c0c29f667870f17c0b764544bdf22ce •
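As a hedged illustration of the mistake and the remedy (not the literal ll_temac diff; regs and pdev are assumed context from the driver's probe function):

```c
/* Broken: 0 is interpreted as a (NULL) resource *name*, so the lookup
 * eventually reaches platform_get_resource_byname(), which strcmp()s
 * a NULL pointer:
 *
 *	regs = devm_platform_ioremap_resource_byname(pdev, 0);
 *
 * Fixed: look the MMIO resource up by index instead of by name. */
regs = devm_platform_ioremap_resource(pdev, 0);
if (IS_ERR(regs))
	return PTR_ERR(regs);
```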

CVSS: - | EPSS: 0% | CPEs: 4 | EXPL: 0

In the Linux kernel, the following vulnerability has been resolved: drm/amdgpu: fix deadlock while reading mqd from debugfs An errant disk backup on my desktop got into debugfs and triggered the following deadlock scenario in the amdgpu debugfs files. The machine also hard-resets immediately after those lines are printed (although I wasn't able to reproduce that part when reading by hand): [ 1318.016074][ T1082] ====================================================== [ 1318.016607][ T1082] WARNING: possible circular locking dependency detected [ 1318.017107][ T1082] 6.8.0-rc7-00015-ge0c8221b72c0 #17 Not tainted [ 1318.017598][ T1082] ------------------------------------------------------ [ 1318.018096][ T1082] tar/1082 is trying to acquire lock: [ 1318.018585][ T1082] ffff98c44175d6a0 (&mm->mmap_lock){++++}-{3:3}, at: __might_fault+0x40/0x80 [ 1318.019084][ T1082] [ 1318.019084][ T1082] but task is already holding lock: [ 1318.020052][ T1082] ffff98c4c13f55f8 (reservation_ww_class_mutex){+.+.}-{3:3}, at: amdgpu_debugfs_mqd_read+0x6a/0x250 [amdgpu] [ 1318.020607][ T1082] [ 1318.020607][ T1082] which lock already depends on the new lock. [ 1318.020607][ T1082] [ 1318.022081][ T1082] [ 1318.022081][ T1082] the existing dependency chain (in reverse order) is: [ 1318.023083][ T1082] [ 1318.023083][ T1082] -> #2 (reservation_ww_class_mutex){+.+.}-{3:3}: [ 1318.024114][ T1082] __ww_mutex_lock.constprop.0+0xe0/0x12f0 [ 1318.024639][ T1082] ww_mutex_lock+0x32/0x90 [ 1318.025161][ T1082] dma_resv_lockdep+0x18a/0x330 [ 1318.025683][ T1082] do_one_initcall+0x6a/0x350 [ 1318.026210][ T1082] kernel_init_freeable+0x1a3/0x310 [ 1318.026728][ T1082] kernel_init+0x15/0x1a0 [ 1318.027242][ T1082] ret_from_fork+0x2c/0x40 [ 1318.027759][ T1082] ret_from_fork_asm+0x11/0x20 [ 1318.028281][ T1082] [ 1318.028281][ T1082] -> #1 (reservation_ww_class_acquire){+.+.}-{0:0}: [ 1318.029297][ T1082] dma_resv_lockdep+0x16c/0x330 [ 1318.029790][ T1082] do_one_initcall+0x6a/0x350 [ 1318.030263][ T1082] kernel_init_freeable+0x1a3/0x310 [ 1318.030722][ T1082] kernel_init+0x15/0x1a0 [ 1318.031168][ T1082] ret_from_fork+0x2c/0x40 [ 1318.031598][ T1082] ret_from_fork_asm+0x11/0x20 [ 1318.032011][ T1082] [ 1318.032011][ T1082] -> #0 (&mm->mmap_lock){++++}-{3:3}: [ 1318.032778][ T1082] __lock_acquire+0x14bf/0x2680 [ 1318.033141][ T1082] lock_acquire+0xcd/0x2c0 [ 1318.033487][ T1082] __might_fault+0x58/0x80 [ 1318.033814][ T1082] amdgpu_debugfs_mqd_read+0x103/0x250 [amdgpu] [ 1318.034181][ T1082] full_proxy_read+0x55/0x80 [ 1318.034487][ T1082] vfs_read+0xa7/0x360 [ 1318.034788][ T1082] ksys_read+0x70/0xf0 [ 1318.035085][ T1082] do_syscall_64+0x94/0x180 [ 1318.035375][ T1082] entry_SYSCALL_64_after_hwframe+0x46/0x4e [ 1318.035664][ T1082] [ 1318.035664][ T1082] other info that might help us debug this: [ 1318.035664][ T1082] [ 1318.036487][ T1082] Chain exists of: [ 1318.036487][ T1082] &mm->mmap_lock --> reservation_ww_class_acquire --> reservation_ww_class_mutex [ 1318.036487][ T1082] [ 1318.037310][ T1082] Possible unsafe locking scenario: [ 1318.037310][ T1082] [ 1318.037838][ T1082] CPU0 CPU1 [ 1318.038101][ T1082] ---- ---- [ 1318.038350][ T1082] lock(reservation_ww_class_mutex); [ 1318.038590][ T1082] lock(reservation_ww_class_acquire); [ 1318.038839][ T1082] lock(reservation_ww_class_mutex); [ 1318.039083][ T1082] rlock(&mm->mmap_lock); [ 1318.039328][ T1082] [ 1318.039328][ T1082] *** DEADLOCK *** [ 1318.039328][ T1082] [ 1318.040029][ T1082] 1 lock held by tar/1082: [ 1318.040259][ T1082] #0: ffff98c4c13f55f8 
(reservation_ww_class_mutex){+.+.}-{3:3}, at: amdgpu_debugfs_mqd_read+0x6a/0x250 [amdgpu] [ 1318.040560][ T1082] [ 1318.040560][ T1082] stack backtrace: [ ---truncated--- • https://git.kernel.org/stable/c/445d85e3c1dfd8c45b24be6f1527f1e117256d0e https://git.kernel.org/stable/c/197f6d6987c55860f6eea1c93e4f800c59078874 https://git.kernel.org/stable/c/8b03556da6e576c62664b6cd01809e4a09d53b5b https://git.kernel.org/stable/c/4687e3c6ee877ee25e57b984eca00be53b9a8db5 https://git.kernel.org/stable/c/8678b1060ae2b75feb60b87e5b75e17374e3c1c5 •
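The splat boils down to faulting on user memory (which may take mmap_lock) while the buffer object's reservation_ww_class_mutex is held. A common remedy for this ordering problem, sketched below, is to snapshot the MQD into a kernel bounce buffer under the lock and only touch userspace after dropping it. This is a hypothetical sketch of that pattern, not the amdgpu patch; the function and parameter names are invented for illustration.

```c
/* Hypothetical sketch of the lock-ordering remedy, not the amdgpu fix. */
static ssize_t mqd_debugfs_read(void *mqd_src, size_t mqd_size,
				struct ww_mutex *resv_lock,
				char __user *buf, size_t count, loff_t *pos)
{
	ssize_t ret;
	void *tmp = kmalloc(mqd_size, GFP_KERNEL);

	if (!tmp)
		return -ENOMEM;

	/* Snapshot the MQD while the reservation lock is held ... */
	ww_mutex_lock(resv_lock, NULL);
	memcpy(tmp, mqd_src, mqd_size);
	ww_mutex_unlock(resv_lock);

	/* ... and only fault on user memory after it has been dropped, so
	 * mmap_lock is never acquired inside reservation_ww_class_mutex. */
	ret = simple_read_from_buffer(buf, count, pos, tmp, mqd_size);
	kfree(tmp);
	return ret;
}
```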

CVSS: - | EPSS: 0% | CPEs: 3 | EXPL: 0

In the Linux kernel, the following vulnerability has been resolved: dm-raid: really frozen sync_thread during suspend 1) commit f52f5c71f3d4 ("md: fix stopping sync thread") removed MD_RECOVERY_FROZEN from __md_stop_writes() without realizing that dm-raid relies on __md_stop_writes() to freeze the sync_thread indirectly. Fix this by adding MD_RECOVERY_FROZEN in md_stop_writes(), and since stop_sync_thread() is only used for dm-raid in this case, also move stop_sync_thread() to md_stop_writes(). 2) The flag MD_RECOVERY_FROZEN does not mean that the sync thread is frozen; it only prevents a new sync_thread from starting and cannot stop one that is already running. To actually freeze the sync_thread, stop_sync_thread() must be called after setting the flag (see the sketch after this entry). 3) The flag MD_RECOVERY_FROZEN does not mean that writes are stopped, so using it as the condition for md_stop_writes() in raid_postsuspend() is not correct. Since a reentrant stop_sync_thread() does nothing, always call md_stop_writes() in raid_postsuspend(). 4) raid_message() can set or clear MD_RECOVERY_FROZEN at any time, and if MD_RECOVERY_FROZEN is cleared while the array is suspended, a new sync_thread can start unexpectedly. Fix this by disallowing raid_message() from changing sync_thread status during suspend. Note that after commit f52f5c71f3d4 ("md: fix stopping sync thread"), the test shell/lvconvert-raid-reshape.sh started to hang in stop_sync_thread(); with the previous fixes the test no longer hangs there, but it still fails and complains that ext4 is corrupted. With this patch, the test no longer hangs in stop_sync_thread() or fails because of ext4 corruption. • https://git.kernel.org/stable/c/9dbd1aa3a81c6166608fec87994b6c464701f73a https://git.kernel.org/stable/c/af916cb66a80597f3523bc85812e790bcdcfd62b https://git.kernel.org/stable/c/eaa8fc9b092837cf2c754bde1a15d784ce9a85ab https://git.kernel.org/stable/c/16c4770c75b1223998adbeb7286f9a15c65fba73 •
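A schematic sketch of point 2) above, not the upstream md diff: setting MD_RECOVERY_FROZEN only stops a new sync_thread from being created, so a running one must still be stopped explicitly afterwards. The exact prototype of stop_sync_thread() differs across kernel versions; the trailing arguments shown here are assumptions.

```c
/* Schematic only: the ordering that actually "freezes" the sync_thread. */
set_bit(MD_RECOVERY_FROZEN, &mddev->recovery);	/* forbid a *new* sync_thread   */
stop_sync_thread(mddev, /*locked=*/true,	/* ... and stop a running one;  */
		 /*check_seq=*/false);		/* prototype is version-dependent */
```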

CVSS: 5.5 | EPSS: 0% | CPEs: 8 | EXPL: 0

In the Linux kernel, the following vulnerability has been resolved: KVM: SVM: Flush pages under kvm->lock to fix UAF in svm_register_enc_region() Do the cache flush of converted pages in svm_register_enc_region() before dropping kvm->lock to fix use-after-free issues where region and/or its array of pages could be freed by a different task, e.g. if userspace has __unregister_enc_region_locked() already queued up for the region. Note, the "obvious" alternative of using local variables doesn't fully resolve the bug, as region->pages is also dynamically allocated. I.e. the region structure itself would be fine, but region->pages could be freed. Flushing multiple pages under kvm->lock is unfortunate, but the entire flow is a rare slow path, and the manual flush is only needed on CPUs that lack coherency for encrypted memory. • https://git.kernel.org/stable/c/4f627ecde7329e476a077bb0590db8f27bb8f912 https://git.kernel.org/stable/c/19a23da53932bc8011220bd8c410cb76012de004 https://git.kernel.org/stable/c/f1ecde00ce1694597f923f0d25f7a797c5243d99 https://git.kernel.org/stable/c/848bcb0a1d96f67d075465667d3a1ad4af56311e https://git.kernel.org/stable/c/2d13b79640b147bd77c34a5998533b2021a4122d https://git.kernel.org/stable/c/e126b508ed2e616d679d85fca2fbe77bb48bbdd7 https://git.kernel.org/stable/c/4868c0ecdb6cfde7c70cf478c46e06bb9c7e5865 https://git.kernel.org/stable/c/12f8e32a5a389a5d58afc67728c76e61b •
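Sketched below under assumptions from the commit text (not the verbatim svm_register_enc_region() diff; the pinning and list-management code is elided and kvm, sev, and region are assumed context): the cache flush of the pinned pages has to happen before kvm->lock is dropped, because afterwards another task may already have freed region and region->pages.

```c
mutex_lock(&kvm->lock);

/* ... pin guest memory into region->pages and track it in the SEV region list ... */

/*
 * Flush the converted pages while kvm->lock is still held; once the lock
 * is dropped, __unregister_enc_region_locked() running in another task may
 * free region and region->pages, turning a later flush into a use-after-free.
 */
sev_clflush_pages(region->pages, region->npages);

mutex_unlock(&kvm->lock);
return 0;
```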