
CVSS: - | EPSS: 0% | CPEs: 5 | EXPL: 0

In the Linux kernel, the following vulnerability has been resolved: scsi: sd: Fix off-by-one error in sd_read_block_characteristics(). If the device returns page 0xb1 with length 8 (as happens with qemu v2.x, for example), sd_read_block_characteristics() may attempt an out-of-bounds memory access when it reads the zoned field at offset 8.

• https://git.kernel.org/stable/c/7fb019c46eeea4e3cc3ddfd3e01a24e610f34fac
• https://git.kernel.org/stable/c/60312ae7392f9c75c6591a52fc359cf7f810d48f
• https://git.kernel.org/stable/c/568c7c4c77eee6df7677bb861b7cee7398a3255d
• https://git.kernel.org/stable/c/a776050373893e4c847a49abeae2ccb581153df0
• https://git.kernel.org/stable/c/413df704f149dec585df07466d2401bbd1f490a0
• https://git.kernel.org/stable/c/f81eaf08385ddd474a2f41595a7757502870c0eb
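To make the off-by-one concrete, here is a minimal, self-contained C sketch of the pattern (the struct and function names are illustrative, not the kernel's actual code). A page reported with length 8 has valid bytes at offsets 0..7, so a "len < 8" guard still lets a read at offset 8 run one byte past the end; the guard has to be "len <= 8":

  #include <stddef.h>

  #define ZONED_OFFSET 8  /* byte offset of the zoned field in page 0xb1 */

  struct vpd_page {                /* illustrative stand-in for the VPD buffer */
      size_t len;                  /* number of valid bytes in data[] */
      unsigned char data[64];
  };

  /* Buggy: a page with len == 8 passes the check, but data[8] is then
   * one byte past the valid region. */
  static int read_zoned_buggy(const struct vpd_page *vpd)
  {
      if (vpd->len < ZONED_OFFSET)
          return -1;
      return (vpd->data[ZONED_OFFSET] >> 4) & 0x3;  /* OOB read when len == 8 */
  }

  /* Fixed: offset 8 is only valid when at least 9 bytes are present. */
  static int read_zoned_fixed(const struct vpd_page *vpd)
  {
      if (vpd->len <= ZONED_OFFSET)
          return -1;
      return (vpd->data[ZONED_OFFSET] >> 4) & 0x3;
  }

The general rule: reading a field at byte offset N requires at least N+1 valid bytes in the buffer.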

CVSS: - | EPSS: 0% | CPEs: 9 | EXPL: 0

In the Linux kernel, the following vulnerability has been resolved: vfs: fix race between evict_inodes() and find_inode()&iput(). Recently I noticed a bug[1] in btrfs; after digging into it, I believe it's a race in the VFS. Let's assume there is an inode (say ino 261) with i_count 1 on which iput() is called, while a concurrent thread calls generic_shutdown_super():

  cpu0:                                cpu1:
  iput() // i_count is 1
    ->spin_lock(inode)
    ->dec i_count to 0
    ->iput_final()                     generic_shutdown_super()
      ->__inode_add_lru()                ->evict_inodes()
        // for some reason[2],             ->if (atomic_read(inode->i_count)) continue;
        // returns before                  // inode 261 passed the above check
        // list_lru_add_obj()              // and then scheduled out
    ->spin_unlock()
  // note here: inode 261 was still on the sb list and the hash list,
  // and I_FREEING|I_WILL_FREE had not been set

  btrfs_iget()
    // after some function calls
    ->find_inode()
      // found the above inode 261
      ->spin_lock(inode)
      // checked I_FREEING|I_WILL_FREE
      // and passed
      ->__iget()
      ->spin_unlock(inode)               // scheduled back
                                         ->spin_lock(inode)
                                         // checked (I_NEW|I_FREEING|I_WILL_FREE)
                                         // flags, passed, and set I_FREEING
  iput()                                 ->spin_unlock(inode)
    ->spin_lock(inode)                   ->evict()
    ->dec i_count to 0
    ->iput_final()
    ->spin_unlock()
    ->evict()

Now two threads are simultaneously evicting the same inode, which may trigger the BUG(inode->i_state & I_CLEAR) statement in both clear_inode() and iput(). To fix the bug, recheck inode->i_count after taking i_lock: in most scenarios the first, unlocked check is valid, so the spin_lock() overhead stays low.

[1]: https://lore.kernel.org/linux-btrfs/000000000000eabe1d0619c48986@google.com/
[2]: Either SB_ACTIVE was removed, or mapping_shrinkable() returned false when I reproduced the bug.

• https://git.kernel.org/stable/c/63997e98a3be68d7cec806d22bf9b02b2e1daabb
• https://git.kernel.org/stable/c/6cc13a80a26e6b48f78c725c01b91987d61563ef
• https://git.kernel.org/stable/c/489faddb1ae75b0e1a741fe5ca2542a2b5e794a5
• https://git.kernel.org/stable/c/47a68c75052a660e4c37de41e321582ec9496195
• https://git.kernel.org/stable/c/3721a69403291e2514d13a7c3af50a006ea1153b
• https://git.kernel.org/stable/c/540fb13120c9eab3ef203f90c00c8e69f37449d1
• https://git.kernel.org/stable/c/0eed942bc65de1f93eca7bda51344290f9c573bb
• https://git.kernel.org/stable/c/0f8a5b6d0dafa4f533ac82e98f8b81207
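As an illustration of the fix, here is a kernel-style sketch of the recheck pattern, simplified from what evict_inodes() in fs/inode.c does (sb->s_inode_list_lock handling and the dispose list are elided; this is not the exact upstream diff):

  #include <linux/fs.h>
  #include <linux/spinlock.h>

  static void evict_inodes_sketch(struct super_block *sb)
  {
      struct inode *inode;

      list_for_each_entry(inode, &sb->s_inodes, i_sb_list) {
          /* Cheap unlocked check first: in most scenarios this is
           * enough, so the spin_lock() overhead is avoided. */
          if (atomic_read(&inode->i_count))
              continue;

          spin_lock(&inode->i_lock);
          /* The fix: recheck under i_lock.  Between the unlocked
           * check and here, a concurrent find_inode()+__iget() may
           * have taken a new reference, exactly as in the race above. */
          if (atomic_read(&inode->i_count)) {
              spin_unlock(&inode->i_lock);
              continue;
          }
          if (inode->i_state & (I_NEW | I_FREEING | I_WILL_FREE)) {
              spin_unlock(&inode->i_lock);
              continue;
          }
          inode->i_state |= I_FREEING;
          spin_unlock(&inode->i_lock);
          /* ... queue the inode for eviction ... */
      }
  }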

CVSS: - | EPSS: 0% | CPEs: 5 | EXPL: 0

In the Linux kernel, the following vulnerability has been resolved: icmp: change the order of rate limits. ICMP messages are rate-limited. After the blamed commits, the two rate limiters were applied in this order:

1) Host-wide rate limit (icmp_global_allow())
2) Per-destination rate limit (inetpeer-based)

In order to avoid side-channel attacks, we need to apply the per-destination check first. This patch makes the following change:

1) icmp_global_allow() checks whether the host-wide limit has been reached, but credits are not yet consumed; that is deferred to step 3.
2) The per-destination limit is checked/updated. This may add a new node to the inetpeer tree.
3) icmp_global_consume() consumes tokens if the prior operations succeeded.

This means the host-wide rate limit is still effective at keeping the inetpeer tree small, even under DDoS. As a bonus, icmp_global.lock was removed, as the fast path can use a lock-free operation.

• https://git.kernel.org/stable/c/4cdf507d54525842dfd9f6313fdafba039084046
• https://git.kernel.org/stable/c/997ba8889611891f91e8ad83583466aeab6239a3
• https://git.kernel.org/stable/c/662ec52260cc07b9ae53ecd3925183c29d34288b
• https://git.kernel.org/stable/c/a7722921adb046e3836eb84372241f32584bdb07
• https://git.kernel.org/stable/c/483397b4ba280813e4a9c161a0a85172ddb43d19
• https://git.kernel.org/stable/c/8c2bd38b95f75f3d2a08c93e35303e26d480d24e
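A compilable C sketch of the reordered logic follows. The helpers here are stubs: per_destination_allow() is a hypothetical stand-in for the inetpeer-based check, and the real icmp_global_allow()/icmp_global_consume() in net/ipv4/icmp.c take a struct net *:

  #include <stdbool.h>
  #include <stdint.h>

  /* Stubs standing in for the kernel's rate limiters. */
  static bool icmp_global_allow(void)   { return true; }  /* check only, no credits consumed */
  static void icmp_global_consume(void) { }               /* consume credits */
  static bool per_destination_allow(uint32_t daddr)       /* hypothetical helper */
  { (void)daddr; return true; }

  static bool icmp_ratelimit_sketch(uint32_t daddr)
  {
      /* 1) Host-wide check first, WITHOUT consuming credits; this
       *    still bounds how many inetpeer nodes step 2 can create
       *    under a DDoS. */
      if (!icmp_global_allow())
          return false;

      /* 2) Per-destination check/update (may add an inetpeer node).
       *    Doing this before any global credits are consumed closes
       *    the side channel where traffic to one destination
       *    observably drains the shared global bucket. */
      if (!per_destination_allow(daddr))
          return false;

      /* 3) Both checks passed: consume the host-wide credits now. */
      icmp_global_consume();
      return true;
  }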

CVSS: - | EPSS: 0% | CPEs: 6 | EXPL: 0

In the Linux kernel, the following vulnerability has been resolved: mm: avoid leaving partial pfn mappings around in error case. As Jann points out, PFN mappings are special because, unlike normal memory mappings, there is no lifetime information associated with the mapping: it is just a raw mapping of PFNs with no reference counting of a 'struct page'. That's all very much intentional, but it does mean that it's easy to mess up the cleanup in case of errors. Yes, a failed mmap() will always eventually clean up any partial mappings, but without any explicit lifetime in the page table mapping itself, it's very easy to do the error handling in the wrong order. In particular, it's easy to mistakenly free the physical backing store before the page tables are actually cleaned up, and to (temporarily) have stale dangling PTE entries. To make this situation less error-prone, just make sure that any partial pfn mapping is torn down early, before any other error handling.

• https://git.kernel.org/stable/c/35770ca6180caa24a2b258c99a87bd437a1ee10f
• https://git.kernel.org/stable/c/5b2c8b34f6d76bfbd1dd4936eb8a0fbfb9af3959
• https://git.kernel.org/stable/c/65d0db500d7c07f0f76fc24a4d837791c4862cd2
• https://git.kernel.org/stable/c/a95a24fcaee1b892e47d5e6dcc403f713874ee80
• https://git.kernel.org/stable/c/954fd4c81f22c4b6ba65379a81fd252971bf4ef3
• https://git.kernel.org/stable/c/79a61cc3fc0466ad2b7b89618a6157785f0293b3
• https://project-zero.issues.chromium.org/issues/366053091
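The upstream patch does this teardown inside remap_pfn_range() itself; the kernel-style sketch below (not compilable standalone; free_backing_store() is a made-up helper) shows the same ordering principle from a driver's error path:

  #include <linux/mm.h>

  /* Hypothetical helper standing in for whatever allocated the PFNs. */
  static void free_backing_store(unsigned long pfn, unsigned long size);

  static int map_bar_sketch(struct vm_area_struct *vma,
                            unsigned long pfn, unsigned long size)
  {
      int err = remap_pfn_range(vma, vma->vm_start, pfn, size,
                                vma->vm_page_prot);
      if (err) {
          /* remap_pfn_range() may have installed some PTEs before
           * failing: tear the partial mapping down FIRST ... */
          zap_vma_ptes(vma, vma->vm_start, size);
          /* ... and only then release the physical backing store.
           * The reverse order would briefly leave dangling PTEs
           * pointing at pages that could already be reused. */
          free_backing_store(pfn, size);
      }
      return err;
  }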

CVSS: - | EPSS: 0% | CPEs: 5 | EXPL: 0

In the Linux kernel, the following vulnerability has been resolved: wifi: iwlwifi: mvm: pause TCM when the firmware is stopped. Not doing so will make us send a host command to the transport while the firmware is not alive, which will trigger a WARNING:

  bad state = 0
  WARNING: CPU: 2 PID: 17434 at drivers/net/wireless/intel/iwlwifi/iwl-trans.c:115 iwl_trans_send_cmd+0x1cb/0x1e0 [iwlwifi]
  RIP: 0010:iwl_trans_send_cmd+0x1cb/0x1e0 [iwlwifi]
  Call Trace:
   <TASK>
   iwl_mvm_send_cmd+0x40/0xc0 [iwlmvm]
   iwl_mvm_config_scan+0x198/0x260 [iwlmvm]
   iwl_mvm_recalc_tcm+0x730/0x11d0 [iwlmvm]
   iwl_mvm_tcm_work+0x1d/0x30 [iwlmvm]
   process_one_work+0x29e/0x640
   worker_thread+0x2df/0x690
   ? rescuer_thread+0x540/0x540
   kthread+0x192/0x1e0
   ? set_kthread_struct+0x90/0x90
   ret_from_fork+0x22/0x30

• https://git.kernel.org/stable/c/a15df5f37fa3a8b7a8ec7a339d1e897bc524e28f
• https://git.kernel.org/stable/c/5948a191906b54e10f02f6b7a7670243a39f99f4
• https://git.kernel.org/stable/c/2c61b561baf92a2860c76c2302a62169e22c21cc
• https://git.kernel.org/stable/c/55086c97a55d781b04a2667401c75ffde190135c
• https://git.kernel.org/stable/c/0668ebc8c2282ca1e7eb96092a347baefffb5fe7
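The upstream fix pauses the TCM work from the firmware-stop path via iwl_mvm_pause_tcm(), so the work can never fire against a dead transport. As an illustration only, here is a kernel-style sketch of an equivalent guard in the work handler, reusing iwlmvm names from the trace above; this is not the actual patch and is not compilable standalone:

  #include <linux/workqueue.h>
  #include <linux/mutex.h>

  static void iwl_mvm_tcm_work_sketch(struct work_struct *wk)
  {
      struct iwl_mvm *mvm = container_of(wk, struct iwl_mvm,
                                         tcm.work.work);

      mutex_lock(&mvm->mutex);
      /* Bail out if the firmware is not running: recalculating TCM
       * sends host commands (iwl_mvm_config_scan() in the trace),
       * which would hit the "bad state" WARNING in
       * iwl_trans_send_cmd() on a stopped transport. */
      if (iwl_mvm_firmware_running(mvm))
          iwl_mvm_recalc_tcm(mvm);
      mutex_unlock(&mvm->mutex);
  }

Guarding in the handler still leaves a window where already-queued work races with the stop path, which is why the real fix cancels/pauses the work at stop time instead.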