
CVSS: - | EPSS: 0% | CPEs: 3 | EXPL: 0

In the Linux kernel, the following vulnerability has been resolved: f2fs: check discard support for conventional zones

As the helper function f2fs_bdev_support_discard() shows, f2fs checks whether the target block devices support discard by calling bdev_max_discard_sectors() and bdev_is_zoned(). This check works well for most cases, but it does not work for conventional zones on zoned block devices. F2fs assumes that zoned block devices support discard and calls __submit_discard_cmd().

When __submit_discard_cmd() is called for sequential write required zones, it works fine, since __submit_discard_cmd() issues zone reset commands instead of discard commands. However, when __submit_discard_cmd() is called for conventional zones, __blkdev_issue_discard() is called even when the devices do not support discard. The inappropriate __blkdev_issue_discard() call was not a problem before commit 30f1e7241422 ("block: move discard checks into the ioctl handler") because __blkdev_issue_discard() checked whether the target devices support discard or not.

• https://git.kernel.org/stable/c/30f1e724142242a453f92d90b33e030014900bf0
• https://git.kernel.org/stable/c/7bd7ce68ddad5a28565e42ef21cacaff113773a9
• https://git.kernel.org/stable/c/d2352b57897f6a3349666fc318dcbec99092c6a5
• https://git.kernel.org/stable/c/43aec4d01bd2ce961817a777b3846f8318f398e4
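For reference, a minimal C sketch of the check described above. The first helper mirrors the behaviour the text attributes to f2fs_bdev_support_discard(); can_discard_conventional_zone() is a hypothetical illustration of the missing case, not the upstream fix.

```c
#include <linux/blkdev.h>

/*
 * Sketch of the helper described above: any zoned device is treated as
 * discard-capable. That is fine for sequential-write-required zones
 * (a zone reset is issued instead of a discard) but wrong for
 * conventional zones on a device that reports no discard support.
 */
static inline bool f2fs_bdev_support_discard_sketch(struct block_device *bdev)
{
	return bdev_max_discard_sectors(bdev) || bdev_is_zoned(bdev);
}

/*
 * Hypothetical illustration of the extra check needed before issuing a
 * discard against a conventional zone: zone reset is not an option
 * there, so only real discard support counts.
 */
static inline bool can_discard_conventional_zone(struct block_device *bdev)
{
	return bdev_max_discard_sectors(bdev) != 0;
}
```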

CVSS: - | EPSS: 0% | CPEs: 7 | EXPL: 0

In the Linux kernel, the following vulnerability has been resolved: vfs: fix race between evict_inodes() and find_inode()&iput()

Hi all, recently I noticed a bug[1] in btrfs; after digging into it I believe it is a race in the VFS. Let's assume there is an inode (i.e. ino 261) with i_count 1 on which iput() is called, and a concurrent thread calling generic_shutdown_super().

cpu0:                                   cpu1:
iput() // i_count is 1
  ->spin_lock(inode)
  ->dec i_count to 0
  ->iput_final()                        generic_shutdown_super()
    ->__inode_add_lru()                   ->evict_inodes()
      // for some reason[2],                ->if (atomic_read(inode->i_count)) continue;
      // returns before                     // inode 261 passed the above check
      // list_lru_add_obj()                 // and then scheduled out
  ->spin_unlock()
// note here: inode 261 is still on the sb list and the hash list,
// and I_FREEING|I_WILL_FREE has not been set

btrfs_iget() // after some function calls
  ->find_inode()
    // finds the above inode 261
    ->spin_lock(inode)
    // checks I_FREEING|I_WILL_FREE
    // and passes
    ->__iget()
    ->spin_unlock(inode)                  // scheduled back
                                          ->spin_lock(inode)
                                          // checks (I_NEW|I_FREEING|I_WILL_FREE) flags,
                                          // passes and sets I_FREEING
iput()                                    ->spin_unlock(inode)
  ->spin_lock(inode)                        ->evict()
  ->dec i_count to 0
  ->iput_final()
  ->spin_unlock()
  ->evict()

Now we have two threads simultaneously evicting the same inode, which may trigger the BUG(inode->i_state & I_CLEAR) statement both within clear_inode() and iput().

To fix the bug, recheck inode->i_count after holding i_lock. Because the first check is valid in most scenarios, the overhead of spin_lock() stays low. If there is any misunderstanding, please let me know, thanks.

[1]: https://lore.kernel.org/linux-btrfs/000000000000eabe1d0619c48986@google.com/
[2]: The reason might be 1. SB_ACTIVE was removed or 2. mapping_shrinkable() returned false when I reproduced the bug.

• https://git.kernel.org/stable/c/63997e98a3be68d7cec806d22bf9b02b2e1daabb
• https://git.kernel.org/stable/c/47a68c75052a660e4c37de41e321582ec9496195
• https://git.kernel.org/stable/c/3721a69403291e2514d13a7c3af50a006ea1153b
• https://git.kernel.org/stable/c/540fb13120c9eab3ef203f90c00c8e69f37449d1
• https://git.kernel.org/stable/c/0eed942bc65de1f93eca7bda51344290f9c573bb
• https://git.kernel.org/stable/c/0f8a5b6d0dafa4f533ac82e98f8b812073a7c9d1
• https://git.kernel.org/stable/c/6c857fb12b9137fee574443385d53914356bbe11
• https://git.kernel.org/stable/c/88b1afbf0f6b221f6c5bb66cc80cd3b38
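A hedged C sketch of the fix shape described in the last paragraph, modeled on evict_inodes(): the lockless i_count check stays as a fast path, and the check is repeated under i_lock so a concurrent iput() that has already dropped i_count to zero cannot slip through. Simplified, not the upstream hunk; the eviction tail is elided.

```c
#include <linux/fs.h>
#include <linux/spinlock.h>

/* Sketch of the "recheck i_count under i_lock" fix described above. */
static void evict_inodes_recheck_sketch(struct super_block *sb)
{
	struct inode *inode, *next;

	spin_lock(&sb->s_inode_list_lock);
	list_for_each_entry_safe(inode, next, &sb->s_inodes, i_sb_list) {
		if (atomic_read(&inode->i_count))	/* cheap lockless fast path */
			continue;

		spin_lock(&inode->i_lock);
		if (atomic_read(&inode->i_count)) {	/* the added recheck */
			spin_unlock(&inode->i_lock);
			continue;
		}
		if (inode->i_state & (I_NEW | I_FREEING | I_WILL_FREE)) {
			spin_unlock(&inode->i_lock);
			continue;
		}

		inode->i_state |= I_FREEING;	/* now safe: no concurrent holder */
		spin_unlock(&inode->i_lock);
		/* ... move the inode to a dispose list and evict it,
		 * as the real evict_inodes() does ... */
	}
	spin_unlock(&sb->s_inode_list_lock);
}
```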

CVSS: - | EPSS: 0% | CPEs: 5 | EXPL: 0

In the Linux kernel, the following vulnerability has been resolved: icmp: change the order of rate limits

ICMP messages are rate-limited. After the blamed commits, the two rate limiters are applied in this order:

1) host-wide rate limit (icmp_global_allow())
2) per-destination rate limit (inetpeer based)

In order to avoid side-channel attacks, we need to apply the per-destination check first. This patch makes the following change:

1) icmp_global_allow() checks if the host-wide limit is reached, but credits are not yet consumed. This is deferred to 3).
2) The per-destination limit is checked/updated. This might add a new node in the inetpeer tree.
3) icmp_global_consume() consumes tokens if the prior operations succeeded.

This means that the host-wide rate limit is still effective in keeping the inetpeer tree small even under DDoS. As a bonus, I removed icmp_global.lock, as the fast path can use a lock-free operation.

• https://git.kernel.org/stable/c/4cdf507d54525842dfd9f6313fdafba039084046
• https://git.kernel.org/stable/c/997ba8889611891f91e8ad83583466aeab6239a3
• https://git.kernel.org/stable/c/662ec52260cc07b9ae53ecd3925183c29d34288b
• https://git.kernel.org/stable/c/a7722921adb046e3836eb84372241f32584bdb07
• https://git.kernel.org/stable/c/483397b4ba280813e4a9c161a0a85172ddb43d19
• https://git.kernel.org/stable/c/8c2bd38b95f75f3d2a08c93e35303e26d480d24e
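A minimal C sketch of the ordering in steps 1)-3) above. icmp_global_allow() and icmp_global_consume() are the functions named in the text (signatures assumed to take the netns, which may differ across kernel versions); icmp_peer_allow() is a hypothetical stand-in for the inetpeer-based per-destination check.

```c
#include <net/net_namespace.h>
#include <net/icmp.h>

/* Hypothetical stand-in for the inetpeer-based per-destination limiter. */
static bool icmp_peer_allow(struct net *net, __be32 daddr)
{
	/* Placeholder: the real check consults the inetpeer tree. */
	return true;
}

/*
 * Sketch of the check-then-consume ordering described above:
 * 1) query the host-wide limiter without consuming credits,
 * 2) apply the per-destination (inetpeer based) limit,
 * 3) only then consume host-wide tokens.
 */
static bool icmp_rate_allow_sketch(struct net *net, __be32 daddr)
{
	if (!icmp_global_allow(net))		/* step 1: check only, nothing consumed */
		return false;

	if (!icmp_peer_allow(net, daddr))	/* step 2: per-destination check */
		return false;

	icmp_global_consume(net);		/* step 3: consume global credits */
	return true;
}
```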

CVSS: - | EPSS: 0% | CPEs: 3 | EXPL: 0

In the Linux kernel, the following vulnerability has been resolved: exfat: resolve memory leak from exfat_create_upcase_table()

If exfat_load_upcase_table() reaches the end and returns -EINVAL, the allocated memory does not get freed, while exfat_load_default_upcase_table() then allocates more memory, leading to a memory leak. Here is a link to the syzkaller crash report illustrating this issue: https://syzkaller.appspot.com/text?tag=CrashReport&x=1406c201980000

• https://git.kernel.org/stable/c/a13d1a4de3b0fe3c41d818697d691c886c5585fa
• https://git.kernel.org/stable/c/f9835aec49670c46ebe2973032caaa1043b3d4da
• https://git.kernel.org/stable/c/331ed2c739ce656a67865f6b3ee0a478349d78cb
• https://git.kernel.org/stable/c/c290fe508eee36df1640c3cb35dc8f89e073c8a8
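A hedged sketch of the leak pattern and the general shape of a fix: the table allocated by the failed primary load must be released before the default table is loaded. The context struct, function pointers, and signatures here are illustrative only; the real exfat loaders take more arguments and manage the table in the superblock info.

```c
#include <linux/mm.h>
#include <linux/slab.h>

/* Hypothetical context: holds the table allocated by the primary loader. */
struct upcase_ctx {
	unsigned short *utbl;
};

/*
 * Illustrative sketch of the pattern described in the report above:
 * if the primary upcase-table load allocates a buffer and then fails,
 * that buffer must be freed before the default-table fallback
 * allocates its own, or the first allocation leaks.
 */
static int load_upcase_with_fallback(struct upcase_ctx *ctx,
				     int (*load_primary)(struct upcase_ctx *),
				     int (*load_default)(struct upcase_ctx *))
{
	int ret = load_primary(ctx);		/* may allocate ctx->utbl, then fail */

	if (ret) {
		kvfree(ctx->utbl);		/* the fix: drop the partial table */
		ctx->utbl = NULL;
		ret = load_default(ctx);	/* allocates a fresh table */
	}
	return ret;
}
```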

CVSS: - | EPSS: 0% | CPEs: 3 | EXPL: 0

In the Linux kernel, the following vulnerability has been resolved: mm/hugetlb.c: fix UAF of vma in hugetlb fault pathway

Syzbot reports a UAF in hugetlb_fault(). This happens because vmf_anon_prepare() could drop the per-VMA lock and allow the current VMA to be freed before hugetlb_vma_unlock_read() is called. We can fix this by using a modified version of vmf_anon_prepare() that does not release the VMA lock on failure, and then releasing it ourselves after hugetlb_vma_unlock_read().

• https://git.kernel.org/stable/c/9acad7ba3e25d11f4c96df1b7312ae89e6faca5c
• https://git.kernel.org/stable/c/e897d184a8dd4a4e1f39c8c495598e4d9472776c
• https://git.kernel.org/stable/c/d59ebc99dee0a2687a26df94b901eb8216dbf876
• https://git.kernel.org/stable/c/98b74bb4d7e96b4da5ef3126511febe55b76b807
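A hedged C sketch of the ordering the fix describes. __vmf_anon_prepare() here stands for the modified vmf_anon_prepare() variant that keeps the per-VMA lock held on failure (the name and the VM_FAULT_RETRY convention are assumptions for this sketch); the caller then drops hugetlb's VMA read lock first and releases the per-VMA lock only afterwards, so the VMA cannot be freed while it is still in use.

```c
#include <linux/mm.h>
#include <linux/hugetlb.h>

/*
 * Sketch of the fix ordering described above (simplified; not the
 * upstream hunk). __vmf_anon_prepare() denotes the modified
 * vmf_anon_prepare() variant that does NOT drop the per-VMA lock on
 * failure.
 */
static vm_fault_t hugetlb_anon_prepare_sketch(struct vm_fault *vmf,
					      struct vm_area_struct *vma)
{
	vm_fault_t ret = __vmf_anon_prepare(vmf);	/* keeps the per-VMA lock held */

	/* Drop hugetlb's own VMA read lock first, while the VMA is
	 * still guaranteed to be alive... */
	hugetlb_vma_unlock_read(vma);

	/* ...then release the per-VMA lock ourselves on the retry path,
	 * after we are done touching the VMA. */
	if (unlikely(ret & VM_FAULT_RETRY))
		vma_end_read(vma);

	return ret;
}
```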