
CVSS: - | EPSS: 0% | CPEs: 3 | EXPL: 0

In the Linux kernel, the following vulnerability has been resolved: bpf, arm64: Fix address emission with tag-based KASAN enabled. When BPF_TRAMP_F_CALL_ORIG is enabled, the address of a bpf_tramp_image struct on the stack is passed during the size calculation pass and an address on the heap is passed during code generation. This may cause a heap buffer overflow if the heap address is tagged, because emit_a64_mov_i64() will emit longer code than it did during the size calculation pass. The same problem could occur without tag-based KASAN if one of the 16-bit words of the stack address happened to be all-ones during the size calculation pass. Fix the problem by assuming the worst case (4 instructions) when calculating the size of the bpf_tramp_image address emission.
• https://git.kernel.org/stable/c/19d3c179a37730caf600a97fed3794feac2b197b
• https://git.kernel.org/stable/c/6d218fcc707d6b2c3616b6cd24b948fd4825cfec
• https://git.kernel.org/stable/c/7db1a2121f3c7903b8e397392beec563c3d00950
• https://git.kernel.org/stable/c/a552e2ef5fd1a6c78267cd4ec5a9b49aa11bbb1c
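The length mismatch is easier to see with a rough model of how arm64 materializes a 64-bit immediate. The sketch below is illustrative only (count_mov64_insns() is an invented name, not the kernel's emit_a64_mov_i64()): a 64-bit load is a MOVZ or MOVN plus up to three MOVK instructions, and 16-bit halfwords that already match the fill pattern can be skipped, so the instruction count depends on the value being loaded.

#include <stdint.h>

/*
 * Illustrative sketch, not the kernel's emit_a64_mov_i64(): estimate how
 * many instructions an arm64 64-bit immediate load needs. Halfwords equal
 * to the chosen fill pattern (all-zeros for MOVZ, all-ones for MOVN) can
 * be skipped, so different 64-bit values emit different amounts of code.
 */
static int count_mov64_insns(uint64_t val)
{
	int zero_hw = 0, ones_hw = 0;

	for (int i = 0; i < 4; i++) {
		uint16_t hw = (val >> (i * 16)) & 0xffff;

		if (hw == 0x0000)
			zero_hw++;
		else if (hw == 0xffff)
			ones_hw++;
	}

	/* Skip whichever pattern saves more MOVKs; always emit at least one insn. */
	int skipped = zero_hw > ones_hw ? zero_hw : ones_hw;
	return (4 - skipped) ? (4 - skipped) : 1;
}

Either way the sizing pass can budget fewer instructions than code generation later emits for the real (possibly tagged) address; reserving the worst case of 4 instructions makes the budget independent of the particular address value.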

CVSS: - | EPSS: 0% | CPEs: 8 | EXPL: 0

In the Linux kernel, the following vulnerability has been resolved: nilfs2: propagate directory read errors from nilfs_find_entry(). Syzbot reported that a task hang occurs in vcs_open() during a fuzzing test for nilfs2. The root cause of this problem is that nilfs_find_entry(), which searches for directory entries, ignores errors when loading a directory page/folio via nilfs_get_folio(). If the filesystem image is corrupted, the i_size of the directory inode is large, and the directory page/folio is successfully read but fails the sanity check (for example when it is zero-filled), nilfs_check_folio() may continue to spit out error messages in bursts. Fix this issue by propagating the error to the callers when loading a page/folio fails in nilfs_find_entry(). The current interface of nilfs_find_entry() and its callers is outdated and cannot propagate error codes such as -EIO and -ENOMEM returned by nilfs_find_entry(), so fix it together.
• https://git.kernel.org/stable/c/2ba466d74ed74f073257f86e61519cb8f8f46184
• https://git.kernel.org/stable/c/bb857ae1efd3138c653239ed1e7aef14e1242c81
• https://git.kernel.org/stable/c/b4b3dc9e7e604be98a222e9f941f5e93798ca475
• https://git.kernel.org/stable/c/c1d0476885d708a932980b0f28cd90d9bd71db39
• https://git.kernel.org/stable/c/edf8146057264191d5bfe5b91773f13d936dadd3
• https://git.kernel.org/stable/c/270a6f9df35fa2aea01ec23770dc9b3fc9a12989
• https://git.kernel.org/stable/c/9698088ac7704e260f492d9c254e29ed7dd8729a
• https://git.kernel.org/stable/c/efa810b15a25531cbc2f527330947b9fe
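The shape of the interface change can be sketched in a few lines. The code below is a generic illustration, not the nilfs2 patch itself: find_entry_old(), find_entry_new(), load_folio() and scan_folio() are invented stand-ins, while IS_ERR()/ERR_PTR()/ERR_CAST() are the usual <linux/err.h> helpers used for this kind of error propagation.

#include <linux/err.h>
#include <linux/errno.h>

/* Invented stand-in types and helpers, for illustration only. */
struct dir;
struct dir_entry;
struct folio *load_folio(struct dir *dir);            /* may return ERR_PTR(-EIO/-ENOMEM) */
struct dir_entry *scan_folio(struct folio *folio, const char *name);

/* Old style: a failed folio load is silently folded into "not found",
 * so callers keep going and the sanity-check errors repeat in bursts. */
struct dir_entry *find_entry_old(struct dir *dir, const char *name)
{
	struct folio *folio = load_folio(dir);

	if (IS_ERR(folio))
		return NULL;                 /* -EIO / -ENOMEM lost here */
	return scan_folio(folio, name);
}

/* New style: propagate the error, and use ERR_PTR for "not found" too,
 * so callers can distinguish -ENOENT from -EIO/-ENOMEM and bail out. */
struct dir_entry *find_entry_new(struct dir *dir, const char *name)
{
	struct folio *folio = load_folio(dir);
	struct dir_entry *de;

	if (IS_ERR(folio))
		return ERR_CAST(folio);      /* -EIO / -ENOMEM reaches the caller */
	de = scan_folio(folio, name);
	return de ? de : ERR_PTR(-ENOENT);
}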

CVSS: - | EPSS: 0% | CPEs: 6 | EXPL: 0

In the Linux kernel, the following vulnerability has been resolved: drm/radeon: Fix encoder->possible_clones. Include the encoder itself in its possible_clones bitmask. In the past nothing validated that drivers were populating possible_clones correctly, but that changed in commit 74d2aacbe840 ("drm: Validate encoder->possible_clones"). Looks like radeon never got the memo and is still not following the rules 100% correctly. This results in some warnings during driver initialization:
Bogus possible_clones: [ENCODER:46:TV-46] possible_clones=0x4 (full encoder mask=0x7)
WARNING: CPU: 0 PID: 170 at drivers/gpu/drm/drm_mode_config.c:615 drm_mode_config_validate+0x113/0x39c
...
(cherry picked from commit 3b6e7d40649c0d75572039aff9d0911864c689db)
• https://git.kernel.org/stable/c/74d2aacbe84042d89f572a3112a146fca05bfcb1
• https://git.kernel.org/stable/c/df75c78bfeff99f9b4815c3e79e2b1b1e34fe264
• https://git.kernel.org/stable/c/fda5dc80121b12871dc343ab37e0c3f0d138825d
• https://git.kernel.org/stable/c/c3cd27d85f0778f4ec07384d7516b33153759b8e
• https://git.kernel.org/stable/c/1a235af0216411a32ab4db54f7bd19020b46c86d
• https://git.kernel.org/stable/c/68801730ebb9393460b30cd3885e407f15da27a9
• https://git.kernel.org/stable/c/28127dba64d8ae1a0b737b973d6d029908599611
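The rule being enforced is small: since commit 74d2aacbe840, drm_mode_config_validate() warns if an encoder's possible_clones mask does not contain the encoder's own bit. A minimal sketch of a helper that satisfies it follows; set_possible_clones() and clone_mask are invented names rather than radeon code, while drm_encoder_mask() is the standard DRM helper returning the encoder's own bit.

#include <drm/drm_encoder.h>

/*
 * Illustrative helper (not the radeon patch): whatever set of other
 * encoders this encoder can be cloned with, the mask must also include
 * the encoder itself, or drm_mode_config_validate() reports a
 * "Bogus possible_clones" warning like the one quoted above.
 */
static void set_possible_clones(struct drm_encoder *encoder, u32 clone_mask)
{
	encoder->possible_clones = clone_mask | drm_encoder_mask(encoder);
}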

CVSS: - | EPSS: 0% | CPEs: 4 | EXPL: 0

In the Linux kernel, the following vulnerability has been resolved: maple_tree: correct tree corruption on spanning store. Patch series "maple_tree: correct tree corruption on spanning store", v3.

There has been a nasty yet subtle maple tree corruption bug that appears to have been in existence since the inception of the algorithm. This bug seems far more likely to happen since commit f8d112a4e657 ("mm/mmap: avoid zeroing vma tree in mmap_region()"), which is the point at which reports started to be submitted concerning this bug. We were made definitely aware of the bug thanks to the kind efforts of Bert Karwatzki, who helped enormously in tracking this down and identifying its cause.

The bug arises when an attempt is made to perform a spanning store across two leaf nodes, where the right leaf node is the rightmost child of the shared parent AND the store completely consumes the rightmost node. This results in mas_wr_spanning_store() mistakenly duplicating the new and existing entries at the maximum pivot within the range, and thus maple tree corruption. The fix patch corrects this by detecting this scenario and disallowing the mistaken duplicate copy. The fix patch commit message goes into great detail as to how this occurs.

This series also includes a test which reliably reproduces the issue and asserts that the fix works correctly. Bert has kindly tested the fix and confirmed it resolves his issues. Also, Mikhail Gavrilov kindly reported what appears to be precisely the same bug, which this fix should also resolve.

This patch (of 2): There has been a subtle bug present in the maple tree implementation from its inception. This arises from how stores are performed - when a store occurs, it will overwrite overlapping ranges and adjust the tree as necessary to accommodate this. A range may always ultimately span two leaf nodes. In this instance we walk the two leaf nodes, determine which elements are not overwritten to the left and to the right of the start and end of the range respectively, and then rebalance the tree to contain these entries and the newly inserted one. This kind of store is dubbed a 'spanning store' and is implemented by mas_wr_spanning_store().

In order to reach this stage, mas_store_gfp() invokes mas_wr_preallocate(), mas_wr_store_type() and mas_wr_walk() in turn to walk the tree and update the object (mas) to traverse to the location where the write should be performed, determining its store type. When a spanning store is required, this walk returns false, stopping at the parent node which contains the target range, and mas_wr_store_type() marks the mas->store_type as wr_spanning_store to denote this fact.

When we go to perform the store in mas_wr_spanning_store(), we first determine the elements AFTER the END of the range we wish to store (that is, to the right of the entry to be inserted) - we do this by walking to the NEXT pivot in the tree (i.e. r_mas.last + 1), starting at the node we have just determined contains the range over which we intend to write. We then turn our attention to the entries to the left of the entry we are inserting, whose state is represented by l_mas, and copy these into a 'big node', which is a special node that contains enough slots to hold two leaf nodes' worth of data. We then copy the entry we wish to store immediately after this - the copy and the insertion of the new entry are performed by mas_store_b_node().
After this we copy the elements to the right of the end of the range which we are inserting, if we have not exceeded the length of the node (i.e. r_mas.offset <= r_mas.end). Herein lies the bug - under very specific circumstances, this logic can break and corrupt the maple tree. Consider the following tree:

Height
 0                       Root Node
                          /     \
          pivot = 0xffff /       \ pivot = ULONG_MAX
                        /
 ---truncated---

• https://git.kernel.org/stable/c/54a611b605901c7d5d05b6b8f5d04a6ceb0962aa
• https://git.kernel.org/stable/c/7c7874977da9e47ca0f53d8b9a5b17385fed83f2
• https://git.kernel.org/stable/c/677f1df179cb68c12ddf7707ec325eb50e99c7d9
• https://git.kernel.org/stable/c/982dd0d26d1f015ed34866579480d2be5250b0ef
• https://git.kernel.org/stable/c/bea07fd63192b61209d48cbb81ef474cc3ee4c62
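The copy sequence described above can be summarized structurally. The sketch below uses invented stub types and helpers (big_node_sketch, wr_state, copy_left_of_range(), append_entry(), copy_right_of_range()) and is not the maple tree code; it only shows the ordering of the three copies and the guard involved in the duplication the series describes.

/* Invented stub types/helpers so the sketch is self-contained. */
struct wr_state { unsigned char offset, end; };
struct big_node_sketch { void *slots[64]; int len; };

static void copy_left_of_range(struct big_node_sketch *bn, struct wr_state *l) { }
static void append_entry(struct big_node_sketch *bn, void *entry) { }
static void copy_right_of_range(struct big_node_sketch *bn, struct wr_state *r) { }

/*
 * Ordering of a spanning store as described in the text: left-hand
 * survivors, then the new entry, then right-hand survivors, the last
 * copy happening only if the walk has not already consumed the whole
 * right node. Per the series description, when the store consumes the
 * entire rightmost leaf, the new and existing entries at the maximum
 * pivot end up duplicated; the fix detects that scenario and disallows
 * the mistaken duplicate copy.
 */
static void spanning_store_sketch(struct big_node_sketch *bn,
				  struct wr_state *l_mas,
				  struct wr_state *r_mas,
				  void *new_entry)
{
	copy_left_of_range(bn, l_mas);
	append_entry(bn, new_entry);

	if (r_mas->offset <= r_mas->end)
		copy_right_of_range(bn, r_mas);
}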

CVSS: - | EPSS: 0% | CPEs: 7 | EXPL: 0

In the Linux kernel, the following vulnerability has been resolved: mm/swapfile: skip HugeTLB pages for unuse_vma. I got a bad pud error and lost a 1GB HugeTLB page when calling swapoff. The problem can be reproduced by the following steps:
1. Allocate an anonymous 1GB HugeTLB page and some other anonymous memory.
2. Swap out the above anonymous memory.
3. Run swapoff; a bad pud error then appears in the kernel log:
mm/pgtable-generic.c:42: bad pud 00000000743d215d(84000001400000e7)
Using ftrace, we can tell that pud_clear_bad() is called by pud_none_or_clear_bad() in unuse_pud_range(). The HugeTLB page will therefore never be freed, because it has been lost from the page table.
• https://git.kernel.org/stable/c/0fe6e20b9c4c53b3e97096ee73a0857f60aad43f
• https://git.kernel.org/stable/c/ba7f982cdb37ff5a7739dec85d7325ea66fc1496
• https://git.kernel.org/stable/c/417d5838ca73c6331ae2fe692fab6c25c00d9a0b
• https://git.kernel.org/stable/c/e41710f5a61aca9d6baaa8f53908a927dd9e7aa7
• https://git.kernel.org/stable/c/6ec0fe3756f941f42f8c57156b8bdf2877b2ebaf
• https://git.kernel.org/stable/c/bed2b9037806c62166a0ef9a559a1e7e3e1275b8
• https://git.kernel.org/stable/c/eb66a833cdd2f7302ee05d05e0fa12a2ca32eb87
• https://git.kernel.org/stable/c/7528c4fb1237512ee18049f852f014eba
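A hedged sketch of the kind of check the fix title describes follows. unuse_should_skip_vma() is an invented name, not necessarily how the patch is structured; is_vm_hugetlb_page() and vma->anon_vma are the usual kernel facilities. The idea, per the report above, is that the swapoff walk must not descend into HugeTLB mappings: a 1GB HugeTLB page is mapped by a leaf PUD, which pud_none_or_clear_bad() mistakes for a corrupt page table and clears, losing the page.

#include <linux/mm.h>
#include <linux/hugetlb_inline.h>   /* is_vm_hugetlb_page() */

/*
 * Illustration of the check described above (invented helper name, not
 * the literal patch): skip VMAs that cannot hold ordinary swap entries.
 * HugeTLB mappings are handled by their own paths, and a pud-mapped 1GB
 * page looks like a "bad pud" to the generic swapoff page-table walk.
 */
static bool unuse_should_skip_vma(struct vm_area_struct *vma)
{
	return !vma->anon_vma || is_vm_hugetlb_page(vma);
}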