CVE-2024-45024 – mm/hugetlb: fix hugetlb vs. core-mm PT locking
https://notcve.org/view.php?id=CVE-2024-45024
In the Linux kernel, the following vulnerability has been resolved:

mm/hugetlb: fix hugetlb vs. core-mm PT locking

We recently made GUP's common page table walking code also walk hugetlb VMAs without most hugetlb special-casing, preparing for a future with less hugetlb-specific page table walking code in the codebase. It turns out that we missed one page table locking detail: page table locking for hugetlb folios that are not mapped using a single PMD/PUD.

Assume we have a hugetlb folio that spans multiple PTEs (e.g., 64 KiB hugetlb folios on arm64 with a 4 KiB base page size). GUP, as it walks the page tables, will perform a pte_offset_map_lock() to grab the PTE table lock. However, hugetlb code that concurrently modifies these page tables would actually grab the mm->page_table_lock: with USE_SPLIT_PTE_PTLOCKS, the locks would differ. Something similar can happen right now with hugetlb folios that span multiple PMDs when USE_SPLIT_PMD_PTLOCKS.

This issue can be reproduced [1], for example triggering:

[ 3105.936100] ------------[ cut here ]------------
[ 3105.939323] WARNING: CPU: 31 PID: 2732 at mm/gup.c:142 try_grab_folio+0x11c/0x188
[ 3105.944634] Modules linked in: [...]
[ 3105.974841] CPU: 31 PID: 2732 Comm: reproducer Not tainted 6.10.0-64.eln141.aarch64 #1
[ 3105.980406] Hardware name: QEMU KVM Virtual Machine, BIOS edk2-20240524-4.fc40 05/24/2024
[ 3105.986185] pstate: 60000005 (nZCv daif -PAN -UAO -TCO -DIT -SSBS BTYPE=--)
[ 3105.991108] pc : try_grab_folio+0x11c/0x188
[ 3105.994013] lr : follow_page_pte+0xd8/0x430
[ 3105.996986] sp : ffff80008eafb8f0
[ 3105.999346] x29: ffff80008eafb900 x28: ffffffe8d481f380 x27: 00f80001207cff43
[ 3106.004414] x26: 0000000000000001 x25: 0000000000000000 x24: ffff80008eafba48
[ 3106.009520] x23: 0000ffff9372f000 x22: ffff7a54459e2000 x21: ffff7a546c1aa978
[ 3106.014529] x20: ffffffe8d481f3c0 x19: 0000000000610041 x18: 0000000000000001
[ 3106.019506] x17: 0000000000000001 x16: ffffffffffffffff x15: 0000000000000000
[ 3106.024494] x14: ffffb85477fdfe08 x13: 0000ffff9372ffff x12: 0000000000000000
[ 3106.029469] x11: 1fffef4a88a96be1 x10: ffff7a54454b5f0c x9 : ffffb854771b12f0
[ 3106.034324] x8 : 0008000000000000 x7 : ffff7a546c1aa980 x6 : 0008000000000080
[ 3106.038902] x5 : 00000000001207cf x4 : 0000ffff9372f000 x3 : ffffffe8d481f000
[ 3106.043420] x2 : 0000000000610041 x1 : 0000000000000001 x0 : 0000000000000000
[ 3106.047957] Call trace:
[ 3106.049522]  try_grab_folio+0x11c/0x188
[ 3106.051996]  follow_pmd_mask.constprop.0.isra.0+0x150/0x2e0
[ 3106.055527]  follow_page_mask+0x1a0/0x2b8
[ 3106.058118]  __get_user_pages+0xf0/0x348
[ 3106.060647]  faultin_page_range+0xb0/0x360
[ 3106.063651]  do_madvise+0x340/0x598

Let's make huge_pte_lockptr() effectively use the same PT locks as any core-mm page table walker would. Add ptep_lockptr() to obtain the PTE page table lock using a pte pointer -- unfortunately we cannot convert pte_lockptr() because virt_to_page() doesn't work with kmap'ed page tables we can have with CONFIG_HIGHPTE.

Handle CONFIG_PGTABLE_LEVELS correctly by checking in reverse order, such that, e.g., CONFIG_PGTABLE_LEVELS==2 with PGDIR_SIZE==P4D_SIZE==PUD_SIZE==PMD_SIZE will work as expected. Document why that works.

There is one ugly case: powerpc 8xx, where we have an 8 MiB hugetlb folio being mapped using two PTE page tables.
• https://git.kernel.org/stable/c/9cb28da54643ad464c47585cd5866c30b0218e67 https://git.kernel.org/stable/c/7300dadba49e531af2d890ae4e34c9b115384a62 https://git.kernel.org/stable/c/5f75cfbd6bb02295ddaed48adf667b6c828ce07b •
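To illustrate the locking rule the commit describes, here is a minimal C sketch of the lock-selection logic: pick the lock at the level that actually maps the folio, from the largest level downward, so hugetlb takes the same lock a core-mm walker would. ptep_lockptr() is the new helper named in the commit message; pud_lockptr()/pmd_lockptr() are existing core-mm helpers; the exact upstream code differs in details (e.g., warnings and the CONFIG_HIGHPTE handling).

    /*
     * Minimal sketch (not the exact upstream code) of choosing the page
     * table lock for a hugetlb mapping of size 1UL << shift, matching what
     * a core-mm page table walker would take at the same level.
     */
    static inline spinlock_t *huge_pte_lockptr_sketch(unsigned int shift,
                                                      struct mm_struct *mm,
                                                      pte_t *pte)
    {
        const unsigned long sz = 1UL << shift;

        /*
         * Check from the largest level downward so that collapsed levels
         * (e.g. CONFIG_PGTABLE_LEVELS==2, where P4D/PUD/PMD sizes are
         * equal) still resolve to the right lock.
         */
        if (sz >= P4D_SIZE)
            return &mm->page_table_lock;           /* top levels: coarse lock */
        if (sz >= PUD_SIZE)
            return pud_lockptr(mm, (pud_t *)pte);  /* entry lives in a PUD table */
        if (sz >= PMD_SIZE || IS_ENABLED(CONFIG_HIGHPTE))
            return pmd_lockptr(mm, (pmd_t *)pte);  /* entry lives in a PMD table */
        /*
         * Folio is mapped by PTEs: take the same split PTE-table lock that
         * pte_offset_map_lock() in GUP would take.
         */
        return ptep_lockptr(mm, pte);
    }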
CVE-2024-45023 – md/raid1: Fix data corruption for degraded array with slow disk
https://notcve.org/view.php?id=CVE-2024-45023
In the Linux kernel, the following vulnerability has been resolved:

md/raid1: Fix data corruption for degraded array with slow disk

read_balance() will avoid reading from slow disks as much as possible; however, if valid data only lands on slow disks, and a new normal disk is still in recovery, unrecovered data can be read:

raid1_read_request
 read_balance
  raid1_should_read_first
  -> return false
  choose_best_rdev
  -> normal disk is not recovered, return -1
  choose_bb_rdev
  -> missing the checking of recovery, return the normal disk
 -> read unrecovered data

The root cause is that the checking of recovery is missing in choose_bb_rdev(); hence add such checking to fix the problem. Also fix a similar problem in choose_slow_rdev().

• https://git.kernel.org/stable/c/9f3ced792203891b8fa39afa37908eba843fcfac https://git.kernel.org/stable/c/2febf5fdbf5d9a52ddc3e986971c8609b1582d67 https://git.kernel.org/stable/c/c916ca35308d3187c9928664f9be249b22a3a701 •
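A hedged sketch of the kind of check the fix calls for: when scanning candidate rdevs in choose_bb_rdev()/choose_slow_rdev(), a disk that is still recovering must be skipped unless its recovery_offset already covers the requested range. Names and structure are simplified from drivers/md/raid1.c; this is not the literal patch.

    /*
     * Sketch (simplified, not the literal upstream patch): reject rdevs
     * whose recovery has not yet reached the sectors we want to read, so a
     * degraded array never serves unrecovered data from a rebuilding disk.
     */
    static bool rdev_readable_sketch(struct md_rdev *rdev,
                                     sector_t this_sector, int sectors)
    {
        if (!rdev || test_bit(Faulty, &rdev->flags))
            return false;
        /* Disk still syncing: its data is only valid up to recovery_offset. */
        if (!test_bit(In_sync, &rdev->flags) &&
            rdev->recovery_offset < this_sector + sectors)
            return false;
        return true;
    }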
CVE-2024-45022 – mm/vmalloc: fix page mapping if vm_area_alloc_pages() with high order fallback to order 0
https://notcve.org/view.php?id=CVE-2024-45022
In the Linux kernel, the following vulnerability has been resolved:

mm/vmalloc: fix page mapping if vm_area_alloc_pages() with high order fallback to order 0

__vmap_pages_range_noflush() assumes its pages** argument contains pages with the same page shift. However, since commit e9c3cda4d86e ("mm, vmalloc: fix high order __GFP_NOFAIL allocations"), if gfp_flags includes __GFP_NOFAIL with high order in vm_area_alloc_pages() and the high-order page allocation fails, pages** may contain two different page shifts (high order and order-0). This could lead __vmap_pages_range_noflush() to perform incorrect mappings, potentially resulting in memory corruption.

Users might encounter this as follows (vmap_allow_huge = true, 2M is for PMD_SIZE):

kvmalloc(2M, __GFP_NOFAIL|GFP_X)
    __vmalloc_node_range_noprof(vm_flags=VM_ALLOW_HUGE_VMAP)
        vm_area_alloc_pages(order=9) ---> order-9 allocation failed and fallback to order-0
        vmap_pages_range()
            vmap_pages_range_noflush()
                __vmap_pages_range_noflush(page_shift = 21) ----> wrong mapping happens

We can remove the fallback code, because if a high-order allocation fails, __vmalloc_node_range_noprof() will retry with order-0; falling back to order-0 here is therefore unnecessary. Fix this by removing the fallback code.

• https://git.kernel.org/stable/c/fe5c2bdcb14c8612eb5e7a09159801c7219e9ac4 https://git.kernel.org/stable/c/e9c3cda4d86e56bf7fe403729f38c4f0f65d3860 https://git.kernel.org/stable/c/fd1ffbb50ef4da5e1378a46616b6d7407dc795da https://git.kernel.org/stable/c/de7bad86345c43cd040ed43e20d9fad78a3ee59f https://git.kernel.org/stable/c/c91618816f4d21fc574d7577a37722adcd4075b2 https://git.kernel.org/stable/c/61ebe5a747da649057c37be1c37eb934b4af79ca •
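To see why mixed orders corrupt the mapping, consider the stride __vmap_pages_range_noflush() uses when page_shift is above PAGE_SHIFT: it assumes each pages[] entry at a stride boundary heads a physically contiguous high-order block. The loop below is a simplified sketch of that assumption, not the kernel's exact code; an order-0 fallback page in the middle of pages[] makes the whole "huge" step map the wrong physical range.

    /*
     * Simplified sketch of the assumption in __vmap_pages_range_noflush()
     * (not the exact kernel code): with page_shift == PMD_SHIFT, every
     * pages[i] at a stride boundary is expected to start a physically
     * contiguous 2 MiB block.  An order-0 fallback page violates that
     * expectation, so the range starting at page_to_phys(pages[i]) is not
     * the memory that was actually allocated.
     */
    for (i = 0; i < nr_pages; i += 1U << (page_shift - PAGE_SHIFT)) {
        err = vmap_range_noflush(addr, addr + (1UL << page_shift),
                                 page_to_phys(pages[i]), prot, page_shift);
        if (err)
            return err;
        addr += 1UL << page_shift;
    }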
CVE-2024-45021 – memcg_write_event_control(): fix a user-triggerable oops
https://notcve.org/view.php?id=CVE-2024-45021
In the Linux kernel, the following vulnerability has been resolved:

memcg_write_event_control(): fix a user-triggerable oops

We are *not* guaranteed that anything past the terminating NUL is mapped (let alone initialized with anything sane).

• https://git.kernel.org/stable/c/0dea116876eefc9c7ca9c5d74fe665481e499fa3 https://git.kernel.org/stable/c/fa5bfdf6cb5846a00e712d630a43e3cf55ccb411 https://git.kernel.org/stable/c/1b37ec85ad95b612307627758c6018cd9d92cca8 https://git.kernel.org/stable/c/ad149f5585345e383baa65f1539d816cd715fd3b https://git.kernel.org/stable/c/0fbe2a72e853a1052abe9bc2b7df8ddb102da227 https://git.kernel.org/stable/c/43768fa80fd192558737e24ed6548f74554611d7 https://git.kernel.org/stable/c/f1aa7c509aa766080db7ab3aec2e31b1df09e57c https://git.kernel.org/stable/c/21b578f1d599edb87462f11113c5b0fc7 •
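The bug class is easy to show with a sketch: a parser that advances its cursor with buf = endp + 1 even when the terminator it just matched was '\0' walks past the end of the user-supplied, NUL-terminated buffer, where nothing is guaranteed to be mapped. This is an illustrative pattern only, not the literal memcg_write_event_control() code.

    /*
     * Illustrative pattern only (not the literal kernel code): parsing
     * "<efd> <cfd> <path>" from a NUL-terminated buffer.  If the second
     * number happens to be terminated by '\0' rather than ' ', advancing
     * past it leaves buf pointing beyond the string, and any later read
     * through buf touches memory that may be unmapped or uninitialized.
     */
    efd = simple_strtoul(buf, &endp, 10);
    if (*endp != ' ')
        return -EINVAL;
    buf = endp + 1;

    cfd = simple_strtoul(buf, &endp, 10);
    if (*endp != ' ' && *endp != '\0')
        return -EINVAL;
    buf = endp + 1;   /* BUG: if *endp == '\0', buf now points past the string */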
CVE-2024-45020 – bpf: Fix a kernel verifier crash in stacksafe()
https://notcve.org/view.php?id=CVE-2024-45020
In the Linux kernel, the following vulnerability has been resolved:

bpf: Fix a kernel verifier crash in stacksafe()

Daniel Hodges reported a kernel verifier crash when playing with sched-ext. Further investigation shows that the crash is due to an invalid memory access in stacksafe(). More specifically, it is the following code:

    if (exact != NOT_EXACT &&
        old->stack[spi].slot_type[i % BPF_REG_SIZE] !=
        cur->stack[spi].slot_type[i % BPF_REG_SIZE])
            return false;

Here 'i' iterates over old->allocated_stack. If cur->allocated_stack < old->allocated_stack, an out-of-bounds access will happen. To fix the issue, add an 'i >= cur->allocated_stack' check such that if the condition is true, stacksafe() should fail. Otherwise, the cur->stack[spi].slot_type[i % BPF_REG_SIZE] memory access is legal.

• https://git.kernel.org/stable/c/ab470fefce2837e66b771c60858118d50bb5bb10 https://git.kernel.org/stable/c/2793a8b015f7f1caadb9bce9c63dc659f7522676 https://git.kernel.org/stable/c/7cad3174cc79519bf5f6c4441780264416822c08 https://git.kernel.org/stable/c/6e3987ac310c74bb4dd6a2fa8e46702fe505fb2b https://git.kernel.org/stable/c/bed2eb964c70b780fb55925892a74f26cb590b25 https://access.redhat.com/security/cve/CVE-2024-45020 https://bugzilla.redhat.com/show_bug.cgi?id=2311717 • CWE-125: Out-of-bounds Read •
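Following the description above, the guarded comparison would take roughly this shape: bail out of stacksafe() before indexing cur->stack when 'i' has run past cur->allocated_stack. This is a sketch based on the commit text, not necessarily the verbatim upstream diff.

    /*
     * Sketch of the fixed check in stacksafe(), following the commit
     * description (may not be the verbatim upstream diff): treat a slot
     * that 'cur' never allocated as a mismatch instead of reading past
     * cur->stack[].
     */
    if (exact != NOT_EXACT &&
        (i >= cur->allocated_stack ||
         old->stack[spi].slot_type[i % BPF_REG_SIZE] !=
         cur->stack[spi].slot_type[i % BPF_REG_SIZE]))
            return false;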