CVE-2024-53100 – nvme: tcp: avoid race between queue_lock lock and destroy
https://notcve.org/view.php?id=CVE-2024-53100
In the Linux kernel, the following vulnerability has been resolved: nvme: tcp: avoid race between queue_lock lock and destroy

Commit 76d54bf20cdc ("nvme-tcp: don't access released socket during error recovery") added a mutex_lock() call for the queue->queue_lock in nvme_tcp_get_address(). However, the mutex_lock() races with mutex_destroy() in nvme_tcp_free_queue(), and causes the WARN below.

DEBUG_LOCKS_WARN_ON(lock->magic != lock)
WARNING: CPU: 3 PID: 34077 at kernel/locking/mutex.c:587 __mutex_lock+0xcf0/0x1220
Modules linked in: nvmet_tcp nvmet nvme_tcp nvme_fabrics iw_cm ib_cm ib_core pktcdvd nft_fib_inet nft_fib_ipv4 nft_fib_ipv6 nft_fib nft_reject_inet nf_reject_ipv4 nf_reject_ipv6 nft_reject nft_ct nft_chain_nat nf_nat nf_conntrack nf_defrag_ipv6 nf_defrag_ipv4 ip_set nf_tables qrtr sunrpc ppdev 9pnet_virtio 9pnet pcspkr netfs parport_pc parport e1000 i2c_piix4 i2c_smbus loop fuse nfnetlink zram bochs drm_vram_helper drm_ttm_helper ttm drm_kms_helper xfs drm sym53c8xx floppy nvme scsi_transport_spi nvme_core nvme_auth serio_raw ata_generic pata_acpi dm_multipath qemu_fw_cfg [last unloaded: ib_uverbs]
CPU: 3 UID: 0 PID: 34077 Comm: udisksd Not tainted 6.11.0-rc7 #319
Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.16.3-2.fc40 04/01/2014
RIP: 0010:__mutex_lock+0xcf0/0x1220
Code: 08 84 d2 0f 85 c8 04 00 00 8b 15 ef b6 c8 01 85 d2 0f 85 78 f4 ff ff 48 c7 c6 20 93 ee af 48 c7 c7 60 91 ee af e8 f0 a7 6d fd <0f> 0b e9 5e f4 ff ff 48 b8 00 00 00 00 00 fc ff df 4c 89 f2 48 c1
RSP: 0018:ffff88811305f760 EFLAGS: 00010286
RAX: 0000000000000000 RBX: ffff88812c652058 RCX: 0000000000000000
RDX: 0000000000000000 RSI: 0000000000000004 RDI: 0000000000000001
RBP: ffff88811305f8b0 R08: 0000000000000001 R09: ffffed1075c36341
R10: ffff8883ae1b1a0b R11: 0000000000010498 R12: 0000000000000000
R13: 0000000000000000 R14: dffffc0000000000 R15: ffff88812c652058
FS:  00007f9713ae4980(0000) GS:ffff8883ae180000(0000) knlGS:0000000000000000
CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 00007fcd78483c7c CR3: 0000000122c38000 CR4: 00000000000006f0
DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
Call Trace:
 <TASK>
 ? __warn.cold+0x5b/0x1af
 ? __mutex_lock+0xcf0/0x1220
 ?

• https://git.kernel.org/stable/c/4f946479b326a3cbb193f2b8368aed9269514c35
• https://git.kernel.org/stable/c/975cb1d2121511584695d0e47fdb90e6782da007
• https://git.kernel.org/stable/c/e15cebc1b21856944b387f4abd03b66bd3d4f027
• https://git.kernel.org/stable/c/782373ba27660ba7d330208cf5509ece6feb4545
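For illustration, the race described above boils down to one context still calling mutex_lock() on queue->queue_lock while the teardown path has already run mutex_destroy() on the same mutex. The kernel-style sketch below is a simplified, hypothetical rendering of that pattern, not the actual nvme-tcp code or the upstream patch; the struct, flag and function names (demo_queue, DEMO bit 0, demo_get_address, demo_free_queue) are stand-ins.

#include <linux/mutex.h>
#include <linux/bitops.h>

/* Hypothetical, cut-down stand-in for the nvme-tcp queue teardown race. */
struct demo_queue {
	struct mutex	queue_lock;
	unsigned long	flags;		/* bit 0: queue is live */
};

static void demo_get_address(struct demo_queue *q)
{
	/*
	 * Racy: nothing stops demo_free_queue() from running
	 * mutex_destroy() on queue_lock concurrently with this
	 * mutex_lock(), which is what trips
	 * DEBUG_LOCKS_WARN_ON(lock->magic != lock).
	 */
	mutex_lock(&q->queue_lock);
	if (test_bit(0, &q->flags))
		;	/* ... read the socket address here ... */
	mutex_unlock(&q->queue_lock);
}

static void demo_free_queue(struct demo_queue *q)
{
	clear_bit(0, &q->flags);
	/*
	 * Destroying the mutex while a concurrent demo_get_address() may
	 * still take it is the bug; a safe teardown must ensure no other
	 * context can still reach the lock before (or instead of)
	 * calling mutex_destroy(), which is only a debugging aid.
	 */
	mutex_destroy(&q->queue_lock);
}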
CVE-2024-53099 – bpf: Check validity of link->type in bpf_link_show_fdinfo()
https://notcve.org/view.php?id=CVE-2024-53099
In the Linux kernel, the following vulnerability has been resolved: bpf: Check validity of link->type in bpf_link_show_fdinfo()

If a newly-added link type doesn't invoke BPF_LINK_TYPE(), accessing bpf_link_type_strs[link->type] may result in an out-of-bounds access. To spot such missed invocations early in the future, check the validity of link->type in bpf_link_show_fdinfo() and emit a warning when an invocation is missed.

• https://git.kernel.org/stable/c/d5092b0a1aaf35d77ebd8d33384d7930bec5cb5d
• https://git.kernel.org/stable/c/b3eb1b6a9f745d6941b345f0fae014dc8bb06d36
• https://git.kernel.org/stable/c/8421d4c8762bd022cb491f2f0f7019ef51b4f0a7
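The hardening pattern behind this fix is generic: before indexing a string table generated from a macro list, validate the index and warn when it is out of range or has no entry. A minimal, self-contained C sketch of that pattern follows; the table and names (link_type_strs, link_type_name) are made up for illustration and are not the actual bpf_link_type_strs definition.

#include <stdio.h>
#include <stddef.h>

/* Illustrative string table; real code generates this from a macro list,
 * so a newly added type that skips the macro leaves a NULL hole or an
 * index past the end of the array. */
static const char * const link_type_strs[] = {
	[0] = "unspec",
	[1] = "raw_tracepoint",
	[2] = "tracing",
};

static const char *link_type_name(unsigned int type)
{
	/* Validate the index before dereferencing the table. */
	if (type >= sizeof(link_type_strs) / sizeof(link_type_strs[0]) ||
	    !link_type_strs[type]) {
		fprintf(stderr, "warning: missing string for link type %u\n",
			type);
		return "<unknown>";
	}
	return link_type_strs[type];
}

int main(void)
{
	printf("%s\n", link_type_name(1));	/* prints "raw_tracepoint" */
	printf("%s\n", link_type_name(42));	/* warns, prints "<unknown>" */
	return 0;
}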
CVE-2024-53098 – drm/xe/ufence: Prefetch ufence addr to catch bogus address
https://notcve.org/view.php?id=CVE-2024-53098
In the Linux kernel, the following vulnerability has been resolved: drm/xe/ufence: Prefetch ufence addr to catch bogus address

access_ok() only checks for addr overflow, so also try to read the addr to catch an invalid addr sent from userspace. (cherry picked from commit 9408c4508483ffc60811e910a93d6425b8e63928)

• https://git.kernel.org/stable/c/5d623ffbae96b23f1fc43a3d5a267aabdb07583d
• https://git.kernel.org/stable/c/9c1813b3253480b30604c680026c7dc721ce86d1
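The underlying point is that access_ok() only validates that the range lies within the user address limit; it says nothing about whether the address is actually mapped and readable. A common way to catch a bogus pointer early is to attempt a small faulting read at setup time. The kernel-style sketch below is a hypothetical illustration of that idea, not the actual xe ufence code; demo_check_ufence_addr is an invented helper name.

#include <linux/uaccess.h>
#include <linux/errno.h>
#include <linux/types.h>

/*
 * Hypothetical helper: reject a user-supplied fence address up front.
 * access_ok() only checks the range against the user address limit, so
 * additionally read one value from the address; a bogus pointer faults
 * here (returning -EFAULT) instead of much later in the fence path.
 */
static int demo_check_ufence_addr(u64 __user *addr)
{
	u64 val;

	if (!access_ok(addr, sizeof(*addr)))
		return -EFAULT;

	/* The read result is discarded; we only care that the read succeeds. */
	if (get_user(val, addr))
		return -EFAULT;

	return 0;
}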
CVE-2024-53097 – mm: krealloc: Fix MTE false alarm in __do_krealloc
https://notcve.org/view.php?id=CVE-2024-53097
In the Linux kernel, the following vulnerability has been resolved: mm: krealloc: Fix MTE false alarm in __do_krealloc

This patch addresses an issue introduced by commit 1a83a716ec233 ("mm: krealloc: consider spare memory for __GFP_ZERO") which causes MTE (Memory Tagging Extension) to falsely report a slab-out-of-bounds error. The problem occurs when zeroing out spare memory in __do_krealloc. The original code only considered software-based KASAN and did not account for MTE. It does not reset the KASAN tag before calling memset, leading to a mismatch between the pointer tag and the memory tag, resulting in a false positive.

Example of the error:

==================================================================
swapper/0: BUG: KASAN: slab-out-of-bounds in __memset+0x84/0x188
swapper/0: Write at addr f4ffff8005f0fdf0 by task swapper/0/1
swapper/0: Pointer tag: [f4], memory tag: [fe]
swapper/0:
swapper/0: CPU: 4 UID: 0 PID: 1 Comm: swapper/0 Not tainted 6.12.
swapper/0: Hardware name: MT6991(ENG) (DT)
swapper/0: Call trace:
swapper/0:  dump_backtrace+0xfc/0x17c
swapper/0:  show_stack+0x18/0x28
swapper/0:  dump_stack_lvl+0x40/0xa0
swapper/0:  print_report+0x1b8/0x71c
swapper/0:  kasan_report+0xec/0x14c
swapper/0:  __do_kernel_fault+0x60/0x29c
swapper/0:  do_bad_area+0x30/0xdc
swapper/0:  do_tag_check_fault+0x20/0x34
swapper/0:  do_mem_abort+0x58/0x104
swapper/0:  el1_abort+0x3c/0x5c
swapper/0:  el1h_64_sync_handler+0x80/0xcc
swapper/0:  el1h_64_sync+0x68/0x6c
swapper/0:  __memset+0x84/0x188
swapper/0:  btf_populate_kfunc_set+0x280/0x3d8
swapper/0:  __register_btf_kfunc_id_set+0x43c/0x468
swapper/0:  register_btf_kfunc_id_set+0x48/0x60
swapper/0:  register_nf_nat_bpf+0x1c/0x40
swapper/0:  nf_nat_init+0xc0/0x128
swapper/0:  do_one_initcall+0x184/0x464
swapper/0:  do_initcall_level+0xdc/0x1b0
swapper/0:  do_initcalls+0x70/0xc0
swapper/0:  do_basic_setup+0x1c/0x28
swapper/0:  kernel_init_freeable+0x144/0x1b8
swapper/0:  kernel_init+0x20/0x1a8
swapper/0:  ret_from_fork+0x10/0x20
==================================================================

• https://git.kernel.org/stable/c/a543785856249a5ba8c20468098601c0c33b1224
• https://git.kernel.org/stable/c/44f79667fefd52945a44d2a57a2cd3c554d7f4e0
• https://git.kernel.org/stable/c/f8767d10bcbc2529540eb906906c0058e15cd918
• https://git.kernel.org/stable/c/e3a9fc1520a6606c6121aca8d6679c6b93de7fd8
• https://git.kernel.org/stable/c/3e9a65a38706866bf93e19f5b4936465188add10
• https://git.kernel.org/stable/c/73388659ef0eea51747350530afdeadf8809ce9c
• https://git.kernel.org/stable/c/d02492863023431c31f85d570f718433c22b9311
• https://git.kernel.org/stable/c/d43f1430d47c22a0727c05b6f156ed25f
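As described above, the fix pattern is to strip the pointer's KASAN/MTE tag before writing into the spare region beyond the requested size, so the hardware tag check is not performed against the caller's tagged pointer. The kernel-style snippet below is a simplified, hypothetical sketch of that zeroing step, not the actual __do_krealloc code; demo_zero_spare is an invented name and ks stands for the underlying slab object size.

#include <linux/kasan.h>
#include <linux/string.h>
#include <linux/gfp.h>

/*
 * Hypothetical sketch of zeroing the spare tail of a reused allocation
 * for __GFP_ZERO. kasan_reset_tag() returns an untagged alias of the
 * pointer, so under MTE the memset is not checked against the
 * (mismatching) memory tags of the spare region.
 */
static void demo_zero_spare(void *p, size_t new_size, size_t ks, gfp_t flags)
{
	if ((flags & __GFP_ZERO) && new_size < ks)
		memset(kasan_reset_tag(p) + new_size, 0, ks - new_size);
}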
CVE-2024-53096 – mm: resolve faulty mmap_region() error path behaviour
https://notcve.org/view.php?id=CVE-2024-53096
In the Linux kernel, the following vulnerability has been resolved: mm: resolve faulty mmap_region() error path behaviour

The mmap_region() function is somewhat terrifying, with spaghetti-like control flow and numerous means by which issues can arise and incomplete state, memory leaks and other unpleasantness can occur. A large amount of the complexity arises from trying to handle errors late in the process of mapping a VMA, which forms the basis of recently observed issues with resource leaks and observable inconsistent state.

Taking advantage of previous patches in this series we move a number of checks earlier in the code, simplifying things by moving the core of the logic into a static internal function __mmap_region(). Doing this allows us to perform a number of checks up front before we do any real work, and allows us to unwind the writable unmap check unconditionally as required and to perform a CONFIG_DEBUG_VM_MAPLE_TREE validation unconditionally also.

We move a number of things here:

1. We preallocate memory for the iterator before we call the file-backed memory hook, allowing us to exit early and avoid having to perform complicated and error-prone close/free logic. We carefully free iterator state on both success and error paths.

2. The enclosing mmap_region() function handles the mapping_map_writable() logic early. Previously the logic had the mapping_map_writable() at the point of mapping a newly allocated file-backed VMA, and a matching mapping_unmap_writable() on success and error paths. We now do this unconditionally if this is a file-backed, shared writable mapping.

• https://git.kernel.org/stable/c/deb0f6562884b5b4beb883d73e66a7d3a1b96d99
• https://git.kernel.org/stable/c/a3c08c021778dad30f69895e378843e9f423d734
• https://git.kernel.org/stable/c/43bed0a13a5cdbb314d14f28c2bf2c60eb4e6e1e
• https://git.kernel.org/stable/c/6757330b1be5b0606125b65ed50caac69bccf9a5
• https://git.kernel.org/stable/c/66f2ed0172af04a89677ae1898600e1264e25800
• https://git.kernel.org/stable/c/52c81fd0f5a8bf8032687b94ccf00d13b44cc5c8
• https://git.kernel.org/stable/c/bdc136e2b05fabcd780fe5f165d154eb779dfcb0
• https://git.kernel.org/stable/c/5de195060b2e251a835f622759550e620
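Generalised, the shape of this fix is to perform every failable check and allocation before touching any state that is hard to undo, so the error path becomes a short, unconditional unwind rather than a web of special cases. The standalone C sketch below illustrates only that generic pattern under those assumptions; it is not the actual __mmap_region() code, and every demo_* name is invented.

#include <stdlib.h>
#include <errno.h>

struct demo_iter { int *slots; };
struct demo_mapping { struct demo_iter it; int writable_held; };

/* Hypothetical stand-ins for the failable steps. */
static int demo_validate(int flags) { return flags >= 0 ? 0 : -EINVAL; }
static int demo_map_writable(struct demo_mapping *m) { m->writable_held = 1; return 0; }
static void demo_unmap_writable(struct demo_mapping *m) { m->writable_held = 0; }
static int demo_do_work(struct demo_mapping *m) { (void)m; return 0; }

static int demo_map_region(struct demo_mapping *m, int flags)
{
	int err;

	/* 1. Front-load every check that can fail. */
	err = demo_validate(flags);
	if (err)
		return err;

	/* 2. Preallocate iterator state so no later step can fail on
	 *    allocation after partial state already exists. */
	m->it.slots = calloc(16, sizeof(*m->it.slots));
	if (!m->it.slots)
		return -ENOMEM;

	/* 3. Take the writable-mapping reference unconditionally up front... */
	err = demo_map_writable(m);
	if (err)
		goto free_iter;

	/* 4. ...so the error unwind below is equally unconditional. */
	err = demo_do_work(m);
	if (err)
		goto unmap_writable;

	/* Iterator state is released on the success path as well. */
	free(m->it.slots);
	m->it.slots = NULL;
	return 0;

unmap_writable:
	demo_unmap_writable(m);
free_iter:
	free(m->it.slots);
	m->it.slots = NULL;
	return err;
}

int main(void)
{
	struct demo_mapping m = {0};
	return demo_map_region(&m, 0) ? 1 : 0;
}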