CVE-2024-42246 – net, sunrpc: Remap EPERM in case of connection failure in xs_tcp_setup_socket
https://notcve.org/view.php?id=CVE-2024-42246
In the Linux kernel, the following vulnerability has been resolved: net, sunrpc: Remap EPERM in case of connection failure in xs_tcp_setup_socket

When using a BPF program on kernel_connect(), the call can return -EPERM. This causes xs_tcp_setup_socket() to loop forever, filling up the syslog and potentially freezing the kernel.

Neil suggested: "This will propagate -EPERM up into other layers which might not be ready to handle it. It might be safer to map EPERM to an error we would be more likely to expect from the network system - such as ECONNREFUSED or ENETDOWN."

ECONNREFUSED seems reasonable as the error. For BPF programs, setting a different error can be out of reach (see the handling in 4fbac77d2d09), in particular on kernels which do not have f10d05966196 ("bpf: Make BPF_PROG_RUN_ARRAY return -err instead of allow boolean"); given that, it is better to simply remap for consistent behavior. UDP already handles EPERM in xs_udp_send_request().

• https://git.kernel.org/stable/c/4fbac77d2d092b475dda9eea66da674369665427
• https://git.kernel.org/stable/c/bc790261218952635f846aaf90bcc0974f6f62c6
• https://git.kernel.org/stable/c/934247ea65bc5eca8bdb7f8c0ddc15cef992a5d6
• https://git.kernel.org/stable/c/02ee1976edb21a96ce8e3fd4ef563f14cc16d041
• https://git.kernel.org/stable/c/5d8254e012996cee1a0f9cc920531cb7e4d9a011
• https://git.kernel.org/stable/c/f2431e7db0fe0daccb2f06bb0d23740affcd2fa6
• https://git.kernel.org/stable/c/d6c686c01c5f12ff8f7264e0ddf71df6cb0d4414
• https://git.kernel.org/stable/c/f388cfd913a2b96c05339a335f365795d
• CWE-835: Loop with Unreachable Exit Condition ('Infinite Loop')
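The fix boils down to translating the -EPERM that a connect-time BPF program can return into an error the RPC reconnect logic already knows how to back off on. The snippet below is a minimal, self-contained sketch of that remapping idea, not the actual sunrpc code; the helper name remap_connect_error() and the standalone test harness are hypothetical.

#include <errno.h>
#include <stdio.h>

/*
 * Hypothetical helper illustrating the remapping idea from the fix:
 * a connect attempt rejected by a BPF program with -EPERM is reported
 * to the caller as -ECONNREFUSED, an error the retry logic already
 * expects and backs off on, instead of looping forever.
 */
static int remap_connect_error(int status)
{
	switch (status) {
	case -EPERM:
		/* Happens e.g. when a BPF program denies the connect. */
		return -ECONNREFUSED;
	default:
		return status;
	}
}

int main(void)
{
	printf("-EPERM       -> %d (expected %d)\n",
	       remap_connect_error(-EPERM), -ECONNREFUSED);
	printf("-ENETUNREACH -> %d (unchanged)\n",
	       remap_connect_error(-ENETUNREACH));
	return 0;
}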
CVE-2024-42245 – Revert "sched/fair: Make sure to try to detach at least one movable task"
https://notcve.org/view.php?id=CVE-2024-42245
In the Linux kernel, the following vulnerability has been resolved: Revert "sched/fair: Make sure to try to detach at least one movable task"

This reverts commit b0defa7ae03ecf91b8bfd10ede430cff12fcbd06.

b0defa7ae03ec changed the load balancing logic to ignore env.max_loop if all tasks examined up to that point were pinned. The goal of the patch was to make it more likely that a task buried in a long list of pinned tasks could be detached. However, this has the unfortunate side effect of creating an O(n) iteration in detach_tasks(), as we now must fully iterate every task on a CPU if all or most are pinned. Since this load-balance code runs with the rq lock held, and often in softirq context, it is very easy to trigger hard lockups. We observed such hard lockups with a user who affined O(10k) threads to a single CPU.

When I discussed this with Vincent he initially suggested that we keep the limit on the number of tasks to detach, but increase the number of tasks we can search.

• https://git.kernel.org/stable/c/b0defa7ae03ecf91b8bfd10ede430cff12fcbd06
• https://git.kernel.org/stable/c/d467194018dd536fe6c65a2fd3aedfcdb1424903
• https://git.kernel.org/stable/c/1e116c18e32b035a2d1bd460800072c8bf96bc44
• https://git.kernel.org/stable/c/0fa6dcbfa2e2b97c1e6febbea561badf0931a38b
• https://git.kernel.org/stable/c/2feab2492deb2f14f9675dd6388e9e2bf669c27a
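To see why the unbounded scan matters, compare a walk that stops after a fixed budget of examined tasks with one that keeps walking as long as every task seen so far is pinned. The toy program below models a run queue as an array of pinned/movable flags and compares the two policies; it is only an illustration of the complexity argument, not the scheduler code, and all names (scan(), LOOP_MAX) are hypothetical.

#include <stdbool.h>
#include <stdio.h>

#define NR_TASKS 10000
#define LOOP_MAX 32 /* illustrative scan budget, akin to env.max_loop */

/* Returns the number of tasks examined before detaching one or giving up. */
static int scan(const bool *pinned, int nr, bool ignore_budget_while_pinned)
{
	int examined = 0;

	for (int i = 0; i < nr; i++) {
		examined++;
		if (!pinned[i])
			return examined;   /* found a movable task */
		if (!ignore_budget_while_pinned && examined >= LOOP_MAX)
			return examined;   /* budget exhausted, bail out */
	}
	return examined;                   /* walked the whole queue */
}

int main(void)
{
	static bool pinned[NR_TASKS];

	/* Worst case from the report: (almost) everything is pinned. */
	for (int i = 0; i < NR_TASKS; i++)
		pinned[i] = true;

	printf("bounded scan examined   %d tasks\n", scan(pinned, NR_TASKS, false));
	printf("unbounded scan examined %d tasks\n", scan(pinned, NR_TASKS, true));
	return 0;
}

With roughly 10k pinned threads the second policy walks the entire queue while holding the rq lock, which is exactly the O(n) iteration the revert removes.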
CVE-2024-42244 – USB: serial: mos7840: fix crash on resume
https://notcve.org/view.php?id=CVE-2024-42244
In the Linux kernel, the following vulnerability has been resolved: USB: serial: mos7840: fix crash on resume

Since commit c49cfa917025 ("USB: serial: use generic method if no alternative is provided in usb serial layer"), the USB serial core calls the generic resume implementation when the driver has not provided one. This can trigger a crash on resume with mos7840, since support for multiple read URBs was added back in 2011. Specifically, both port read URBs are now submitted on resume for open ports, but the context pointer of the second URB is left set to the core rather than to the mos7840 port structure.

Fix this by implementing dedicated suspend and resume functions for mos7840.

Tested with a Delock 87414 USB 2.0 to 4x serial adapter.

[ johan: analyse crash and rewrite commit message; set busy flag on resume; drop bulk-in check; drop unnecessary usb_kill_urb() ]

• https://git.kernel.org/stable/c/d83b405383c965498923f3561c3321e2b5df5727
• https://git.kernel.org/stable/c/932a86a711c722b45ed47ba2103adca34d225b33
• https://git.kernel.org/stable/c/b14aa5673e0a8077ff4b74f0bb260735e7d5e6a4
• https://git.kernel.org/stable/c/1094ed500987e67a9d18b0f95e1812f1cc720856
• https://git.kernel.org/stable/c/5ae6a64f18211851c8df6b4348
• https://git.kernel.org/stable/c/553e67dec846323b5575e78a776cf594c13f98c4
• https://git.kernel.org/stable/c/c15a688e49987385baa8804bf65d570e362f8576
• https://access.redhat.com/security/cve/CVE-2024-42244
• CWE-99: Improper Control of Resource Identifiers ('Resource Injection')
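The crash pattern here is a completion callback casting a stale context pointer to the wrong structure type: the generic resume path resubmits the second read URB with a context that still points at the core's port object rather than the driver's private state. The program below is a deliberately simplified userspace model of that mistake; every type and function in it is hypothetical and only illustrates why a dedicated resume handler must set the per-URB context itself.

#include <stdio.h>

/* Hypothetical stand-ins for the two unrelated structures involved. */
struct core_port   { int port_number; };
struct driver_port { int magic; int open; };

#define DRIVER_MAGIC 0x7840

/* Minimal model of a URB: just a completion callback plus its context. */
struct fake_urb {
	void (*complete)(struct fake_urb *urb);
	void *context;
};

/* The read completion expects context to be the driver's private struct. */
static void read_complete(struct fake_urb *urb)
{
	struct driver_port *port = urb->context;

	if (port->magic != DRIVER_MAGIC) {
		/* In the kernel this is a bogus dereference and a crash. */
		printf("completion saw wrong context type -> crash\n");
		return;
	}
	printf("completion saw driver port, open=%d\n", port->open);
}

int main(void)
{
	struct core_port core = { .port_number = 1 };
	struct driver_port drv = { .magic = DRIVER_MAGIC, .open = 1 };

	/* Generic-resume-style resubmission: the URB keeps a stale context. */
	struct fake_urb bad = { .complete = read_complete, .context = &core };
	/* Dedicated-resume-style resubmission: context set to driver state. */
	struct fake_urb good = { .complete = read_complete, .context = &drv };

	bad.complete(&bad);
	good.complete(&good);
	return 0;
}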
CVE-2024-42243 – mm/filemap: make MAX_PAGECACHE_ORDER acceptable to xarray
https://notcve.org/view.php?id=CVE-2024-42243
In the Linux kernel, the following vulnerability has been resolved: mm/filemap: make MAX_PAGECACHE_ORDER acceptable to xarray

Patch series "mm/filemap: Limit page cache size to that supported by xarray", v2.

Currently, xarray can't support an arbitrary page cache size. More details can be found in the WARN_ON() statement in xas_split_alloc(). In our test, whose code is attached below, we hit the WARN_ON() on an ARM64 system where the base page size is 64KB and the huge page size is 512MB. The issue was reported a long time ago and some discussions on it can be found here [1].

[1] https://www.spinics.net/lists/linux-xfs/msg75404.html

In order to fix the issue, we need to adjust MAX_PAGECACHE_ORDER to one supported by xarray and avoid PMD-sized page cache if needed. The code changes are suggested by David Hildenbrand.

PATCH[1] adjusts MAX_PAGECACHE_ORDER to that supported by xarray
PATCH[2-3] avoids PMD-sized page cache in the synchronous readahead path
PATCH[4] avoids PMD-sized page cache for shmem files if needed

Test program
============

# cat test.c
#define _GNU_SOURCE
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <string.h>
#include <fcntl.h>
#include <errno.h>
#include <sys/syscall.h>
#include <sys/mman.h>

#define TEST_XFS_FILENAME "/tmp/data"
#define TEST_SHMEM_FILENAME "/dev/shm/data"
#define TEST_MEM_SIZE 0x20000000

int main(int argc, char **argv)
{
	const char *filename;
	int fd = 0;
	void *buf = (void *)-1, *p;
	int pgsize = getpagesize();
	int ret;

	if (pgsize !

• https://git.kernel.org/stable/c/793917d997df2e432f3e9ac126e4482d68256d01
• https://git.kernel.org/stable/c/a0c42ddd0969fdc760a85e20e267776028a7ca4e
• https://git.kernel.org/stable/c/333c5539a31f48828456aa9997ec2808f06a699a
• https://git.kernel.org/stable/c/099d90642a711caae377f53309abfe27e8724a8b
• https://access.redhat.com/security/cve/CVE-2024-42243
• https://bugzilla.redhat.com/show_bug.cgi?id=2303511
• CWE-20: Improper Input Validation
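The arithmetic behind the ARM64/64KB failure is worth spelling out: with a 64KB base page, a 512MB PMD-sized folio covers 512MB / 64KB = 8192 pages, i.e. order 13, while xas_split_alloc() can only split entries whose order fits within a couple of xarray levels. The small program below works through that comparison; the chunk-shift value of 6 and the exact split limit (2 * shift - 1) are assumptions used for illustration, not the kernel's definitions.

#include <stdio.h>

/*
 * Illustrative numbers only. XA_CHUNK_SHIFT is normally 6, and the fix caps
 * the page cache order at roughly two xarray levels (assumed: 2 * 6 - 1).
 */
#define XA_CHUNK_SHIFT_GUESS 6
#define MAX_SPLITTABLE_ORDER (2 * XA_CHUNK_SHIFT_GUESS - 1)

/* Order of a PMD-sized folio for a given base page size. */
static int pmd_order(long pmd_size, long page_size)
{
	int order = 0;

	for (long pages = pmd_size / page_size; pages > 1; pages >>= 1)
		order++;
	return order;
}

int main(void)
{
	long kb = 1024, mb = 1024 * kb;

	/* x86-64: 4KB pages, 2MB PMD   -> order 9, within the limit.  */
	/* arm64 : 64KB pages, 512MB PMD -> order 13, over the limit.  */
	struct { const char *name; long page, pmd; } cfg[] = {
		{ "x86-64 4KB/2MB   ", 4 * kb, 2 * mb },
		{ "arm64  64KB/512MB", 64 * kb, 512 * mb },
	};

	for (int i = 0; i < 2; i++) {
		int order = pmd_order(cfg[i].pmd, cfg[i].page);

		printf("%s: PMD order %2d, splittable up to %d -> %s\n",
		       cfg[i].name, order, MAX_SPLITTABLE_ORDER,
		       order > MAX_SPLITTABLE_ORDER ? "must be capped" : "ok");
	}
	return 0;
}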
CVE-2024-42241 – mm/shmem: disable PMD-sized page cache if needed
https://notcve.org/view.php?id=CVE-2024-42241
In the Linux kernel, the following vulnerability has been resolved: mm/shmem: disable PMD-sized page cache if needed

For shmem files, it's possible that a PMD-sized page cache can't be supported by xarray. For example, a 512MB page cache on ARM64, where the base page size is 64KB, can't be supported by xarray. It leads to errors such as the following when this sort of xarray entry is split.

WARNING: CPU: 34 PID: 7578 at lib/xarray.c:1025 xas_split_alloc+0xf8/0x128
Modules linked in: binfmt_misc nft_fib_inet nft_fib_ipv4 nft_fib_ipv6 \
 nft_fib nft_reject_inet nf_reject_ipv4 nf_reject_ipv6 nft_reject \
 nft_ct nft_chain_nat nf_nat nf_conntrack nf_defrag_ipv6 nf_defrag_ipv4 \
 ip_set rfkill nf_tables nfnetlink vfat fat virtio_balloon drm fuse xfs \
 libcrc32c crct10dif_ce ghash_ce sha2_ce sha256_arm64 sha1_ce virtio_net \
 net_failover virtio_console virtio_blk failover dimlib virtio_mmio
CPU: 34 PID: 7578 Comm: test Kdump: loaded Tainted: G W 6.10.0-rc5-gavin+ #9
Hardware name: QEMU KVM Virtual Machine, BIOS edk2-20240524-1.el9 05/24/2024
pstate: 83400005 (Nzcv daif +PAN -UAO +TCO +DIT -SSBS BTYPE=--)
pc : xas_split_alloc+0xf8/0x128
lr : split_huge_page_to_list_to_order+0x1c4/0x720
sp : ffff8000882af5f0
x29: ffff8000882af5f0 x28: ffff8000882af650 x27: ffff8000882af768
x26: 0000000000000cc0 x25: 000000000000000d x24: ffff00010625b858
x23: ffff8000882af650 x22: ffffffdfc0900000 x21: 0000000000000000
x20: 0000000000000000 x19: ffffffdfc0900000 x18: 0000000000000000
x17: 0000000000000000 x16: 0000018000000000 x15: 52f8004000000000
x14: 0000e00000000000 x13: 0000000000002000 x12: 0000000000000020
x11: 52f8000000000000 x10: 52f8e1c0ffff6000 x9 : ffffbeb9619a681c
x8 : 0000000000000003 x7 : 0000000000000000 x6 : ffff00010b02ddb0
x5 : ffffbeb96395e378 x4 : 0000000000000000 x3 : 0000000000000cc0
x2 : 000000000000000d x1 : 000000000000000c x0 : 0000000000000000
Call trace:
 xas_split_alloc+0xf8/0x128
 split_huge_page_to_list_to_order+0x1c4/0x720
 truncate_inode_partial_folio+0xdc/0x160
 shmem_undo_range+0x2bc/0x6a8
 shmem_fallocate+0x134/0x430
 vfs_fallocate+0x124/0x2e8
 ksys_fallocate+0x4c/0xa0
 __arm64_sys_fallocate+0x24/0x38
 invoke_syscall.constprop.0+0x7c/0xd8
 do_el0_svc+0xb4/0xd0
 el0_svc+0x44/0x1d8
 el0t_64_sync_handler+0x134/0x150
 el0t_64_sync+0x17c/0x180

Fix it by disabling the PMD-sized page cache when HPAGE_PMD_ORDER is larger than MAX_PAGECACHE_ORDER. As Matthew Wilcox pointed out, the page cache in a shmem file wasn't represented by a multi-index entry, and so didn't have this limitation when the xarray entry is split, until commit 6b24ca4a1a8d ("mm: Use multi-index entries in the page cache").

A denial of service vulnerability was found in the Linux Kernel.

• https://git.kernel.org/stable/c/6b24ca4a1a8d4ee3221d6d44ddbb99f542e4bda3
• https://git.kernel.org/stable/c/93893eacb372b0a4a30f7de6609b08c3ba6c4fd9
• https://git.kernel.org/stable/c/cd25208ca9b0097f8e079d692fc678f36fdbc3f9
• https://git.kernel.org/stable/c/9fd154ba926b34c833b7bfc4c14ee2e931b3d743
• https://access.redhat.com/security/cve/CVE-2024-42241
• https://bugzilla.redhat.com/show_bug.cgi?id=2303509
• CWE-99: Improper Control of Resource Identifiers ('Resource Injection')
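The fix described above amounts to comparing two orders: if HPAGE_PMD_ORDER exceeds the maximum order the page cache (and therefore xarray) can handle, shmem simply never uses PMD-sized folios. The function below is a minimal sketch of that guard; shmem_pmd_cache_allowed(), HPAGE_PMD_ORDER_GUESS and MAX_PAGECACHE_ORDER_GUESS are hypothetical names, with values taken from the ARM64 64KB example in the report.

#include <stdbool.h>
#include <stdio.h>

/*
 * Hypothetical constants for the ARM64 64KB configuration from the report:
 * a 512MB PMD folio is order 13 with 64KB pages, while the page cache
 * (limited by what xarray can split) tops out below that.
 */
#define HPAGE_PMD_ORDER_GUESS     13
#define MAX_PAGECACHE_ORDER_GUESS 11

/*
 * Sketch of the guard the fix adds: shmem must not use PMD-sized folios
 * when their order exceeds what the page cache can represent and split.
 */
static bool shmem_pmd_cache_allowed(int pmd_order, int max_pagecache_order)
{
	return pmd_order <= max_pagecache_order;
}

int main(void)
{
	printf("PMD-sized shmem folios allowed: %s\n",
	       shmem_pmd_cache_allowed(HPAGE_PMD_ORDER_GUESS,
				       MAX_PAGECACHE_ORDER_GUESS) ? "yes" : "no");
	return 0;
}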