CVE-2024-53063 – media: dvbdev: prevent the risk of out of memory access
https://notcve.org/view.php?id=CVE-2024-53063
In the Linux kernel, the following vulnerability has been resolved: media: dvbdev: prevent the risk of out of memory access The dvbdev code contains a static variable used to store DVB minors. Its behavior depends on whether CONFIG_DVB_DYNAMIC_MINORS is set or not. When it is not set, dvb_register_device() won't check for boundaries, relying on a previous call to dvb_register_adapter() to have already enforced them. In a similar way, dvb_device_open() assumes that the register functions already did the needed checks. This can be fragile if some device ends up using different calls, and it also generates warnings in static analysis tools like Coverity. So, add explicit guards to prevent the potential risk of out-of-bounds access. • https://git.kernel.org/stable/c/5dd3f3071070f5a306bdf8d474c80062f5691cba https://git.kernel.org/stable/c/fedfde9deb83ac8d2f3d5f36f111023df34b1684 https://git.kernel.org/stable/c/3b88675e18b6517043a6f734eaa8ea6eb3bfa140 https://git.kernel.org/stable/c/a4a17210c03ade1c8d9a9f193a105654b7a05c11 https://git.kernel.org/stable/c/5f76f7df14861e3a560898fa41979ec92424b58f https://git.kernel.org/stable/c/b751a96025275c17f04083cbfe856822f1658946 https://git.kernel.org/stable/c/1e461672616b726f29261ee81bb991528818537c https://git.kernel.org/stable/c/9c17085fabbde2041c893d29599800f2d •
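A minimal userspace sketch of the kind of explicit bounds check described above, assuming a fixed-size static minor table; the MAX_DVB_MINORS value, the dvb_minors[] array, and the function names here are illustrative, not the driver's actual identifiers:

```c
#include <stdio.h>
#include <errno.h>

#define MAX_DVB_MINORS 256

static void *dvb_minors[MAX_DVB_MINORS];

/* Register a device at the given minor, refusing out-of-range values
 * instead of trusting that an earlier registration step validated them. */
static int register_minor(int minor, void *dev)
{
	if (minor < 0 || minor >= MAX_DVB_MINORS)
		return -EINVAL;         /* explicit guard: never index past the table */
	if (dvb_minors[minor])
		return -EBUSY;
	dvb_minors[minor] = dev;
	return 0;
}

/* The open path performs the same check rather than assuming
 * the registration path already did it. */
static void *open_minor(int minor)
{
	if (minor < 0 || minor >= MAX_DVB_MINORS)
		return NULL;
	return dvb_minors[minor];
}

int main(void)
{
	int dummy;

	printf("register(300) -> %d\n", register_minor(300, &dummy)); /* rejected */
	printf("register(5)   -> %d\n", register_minor(5, &dummy));   /* accepted */
	printf("open(5) valid: %d\n", open_minor(5) != NULL);
	return 0;
}
```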
CVE-2024-53061 – media: s5p-jpeg: prevent buffer overflows
https://notcve.org/view.php?id=CVE-2024-53061
In the Linux kernel, the following vulnerability has been resolved: media: s5p-jpeg: prevent buffer overflows The current logic allows word to be less than 2. If this happens, there will be buffer overflows, as reported by smatch. Add extra checks to prevent it. While here, remove an unused word = 0 assignment. • https://git.kernel.org/stable/c/6c96dbbc2aa9f5b4aed8792989d69eae22bf77c4 https://git.kernel.org/stable/c/c5f6fefcda8fac8f082b6c5bf416567f4e100c51 https://git.kernel.org/stable/c/e5117f6e7adcf9fd7546cdd0edc9abe4474bc98b https://git.kernel.org/stable/c/f54e8e1e39dacccebcfb9a9a36f0552a0a97e2ef https://git.kernel.org/stable/c/a930cddfd153b5d4401df0c01effa14c831ff21e https://git.kernel.org/stable/c/c85db2d4432de4ff9d97006691ce2dcb5bda660e https://git.kernel.org/stable/c/784bc785a453eb2f8433dd62075befdfa1b2d6fd https://git.kernel.org/stable/c/c951a0859fdacf49a2298b5551a7e52b9 •
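A minimal sketch of the kind of guard the fix describes, assuming a JPEG-style marker segment whose big-endian length word includes its own two bytes; the parse_segment() helper and its buffer layout are illustrative, not the s5p-jpeg driver's real code:

```c
#include <stdio.h>
#include <string.h>
#include <stdint.h>

/* A marker segment stores its length as a big-endian word that counts
 * the two length bytes themselves, so any valid value is >= 2. */
static int parse_segment(const uint8_t *buf, size_t buf_len,
			 uint8_t *payload, size_t payload_cap)
{
	unsigned int word;
	size_t payload_len;

	if (buf_len < 2)
		return -1;

	word = ((unsigned int)buf[0] << 8) | buf[1];

	/* Guard: without this, word < 2 makes payload_len wrap around to a
	 * huge value and the memcpy below overflows the destination buffer. */
	if (word < 2)
		return -1;

	payload_len = word - 2;
	if (payload_len > payload_cap || payload_len + 2 > buf_len)
		return -1;

	memcpy(payload, buf + 2, payload_len);
	return (int)payload_len;
}

int main(void)
{
	uint8_t bogus[] = { 0x00, 0x01, 0xaa };        /* length word of 1: rejected */
	uint8_t good[]  = { 0x00, 0x04, 0xde, 0xad };  /* length word of 4: 2 payload bytes */
	uint8_t out[16];

	printf("bogus -> %d\n", parse_segment(bogus, sizeof(bogus), out, sizeof(out)));
	printf("good  -> %d\n", parse_segment(good, sizeof(good), out, sizeof(out)));
	return 0;
}
```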
CVE-2024-53058 – net: stmmac: TSO: Fix unbalanced DMA map/unmap for non-paged SKB data
https://notcve.org/view.php?id=CVE-2024-53058
In the Linux kernel, the following vulnerability has been resolved: net: stmmac: TSO: Fix unbalanced DMA map/unmap for non-paged SKB data In case the non-paged data of an SKB carries a protocol header and protocol payload to be transmitted on a platform where the DMA AXI address width is configured to 40-bit/48-bit, or the size of the non-paged data is bigger than TSO_MAX_BUFF_SIZE on a platform where the DMA AXI address width is configured to 32-bit, then this SKB requires at least two DMA transmit descriptors to serve it. For example, three descriptors are allocated to split one DMA buffer mapped from one piece of non-paged data: dma_desc[N + 0], dma_desc[N + 1], dma_desc[N + 2]. Then three elements of tx_q->tx_skbuff_dma[] will be allocated to hold extra information to be reused in stmmac_tx_clean(): tx_q->tx_skbuff_dma[N + 0], tx_q->tx_skbuff_dma[N + 1], tx_q->tx_skbuff_dma[N + 2]. Now we focus on tx_q->tx_skbuff_dma[entry].buf, which is the DMA buffer address returned by the DMA mapping call. stmmac_tx_clean() will try to unmap the DMA buffer _ONLY_IF_ tx_q->tx_skbuff_dma[entry].buf is a valid buffer address. The expected behavior, which saves the DMA buffer address of this non-paged data to tx_q->tx_skbuff_dma[entry].buf, is: tx_q->tx_skbuff_dma[N + 0].buf = NULL; tx_q->tx_skbuff_dma[N + 1].buf = NULL; tx_q->tx_skbuff_dma[N + 2].buf = dma_map_single(); Unfortunately, the current code misbehaves like this: tx_q->tx_skbuff_dma[N + 0].buf = dma_map_single(); tx_q->tx_skbuff_dma[N + 1].buf = NULL; tx_q->tx_skbuff_dma[N + 2].buf = NULL; On the stmmac_tx_clean() side, when dma_desc[N + 0] is closed by the DMA engine, tx_q->tx_skbuff_dma[N + 0].buf is obviously a valid buffer address, so the DMA buffer will be unmapped immediately. In the rare case where the DMA engine has not yet finished the pending dma_desc[N + 1] and dma_desc[N + 2], things go horribly wrong: DMA will access an unmapped/unreferenced memory region, corrupted data will be transmitted, or an IOMMU fault will be triggered. In contrast, the for-loop that maps SKB fragments behaves exactly as expected, and that is how the driver should handle both non-paged data and paged frags. This patch corrects the DMA map/unmap sequence by fixing the array index for tx_q->tx_skbuff_dma[entry].buf when assigning the DMA buffer address. Tested and verified on DWXGMAC CORE 3.20a. • https://git.kernel.org/stable/c/f748be531d7012c456b97f66091d86b3675c5fef https://git.kernel.org/stable/c/ece593fc9c00741b682869d3f3dc584d37b7c9df https://git.kernel.org/stable/c/a3ff23f7c3f0e13f718900803e090fd3997d6bc9 https://git.kernel.org/stable/c/07c9c26e37542486e34d767505e842f48f29c3f6 https://git.kernel.org/stable/c/58d23d835eb498336716cca55b5714191a309286 https://git.kernel.org/stable/c/66600fac7a984dea4ae095411f644770b2561ede •
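The bookkeeping idea can be illustrated with a small userspace model: store the mapped address only at the slot of the last descriptor that references the buffer, so cleanup never unmaps while later descriptors are still pending. The ring structure and helper names below are assumptions made for the sketch, not the stmmac driver's actual code:

```c
#include <stdio.h>
#include <stdint.h>

#define RING_SIZE     8
#define DESCS_PER_BUF 3   /* one DMA buffer split across three descriptors */

struct tx_bookkeeping {
	uint64_t buf;   /* DMA address to unmap when this entry completes, 0 = none */
};

static struct tx_bookkeeping tx_skbuff_dma[RING_SIZE];

/* Record a mapped buffer split across descriptors [first, first + count).
 * The address is stored only at the *last* descriptor's slot, so the cleanup
 * path unmaps it only once the DMA engine has finished every descriptor that
 * still references the mapping. */
static void record_mapping(int first, int count, uint64_t dma_addr)
{
	for (int i = 0; i < count - 1; i++)
		tx_skbuff_dma[(first + i) % RING_SIZE].buf = 0;
	tx_skbuff_dma[(first + count - 1) % RING_SIZE].buf = dma_addr;
}

/* Cleanup path: unmap only when a valid address is stored at this entry. */
static void tx_clean(int entry)
{
	if (tx_skbuff_dma[entry].buf) {
		printf("unmap 0x%llx at entry %d\n",
		       (unsigned long long)tx_skbuff_dma[entry].buf, entry);
		tx_skbuff_dma[entry].buf = 0;
	} else {
		printf("entry %d: nothing to unmap yet\n", entry);
	}
}

int main(void)
{
	record_mapping(0, DESCS_PER_BUF, 0xdeadbeef000ULL);
	tx_clean(0);   /* first descriptor closed: buffer must stay mapped */
	tx_clean(1);
	tx_clean(2);   /* last descriptor closed: now it is safe to unmap */
	return 0;
}
```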
CVE-2024-53057 – net/sched: stop qdisc_tree_reduce_backlog on TC_H_ROOT
https://notcve.org/view.php?id=CVE-2024-53057
In the Linux kernel, the following vulnerability has been resolved: net/sched: stop qdisc_tree_reduce_backlog on TC_H_ROOT In qdisc_tree_reduce_backlog, qdiscs with major handle ffff: are assumed to be either root or ingress. This assumption is bogus, since it is valid to create egress qdiscs with major handle ffff:. Budimir Markovic found that for qdiscs like DRR that maintain an active class list, this causes a UAF with a dangling class pointer. In 066a3b5b2346, the concern was to avoid iterating over the ingress qdisc, since its parent is itself. The proper fix is to stop when the parent TC_H_ROOT is reached, because the only way to retrieve the ingress qdisc is when a hierarchy that does not contain a ffff: major handle calls into qdisc_lookup() with TC_H_MAJ(TC_H_ROOT). In the scenario where major ffff: is an egress qdisc at any level of the tree, the updates will also propagate up to TC_H_ROOT, at which point the iteration must stop. • https://git.kernel.org/stable/c/066a3b5b2346febf9a655b444567b7138e3bb939 https://git.kernel.org/stable/c/e7f9a6f97eb067599a74f3bcb6761976b0ed303e https://git.kernel.org/stable/c/dbe778b08b5101df9e89bc06e0a3a7ecd2f4ef20 https://git.kernel.org/stable/c/ce691c814bc7a3c30c220ffb5b7422715458fd9b https://git.kernel.org/stable/c/05df1b1dff8f197f1c275b57ccb2ca33021df552 https://git.kernel.org/stable/c/580b3189c1972aff0f993837567d36392e9d981b https://git.kernel.org/stable/c/597cf9748c3477bf61bc35f0634129f56764ad24 https://git.kernel.org/stable/c/9995909615c3431a5304c1210face5f26 •
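A simplified model of the upward walk and the TC_H_ROOT stop condition described above; the handle encoding mirrors the kernel's 16-bit major / 16-bit minor split, but the qdisc table and lookup loop here are illustrative only:

```c
#include <stdio.h>
#include <stdint.h>

#define TC_H_ROOT   0xFFFFFFFFu
#define TC_H_MAJ(h) ((h) & 0xFFFF0000u)

struct qdisc {
	uint32_t handle;   /* major:0000 */
	uint32_t parent;   /* handle of the parent class, or TC_H_ROOT */
	int backlog;
};

/* Walk from a child class toward the root, reducing backlog at each level.
 * Stopping only at TC_H_ROOT (rather than at any ffff: major) is the point:
 * an egress qdisc may legitimately use major ffff:, so a major-based test
 * would wrongly treat it as root/ingress and stop the walk too early. */
static void reduce_backlog(struct qdisc *table, int n, uint32_t parentid, int pkts)
{
	while (parentid != TC_H_ROOT) {
		struct qdisc *q = NULL;

		for (int i = 0; i < n; i++)
			if (table[i].handle == TC_H_MAJ(parentid))
				q = &table[i];
		if (!q)
			break;

		q->backlog -= pkts;
		printf("qdisc %08x backlog now %d\n", q->handle, q->backlog);
		parentid = q->parent;
	}
}

int main(void)
{
	/* ffff: used as an ordinary egress qdisc in the middle of the tree. */
	struct qdisc table[] = {
		{ 0x00010000u, TC_H_ROOT,   10 },
		{ 0xFFFF0000u, 0x00010001u, 10 },
		{ 0x00020000u, 0xFFFF0001u, 10 },
	};

	reduce_backlog(table, 3, 0x00020001u, 2);
	return 0;
}
```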
CVE-2024-53052 – io_uring/rw: fix missing NOWAIT check for O_DIRECT start write
https://notcve.org/view.php?id=CVE-2024-53052
In the Linux kernel, the following vulnerability has been resolved: io_uring/rw: fix missing NOWAIT check for O_DIRECT start write When io_uring starts a write, it'll call kiocb_start_write() to bump the super block rwsem, preventing any freezes from happening while that write is in flight. The freeze side will grab that rwsem for writing, excluding any new writers and waiting for existing writes to finish. But io_uring unconditionally uses kiocb_start_write(), which will block if someone is currently attempting to freeze the mount point. This causes a deadlock where the freeze is waiting for previous writes to complete, but the previous writes cannot complete, as the task that is supposed to complete them is blocked waiting to start a new write. This results in the following stuck trace showing that dependency, with the write blocked starting a new write:

task:fio state:D stack:0 pid:886 tgid:886 ppid:876
Call trace:
__switch_to+0x1d8/0x348
__schedule+0x8e8/0x2248
schedule+0x110/0x3f0
percpu_rwsem_wait+0x1e8/0x3f8
__percpu_down_read+0xe8/0x500
io_write+0xbb8/0xff8
io_issue_sqe+0x10c/0x1020
io_submit_sqes+0x614/0x2110
__arm64_sys_io_uring_enter+0x524/0x1038
invoke_syscall+0x74/0x268
el0_svc_common.constprop.0+0x160/0x238
do_el0_svc+0x44/0x60
el0_svc+0x44/0xb0
el0t_64_sync_handler+0x118/0x128
el0t_64_sync+0x168/0x170

INFO: task fsfreeze:7364 blocked for more than 15 seconds. Not tainted 6.12.0-rc5-00063-g76aaf945701c #7963

with the attempting freezer stuck trying to grab the rwsem:

task:fsfreeze state:D stack:0 pid:7364 tgid:7364 ppid:995
Call trace:
__switch_to+0x1d8/0x348
__schedule+0x8e8/0x2248
schedule+0x110/0x3f0
percpu_down_write+0x2b0/0x680
freeze_super+0x248/0x8a8
do_vfs_ioctl+0x149c/0x1b18
__arm64_sys_ioctl+0xd0/0x1a0
invoke_syscall+0x74/0x268
el0_svc_common.constprop.0+0x160/0x238
do_el0_svc+0x44/0x60
el0_svc+0x44/0xb0
el0t_64_sync_handler+0x118/0x128
el0t_64_sync+0x168/0x170

Fix this by having the io_uring side honor IOCB_NOWAIT, and only attempt a blocking grab of the super block rwsem if it isn't set. For normal issue, where IOCB_NOWAIT would always be set, this returns -EAGAIN, which will have the io_uring core issue a blocking attempt of the write. That will in turn also get completions run, ensuring forward progress. Since freezing requires CAP_SYS_ADMIN in the first place, this isn't something that can be triggered by a regular user. • https://git.kernel.org/stable/c/485d9232112b17f389b29497ff41b97b3189546b https://git.kernel.org/stable/c/4e24041ba86d50aaa4c792ae2c88ed01b3d96243 https://git.kernel.org/stable/c/9e8debb8e51354b201db494689198078ec2c1e75 https://git.kernel.org/stable/c/003d2996964c03dfd34860500428f4cdf1f5879e https://git.kernel.org/stable/c/26b8c48f369b7591f5679e0b90612f4862a32929 https://git.kernel.org/stable/c/1d60d74e852647255bd8e76f5a22dc42531e4389 •
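The idea behind the fix can be sketched in userspace with a try-lock on a stand-in for the freeze rwsem; the lock, the flag handling, and the function names below model the behavior and are not the kernel's actual implementation:

```c
#include <stdio.h>
#include <errno.h>
#include <pthread.h>

#define IOCB_NOWAIT 0x1

/* Stand-in for the superblock freeze protection rwsem. */
static pthread_rwlock_t sb_write_lock = PTHREAD_RWLOCK_INITIALIZER;

/* Take freeze protection before issuing a write. With IOCB_NOWAIT we only
 * *try* to take it; if a freezer already holds it for writing, return
 * -EAGAIN so the caller can retry from a context that may block, instead
 * of deadlocking against the freeze. */
static int start_write(int iocb_flags)
{
	if (iocb_flags & IOCB_NOWAIT) {
		if (pthread_rwlock_tryrdlock(&sb_write_lock) != 0)
			return -EAGAIN;
		return 0;
	}
	/* Blocking grab, only safe when the issuer is allowed to sleep. */
	return pthread_rwlock_rdlock(&sb_write_lock) ? -EINVAL : 0;
}

static void end_write(void)
{
	pthread_rwlock_unlock(&sb_write_lock);
}

int main(void)
{
	/* Simulate an in-progress freeze holding the lock exclusively. */
	pthread_rwlock_wrlock(&sb_write_lock);

	int ret = start_write(IOCB_NOWAIT);
	printf("nowait attempt during freeze -> %d (EAGAIN expected)\n", ret);

	pthread_rwlock_unlock(&sb_write_lock);   /* freeze released */

	ret = start_write(IOCB_NOWAIT);
	printf("nowait attempt after thaw    -> %d\n", ret);
	if (ret == 0)
		end_write();
	return 0;
}
```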