
CVSS: - | EPSS: 0% | CPEs: 2 | EXPL: 0

In the Linux kernel, the following vulnerability has been resolved: ASoC: Intel: Boards: Fix NULL pointer deref in BYT/CHT boards harder

Since commit 13f58267cda3 ("ASoC: soc.h: don't create dummy Component via COMP_DUMMY()"), dummy codecs declared like this:

    SND_SOC_DAILINK_DEF(dummy, DAILINK_COMP_ARRAY(COMP_DUMMY()));

expand to:

    static struct snd_soc_dai_link_component dummy[] = { };

This means that dummy is a zero-sized array, and thus dais[i].codecs should not be dereferenced at all: the "dummy" variable has an address but no size, so it points to the address of the next variable stored in the data section, and even dereferencing dais[0] is already an out-of-bounds array reference.

This also means that the if (dais[i].codecs->name) check added in commit 7d99a70b6595 ("ASoC: Intel: Boards: Fix NULL pointer deref in BYT/CHT boards") relies on the part of the next variable that the name member maps to just happening to be NULL. Apparently it usually is, except when it isn't, and then it results in crashes like this one:

[ 28.795659] BUG: unable to handle page fault for address: 0000000000030011
...
[ 28.795780] Call Trace:
[ 28.795787] <TASK>
...
[ 28.795862] ? strcmp+0x18/0x40
[ 28.795872] 0xffffffffc150c605
[ 28.795887] platform_probe+0x40/0xa0
...
[ 28.795979] ? __pfx_init_module+0x10/0x10 [snd_soc_sst_bytcr_wm5102]

Really fix things this time around by checking dais.num_codecs != 0.

• https://git.kernel.org/stable/c/7d99a70b65951108d82e1618c67abe69c3ed7720 https://git.kernel.org/stable/c/85cda5b040bda9c577b34eb72d5b2e5b7e31985c https://git.kernel.org/stable/c/0cc65482f5b03ac2b1c240bc34665e43ea2d71bb
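The failure mode is easier to see in a standalone sketch. The userspace C program below is illustrative only, not the driver code: struct component, struct dai_link, and the values are simplified stand-ins for the real snd_soc structures. It shows how element 0 of a zero-sized array overlaps whatever object the linker placed next, and how gating the dereference on an element count (the num_codecs idea) avoids touching the empty array entirely.

    /* Minimal userspace sketch, not kernel code: an empty array has an
     * address but no storage, so element [0] overlaps whatever the linker
     * placed next. Checking an element count first avoids reading it. */
    #include <stdio.h>

    struct component {
        const char *name;
    };

    /* Rough analogue of COMP_DUMMY() now expanding to an empty array.
     * The empty initializer is a GNU C extension, as in the kernel. */
    static struct component dummy[] = { };
    static struct component real[]  = { { .name = "wm5102" } };

    struct dai_link {
        struct component *codecs;
        unsigned int num_codecs;
    };

    int main(void)
    {
        struct dai_link dais[] = {
            { .codecs = dummy, .num_codecs = 0 },   /* dummy codec slot */
            { .codecs = real,  .num_codecs = 1 },
        };

        for (unsigned int i = 0; i < sizeof(dais) / sizeof(dais[0]); i++) {
            /* Unsafe: "if (dais[i].codecs->name)" reads out of bounds when
             * num_codecs is 0. Safe: test the count before dereferencing. */
            if (dais[i].num_codecs != 0 && dais[i].codecs->name)
                printf("link %u: codec %s\n", i, dais[i].codecs->name);
            else
                printf("link %u: dummy codec, skipped\n", i);
        }
        return 0;
    }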

CVSS: - | EPSS: 0% | CPEs: 2 | EXPL: 0

In the Linux kernel, the following vulnerability has been resolved: riscv: misaligned: Restrict user access to kernel memory

raw_copy_{to,from}_user() do not call access_ok(), so this code allowed userspace to access any virtual memory address.

• https://git.kernel.org/stable/c/7c83232161f609bbc452a1255f823f41afc411dd https://git.kernel.org/stable/c/a3b6ff6c896aee5ef9b581e40d0045ff04fcbc8c https://git.kernel.org/stable/c/b686ecdeacf6658e1348c1a32a08e2e72f7c0f00
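The general pattern behind the fix is that a raw copy primitive does no validation of its own, so the caller has to range-check the destination before using it. Below is a small userspace analogy with made-up helper names (it is not the riscv misaligned-access fixup code): an unchecked memcpy-based helper plays the role of raw_copy_to_user(), and an explicit bounds check plays the role of access_ok().

    /* Userspace analogy, not the riscv code: a raw copy helper that trusts
     * its arguments must be preceded by an explicit range check, the way
     * raw_copy_{to,from}_user() must be preceded by access_ok(). */
    #include <errno.h>
    #include <stdio.h>
    #include <string.h>

    #define WINDOW_SIZE 64
    static char window[WINDOW_SIZE];    /* stands in for the permitted user range */

    /* "Raw" copy: no validation at all, like raw_copy_to_user(). */
    static void raw_copy_to_window(size_t off, const void *src, size_t len)
    {
        memcpy(window + off, src, len);
    }

    /* Checked wrapper: the access_ok() analogue rejects anything that
     * would reach outside the permitted range before the raw copy runs. */
    static int copy_to_window(size_t off, const void *src, size_t len)
    {
        if (off > WINDOW_SIZE || len > WINDOW_SIZE - off)
            return -EFAULT;
        raw_copy_to_window(off, src, len);
        return 0;
    }

    int main(void)
    {
        if (copy_to_window(0, "hello", 6) == 0)
            puts("in-range copy accepted");
        if (copy_to_window(WINDOW_SIZE - 2, "too long", 9) == -EFAULT)
            puts("out-of-range copy rejected");
        return 0;
    }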

CVSS: - | EPSS: 0% | CPEs: 6 | EXPL: 0

In the Linux kernel, the following vulnerability has been resolved: can: mcp251x: fix deadlock if an interrupt occurs during mcp251x_open

The mcp251x_hw_wake() function is called with the mcp_lock mutex held and disables the interrupt handler so that no interrupts can be processed while waking the device. If an interrupt has already occurred, then waiting for the interrupt handler to complete will deadlock, because the handler will be trying to acquire the same mutex.

    CPU0                               CPU1
    ----                               ----
    mcp251x_open()
     mutex_lock(&priv->mcp_lock)
     request_threaded_irq()
                                       <interrupt>
                                       mcp251x_can_ist()
                                        mutex_lock(&priv->mcp_lock)
     mcp251x_hw_wake()
      disable_irq() <-- deadlock

Use disable_irq_nosync() instead, because the interrupt handler does everything while holding the mutex, so it doesn't matter if it's still running.

• https://git.kernel.org/stable/c/8ce8c0abcba314e1fe954a1840f6568bf5aef2ef https://git.kernel.org/stable/c/3a49b6b1caf5cefc05264d29079d52c99cb188e0 https://git.kernel.org/stable/c/513c8fc189b52f7922e36bdca58997482b198f0e https://git.kernel.org/stable/c/f7ab9e14b23a3eac6714bdc4dba244d8aa1ef646 https://git.kernel.org/stable/c/8fecde9c3f9a4b97b68bb97c9f47e5b662586ba7 https://git.kernel.org/stable/c/e554113a1cd2a9cfc6c7af7bdea2141c5757e188 https://git.kernel.org/stable/c/7dd9c26bd6cf679bcfdef01a8659791aa6487a29
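The same trap can be reproduced in userspace with threads. In the hedged analogy below (made-up names, pthreads standing in for the threaded IRQ handler; it is not the driver code), the main thread holds a mutex and wants to quiesce a worker whose entire body runs under that same mutex. Blocking until the worker finishes, the disable_irq() analogue, would deadlock; merely requesting the stop without waiting, the disable_irq_nosync() analogue, is safe because the worker can never be mid-critical-section without the lock.

    /* build: cc -pthread stop_demo.c */
    #include <pthread.h>
    #include <stdatomic.h>
    #include <stdbool.h>
    #include <stdio.h>
    #include <unistd.h>

    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
    static atomic_bool stop_requested = false;

    /* Worker: everything it does happens with the mutex held, like the
     * interrupt thread taking priv->mcp_lock. */
    static void *worker(void *arg)
    {
        (void)arg;
        while (!atomic_load(&stop_requested)) {
            pthread_mutex_lock(&lock);
            /* ... handle one "interrupt" under the lock ... */
            pthread_mutex_unlock(&lock);
            usleep(1000);
        }
        return NULL;
    }

    int main(void)
    {
        pthread_t t;

        pthread_create(&t, NULL, worker, NULL);
        usleep(10000);                  /* let the worker start running */

        pthread_mutex_lock(&lock);      /* like open() holding the device mutex */

        /*
         * Deadlock variant (disable_irq() analogue) would be:
         *     pthread_join(t, NULL);   -- wait for the worker here, while it
         *                                 is blocked on the lock we hold.
         * Safe variant (disable_irq_nosync() analogue): just request the stop
         * and carry on without waiting.
         */
        atomic_store(&stop_requested, true);

        pthread_mutex_unlock(&lock);
        pthread_join(t, NULL);          /* joining is safe once the lock is released */
        puts("worker stopped without deadlock");
        return 0;
    }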

CVSS: - | EPSS: 0% | CPEs: 3 | EXPL: 0

In the Linux kernel, the following vulnerability has been resolved: tracing/osnoise: Use a cpumask to know what threads are kthreads

The start_kthread() and stop_thread() code was not always called with the interface_lock held. This means that the kthread variable could be unexpectedly changed, causing kthread_stop() to be called on it when it should not have been, leading to:

    while true; do rtla timerlat top -u -q & PID=$!; sleep 5; kill -INT $PID; sleep 0.001; kill -TERM $PID; wait $PID; done

causing the following OOPS:

Oops: general protection fault, probably for non-canonical address 0xdffffc0000000002: 0000 [#1] PREEMPT SMP KASAN PTI
KASAN: null-ptr-deref in range [0x0000000000000010-0x0000000000000017]
CPU: 5 UID: 0 PID: 885 Comm: timerlatu/5 Not tainted 6.11.0-rc4-test-00002-gbc754cc76d1b-dirty #125 a533010b71dab205ad2f507188ce8c82203b0254
Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.3-debian-1.16.3-2 04/01/2014
RIP: 0010:hrtimer_active+0x58/0x300
Code: 48 c1 ee 03 41 54 48 01 d1 48 01 d6 55 53 48 83 ec 20 80 39 00 0f 85 30 02 00 00 49 8b 6f 30 4c 8d 75 10 4c 89 f0 48 c1 e8 03 <0f> b6 3c 10 4c 89 f0 83 e0 07 83 c0 03 40 38 f8 7c 09 40 84 ff 0f
RSP: 0018:ffff88811d97f940 EFLAGS: 00010202
RAX: 0000000000000002 RBX: ffff88823c6b5b28 RCX: ffffed10478d6b6b
RDX: dffffc0000000000 RSI: ffffed10478d6b6c RDI: ffff88823c6b5b28
RBP: 0000000000000000 R08: ffff88823c6b5b58 R09: ffff88823c6b5b60
R10: ffff88811d97f957 R11: 0000000000000010 R12: 00000000000a801d
R13: ffff88810d8b35d8 R14: 0000000000000010 R15: ffff88823c6b5b28
FS:  0000000000000000(0000) GS:ffff88823c680000(0000) knlGS:0000000000000000
CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 0000561858ad7258 CR3: 000000007729e001 CR4: 0000000000170ef0
Call Trace:
 <TASK>
 ? die_addr+0x40/0xa0
 ? exc_general_protection+0x154/0x230
 ?

• https://git.kernel.org/stable/c/e88ed227f639ebcb31ed4e5b88756b47d904584b https://git.kernel.org/stable/c/7a5f01828edf152c144d27cf63de446fdf2dc222 https://git.kernel.org/stable/c/27282d2505b402f39371fd60d19d95c01a4b6776 https://git.kernel.org/stable/c/177e1cc2f41235c145041eed03ef5bab18f32328
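The fix boils down to tracking ownership explicitly: record which per-CPU kthreads this interface actually started and consult that record only under the lock before stopping anything. The userspace sketch below (hypothetical names, pthreads standing in for kthreads; it is not the osnoise code) uses a bitmask guarded by a mutex the way the patch uses a cpumask guarded by interface_lock, so the stop path never operates on a thread handle it does not own.

    /* build: cc -pthread mask_demo.c */
    #include <pthread.h>
    #include <stdio.h>

    #define NR_CPUS 4

    static pthread_mutex_t interface_lock = PTHREAD_MUTEX_INITIALIZER;
    static pthread_t workers[NR_CPUS];
    static unsigned long started_mask;      /* the cpumask analogue */

    static void *worker_fn(void *arg)
    {
        (void)arg;
        return NULL;                        /* trivial worker, enough for the demo */
    }

    static int start_worker(int cpu)
    {
        int ret = 0;

        pthread_mutex_lock(&interface_lock);
        if (!(started_mask & (1UL << cpu))) {
            ret = pthread_create(&workers[cpu], NULL, worker_fn, NULL);
            if (ret == 0)
                started_mask |= 1UL << cpu; /* record ownership under the lock */
        }
        pthread_mutex_unlock(&interface_lock);
        return ret;
    }

    static void stop_worker(int cpu)
    {
        pthread_mutex_lock(&interface_lock);
        if (started_mask & (1UL << cpu)) {  /* only stop what we started */
            pthread_t t = workers[cpu];

            started_mask &= ~(1UL << cpu);
            pthread_mutex_unlock(&interface_lock);
            pthread_join(t, NULL);          /* never joins a stale or foreign handle */
            return;
        }
        pthread_mutex_unlock(&interface_lock);  /* not ours or already stopped: no-op */
    }

    int main(void)
    {
        for (int cpu = 0; cpu < NR_CPUS; cpu++)
            start_worker(cpu);
        for (int cpu = 0; cpu < NR_CPUS; cpu++)
            stop_worker(cpu);
        stop_worker(1);                     /* a second stop is now a harmless no-op */
        puts("all workers stopped exactly once");
        return 0;
    }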

CVSS: - | EPSS: 0% | CPEs: 3 | EXPL: 0

In the Linux kernel, the following vulnerability has been resolved: userfaultfd: fix checks for huge PMDs

Patch series "userfaultfd: fix races around pmd_trans_huge() check", v2.

The pmd_trans_huge() code in mfill_atomic() is wrong in three different ways depending on kernel version:

1. The pmd_trans_huge() check is racy and can lead to a BUG_ON() (if you hit the right two race windows) - I've tested this in a kernel build with some extra mdelay() calls. See the commit message for a description of the race scenario. On older kernels (before 6.5), I think the same bug can even theoretically lead to accessing transhuge page contents as a page table if you hit the right 5 narrow race windows (I haven't tested this case).

2. As pointed out by Qi Zheng, pmd_trans_huge() is not sufficient for detecting PMDs that don't point to page tables. On older kernels (before 6.5), you'd just have to win a single fairly wide race to hit this. I've tested this on 6.1 stable by racing migration (with a mdelay() patched into try_to_migrate()) against UFFDIO_ZEROPAGE - on my x86 VM, that causes a kernel oops in ptlock_ptr().

3. On newer kernels (>=6.5), for shmem mappings, khugepaged is allowed to yank page tables out from under us (though I haven't tested that), so I think the BUG_ON() checks in mfill_atomic() are just wrong.

I decided to write two separate fixes for these (one fix for bugs 1+2, one fix for bug 3), so that the first fix can be backported to kernels affected by bugs 1+2.

This patch (of 2): This fixes two issues.

I discovered that the following race can occur:

    mfill_atomic                          other thread
    ============                          ============
                                          <zap PMD>
    pmdp_get_lockless() [reads none pmd]
    <bail if trans_huge>
    <if none:>
                                          <pagefault creates transhuge zeropage>
    __pte_alloc [no-op]
                                          <zap PMD>
    <bail if pmd_trans_huge(*dst_pmd)>
    BUG_ON(pmd_none(*dst_pmd))

I have experimentally verified this in a kernel with extra mdelay() calls; the BUG_ON(pmd_none(*dst_pmd)) triggers. On kernels newer than commit 0d940a9b270b ("mm/pgtable: allow pte_offset_map[_lock]() to fail"), this can't lead to anything worse than a BUG_ON(), since the page table access helpers are actually designed to deal with page tables concurrently disappearing; but on older kernels (<=6.4), I think we could probably theoretically race past the two BUG_ON() checks and end up treating a hugepage as a page table.

The second issue is that, as Qi Zheng pointed out, there are other types of huge PMDs that pmd_trans_huge() can't catch: devmap PMDs and swap PMDs (in particular, migration PMDs). On <=6.4, this is worse than the first issue: if mfill_atomic() runs on a PMD that contains a migration entry (which just requires winning a single, fairly wide race), it will pass the PMD to pte_offset_map_lock(), which assumes that the PMD points to a page table. Breakage follows: first, the kernel tries to take the PTE lock (which will crash or maybe worse if there is no "struct page" for the address bits in the migration entry PMD - I think at least on X86 there usually is no corresponding "struct page" thanks to the PTE inversion mitigation; amd64 looks different). If that didn't crash, the kernel would next try to write a PTE into what it wrongly thinks is a page table.

As part of fixing these issues, get rid of the check for pmd_trans_huge() before __pte_alloc() - that's redundant, we're going to have to check for that after the __pte_alloc() anyway.

Backport note: pmdp_get_lockless() is pmd_read_atomic() in older kernels.
• https://git.kernel.org/stable/c/c1a4de99fada21e2e9251e52cbb51eff5aadc757 https://git.kernel.org/stable/c/3c6b4bcf37845c9359aed926324bed66bdd2448d https://git.kernel.org/stable/c/98cc18b1b71e23fe81a5194ed432b20c2d81a01a https://git.kernel.org/stable/c/71c186efc1b2cf1aeabfeff3b9bd5ac4c5ac14d8
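Two lessons from this fix generalize beyond the mm code and can be modelled in a short sketch: reject every "not a page table" state rather than testing for just one of them, and re-check shared state after any step that can block, because an earlier check proves nothing afterwards. The userspace analogy below uses made-up names (fill_slot, slot_usable) and a mutex where the kernel works locklessly; it is not the mfill_atomic() code, only the check-ordering pattern.

    /* build: cc -pthread slot_demo.c */
    #include <pthread.h>
    #include <stdio.h>
    #include <stdlib.h>

    enum slot_state { SLOT_NONE, SLOT_TABLE, SLOT_HUGE, SLOT_MIGRATION, SLOT_DEVMAP };

    struct slot {
        pthread_mutex_t lock;
        enum slot_state state;
        int *table;                     /* valid only when state == SLOT_TABLE */
    };

    /* The pmd_trans_huge()-style mistake would be "state == SLOT_HUGE";
     * the robust check is "anything that is not a table". */
    static int slot_usable(const struct slot *s)
    {
        return s->state == SLOT_TABLE;
    }

    static int fill_slot(struct slot *s)
    {
        pthread_mutex_lock(&s->lock);
        if (s->state == SLOT_NONE) {
            /* Allocating may block; in the kernel the PMD can change while
             * we sleep, so nothing checked before this point still holds. */
            pthread_mutex_unlock(&s->lock);
            int *table = calloc(16, sizeof(*table));
            if (!table)
                return -1;
            pthread_mutex_lock(&s->lock);
            if (s->state == SLOT_NONE) {    /* install only if still empty */
                s->table = table;
                s->state = SLOT_TABLE;
            } else {
                free(table);                /* somebody beat us to it */
            }
        }

        /* Re-check *after* the allocation, and bail on all non-table
         * states instead of asserting the slot "must" look a certain way. */
        int ret = slot_usable(s) ? 0 : -1;
        pthread_mutex_unlock(&s->lock);
        return ret;
    }

    int main(void)
    {
        struct slot s = { .lock = PTHREAD_MUTEX_INITIALIZER,
                          .state = SLOT_NONE, .table = NULL };

        printf("fill empty slot: %s\n", fill_slot(&s) == 0 ? "ok" : "bailed");
        s.state = SLOT_MIGRATION;           /* simulate a racing state change */
        printf("fill migrating slot: %s\n", fill_slot(&s) == 0 ? "ok" : "bailed");
        free(s.table);
        return 0;
    }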