
CVSS: - | EPSS: 0% | CPEs: 3 | EXPL: 0

In the Linux kernel, the following vulnerability has been resolved: ASoC: SOF: Intel: hda: fix null deref on system suspend entry

When the system enters suspend with an active stream, the SOF core calls hw_params_upon_resume(). On Intel platforms where HDA DMA is used to manage the link DMA, this leads to the call chain:

    hda_dsp_set_hw_params_upon_resume()
      -> hda_dsp_dais_suspend()
        -> hda_dai_suspend()
          -> hda_ipc4_post_trigger()

A bug is hit in hda_dai_suspend() because hda_link_dma_cleanup() is run first, which clears hext_stream->link_substream, and hda_ipc4_post_trigger() is then called with a NULL snd_pcm_substream pointer.

• https://git.kernel.org/stable/c/2b009fa0823c1510700fd17a0780ddd06a460fb4
• https://git.kernel.org/stable/c/8246bbf818ed7b8d5afc92b951e6d562b45c2450
• https://git.kernel.org/stable/c/993af0f2d9f24e3c18a445ae22b34190d1fcad61
• https://git.kernel.org/stable/c/9065693dcc13f287b9e4991f43aee70cf5538fdd
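
The sequence below is a minimal userspace sketch (in C) of the ordering problem described in this entry. The struct and function names are simplified stand-ins for the SOF driver code, and the NULL guard is purely illustrative; it is not necessarily how the referenced commits resolve the issue.

    /*
     * Minimal userspace sketch of the ordering problem described above.
     * Struct and function names are simplified stand-ins for the SOF
     * driver code; the NULL guard is illustrative, not the upstream fix.
     */
    #include <stddef.h>
    #include <stdio.h>

    struct substream { int id; };

    struct hext_stream { struct substream *link_substream; };

    /* Stands in for hda_link_dma_cleanup(): clears the substream pointer. */
    static void link_dma_cleanup(struct hext_stream *hs)
    {
        hs->link_substream = NULL;
    }

    /* Stands in for hda_ipc4_post_trigger(): dereferences the substream. */
    static int post_trigger(struct substream *sub)
    {
        if (!sub)                       /* without this guard: NULL deref */
            return -1;
        printf("post trigger for stream %d\n", sub->id);
        return 0;
    }

    int main(void)
    {
        struct substream pcm = { .id = 1 };
        struct hext_stream hs = { .link_substream = &pcm };

        /* Buggy suspend order: cleanup first, then trigger with a cleared
         * pointer -- the same sequence hit in hda_dai_suspend(). */
        link_dma_cleanup(&hs);
        if (post_trigger(hs.link_substream) < 0)
            printf("skipped trigger: substream already cleared\n");
        return 0;
    }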

CVSS: - | EPSS: 0% | CPEs: 6 | EXPL: 0

In the Linux kernel, the following vulnerability has been resolved: net: ks8851: Fix deadlock with the SPI chip variant

When SMP is enabled and spinlocks are actually functional, there is a deadlock on the 'statelock' spinlock between ks8851_start_xmit_spi and ks8851_irq:

    watchdog: BUG: soft lockup - CPU#0 stuck for 27s!
    call trace:
      queued_spin_lock_slowpath+0x100/0x284
      do_raw_spin_lock+0x34/0x44
      ks8851_start_xmit_spi+0x30/0xb8
      ks8851_start_xmit+0x14/0x20
      netdev_start_xmit+0x40/0x6c
      dev_hard_start_xmit+0x6c/0xbc
      sch_direct_xmit+0xa4/0x22c
      __qdisc_run+0x138/0x3fc
      qdisc_run+0x24/0x3c
      net_tx_action+0xf8/0x130
      handle_softirqs+0x1ac/0x1f0
      __do_softirq+0x14/0x20
      ____do_softirq+0x10/0x1c
      call_on_irq_stack+0x3c/0x58
      do_softirq_own_stack+0x1c/0x28
      __irq_exit_rcu+0x54/0x9c
      irq_exit_rcu+0x10/0x1c
      el1_interrupt+0x38/0x50
      el1h_64_irq_handler+0x18/0x24
      el1h_64_irq+0x64/0x68
      __netif_schedule+0x6c/0x80
      netif_tx_wake_queue+0x38/0x48
      ks8851_irq+0xb8/0x2c8
      irq_thread_fn+0x2c/0x74
      irq_thread+0x10c/0x1b0
      kthread+0xc8/0xd8
      ret_from_fork+0x10/0x20

This issue was not identified earlier because testing was done on a device with SMP disabled, so the spinlocks were effectively NOPs. Now use spin_(un)lock_bh for TX queue related locking to avoid executing softirq work synchronously, which would lead to a deadlock.

• https://git.kernel.org/stable/c/1092525155eaad5c69ca9f3b6f3e7895a9424d66
• https://git.kernel.org/stable/c/30302b41ffdcd194bef27fb3b1a9f2ca53dedb27
• https://git.kernel.org/stable/c/3dc5d44545453de1de9c53cc529cc960a85933da
• https://git.kernel.org/stable/c/786788bb1396ed5ea27e39c4933f59f4e52004e4
• https://git.kernel.org/stable/c/7c25c5d7274631b655f0f9098a16241fcd5db57b
• https://git.kernel.org/stable/c/a0c69c492f4a8fad52f0a97565241c926160c9a4
• https://git.kernel.org/stable/c/80ece00137300d74642f2038c8fe5440deaf9f05
• https://git.kernel.org/stable/c/10fec0cd0e8f56ff06c46bb24254c7d8f
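
The fragment below is a kernel-style sketch of the locking pattern named in the fix ("use spin_(un)lock_bh for TX queue related locking"). It is not the actual ks8851 driver code: the structure and function names are illustrative, and it only builds inside a kernel tree. The point is that the _bh variant keeps softirqs disabled locally, so the TX softirq raised while 'statelock' is held cannot run synchronously on the same CPU and re-enter the xmit path.

    /*
     * Kernel-style sketch only: builds in a kernel tree, not standalone.
     * 'ks_priv' and 'ks_tx_done' are illustrative names, not the real
     * ks8851 driver symbols.
     */
    #include <linux/netdevice.h>
    #include <linux/spinlock.h>

    struct ks_priv {
        spinlock_t statelock;          /* protects TX/queue state */
        struct net_device *netdev;
    };

    static void ks_tx_done(struct ks_priv *ks)
    {
        /*
         * With plain spin_lock(), the TX softirq raised by
         * netif_wake_queue() below could run as soon as this context is
         * interrupted, spin on 'statelock' on the same CPU, and deadlock.
         * The _bh variant disables softirqs locally for the critical
         * section, so the deferred TX work only runs after the unlock.
         */
        spin_lock_bh(&ks->statelock);
        netif_wake_queue(ks->netdev);
        spin_unlock_bh(&ks->statelock);
    }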

CVSS: 5.5 | EPSS: 0% | CPEs: 15 | EXPL: 0

In the Linux kernel, the following vulnerability has been resolved: USB: core: Fix duplicate endpoint bug by clearing reserved bits in the descriptor

Syzbot has identified a bug in usbcore (see the Closes: tag below) caused by our assumption that the reserved bits in an endpoint descriptor's bEndpointAddress field will always be 0. As a result of the bug, the endpoint_is_duplicate() routine in config.c (and possibly other routines as well) may believe that two descriptors are for distinct endpoints, even though they have the same direction and endpoint number. This can lead to confusion, including the bug identified by syzbot (two descriptors with matching endpoint numbers and directions, where one was interrupt and the other was bulk).

To fix the bug, we will clear the reserved bits in bEndpointAddress when we parse the descriptor. (Note that both the USB-2.0 and USB-3.1 specs say these bits are "Reserved, reset to zero".) This requires us to make a copy of the descriptor earlier in usb_parse_endpoint() and use the copy instead of the original when checking for duplicates.

• https://git.kernel.org/stable/c/0a8fd1346254974c3a852338508e4a4cddbb35f1
• https://git.kernel.org/stable/c/c3726b442527ab31c7110d0445411f5b5343db01
• https://git.kernel.org/stable/c/15668b4354b38b41b316571deed2763d631b2977
• https://git.kernel.org/stable/c/8597a9245181656ae2ef341906e5f40af323fbca
• https://git.kernel.org/stable/c/264024a2676ba7d91fe7b1713b2c32d1b0b508cb
• https://git.kernel.org/stable/c/b0de742a1be16b76b534d088682f18cf57f012d2
• https://git.kernel.org/stable/c/7cc00abef071a8a7d0f4457b7afa2f57f683d83f
• https://git.kernel.org/stable/c/05b0f2fc3c2f9efda47439557e0d51fac
• CWE-99: Improper Control of Resource Identifiers ('Resource Injection')
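
As a rough illustration of the masking idea ("clear the reserved bits in bEndpointAddress"), here is a small standalone C sketch. The descriptor struct and duplicate check are heavily simplified compared to usb_parse_endpoint() and endpoint_is_duplicate(); only the bit layout of bEndpointAddress follows the USB spec.

    /* Userspace sketch; the real fix lives in drivers/usb/core/config.c
     * and operates on a local copy of the parsed descriptor. */
    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    #define USB_ENDPOINT_NUMBER_MASK 0x0f  /* bits 0..3: endpoint number */
    #define USB_ENDPOINT_DIR_MASK    0x80  /* bit 7: direction (IN/OUT) */

    struct ep_desc { uint8_t bEndpointAddress; };

    /* Clear the reserved bits 4..6, which the spec says must be zero. */
    static void sanitize(struct ep_desc *d)
    {
        d->bEndpointAddress &= USB_ENDPOINT_DIR_MASK | USB_ENDPOINT_NUMBER_MASK;
    }

    static bool is_duplicate(const struct ep_desc *a, const struct ep_desc *b)
    {
        return a->bEndpointAddress == b->bEndpointAddress;
    }

    int main(void)
    {
        /* Same endpoint (IN 1), but the second abuses a reserved bit. */
        struct ep_desc a = { .bEndpointAddress = 0x81 };
        struct ep_desc b = { .bEndpointAddress = 0x91 }; /* 0x81 | bit 4 */

        printf("before: duplicate=%d\n", is_duplicate(&a, &b)); /* 0: missed */
        sanitize(&a);
        sanitize(&b);
        printf("after:  duplicate=%d\n", is_duplicate(&a, &b)); /* 1: caught */
        return 0;
    }

Running the sketch prints duplicate=0 before masking and duplicate=1 after, mirroring how a set reserved bit can hide a duplicate endpoint from the comparison.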

CVSS: - | EPSS: 0% | CPEs: 8 | EXPL: 0

In the Linux kernel, the following vulnerability has been resolved: nilfs2: fix kernel bug on rename operation of broken directory

Syzbot reported that during a rename operation on a broken directory on nilfs2, __block_write_begin_int(), which is called to prepare a block write, may fail a BUG_ON check for access exceeding the folio/page size. This is because nilfs_dotdot(), which gets the parent directory reference entry ("..") of the directory to be moved or renamed, does not perform sufficient consistency checks and may return a location exceeding the folio/page size for broken directories.

Fix this issue by checking the required directory entries ("." and "..") in the first chunk of the directory in nilfs_dotdot().

• https://git.kernel.org/stable/c/2ba466d74ed74f073257f86e61519cb8f8f46184
• https://git.kernel.org/stable/c/ff9767ba2cb949701e45e6e4287f8af82986b703
• https://git.kernel.org/stable/c/24c1c8566a9b6be51f5347be2ea76e25fc82b11e
• https://git.kernel.org/stable/c/a9a466a69b85059b341239766a10efdd3ee68a4b
• https://git.kernel.org/stable/c/7000b438dda9d0f41a956fc9bffed92d2eb6be0d
• https://git.kernel.org/stable/c/1a8879c0771a68d70ee2e5e66eea34207e8c6231
• https://git.kernel.org/stable/c/60f61514374e4a0c3b65b08c6024dd7e26150bfd
• https://git.kernel.org/stable/c/298cd810d7fb687c90a14d8f9fd1b8719
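
The following standalone C sketch illustrates the kind of first-chunk consistency check described in the fix. The directory-entry layout here is a simplified stand-in, not the real nilfs2 on-disk format, and the exact checks performed by nilfs_dotdot() may differ.

    /* Simplified sketch: validate "." and ".." inside the first chunk
     * before trusting the returned offset. Not the nilfs2 on-disk format. */
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    struct dir_entry {
        uint16_t rec_len;   /* length of this record */
        uint8_t  name_len;
        char     name[14];  /* shortened for the sketch */
    };

    static struct dir_entry *find_dotdot(void *chunk, size_t chunk_size)
    {
        struct dir_entry *dot = chunk;
        struct dir_entry *dotdot;

        /* "." must be the first entry and stay inside the chunk. */
        if (dot->rec_len < sizeof(*dot) ||
            dot->rec_len + sizeof(*dotdot) > chunk_size)
            return NULL;
        if (dot->name_len != 1 || memcmp(dot->name, ".", 1) != 0)
            return NULL;

        /* ".." must immediately follow and also stay inside the chunk. */
        dotdot = (struct dir_entry *)((char *)chunk + dot->rec_len);
        if ((size_t)dot->rec_len + dotdot->rec_len > chunk_size)
            return NULL;            /* would exceed the chunk: broken dir */
        if (dotdot->name_len != 2 || memcmp(dotdot->name, "..", 2) != 0)
            return NULL;

        return dotdot;
    }

    int main(void)
    {
        unsigned char chunk[64] = {0};
        struct dir_entry *dot = (struct dir_entry *)chunk;
        struct dir_entry *dd  = (struct dir_entry *)(chunk + 20);

        dot->rec_len = 20; dot->name_len = 1; memcpy(dot->name, ".", 1);
        dd->rec_len  = 44; dd->name_len  = 2; memcpy(dd->name, "..", 2);

        printf("\"..\" found: %s\n",
               find_dotdot(chunk, sizeof(chunk)) ? "yes" : "no");
        return 0;
    }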

CVSS: 5.5 | EPSS: 0% | CPEs: 3 | EXPL: 0

In the Linux kernel, the following vulnerability has been resolved: mm: vmalloc: check if a hash-index is in cpu_possible_mask

The problem is that there are systems where cpu_possible_mask has gaps between set CPUs, for example SPARC. In this scenario the addr_to_vb_xa() hash function can return an index that, via the per_cpu() macro, accesses a CPU area that is not possible and has not been set up. This results in an oops on SPARC. The per-CPU vmap_block_queue is also used as a hash table, incorrectly assuming that cpu_possible_mask has no gaps.

Fix it by adjusting the index to the next possible CPU.

• https://git.kernel.org/stable/c/062eacf57ad91b5c272f89dc964fd6dd9715ea7d
• https://git.kernel.org/stable/c/28acd531c9a365dac01b32e6bc54aed8c1429bcb
• https://git.kernel.org/stable/c/47f9b6e49b422392fb0e348a65eb925103ba1882
• https://git.kernel.org/stable/c/a34acf30b19bc4ee3ba2f1082756ea2604c19138
• https://access.redhat.com/security/cve/CVE-2024-41032
• https://bugzilla.redhat.com/show_bug.cgi?id=2300398
• CWE-99: Improper Control of Resource Identifiers ('Resource Injection')
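
Below is a standalone C sketch of the remapping idea ("adjusting the index to the next possible CPU"). The possible-CPU set is modeled as a plain bitmask rather than the kernel's cpumask API, so the helper name and wrap-around behavior shown here are assumptions for illustration only.

    /* Userspace sketch: a raw hash index may land on a CPU id that is not
     * in the possible set (e.g. SPARC leaves gaps), so it is rounded to
     * the next possible CPU, wrapping around if needed. */
    #include <stdint.h>
    #include <stdio.h>

    #define NR_CPUS 64

    /* Return the first set bit at or after 'idx', wrapping to the lowest
     * set bit if none is found (in the spirit of cpumask_next()). */
    static unsigned int next_possible_cpu(uint64_t possible_mask,
                                          unsigned int idx)
    {
        for (unsigned int i = idx; i < NR_CPUS; i++)
            if (possible_mask & (1ULL << i))
                return i;
        for (unsigned int i = 0; i < idx; i++)
            if (possible_mask & (1ULL << i))
                return i;
        return 0;   /* the mask is assumed never to be empty */
    }

    int main(void)
    {
        /* Possible CPUs 0, 2 and 5 only: indices 1, 3 and 4 are gaps. */
        uint64_t possible = (1ULL << 0) | (1ULL << 2) | (1ULL << 5);

        for (unsigned int raw = 0; raw < 8; raw++)
            printf("raw index %u -> cpu %u\n", raw,
                   next_possible_cpu(possible, raw % NR_CPUS));
        return 0;
    }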