
CVSS: - | EPSS: 0% | CPEs: 9 | EXPL: 0

In the Linux kernel, the following vulnerability has been resolved: ASoC: ops: Fix bounds check for _sx controls For _sx controls the semantics of the max field are not the usual ones: max is the number of steps rather than the maximum value. This means that our check in snd_soc_put_volsw_sx() needs to just check against the maximum value. • https://git.kernel.org/stable/c/9e5c40b5706d8aae2cf70bd7e01f0b4575a642d0 https://git.kernel.org/stable/c/4977491e4b3aad8567f57e2a9992d251410c1db3 https://git.kernel.org/stable/c/9a12fcbf3c622f9bf6b110a873d62b0cba93972e https://git.kernel.org/stable/c/c33402b056de61104b6146dedbe138ca8d7ec62b https://git.kernel.org/stable/c/038f8b7caa74d29e020949a43ca368c93f6b29b9 https://git.kernel.org/stable/c/e8e07c5e25a29e2a6f119fd947f55d7a55eb8a13 https://git.kernel.org/stable/c/4f1e50d6a9cf9c1b8c859d449b5031cacfa8404e https://git.kernel.org/stable/c/ef6cd9eeb38062a145802b7b56be7ae10 •
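
The sketch below is a hypothetical userspace illustration of the step-count semantics the changelog describes, not the kernel patch itself; the struct and function names (sx_mixer_ctl, sx_value_in_range) are invented. It simply shows a bounds check where max counts steps, so the user value is compared against max directly.

/*
 * Hypothetical sketch (not the upstream patch): validating a user-supplied
 * value for a step-indexed ("_sx") control, where "max" is the number of
 * steps rather than the largest raw register value.
 */
#include <stdbool.h>

struct sx_mixer_ctl {
	int min;	/* raw register value corresponding to step 0 */
	int max;	/* number of steps, NOT the largest raw value */
};

/* The value from user space is a step index, so check it against the
 * step count itself. */
static bool sx_value_in_range(const struct sx_mixer_ctl *mc, long val)
{
	if (val < 0)
		return false;
	return val <= mc->max;
}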

CVSS: - | EPSS: 0% | CPEs: 3 | EXPL: 0

In the Linux kernel, the following vulnerability has been resolved: riscv: Sync efi page table's kernel mappings before switching The EFI page table is initially created as a copy of the kernel page table. With VMAP_STACK enabled, kernel stacks are allocated in the vmalloc area: if the stack is allocated in a new PGD (one that was not present at the moment of the efi page table creation or not synced in a previous vmalloc fault), the kernel will take a trap when switching to the efi page table when the vmalloc kernel stack is accessed, resulting in a kernel panic. Fix that by updating the efi kernel mappings before switching to the efi page table. • https://git.kernel.org/stable/c/b91540d52a08b65eb6a2b09132e1bd54fa82754c https://git.kernel.org/stable/c/fa7a7d185ef380546b4b1fed6f84f31dbae8cec7 https://git.kernel.org/stable/c/96f479383d92944406d4b3f2bc03c2f640def9f1 https://git.kernel.org/stable/c/3f105a742725a1b78766a55169f1d827732e62b8 •
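
As a rough illustration of the kind of sync described above (everything here is invented for the sketch: the names sync_kernel_mappings, KERNEL_PGD_FIRST and PTRS_PER_PGD are assumptions, not the riscv implementation), a page table cloned from the kernel's can have the kernel half of its top-level table re-copied before it is switched to, so that mappings created later, such as VMAP_STACK stacks in vmalloc space, are visible:

/*
 * Illustrative sketch only: refresh the kernel half of a cloned
 * top-level page table so recent vmalloc PGD entries are present.
 */
#include <stdint.h>
#include <string.h>

#define PTRS_PER_PGD		512	/* assumed top-level entry count */
#define KERNEL_PGD_FIRST	256	/* assumed start of the kernel half */

typedef uint64_t pgd_t;

static void sync_kernel_mappings(pgd_t *dst_pgd, const pgd_t *src_pgd)
{
	/* Copy the kernel half of the top-level table wholesale. */
	memcpy(dst_pgd + KERNEL_PGD_FIRST,
	       src_pgd + KERNEL_PGD_FIRST,
	       (PTRS_PER_PGD - KERNEL_PGD_FIRST) * sizeof(pgd_t));
}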

CVSS: - | EPSS: 0% | CPEs: 3 | EXPL: 0

In the Linux kernel, the following vulnerability has been resolved: nvme: fix SRCU protection of nvme_ns_head list Walking the nvme_ns_head siblings list is protected by the head's srcu in nvme_ns_head_submit_bio() but not nvme_mpath_revalidate_paths(). Removing namespaces from the list also fails to synchronize the srcu. Concurrent scan work can therefore cause use-after-frees. Hold the head's srcu lock in nvme_mpath_revalidate_paths() and synchronize with the srcu, not the global RCU, in nvme_ns_remove(). Observed the following panic when making NVMe/RDMA connections with native multipath on the Rocky Linux 8.6 kernel (it seems the upstream kernel has the same race condition). Disassembly shows the faulting instruction is cmp 0x50(%rdx),%rcx; computing capacity != get_capacity(ns->disk). Address 0x50 is dereferenced because ns->disk is NULL. The NULL disk appears to be the result of concurrent scan work freeing the namespace (note the log line in the middle of the panic).
[37314.206036] BUG: unable to handle kernel NULL pointer dereference at 0000000000000050
[37314.206036] nvme0n3: detected capacity change from 0 to 11811160064
[37314.299753] PGD 0 P4D 0
[37314.299756] Oops: 0000 [#1] SMP PTI
[37314.299759] CPU: 29 PID: 322046 Comm: kworker/u98:3 Kdump: loaded Tainted: G W X --------- - - 4.18.0-372.32.1.el8test86.x86_64 #1
[37314.299762] Hardware name: Dell Inc. PowerEdge R720/0JP31P, BIOS 2.7.0 05/23/2018
[37314.299763] Workqueue: nvme-wq nvme_scan_work [nvme_core]
[37314.299783] RIP: 0010:nvme_mpath_revalidate_paths+0x26/0xb0 [nvme_core]
[37314.299790] Code: 1f 44 00 00 66 66 66 66 90 55 53 48 8b 5f 50 48 8b 83 c8 c9 00 00 48 8b 13 48 8b 48 50 48 39 d3 74 20 48 8d 42 d0 48 8b 50 20 <48> 3b 4a 50 74 05 f0 80 60 70 ef 48 8b 50 30 48 8d 42 d0 48 39 d3
[37315.058803] RSP: 0018:ffffabe28f913d10 EFLAGS: 00010202
[37315.121316] RAX: ffff927a077da800 RBX: ffff92991dd70000 RCX: 0000000001600000
[37315.206704] RDX: 0000000000000000 RSI: 0000000000000000 RDI: ffff92991b719800
[37315.292106] RBP: ffff929a6b70c000 R08: 000000010234cd4a R09: c0000000ffff7fff
[37315.377501] R10: 0000000000000001 R11: ffffabe28f913a30 R12: 0000000000000000
[37315.462889] R13: ffff92992716600c R14: ffff929964e6e030 R15: ffff92991dd70000
[37315.548286] FS: 0000000000000000(0000) GS:ffff92b87fb80000(0000) knlGS:0000000000000000
[37315.645111] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[37315.713871] CR2: 0000000000000050 CR3: 0000002208810006 CR4: 00000000000606e0
[37315.799267] Call Trace:
[37315.828515] nvme_update_ns_info+0x1ac/0x250 [nvme_core]
[37315.892075] nvme_validate_or_alloc_ns+0x2ff/0xa00 [nvme_core]
[37315.961871] ? __blk_mq_free_request+0x6b/0x90
[37316.015021] nvme_scan_work+0x151/0x240 [nvme_core]
[37316.073371] process_one_work+0x1a7/0x360
[37316.121318] ? create_worker+0x1a0/0x1a0
[37316.168227] worker_thread+0x30/0x390
[37316.212024] ?
• https://git.kernel.org/stable/c/e7d65803e2bb5bc739548b67a5fc72c626cf7e3b https://git.kernel.org/stable/c/787d81d4eb150e443e5d1276c6e8f03cfecc2302 https://git.kernel.org/stable/c/5b566d09ab1b975566a53f9c5466ee260d087582 https://git.kernel.org/stable/c/899d2a05dc14733cfba6224083c6b0dd5a738590 •
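
Below is a schematic sketch of the SRCU reader/updater pairing the fix describes, using invented demo_head/demo_ns structures rather than the real nvme types: readers walk the list inside srcu_read_lock()/srcu_read_unlock() on the head's srcu_struct, and the removal path waits with synchronize_srcu() on that same srcu_struct (not synchronize_rcu()) before freeing.

#include <linux/srcu.h>
#include <linux/rculist.h>
#include <linux/slab.h>

struct demo_head {
	struct srcu_struct srcu;
	struct list_head list;		/* RCU-managed list of demo_ns */
};

struct demo_ns {
	struct list_head siblings;
};

/* Reader: walk the siblings list under the head's SRCU. */
static int demo_count_paths(struct demo_head *head)
{
	struct demo_ns *ns;
	int idx, n = 0;

	idx = srcu_read_lock(&head->srcu);
	list_for_each_entry_rcu(ns, &head->list, siblings,
				srcu_read_lock_held(&head->srcu))
		n++;	/* ns cannot be freed while the SRCU is held */
	srcu_read_unlock(&head->srcu, idx);
	return n;
}

/* Updater: unlink, wait for SRCU readers, then free. The caller is
 * assumed to serialize updaters (e.g. with a mutex). */
static void demo_remove(struct demo_head *head, struct demo_ns *ns)
{
	list_del_rcu(&ns->siblings);
	synchronize_srcu(&head->srcu);	/* not synchronize_rcu() */
	kfree(ns);
}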

CVSS: - | EPSS: 0% | CPEs: 8 | EXPL: 0

In the Linux kernel, the following vulnerability has been resolved: iommu/vt-d: Fix PCI device refcount leak in dmar_dev_scope_init() for_each_pci_dev() is implemented by pci_get_device(). The comment on pci_get_device() says that it will increase the reference count for the returned pci_dev and also decrease the reference count for the input pci_dev @from if it is not NULL. If we break out of the for_each_pci_dev() loop while pdev is not NULL, we need to call pci_dev_put() to decrease the reference count. Add the missing pci_dev_put() on the error path to avoid a reference count leak. • https://git.kernel.org/stable/c/2e45528930388658603ea24d49cf52867b928d3e https://git.kernel.org/stable/c/d47bc9d7bcdbb9adc9703513d964b514fee5b0bf https://git.kernel.org/stable/c/71c4a621985fc051ab86d3a86c749069a993fcb2 https://git.kernel.org/stable/c/876d7bfb89273997056220029ff12b1c2cc4691d https://git.kernel.org/stable/c/cbdd83bd2fd67142b03ce9dbdd1eab322ff7321f https://git.kernel.org/stable/c/a5c65cd56aed027f8a97fda8b691caaeb66d115e https://git.kernel.org/stable/c/bdb613ef179ad4bb9d56a2533e9b30e434f1dfb7 https://git.kernel.org/stable/c/2a8f7b90681472948de172dbbf5a54cd3 •
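
A small sketch of the reference-counting rule described above; demo_handle() and demo_scan() are invented placeholders, not the dmar_dev_scope_init() code. The point is simply that leaving for_each_pci_dev() early keeps a reference on the current pdev, which must be dropped with pci_dev_put().

#include <linux/pci.h>

/* Invented stand-in for per-device work that can fail. */
static int demo_handle(struct pci_dev *pdev)
{
	return pdev ? 0 : -ENODEV;
}

static int demo_scan(void)
{
	struct pci_dev *pdev = NULL;
	int ret;

	/*
	 * for_each_pci_dev() takes a reference on each pdev it hands out
	 * and drops the reference on the previous one at each iteration.
	 */
	for_each_pci_dev(pdev) {
		ret = demo_handle(pdev);
		if (ret) {
			/* Breaking out early leaves pdev's refcount
			 * elevated; drop it before returning. */
			pci_dev_put(pdev);
			return ret;
		}
	}
	return 0;	/* loop ran to completion; nothing left to put */
}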

CVSS: - | EPSS: 0% | CPEs: 3 | EXPL: 0

In the Linux kernel, the following vulnerability has been resolved: riscv: fix race when vmap stack overflow Currently, when detecting a vmap stack overflow, riscv first switches to the so-called shadow stack, then uses this shadow stack to call get_overflow_stack() to get the overflow stack. However, there is a race here if two or more harts use the same shadow stack at the same time. To solve this race, we introduce the spin_shadow_stack atomic variable, which is atomically swapped between its own address and 0: when the variable is set, the shadow stack is being used; when the variable is cleared, the shadow stack isn't being used. [Palmer: Add AQ to the swap, and also some comments.] • https://git.kernel.org/stable/c/31da94c25aea835ceac00575a9fd206c5a833fed https://git.kernel.org/stable/c/ac00301adb19df54f2eae1efc4bad7447c0156ce https://git.kernel.org/stable/c/879fabc5a95401d9bce357e4b1d24ae4a360a81f https://git.kernel.org/stable/c/7e1864332fbc1b993659eab7974da9fe8bf8c128 •
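
The following is a userspace C11 sketch of the claim-by-atomic-swap idea the changelog describes (the real fix uses an AMO swap with acquire semantics on a kernel variable; the names shadow_stack_owner, claim_shadow_stack() and release_shadow_stack() are invented here): a hart spins until its atomic exchange observes 0, meaning the shadow stack was free and is now claimed by this hart.

#include <stdatomic.h>
#include <stdint.h>

/* 0 = shadow stack free, nonzero = in use by some hart. */
static atomic_uintptr_t shadow_stack_owner;

/* Spin until our swap returns 0, i.e. we were the one to claim it. */
static void claim_shadow_stack(uintptr_t marker)
{
	while (atomic_exchange_explicit(&shadow_stack_owner, marker,
					memory_order_acquire) != 0)
		;	/* another hart owns the shadow stack; keep spinning */
}

/* Called once the overflow handler has moved off the shadow stack. */
static void release_shadow_stack(void)
{
	atomic_store_explicit(&shadow_stack_owner, 0, memory_order_release);
}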