
CVSS: - | EPSS: 0% | CPEs: 2 | EXPL: 0

In the Linux kernel, the following vulnerability has been resolved: libfs: fix get_stashed_dentry(). get_stashed_dentry() tries to optimistically retrieve a stashed dentry from a provided location. It needs to hold the RCU lock before it dereferences the stashed location to prevent UAF issues. Use rcu_dereference() instead of READ_ONCE(); it is effectively equivalent, with some lockdep bells and whistles, and it communicates clearly that this expects RCU protection.

• https://git.kernel.org/stable/c/07fd7c329839cf0b8c7766883d830a1a0d12d1dd https://git.kernel.org/stable/c/03e2a1209a83a380df34a72f7d6d1bc6c74132c7 https://git.kernel.org/stable/c/4e32c25b58b945f976435bbe51f39b32d714052e
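For illustration, a minimal sketch of the lookup pattern the commit message describes, assuming a simplified stashed-dentry helper (the function name and the lockref step here are illustrative, not the upstream diff): the read of the stashed pointer happens under rcu_read_lock() and goes through rcu_dereference(), which performs the same load as READ_ONCE() but adds lockdep checking and documents that the location is RCU-protected.

/* Hedged sketch, not the upstream patch: retrieve a stashed dentry under RCU. */
static struct dentry *get_stashed_dentry_sketch(struct dentry **stashed)
{
	struct dentry *dentry;

	rcu_read_lock();
	/* rcu_dereference() instead of READ_ONCE(): same load, plus lockdep
	 * checks and a clear statement that *stashed expects RCU protection.
	 */
	dentry = rcu_dereference(*stashed);
	/* try to pin the dentry; it may already be on its way out */
	if (dentry && !lockref_get_not_dead(&dentry->d_lockref))
		dentry = NULL;
	rcu_read_unlock();

	return dentry;
}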

CVSS: - | EPSS: 0% | CPEs: 8 | EXPL: 0

In the Linux kernel, the following vulnerability has been resolved: sch/netem: fix use after free in netem_dequeue. If netem_dequeue() enqueues a packet to the inner qdisc and that qdisc returns __NET_XMIT_STOLEN, the packet is dropped, but qdisc_tree_reduce_backlog() is not called to update the parent's q.qlen, leading to a use-after-free similar to commit e04991a48dbaf382 ("netem: fix return value if duplicate enqueue fails"). Commands to trigger the KASAN UaF:

ip link add type dummy
ip link set lo up
ip link set dummy0 up
tc qdisc add dev lo parent root handle 1: drr
tc filter add dev lo parent 1: basic classid 1:1
tc class add dev lo classid 1:1 drr
tc qdisc add dev lo parent 1:1 handle 2: netem
tc qdisc add dev lo parent 2: handle 3: drr
tc filter add dev lo parent 3: basic classid 3:1 action mirred egress redirect dev dummy0
tc class add dev lo classid 3:1 drr
ping -c1 -W0.01 localhost # Trigger bug
tc class del dev lo classid 1:1
tc class add dev lo classid 1:1 drr
ping -c1 -W0.01 localhost # UaF

• https://git.kernel.org/stable/c/50612537e9ab29693122fab20fc1eed235054ffe https://git.kernel.org/stable/c/f0bddb4de043399f16d1969dad5ee5b984a64e7b https://git.kernel.org/stable/c/295ad5afd9efc5f67b86c64fce28fb94e26dc4c9 https://git.kernel.org/stable/c/98c75d76187944296068d685dfd8a1e9fd8c4fdc https://git.kernel.org/stable/c/14f91ab8d391f249b845916820a56f42cf747241 https://git.kernel.org/stable/c/db2c235682913a63054e741fe4e19645fdf2d68e https://git.kernel.org/stable/c/dde33a9d0b80aae0c69594d1f462515d7ff1cb3d https://git.kernel.org/stable/c/32008ab989ddcff1a485fa2b4906234c2
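As a hedged sketch of the accounting pattern the description implies (not the upstream diff; the helper below and its arguments are illustrative), the parent qdisc has to shrink its own qlen/backlog whenever the inner enqueue fails, including the __NET_XMIT_STOLEN case, so the hierarchy never keeps counters for a packet it no longer owns:

/* Hedged sketch: hand a packet to the inner qdisc and keep the parent's
 * counters consistent even when the inner qdisc steals or drops it.
 */
static struct sk_buff *netem_hand_to_inner_sketch(struct Qdisc *sch,
						  struct Qdisc *inner,
						  struct sk_buff *skb)
{
	unsigned int pkt_len = qdisc_pkt_len(skb);	/* capture before enqueue */
	struct sk_buff *to_free = NULL;
	int err;

	err = qdisc_enqueue(skb, inner, &to_free);
	if (to_free)
		kfree_skb_list(to_free);
	if (err != NET_XMIT_SUCCESS) {
		if (net_xmit_drop_count(err))
			qdisc_qstats_drop(sch);
		/* also for __NET_XMIT_STOLEN: the packet left this qdisc's
		 * accounting, so reduce the parent's qlen/backlog now.
		 */
		qdisc_tree_reduce_backlog(sch, 1, pkt_len);
		return NULL;
	}
	return skb;
}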

CVSS: - | EPSS: 0% | CPEs: 2 | EXPL: 0

In the Linux kernel, the following vulnerability has been resolved: net: ethernet: ti: am65-cpsw: Fix NULL dereference on XDP_TX. If the number of TX queues is set to 1, we get a NULL pointer dereference during XDP_TX.

~# ethtool -L eth0 tx 1
~# ./xdp-trafficgen udp -A <ipv6-src> -a <ipv6-dst> eth0 -t 2
Transmitting on eth0 (ifindex 2)
[ 241.135257] Unable to handle kernel NULL pointer dereference at virtual address 0000000000000030

Fix this by using the actual number of TX queues instead of the maximum number of TX queues when picking the TX channel in am65_cpsw_ndo_xdp_xmit().

• https://git.kernel.org/stable/c/8acacc40f7337527ff84cd901ed2ef0a2b95b2b6 https://git.kernel.org/stable/c/2e7189d2b1de51fc2567676cd4f96c0fe0960b9f https://git.kernel.org/stable/c/0a50c35277f96481a5a6ed5faf347f282040c57d
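A hedged sketch of the queue-selection idea (generic, not the driver's actual code; am65_cpsw_ndo_xdp_xmit() is the function the fix touches): bound the per-CPU channel choice by the TX queue count that is actually active after ethtool -L, rather than by a driver-wide maximum.

/* Hedged sketch: pick an XDP_TX queue that is guaranteed to exist. */
static unsigned int pick_xdp_tx_queue_sketch(struct net_device *ndev)
{
	/* real_num_tx_queues reflects "ethtool -L eth0 tx N"; a hardware
	 * maximum does not, and can point at a channel that was never set up.
	 */
	return smp_processor_id() % ndev->real_num_tx_queues;
}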

CVSS: - | EPSS: 0% | CPEs: 7 | EXPL: 0

In the Linux kernel, the following vulnerability has been resolved: ASoC: dapm: Fix UAF for snd_soc_pcm_runtime object. When using a kernel with the following extra config,

- CONFIG_KASAN=y
- CONFIG_KASAN_GENERIC=y
- CONFIG_KASAN_INLINE=y
- CONFIG_KASAN_VMALLOC=y
- CONFIG_FRAME_WARN=4096

the kernel detects that snd_pcm_suspend_all() accesses a freed 'snd_soc_pcm_runtime' object when the system is suspended, which leads to a use-after-free bug:

[ 52.047746] BUG: KASAN: use-after-free in snd_pcm_suspend_all+0x1a8/0x270
[ 52.047765] Read of size 1 at addr ffff0000b9434d50 by task systemd-sleep/2330
[ 52.047785] Call trace:
[ 52.047787] dump_backtrace+0x0/0x3c0
[ 52.047794] show_stack+0x34/0x50
[ 52.047797] dump_stack_lvl+0x68/0x8c
[ 52.047802] print_address_description.constprop.0+0x74/0x2c0
[ 52.047809] kasan_report+0x210/0x230
[ 52.047815] __asan_report_load1_noabort+0x3c/0x50
[ 52.047820] snd_pcm_suspend_all+0x1a8/0x270
[ 52.047824] snd_soc_suspend+0x19c/0x4e0

snd_pcm_sync_stop() has a NULL check on 'substream->runtime' before making any access, so we need to always set 'substream->runtime' to NULL every time we kfree() it.

• https://git.kernel.org/stable/c/a72706ed8208ac3f72d1c3ebbc6509e368b0dcb0 https://git.kernel.org/stable/c/993b60c7f93fa1d8ff296b58f646a867e945ae89 https://git.kernel.org/stable/c/8ca21e7a27c66b95a4b215edc8e45e5d66679f9f https://git.kernel.org/stable/c/3033ed903b4f28b5e1ab66042084fbc2c48f8624 https://git.kernel.org/stable/c/fe5046ca91d631ec432eee3bdb1f1c49b09c8b5e https://git.kernel.org/stable/c/5d13afd021eb43868fe03cef6da34ad08831ad6d https://git.kernel.org/stable/c/6a14fad8be178df6c4589667efec1789a3307b4e https://git.kernel.org/stable/c/b4a90b543d9f62d3ac34ec1ab97fc5334
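A minimal sketch of the pattern the last sentence describes (the helper name is illustrative; the upstream fix applies this at the existing kfree() sites): clearing the pointer right after freeing keeps the NULL check in snd_pcm_sync_stop() and the walk in snd_pcm_suspend_all() from touching freed memory on the next suspend.

/* Hedged sketch, not the upstream diff: free the runtime and clear the pointer. */
static void free_pcm_runtime_sketch(struct snd_pcm_substream *substream)
{
	kfree(substream->runtime);
	/* make later NULL checks (e.g. in snd_pcm_sync_stop()) meaningful */
	substream->runtime = NULL;
}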

CVSS: - | EPSS: 0% | CPEs: 3 | EXPL: 0

In the Linux kernel, the following vulnerability has been resolved: powerpc/qspinlock: Fix deadlock in MCS queue. If an interrupt occurs in queued_spin_lock_slowpath() after we increment qnodesp->count and before node->lock is initialized, another CPU might see stale lock values in get_tail_qnode(). If the stale lock value happens to match the lock on that CPU, then we write to the "next" pointer of the wrong qnode. This causes a deadlock as the former CPU, once it becomes the head of the MCS queue, will spin indefinitely until its "next" pointer is set by its successor in the queue. Running stress-ng on a 16 core (16EC/16VP) shared LPAR results in occasional lockups similar to the following:

$ stress-ng --all 128 --vm-bytes 80% --aggressive \
    --maximize --oomable --verify --syslog \
    --metrics --times --timeout 5m

watchdog: CPU 15 Hard LOCKUP
......
NIP [c0000000000b78f4] queued_spin_lock_slowpath+0x1184/0x1490
LR [c000000001037c5c] _raw_spin_lock+0x6c/0x90
Call Trace:
  0xc000002cfffa3bf0 (unreliable)
  _raw_spin_lock+0x6c/0x90
  raw_spin_rq_lock_nested.part.135+0x4c/0xd0
  sched_ttwu_pending+0x60/0x1f0
  __flush_smp_call_function_queue+0x1dc/0x670
  smp_ipi_demux_relaxed+0xa4/0x100
  xive_muxed_ipi_action+0x20/0x40
  __handle_irq_event_percpu+0x80/0x240
  handle_irq_event_percpu+0x2c/0x80
  handle_percpu_irq+0x84/0xd0
  generic_handle_irq+0x54/0x80
  __do_irq+0xac/0x210
  __do_IRQ+0x74/0xd0
  0x0
  do_IRQ+0x8c/0x170
  hardware_interrupt_common_virt+0x29c/0x2a0
--- interrupt: 500 at queued_spin_lock_slowpath+0x4b8/0x1490
......
NIP [c0000000000b6c28] queued_spin_lock_slowpath+0x4b8/0x1490
LR [c000000001037c5c] _raw_spin_lock+0x6c/0x90
--- interrupt: 500
  0xc0000029c1a41d00 (unreliable)
  _raw_spin_lock+0x6c/0x90
  futex_wake+0x100/0x260
  do_futex+0x21c/0x2a0
  sys_futex+0x98/0x270
  system_call_exception+0x14c/0x2f0
  system_call_vectored_common+0x15c/0x2ec

The following code flow illustrates how the deadlock occurs. For the sake of brevity, assume that both locks (A and B) are contended and we call the queued_spin_lock_slowpath() function.

CPU0                                       CPU1
----                                       ----
spin_lock_irqsave(A)                        |
spin_unlock_irqrestore(A)                   |
spin_lock(B)                                |
        |                                   |
        ▼                                   |
id = qnodesp->count++;                      |
(Note that nodes[0].lock == A)              |
        |                                   |
        ▼                                   |
Interrupt                                   |
(happens before "nodes[0].lock = B")        |
        |                                   |
        ▼                                   |
spin_lock_irqsave(A)                        |
        |                                   |
        ▼                                   |
id = qnodesp->count++                       |
nodes[1].lock = A                           |
        |                                   |
        ▼                                   |
Tail of MCS queue                           |
        |                            spin_lock_irqsave(A)
        ▼                                   |
Head of MCS queue                           ▼
        |                            CPU0 is previous tail
        ▼                                   |
Spin indefinitely                           ▼
(until "nodes[1].next != NULL")      prev = get_tail_qnode(A, CPU0)
                                            |
                                            ▼
                                     prev == &qnodes[CPU0].nodes[0]
                                     (as qnodes ---truncated---

• https://git.kernel.org/stable/c/84990b169557428c318df87b7836cd15f65b62dc https://git.kernel.org/stable/c/d84ab6661e8d09092de9b034b016515ef9b66085 https://git.kernel.org/stable/c/f06af737e4be28c0e926dc25d5f0a111da4e2987 https://git.kernel.org/stable/c/734ad0af3609464f8f93e00b6c0de1e112f44559
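To make the window concrete, here is a hedged, simplified sketch using hypothetical types (this is not the powerpc qspinlock code and not the fix): once the per-CPU slot counter is bumped, other CPUs that scan the node array by comparing ->lock, as get_tail_qnode() does, can already observe the slot, so an interrupt taken before the slot's lock field is rewritten leaves a stale value visible, exactly as in the diagram above.

/* Hypothetical, simplified per-CPU MCS node slots for illustration only. */
struct qnode_sketch {
	void *lock;			/* which spinlock this slot queues on */
	struct qnode_sketch *next;
};

struct qnodes_sketch {
	int count;
	struct qnode_sketch nodes[4];	/* one slot per nesting level */
};

static struct qnode_sketch *grab_qnode_sketch(struct qnodes_sketch *qnodesp,
					      void *lock)
{
	int idx = qnodesp->count++;	/* slot becomes visible to scanners here */

	/* If an interrupt lands before the next store, a CPU scanning for a
	 * node whose ->lock matches may match the stale pointer left by a
	 * previous acquisition in nodes[idx], and then write its "next" into
	 * the wrong qnode.
	 */
	qnodesp->nodes[idx].lock = lock;
	qnodesp->nodes[idx].next = NULL;
	return &qnodesp->nodes[idx];
}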