
CVSS: - | EPSS: 0% | CPEs: 3 | EXPL: 0

In the Linux kernel, the following vulnerability has been resolved:

KVM: arm64: Fix circular locking dependency

The rule inside KVM enforces that the vcpu->mutex is taken *inside* kvm->lock. The rule is violated by pkvm_create_hyp_vm(), which acquires kvm->lock while already holding the vcpu->mutex lock from kvm_vcpu_ioctl(). Avoid the circular locking dependency altogether by protecting the hyp VM handle with the config_lock, much like we already do for other forms of VM-scoped data.

• https://git.kernel.org/stable/c/3d16cebf01127f459dcfeb79ed77bd68b124c228
• https://git.kernel.org/stable/c/3ab1c40a1e915e350d9181a4603af393141970cc
• https://git.kernel.org/stable/c/10c02aad111df02088d1a81792a709f6a7eca6cc
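The locking shape described above can be sketched in plain userspace C. This is only a minimal model: kvm_lock, vcpu_mutex, config_lock and hyp_vm_handle are stand-ins, not the real KVM symbols. It follows the ordering rule as stated in the advisory and shows why moving the hyp VM handle under a dedicated config_lock removes the need to take kvm->lock from the vcpu ioctl path:

/*
 * Minimal userspace model, not the real KVM code.
 * Documented order (per the advisory): kvm_lock first, then vcpu_mutex
 * ("vcpu->mutex is taken *inside* kvm->lock").
 */
#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t kvm_lock    = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t vcpu_mutex  = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t config_lock = PTHREAD_MUTEX_INITIALIZER;

static void *hyp_vm_handle;     /* stand-in for the pKVM hyp VM handle */

/*
 * Buggy shape: called from the vcpu ioctl path (vcpu_mutex already held)
 * and then takes kvm_lock, i.e. the reverse of the documented order.
 * Single-threaded this "works", but a concurrent path taking kvm_lock
 * before vcpu_mutex completes the cycle and can deadlock (lockdep splat).
 */
static void pkvm_create_hyp_vm_buggy(void)
{
	pthread_mutex_lock(&kvm_lock);
	hyp_vm_handle = (void *)0x1;
	pthread_mutex_unlock(&kvm_lock);
}

/*
 * Fixed shape: the handle is protected by a dedicated config_lock that has
 * no ordering relationship with vcpu_mutex, so no inversion is possible.
 */
static void pkvm_create_hyp_vm_fixed(void)
{
	pthread_mutex_lock(&config_lock);
	hyp_vm_handle = (void *)0x2;
	pthread_mutex_unlock(&config_lock);
}

int main(void)
{
	/* kvm_vcpu_ioctl() analogue: vcpu_mutex is already held here. */
	pthread_mutex_lock(&vcpu_mutex);
	pkvm_create_hyp_vm_buggy();   /* inverted order: kvm_lock after vcpu_mutex */
	pkvm_create_hyp_vm_fixed();   /* safe: kvm_lock is never needed */
	pthread_mutex_unlock(&vcpu_mutex);

	printf("hyp vm handle: %p\n", hyp_vm_handle);
	return 0;
}

Compile with -pthread; the model only illustrates the call pattern, it does not reproduce the deadlock itself.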

CVSS: - | EPSS: 0% | CPEs: 3 | EXPL: 0

In the Linux kernel, the following vulnerability has been resolved:

net: stmmac: protect updates of 64-bit statistics counters

As explained by a comment in <linux/u64_stats_sync.h>, the write side of struct u64_stats_sync must ensure mutual exclusion, or one seqcount update could be lost on 32-bit platforms, thus blocking readers forever. Such lockups have been observed in the real world after stmmac_xmit() on one CPU raced with stmmac_napi_poll_tx() on another CPU.

To fix the issue without introducing a new lock, split the statistics into three parts:

1. fields updated only under the tx queue lock,
2. fields updated only during NAPI poll,
3. fields updated only from interrupt context.

Updates to fields in the first two groups are already serialized through other locks. It is sufficient to split the existing struct u64_stats_sync so that each group has its own. Note that tx_set_ic_bit is updated from both contexts. Split this counter so that each context gets its own, and calculate their sum to get the total value in stmmac_get_ethtool_stats().

For the third group, multiple interrupts may be processed by different CPUs at the same time, but interrupts on the same CPU will not nest. Move fields from this group to a newly created per-cpu struct stmmac_pcpu_stats.

• https://git.kernel.org/stable/c/133466c3bbe171f826294161db203f7670bb30c8
• https://git.kernel.org/stable/c/9680b2ab54ba8d72581100e8c45471306101836e
• https://git.kernel.org/stable/c/e6af0f082a4b87b99ad033003be2a904a1791b3f
• https://git.kernel.org/stable/c/38cc3c6dcc09dc3a1800b5ec22aef643ca11eab8
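A minimal userspace model of the write-side pattern described above, with simplified stand-ins for u64_stats_sync and the driver's stat structures (stats_sync, txq_stats, napi_stats and their fields are illustrative names, not the stmmac code): each writer context owns its own sequence counter, the duplicated tx_set_ic_bit copies are summed for reporting, and the reader retries roughly as u64_stats_fetch_begin()/u64_stats_fetch_retry() would:

/* Userspace model of the fix: each writer context gets its own sequence
 * counter, so the two writers can never race on the same seqcount. */
#include <stdatomic.h>
#include <stdint.h>
#include <stdio.h>

struct stats_sync {
	atomic_uint seq;            /* even = idle, odd = write in progress */
};

struct txq_stats {
	struct stats_sync syncp;    /* written only under the tx queue lock */
	uint64_t tx_bytes;
	uint64_t tx_set_ic_bit;     /* split counter: xmit-path copy */
};

struct napi_stats {
	struct stats_sync syncp;    /* written only from NAPI poll */
	uint64_t tx_packets;
	uint64_t tx_set_ic_bit;     /* split counter: NAPI-path copy */
};

static struct txq_stats txq;
static struct napi_stats napi;

static void stats_write_begin(struct stats_sync *s)
{
	atomic_fetch_add_explicit(&s->seq, 1, memory_order_release);
}

static void stats_write_end(struct stats_sync *s)
{
	atomic_fetch_add_explicit(&s->seq, 1, memory_order_release);
}

/* Reader retries while a write is in progress (odd seq) or the sequence
 * changed during the read, mirroring u64_stats_fetch_begin/retry. */
static uint64_t stats_read(struct stats_sync *s, const uint64_t *field)
{
	unsigned int start;
	uint64_t val;

	do {
		start = atomic_load_explicit(&s->seq, memory_order_acquire);
		val = *field;
	} while ((start & 1) ||
		 start != atomic_load_explicit(&s->seq, memory_order_acquire));
	return val;
}

int main(void)
{
	/* xmit path and NAPI poll each bump their own copy under their own
	 * seqcount; neither can lose the other's seqcount update. */
	stats_write_begin(&txq.syncp);
	txq.tx_bytes += 1500;
	txq.tx_set_ic_bit++;
	stats_write_end(&txq.syncp);

	stats_write_begin(&napi.syncp);
	napi.tx_packets++;
	napi.tx_set_ic_bit++;
	stats_write_end(&napi.syncp);

	/* ethtool-style readout: the total is the sum of the per-context
	 * copies, as the advisory describes for stmmac_get_ethtool_stats(). */
	uint64_t total = stats_read(&txq.syncp, &txq.tx_set_ic_bit) +
			 stats_read(&napi.syncp, &napi.tx_set_ic_bit);
	printf("tx_set_ic_bit total = %llu\n", (unsigned long long)total);
	return 0;
}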

CVSS: - | EPSS: 0% | CPEs: 6 | EXPL: 0

In the Linux kernel, the following vulnerability has been resolved:

ceph: prevent use-after-free in encode_cap_msg()

In fs/ceph/caps.c, in encode_cap_msg(), a "use after free" error was caught by KASAN at this line - 'ceph_buffer_get(arg->xattr_buf);'. This implies that before the refcount could be incremented here, the buffer was freed.

In the same file, in handle_cap_grant(), the refcount is decremented by this line - 'ceph_buffer_put(ci->i_xattrs.blob);'. It appears that a race occurred and the resource was freed by the latter line before the former line could increment it.

encode_cap_msg() is called by __send_cap(), and __send_cap() is called by ceph_check_caps() after calling __prep_cap(). __prep_cap() is where ci->i_xattrs.blob is assigned to arg->xattr_buf. This is the spot where the refcount must be increased to prevent the "use after free" error.

• https://git.kernel.org/stable/c/8180d0c27b93a6eb60da1b08ea079e3926328214
• https://git.kernel.org/stable/c/70e329b440762390258a6fe8c0de93c9fdd56c77
• https://git.kernel.org/stable/c/f3f98d7d84b31828004545e29fd7262b9f444139
• https://git.kernel.org/stable/c/ae20db45e482303a20e56f2db667a9d9c54ac7e7
• https://git.kernel.org/stable/c/7958c1bf5b03c6f1f58e724dbdec93f8f60b96fc
• https://git.kernel.org/stable/c/cda4672da1c26835dcbd7aec2bfed954eda9b5ef
• https://lists.debian.org/debian-lts-announce/2024/06/msg00017.html
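A minimal userspace sketch of the refcounting fix described above, with hypothetical stand-ins (struct buffer, buffer_get/buffer_put, prep_cap, encode_cap_msg) rather than the real ceph types: the point is that the reference is taken where the blob pointer is captured, so a concurrent put cannot free it before the message is encoded:

/* Take the reference where the blob pointer is captured (the __prep_cap()
 * analogue), not later, so a racing put cannot drop the last reference in
 * between. All names are illustrative, not the ceph code. */
#include <stdatomic.h>
#include <stdio.h>
#include <stdlib.h>

struct buffer {
	atomic_int refcount;
	size_t len;
};

static struct buffer *buffer_get(struct buffer *b)
{
	if (b)
		atomic_fetch_add(&b->refcount, 1);
	return b;
}

static void buffer_put(struct buffer *b)
{
	if (b && atomic_fetch_sub(&b->refcount, 1) == 1)
		free(b);                       /* last reference dropped */
}

struct cap_msg_arg {
	struct buffer *xattr_buf;
};

/* Fixed pattern: the capture site takes a reference that the message owns. */
static void prep_cap(struct cap_msg_arg *arg, struct buffer *xattr_blob)
{
	arg->xattr_buf = buffer_get(xattr_blob);
}

static void encode_cap_msg(const struct cap_msg_arg *arg)
{
	/* Safe: the reference taken in prep_cap() is still held, so the
	 * buffer cannot be freed underneath us by a concurrent put. */
	if (arg->xattr_buf)
		printf("encoding %zu bytes of xattrs\n", arg->xattr_buf->len);
}

int main(void)
{
	struct buffer *blob = calloc(1, sizeof(*blob));
	struct cap_msg_arg arg = { 0 };

	if (!blob)
		return 1;
	atomic_init(&blob->refcount, 1);       /* owner's reference */
	blob->len = 64;

	prep_cap(&arg, blob);                  /* message takes its own ref */
	buffer_put(blob);                      /* handle_cap_grant() analogue */
	encode_cap_msg(&arg);                  /* still valid */
	buffer_put(arg.xattr_buf);             /* drop the message's reference */
	return 0;
}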

CVSS: - | EPSS: 0% | CPEs: 7 | EXPL: 0

In the Linux kernel, the following vulnerability has been resolved:

fs,hugetlb: fix NULL pointer dereference in hugetlbs_fill_super

When configuring a hugetlb filesystem via the fsconfig() syscall, there is a possible NULL dereference in hugetlbfs_fill_super() caused by assigning NULL to ctx->hstate in hugetlbfs_parse_param() when the requested pagesize is not valid. E.g., taking the following steps:

    fd = fsopen("hugetlbfs", FSOPEN_CLOEXEC);
    fsconfig(fd, FSCONFIG_SET_STRING, "pagesize", "1024", 0);
    fsconfig(fd, FSCONFIG_CMD_CREATE, NULL, NULL, 0);

Given that the requested "pagesize" is invalid, ctx->hstate will be replaced with NULL, losing its previous value, and we will print an error:

    ...
    case Opt_pagesize:
        ps = memparse(param->string, &rest);
        ctx->hstate = size_to_hstate(ps);
        if (!ctx->hstate) {
            pr_err("Unsupported page size %lu MB\n", ps / SZ_1M);
            return -EINVAL;
        }
        return 0;
    ...

This is a problem because later on we will dereference ctx->hstate in hugetlbfs_fill_super():

    ...
    sb->s_blocksize = huge_page_size(ctx->hstate);
    ...

This causes the Oops below. Fix this by replacing the ctx->hstate value only when the pagesize is known to be valid.

    kernel: hugetlbfs: Unsupported page size 0 MB
    kernel: BUG: kernel NULL pointer dereference, address: 0000000000000028
    kernel: #PF: supervisor read access in kernel mode
    kernel: #PF: error_code(0x0000) - not-present page
    kernel: PGD 800000010f66c067 P4D 800000010f66c067 PUD 1b22f8067 PMD 0
    kernel: Oops: 0000 [#1] PREEMPT SMP PTI
    kernel: CPU: 4 PID: 5659 Comm: syscall Tainted: G E 6.8.0-rc2-default+ #22 5a47c3fef76212addcc6eb71344aabc35190ae8f
    kernel: Hardware name: Intel Corp. GROVEPORT/GROVEPORT, BIOS GVPRCRB1.86B.0016.D04.1705030402 05/03/2017
    kernel: RIP: 0010:hugetlbfs_fill_super+0xb4/0x1a0
    kernel: Code: 48 8b 3b e8 3e c6 ed ff 48 85 c0 48 89 45 20 0f 84 d6 00 00 00 48 b8 ff ff ff ff ff ff ff 7f 4c 89 e7 49 89 44 24 20 48 8b 03 <8b> 48 28 b8 00 10 00 00 48 d3 e0 49 89 44 24 18 48 8b 03 8b 40 28
    kernel: RSP: 0018:ffffbe9960fcbd48 EFLAGS: 00010246
    kernel: RAX: 0000000000000000 RBX: ffff9af5272ae780 RCX: 0000000000372004
    kernel: RDX: ffffffffffffffff RSI: ffffffffffffffff RDI: ffff9af555e9b000
    kernel: RBP: ffff9af52ee66b00 R08: 0000000000000040 R09: 0000000000370004
    kernel: R10: ffffbe9960fcbd48 R11: 0000000000000040 R12: ffff9af555e9b000
    kernel: R13: ffffffffa66b86c0 R14: ffff9af507d2f400 R15: ffff9af507d2f400
    kernel: FS: 00007ffbc0ba4740(0000) GS:ffff9b0bd7000000(0000) knlGS:0000000000000000
    kernel: CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
    kernel: CR2: 0000000000000028 CR3: 00000001b1ee0000 CR4: 00000000001506f0
    kernel: Call Trace:
    kernel: <TASK>
    kernel: ? __die_body+0x1a/0x60
    kernel: ? page_fault_oops+0x16f/0x4a0
    kernel: ?

• https://git.kernel.org/stable/c/32021982a324dce93b4ae00c06213bf45fb319c8
• https://git.kernel.org/stable/c/1dde8ef4b7a749ae1bc73617c91775631d167557
• https://git.kernel.org/stable/c/80d852299987a8037be145a94f41874228f1a773
• https://git.kernel.org/stable/c/22850c9950a4e43a67299755d11498f3292d02ff
• https://git.kernel.org/stable/c/2e2c07104b4904aed1389a59b25799b95a85b5b9
• https://git.kernel.org/stable/c/13c5a9fb07105557a1fa9efdb4f23d7ef30b7274
• https://git.kernel.org/stable/c/ec78418801ef7b0c22cd6a30145ec480dd48db39
• https://git.kernel.org/stable/c/79d72c68c58784a3e1cd2378669d51bfd
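The three calls quoted above can be wrapped into a small self-contained program. This is only a sketch: it assumes kernel and libc headers new enough to provide the fsopen/fsconfig syscall numbers and the FSCONFIG_* constants in <linux/mount.h>, and it needs the privileges that fsopen() requires. On a patched kernel the invalid pagesize is simply rejected with EINVAL; an unpatched kernel oopses on the final call.

/* Minimal wrapper around the reproduction steps from the advisory. */
#define _GNU_SOURCE
#include <linux/mount.h>
#include <stdio.h>
#include <sys/syscall.h>
#include <unistd.h>

int main(void)
{
	int fd = syscall(SYS_fsopen, "hugetlbfs", FSOPEN_CLOEXEC);
	if (fd < 0) {
		perror("fsopen");
		return 1;
	}

	/* 1024 bytes is not a supported huge page size; on a vulnerable
	 * kernel this clears ctx->hstate instead of leaving it untouched. */
	if (syscall(SYS_fsconfig, fd, FSCONFIG_SET_STRING,
		    "pagesize", "1024", 0) < 0)
		perror("fsconfig(pagesize)");

	/* On an unpatched kernel this dereferences the NULL hstate in
	 * hugetlbfs_fill_super() and oopses; on a fixed kernel it proceeds
	 * with the previously valid hstate. */
	if (syscall(SYS_fsconfig, fd, FSCONFIG_CMD_CREATE, NULL, NULL, 0) < 0)
		perror("fsconfig(create)");

	close(fd);
	return 0;
}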

CVSS: - | EPSS: 0% | CPEs: 7 | EXPL: 0

In the Linux kernel, the following vulnerability has been resolved:

xen/events: close evtchn after mapping cleanup

shutdown_pirq and startup_pirq are not taking the irq_mapping_update_lock because they can't, due to lock inversion: both are called with the irq_desc->lock being taken, while the lock order is first irq_mapping_update_lock and then irq_desc->lock.

This opens multiple races:

- shutdown_pirq can be interrupted by a function that allocates an event channel:

    CPU0                          CPU1
    shutdown_pirq {
      xen_evtchn_close(e)
                                  __startup_pirq {
                                    EVTCHNOP_bind_pirq
                                      -> returns just freed evtchn e
                                    set_evtchn_to_irq(e, irq)
                                  }
      xen_irq_info_cleanup() {
        set_evtchn_to_irq(e, -1)
      }
    }

  Assume that event channel e here refers to the same event channel number. After this race the evtchn_to_irq mapping for e is invalid (-1).

- __startup_pirq races with __unbind_from_irq in a similar way. Because __startup_pirq doesn't take irq_mapping_update_lock, it can grab the evtchn that __unbind_from_irq is currently freeing and cleaning up. In this case, even though the event channel is allocated, its mapping can be unset in evtchn_to_irq.

The fix is to first clean up the mappings and then close the event channel.

• https://git.kernel.org/stable/c/d46a78b05c0e37f76ddf4a7a67bf0b6c68bada55
• https://git.kernel.org/stable/c/9470f5b2503cae994098dea9682aee15b313fa44
• https://git.kernel.org/stable/c/0fc88aeb2e32b76db3fe6a624b8333dbe621b8fd
• https://git.kernel.org/stable/c/ea592baf9e41779fe9a0424c03dd2f324feca3b3
• https://git.kernel.org/stable/c/585a344af6bcac222608a158fc2830ff02712af5
• https://git.kernel.org/stable/c/20980195ec8d2e41653800c45c8c367fa1b1f2b4
• https://git.kernel.org/stable/c/9be71aa12afa91dfe457b3fb4a444c42b1ee036b
• https://git.kernel.org/stable/c/fa765c4b4aed2d64266b694520ecb025c
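The ordering fix ("first clean up the mappings and then close the event channel") can be illustrated with a small userspace model; evtchn_to_irq, set_evtchn_to_irq and evtchn_close below are simplified stand-ins, not the real Xen code:

/* Tear down the evtchn->irq mapping before returning the event channel to
 * the allocator, so a CPU that immediately re-binds the same channel cannot
 * have its fresh mapping wiped out by a stale cleanup. */
#include <stdio.h>

#define NR_EVTCHN 8

static int evtchn_to_irq[NR_EVTCHN];   /* -1 = no mapping */
static int evtchn_free[NR_EVTCHN];     /* 1 = channel available again */

static void set_evtchn_to_irq(int evtchn, int irq)
{
	evtchn_to_irq[evtchn] = irq;
}

static void evtchn_close(int evtchn)   /* stand-in for closing the channel */
{
	evtchn_free[evtchn] = 1;
}

/* Fixed order: mapping cleanup first, then close the event channel. */
static void shutdown_pirq_fixed(int evtchn)
{
	set_evtchn_to_irq(evtchn, -1);
	evtchn_close(evtchn);
}

int main(void)
{
	for (int i = 0; i < NR_EVTCHN; i++)
		evtchn_to_irq[i] = -1;

	/* The racy interleaving from the advisory, written out sequentially:
	 * CPU0 closes the channel first, CPU1 re-binds the just-freed channel,
	 * then CPU0's late mapping cleanup destroys the new binding. */
	set_evtchn_to_irq(3, 17);
	evtchn_close(3);                /* CPU0: channel 3 is free again     */
	set_evtchn_to_irq(3, 42);       /* CPU1: binds irq 42 to channel 3   */
	set_evtchn_to_irq(3, -1);       /* CPU0: stale cleanup, mapping lost */
	printf("buggy order: evtchn 3 -> irq %d (expected 42)\n", evtchn_to_irq[3]);

	/* With the fixed order the cleanup happens before the channel can be
	 * handed out again, so the new binding survives. */
	set_evtchn_to_irq(3, 17);
	shutdown_pirq_fixed(3);
	set_evtchn_to_irq(3, 42);       /* CPU1's new binding */
	printf("fixed order: evtchn 3 -> irq %d\n", evtchn_to_irq[3]);
	return 0;
}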