CVSS: - | EPSS: 0% | CPEs: 2 | EXPL: 0

In the Linux kernel, the following vulnerability has been resolved: drm/amd/display: Wrap dcn301_calculate_wm_and_dlg for FPU. Mirrors the logic for dcn30. Cue lots of WARNs and some kernel panics without this fix.

• https://git.kernel.org/stable/c/456ba2433844a6483cc4c933aa8f43d24575e341
• https://git.kernel.org/stable/c/25f1488bdbba63415239ff301fe61a8546140d9f
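The fix, as described, mirrors the dcn30 pattern: bracket the FPU-touching watermark calculation with the kernel's DC_FP_START()/DC_FP_END() helpers so kernel FPU state is saved and restored around it. A minimal sketch of that pattern, assuming an inner helper named dcn301_calculate_wm_and_dlg_fp() (the helper name and signature are illustrative, not the exact upstream diff):

    /* Sketch of the DC_FP_START()/DC_FP_END() wrapping pattern that dcn30
     * already uses; the _fp helper name and signature are assumptions. */
    void dcn301_calculate_wm_and_dlg(struct dc *dc, struct dc_state *context,
                                     display_e2e_pipe_params_st *pipes,
                                     int pipe_cnt, int vlevel)
    {
            DC_FP_START();  /* enter a kernel-FPU-safe region before any float math */
            dcn301_calculate_wm_and_dlg_fp(dc, context, pipes, pipe_cnt, vlevel);
            DC_FP_END();    /* restore FPU state on the way out */
    }

Floating-point use outside such a protected region is what produces the WARNs (and worse) mentioned above.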

CVSS: - | EPSS: 0% | CPEs: 3 | EXPL: 0

In the Linux kernel, the following vulnerability has been resolved: KVM: LAPIC: Also cancel preemption timer during SET_LAPIC. The warning below splats during guest reboot:

------------[ cut here ]------------
WARNING: CPU: 0 PID: 1931 at arch/x86/kvm/x86.c:10322 kvm_arch_vcpu_ioctl_run+0x874/0x880 [kvm]
CPU: 0 PID: 1931 Comm: qemu-system-x86 Tainted: G I 5.17.0-rc1+ #5
RIP: 0010:kvm_arch_vcpu_ioctl_run+0x874/0x880 [kvm]
Call Trace:
 <TASK>
 kvm_vcpu_ioctl+0x279/0x710 [kvm]
 __x64_sys_ioctl+0x83/0xb0
 do_syscall_64+0x3b/0xc0
 entry_SYSCALL_64_after_hwframe+0x44/0xae
RIP: 0033:0x7fd39797350b

This can be triggered by not exposing tsc-deadline mode and doing a reboot in the guest. The lapic_shutdown() function, which is called in the sys_reboot path, does not disarm the flying timer; it just masks LVTT. lapic_shutdown() clears the APIC state with LVT_MASKED while the timer-mode bit is 0, which can trigger a timer-mode switch between tsc-deadline and oneshot/periodic and thus get the preemption timer cancelled in apic_update_lvtt(). However, we can't depend on this when tsc-deadline mode is not exposed and the oneshot/periodic modes are emulated by the preemption timer. QEMU synchronizes state around reset, so cancel the preemption timer under KVM_SET_LAPIC.

• https://git.kernel.org/stable/c/54b3439c8e70e0bcfea59aeef9dd98908cbbf655
• https://git.kernel.org/stable/c/ce55f63f6cea4cab8ae9212f73285648a5baa30d
• https://git.kernel.org/stable/c/35fe7cfbab2e81f1afb23fc4212210b1de6d9633
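The described remedy is to stop relying on apic_update_lvtt() noticing a timer-mode switch and instead cancel the armed timer explicitly when userspace restores LAPIC state. A hedged sketch of that shape (the helper name and its placement inside kvm_apic_set_state() are assumptions; the lapic_timer fields are the real upstream ones):

    /* Hedged sketch, not the verbatim upstream diff: on KVM_SET_LAPIC, stop
     * any armed emulated timer before loading the restored register state. */
    static void kvm_lapic_cancel_flying_timer(struct kvm_lapic *apic)
    {
            /* The emulated APIC timer (including the preemption-timer path
             * layered on top of it) is backed by this hrtimer; cancelling it
             * guarantees no stale expiry fires into the state QEMU restores. */
            hrtimer_cancel(&apic->lapic_timer.timer);
            atomic_set(&apic->lapic_timer.pending, 0);  /* drop any pending expiry */
    }

Cancelling unconditionally at SET_LAPIC time sidesteps the mode-dependent behavior of apic_update_lvtt() entirely.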

CVSS: - | EPSS: 0% | CPEs: 4 | EXPL: 0

In the Linux kernel, the following vulnerability has been resolved: KVM: x86: Forcibly leave nested virt when SMM state is toggled. Forcibly leave nested virtualization operation if userspace toggles SMM state via KVM_SET_VCPU_EVENTS or KVM_SYNC_X86_EVENTS. If userspace forces the vCPU out of SMM while it's post-VMXON and then injects an SMI, vmx_enter_smm() will overwrite vmx->nested.smm.vmxon and end up with both vmxon=false and smm.vmxon=false, but all other nVMX state allocated. Don't attempt to gracefully handle the transition as (a) most transitions are nonsensical, e.g. forcing SMM while L2 is running, (b) there isn't sufficient information to handle all transitions, e.g. SVM wants access to the SMRAM save state, and (c) KVM_SET_VCPU_EVENTS must precede KVM_SET_NESTED_STATE during state restore, as the latter disallows putting the vCPU into L2 if SMM is active, and disallows tagging the vCPU as being post-VMXON in SMM if SMM is not active. Abuse of KVM_SET_VCPU_EVENTS manifests as a WARN and memory leak in nVMX due to failure to free vmcs01's shadow VMCS, but the bug goes far beyond just a memory leak, e.g. toggling SMM on while L2 is active puts the vCPU in an architecturally impossible state.

WARNING: CPU: 0 PID: 3606 at free_loaded_vmcs arch/x86/kvm/vmx/vmx.c:2665 [inline]
WARNING: CPU: 0 PID: 3606 at free_loaded_vmcs+0x158/0x1a0 arch/x86/kvm/vmx/vmx.c:2656
Modules linked in:
CPU: 1 PID: 3606 Comm: syz-executor725 Not tainted 5.17.0-rc1-syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 01/01/2011
RIP: 0010:free_loaded_vmcs arch/x86/kvm/vmx/vmx.c:2665 [inline]
RIP: 0010:free_loaded_vmcs+0x158/0x1a0 arch/x86/kvm/vmx/vmx.c:2656
Code: <0f> 0b eb b3 e8 8f 4d 9f 00 e9 f7 fe ff ff 48 89 df e8 92 4d 9f 00
Call Trace:
 <TASK>
 kvm_arch_vcpu_destroy+0x72/0x2f0 arch/x86/kvm/x86.c:11123
 kvm_vcpu_destroy arch/x86/kvm/../../../virt/kvm/kvm_main.c:441 [inline]
 kvm_destroy_vcpus+0x11f/0x290 arch/x86/kvm/../../../virt/kvm/kvm_main.c:460
 kvm_free_vcpus arch/x86/kvm/x86.c:11564 [inline]
 kvm_arch_destroy_vm+0x2e8/0x470 arch/x86/kvm/x86.c:11676
 kvm_destroy_vm arch/x86/kvm/../../..

• https://git.kernel.org/stable/c/080dbe7e9b86a0392d8dffc00d9971792afc121f
• https://git.kernel.org/stable/c/e302786233e6bc512986d007c96458ccf5ca21c7
• https://git.kernel.org/stable/c/b4c0d89c92e957ecccce12e66b63875d0cc7af7e
• https://git.kernel.org/stable/c/f7e570780efc5cec9b2ed1e0472a7da14e864fdb
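The approach described is to bail out of nested operation outright when the SMM flag flips, since points (a) through (c) rule out a graceful transition. A hedged sketch of where that lands (the leave_nested hook follows upstream naming conventions, but the exact code and the elided validation are assumptions):

    /* Hedged sketch of the described behavior, not the verbatim upstream diff:
     * if KVM_SET_VCPU_EVENTS flips the vCPU's SMM flag, drop all nested
     * virtualization state instead of trying to fix up nVMX/nSVM piecemeal. */
    static void kvm_leave_nested(struct kvm_vcpu *vcpu)
    {
            /* Vendor hook: frees/resets nested state (vmxon, cached VMCS12,
             * vmcs01's shadow VMCS, ...) -- the leak the WARN above flags. */
            kvm_x86_ops.nested_ops->leave_nested(vcpu);
    }

    static int kvm_vcpu_ioctl_x86_set_vcpu_events(struct kvm_vcpu *vcpu,
                                                  struct kvm_vcpu_events *events)
    {
            /* ... validation of the other event fields elided ... */
            if ((events->flags & KVM_VCPUEVENT_VALID_SMM) &&
                is_smm(vcpu) != !!events->smi.smm)
                    kvm_leave_nested(vcpu);  /* assumption: toggle detected here */
            /* ... remainder of the event restore elided ... */
            return 0;
    }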

CVSS: - | EPSS: 0% | CPEs: 4 | EXPL: 0

In the Linux kernel, the following vulnerability has been resolved: usb: xhci-plat: fix crash when suspend if remote wake enable. The i.MX8QM platform crashes during suspend when remote wakeup is enabled:

Internal error: synchronous external abort: 96000210 [#1] PREEMPT SMP
Modules linked in:
CPU: 2 PID: 244 Comm: kworker/u12:6 Not tainted 5.15.5-dirty #12
Hardware name: Freescale i.MX8QM MEK (DT)
Workqueue: events_unbound async_run_entry_fn
pstate: 600000c5 (nZCv daIF -PAN -UAO -TCO -DIT -SSBS BTYPE=--)
pc : xhci_disable_hub_port_wake.isra.62+0x60/0xf8
lr : xhci_disable_hub_port_wake.isra.62+0x34/0xf8
sp : ffff80001394bbf0
x29: ffff80001394bbf0 x28: 0000000000000000 x27: ffff00081193b578
x26: ffff00081193b570 x25: 0000000000000000 x24: 0000000000000000
x23: ffff00081193a29c x22: 0000000000020001 x21: 0000000000000001
x20: 0000000000000000 x19: ffff800014e90490 x18: 0000000000000000
x17: 0000000000000000 x16: 0000000000000000 x15: 0000000000000000
x14: 0000000000000000 x13: 0000000000000002 x12: 0000000000000000
x11: 0000000000000000 x10: 0000000000000960 x9 : ffff80001394baa0
x8 : ffff0008145d1780 x7 : ffff0008f95b8e80 x6 : 000000001853b453
x5 : 0000000000000496 x4 : 0000000000000000 x3 : ffff00081193a29c
x2 : 0000000000000001 x1 : 0000000000000000 x0 : ffff000814591620
Call trace:
 xhci_disable_hub_port_wake.isra.62+0x60/0xf8
 xhci_suspend+0x58/0x510
 xhci_plat_suspend+0x50/0x78
 platform_pm_suspend+0x2c/0x78
 dpm_run_callback.isra.25+0x50/0xe8
 __device_suspend+0x108/0x3c0

The basic flow:
1. Runtime suspend calls xhci_suspend; the xhci parent devices gate the clock.
2. echo mem > /sys/power/state; the system device-suspend path calls xhci_suspend.
3. xhci_suspend calls xhci_disable_hub_port_wake, which accesses registers, but the clock has already been gated by runtime suspend.

This problem was hidden by the power domain driver, which called runtime resume beforehand, but commit c1df456d0f06e ("PM: domains: Don't runtime resume devices at genpd_prepare()") removed that and exposed the issue. This patch calls runtime resume before suspend to make sure the clock is on before registers are accessed.

Tested-by: Abel Vesa <abel.vesa@nxp.com>

• https://git.kernel.org/stable/c/20c51a4c52208f98e27308c456a1951778f41fa5
• https://git.kernel.org/stable/c/d5755832a1e47f5d8773f0776e211ecd4e02da72
• https://git.kernel.org/stable/c/8b05ad29acb972850ad795fa850e814b2e758b83
• https://git.kernel.org/stable/c/9df478463d9feb90dae24f183383961cf123a0ec
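Per the description, the fix runtime-resumes the controller from the system-suspend path so the parent power domains ungate the clocks before any register access. A hedged sketch of that shape in xhci_plat_suspend() (the error handling and the put-side balancing are assumptions, not the verbatim upstream diff):

    /* Hedged sketch: force the controller runtime-active (clocks on) before
     * xhci_suspend() -> xhci_disable_hub_port_wake() touches registers. */
    static int xhci_plat_suspend(struct device *dev)
    {
            struct usb_hcd *hcd = dev_get_drvdata(dev);
            struct xhci_hcd *xhci = hcd_to_xhci(hcd);
            int ret;

            ret = pm_runtime_resume_and_get(dev);  /* ungate clocks via parent domains */
            if (ret < 0)
                    return ret;

            ret = xhci_suspend(xhci, device_may_wakeup(dev));
            pm_runtime_put_noidle(dev);            /* assumption: rebalance the usage count */

            return ret;
    }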

CVSS: - | EPSS: 0% | CPEs: 9 | EXPL: 0

In the Linux kernel, the following vulnerability has been resolved: USB: core: Fix hang in usb_kill_urb by adding memory barriers. The syzbot fuzzer has identified a bug in which processes hang waiting for usb_kill_urb() to return. It turns out the issue is not unlinking the URB; that works just fine. Rather, the problem arises when the wakeup notification that the URB has completed is not received. The reason is memory-access ordering on SMP systems. In outline form, usb_kill_urb() and __usb_hcd_giveback_urb() operating concurrently on different CPUs perform the following actions:

CPU 0                                        CPU 1
----------------------------                 ---------------------------------
usb_kill_urb():                              __usb_hcd_giveback_urb():
  ...                                          ...
  atomic_inc(&urb->reject);                    atomic_dec(&urb->use_count);
  ...                                          ...
  wait_event(usb_kill_urb_queue,               if (atomic_read(&urb->reject))
      atomic_read(&urb->use_count) == 0);          wake_up(&usb_kill_urb_queue);

Confining your attention to urb->reject and urb->use_count, you can see that the overall pattern of accesses on CPU 0 is: write urb->reject, then read urb->use_count; whereas the overall pattern of accesses on CPU 1 is: write urb->use_count, then read urb->reject. This pattern is referred to in memory-model circles as SB (for "Store Buffering"), and it is well known that without suitable enforcement of the desired order of accesses -- in the form of memory barriers -- it is entirely possible for one or both CPUs to execute their reads ahead of their writes. The end result will be that sometimes CPU 0 sees the old un-decremented value of urb->use_count while CPU 1 sees the old un-incremented value of urb->reject.

• https://git.kernel.org/stable/c/5f138ef224dffd15d5e5c5b095859719e0038427
• https://git.kernel.org/stable/c/b50f5ca60475710bbc9a3af32fbfc17b1e69c2f0
• https://git.kernel.org/stable/c/546ba238535d925254e0b3f12012a5c55801e2f3
• https://git.kernel.org/stable/c/5904dfd3ddaff3bf4a41c3baf0a8e8f31ed4599b
• https://git.kernel.org/stable/c/9c61fce322ac2ef7fecf025285353570d60e41d6
• https://git.kernel.org/stable/c/e3b131e30e612ff0e32de6c1cb4f69f89db29193
• https://git.kernel.org/stable/c/9340226388c66a7e090ebb00e91ed64a753b6c26
• https://git.kernel.org/stable/c/c9a18f7c5b071dce5e6939568829d4099
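The textbook cure for an SB pattern is a full memory barrier between each CPU's write and its subsequent read, which is exactly what the description calls for. A hedged sketch of the barrier placement (smp_mb() and smp_mb__after_atomic() are the standard kernel primitives; the exact placement is inferred from the outline above, not copied from the diff):

    /* CPU 0 path, usb_kill_urb(): order the reject increment before the
     * use_count read, so this CPU cannot hoist the read above the write. */
    atomic_inc(&urb->reject);
    smp_mb();
    wait_event(usb_kill_urb_queue, atomic_read(&urb->use_count) == 0);

    /* CPU 1 path, __usb_hcd_giveback_urb(): order the use_count decrement
     * before the reject read; after an atomic RMW this cheaper form suffices. */
    atomic_dec(&urb->use_count);
    smp_mb__after_atomic();
    if (atomic_read(&urb->reject))
            wake_up(&usb_kill_urb_queue);

With the barriers paired this way, at least one CPU is guaranteed to observe the other's write, so the wakeup can no longer be lost.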