
CVSS: 6.2 | EPSS: 0% | CPEs: 3 | EXPL: 0

In the Linux kernel, the following vulnerability has been resolved: x86/ioremap: Map EFI-reserved memory as encrypted for SEV

Some drivers require memory that is marked as EFI boot services data. In order for this memory to not be re-used by the kernel after ExitBootServices(), efi_mem_reserve() is used to preserve it by inserting a new EFI memory descriptor and marking it with the EFI_MEMORY_RUNTIME attribute. Under SEV, memory marked with the EFI_MEMORY_RUNTIME attribute needs to be mapped encrypted by Linux, otherwise the kernel might crash at boot like below:

    EFI Variables Facility v0.08 2004-May-17
    general protection fault, probably for non-canonical address 0x3597688770a868b2: 0000 [#1] SMP NOPTI
    CPU: 13 PID: 1 Comm: swapper/0 Not tainted 5.12.4-2-default #1 openSUSE Tumbleweed
    Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 0.0.0 02/06/2015
    RIP: 0010:efi_mokvar_entry_next
    [...]
    Call Trace:
     efi_mokvar_sysfs_init
     ? efi_mokvar_table_init
     do_one_initcall
     ? __kmalloc
     kernel_init_freeable
     ? rest_init
     kernel_init
     ret_from_fork

Expand the __ioremap_check_other() function to additionally check for this other type of boot data reserved at runtime and indicate that it should be mapped encrypted for an SEV guest. [ bp: Massage commit message. ]

• https://git.kernel.org/stable/c/58c909022a5a56cd1d9e89c8c5461fd1f6a27bb5
• https://git.kernel.org/stable/c/208bb686e7fa7fff16e8fa78ff0db34aa9acdbd7
• https://git.kernel.org/stable/c/b7a05aba39f733ec337c5b952e112dd2dc4fc404
• https://git.kernel.org/stable/c/8d651ee9c71bb12fc0c8eb2786b66cbe5aa3e43b
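
As a rough illustration of the check this patch adds, the user-space C sketch below models only the decision: given a memory descriptor's EFI type and attribute bits, decide whether an SEV guest must map it encrypted. The struct and the sev_needs_encrypted_mapping() helper are invented for the example and are not the kernel's actual __ioremap_check_other() interface; the two EFI constants mirror the UEFI definitions.

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Illustrative stand-ins for the EFI memory type/attribute involved. */
    #define EFI_BOOT_SERVICES_DATA  4u
    #define EFI_MEMORY_RUNTIME      (1ull << 63)

    struct efi_mem_desc {
        uint32_t type;        /* e.g. EFI_BOOT_SERVICES_DATA */
        uint64_t attribute;   /* attribute bits, e.g. EFI_MEMORY_RUNTIME */
    };

    /*
     * Model of the check the patch adds: under SEV, boot-services data that
     * efi_mem_reserve() kept alive past ExitBootServices() (marked
     * EFI_MEMORY_RUNTIME) must be ioremapped through an encrypted mapping,
     * otherwise the guest reads ciphertext and crashes as in the oops above.
     */
    static bool sev_needs_encrypted_mapping(bool sev_active,
                                            const struct efi_mem_desc *md)
    {
        if (!sev_active)
            return false;

        return md->type == EFI_BOOT_SERVICES_DATA &&
               (md->attribute & EFI_MEMORY_RUNTIME);
    }

    int main(void)
    {
        struct efi_mem_desc mokvar = {
            .type = EFI_BOOT_SERVICES_DATA,
            .attribute = EFI_MEMORY_RUNTIME,
        };

        printf("map encrypted: %s\n",
               sev_needs_encrypted_mapping(true, &mokvar) ? "yes" : "no");
        return 0;
    }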

CVSS: - | EPSS: 0% | CPEs: 3 | EXPL: 0

In the Linux kernel, the following vulnerability has been resolved: x86/fpu: Prevent state corruption in __fpu__restore_sig()

The non-compacted slowpath uses __copy_from_user() and copies the entire user buffer into the kernel buffer, verbatim. This means that the kernel buffer may now contain entirely invalid state on which XRSTOR will #GP. validate_user_xstate_header() can detect some of that corruption, but that leaves the onus on callers to clear the buffer. Prior to XSAVES support it was possible just to reinitialize the buffer completely, but with supervisor states that is no longer possible, as the buffer clearing code split got it backwards. Fixing that is possible, but not corrupting the state in the first place is more robust.

Avoid corruption of the kernel XSAVE buffer by using copy_user_to_xstate(), which validates the XSAVE header contents before copying the actual states to the kernel. copy_user_to_xstate() was previously only called for compacted-format kernel buffers, but it works for both compacted and non-compacted forms. Using it for the non-compacted form is slower because of multiple __copy_from_user() operations, but that cost is less important than robust code in an already slow path. [ Changelog polished by Dave Hansen ]

• https://git.kernel.org/stable/c/b860eb8dce5906b14e3a7f3c771e0b3d6ef61b94
• https://git.kernel.org/stable/c/076f732b16a5bf842686e1b43ab6021a2d98233e
• https://git.kernel.org/stable/c/ec25ea1f3f05d6f8ee51d1277efea986eafd4f2a
• https://git.kernel.org/stable/c/484cea4f362e1eeb5c869abbfb5f90eae6421b38
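
The essence of the fix is ordering: validate the untrusted XSAVE-style header before any of the user buffer reaches the kernel-side state, rather than copying verbatim and cleaning up afterwards. The self-contained C sketch below models that ordering with invented types and names (struct xhdr, struct xbuf, restore_from_user()); it is a deliberate simplification, not the kernel's copy_user_to_xstate() implementation.

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    /* Illustrative model of an XSAVE-style buffer: header first, state after. */
    struct xhdr {
        uint64_t xfeatures;    /* which state components are present */
        uint64_t reserved[7];  /* must be zero in a well-formed header */
    };

    struct xbuf {
        struct xhdr hdr;
        uint8_t state[256];
    };

    static int header_is_valid(const struct xhdr *h, uint64_t allowed_features)
    {
        if (h->xfeatures & ~allowed_features)
            return 0;
        for (int i = 0; i < 7; i++)
            if (h->reserved[i])
                return 0;
        return 1;
    }

    /*
     * The pattern the fix switches to: validate the header taken from the
     * untrusted source buffer *before* any state lands in the destination,
     * so a malformed buffer never leaves half-copied garbage behind.
     */
    static int restore_from_user(struct xbuf *dst, const struct xbuf *usrc,
                                 uint64_t allowed_features)
    {
        struct xhdr hdr;

        memcpy(&hdr, &usrc->hdr, sizeof(hdr));   /* stands in for __copy_from_user() */
        if (!header_is_valid(&hdr, allowed_features))
            return -1;                           /* reject without touching dst */

        dst->hdr = hdr;
        memcpy(dst->state, usrc->state, sizeof(dst->state));
        return 0;
    }

    int main(void)
    {
        struct xbuf user = { .hdr = { .xfeatures = 0x3 } };
        struct xbuf kernel = { 0 };

        printf("restore: %d\n", restore_from_user(&kernel, &user, 0x7));
        user.hdr.reserved[0] = 1;                /* corrupt the header */
        printf("restore: %d\n", restore_from_user(&kernel, &user, 0x7));
        return 0;
    }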

CVSS: - | EPSS: 0% | CPEs: 3 | EXPL: 0

In the Linux kernel, the following vulnerability has been resolved: x86/fpu: Invalidate FPU state after a failed XRSTOR from a user buffer

Both Intel and AMD consider it to be architecturally valid for XRSTOR to fail with #PF but nonetheless change the register state. The actual conditions under which this might occur are unclear [1], but it seems plausible that this might be triggered if one sibling thread unmaps a page and invalidates the shared TLB while another sibling thread is executing XRSTOR on the page in question.

__fpu__restore_sig() can execute XRSTOR while the hardware registers are preserved on behalf of a different victim task (using the fpu_fpregs_owner_ctx mechanism), and, in theory, XRSTOR could fail but modify the registers. If this happens, then there is a window in which __fpu__restore_sig() could schedule out and the victim task could schedule back in without reloading its own FPU registers. This would result in part of the FPU state that __fpu__restore_sig() was attempting to load leaking into the victim task's user-visible state.

Invalidate preserved FPU registers on XRSTOR failure to prevent this situation from corrupting any state.

[1] Frequent readers of the errata lists might imagine "complex microarchitectural conditions".

• https://git.kernel.org/stable/c/1d731e731c4cd7cbd3b1aa295f0932e7610da82f
• https://git.kernel.org/stable/c/a7748e021b9fb7739e3cb88449296539de0b6817
• https://git.kernel.org/stable/c/002665dcba4bbec8c82f0aeb4bd3f44334ed2c14
• https://git.kernel.org/stable/c/d8778e393afa421f1f117471144f8ce6deb6953a
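
A heavily simplified user-space model of the invalidation idea, under the assumption that a single "owner" pointer stands in for fpu_fpregs_owner_ctx: if the restore fails, ownership is dropped, so the victim task reloads its own state instead of trusting the now-suspect registers. All names here (task_fpu, hw_restore(), restore_sig(), schedule_in()) are illustrative, not kernel APIs.

    #include <stdio.h>

    struct task_fpu { const char *name; };

    /* Models fpu_fpregs_owner_ctx: whose state the hardware registers hold. */
    static struct task_fpu *fpregs_owner;

    /* Pretend hardware restore that may fail (the real trigger is an XRSTOR #PF). */
    static int hw_restore(int should_fail) { return should_fail ? -1 : 0; }

    /*
     * Model of the fix: when a restore on behalf of one task fails, the
     * register contents are no longer trustworthy, so drop the ownership
     * marker.  The "victim" task then sees it does not own the registers
     * and reloads its own state instead of inheriting leaked contents.
     */
    static int restore_sig(struct task_fpu *self, int should_fail)
    {
        int err = hw_restore(should_fail);

        if (err)
            fpregs_owner = NULL;   /* invalidate: force the next owner check to reload */
        else
            fpregs_owner = self;
        return err;
    }

    static void schedule_in(struct task_fpu *t)
    {
        if (fpregs_owner != t)
            printf("%s: registers not mine, reloading from memory\n", t->name);
        else
            printf("%s: registers still valid, no reload\n", t->name);
    }

    int main(void)
    {
        struct task_fpu victim = { "victim" }, signaler = { "signal-handler task" };

        fpregs_owner = &victim;      /* hardware currently holds victim's state  */
        restore_sig(&signaler, 1);   /* XRSTOR-style failure mid-restore         */
        schedule_in(&victim);        /* victim must now reload, not trust regs   */
        return 0;
    }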

CVSS: 5.5 | EPSS: 0% | CPEs: 2 | EXPL: 0

In the Linux kernel, the following vulnerability has been resolved: mac80211: fix deadlock in AP/VLAN handling

Syzbot reports that when you have AP_VLAN interfaces that are up and close the AP interface they belong to, we get a deadlock. No surprise - since we dev_close() them with the wiphy mutex held, which goes back into the netdev notifier in cfg80211 and tries to acquire the wiphy mutex there.

To fix this, we need to do two things:
1) prevent changing iftype while AP_VLANs are up; we can't easily fix this case since cfg80211 already calls us with the wiphy mutex held, but change_interface() is relatively rare in drivers anyway, so changing iftype isn't used much (and userspace has to fall back to down/change/up anyway)
2) pull the dev_close() loop over VLANs out of the wiphy mutex section in the normal stop case

• https://git.kernel.org/stable/c/a05829a7222e9d10c416dd2dbbf3929fe6646b89
• https://git.kernel.org/stable/c/8043903fcb72f545c52e3ec74d6fd82ef79ce7c5
• https://git.kernel.org/stable/c/d5befb224edbe53056c2c18999d630dafb4a08b9
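
Point 2) above is the classic "collect the work under the lock, do it after dropping the lock" restructuring. The minimal pthread-based sketch below shows that shape; wiphy_mtx, vlan_dev_close() and ap_stop() are stand-ins invented for the example rather than mac80211 symbols, and the real fix of course operates on net_device objects, not integer ids.

    #include <pthread.h>
    #include <stdio.h>

    /* Stand-in for the wiphy mutex (not recursive, like the real one). */
    static pthread_mutex_t wiphy_mtx = PTHREAD_MUTEX_INITIALIZER;

    /*
     * Stand-in for dev_close() on an AP_VLAN: via the cfg80211 netdev
     * notifier it ends up taking the wiphy mutex again, so it must never
     * be called with that mutex already held.
     */
    static void vlan_dev_close(int id)
    {
        pthread_mutex_lock(&wiphy_mtx);
        printf("closing AP_VLAN %d (notifier took wiphy mutex)\n", id);
        pthread_mutex_unlock(&wiphy_mtx);
    }

    /*
     * Fixed shape of the stop path: note which VLANs need closing while the
     * wiphy mutex is held, drop the mutex, and only then run the close loop.
     * Calling vlan_dev_close() inside the locked section would deadlock on
     * the non-recursive mutex, which is the bug syzbot reported.
     */
    static void ap_stop(void)
    {
        int to_close[2], n = 0;

        pthread_mutex_lock(&wiphy_mtx);
        to_close[n++] = 0;          /* AP_VLANs found while protected state is consistent */
        to_close[n++] = 1;
        pthread_mutex_unlock(&wiphy_mtx);

        for (int i = 0; i < n; i++) /* the dev_close() loop, now outside the mutex */
            vlan_dev_close(to_close[i]);
    }

    int main(void)
    {
        ap_stop();
        return 0;
    }

Built with cc -pthread, ap_stop() runs cleanly; moving the vlan_dev_close() calls back inside the locked region reproduces the self-deadlocking shape described above.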

CVSS: 6.2 | EPSS: 0% | CPEs: 4 | EXPL: 0

In the Linux kernel, the following vulnerability has been resolved: net: ll_temac: Make sure to free skb when it is completely used

With the skb pointer piggy-backed on the TX BD, we have a simple and efficient way to free the skb buffer when the frame has been transmitted. But in order to avoid freeing the skb while there are still fragments from the skb in use, we need to piggy-back on the last TX BD of the skb, not the first. Without this, we are doing a use-after-free on the DMA side, when the first BD of a multi TX BD packet is seen as completed in xmit_done while the remaining BDs are still being processed.

• https://git.kernel.org/stable/c/6d120ab4dc39a543c6b63361e1d0541c382900a3
• https://git.kernel.org/stable/c/019ab7d044d0ebf97e1236bb8935b7809be92358
• https://git.kernel.org/stable/c/e8afe05bd359ebe12a61dbdc94c06c00ea3e8d4b
• https://git.kernel.org/stable/c/6aa32217a9a446275440ee8724b1ecaf1838df47
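
A small user-space model of the descriptor-ownership idea, assuming invented types (struct tx_bd, struct sk_buff_model) rather than the driver's real ring layout: the skb pointer is attached only to the last BD of the frame, so completion of an earlier fragment cannot free the buffer while later fragments are still owned by the hardware.

    #include <stdio.h>
    #include <stdlib.h>

    struct sk_buff_model { int id; };     /* stand-in for struct sk_buff */

    struct tx_bd {                        /* stand-in for a TX buffer descriptor */
        struct sk_buff_model *skb;        /* set only where the skb may be freed */
    };

    /*
     * Model of the fix: for a frame that spans several descriptors, hang the
     * skb pointer on the LAST descriptor, not the first.  Completion then
     * frees the skb only after every fragment has been transmitted, instead
     * of freeing it while later fragments are still in flight.
     */
    static void queue_frame(struct tx_bd *ring, int first, int nr_frags,
                            struct sk_buff_model *skb)
    {
        for (int i = 0; i < nr_frags; i++)
            ring[first + i].skb = NULL;           /* fragments carry no free token */
        ring[first + nr_frags - 1].skb = skb;     /* only the last BD frees the skb */
    }

    static void xmit_done(struct tx_bd *bd)
    {
        if (bd->skb) {                            /* reached the last BD of the frame */
            printf("freeing skb %d\n", bd->skb->id);
            free(bd->skb);
        }
    }

    int main(void)
    {
        struct tx_bd ring[4] = { { 0 } };
        struct sk_buff_model *skb = malloc(sizeof(*skb));

        skb->id = 42;
        queue_frame(ring, 0, 3, skb);             /* one frame, three fragments */
        for (int i = 0; i < 3; i++)               /* completions arrive in order */
            xmit_done(&ring[i]);
        return 0;
    }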