
CVSS: 5.5 | EPSS: 0% | CPEs: 14 | EXPL: 0

In the Linux kernel, the following vulnerability has been resolved: fs: sysfs: Fix reference leak in sysfs_break_active_protection() The sysfs_break_active_protection() routine has an obvious reference leak in its error path. If the call to kernfs_find_and_get() fails then kn will be NULL, so the companion sysfs_unbreak_active_protection() routine won't get called (and would only cause an access violation by trying to dereference kn->parent if it was called). As a result, the reference to kobj acquired at the start of the function will never be released. Fix the leak by adding an explicit kobject_put() call when kn is NULL.

• https://git.kernel.org/stable/c/2afc9166f79b8f6da5f347f48515215ceee4ae37 https://git.kernel.org/stable/c/e8a37b2fd5b5087bec6cbbf6946ee3caa712953b https://git.kernel.org/stable/c/a6abc93760dd07fcd29760b70e6e7520f22cb288 https://git.kernel.org/stable/c/461a6385e58e8247e6ba2005aa5d1b8d980ee4a2 https://git.kernel.org/stable/c/8a5e02a0f46ea33ed19e48e096a8e8d28e73d10a https://git.kernel.org/stable/c/c984f4d1d40a2f349503b3faf946502ccbf02f9f https://git.kernel.org/stable/c/807d1d299a04e9ad9a9dac55419c1137a105254b https://git.kernel.org/stable/c/f28bba37fe244889b81bb5c508d3f6e5c •
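
The fix amounts to releasing, on the failure path, the reference taken at the top of the function. Below is a minimal user-space C sketch of that pattern; struct object, get_ref()/put_ref() and lookup_node() are hypothetical stand-ins rather than the kernel's kobject/kernfs APIs, so this illustrates the shape of the fix, not the actual patch.

```c
/*
 * Minimal sketch of the error-path reference leak described above.
 * All types and helpers here are hypothetical stand-ins.
 */
#include <stdio.h>

struct object { int refcount; };

static void get_ref(struct object *obj) { obj->refcount++; }
static void put_ref(struct object *obj) { obj->refcount--; }

/* Stand-in for a lookup that can fail, like kernfs_find_and_get(). */
static void *lookup_node(struct object *obj, int should_fail)
{
    return should_fail ? NULL : obj;
}

/*
 * Mirrors the fixed logic: a reference is taken up front, and if the
 * lookup fails it is dropped before returning NULL.  Without the
 * put_ref() in the error path, the reference taken at the start would
 * leak, because the caller only releases it via the matching "unbreak"
 * helper when a non-NULL node is returned.
 */
static void *break_protection(struct object *obj, int should_fail)
{
    void *node;

    get_ref(obj);                 /* reference acquired at the start */
    node = lookup_node(obj, should_fail);
    if (!node)
        put_ref(obj);             /* the fix: release on failure */
    return node;
}

int main(void)
{
    struct object obj = { .refcount = 1 };

    if (!break_protection(&obj, 1))
        printf("lookup failed, refcount back to %d (no leak)\n",
               obj.refcount);
    return 0;
}
```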

CVSS: - | EPSS: 0% | CPEs: 4 | EXPL: 0

In the Linux kernel, the following vulnerability has been resolved: KVM: x86/pmu: Disable support for adaptive PEBS Drop support for virtualizing adaptive PEBS, as KVM's implementation is architecturally broken without an obvious/easy path forward, and because exposing adaptive PEBS can leak host LBRs to the guest, i.e. can leak host kernel addresses to the guest.

Bug #1 is that KVM doesn't account for the upper 32 bits of IA32_FIXED_CTR_CTRL when (re)programming fixed counters, e.g. fixed_ctrl_field() drops the upper bits, reprogram_fixed_counters() stores local variables as u8s and truncates the upper bits too, etc.

Bug #2 is that, because KVM _always_ sets precise_ip to a non-zero value for PEBS events, perf will _always_ generate an adaptive record, even if the guest requested a basic record. Note, KVM will also enable adaptive PEBS in individual *counter*, even if adaptive PEBS isn't exposed to the guest, but this is benign as MSR_PEBS_DATA_CFG is guaranteed to be zero, i.e. the guest will only ever see Basic records.

Bug #3 is in perf. intel_pmu_disable_fixed() doesn't clear the upper bits either, i.e. leaves ICL_FIXED_0_ADAPTIVE set, and intel_pmu_enable_fixed() effectively doesn't clear ICL_FIXED_0_ADAPTIVE either. I.e. perf _always_ enables ADAPTIVE counters, regardless of what KVM requests.

Bug #4 is that adaptive PEBS *might* effectively bypass event filters set by the host, as "Updated Memory Access Info Group" records information that might be disallowed by userspace via KVM_SET_PMU_EVENT_FILTER.

Bug #5 is that KVM doesn't ensure LBR MSRs hold guest values (or at least zeros) when entering a vCPU with adaptive PEBS, which allows the guest to read host LBRs, i.e. host RIPs/addresses, by enabling "LBR Entries" records.

Disable adaptive PEBS support as an immediate fix due to the severity of the LBR leak in particular, and because fixing all of the bugs will be non-trivial, e.g. not suitable for backporting to stable kernels. Note! This will break live migration, but trying to make KVM play nice with live migration would be quite complicated, wouldn't be guaranteed to work (i.e.

• https://git.kernel.org/stable/c/c59a1f106f5cd4843c097069ff1bb2ad72103a67 https://git.kernel.org/stable/c/0fb74c00d140a66128afc0003785dcc57e69d312 https://git.kernel.org/stable/c/037e48ceccf163899374b601afb6ae8d0bf1d2ac https://git.kernel.org/stable/c/7a7650b3ac23e5fc8c990f00e94f787dc84e3175 https://git.kernel.org/stable/c/9e985cbf2942a1bb8fcef9adc2a17d90fd7ca8ee •
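
Bug #1 above is a plain truncation problem: per-counter control bits that live in the upper 32 bits of IA32_FIXED_CTR_CTRL are lost when the value is squeezed into an 8-bit local. The user-space C sketch below models that; narrow_fixed_ctrl_field() is a hypothetical stand-in for the helpers named in the description, and treating bit 32 as the counter-0 adaptive-record enable follows perf's ICL_FIXED_0_ADAPTIVE definition (an assumption on my part, not stated in this entry).

```c
/*
 * Sketch of bug #1: extracting a fixed-counter control field into a
 * narrow type silently drops bits above bit 31, such as the
 * adaptive-PEBS enable bit for counter 0.
 */
#include <stdint.h>
#include <stdio.h>

#define FIXED0_ADAPTIVE (1ULL << 32)  /* assumed counter-0 adaptive bit */

/* Stand-in extraction helper: each fixed counter gets a 4-bit field. */
static uint8_t narrow_fixed_ctrl_field(uint64_t ctrl, int counter)
{
    return (uint8_t)((ctrl >> (counter * 4)) & 0xf);  /* upper bits lost */
}

int main(void)
{
    /* Enable OS+USR counting for fixed counter 0 plus its adaptive bit. */
    uint64_t ctrl = 0x3 | FIXED0_ADAPTIVE;
    uint8_t field = narrow_fixed_ctrl_field(ctrl, 0);

    printf("counter 0 field = 0x%x; adaptive bit %s\n", field,
           (ctrl & FIXED0_ADAPTIVE) ? "set in the MSR but absent from the field"
                                    : "clear");
    return 0;
}
```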

CVSS: - | EPSS: 0% | CPEs: 3 | EXPL: 0

In the Linux kernel, the following vulnerability has been resolved: KVM: x86/mmu: Write-protect L2 SPTEs in TDP MMU when clearing dirty status Check kvm_mmu_page_ad_need_write_protect() when deciding whether to write-protect or clear D-bits on TDP MMU SPTEs, so that the TDP MMU accounts for any role-specific reasons for disabling D-bit dirty logging. Specifically, TDP MMU SPTEs must be write-protected when the TDP MMU is being used to run an L2 (i.e. L1 has disabled EPT) and PML is enabled. KVM always disables PML when running L2, even when L1 and L2 GPAs are in the same domain, so failing to write-protect TDP MMU SPTEs will cause writes made by L2 to not be reflected in the dirty log. [sean: massage shortlog and changelog, tweak ternary op formatting]

• https://git.kernel.org/stable/c/5982a5392663b30f57ee90b0372c19a7e9cb655a https://git.kernel.org/stable/c/cdf811a937471af2d1facdf8ae80e5e68096f1ed https://git.kernel.org/stable/c/e20bff0f1b2de9cfe303dd35ff46470104a87404 https://git.kernel.org/stable/c/2673dfb591a359c75080dd5af3da484b89320d22 •
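
The fix is essentially a ternary choice when clearing dirty state: write-protect the SPTE when the page's role disallows D-bit dirty logging (the L2-with-PML case), otherwise clear the D-bit. The user-space C sketch below models that decision with made-up SPTE bit positions and a plain boolean in place of kvm_mmu_page_ad_need_write_protect(); it is illustrative only, not KVM's SPTE layout.

```c
/*
 * Sketch of the write-protect vs. clear-D-bit decision described above.
 * Bit positions and the need_write_protect flag are hypothetical.
 */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define SPTE_WRITABLE (1ULL << 1)   /* stand-in writable bit */
#define SPTE_DIRTY    (1ULL << 9)   /* stand-in dirty (D) bit */

static uint64_t clear_dirty_for_logging(uint64_t spte, bool need_write_protect)
{
    /*
     * Role-specific reasons to disable D-bit dirty logging force write
     * protection instead, so guest writes fault and land in the dirty log.
     */
    return need_write_protect ? (spte & ~SPTE_WRITABLE)
                              : (spte & ~SPTE_DIRTY);
}

int main(void)
{
    uint64_t spte = SPTE_WRITABLE | SPTE_DIRTY;

    printf("A/D-based logging:   0x%llx\n",
           (unsigned long long)clear_dirty_for_logging(spte, false));
    printf("L2 with PML enabled: 0x%llx\n",
           (unsigned long long)clear_dirty_for_logging(spte, true));
    return 0;
}
```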

CVSS: - | EPSS: 0% | CPEs: 5 | EXPL: 0

In the Linux kernel, the following vulnerability has been resolved: arm64: hibernate: Fix level3 translation fault in swsusp_save() On arm64 machines, swsusp_save() faults if it attempts to access MEMBLOCK_NOMAP memory ranges. This can be reproduced in QEMU using UEFI when booting with rodata=off debug_pagealloc=off and CONFIG_KFENCE=n:

Unable to handle kernel paging request at virtual address ffffff8000000000
Mem abort info:
  ESR = 0x0000000096000007
  EC = 0x25: DABT (current EL), IL = 32 bits
  SET = 0, FnV = 0
  EA = 0, S1PTW = 0
  FSC = 0x07: level 3 translation fault
Data abort info:
  ISV = 0, ISS = 0x00000007, ISS2 = 0x00000000
  CM = 0, WnR = 0, TnD = 0, TagAccess = 0
  GCS = 0, Overlay = 0, DirtyBit = 0, Xs = 0
swapper pgtable: 4k pages, 39-bit VAs, pgdp=00000000eeb0b000
[ffffff8000000000] pgd=180000217fff9803, p4d=180000217fff9803, pud=180000217fff9803, pmd=180000217fff8803, pte=0000000000000000
Internal error: Oops: 0000000096000007 [#1] SMP
Internal error: Oops: 0000000096000007 [#1] SMP
Modules linked in: xt_multiport ipt_REJECT nf_reject_ipv4 xt_conntrack nf_conntrack nf_defrag_ipv6 nf_defrag_ipv4 libcrc32c iptable_filter bpfilter rfkill at803x snd_hda_codec_hdmi snd_hda_intel snd_intel_dspcfg dwmac_generic stmmac_platform snd_hda_codec stmmac joydev pcs_xpcs snd_hda_core phylink ppdev lp parport ramoops reed_solomon ip_tables x_tables nls_iso8859_1 vfat multipath linear amdgpu amdxcp drm_exec gpu_sched drm_buddy hid_generic usbhid hid radeon video drm_suballoc_helper drm_ttm_helper ttm i2c_algo_bit drm_display_helper cec drm_kms_helper drm
CPU: 0 PID: 3663 Comm: systemd-sleep Not tainted 6.6.2+ #76
Source Version: 4e22ed63a0a48e7a7cff9b98b7806d8d4add7dc0
Hardware name: Greatwall GW-XXXXXX-XXX/GW-XXXXXX-XXX, BIOS KunLun BIOS V4.0 01/19/2021
pstate: 600003c5 (nZCv DAIF -PAN -UAO -TCO -DIT -SSBS BTYPE=--)
pc : swsusp_save+0x280/0x538
lr : swsusp_save+0x280/0x538
sp : ffffffa034a3fa40
x29: ffffffa034a3fa40 x28: ffffff8000001000 x27: 0000000000000000
x26: ffffff8001400000 x25: ffffffc08113e248 x24: 0000000000000000
x23: 0000000000080000 x22: ffffffc08113e280 x21: 00000000000c69f2
x20: ffffff8000000000 x19: ffffffc081ae2500 x18: 0000000000000000
x17: 6666662074736420 x16: 3030303030303030 x15: 3038666666666666
x14: 0000000000000b69 x13: ffffff9f89088530 x12: 00000000ffffffea
x11: 00000000ffff7fff x10: 00000000ffff7fff x9 : ffffffc08193f0d0
x8 : 00000000000bffe8 x7 : c0000000ffff7fff x6 : 0000000000000001
x5 : ffffffa0fff09dc8 x4 : 0000000000000000 x3 : 0000000000000027
x2 : 0000000000000000 x1 : 0000000000000000 x0 : 000000000000004e
Call trace:
 swsusp_save+0x280/0x538
 swsusp_arch_suspend+0x148/0x190
 hibernation_snapshot+0x240/0x39c
 hibernate+0xc4/0x378
 state_store+0xf0/0x10c
 kobj_attr_store+0x14/0x24

The reason is swsusp_save() -> copy_data_pages() -> page_is_saveable() -> kernel_page_present() assuming that a page is always present when can_set_direct_map() is false (all of rodata_full, debug_pagealloc_enabled() and arm64_kfence_can_set_direct_map() false), irrespective of the MEMBLOCK_NOMAP ranges. Such MEMBLOCK_NOMAP regions should not be saved during hibernation. This problem was introduced by changes to the pfn_valid() logic in commit a7d9f306ba70 ("arm64: drop pfn_valid_within() and simplify pfn_valid()"). Similar to other architectures, drop the !can_set_direct_map() check in kernel_page_present() so that page_is_savable() skips such pages.
[catalin.marinas@arm.com: rework commit message]
• https://git.kernel.org/stable/c/a7d9f306ba7052056edf9ccae596aeb400226af8 https://git.kernel.org/stable/c/813f5213f2c612dc800054859aaa396ec8ad7069 https://git.kernel.org/stable/c/f7e71a7cf399f53ff9fc314ca3836dc913b05bd6 https://git.kernel.org/stable/c/31f815cb436082e72d34ed2e8a182140a73ebdf4 https://git.kernel.org/stable/c/022b19ebc31cce369c407617041a3db810db23b3 https://git.kernel.org/stable/c/50449ca66cc5a8cbc64749cf4b9f3d3fc5f4b457 •
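
Conceptually the fix removes a shortcut: kernel_page_present() should always walk the page tables instead of assuming presence whenever the direct map cannot be modified, so that MEMBLOCK_NOMAP pages (which have no linear-map entry) fall out of page_is_saveable(). The user-space C sketch below models that behaviour; walk_linear_map() and struct page here are hypothetical stand-ins, not the arm64 implementation.

```c
/*
 * Sketch of the fixed presence check described above.  The buggy version
 * returned "present" whenever the direct map could not be modified; the
 * fixed version always consults the (modelled) page tables, so NOMAP
 * pages are skipped by the hibernation copy loop instead of faulting.
 */
#include <stdbool.h>
#include <stdio.h>

struct page { bool mapped_in_linear_map; };

/* Stand-in for walking pgd/p4d/pud/pmd/pte for the page's linear address. */
static bool walk_linear_map(const struct page *page)
{
    return page->mapped_in_linear_map;
}

static bool can_set_direct_map(void)
{
    /* rodata_full, debug_pagealloc and KFENCE all disabled in the repro. */
    return false;
}

static bool kernel_page_present_fixed(const struct page *page)
{
    /*
     * The removed shortcut would have returned true here when
     * !can_set_direct_map(), making NOMAP pages look saveable.
     */
    (void)can_set_direct_map();
    return walk_linear_map(page);
}

int main(void)
{
    struct page nomap_page = { .mapped_in_linear_map = false };

    printf("NOMAP page saveable? %s\n",
           kernel_page_present_fixed(&nomap_page) ? "yes" : "no (skipped)");
    return 0;
}
```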

CVSS: - | EPSS: 0% | CPEs: 6 | EXPL: 0

In the Linux kernel, the following vulnerability has been resolved: init/main.c: Fix potential static_command_line memory overflow We allocate memory of size 'xlen + strlen(boot_command_line) + 1' for static_command_line, but the strings copied into static_command_line are extra_command_line and command_line, rather than extra_command_line and boot_command_line. When strlen(command_line) > strlen(boot_command_line), static_command_line will overflow. This patch just recovers strlen(command_line) which was mis-consolidated with strlen(boot_command_line) in the commit f5c7310ac73e ("init/main: add checks for the return value of memblock_alloc*()")

• https://git.kernel.org/stable/c/f5c7310ac73ea270e3a1acdb73d1b4817f11fd67 https://git.kernel.org/stable/c/2ef607ea103616aec0289f1b65d103d499fa903a https://git.kernel.org/stable/c/0dc727a4e05400205358a22c3d01ccad2c8e1fe4 https://git.kernel.org/stable/c/76c2f4d426a5358fced5d5990744d46f10a4ccea https://git.kernel.org/stable/c/81cf85ae4f2dd5fa3e43021782aa72c4c85558e8 https://git.kernel.org/stable/c/936a02b5a9630c5beb0353c3085cc49d86c57034 https://git.kernel.org/stable/c/46dad3c1e57897ab9228332f03e1c14798d2d3b9 https://lists.debian.org/debian-lts-announce/2024/06/ •
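
The bug is a classic size/copy mismatch: the buffer is sized from boot_command_line but filled with command_line. Below is a minimal user-space C sketch of the corrected sizing; the names echo the commit message, but the code is a stand-in for illustration, not the init/main.c implementation.

```c
/*
 * Sketch of the sizing fix described above: the allocation must be based
 * on the strings that are actually copied in (extra_command_line and
 * command_line), not on boot_command_line.
 */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

static char *build_static_command_line(const char *extra_command_line,
                                       const char *command_line)
{
    size_t xlen = strlen(extra_command_line);
    /* The fix: size by command_line, the string appended below. */
    size_t ilen = strlen(command_line) + 1;
    char *static_command_line = malloc(xlen + ilen);

    if (!static_command_line)
        return NULL;
    strcpy(static_command_line, extra_command_line);
    strcat(static_command_line, command_line);
    return static_command_line;
}

int main(void)
{
    /*
     * A command_line longer than boot_command_line would have overflowed
     * an allocation sized as xlen + strlen(boot_command_line) + 1.
     */
    char *s = build_static_command_line("bootconfig ",
                                        "console=ttyS0 root=/dev/vda1 quiet");

    if (s) {
        printf("%s\n", s);
        free(s);
    }
    return 0;
}
```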