
CVSS: 5.9 · EPSS: 0% · CPEs: 8 · EXPL: 0

In the Linux kernel, the following vulnerability has been resolved: speakup: Avoid crash on very long word. In case a console is set up really large and contains a really long word (> 256 characters), we have to stop before the length of the word buffer.

https://git.kernel.org/stable/c/c6e3fd22cd538365bfeb82997d5b89562e077d42
https://git.kernel.org/stable/c/756c5cb7c09e537b87b5d3acafcb101b2ccf394f
https://git.kernel.org/stable/c/8f6b62125befe1675446923e4171eac2c012959c
https://git.kernel.org/stable/c/6401038acfa24cba9c28cce410b7505efadd0222
https://git.kernel.org/stable/c/0d130158db29f5e0b3893154908cf618896450a8
https://git.kernel.org/stable/c/89af25bd4b4bf6a71295f07e07a8ae7dc03c6595
https://git.kernel.org/stable/c/8defb1d22ba0395b81feb963b96e252b097ba76f
https://git.kernel.org/stable/c/0efb15c14c493263cb3a5f65f5ddfd460
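The fix boils down to bounding the word-collection loop by the buffer size. A minimal Python model of that pattern (not the kernel code; names and the exact loop shape are illustrative):

```python
# Model of the speakup fix: when collecting characters of a word into a
# fixed-size buffer, stop before the buffer length instead of writing past it.
# BUF_SIZE mirrors the 256-character word buffer mentioned in the advisory.

BUF_SIZE = 256

def collect_word(line: str, start: int) -> str:
    """Collect one word from `line` starting at `start`, truncating it so it
    always fits (with a terminator slot) in a BUF_SIZE-character buffer."""
    buf = []
    i = start
    # Stop at end of line, at whitespace, or when the buffer is full --
    # the missing length check is what crashed the unpatched driver.
    while i < len(line) and not line[i].isspace() and len(buf) < BUF_SIZE - 1:
        buf.append(line[i])
        i += 1
    return "".join(buf)
```

A 1000-character "word" is silently truncated to 255 characters instead of overrunning the buffer.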

CVSS: 5.5 · EPSS: 0% · CPEs: 14 · EXPL: 0

In the Linux kernel, the following vulnerability has been resolved: fs: sysfs: Fix reference leak in sysfs_break_active_protection(). The sysfs_break_active_protection() routine has an obvious reference leak in its error path. If the call to kernfs_find_and_get() fails then kn will be NULL, so the companion sysfs_unbreak_active_protection() routine won't get called (and would only cause an access violation by trying to dereference kn->parent if it was called). As a result, the reference to kobj acquired at the start of the function will never be released. Fix the leak by adding an explicit kobject_put() call when kn is NULL.

https://git.kernel.org/stable/c/2afc9166f79b8f6da5f347f48515215ceee4ae37
https://git.kernel.org/stable/c/e8a37b2fd5b5087bec6cbbf6946ee3caa712953b
https://git.kernel.org/stable/c/a6abc93760dd07fcd29760b70e6e7520f22cb288
https://git.kernel.org/stable/c/461a6385e58e8247e6ba2005aa5d1b8d980ee4a2
https://git.kernel.org/stable/c/8a5e02a0f46ea33ed19e48e096a8e8d28e73d10a
https://git.kernel.org/stable/c/c984f4d1d40a2f349503b3faf946502ccbf02f9f
https://git.kernel.org/stable/c/807d1d299a04e9ad9a9dac55419c1137a105254b
https://git.kernel.org/stable/c/f28bba37fe244889b81bb5c508d3f6e5c
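The leak pattern is generic: a reference taken unconditionally at function entry must be dropped on every early-return error path. A small Python model of the before/after behavior (illustrative refcounting, not the kernel API):

```python
# Sketch of the sysfs_break_active_protection() fix: the kobj reference taken
# at entry leaks if the kernfs lookup fails, unless the error path releases it.

class Kobject:
    def __init__(self):
        self.refcount = 1  # held by the owner

def kobject_get(kobj):
    kobj.refcount += 1
    return kobj

def kobject_put(kobj):
    kobj.refcount -= 1

def break_active_protection(kobj, lookup_ok: bool):
    """`lookup_ok` stands in for kernfs_find_and_get() succeeding.
    On success the reference stays held until the companion
    'unbreak' routine is called; on failure it must be dropped here."""
    kobject_get(kobj)            # reference acquired at the start
    kn = object() if lookup_ok else None
    if kn is None:
        kobject_put(kobj)        # the fix: release the reference when kn is NULL
        return None
    return kn
```

Without the kobject_put() in the error branch, each failed call would leave the refcount one higher than it started, pinning the object forever.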

CVSS: 5.5 · EPSS: 0% · CPEs: 4 · EXPL: 0

In the Linux kernel, the following vulnerability has been resolved: KVM: x86/pmu: Disable support for adaptive PEBS. Drop support for virtualizing adaptive PEBS, as KVM's implementation is architecturally broken without an obvious/easy path forward, and because exposing adaptive PEBS can leak host LBRs to the guest, i.e. can leak host kernel addresses to the guest. Bug #1 is that KVM doesn't account for the upper 32 bits of IA32_FIXED_CTR_CTRL when (re)programming fixed counters, e.g. fixed_ctrl_field() drops the upper bits, reprogram_fixed_counters() stores local variables as u8s and truncates the upper bits too, etc. Bug #2 is that, because KVM _always_ sets precise_ip to a non-zero value for PEBS events, perf will _always_ generate an adaptive record, even if the guest requested a basic record. Note, KVM will also enable adaptive PEBS in individual *counter*, even if adaptive PEBS isn't exposed to the guest, but this is benign as MSR_PEBS_DATA_CFG is guaranteed to be zero, i.e. the guest will only ever see Basic records. Bug #3 is in perf: intel_pmu_disable_fixed() doesn't clear the upper bits either, i.e. leaves ICL_FIXED_0_ADAPTIVE set, and intel_pmu_enable_fixed() effectively doesn't clear ICL_FIXED_0_ADAPTIVE either. I.e. perf _always_ enables ADAPTIVE counters, regardless of what KVM requests. Bug #4 is that adaptive PEBS *might* effectively bypass event filters set by the host, as "Updated Memory Access Info Group" records information that might be disallowed by userspace via KVM_SET_PMU_EVENT_FILTER. Bug #5 is that KVM doesn't ensure LBR MSRs hold guest values (or at least zeros) when entering a vCPU with adaptive PEBS, which allows the guest to read host LBRs, i.e. host RIPs/addresses, by enabling "LBR Entries" records. Disable adaptive PEBS support as an immediate fix due to the severity of the LBR leak in particular, and because fixing all of the bugs will be non-trivial, e.g. not suitable for backporting to stable kernels. Note! This will break live migration, but trying to make KVM play nice with live migration would be quite complicated, wouldn't be guaranteed to work (i.e. …

https://git.kernel.org/stable/c/c59a1f106f5cd4843c097069ff1bb2ad72103a67
https://git.kernel.org/stable/c/0fb74c00d140a66128afc0003785dcc57e69d312
https://git.kernel.org/stable/c/037e48ceccf163899374b601afb6ae8d0bf1d2ac
https://git.kernel.org/stable/c/7a7650b3ac23e5fc8c990f00e94f787dc84e3175
https://git.kernel.org/stable/c/9e985cbf2942a1bb8fcef9adc2a17d90fd7ca8ee
https://access.redhat.com/security/cve/CVE-2024-26992
https://bugzilla.redhat.com/show_bug.cgi?id=2278316
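Bug #1's failure mode is easy to demonstrate: IA32_FIXED_CTR_CTRL is a 64-bit MSR, but a 4-bit field extractor or a u8 intermediate silently drops the high bits, including the per-counter ICL_FIXED_0_ADAPTIVE enable (bit 32 for fixed counter 0). A Python model of the truncation (the helper shapes mirror the description above; treat them as illustrative, not the kernel source):

```python
# Demonstrates how narrow masks lose the upper 32 bits of a 64-bit
# IA32_FIXED_CTR_CTRL value, exactly the truncation bug #1 describes.

ICL_FIXED_0_ADAPTIVE = 1 << 32   # adaptive-record enable, fixed counter 0

def fixed_ctrl_field(ctrl: int, idx: int) -> int:
    """Extract the 4-bit control field for fixed counter `idx`.
    Anything above bit 3 of the shifted value never survives."""
    return (ctrl >> (idx * 4)) & 0xF

def store_as_u8(value: int) -> int:
    """Model of keeping an intermediate control value in a u8 local."""
    return value & 0xFF
```

With ctrl = ICL_FIXED_0_ADAPTIVE | 0x3 (adaptive enabled, ring-0/ring-3 counting on), both helpers return 0x3: the adaptive bit is present in the MSR value but invisible to the truncating code, so KVM reprograms the counter as if it were off.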

CVSS: 5.5 · EPSS: 0% · CPEs: 2 · EXPL: 0

In the Linux kernel, the following vulnerability has been resolved: KVM: x86/mmu: x86: Don't overflow lpage_info when checking attributes. Fix KVM_SET_MEMORY_ATTRIBUTES to not overflow the lpage_info array and trigger a KASAN splat, as seen in the private_mem_conversions_test selftest. When memory attributes are set on a GFN range, that range will have specific properties applied to the TDP. A huge page cannot be used when the attributes are inconsistent, so huge pages are disabled for those specific ranges. For internal KVM reasons, huge pages are also not allowed to span adjacent memslots regardless of whether the backing memory could be mapped as huge. Which huge page sizes each GFN supports is tracked by 'lpage_info', an array of arrays of 'kvm_lpage_info' structs on the memslot. Each index of lpage_info contains a vmalloc-allocated array of these for a specific supported page size. The kvm_lpage_info denotes whether a specific huge page (GFN and page size) on the memslot is supported.

https://git.kernel.org/stable/c/90b4fe17981e155432c4dbc490606d0c2e9c2199
https://git.kernel.org/stable/c/048cc4a028e635d339687ed968985d2d1669494c
https://git.kernel.org/stable/c/992b54bd083c5bee24ff7cc35991388ab08598c4
https://access.redhat.com/security/cve/CVE-2024-26991
https://bugzilla.redhat.com/show_bug.cgi?id=2278318
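The overflow comes from translating a GFN range into indices of a per-level lpage_info array without clamping at the array's end. A simplified Python model of the fixed indexing (layout and names are illustrative; the real structures live on the memslot):

```python
# Model of bounded lpage_info indexing: each huge-page level has one entry
# per aligned huge page in the memslot, and attribute updates must clamp
# their index range to the array -- the missing clamp was the overflow.

def lpage_index(gfn: int, base_gfn: int, level_shift: int) -> int:
    """Index of the huge page (2**level_shift small pages) covering gfn."""
    return (gfn - base_gfn) >> level_shift

def disable_range(lpage_info: list, base_gfn: int,
                  start: int, end: int, level_shift: int) -> None:
    """Mark huge pages overlapping GFN range [start, end) as disallowed,
    clamping the last index to the array bounds."""
    first = lpage_index(start, base_gfn, level_shift)
    last = min(lpage_index(end - 1, base_gfn, level_shift),
               len(lpage_info) - 1)
    for i in range(first, last + 1):
        lpage_info[i] = True  # True == huge page disallowed at this level
```

For a 2 MiB level (level_shift = 9 with 4 KiB pages), a GFN range whose rounded-up end lands past the memslot now stops at the last entry instead of writing out of bounds.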

CVSS: 5.5 · EPSS: 0% · CPEs: 3 · EXPL: 0

In the Linux kernel, the following vulnerability has been resolved: KVM: x86/mmu: Write-protect L2 SPTEs in TDP MMU when clearing dirty status. Check kvm_mmu_page_ad_need_write_protect() when deciding whether to write-protect or clear D-bits on TDP MMU SPTEs, so that the TDP MMU accounts for any role-specific reasons for disabling D-bit dirty logging. Specifically, TDP MMU SPTEs must be write-protected when the TDP MMU is being used to run an L2 (i.e. L1 has disabled EPT) and PML is enabled. KVM always disables PML when running L2, even when L1 and L2 GPAs are in the same domain, so failing to write-protect TDP MMU SPTEs will cause writes made by L2 to not be reflected in the dirty log. [sean: massage shortlog and changelog, tweak ternary op formatting]

https://git.kernel.org/stable/c/5982a5392663b30f57ee90b0372c19a7e9cb655a
https://git.kernel.org/stable/c/cdf811a937471af2d1facdf8ae80e5e68096f1ed
https://git.kernel.org/stable/c/e20bff0f1b2de9cfe303dd35ff46470104a87404
https://git.kernel.org/stable/c/2673dfb591a359c75080dd5af3da484b89320d22
https://access.redhat.com/security/cve/CVE-2024-26990
https://bugzilla.redhat.com/show_bug.cgi?id=2278320
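The fix is a branch in the dirty-clearing path: consult the page role before choosing between write protection and D-bit clearing. A Python model of that decision (bit positions and flag names are simplified stand-ins, not the kernel's SPTE layout):

```python
# Sketch of the decision the fix adds: when clearing dirty status on a TDP MMU
# SPTE, write-protect instead of clearing the D-bit whenever the page's role
# requires it (running L2 with PML enabled, since PML is off for L2).

SPTE_WRITABLE = 1 << 1   # stand-in writable bit
SPTE_DIRTY = 1 << 9      # stand-in D-bit

def ad_need_write_protect(is_l2: bool, pml_enabled: bool) -> bool:
    """Model of kvm_mmu_page_ad_need_write_protect(): PML can't be used
    while running L2, so dirty logging must fall back to write faults."""
    return is_l2 and pml_enabled

def clear_dirty(spte: int, is_l2: bool, pml_enabled: bool) -> int:
    if ad_need_write_protect(is_l2, pml_enabled):
        return spte & ~SPTE_WRITABLE   # write-protect: next write faults and is logged
    return spte & ~SPTE_DIRTY          # otherwise clearing the D-bit suffices
```

Before the fix the L2 case took the D-bit path: the hardware quietly set the D-bit again on the next write, nothing faulted, and the write never reached the dirty log.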