
CVSS: 6.5 • EPSS: 0% • CPEs: 5 • EXPL: 0

P2M pool freeing may take excessively long. The P2M pool backing second-level address translation for guests may be of significant size, so freeing it may take more time than is reasonable without intermediate preemption checks. Such checks for the need to preempt had so far been missing. • http://www.openwall.com/lists/oss-security/2022/10/11/3 http://xenbits.xen.org/xsa/advisory-410.html https://lists.fedoraproject.org/archives/list/package-announce%40lists.fedoraproject.org/message/TJOMUNGW6VTK5CZZRLWLVVEOUPEQBRHI https://lists.fedoraproject.org/archives/list/package-announce%40lists.fedoraproject.org/message/XWSC77GS5NATI3TT7FMVPULUPXR635XQ https://lists.fedoraproject.org/archives/list/package-announce%40lists.fedoraproject.org/message/YZVXG7OOOXCX6VIPEMLFDPIPUTFAYWPE https://security.gentoo.org/glsa/202402-07 https:// • CWE-404: Improper Resource Shutdown or Release •
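The flaw class here is a teardown loop over a potentially huge pool with no intermediate preemption check. The self-contained C sketch below is illustrative only and is not the XSA-410 patch: the pool structure, the batch size of 1024 and the preempt_pending() predicate are assumptions. The point is simply that a long free loop needs a periodic check so the operation can be continued later instead of monopolising the CPU.

```c
#include <stdbool.h>
#include <stdlib.h>

struct page { struct page *next; };
struct pool { struct page *head; };

/* Stand-in for a "should we preempt?" check; in a hypervisor this would
 * test for pending softirqs or other work.  Purely illustrative here. */
static bool preempt_pending(void) { return false; }

/*
 * Free the pool in batches.  Returns true when fully drained, false when
 * it stopped early because preemption was requested; the caller is then
 * expected to re-invoke (continue) the operation later.
 */
static bool pool_free_preemptible(struct pool *p)
{
    unsigned int freed = 0;
    struct page *pg;

    while ((pg = p->head) != NULL) {
        p->head = pg->next;
        free(pg);

        /* Without this check, draining a very large pool runs
         * unpreempted for an unbounded amount of time. */
        if (!(++freed & 1023) && preempt_pending())
            return false;   /* continuation needed */
    }
    return true;            /* pool fully freed */
}

int main(void)
{
    struct pool p = { NULL };

    /* Build a pool of 100000 pages, then drain it preemptibly. */
    for (int i = 0; i < 100000; i++) {
        struct page *pg = malloc(sizeof(*pg));
        pg->next = p.head;
        p.head = pg;
    }
    while (!pool_free_preemptible(&p))
        ;   /* a real caller would schedule a continuation here */
    return 0;
}
```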

CVSS: 5.6 • EPSS: 0% • CPEs: 6 • EXPL: 0

Racy interactions between dirty vram tracking and paging log-dirty hypercalls. Activation of log-dirty mode by XEN_DMOP_track_dirty_vram (named HVMOP_track_dirty_vram before Xen 4.9) is racy with ongoing log-dirty hypercalls. A suitably timed call to XEN_DMOP_track_dirty_vram can enable log-dirty mode while another CPU is still tearing down the structures related to a previously enabled log-dirty mode (XEN_DOMCTL_SHADOW_OP_OFF). This is due to a lack of mutually exclusive locking between the two operations and can lead to entries being added to already-freed slots, resulting in a memory leak. • http://www.openwall.com/lists/oss-security/2022/04/05/1 http://xenbits.xen.org/xsa/advisory-397.html https://lists.fedoraproject.org/archives/list/package-announce%40lists.fedoraproject.org/message/6ETPM2OVZZ6KOS2L7QO7SIW6XWT5OW3F https://lists.fedoraproject.org/archives/list/package-announce%40lists.fedoraproject.org/message/UHFSRVLM2JUCPDC2KGB7ETPQYJLCGBLD https://security.gentoo.org/glsa/202402-07 https://www.debian.org/security/2022/dsa-5117 https://xenbits.xenproject.org/xsa/advisory-397.txt • CWE-667: Improper Locking •
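The locking defect described, one path enabling tracking while another path is still freeing the structures it relies on, can be sketched with a small self-contained pthreads example. The dirty_state structure and the single mutex below are assumptions for illustration and differ from the actual Xen fix; the principle shown is only that the enable path and the teardown path must take the same lock so neither can observe the other's half-finished state.

```c
#include <pthread.h>
#include <stdlib.h>

/* Illustrative per-domain dirty-tracking state. */
struct dirty_state {
    pthread_mutex_t lock;     /* serialises enable vs. teardown */
    unsigned char  *bitmap;   /* NULL when tracking is off */
    size_t          size;
};

/* Enable path (loosely analogous to XEN_DMOP_track_dirty_vram). */
static int tracking_enable(struct dirty_state *s, size_t size)
{
    int rc;

    pthread_mutex_lock(&s->lock);
    if (s->bitmap == NULL) {
        s->bitmap = calloc(size, 1);
        s->size = s->bitmap ? size : 0;
    }
    rc = s->bitmap ? 0 : -1;
    pthread_mutex_unlock(&s->lock);
    return rc;
}

/* Teardown path (loosely analogous to XEN_DOMCTL_SHADOW_OP_OFF).  Without
 * the shared lock, the enable path could repopulate entries while this
 * free is still in progress, writing into freed memory or leaking it. */
static void tracking_disable(struct dirty_state *s)
{
    pthread_mutex_lock(&s->lock);
    free(s->bitmap);
    s->bitmap = NULL;
    s->size = 0;
    pthread_mutex_unlock(&s->lock);
}

static void *flip(void *arg)
{
    struct dirty_state *s = arg;
    for (int i = 0; i < 10000; i++) {
        tracking_enable(s, 4096);
        tracking_disable(s);
    }
    return NULL;
}

int main(void)
{
    struct dirty_state s = { PTHREAD_MUTEX_INITIALIZER, NULL, 0 };
    pthread_t a, b;

    /* Two threads racing enable/disable stay consistent because both
     * operations are serialised by the same lock. */
    pthread_create(&a, NULL, flip, &s);
    pthread_create(&b, NULL, flip, &s);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    return 0;
}
```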

CVSS: 7.0 • EPSS: 0% • CPEs: 5 • EXPL: 0

Race in VT-d domain ID cleanup. Xen domain IDs are up to 15 bits wide, but VT-d hardware may provide fewer than 15 bits to hold the domain ID associating a physical device with a particular domain. Xen therefore maps its domain IDs internally to the smaller value range. The cleanup of the housekeeping structures has a race, allowing VT-d domain IDs to be leaked and flushes to be bypassed. • http://www.openwall.com/lists/oss-security/2022/04/05/2 http://xenbits.xen.org/xsa/advisory-399.html https://lists.fedoraproject.org/archives/list/package-announce%40lists.fedoraproject.org/message/6ETPM2OVZZ6KOS2L7QO7SIW6XWT5OW3F https://lists.fedoraproject.org/archives/list/package-announce%40lists.fedoraproject.org/message/UHFSRVLM2JUCPDC2KGB7ETPQYJLCGBLD https://security.gentoo.org/glsa/202402-07 https://www.debian.org/security/2022/dsa-5117 https://xenbits.xenproject.org/xsa/advisory-399.txt • CWE-362: Concurrent Execution using Shared Resource with Improper Synchronization ('Race Condition') •
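The underlying arrangement, a wide software ID space funnelled into a narrower hardware ID space via a lookup table with use counts, can be sketched as below. The table layout, the 8-bit hardware ID space and the single mutex are illustrative assumptions, not Xen's actual VT-d code; the sketch only shows why allocation and release of an entry have to be serialised, since otherwise a half-torn-down entry can be leaked or reused without the required flush.

```c
#include <pthread.h>
#include <stdint.h>

#define HW_IDS 256   /* assume only 8 bits are available in hardware */

/* One slot per hardware domain ID. */
struct hw_slot {
    uint16_t sw_id;   /* software (Xen-style, up to 15-bit) domain ID */
    unsigned refs;    /* devices currently using this mapping */
};

static struct hw_slot table[HW_IDS];
static pthread_mutex_t table_lock = PTHREAD_MUTEX_INITIALIZER;

/* Map a software ID to a hardware ID, allocating a slot if needed.
 * Returns -1 when the hardware ID space is exhausted. */
static int hw_id_get(uint16_t sw_id)
{
    int id = -1;

    pthread_mutex_lock(&table_lock);
    for (int i = 0; i < HW_IDS; i++) {
        if (table[i].refs && table[i].sw_id == sw_id) { id = i; break; }
        if (id < 0 && table[i].refs == 0) id = i;   /* first free slot */
    }
    if (id >= 0) {
        table[id].sw_id = sw_id;
        table[id].refs++;
    }
    pthread_mutex_unlock(&table_lock);
    return id;
}

/* Release one reference; when it drops to zero the slot may be recycled,
 * and (on real hardware) the IOTLB for that ID must be flushed before the
 * slot is handed out again.  Doing this bookkeeping without the lock is
 * exactly the kind of race that leaks IDs or skips flushes. */
static void hw_id_put(int id)
{
    pthread_mutex_lock(&table_lock);
    if (id >= 0 && id < HW_IDS && table[id].refs)
        table[id].refs--;
    pthread_mutex_unlock(&table_lock);
}

int main(void)
{
    int a = hw_id_get(0x1234);   /* allocates a slot */
    int b = hw_id_get(0x1234);   /* same software ID shares the slot */
    hw_id_put(b);
    hw_id_put(a);
    return (a == b) ? 0 : 1;
}
```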

CVSS: 5.5 • EPSS: 0% • CPEs: 4 • EXPL: 0

A PV guest could DoS Xen while unmapping a grant. To address XSA-380, reference counting was introduced for grant mappings in the case where a PV guest has the IOMMU enabled. PV guests can request two forms of mappings; when both are in use for an individual mapping, unmapping it can be requested in two steps, and the reference count for the mapping would then mistakenly be decremented twice. The resulting counter underflow is detected, triggering a hypervisor bug check. • http://www.openwall.com/lists/oss-security/2022/01/25/3 https://lists.fedoraproject.org/archives/list/package-announce%40lists.fedoraproject.org/message/OMR6UBGJW6JKND7IILGQ2CU35EQPF3E3 https://security.gentoo.org/glsa/202208-23 https://www.debian.org/security/2022/dsa-5117 https://xenbits.xenproject.org/xsa/advisory-394.txt • CWE-191: Integer Underflow (Wrap or Wraparound) •
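The failure mode is a classic reference-count double decrement: a mapping that carries two properties is torn down in two steps, and both steps drop the same counter. The minimal C illustration below uses an assumed mapping structure, flags and an assert-based stand-in for the hypervisor bug check; it is not the Xen grant-table code. Its point is that the counter must be dropped once per reference actually taken, not once per unmap step.

```c
#include <assert.h>

/* Two forms of mapping a PV guest can request for the same grant. */
#define MAP_HOST   0x1u
#define MAP_DEVICE 0x2u

struct mapping {
    unsigned int flags;   /* which forms are currently in use */
    unsigned int refs;    /* references held for IOMMU purposes */
};

/* Stand-in for a hypervisor bug check: underflow means a logic error. */
static void put_ref(struct mapping *m)
{
    assert(m->refs > 0);  /* a double decrement would trip this */
    m->refs--;
}

/* Buggy variant of a two-step unmap: drops the reference on each step,
 * so unmapping MAP_HOST and MAP_DEVICE separately decrements twice. */
static void unmap_buggy(struct mapping *m, unsigned int which)
{
    m->flags &= ~which;
    put_ref(m);
}

/* Fixed variant: the reference belongs to the mapping as a whole, so it
 * is only dropped when the last form goes away. */
static void unmap_fixed(struct mapping *m, unsigned int which)
{
    m->flags &= ~which;
    if (m->flags == 0)
        put_ref(m);
}

int main(void)
{
    struct mapping m = { MAP_HOST | MAP_DEVICE, 1 };

    unmap_fixed(&m, MAP_HOST);
    unmap_fixed(&m, MAP_DEVICE);   /* refs drops to 0 exactly once */

    (void)unmap_buggy;             /* calling it for both steps would assert */
    return 0;
}
```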

CVSS: 7.0 • EPSS: 0% • CPEs: 1 • EXPL: 0

Grant table v2 status pages may remain accessible after de-allocation (take two). Guests are permitted access to certain Xen-owned pages of memory. Most such pages remain allocated / associated with a guest for its entire lifetime. Grant table v2 status pages, however, are de-allocated when a guest switches (back) from v2 to v1. Freeing such pages requires that the hypervisor know where in the guest these pages were mapped. The hypervisor tracks only one use within guest space, but racing requests from the guest to insert mappings of these pages may result in any of them becoming mapped in multiple locations. • https://security.gentoo.org/glsa/202402-07 https://xenbits.xenproject.org/xsa/advisory-387.txt •
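The bookkeeping problem described, remembering only a single guest-space location for a page that racing requests can map in several places, can be shown with a tiny C sketch. The single tracked_gfn field and the helpers below are assumptions for illustration, not Xen's grant-table v2 implementation; they only show why a one-slot record is insufficient to undo every mapping at de-allocation time.

```c
#include <stdio.h>

#define INVALID_GFN (~0UL)

/* One status page, with room to remember only ONE guest mapping. */
struct status_page {
    unsigned long tracked_gfn;   /* single tracked use within the guest */
};

/* The guest asks for the page to be mapped at gfn; only the last request
 * is remembered, so earlier mappings become invisible to the hypervisor. */
static void map_at(struct status_page *p, unsigned long gfn)
{
    p->tracked_gfn = gfn;   /* overwrites any previously tracked mapping */
}

/* On de-allocation only the tracked mapping is removed; any other mapping
 * the guest raced in stays accessible after the page is freed. */
static void unmap_tracked(struct status_page *p)
{
    if (p->tracked_gfn != INVALID_GFN) {
        printf("unmapping gfn %#lx\n", p->tracked_gfn);
        p->tracked_gfn = INVALID_GFN;
    }
}

int main(void)
{
    struct status_page p = { INVALID_GFN };

    map_at(&p, 0x1000);   /* first mapping: tracked */
    map_at(&p, 0x2000);   /* second mapping: overwrites the record */
    unmap_tracked(&p);    /* only 0x2000 is torn down; 0x1000 lingers */
    return 0;
}
```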