
CVSS: - | EPSS: 0% | CPEs: 8 | EXPL: 0

In the Linux kernel, the following vulnerability has been resolved:

ocfs2: fix data corruption by fallocate

When fallocate punches holes out past the inode size, and the original isize falls in the middle of the last cluster, the part from isize to the end of that cluster is zeroed with a buffer write. At that point isize has not yet been updated to match the new size, so if writeback kicks in it invokes ocfs2_writepage()->block_write_full_page(), where the pages beyond the inode size are dropped. That causes file corruption. Fix this by zeroing out the EOF blocks when extending the inode size.

Running the following command with qemu-img 4.2.1 easily produces a corrupted converted image file:

qemu-img convert -p -t none -T none -f qcow2 $qcow_image \
  -O qcow2 -o compat=1.1 $qcow_image.conv

qemu's usage of fallocate first punches holes out past the inode size, then extends the inode size:

fallocate(11, FALLOC_FL_KEEP_SIZE|FALLOC_FL_PUNCH_HOLE, 2276196352, 65536) = 0
fallocate(11, 0, 2276196352, 65536) = 0

v1: https://www.spinics.net/lists/linux-fsdevel/msg193999.html
v2: https://lore.kernel.org/linux-fsdevel/20210525093034.GB4112@quack2.suse.cz/T/

• https://git.kernel.org/stable/c/624fa7baa3788dc9e57840ba5b94bc22b03cda57 https://git.kernel.org/stable/c/33e03adafb29eedae1bae9cdb50c1385279fcf65 https://git.kernel.org/stable/c/a1700479524bb9cb5e8ae720236a6fabd003acae https://git.kernel.org/stable/c/cec4e857ffaa8c447f51cd8ab4e72350077b6770 https://git.kernel.org/stable/c/cc2edb99ea606a45182b5ea38cc8f4e583aa0774 https://git.kernel.org/stable/c/c8d5faee46242c3f33b8a71a4d7d52214785bfcc https://git.kernel.org/stable/c/0a31dd6fd2f4e7db538fb6eb1f06973d81f8dd3b https://git.kernel.org/stable/c/6bba4471f0cc1296fe3c2089b9e52442d
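The strace excerpt above can be reproduced in isolation. Below is a minimal sketch of that fallocate sequence, assuming a file on an ocfs2 mount at the hypothetical path /mnt/ocfs2/test.img; the offsets are taken from the commit message, everything else (path, error handling) is illustrative only.

```c
/* Minimal sketch of the punch-hole-then-extend sequence described above.
 * The path is hypothetical; offsets mirror the strace excerpt. */
#define _GNU_SOURCE
#define _FILE_OFFSET_BITS 64
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(void)
{
    int fd = open("/mnt/ocfs2/test.img", O_RDWR | O_CREAT, 0644);
    if (fd < 0) {
        perror("open");
        return EXIT_FAILURE;
    }

    off_t off = 2276196352;   /* beyond the current inode size */
    off_t len = 65536;

    /* Step 1: punch a hole past i_size without changing the file size. */
    if (fallocate(fd, FALLOC_FL_PUNCH_HOLE | FALLOC_FL_KEEP_SIZE, off, len) < 0)
        perror("fallocate punch");

    /* Step 2: extend the inode size over the same range. On unpatched
     * kernels, writeback racing between these two steps can drop the
     * zeroed pages past the old i_size and corrupt the last cluster. */
    if (fallocate(fd, 0, off, len) < 0)
        perror("fallocate extend");

    close(fd);
    return 0;
}
```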

CVSS: - | EPSS: 0% | CPEs: 3 | EXPL: 0

In the Linux kernel, the following vulnerability has been resolved:

btrfs: abort in rename_exchange if we fail to insert the second ref

Error injection stress uncovered a problem where we'd leave a dangling inode ref if we failed during a rename_exchange. This happens because we insert the inode ref for one side of the rename, and then for the other side. If this second inode ref insert fails, we leave the first one dangling and leave a corrupt file system behind. Fix this by aborting the transaction if we already did the insert for the first inode ref.

• https://git.kernel.org/stable/c/0df50d47d17401f9f140dfbe752a65e5d72f9932 https://git.kernel.org/stable/c/ff8de2cec65a8c8521faade12a31b39c80e49f5b https://git.kernel.org/stable/c/dc09ef3562726cd520c8338c1640872a60187af5
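A hedged sketch of the error-handling pattern described above, using simplified stand-in types and hypothetical helpers (insert_inode_ref(), abort_transaction()) rather than the real btrfs symbols. The point is that once the first ref insert has succeeded, a failure on the second insert must abort the whole transaction instead of returning an error and leaving the first ref dangling on disk.

```c
#include <stdio.h>

/* Opaque stand-ins for the real transaction and inode objects. */
struct trans { int aborted; };
struct inode { int ino; };

/* Hypothetical stubs; here the second insert fails for illustration. */
static int insert_inode_ref(struct trans *t, struct inode *from, struct inode *to)
{
    return (from->ino == 2) ? -5 /* -EIO */ : 0;
}

static void abort_transaction(struct trans *t, int err)
{
    t->aborted = 1;
    fprintf(stderr, "transaction aborted: %d\n", err);
}

static int rename_exchange_refs(struct trans *t, struct inode *a, struct inode *b)
{
    int ret = insert_inode_ref(t, a, b);   /* first side of the exchange */
    if (ret)
        return ret;                        /* nothing inserted yet, safe to return */

    ret = insert_inode_ref(t, b, a);       /* second side of the exchange */
    if (ret) {
        /* The first ref is already recorded; returning without aborting
         * would leave it dangling and the filesystem inconsistent. */
        abort_transaction(t, ret);
        return ret;
    }
    return 0;
}

int main(void)
{
    struct trans t = { 0 };
    struct inode a = { 1 }, b = { 2 };
    return rename_exchange_refs(&t, &a, &b) ? 1 : 0;
}
```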

CVSS: - | EPSS: 0% | CPEs: 4 | EXPL: 0

In the Linux kernel, the following vulnerability has been resolved:

x86/kvm: Teardown PV features on boot CPU as well

Various PV features (Async PF, PV EOI, steal time) work through memory shared with the hypervisor, and when we restore from hibernation we must properly tear down all these features to make sure the hypervisor doesn't write to stale locations after we jump to the previously hibernated kernel (which can try to place anything there). For secondary CPUs the job is already done by kvm_cpu_down_prepare(); register syscore ops to do the same for the boot CPU.

• https://git.kernel.org/stable/c/7620a669111b52f224d006dea9e1e688e2d62c54 https://git.kernel.org/stable/c/38b858da1c58ad46519a257764e059e663b59ff2 https://git.kernel.org/stable/c/d1629b5b925de9b27979e929dae7fcb766daf6b6 https://git.kernel.org/stable/c/8b79feffeca28c5459458fe78676b081e87c93a4
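A hedged kernel-style sketch of the approach described above. struct syscore_ops and register_syscore_ops() are real kernel interfaces; kvm_pv_teardown_boot_cpu() is a hypothetical name standing in for whatever the patch actually calls to disable Async PF, PV EOI and steal time on the boot CPU, and the callback wiring is an assumption, not the patch itself.

```c
#include <linux/init.h>
#include <linux/syscore_ops.h>

static void kvm_pv_teardown_boot_cpu(void)
{
    /* Unregister the shared-memory PV features (Async PF, PV EOI, steal
     * time) so the hypervisor stops writing to locations that become
     * stale once the hibernated kernel image takes over. */
}

static int kvm_pv_suspend(void)
{
    kvm_pv_teardown_boot_cpu();
    return 0;
}

static struct syscore_ops kvm_pv_syscore_ops = {
    .suspend  = kvm_pv_suspend,            /* runs on the boot CPU, late in suspend */
    .shutdown = kvm_pv_teardown_boot_cpu,
};

static int __init kvm_pv_syscore_init(void)
{
    /* Secondary CPUs are already handled by kvm_cpu_down_prepare();
     * syscore ops cover the boot CPU, as the description states. */
    register_syscore_ops(&kvm_pv_syscore_ops);
    return 0;
}
early_initcall(kvm_pv_syscore_init);
```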

CVSS: - | EPSS: 0% | CPEs: 4 | EXPL: 0

In the Linux kernel, the following vulnerability has been resolved:

x86/kvm: Disable kvmclock on all CPUs on shutdown

Currently, we disable kvmclock from the machine_shutdown() hook, and this only happens for the boot CPU. We need to disable it for all CPUs to guard against memory corruption, e.g. on restore from hibernation. Note that writing '0' to the kvmclock MSR doesn't clear the memory location; it just prevents the hypervisor from updating the location. So for the short while after the write, and while the CPU is still alive, the clock remains usable and correct, and we don't need to switch to some other clocksource.

• https://git.kernel.org/stable/c/9084fe1b3572664ad276f427dce575f580c9799a https://git.kernel.org/stable/c/3b0becf8b1ecf642a9edaf4c9628ffc641e490d6 https://git.kernel.org/stable/c/1df2dc09926f61319116c80ee85701df33577d70 https://git.kernel.org/stable/c/c02027b5742b5aa804ef08a4a9db433295533046
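A hedged kernel-style sketch of the shutdown path described above. on_each_cpu(), wrmsrl() and MSR_KVM_SYSTEM_TIME_NEW are real kernel interfaces; kvmclock_disable_all() and the exact call site are assumptions for illustration rather than the actual patch. Writing 0 to the kvmclock MSR only stops the hypervisor from updating the shared page; the clock stays usable on the dying CPU, as the description notes.

```c
#include <linux/smp.h>
#include <asm/msr.h>
#include <asm/kvm_para.h>

/* Runs on each CPU: tell the hypervisor to stop updating this CPU's
 * pvclock page. The page contents are left intact. */
static void kvmclock_disable_one(void *unused)
{
    wrmsrl(MSR_KVM_SYSTEM_TIME_NEW, 0);
}

/* Previously only the boot CPU did this from machine_shutdown(); running
 * it on every CPU guards the hibernation/restore path as well. */
static void kvmclock_disable_all(void)
{
    on_each_cpu(kvmclock_disable_one, NULL, 1);
}
```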

CVSS: 5.5 | EPSS: 0% | CPEs: 13 | EXPL: 0

In the Linux kernel, the following vulnerability has been resolved:

nfsd: fix RELEASE_LOCKOWNER

The test on so_count in nfsd4_release_lockowner() is nonsense and harmful. Revert to using check_for_locks(), changing that to not sleep.

First: harmful. As documented in the kdoc comment for nfsd4_release_lockowner(), the test on so_count can transiently return a false positive, resulting in a return of NFS4ERR_LOCKS_HELD when in fact no locks are held. This is clearly a protocol violation, and with the Linux NFS client it can cause incorrect behaviour. If RELEASE_LOCKOWNER is sent while some other thread is still processing a LOCK request which failed because, at the time that request was received, the given owner held a conflicting lock, then the nfsd thread processing that LOCK request can hold a reference (conflock) to the lock owner that causes nfsd4_release_lockowner() to return an incorrect error. The Linux NFS client ignores that NFS4ERR_LOCKS_HELD error because it never sends NFS4_RELEASE_LOCKOWNER without first releasing any locks, so it knows that the error is impossible. It assumes the lock owner was in fact released, so it feels free to use the same lock owner identifier in some later locking request. When it does reuse a lock owner identifier for which a previous RELEASE failed, it will naturally use a lock_seqid of zero. However the server, which didn't release the lock owner, will expect a larger lock_seqid and so will respond with NFS4ERR_BAD_SEQID. So clearly it is harmful to allow the false positive which testing so_count allows.

The test is nonsense because... well... it doesn't mean anything. so_count is the sum of three different counts:
1/ the set of states listed on so_stateids
2/ the set of active vfs locks owned by any of those states
3/ various transient counts such as for conflicting locks
When it is tested against '2' it is clear that one of these is the transient reference obtained by find_lockowner_str_locked().

• https://git.kernel.org/stable/c/3097f38e91266c7132c3fdb7e778fac858c00670 https://git.kernel.org/stable/c/e2fc17fcc503cfca57b5d1dd3b646ca7eebead97 https://git.kernel.org/stable/c/ce3c4ad7f4ce5db7b4f08a1e237d8dd94b39180b https://git.kernel.org/stable/c/fea1d0940301378206955264a01778700fc9c16f https://git.kernel.org/stable/c/2ec65dc6635d1976bd1dbf2640ff7f810b2f6dd1 https://git.kernel.org/stable/c/ef481b262bba4f454351eec43f024fec942c2d4c https://git.kernel.org/stable/c/10d75984495f7fe62152c3b0dbfa3f0a6b739c9b https://git.kernel.org/stable/c/a2235bc65ade40982c3d09025cdd34bc5

• CWE-393: Return of Wrong Status Code
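A hedged sketch contrasting the two checks described above, using simplified stand-in types rather than the real nfsd structures. The flawed version treats so_count != 2 as "locks are still held", but so_count also includes transient references (for example a conflock held by another nfsd thread), so it can yield a false NFS4ERR_LOCKS_HELD. The fixed approach conceptually walks the owner's states and asks whether any locks actually remain, which is what check_for_locks() does.

```c
#include <stdbool.h>

/* Simplified stand-in for the lock owner; so_count is the sum of states,
 * active vfs locks, and transient references, as the description explains. */
struct lockowner {
    int so_count;
    int nr_active_locks;   /* stand-in for what check_for_locks() would find */
};

/* Flawed: a transient conflock reference bumps so_count above 2 and makes
 * this report "locks held" even when none are. */
static bool locks_held_by_count(const struct lockowner *lo)
{
    return lo->so_count != 2;
}

/* Fixed (conceptually): check whether any live locks are actually owned. */
static bool locks_held_by_walk(const struct lockowner *lo)
{
    return lo->nr_active_locks > 0;
}
```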