
CVSS: 6.4 | EPSS: 0% | CPEs: 5 | EXPL: 0

In the Linux kernel, the following vulnerability has been resolved:

net/sched: act_ct: fix skb leak and crash on ooo frags

act_ct adds skb->users before defragmentation. If frags arrive in order, the last frag's reference is reset in inet_frag_reasm_prepare -> skb_morph, which is not straightforward. However, when frags arrive out of order, nobody unrefs the last frag and all frags are leaked. The situation is even worse: initiating packet capture can lead to a crash [0] when the skb has been cloned and shared at the same time.

Fix the issue by removing skb_get() before defragmentation. act_ct returns TC_ACT_CONSUMED when defrag fails or is in progress.

[0]:
[ 843.804823] ------------[ cut here ]------------
[ 843.809659] kernel BUG at net/core/skbuff.c:2091!
[ 843.814516] invalid opcode: 0000 [#1] PREEMPT SMP
[ 843.819296] CPU: 7 PID: 0 Comm: swapper/7 Kdump: loaded Tainted: G S 6.7.0-rc3 #2
[ 843.824107] Hardware name: XFUSION 1288H V6/BC13MBSBD, BIOS 1.29 11/25/2022
[ 843.828953] RIP: 0010:pskb_expand_head+0x2ac/0x300
[ 843.833805] Code: 8b 70 28 48 85 f6 74 82 48 83 c6 08 bf 01 00 00 00 e8 38 bd ff ff 8b 83 c0 00 00 00 48 03 83 c8 00 00 00 e9 62 ff ff ff 0f 0b <0f> 0b e8 8d d0 ff ff e9 b3 fd ff ff 81 7c 24 14 40 01 00 00 4c 89
[ 843.843698] RSP: 0018:ffffc9000cce07c0 EFLAGS: 00010202
[ 843.848524] RAX: 0000000000000002 RBX: ffff88811a211d00 RCX: 0000000000000820
[ 843.853299] RDX: 0000000000000640 RSI: 0000000000000000 RDI: ffff88811a211d00
[ 843.857974] RBP: ffff888127d39518 R08: 00000000bee97314 R09: 0000000000000000
[ 843.862584] R10: 0000000000000000 R11: ffff8881109f0000 R12: 0000000000000880
[ 843.867147] R13: ffff888127d39580 R14: 0000000000000640 R15: ffff888170f7b900
[ 843.871680] FS: 0000000000000000(0000) GS:ffff889ffffc0000(0000) knlGS:0000000000000000
[ 843.876242] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[ 843.880778] CR2: 00007fa42affcfb8 CR3: 000000011433a002 CR4: 0000000000770ef0
[ 843.885336] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
[ 843.889809] DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
[ 843.894229] PKRU: 55555554
[ 843.898539] Call Trace:
[ 843.902772] <IRQ>
[ 843.906922] ? __die_body+0x1e/0x60
[ 843.911032] ?

• https://git.kernel.org/stable/c/b57dc7c13ea90e09ae15f821d2583fa0231b4935
• https://git.kernel.org/stable/c/172ba7d46c202e679f3ccb10264c67416aaeb1c4
• https://git.kernel.org/stable/c/0b5b831122fc3789fff75be433ba3e4dd7b779d4
• https://git.kernel.org/stable/c/73f7da5fd124f2cda9161e2e46114915e6e82e97
• https://git.kernel.org/stable/c/f5346df0591d10bc948761ca854b1fae6d2ef441
• https://git.kernel.org/stable/c/3f14b377d01d8357eba032b4cabc8c1149b458b6
• https://access.redhat.com/security/cve/CVE-2023-52610
• https://bugzilla.redhat.com/show_bug.cgi?id=2270080
• CWE-402: Transmission of Private Resources into a New Sphere ('Resource Leak')
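To make the refcounting pattern described in this entry concrete, here is a minimal kernel-style C sketch of the before/after behaviour. It is not the upstream patch: handle_fragments() is a made-up stand-in for act_ct's defragmentation helper, and error handling is reduced to the essentials.

#include <linux/skbuff.h>
#include <linux/pkt_cls.h>
#include <net/sch_generic.h>

/* Stand-in for act_ct's defragmentation helper: returns 0 when the packet
 * is ready to continue, a negative errno when fragments are still being
 * queued or reassembly failed. It may take ownership of the skb. */
extern int handle_fragments(struct sk_buff *skb);

static int ct_defrag_sketch(struct sk_buff *skb)
{
	int err;

	/*
	 * Before the fix, an extra reference was taken here:
	 *
	 *	skb_get(skb);
	 *
	 * For in-order fragments that reference was absorbed by skb_morph()
	 * during reassembly, but for out-of-order fragments nothing ever
	 * dropped it, so every queued fragment leaked.
	 */

	err = handle_fragments(skb);
	if (err)
		/*
		 * After the fix there is no extra reference to undo: the
		 * fragment queue (or the error path) owns the skb, so the
		 * action reports the packet as consumed instead of letting
		 * the caller touch it again.
		 */
		return TC_ACT_CONSUMED;

	return TC_ACT_OK;
}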

CVSS: - | EPSS: 0% | CPEs: 8 | EXPL: 0

In the Linux kernel, the following vulnerability has been resolved:

binder: fix race between mmput() and do_exit()

Task A calls binder_update_page_range() to allocate and insert pages on a remote address space from Task B. For this, Task A pins the remote mm via mmget_not_zero() first. This can race with Task B's do_exit(), in which case the final mmput() refcount decrement comes from Task A:

  Task A            | Task B
  ------------------+------------------
  mmget_not_zero()  |
                    |  do_exit()
                    |    exit_mm()
                    |      mmput()
  mmput()           |
    exit_mmap()     |
      remove_vma()  |
        fput()      |

In this case, the work of ____fput() from Task B is queued up in Task A as TWA_RESUME. So in theory, Task A returns to userspace and the cleanup work gets executed. However, Task A instead sleeps, waiting for a reply from Task B that never comes (it's dead). This means binder_deferred_release() is blocked until an unrelated binder event forces Task A to go back to userspace.

• https://git.kernel.org/stable/c/457b9a6f09f011ebcb9b52cc203a6331a6fc2de7
• https://git.kernel.org/stable/c/95b1d336b0642198b56836b89908d07b9a0c9608
• https://git.kernel.org/stable/c/252a2a5569eb9f8d16428872cc24dea1ac0bb097
• https://git.kernel.org/stable/c/7e7a0d86542b0ea903006d3f42f33c4f7ead6918
• https://git.kernel.org/stable/c/98fee5bee97ad47b527a997d5786410430d1f0e9
• https://git.kernel.org/stable/c/6696f76c32ff67fec26823fc2df46498e70d9bf3
• https://git.kernel.org/stable/c/67f16bf2cc1698fd50e01ee8a2becc5a8e6d3a3e
• https://git.kernel.org/stable/c/77d210e8db4d61d43b2d16df66b1ec46f
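The pinning pattern at the heart of this race can be sketched as below. touch_remote_mm_sketch() is an illustrative stand-in rather than binder code, but mmget_not_zero() and mmput() are the real kernel primitives involved, and the comment marks the point where the race diagram above applies.

#include <linux/mm.h>
#include <linux/sched/mm.h>
#include <linux/errno.h>

/*
 * Sketch only: operate on a remote task's address space while holding a
 * reference obtained with mmget_not_zero(). If the owning task exits in
 * the meantime, the mmput() below becomes the final reference drop and
 * the full mm teardown (exit_mmap, remove_vma, fput) runs in this task.
 */
static int touch_remote_mm_sketch(struct mm_struct *mm)
{
	if (!mmget_not_zero(mm))	/* mm already gone: owner fully exited */
		return -ESRCH;

	/* ... allocate/insert pages in the remote address space ... */

	mmput(mm);	/* may be the last reference; see the race diagram above */
	return 0;
}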

CVSS: 5.5 | EPSS: 0% | CPEs: 13 | EXPL: 0

In the Linux kernel, the following vulnerability has been resolved:

nfsd: fix RELEASE_LOCKOWNER

The test on so_count in nfsd4_release_lockowner() is nonsense and harmful. Revert to using check_for_locks(), changing that to not sleep.

First: harmful. As is documented in the kdoc comment for nfsd4_release_lockowner(), the test on so_count can transiently return a false positive, resulting in a return of NFS4ERR_LOCKS_HELD when in fact no locks are held. This is clearly a protocol violation and with the Linux NFS client it can cause incorrect behaviour. If RELEASE_LOCKOWNER is sent while some other thread is still processing a LOCK request which failed because, at the time that request was received, the given owner held a conflicting lock, then the nfsd thread processing that LOCK request can hold a reference (conflock) to the lock owner that causes nfsd4_release_lockowner() to return an incorrect error. The Linux NFS client ignores that NFS4ERR_LOCKS_HELD error because it never sends NFS4_RELEASE_LOCKOWNER without first releasing any locks, so it knows that the error is impossible. It assumes the lock owner was in fact released, so it feels free to use the same lock owner identifier in some later locking request. When it does reuse a lock owner identifier for which a previous RELEASE failed, it will naturally use a lock_seqid of zero. However the server, which didn't release the lock owner, will expect a larger lock_seqid and so will respond with NFS4ERR_BAD_SEQID. So clearly it is harmful to allow a false positive, which testing so_count allows.

The test is nonsense because ... well ... it doesn't mean anything. so_count is the sum of three different counts:

1/ the set of states listed on so_stateids
2/ the set of active vfs locks owned by any of those states
3/ various transient counts such as for conflicting locks

When it is tested against '2' it is clear that one of these is the transient reference obtained by find_lockowner_str_locked().

• https://git.kernel.org/stable/c/3097f38e91266c7132c3fdb7e778fac858c00670
• https://git.kernel.org/stable/c/e2fc17fcc503cfca57b5d1dd3b646ca7eebead97
• https://git.kernel.org/stable/c/ce3c4ad7f4ce5db7b4f08a1e237d8dd94b39180b
• https://git.kernel.org/stable/c/fea1d0940301378206955264a01778700fc9c16f
• https://git.kernel.org/stable/c/2ec65dc6635d1976bd1dbf2640ff7f810b2f6dd1
• https://git.kernel.org/stable/c/ef481b262bba4f454351eec43f024fec942c2d4c
• https://git.kernel.org/stable/c/10d75984495f7fe62152c3b0dbfa3f0a6b739c9b
• https://git.kernel.org/stable/c/a2235bc65ade40982c3d09025cdd34bc5
• CWE-393: Return of Wrong Status Code
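For illustration only, here is a small kernel-style C sketch of why a "reference count equals expected value" test is racy compared with actually walking the lock state. None of these names are nfsd structures or functions; they are invented stand-ins for the idea described above.

#include <linux/atomic.h>
#include <linux/types.h>

/* Illustrative stand-in for a lock owner; not the nfsd data structures. */
struct owner_sketch {
	atomic_t so_count;	/* states + vfs locks + transient references */
};

/*
 * The rejected approach: infer "locks are still held" from the reference
 * count. Any short-lived reference taken by another thread (for example
 * while handling a conflicting LOCK request) pushes the count past the
 * expected value and produces a false NFS4ERR_LOCKS_HELD.
 */
static bool locks_held_by_count(struct owner_sketch *o, int expected)
{
	return atomic_read(&o->so_count) != expected;
}

/*
 * The reliable approach, which the fix reverts to: walk the owner's states
 * and ask whether any locks actually exist, instead of reasoning from a
 * counter that also includes unrelated transient references.
 */
static bool locks_held_by_walk(struct owner_sketch *o)
{
	/* ... iterate the owner's stateids and check each file for locks ... */
	return false;	/* placeholder; real code inspects the lock lists */
}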

CVSS: - | EPSS: 0% | CPEs: 5 | EXPL: 0

In the Linux kernel, the following vulnerability has been resolved:

firmware: arm_scmi: Check mailbox/SMT channel for consistency

On reception of a completion interrupt, the shared memory area is accessed to retrieve the message header first and then, if the message sequence number identifies a transaction which is still pending, the related payload is fetched too. When an SCMI command times out, the channel ownership remains with the platform until eventually a late reply is received; as a consequence, any further transmission attempt remains pending, waiting for the channel to be relinquished by the platform. Once that late reply is received, the channel ownership is given back to the agent and any pending request is then allowed to proceed and overwrite the SMT area of the just-delivered late reply; then the wait for the reply to the new request starts. It has been observed that the spurious IRQ related to the late reply can be wrongly associated with the freshly enqueued request: when that happens, the SCMI stack in-flight lookup procedure is fooled by the fact that the message header now present in the SMT area relates to the new pending transaction, even though the real reply has still to arrive. This race condition on the A2P channel can be detected by looking at the channel status bits: a genuine reply from the platform will have set the channel free bit before triggering the completion IRQ. Add a consistency check to validate such a condition in the A2P ISR.

• https://git.kernel.org/stable/c/5c8a47a5a91d4d6e185f758d61997613d9c5d6ac
• https://git.kernel.org/stable/c/614cc65032dcb0b64d23f5c5e338a8a04b12be5d
• https://git.kernel.org/stable/c/7f95f6997f4fdd17abec3200cae45420a5489350
• https://git.kernel.org/stable/c/9b5e1b93c83ee5fc9f5d7bd2d45b421bd87774a2
• https://git.kernel.org/stable/c/12dc4217f16551d6dee9cbefc23fdb5659558cda
• https://git.kernel.org/stable/c/437a310b22244d4e0b78665c3042e5d1c0f45306
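The added consistency check can be pictured roughly as follows. This is a kernel-style sketch, not the actual arm_scmi code: the structure and macro names here are invented stand-ins for the SMT channel-status layout, with only the "channel free" semantics taken from the description above.

#include <linux/bits.h>
#include <linux/compiler.h>
#include <linux/types.h>

/* Channel-free flag in the SMT channel status word (bit position assumed). */
#define SMT_SKETCH_CHANNEL_FREE	BIT(0)

/* Stand-in for the agent-to-platform shared-memory transfer area. */
struct smt_area_sketch {
	u32 channel_status;
	/* ... message header, payload ... */
};

/*
 * Sketch of the check added in the A2P ISR: a genuine completion sets the
 * channel-free bit before raising the IRQ, so an interrupt that arrives
 * while the channel is still busy is a spurious (late-reply) one and must
 * not be matched against the freshly enqueued in-flight request.
 */
static bool a2p_completion_is_genuine(struct smt_area_sketch *smt)
{
	return !!(READ_ONCE(smt->channel_status) & SMT_SKETCH_CHANNEL_FREE);
}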

CVSS: - | EPSS: 0% | CPEs: 6 | EXPL: 0

In the Linux kernel, the following vulnerability has been resolved:

scsi: core: Move scsi_host_busy() out of host lock for waking up EH handler

Inside scsi_eh_wakeup(), scsi_host_busy() is called and checked under the host lock every time to decide whether the error handler kthread needs to be woken up. This can be too heavy in case of recovery, such as:

- N hardware queues
- queue depth is M for each hardware queue
- each scsi_host_busy() iterates over (N * M) tags/requests

If recovery is triggered while all requests are in flight, each scsi_eh_wakeup() is strictly serialized; when scsi_eh_wakeup() is called for the last in-flight request, scsi_host_busy() has been run (N * M - 1) times, and requests have been iterated (N * M - 1) * (N * M) times. If both N and M are big enough, a hard lockup can be triggered on acquiring the host lock, and it has been observed on mpi3mr (128 hw queues, queue depth 8169).

Fix the issue by calling scsi_host_busy() outside the host lock. We don't need the host lock for getting the busy count because the host lock never covers that.

[mkp: Drop unnecessary 'busy' variables pointed out by Bart]

• https://git.kernel.org/stable/c/6eb045e092efefafc6687409a6fa6d1dabf0fb69
• https://git.kernel.org/stable/c/f5944853f7a961fedc1227dc8f60393f8936d37c
• https://git.kernel.org/stable/c/d37c1c81419fdef66ebd0747cf76fb8b7d979059
• https://git.kernel.org/stable/c/db6338f45971b4285ea368432a84033690eaf53c
• https://git.kernel.org/stable/c/65ead8468c21c2676d4d06f50b46beffdea69df1
• https://git.kernel.org/stable/c/07e3ca0f17f579491b5f54e9ed05173d6c1d6fcb
• https://git.kernel.org/stable/c/4373534a9850627a2695317944898eb1283a2db0
• https://lists.debian.org/debian-lts-announce/2024/06/
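A rough sketch of the locking pattern this entry describes, simplified from the idea in the commit message rather than taken from the upstream patch. scsi_host_busy() and the Scsi_Host fields are real kernel interfaces; the function name and the exact call structure are illustrative.

#include <scsi/scsi_host.h>
#include <linux/spinlock.h>
#include <linux/sched.h>

/*
 * Sketch only: decide whether the error handler should be woken.
 * The expensive part, scsi_host_busy(), walks every tag, so it is
 * evaluated before the host lock is taken; the lock then protects only
 * the comparison against host_failed and the wakeup itself.
 */
static void eh_wakeup_sketch(struct Scsi_Host *shost)
{
	unsigned long flags;
	int busy = scsi_host_busy(shost);	/* no host lock needed here */

	spin_lock_irqsave(shost->host_lock, flags);
	if (shost->host_failed == busy)
		wake_up_process(shost->ehandler);	/* all failures accounted for */
	spin_unlock_irqrestore(shost->host_lock, flags);
}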