
CVSS: - | EPSS: 0% | CPEs: 7 | EXPL: 0

In the Linux kernel, the following vulnerability has been resolved: userfaultfd: release page in error path to avoid BUG_ON

Consider the following sequence of events:
1. Userspace issues a UFFD ioctl, which ends up calling into shmem_mfill_atomic_pte(). We successfully account the blocks, we shmem_alloc_page(), but then the copy_from_user() fails. We return -ENOENT. We don't release the page we allocated.
2.

• https://git.kernel.org/stable/c/cb658a453b9327ce96ce5222c24d162b5b65b564
• https://git.kernel.org/stable/c/319116227e52d49eee671f0aa278bac89b3c1b69
• https://git.kernel.org/stable/c/07c9b834c97d0fa3402fb7f3f3b32df370a6ff1f
• https://git.kernel.org/stable/c/b3f1731c6d7fbc1ebe3ed8eff6d6bec56d76ff43
• https://git.kernel.org/stable/c/140cfd9980124aecb6c03ef2e69c72d0548744de
• https://git.kernel.org/stable/c/ad53127973034c63b5348715a1043d0e80ceb330
• https://git.kernel.org/stable/c/2d59a0ed8b26b8f3638d8afc31f839e27759f1f6
• https://git.kernel.org/stable/c/7ed9d238c7dbb1fdb63ad96a618498515
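
The underlying pattern is an allocation that is not released on an early error return. The sketch below is a simplified illustration of the corrected flow, loosely modeled on shmem_mfill_atomic_pte(); it is not the actual kernel code, and the function name is made up for the example.

    /*
     * Illustrative sketch only: on a failed copy_from_user() the freshly
     * allocated page must be released before returning, otherwise a later
     * retry trips accounting checks (BUG_ON) on the leaked page.
     */
    static int mfill_copy_sketch(struct page **pagep, const void __user *src)
    {
        struct page *page;
        void *kaddr;
        unsigned long left;

        page = alloc_page(GFP_KERNEL);  /* stands in for shmem_alloc_page() */
        if (!page)
            return -ENOMEM;

        kaddr = kmap_local_page(page);
        left = copy_from_user(kaddr, src, PAGE_SIZE);
        kunmap_local(kaddr);

        if (left) {
            put_page(page);             /* the release missing in the buggy path */
            return -ENOENT;             /* caller retries the ioctl */
        }

        *pagep = page;
        return 0;
    }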

CVSS: - | EPSS: 0% | CPEs: 4 | EXPL: 0

In the Linux kernel, the following vulnerability has been resolved: btrfs: fix deadlock when cloning inline extents and using qgroups

There are a few exceptional cases where cloning an inline extent needs to copy the inline extent data into a page of the destination inode. When this happens, we end up starting a transaction while having a dirty page for the destination inode and while having the range locked in the destination's inode iotree too. Because when reserving metadata space for a transaction we may need to flush existing delalloc in case there is not enough free space, we have a mechanism in place to prevent a deadlock, which was introduced in commit 3d45f221ce627d ("btrfs: fix deadlock when cloning inline extent and low on free metadata space"). However when using qgroups, a transaction also reserves metadata qgroup space, which can also result in flushing delalloc in case there is not enough available space at the moment. When this happens we deadlock, since flushing delalloc requires locking the file range in the inode's iotree and the range was already locked at the very beginning of the clone operation, before attempting to start the transaction. When this issue happens, stack traces like the following are reported:

[72747.556262] task:kworker/u81:9 state:D stack: 0 pid: 225 ppid: 2 flags:0x00004000
[72747.556268] Workqueue: writeback wb_workfn (flush-btrfs-1142)
[72747.556271] Call Trace:
[72747.556273] __schedule+0x296/0x760
[72747.556277] schedule+0x3c/0xa0
[72747.556279] io_schedule+0x12/0x40
[72747.556284] __lock_page+0x13c/0x280
[72747.556287] ? generic_file_readonly_mmap+0x70/0x70
[72747.556325] extent_write_cache_pages+0x22a/0x440 [btrfs]
[72747.556331] ? __set_page_dirty_nobuffers+0xe7/0x160
[72747.556358] ?

• https://git.kernel.org/stable/c/c53e9653605dbf708f5be02902de51831be4b009
• https://git.kernel.org/stable/c/36af2de520cca7c37974cc4944b47850f6c460ee
• https://git.kernel.org/stable/c/d5347827d0b4b2250cbce6eccaa1c81dc78d8651
• https://git.kernel.org/stable/c/96157707c0420e3d3edfe046f1cc797fee117ade
• https://git.kernel.org/stable/c/f9baa501b4fd6962257853d46ddffbc21f27e344
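
The deadlock is a lock-order inversion against a lock the task itself already holds: the clone path keeps the destination range locked in the io tree while starting a transaction, and the qgroup metadata reservation inside that transaction start may flush delalloc, which in turn needs the very same range lock. The sketch below only shows the shape of the problem; every function name in it is a placeholder, not the real btrfs call chain.

    /*
     * Shape of the deadlock described above (placeholder names only).
     */
    static int clone_inline_extent_sketch(struct inode *dst, u64 start, u64 end)
    {
        int ret;

        lock_dst_range(dst, start, end);   /* io-tree range lock taken here  */
        copy_inline_data_to_page(dst);     /* destination page is now dirty  */

        /*
         * With qgroups enabled, starting the transaction may have to flush
         * delalloc to satisfy the qgroup metadata reservation.  Flushing
         * wants to lock [start, end] again -- but this task already holds
         * that lock, so the flush never completes: deadlock.
         */
        ret = start_transaction_reserving_qgroup_metadata(dst);

        unlock_dst_range(dst, start, end);
        return ret;
    }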

CVSS: - | EPSS: 0% | CPEs: 4 | EXPL: 0

In the Linux kernel, the following vulnerability has been resolved: usb: dwc3: gadget: Free gadget structure only after freeing endpoints

As part of commit e81a7018d93a ("usb: dwc3: allocate gadget structure dynamically") the dwc3_gadget_release() was added which will free the dwc->gadget structure upon the device's removal when usb_del_gadget_udc() is called in dwc3_gadget_exit(). However, simply freeing the gadget results in a dangling pointer situation: the endpoints created in dwc3_gadget_init_endpoints() have their dep->endpoint.ep_list members chained off the list_head anchored at dwc->gadget->ep_list. Thus when dwc->gadget is freed, the first dwc3_ep in the list now has a dangling prev pointer and likewise for the next pointer of the dwc3_ep at the tail of the list. The dwc3_gadget_free_endpoints() that follows will result in a use-after-free when it calls list_del(). This was caught by enabling KASAN and performing a driver unbind. The recent commit 568262bf5492 ("usb: dwc3: core: Add shutdown callback for dwc3") also exposes this as a panic during shutdown. There are a few possibilities to fix this. One could be to perform a list_del() of the gadget->ep_list itself which removes it from the rest of the dwc3_ep chain. Another approach is what this patch does, by splitting up the usb_del_gadget_udc() call into its separate "del" and "put" components. This allows dwc3_gadget_free_endpoints() to be called before the gadget is finally freed with usb_put_gadget().

• https://git.kernel.org/stable/c/e81a7018d93a7de31a3f121c9a7eecd0a5ec58b0
• https://git.kernel.org/stable/c/1ea775021282d90e1d08d696b7ab54aa75d688e5
• https://git.kernel.org/stable/c/bc0cdd72493236fb72b390ad38ce581e353c143c
• https://git.kernel.org/stable/c/b4b8e9601d7ee8806d2687f081a42485d27674a1
• https://git.kernel.org/stable/c/bb9c74a5bd1462499fe5ccb1e3c5ac40dcfa9139
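
The essence of the fix is teardown ordering: the last reference to the gadget must be dropped only after the endpoint list that chains off gadget->ep_list has been torn down. The sketch below is modeled on dwc3_gadget_exit() but trimmed to the relevant calls, so it is an illustration of the ordering rather than the literal driver code.

    /*
     * Simplified teardown ordering for the fix described above.
     * usb_del_gadget_udc() is split into its "del" and "put" halves so the
     * endpoints, whose list heads chain off dwc->gadget->ep_list, are freed
     * while the gadget structure is still alive.
     */
    static void dwc3_gadget_exit_sketch(struct dwc3 *dwc)
    {
        usb_del_gadget(dwc->gadget);       /* unregister, but keep the memory    */
        dwc3_gadget_free_endpoints(dwc);   /* safe: gadget->ep_list still valid  */
        usb_put_gadget(dwc->gadget);       /* drop the final reference; release  */
    }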

CVSS: - | EPSS: 0% | CPEs: 8 | EXPL: 0

In the Linux kernel, the following vulnerability has been resolved: ACPI: scan: Fix a memory leak in an error handling path

If 'acpi_device_set_name()' fails, we must free 'acpi_device_bus_id->bus_id' or there is a (potential) memory leak.

• https://git.kernel.org/stable/c/e5cdbe419004e172f642e876a671a9ff1c52f8bb
• https://git.kernel.org/stable/c/717d9d88fbd956ab03fad97266f6ce63a036e7f8
• https://git.kernel.org/stable/c/7385e438e1f31af5b86f72fd19b0dcd2738502c9
• https://git.kernel.org/stable/c/bc0b1a2036dd8072106ec81a8685ecb901f72ed6
• https://git.kernel.org/stable/c/4a5891992c680d69d7e490e4d0428d17779d8e85
• https://git.kernel.org/stable/c/321dbe6c0b551f9f8030becc6900f77cf9bbb9ad
• https://git.kernel.org/stable/c/eb50aaf960e3bedfef79063411ffd670da94b84b
• https://git.kernel.org/stable/c/6901a4f795e0e8d65ae779cb37fc22e0b
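
This is the common "allocate, then a later step fails, then return without freeing" pattern. The fragment below sketches the corrected error path; it is a simplified illustration based on the description above (it assumes bus_id was duplicated with kstrdup_const(), hence kfree_const()), not a verbatim copy of drivers/acpi/scan.c.

    /* Simplified error-path sketch: release what was allocated before bailing out. */
    acpi_device_bus_id->bus_id = kstrdup_const(acpi_device_hid(device), GFP_KERNEL);
    if (!acpi_device_bus_id->bus_id) {
        kfree(acpi_device_bus_id);
        return -ENOMEM;
    }

    result = acpi_device_set_name(device, acpi_device_bus_id);
    if (result) {
        kfree_const(acpi_device_bus_id->bus_id);  /* the free that was missing */
        kfree(acpi_device_bus_id);
        return result;
    }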

CVSS: 6.0 | EPSS: 0% | CPEs: 5 | EXPL: 0

In the Linux kernel, the following vulnerability has been resolved: kyber: fix out of bounds access when preempted

__blk_mq_sched_bio_merge() gets the ctx and hctx for the current CPU and passes the hctx to ->bio_merge(). kyber_bio_merge() then gets the ctx for the current CPU again and uses that to get the corresponding Kyber context in the passed hctx. However, the thread may be preempted between the two calls to blk_mq_get_ctx(), and the ctx returned the second time may no longer correspond to the passed hctx. This "works" accidentally most of the time, but it can cause us to read garbage if the second ctx came from an hctx with more ctx's than the first one (i.e., if ctx->index_hw[hctx->type] > hctx->nr_ctx). This manifested as this UBSAN array index out of bounds error reported by Jakub:

UBSAN: array-index-out-of-bounds in ../kernel/locking/qspinlock.c:130:9
index 13106 is out of range for type 'long unsigned int [128]'
Call Trace:
dump_stack+0xa4/0xe5
ubsan_epilogue+0x5/0x40
__ubsan_handle_out_of_bounds.cold.13+0x2a/0x34
queued_spin_lock_slowpath+0x476/0x480
do_raw_spin_lock+0x1c2/0x1d0
kyber_bio_merge+0x112/0x180
blk_mq_submit_bio+0x1f5/0x1100
submit_bio_noacct+0x7b0/0x870
submit_bio+0xc2/0x3a0
btrfs_map_bio+0x4f0/0x9d0
btrfs_submit_data_bio+0x24e/0x310
submit_one_bio+0x7f/0xb0
submit_extent_page+0xc4/0x440
__extent_writepage_io+0x2b8/0x5e0
__extent_writepage+0x28d/0x6e0
extent_write_cache_pages+0x4d7/0x7a0
extent_writepages+0xa2/0x110
do_writepages+0x8f/0x180
__writeback_single_inode+0x99/0x7f0
writeback_sb_inodes+0x34e/0x790
__writeback_inodes_wb+0x9e/0x120
wb_writeback+0x4d2/0x660
wb_workfn+0x64d/0xa10
process_one_work+0x53a/0xa80
worker_thread+0x69/0x5b0
kthread+0x20b/0x240
ret_from_fork+0x1f/0x30

Only Kyber uses the hctx, so fix it by passing the request_queue to ->bio_merge() instead. BFQ and mq-deadline just use that, and Kyber can map the queues itself to avoid the mismatch.

• https://git.kernel.org/stable/c/a6088845c2bf754d6cb2572b484180680b037804
• https://git.kernel.org/stable/c/0b6b4b90b74c27bea968c214d820ba4254b903a5
• https://git.kernel.org/stable/c/54dbe2d2c1fcabf650c7a8b747601da355cd7f9f
• https://git.kernel.org/stable/c/a287cd84e047045f5a4d4da793414e848de627c6
• https://git.kernel.org/stable/c/2ef3c76540c49167a0bc3d5f80d00fd1fc4586df
• https://git.kernel.org/stable/c/efed9a3337e341bd0989161b97453b52567bc59d
• https://access.redhat.com/security/cve/CVE-2021-46984
• https://bugzilla.redhat.com/show_bug.cgi?id=2266750
• CWE-125: Out-of-bounds Read
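
The root cause is deriving per-CPU state twice with a possible preemption (and CPU migration) in between, then mixing the two results. The sketch below follows the direction of the fix described above: derive the ctx once and map the hctx from it, so the two always agree. It is a trimmed illustration rather than the literal kyber code, and the final helper call is a placeholder for the merge body.

    /*
     * Sketch of the fixed lookup: ctx and hctx are derived from a single
     * blk_mq_get_ctx() call, so ctx->index_hw[hctx->type] is always a
     * valid index into khd->kcqs even if the task just migrated CPUs.
     */
    static bool kyber_bio_merge_sketch(struct request_queue *q, struct bio *bio,
                                       unsigned int nr_segs)
    {
        struct blk_mq_ctx *ctx = blk_mq_get_ctx(q);
        struct blk_mq_hw_ctx *hctx = blk_mq_map_queue(q, bio->bi_opf, ctx);
        struct kyber_hctx_data *khd = hctx->sched_data;
        struct kyber_ctx_queue *kcq = &khd->kcqs[ctx->index_hw[hctx->type]];

        return attempt_merge_under_kcq_lock(kcq, bio, nr_segs);  /* placeholder */
    }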