
CVSS: - | EPSS: 0% | CPEs: 4 | EXPL: 0

In the Linux kernel, the following vulnerability has been resolved: mailbox: mtk-cmdq: Move devm_mbox_controller_register() after devm_pm_runtime_enable() When mtk-cmdq unbinds, a WARN_ON message with condition pm_runtime_get_sync() < 0 occurs, according to the call trace below:

cmdq_mbox_shutdown
 mbox_free_channel
  mbox_controller_unregister
   __devm_mbox_controller_unregister
    ...

The root cause can be deduced to be calling pm_runtime_get_sync() after calling pm_runtime_disable(), as observed below:

1. The CMDQ driver uses devm_mbox_controller_register() in cmdq_probe() to bind the cmdq device to the mbox_controller, so devm_mbox_controller_unregister() will automatically unregister the device bound to the mailbox controller when the device-managed resource is removed. That means devm_mbox_controller_unregister() and cmdq_mbox_shutdown() will be called after cmdq_remove().

2. The CMDQ driver also uses devm_pm_runtime_enable() in cmdq_probe() after devm_mbox_controller_register(), so devm_pm_runtime_disable() will be called after cmdq_remove(), but before devm_mbox_controller_unregister().

To fix this problem, cmdq_probe() needs to move devm_mbox_controller_register() after devm_pm_runtime_enable(), so that devm_pm_runtime_disable() is called after devm_mbox_controller_unregister(). • https://git.kernel.org/stable/c/623a6143a845bd485b00ba684f0ccef11835edab https://git.kernel.org/stable/c/1403991a40b94438a2acc749bf05c117abdb34f9 https://git.kernel.org/stable/c/d00df6700ad10974a7e20646956f4ff22cdbe0ec https://git.kernel.org/stable/c/11fa625b45faf0649118b9deaf2d31c86ac41911 https://git.kernel.org/stable/c/a8bd68e4329f9a0ad1b878733e0f80be6a971649 •
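The fix relies on devm-managed resources being released in reverse order of registration. A minimal kernel-style sketch of that reordering, assuming a simplified driver context (the struct cmdq and its mbox field are placeholders here, not the actual mtk-cmdq code):

#include <linux/device.h>
#include <linux/mailbox_controller.h>
#include <linux/platform_device.h>
#include <linux/pm_runtime.h>

struct cmdq {
    struct mbox_controller mbox;    /* placeholder driver state; ops/chans setup omitted */
};

static int cmdq_probe(struct platform_device *pdev)
{
    struct device *dev = &pdev->dev;
    struct cmdq *cmdq;
    int err;

    cmdq = devm_kzalloc(dev, sizeof(*cmdq), GFP_KERNEL);
    if (!cmdq)
        return -ENOMEM;

    /* Enable runtime PM first: its devm cleanup runs last on remove. */
    err = devm_pm_runtime_enable(dev);
    if (err)
        return err;

    /*
     * Register the mailbox controller second: its devm cleanup (and thus
     * cmdq_mbox_shutdown() with its pm_runtime_get_sync() call) now runs
     * before runtime PM is disabled, avoiding the WARN_ON.
     */
    err = devm_mbox_controller_register(dev, &cmdq->mbox);
    if (err)
        return err;

    return 0;
}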

CVSS: - | EPSS: 0% | CPEs: 5 | EXPL: 0

In the Linux kernel, the following vulnerability has been resolved: landlock: Don't lose track of restrictions on cred_transfer When a process' cred struct is replaced, this _almost_ always invokes the cred_prepare LSM hook; but in one special case (when KEYCTL_SESSION_TO_PARENT updates the parent's credentials), the cred_transfer LSM hook is used instead. Landlock only implements the cred_prepare hook, not cred_transfer, so KEYCTL_SESSION_TO_PARENT causes all information on Landlock restrictions to be lost. This means that a process able to use the fork() and keyctl() syscalls can shed every Landlock restriction placed on it. Fix it by adding a cred_transfer hook that does the same thing as the existing cred_prepare hook. (Implemented by having hook_cred_prepare() call hook_cred_transfer(), so that the two functions are less likely to accidentally diverge in the future.) In short, Landlock could be disabled entirely because of the missing cred_transfer hook. • https://git.kernel.org/stable/c/385975dca53eb41031d0cbd1de318eb1bc5d6bb9 https://git.kernel.org/stable/c/916c648323fa53b89eedb34a0988ddaf01406117 https://git.kernel.org/stable/c/0d74fd54db0bd0c0c224bef0da8fc95ea9c9f36c https://git.kernel.org/stable/c/16896914bace82d7811c62f3b6d5320132384f49 https://git.kernel.org/stable/c/b14cc2cf313bd29056fadbc8ecd7f957cf5791ff https://git.kernel.org/stable/c/39705a6c29f8a2b93cf5b99528a55366c50014d1 https://lore.kernel.org/all/20240817.shahka3Ee1iy@digikod.net https://www.openwall.com/lists/oss-security/2024/08/17& •
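The fix is described as wiring a new cred_transfer hook to the same logic as cred_prepare. A hedged sketch of that shape, assuming Landlock's private helpers (landlock_cred(), landlock_get_ruleset()) and struct landlock_ruleset behave as their names suggest; this is illustrative, not the verbatim upstream patch:

#include <linux/cred.h>
#include <linux/lsm_hooks.h>
/* landlock_cred(), landlock_get_ruleset() and struct landlock_ruleset are
 * assumed here; they live in security/landlock's private headers. */

static void hook_cred_transfer(struct cred *const new,
                               const struct cred *const old)
{
    /* Carry the Landlock domain (ruleset) over to the new credentials. */
    struct landlock_ruleset *const old_dom = landlock_cred(old)->domain;

    if (old_dom) {
        landlock_get_ruleset(old_dom);    /* take a reference */
        landlock_cred(new)->domain = old_dom;
    }
}

static int hook_cred_prepare(struct cred *const new,
                             const struct cred *const old, const gfp_t gfp)
{
    /* Reuse the transfer logic so the two hooks cannot diverge. */
    hook_cred_transfer(new, old);
    return 0;
}

/* With both hooks registered, KEYCTL_SESSION_TO_PARENT no longer bypasses
 * Landlock. */
static struct security_hook_list landlock_hooks[] = {
    LSM_HOOK_INIT(cred_prepare, hook_cred_prepare),
    LSM_HOOK_INIT(cred_transfer, hook_cred_transfer),
};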

CVSS: - | EPSS: 0% | CPEs: 2 | EXPL: 0

In the Linux kernel, the following vulnerability has been resolved: mm/huge_memory: avoid PMD-size page cache if needed xarray can't support an arbitrary page cache size; the largest supported page cache size is defined as MAX_PAGECACHE_ORDER by commit 099d90642a71 ("mm/filemap: make MAX_PAGECACHE_ORDER acceptable to xarray"). However, it's possible to create a 512MB page cache entry in the huge memory collapsing path on an ARM64 system whose base page size is 64KB. A 512MB page cache entry breaks that limitation, and a warning is raised when the xarray entry is split, as shown in the following example.

[root@dhcp-10-26-1-207 ~]# cat /proc/1/smaps | grep KernelPageSize
KernelPageSize:     64 kB
[root@dhcp-10-26-1-207 ~]# cat /tmp/test.c
 :
int main(int argc, char **argv)
{
    const char *filename = TEST_XFS_FILENAME;
    int fd = 0;
    void *buf = (void *)-1, *p;
    int pgsize = getpagesize();
    int ret = 0;

    if (pgsize != 0x10000) {
        fprintf(stdout, "System with 64KB base page size is required!\n");
        return -EPERM;
    }

    system("echo 0 > /sys/devices/virtual/bdi/253:0/read_ahead_kb");
    system("echo 1 > /proc/sys/vm/drop_caches");

    /* Open the xfs file */
    fd = open(filename, O_RDONLY);
    assert(fd > 0);

    /* Create VMA */
    buf = mmap(NULL, TEST_MEM_SIZE, PROT_READ, MAP_SHARED, fd, 0);
    assert(buf !
• https://git.kernel.org/stable/c/6b24ca4a1a8d4ee3221d6d44ddbb99f542e4bda3 https://git.kernel.org/stable/c/e60f62f75c99740a28e2bf7e6044086033012a16 https://git.kernel.org/stable/c/d659b715e94ac039803d7601505d3473393fc0be •
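The fix amounts to refusing PMD-sized page cache folios when the PMD order exceeds what the xarray-backed page cache can represent. A hedged sketch of such a guard, using a hypothetical helper name rather than the exact upstream diff:

#include <linux/huge_mm.h>
#include <linux/mm.h>
#include <linux/pagemap.h>

/*
 * Hypothetical helper: only allow a PMD-sized folio for file-backed VMAs if
 * the page cache can hold an entry of that order. On arm64 with 64KB base
 * pages, HPAGE_PMD_ORDER corresponds to a 512MB folio, which is larger than
 * MAX_PAGECACHE_ORDER permits, so this returns false there.
 */
static inline bool pmd_order_fits_pagecache(struct vm_area_struct *vma)
{
    if (vma_is_anonymous(vma))
        return true;    /* anonymous memory is not limited by the page cache */
    return HPAGE_PMD_ORDER <= MAX_PAGECACHE_ORDER;
}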

CVSS: - | EPSS: 0% | CPEs: 4 | EXPL: 0

In the Linux kernel, the following vulnerability has been resolved: mm/mglru: fix div-by-zero in vmpressure_calc_level() evict_folios() uses a second pass to reclaim folios that have gone through page writeback and become clean before it finishes the first pass, since folio_rotate_reclaimable() cannot handle those folios due to the isolation. The second pass tries to avoid potential double counting by deducting scan_control->nr_scanned. However, this can result in underflow of nr_scanned, under a condition where shrink_folio_list() does not increment nr_scanned, i.e., when folio_trylock() fails. The underflow can cause the divisor, i.e., scale=scanned+reclaimed in vmpressure_calc_level(), to become zero, resulting in the following crash:

[exception RIP: vmpressure_work_fn+101]
process_one_work at ffffffffa3313f2b

Since scan_control->nr_scanned has no established semantics, the potential double counting has minimal risks. Therefore, fix the problem by not deducting scan_control->nr_scanned in evict_folios(). • https://git.kernel.org/stable/c/359a5e1416caaf9ce28396a65ed3e386cc5de663 https://git.kernel.org/stable/c/8de7bf77f21068a5f602bb1e59adbc5ab533509d https://git.kernel.org/stable/c/d6510f234c7d117790397f9bb150816b0a954a04 https://git.kernel.org/stable/c/a39e38be632f0e1c908d70d1c9cd071c03faf895 https://git.kernel.org/stable/c/8b671fe1a879923ecfb72dda6caf01460dd885ef •
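The failure mode combines two effects: deducting from an unsigned counter that was never incremented wraps it around, and a zero "scanned + reclaimed" sum turns the pressure calculation into a division by zero. A minimal, self-contained userspace illustration of both hazards (calc_level() is a hypothetical stand-in for the kernel's vmpressure_calc_level(), not its real code):

#include <stdio.h>

/* Hypothetical stand-in: the real divisor is scale = scanned + reclaimed. */
static unsigned long calc_level(unsigned long scanned, unsigned long reclaimed)
{
    unsigned long scale = scanned + reclaimed;

    if (scale == 0) {
        /* The kernel path has no such guard, so this is where the
         * div-by-zero crash quoted above would occur. */
        fprintf(stderr, "scale == 0: would divide by zero\n");
        return 0;
    }
    return (scale - reclaimed) * 100 / scale;
}

int main(void)
{
    unsigned long nr_scanned = 0;

    /* Deducting folios that were never counted (folio_trylock() failed,
     * so nr_scanned was not incremented) wraps the unsigned counter. */
    nr_scanned -= 5;
    printf("nr_scanned after underflow: %lu\n", nr_scanned);

    /* A zero scanned/reclaimed pair reaches the divisor as zero. */
    calc_level(0, 0);
    return 0;
}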

CVSS: - | EPSS: 0% | CPEs: 5 | EXPL: 0

In the Linux kernel, the following vulnerability has been resolved: exfat: fix potential deadlock on __exfat_get_dentry_set When accessing a file with more entries than ES_MAX_ENTRY_NUM, the bh-array is allocated in __exfat_get_entry_set. The problem is that the bh-array is allocated with GFP_KERNEL, which allows the allocation to recurse into filesystem reclaim while sbi->s_lock is held. In the following scenario, a deadlock on sbi->s_lock between two processes may occur:

CPU0                               CPU1
----                               ----
kswapd
 balance_pgdat
  lock(fs_reclaim)
                                   exfat_iterate
                                    lock(&sbi->s_lock)
                                     exfat_readdir
                                      exfat_get_uniname_from_ext_entry
                                       exfat_get_dentry_set
                                        __exfat_get_dentry_set
                                         kmalloc_array
                                          ...
                                           lock(fs_reclaim)
  ...
   evict
    exfat_evict_inode
     lock(&sbi->s_lock)

To fix this, let's allocate the bh-array with GFP_NOFS. • https://git.kernel.org/stable/c/bd3bdb9e0d656f760b11d0c638d35d7f7068144d https://git.kernel.org/stable/c/92dcd7d6c6068bf4fd35a6f64d606e27d634807e https://git.kernel.org/stable/c/a3ff29a95fde16906304455aa8c0bd84eb770258 https://git.kernel.org/stable/c/632fb232b6bbf8277edcbe9ecd4b4d98ecb122eb https://git.kernel.org/stable/c/c052f775ee6ccacd3c97e4cf41a2a657e63d4259 https://git.kernel.org/stable/c/a7ac198f8dba791e3144c4da48a5a9b95773ee4b https://git.kernel.org/stable/c/1d1970493c289e3f44b9ec847ed26a5dbdf56a62 https://git.kernel.org/stable/c/89fc548767a2155231128cb98726d6d2e •
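The fix described above is a change of allocation flags. A hedged sketch of it, wrapped as a small helper so it is self-contained (the helper name and parameters are assumptions, not the exfat diff itself):

#include <linux/buffer_head.h>
#include <linux/slab.h>

/*
 * Allocate the bh-array with GFP_NOFS instead of GFP_KERNEL: memory reclaim
 * triggered by this allocation can then no longer re-enter the filesystem
 * and try to take sbi->s_lock, so the kswapd-vs-readdir deadlock shown in
 * the scenario above cannot form.
 */
static struct buffer_head **exfat_alloc_bh_array(unsigned int num_bh)
{
    return kmalloc_array(num_bh, sizeof(struct buffer_head *), GFP_NOFS);
}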