
CVSS: - | EPSS: 0% | CPEs: 3 | EXPL: 0

In the Linux kernel, the following vulnerability has been resolved: tracing: Have format file honor EVENT_FILE_FL_FREED. When eventfs was introduced, special care had to be taken to coordinate the freeing of the file meta data with the files that are exposed to user space. The file meta data has a ref count that is set when the file is created and is decremented, and the meta data freed, after the last user that opened the file has closed it. When the file meta data is to be freed, a flag (EVENT_FILE_FL_FREED) is set to denote that the file is freed, and any new references (such as new opens or reads) fail because it is marked freed. This allows other meta data to be freed after the flag is set (under the event_mutex). All the files dynamically created in the events directory have a pointer to the file meta data and call event_release() when the last reference to the user-space file is closed; that is the point at which it is safe to free the file meta data. A shortcut was made for the "format" file.
• https://git.kernel.org/stable/c/14aa4f3efc6e784847e8c8543a7ef34ec9bdbb01
• https://git.kernel.org/stable/c/b63db58e2fa5d6963db9c45df88e60060f0ff35f
• https://git.kernel.org/stable/c/4ed03758ddf0b19d69eed69386d65a92d0091e0c
• https://git.kernel.org/stable/c/531dc6780d94245af037c25c2371c8caf652f0f9
• https://git.kernel.org/stable/c/b1560408692cd0ab0370cfbe9deb03ce97ab3f6d
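
For illustration, here is a minimal user-space sketch of the ref-count plus FREED-flag pattern the description above relies on. The struct layout, the get/put/mark_freed helpers and the pthread locking are simplifying assumptions, not the tracefs/eventfs implementation; only the names EVENT_FILE_FL_FREED, event_mutex and event_release() come from the text.

```c
/*
 * Sketch only: ref-counted file meta data with a FREED flag, so new
 * references fail once the meta data is slated for freeing and the
 * actual free happens on the last put.
 */
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

#define EVENT_FILE_FL_FREED 0x1

struct event_file_meta {
	pthread_mutex_t lock;   /* stands in for the kernel's event_mutex */
	int refcount;           /* one reference per open user-space file */
	unsigned int flags;
};

/* New opens/reads must fail once the meta data has been marked freed. */
static int event_file_get(struct event_file_meta *m)
{
	int ret = -1;

	pthread_mutex_lock(&m->lock);
	if (!(m->flags & EVENT_FILE_FL_FREED)) {
		m->refcount++;
		ret = 0;
	}
	pthread_mutex_unlock(&m->lock);
	return ret;
}

/* Last close (event_release() in the description) frees the meta data. */
static void event_file_put(struct event_file_meta *m)
{
	int free_it;

	pthread_mutex_lock(&m->lock);
	free_it = (--m->refcount == 0) && (m->flags & EVENT_FILE_FL_FREED);
	pthread_mutex_unlock(&m->lock);
	if (free_it)
		free(m);
}

/* Mark the meta data freed so no new references can be taken. */
static void event_file_mark_freed(struct event_file_meta *m)
{
	pthread_mutex_lock(&m->lock);
	m->flags |= EVENT_FILE_FL_FREED;
	pthread_mutex_unlock(&m->lock);
}

int main(void)
{
	struct event_file_meta *m = calloc(1, sizeof(*m));

	pthread_mutex_init(&m->lock, NULL);
	m->refcount = 1;              /* reference held by an open file */

	event_file_mark_freed(m);     /* meta data scheduled for freeing */
	printf("new open after FREED: %s\n",
	       event_file_get(m) ? "rejected" : "allowed");
	event_file_put(m);            /* last close actually frees */
	return 0;
}
```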

CVSS: - | EPSS: 0% | CPEs: 8 | EXPL: 0

In the Linux kernel, the following vulnerability has been resolved: tracing: Fix overflow in get_free_elt(). "tracing_map->next_elt" in get_free_elt() is at risk of overflowing. Once it overflows, new elements can still be inserted into the tracing_map even though the maximum number of elements ("max_elts") has been reached. Continuing to insert elements after the overflow could result in the tracing_map containing "tracing_map->max_size" elements, leaving no empty entries. If any attempt is made to insert an element into a full tracing_map using __tracing_map_insert(), it will cause an infinite loop with preemption disabled, leading to a CPU hang problem. Fix this by preventing any further increments to "tracing_map->next_elt" once it reaches "tracing_map->max_elt".
• https://git.kernel.org/stable/c/08d43a5fa063e03c860f2f391a30c388bcbc948e
• https://git.kernel.org/stable/c/302ceb625d7b990db205a15e371f9a71238de91c
• https://git.kernel.org/stable/c/d3e4dbc2858fe85d1dbd2e72a9fc5dea988b5c18
• https://git.kernel.org/stable/c/eb223bf01e688dfe37e813c8988ee11c8c9f8d0a
• https://git.kernel.org/stable/c/cd10d186a5409a1fe6e976df82858e9773a698da
• https://git.kernel.org/stable/c/788ea62499b3c18541fd6d621964d8fafbc4aec5
• https://git.kernel.org/stable/c/a172c7b22bc2feaf489cfc6d6865f7237134fdf8
• https://git.kernel.org/stable/c/236bb4690773ab6869b40bedc7bc8d889
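
As a rough illustration of the guard described above, the user-space sketch below stops handing out element slots once the maximum is reached, so a full map returns an error instead of letting the counter wrap and overfilling the map. The struct layout, field types and the -1 error return are assumptions for the example, not the kernel's tracing_map code.

```c
/* Sketch only: cap the free-element counter so it cannot overflow. */
#include <stdio.h>

struct tracing_map_sketch {
	unsigned int next_elt;   /* next free element index */
	unsigned int max_elts;   /* maximum number of elements */
};

/* Returns a slot index, or -1 when the map is already full. */
static int get_free_elt(struct tracing_map_sketch *map)
{
	if (map->next_elt >= map->max_elts)
		return -1;            /* full: caller must stop inserting */

	return map->next_elt++;      /* never increments past max_elts */
}

int main(void)
{
	struct tracing_map_sketch map = { .next_elt = 0, .max_elts = 3 };

	for (int i = 0; i < 5; i++)
		printf("insert %d -> slot %d\n", i, get_free_elt(&map));

	return 0;
}
```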

CVSS: 5.5 | EPSS: 0% | CPEs: 6 | EXPL: 0

In the Linux kernel, the following vulnerability has been resolved: padata: Fix possible divide-by-0 panic in padata_mt_helper(). We are hit with a not easily reproducible divide-by-0 panic in padata.c at bootup time.
[ 10.017908] Oops: divide error: 0000 [#1] PREEMPT SMP NOPTI
[ 10.017908] CPU: 26 PID: 2627 Comm: kworker/u1666:1 Not tainted 6.10.0-15.el10.x86_64 #1
[ 10.017908] Hardware name: Lenovo ThinkSystem SR950 [7X12CTO1WW]/[7X12CTO1WW], BIOS [PSE140J-2.30] 07/20/2021
[ 10.017908] Workqueue: events_unbound padata_mt_helper
[ 10.017908] RIP: 0010:padata_mt_helper+0x39/0xb0
  :
[ 10.017963] Call Trace:
[ 10.017968]  <TASK>
[ 10.018004]  ? padata_mt_helper+0x39/0xb0
[ 10.018084]  process_one_work+0x174/0x330
[ 10.018093]  worker_thread+0x266/0x3a0
[ 10.018111]  kthread+0xcf/0x100
[ 10.018124]  ret_from_fork+0x31/0x50
[ 10.018138]  ret_from_fork_asm+0x1a/0x30
[ 10.018147]  </TASK>
Looking at the padata_mt_helper() function, the only way a divide-by-0 panic can happen is when ps->chunk_size is 0. Given the way chunk_size is initialized in padata_do_multithreaded(), chunk_size can be 0 when the min_chunk in the passed-in padata_mt_job structure is 0. Fix this divide-by-0 panic by making sure that chunk_size is at least 1 no matter what the input parameters are. In short, a denial-of-service vulnerability exists in the Linux kernel: a possible divide-by-zero in the padata_mt_helper() function when ps->chunk_size is 0.
• https://git.kernel.org/stable/c/004ed42638f4428e70ead59d170f3d17ff761a0f
• https://git.kernel.org/stable/c/ab8b397d5997d8c37610252528edc54bebf9f6d3
• https://git.kernel.org/stable/c/8f5ffd2af7274853ff91d6cd62541191d9fbd10d
• https://git.kernel.org/stable/c/a29cfcb848c31f22b4de6a531c3e1d68c9bfe09f
• https://git.kernel.org/stable/c/924f788c906dccaca30acab86c7124371e1d6f2c
• https://git.kernel.org/stable/c/da0ffe84fcc1627a7dff82c80b823b94236af905
• https://git.kernel.org/stable/c/6d45e1c948a8b7ed6ceddb14319af69424db730c
• https://access.redhat.com/security/cve/CVE-2024-43889
• CWE-369: Divide By Zero
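
The core idea of the fix, clamping the computed chunk size to at least 1, can be shown with a small stand-alone sketch. The struct, the nworks parameter and the arithmetic below are illustrative assumptions; only the names chunk_size and min_chunk, and the fact that the chunk size must never be 0, come from the description above.

```c
/* Sketch only: never let the chunk size reach 0, so later divisions
 * by it cannot trap. */
#include <stdio.h>

struct mt_job_sketch {
	unsigned long size;       /* total units of work */
	unsigned long min_chunk;  /* caller's minimum chunk size (may be 0) */
};

static unsigned long max_ul(unsigned long a, unsigned long b)
{
	return a > b ? a : b;
}

static unsigned long compute_chunk_size(const struct mt_job_sketch *job,
					unsigned long nworks)
{
	unsigned long chunk = job->size / nworks;

	chunk = max_ul(chunk, job->min_chunk);
	return max_ul(chunk, 1UL);   /* never 0, so size / chunk is safe */
}

int main(void)
{
	struct mt_job_sketch job = { .size = 2, .min_chunk = 0 };
	unsigned long chunk = compute_chunk_size(&job, 8);

	printf("chunk_size=%lu, chunks=%lu\n", chunk, job.size / chunk);
	return 0;
}
```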

CVSS: 7.1 | EPSS: 0% | CPEs: 2 | EXPL: 0

In the Linux kernel, the following vulnerability has been resolved: mm: list_lru: fix UAF for memory cgroup. mem_cgroup_from_slab_obj() is supposed to be called under the RCU read lock, cgroup_mutex, or some other mechanism that prevents the returned memcg from being freed. Fix it by adding the missing rcu read lock. Found by code inspection. [songmuchun@bytedance.com: only grab rcu lock when necessary, per Vlastimil] Link: https://lkml.kernel.org/r/20240801024603.1865-1-songmuchun@bytedance.com
• https://git.kernel.org/stable/c/0a97c01cd20bb96359d8c9dedad92a061ed34e0b
• https://git.kernel.org/stable/c/4589f77c18dd98b65f45617b6d1e95313cf6fcab
• https://git.kernel.org/stable/c/5161b48712dcd08ec427c450399d4d1483e21dea
• https://access.redhat.com/security/cve/CVE-2024-43888
• https://bugzilla.redhat.com/show_bug.cgi?id=2307861
• CWE-416: Use After Free
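
To illustrate the shape of the fix, the sketch below wraps the lookup that may race with the object being freed in a read-side lock. A pthread rwlock stands in for the kernel's rcu_read_lock()/rcu_read_unlock(); the struct, helper names and globals are assumptions made for the example, not the list_lru or memcg code.

```c
/* Sketch only: the lookup is valid only while a read-side lock is held. */
#include <pthread.h>
#include <stdio.h>

static pthread_rwlock_t fake_rcu = PTHREAD_RWLOCK_INITIALIZER;

struct memcg { int id; };
static struct memcg *global_memcg;   /* may be replaced/freed by writers */

/* Caller must hold the read-side lock, as the description requires. */
static struct memcg *memcg_from_obj_locked(void)
{
	return global_memcg;
}

static int memcg_id_of_obj(void)
{
	int id = -1;

	pthread_rwlock_rdlock(&fake_rcu);        /* ~ rcu_read_lock() */
	struct memcg *m = memcg_from_obj_locked();
	if (m)
		id = m->id;
	pthread_rwlock_unlock(&fake_rcu);        /* ~ rcu_read_unlock() */

	return id;
}

int main(void)
{
	static struct memcg m = { .id = 42 };

	global_memcg = &m;
	printf("memcg id: %d\n", memcg_id_of_obj());
	return 0;
}
```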

CVSS: - | EPSS: 0% | CPEs: 2 | EXPL: 0

In the Linux kernel, the following vulnerability has been resolved: net/tcp: Disable TCP-AO static key after RCU grace period. The lifetime of the TCP-AO static_key is the same as that of the last tcp_ao_info. On socket destruction, tcp_ao_info is freed after an RCU grace period, while the tcp-ao static branch is currently disabled with a deferred release. The static key definition is:
DEFINE_STATIC_KEY_DEFERRED_FALSE(tcp_ao_needed, HZ);
which means that if the RCU grace period is delayed by more than a second while tcp_ao_needed is in the process of being disabled, other CPUs may still see a tcp_ao_info that isn't dead yet, but soon will be. And that breaks the assumption of static_key_fast_inc_not_disabled(). See the comment near the definition:
> * The caller must make sure that the static key can't get disabled while
> * in this function. It doesn't patch jump labels, only adds a user to
> * an already enabled static key.
Originally it was introduced in commit eb8c507296f6 ("jump_label: Prevent key->enabled int overflow"), which is needed for atomic contexts, one of which would be the creation of a full socket from a request socket. In that atomic context, it's known by the presence of the key (md5/ao) that the static branch is already enabled, so the ref counter for that static branch is just incremented instead of holding the proper mutex. static_key_fast_inc_not_disabled() is just a helper for such a use case.
• https://git.kernel.org/stable/c/67fa83f7c86a86913ab9cd5a13b4bebd8d2ebb43
• https://git.kernel.org/stable/c/954d55a59b2501f4a9bd693b40ce45a1c46cb2b3
• https://git.kernel.org/stable/c/14ab4792ee120c022f276a7e4768f4dcb08f0cdd
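
The sketch below is a rough user-space simulation of the ordering the fix enforces: the static-key user count is only dropped after the window in which readers may still find a tcp_ao_info, so a fast-path increment taken in that window still sees the key enabled. Atomics, a worker thread and a sleep stand in for the static key machinery, the RCU callback and the grace period; the names tcp_ao_needed_users, key_fast_inc_not_disabled() and deferred_key_put() are invented for the example, and none of this is the kernel implementation.

```c
/* Sketch only: defer the "key put" past the window in which readers
 * may still take a fast reference. */
#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>
#include <unistd.h>

static atomic_int tcp_ao_needed_users;   /* stand-in for the static key */

/* Readers in atomic context rely on the key staying enabled. */
static int key_fast_inc_not_disabled(void)
{
	int old = atomic_load(&tcp_ao_needed_users);

	while (old > 0) {
		if (atomic_compare_exchange_weak(&tcp_ao_needed_users,
						 &old, old + 1))
			return 1;         /* got a reference */
	}
	return 0;                         /* key already disabled */
}

static void *deferred_key_put(void *arg)
{
	(void)arg;
	sleep(1);                         /* ~ RCU grace period elapsing */
	atomic_fetch_sub(&tcp_ao_needed_users, 1);
	return NULL;
}

int main(void)
{
	pthread_t t;

	atomic_store(&tcp_ao_needed_users, 1);   /* last tcp_ao_info alive */

	/* Socket destruction: schedule the put instead of doing it now. */
	pthread_create(&t, NULL, deferred_key_put, NULL);

	/* A request->full socket conversion during the grace period. */
	printf("fast inc succeeded: %d\n", key_fast_inc_not_disabled());

	pthread_join(t, NULL);
	return 0;
}
```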