
CVE-2021-46939

tracing: Restructure trace_clock_global() to never block

Severity Score: 5.5 (CVSS v3.1)
Exploit Likelihood: - (EPSS)
Affected Versions: - (CPE)
Public Exploits: 0 (Multiple Sources)
Exploited in Wild: - (KEV)
Decision: Track (SSVC)
Descriptions

In the Linux kernel, the following vulnerability has been resolved:

tracing: Restructure trace_clock_global() to never block

It was reported that a fix to the ring buffer recursion detection would cause a hung machine when performing suspend / resume testing. The following backtrace was extracted from debugging that case:

Call Trace:
 trace_clock_global+0x91/0xa0
 __rb_reserve_next+0x237/0x460
 ring_buffer_lock_reserve+0x12a/0x3f0
 trace_buffer_lock_reserve+0x10/0x50
 __trace_graph_return+0x1f/0x80
 trace_graph_return+0xb7/0xf0
 ? trace_clock_global+0x91/0xa0
 ftrace_return_to_handler+0x8b/0xf0
 ? pv_hash+0xa0/0xa0
 return_to_handler+0x15/0x30
 ? ftrace_graph_caller+0xa0/0xa0
 ? trace_clock_global+0x91/0xa0
 ? __rb_reserve_next+0x237/0x460
 ? ring_buffer_lock_reserve+0x12a/0x3f0
 ? trace_event_buffer_lock_reserve+0x3c/0x120
 ? trace_event_buffer_reserve+0x6b/0xc0
 ? trace_event_raw_event_device_pm_callback_start+0x125/0x2d0
 ? dpm_run_callback+0x3b/0xc0
 ? pm_ops_is_empty+0x50/0x50
 ? platform_get_irq_byname_optional+0x90/0x90
 ? trace_device_pm_callback_start+0x82/0xd0
 ? dpm_run_callback+0x49/0xc0

With the following RIP:

RIP: 0010:native_queued_spin_lock_slowpath+0x69/0x200

Since the fix to the recursion detection allows a single recursion to happen while tracing, this led to trace_clock_global() taking a spin lock and then trying to take it again:

ring_buffer_lock_reserve() {
  trace_clock_global() {
    arch_spin_lock() {
      queued_spin_lock_slowpath() {
        /* lock taken */
        (something else gets traced by the function graph tracer)
        ring_buffer_lock_reserve() {
          trace_clock_global() {
            arch_spin_lock() {
              queued_spin_lock_slowpath() {
                /* DEAD LOCK! */

Tracing should *never* block, as it can lead to strange lockups like the one above.

Restructure the trace_clock_global() code so that, instead of always taking a lock to update the recorded "prev_time", it simply reads and uses it: for two events happening on two different CPUs that call this at the same time, it really does not matter which one goes first. Use a trylock to grab the lock for updating prev_time, and if it fails, simply try again the next time. If the lock could not be taken, that means something else is already updating it.

Bugzilla: https://bugzilla.kernel.org/show_bug.cgi?id=212761
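The restructuring described above can be illustrated with a small user-space analogue. The sketch below is not the upstream kernel code: the names global_clock(), prev_time and clock_lock are illustrative, and pthread primitives stand in for the kernel's arch_spin_trylock(). It shows the pattern the commit message describes: read the recorded prev_time without taking the lock, clamp the returned timestamp so it never goes backwards, and only update prev_time when a trylock succeeds, so a re-entrant caller never spins on a lock it already holds.

/* Minimal user-space sketch of the non-blocking clock update described
 * above. This is an analogue, not the kernel implementation; all names
 * are illustrative. */
#include <pthread.h>
#include <stdatomic.h>
#include <stdint.h>
#include <time.h>

static _Atomic uint64_t prev_time;          /* last timestamp handed out */
static pthread_spinlock_t clock_lock;

static uint64_t local_clock(void)
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return (uint64_t)ts.tv_sec * 1000000000ULL + ts.tv_nsec;
}

/* Returns a timestamp that never goes backwards across callers,
 * without ever blocking on clock_lock. */
uint64_t global_clock(void)
{
    uint64_t prev = atomic_load(&prev_time);
    uint64_t now  = local_clock();

    /* Never return a value behind the last recorded time. */
    if ((int64_t)(now - prev) < 0)
        now = prev;

    /* Opportunistically record the new time. If the lock is busy,
     * someone else is already updating prev_time; skip the update
     * instead of spinning (and possibly deadlocking on re-entry). */
    if (pthread_spin_trylock(&clock_lock) == 0) {
        prev = atomic_load(&prev_time);
        if ((int64_t)(now - prev) >= 0)
            atomic_store(&prev_time, now);
        else
            now = prev;
        pthread_spin_unlock(&clock_lock);
    }
    return now;
}

int main(void)
{
    pthread_spin_init(&clock_lock, PTHREAD_PROCESS_PRIVATE);
    return global_clock() ? 0 : 1;
}

If the trylock fails, the update is simply skipped; a later caller that does take the lock records a newer time, which matches the "try again the next time" behaviour described in the commit message.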

Zheng Wang discovered that the Broadcom FullMAC WLAN driver in the Linux kernel contained a race condition during device removal, leading to a use-after-free vulnerability. A physically proximate attacker could possibly use this to cause a denial of service. Several security issues were discovered in the Linux kernel. An attacker could possibly use these to compromise the system.

*Credits: N/A
CVSS Scores
CVSS v3.1
  • Attack Vector: Local
  • Attack Complexity: Low
  • Privileges Required: Low
  • User Interaction: None
  • Scope: Unchanged
  • Confidentiality: None
  • Integrity: None
  • Availability: High

CVSS v2
  • Attack Vector: Local
  • Attack Complexity: Low
  • Authentication: Single
  • Confidentiality: None
  • Integrity: None
  • Availability: Complete
* Common Vulnerability Scoring System
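As a worked check of the listed score, the CVSS v3.1 metrics above correspond to the vector AV:L/AC:L/PR:L/UI:N/S:U/C:N/I:N/A:H. The short program below recomputes the base score from the published specification weights; it is only a sanity check of the 5.5 shown at the top of the page, not part of any advisory.

/* Recompute the CVSS v3.1 base score for AV:L/AC:L/PR:L/UI:N/S:U/C:N/I:N/A:H.
 * Weights are the values from the CVSS v3.1 specification. */
#include <math.h>
#include <stdio.h>

int main(void)
{
    /* Exploitability metric weights (scope unchanged) */
    double av = 0.55, ac = 0.77, pr = 0.62, ui = 0.85;   /* Local, Low, Low, None */
    /* Impact metric weights */
    double c = 0.0, i = 0.0, a = 0.56;                   /* None, None, High */

    double iss = 1.0 - (1.0 - c) * (1.0 - i) * (1.0 - a);
    double impact = 6.42 * iss;                          /* scope unchanged */
    double exploitability = 8.22 * av * ac * pr * ui;

    /* Base score: round up to one decimal place */
    double base = ceil(fmin(impact + exploitability, 10.0) * 10.0) / 10.0;
    printf("CVSS v3.1 base score: %.1f\n", base);        /* prints 5.5 */
    return 0;
}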
SSVC
  • Decision: Track
  • Exploitation: None
  • Automatable: No
  • Tech. Impact: Partial
* Organization's Worst-case Scenario
Timeline
  • 2024-02-25 CVE Reserved
  • 2024-02-27 CVE Published
  • 2024-12-19 CVE Updated
  • 2025-03-18 EPSS Updated
  • ---------- Exploited in Wild
  • ---------- KEV Due Date
  • ---------- First Exploit
CWE
  • CWE-662: Improper Synchronization
  • CWE-833: Deadlock
CAPEC
Affected Vendors, Products, and Versions
Vendor    Product         Version                   Status
Linux     Linux Kernel    >= 2.6.30 < 4.4.269       Affected
Linux     Linux Kernel    >= 2.6.30 < 4.9.269       Affected
Linux     Linux Kernel    >= 2.6.30 < 4.14.233      Affected
Linux     Linux Kernel    >= 2.6.30 < 4.19.191      Affected
Linux     Linux Kernel    >= 2.6.30 < 5.4.118       Affected
Linux     Linux Kernel    >= 2.6.30 < 5.10.36       Affected
Linux     Linux Kernel    >= 2.6.30 < 5.11.20       Affected
Linux     Linux Kernel    >= 2.6.30 < 5.12.3        Affected
Linux     Linux Kernel    >= 2.6.30 < 5.13          Affected