
CVE-2024-36003

ice: fix LAG and VF lock dependency in ice_reset_vf()

Severity Score

5.5
*CVSS v3.1

Exploit Likelihood

*EPSS

Affected Versions

*CPE

Public Exploits

0
*Multiple Sources

Exploited in Wild

-
*KEV

Decision

Track
*SSVC
Descriptions

In the Linux kernel, the following vulnerability has been resolved:

ice: fix LAG and VF lock dependency in ice_reset_vf()

Since commit 9f74a3dfcf83 ("ice: Fix VF Reset paths when interface in a failed over aggregate"), the ice driver has acquired the LAG mutex in ice_reset_vf(). The commit placed this lock acquisition just prior to the acquisition of the VF configuration lock.

If ice_reset_vf() acquires the configuration lock via the ICE_VF_RESET_LOCK flag, this could deadlock with ice_vc_cfg_qs_msg() because that function always acquires the locks in the order of the VF configuration lock and then the LAG mutex.

Lockdep reports this violation almost immediately on creating and then removing 2 VFs:

======================================================
WARNING: possible circular locking dependency detected
6.8.0-rc6 #54 Tainted: G W O
------------------------------------------------------
kworker/60:3/6771 is trying to acquire lock:
ff40d43e099380a0 (&vf->cfg_lock){+.+.}-{3:3}, at: ice_reset_vf+0x22f/0x4d0 [ice]

but task is already holding lock:
ff40d43ea1961210 (&pf->lag_mutex){+.+.}-{3:3}, at: ice_reset_vf+0xb7/0x4d0 [ice]

which lock already depends on the new lock.

the existing dependency chain (in reverse order) is:

-> #1 (&pf->lag_mutex){+.+.}-{3:3}:
       __lock_acquire+0x4f8/0xb40
       lock_acquire+0xd4/0x2d0
       __mutex_lock+0x9b/0xbf0
       ice_vc_cfg_qs_msg+0x45/0x690 [ice]
       ice_vc_process_vf_msg+0x4f5/0x870 [ice]
       __ice_clean_ctrlq+0x2b5/0x600 [ice]
       ice_service_task+0x2c9/0x480 [ice]
       process_one_work+0x1e9/0x4d0
       worker_thread+0x1e1/0x3d0
       kthread+0x104/0x140
       ret_from_fork+0x31/0x50
       ret_from_fork_asm+0x1b/0x30

-> #0 (&vf->cfg_lock){+.+.}-{3:3}:
       check_prev_add+0xe2/0xc50
       validate_chain+0x558/0x800
       __lock_acquire+0x4f8/0xb40
       lock_acquire+0xd4/0x2d0
       __mutex_lock+0x9b/0xbf0
       ice_reset_vf+0x22f/0x4d0 [ice]
       ice_process_vflr_event+0x98/0xd0 [ice]
       ice_service_task+0x1cc/0x480 [ice]
       process_one_work+0x1e9/0x4d0
       worker_thread+0x1e1/0x3d0
       kthread+0x104/0x140
       ret_from_fork+0x31/0x50
       ret_from_fork_asm+0x1b/0x30

other info that might help us debug this:

 Possible unsafe locking scenario:

       CPU0                    CPU1
       ----                    ----
  lock(&pf->lag_mutex);
                               lock(&vf->cfg_lock);
                               lock(&pf->lag_mutex);
  lock(&vf->cfg_lock);

 *** DEADLOCK ***

4 locks held by kworker/60:3/6771:
 #0: ff40d43e05428b38 ((wq_completion)ice){+.+.}-{0:0}, at: process_one_work+0x176/0x4d0
 #1: ff50d06e05197e58 ((work_completion)(&pf->serv_task)){+.+.}-{0:0}, at: process_one_work+0x176/0x4d0
 #2: ff40d43ea1960e50 (&pf->vfs.table_lock){+.+.}-{3:3}, at: ice_process_vflr_event+0x48/0xd0 [ice]
 #3: ff40d43ea1961210 (&pf->lag_mutex){+.+.}-{3:3}, at: ice_reset_vf+0xb7/0x4d0 [ice]

stack backtrace:
CPU: 60 PID: 6771 Comm: kworker/60:3 Tainted: G W O 6.8.0-rc6 #54
Hardware name:
Workqueue: ice ice_service_task [ice]
Call Trace:
 <TASK>
 dump_stack_lvl+0x4a/0x80
 check_noncircular+0x12d/0x150
 check_prev_add+0xe2/0xc50
 ? save_trace+0x59/0x230
 ? add_chain_cache+0x109/0x450
 validate_chain+0x558/0x800
 __lock_acquire+0x4f8/0xb40
 ? lockdep_hardirqs_on+0x7d/0x100
 lock_acquire+0xd4/0x2d0
 ? ice_reset_vf+0x22f/0x4d0 [ice]
 ? lock_is_held_type+0xc7/0x120
 __mutex_lock+0x9b/0xbf0
 ? ice_reset_vf+0x22f/0x4d0 [ice]
 ? ice_reset_vf+0x22f/0x4d0 [ice]
 ? rcu_is_watching+0x11/0x50
 ? ice_reset_vf+0x22f/0x4d0 [ice]
 ice_reset_vf+0x22f/0x4d0 [ice]
 ? process_one_work+0x176/0x4d0
 ice_process_vflr_event+0x98/0xd0 [ice]
 ice_service_task+0x1cc/0x480 [ice]
 process_one_work+0x1e9/0x4d0
 worker_thread+0x1e1/0x3d0
 ? __pfx_worker_thread+0x10/0x10
 kthread+0x104/0x140
 ? __pfx_kthread+0x10/0x10
 ret_from_fork+0x31/0x50
 ? __pfx_kthread+0x10/0x10
 ret_from_fork_asm+0x1b/0x30
 </TASK>

To avoid deadlock, we must acquire the LAG ---truncated---
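To make the ordering violation concrete, below is a minimal userspace sketch of the AB-BA pattern the lockdep report describes. This is not ice driver code: the mutex names mirror the kernel's pf->lag_mutex and vf->cfg_lock only for readability, and two pthreads stand in for the two kernel work paths.

/* Build with: cc -pthread deadlock.c */
#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t lag_mutex = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t cfg_lock  = PTHREAD_MUTEX_INITIALIZER;

/* Stand-in for ice_reset_vf() with ICE_VF_RESET_LOCK:
 * takes lag_mutex first, then cfg_lock. */
static void *reset_path(void *arg)
{
    (void)arg;
    pthread_mutex_lock(&lag_mutex);
    pthread_mutex_lock(&cfg_lock);   /* blocks forever if cfg_path holds cfg_lock */
    puts("reset_path: acquired both locks");
    pthread_mutex_unlock(&cfg_lock);
    pthread_mutex_unlock(&lag_mutex);
    return NULL;
}

/* Stand-in for ice_vc_cfg_qs_msg():
 * takes cfg_lock first, then lag_mutex. */
static void *cfg_path(void *arg)
{
    (void)arg;
    pthread_mutex_lock(&cfg_lock);
    pthread_mutex_lock(&lag_mutex);  /* blocks forever if reset_path holds lag_mutex */
    puts("cfg_path: acquired both locks");
    pthread_mutex_unlock(&lag_mutex);
    pthread_mutex_unlock(&cfg_lock);
    return NULL;
}

int main(void)
{
    pthread_t a, b;

    pthread_create(&a, NULL, reset_path, NULL);
    pthread_create(&b, NULL, cfg_path, NULL);

    /* With an unlucky interleaving, each thread ends up holding one
     * mutex while waiting on the other: the classic AB-BA deadlock,
     * and these joins never return. */
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    return 0;
}

Run enough times, the two threads interleave exactly as in the CPU0/CPU1 scenario above. The usual remedy, and what the truncated commit message points toward, is to take both locks in the same order on every path.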

Benedict Schlüter, Supraja Sridhara, Andrin Bertschi, and Shweta Shinde discovered that an untrusted hypervisor could inject malicious #VC interrupts and compromise the security guarantees of AMD SEV-SNP. This flaw is known as WeSee. A local attacker in control of the hypervisor could use this to expose sensitive information or possibly execute arbitrary code in the trusted execution environment. Several security issues were discovered in the Linux kernel. An attacker could possibly use these to compromise the system.

*Credits: N/A
CVSS Scores

CVSS v3.1
Attack Vector: Local
Attack Complexity: Low
Privileges Required: Low
User Interaction: None
Scope: Unchanged
Confidentiality: None
Integrity: None
Availability: High

CVSS v2
Attack Vector: Local
Attack Complexity: Low
Authentication: Single
Confidentiality: None
Integrity: None
Availability: Complete
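For reference, the metrics above assemble into the vector strings AV:L/AC:L/PR:L/UI:N/S:U/C:N/I:N/A:H (CVSS v3.1, consistent with the 5.5 base score) and AV:L/AC:L/Au:S/C:N/I:N/A:C (CVSS v2); these strings are derived mechanically from the listed metrics, not quoted from the scoring source.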
* Common Vulnerability Scoring System
SSVC
  • Decision: Track
Exploitation
None
Automatable
No
Tech. Impact
Partial
* Organization's Worst-case Scenario
Timeline
  • 2024-05-17 CVE Reserved
  • 2024-05-20 CVE Published
  • 2025-05-04 CVE Updated
  • 2025-06-28 EPSS Updated
  • ---------- Exploited in Wild
  • ---------- KEV Due Date
  • ---------- First Exploit
CWE
CAPEC
Affected Vendors, Products, and Versions
Vendor    Product         Version               Status
Linux     Linux Kernel    >= 6.6.5 < 6.6.30     Affected
Linux     Linux Kernel    >= 6.7 < 6.8.9        Affected
Linux     Linux Kernel    >= 6.7 < 6.9          Affected