CVE-2021-47371 – nexthop: Fix memory leaks in nexthop notification chain listeners
https://notcve.org/view.php?id=CVE-2021-47371
In the Linux kernel, the following vulnerability has been resolved: nexthop: Fix memory leaks in nexthop notification chain listeners.

syzkaller discovered memory leaks [1] that can be reduced to the following commands:

    # ip nexthop add id 1 blackhole
    # devlink dev reload pci/0000:06:00.0

As part of the reload flow, mlxsw will unregister its netdevs and then unregister from the nexthop notification chain. Before unregistering from the notification chain, mlxsw will receive delete notifications for nexthop objects using netdevs registered by mlxsw or their uppers. mlxsw will not receive notifications for nexthops using netdevs that are not dismantled as part of the reload flow - for example, the blackhole nexthop above, which internally uses the loopback netdev as its nexthop device.

One way to fix this problem is to have listeners flush their nexthop tables after unregistering from the notification chain. This is error-prone, as evidenced by this patch, and also not symmetric with the registration path, where a listener receives a dump of all the existing nexthops. Therefore, fix this problem by replaying delete notifications for the listener being unregistered. This is symmetric to the registration path and also consistent with the netdev notification chain.

The above means that unregister_nexthop_notifier(), like register_nexthop_notifier(), will have to take RTNL in order to iterate over the existing nexthops, and that any callers of the function cannot hold RTNL.

• https://git.kernel.org/stable/c/2a014b200bbd973cc96e082a5bc445fe20b50f32 https://git.kernel.org/stable/c/741760fa6252628a3d3afad439b72437d4b123d9 https://git.kernel.org/stable/c/3106a0847525befe3e22fc723909d1b21eb0d520 • CWE-400: Uncontrolled Resource Consumption •
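The replay described above is the mirror image of the registration-time dump. A minimal sketch follows, assuming the kernel's blocking-notifier naming; the nexthop iteration helper and the net->nexthop.notifier_chain member are illustrative placeholders, not the verbatim patch:

    /* Sketch: replay NEXTHOP_EVENT_DEL for one departing listener.
     * nexthop_for_each() and net->nexthop.notifier_chain are assumed
     * names used here for illustration only. */
    int unregister_nexthop_notifier(struct net *net, struct notifier_block *nb)
    {
            int err;

            rtnl_lock();    /* hence callers must not hold RTNL */
            err = blocking_notifier_chain_unregister(&net->nexthop.notifier_chain,
                                                     nb);
            if (!err) {
                    struct nexthop *nh;

                    /* Synthesize a delete event per existing nexthop for
                     * this listener only, leaving its table empty - the
                     * mirror image of the dump sent at registration. */
                    nexthop_for_each(net, nh)
                            nb->notifier_call(nb, NEXTHOP_EVENT_DEL, nh);
            }
            rtnl_unlock();
            return err;
    }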
CVE-2021-47369 – s390/qeth: fix NULL deref in qeth_clear_working_pool_list()
https://notcve.org/view.php?id=CVE-2021-47369
In the Linux kernel, the following vulnerability has been resolved: s390/qeth: fix NULL deref in qeth_clear_working_pool_list().

When qeth_set_online() calls qeth_clear_working_pool_list() to roll back after an error exit from qeth_hardsetup_card(), we are at risk of accessing card->qdio.in_q before it was allocated by qeth_alloc_qdio_queues() via qeth_mpc_initialize(). qeth_clear_working_pool_list() then dereferences NULL and, by writing to queue->bufs[i].pool_entry, scribbles all over the CPU's lowcore, resulting in a crash when those lowcore areas are next used (e.g. on the next machine-check interrupt).

Such a scenario would typically happen when the device is first set online and its queues aren't allocated yet. An early IO error or certain misconfigurations (e.g. mismatched transport mode, bad portno) then cause us to error out from qeth_hardsetup_card() with card->qdio.in_q still being NULL.

Fix it by checking the pointer for NULL before accessing it. Note that we also have (rare) paths inside qeth_mpc_initialize() where a configuration change can cause us to free the existing queues, expecting that subsequent code will allocate them again. If we then error out before that re-allocation happens, the same bug occurs.

Root-caused-by: Heiko Carstens <hca@linux.ibm.com>

• https://git.kernel.org/stable/c/eff73e16ee116f6eafa2be48fab42659a27cb453 https://git.kernel.org/stable/c/b2400fe7e1011c5f3dc2268e8382082465b1c8a2 https://git.kernel.org/stable/c/22697ca855c06a4a1264d5651542b7d98870a8c4 https://git.kernel.org/stable/c/db94f89e1dadf693c15c2d60de0c34777cea5779 https://git.kernel.org/stable/c/9b00fb12cdc9d8d1c3ffe82a78e74738127803fc https://git.kernel.org/stable/c/248f064af222a1f97ee02c84a98013dfbccad386 • CWE-476: NULL Pointer Dereference •
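The shape of the guard is straightforward. A minimal sketch, assuming a simplified version of the qeth structures (field names beyond card->qdio.in_q are illustrative, and this is not the verbatim fix):

    /* Sketch of the NULL guard in the rollback path. */
    void qeth_clear_working_pool_list(struct qeth_card *card)
    {
            struct qeth_buffer_pool_entry *entry, *tmp;
            struct qeth_qdio_q *queue = card->qdio.in_q;
            unsigned int i;

            list_for_each_entry_safe(entry, tmp,
                                     &card->qdio.in_buf_pool.entry_list, list)
                    list_del(&entry->list);

            /* in_q is only allocated by qeth_alloc_qdio_queues(); on early
             * error paths it is still NULL, and on s390 a write through a
             * NULL pointer lands in the CPU's lowcore at address 0. */
            if (!queue)
                    return;

            for (i = 0; i < QDIO_MAX_BUFFERS_PER_Q; i++)
                    queue->bufs[i].pool_entry = NULL;
    }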
CVE-2021-47368 – enetc: Fix illegal access when reading affinity_hint
https://notcve.org/view.php?id=CVE-2021-47368
In the Linux kernel, the following vulnerability has been resolved: enetc: Fix illegal access when reading affinity_hint.

irq_set_affinity_hint() stores a reference to the cpumask_t parameter in the irq descriptor, and that reference can be accessed later from irq_affinity_hint_proc_show(). Since the cpu_mask parameter passed to irq_set_affinity_hint() has only temporary storage (it's on the stack), later accesses to it are illegal, and reads from the corresponding procfs affinity_hint file can result in a paging request oops.

The issue is fixed by using the get_cpu_mask() helper, which provides permanent storage for the cpumask_t parameter.

• https://git.kernel.org/stable/c/d4fd0404c1c95b17880f254ebfee3485693fa8ba https://git.kernel.org/stable/c/4c4c3052911b577920353a7646e4883d5da40c28 https://git.kernel.org/stable/c/6c3f1b741c6c2914ea120e3a5790d3e900152f7b https://git.kernel.org/stable/c/6f329d9da2a5ae032fcde800a99b118124ed5270 https://git.kernel.org/stable/c/7237a494decfa17d0b9d0076e6cee3235719de90 • CWE-400: Uncontrolled Resource Consumption •
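The buggy and fixed patterns side by side, as a minimal sketch with the surrounding enetc IRQ setup loop reduced to the relevant calls (the index arithmetic is illustrative):

    /* Before: the irq descriptor keeps a pointer to stack memory that
     * becomes invalid as soon as this function returns; reading
     * /proc/irq/<n>/affinity_hint afterwards chases a dangling pointer. */
    cpumask_t cpu_mask;
    cpumask_clear(&cpu_mask);
    cpumask_set_cpu(i % num_online_cpus(), &cpu_mask);
    irq_set_affinity_hint(irq, &cpu_mask);

    /* After: get_cpu_mask() returns a pointer to a permanent, const
     * single-CPU mask, so the reference stored in the descriptor stays
     * valid for its whole lifetime. */
    irq_set_affinity_hint(irq, get_cpu_mask(i % num_online_cpus()));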
CVE-2021-47367 – virtio-net: fix pages leaking when building skb in big mode
https://notcve.org/view.php?id=CVE-2021-47367
In the Linux kernel, the following vulnerability has been resolved: virtio-net: fix pages leaking when building skb in big mode.

We try to use build_skb() if we have sufficient tailroom, but we forget to release the unused pages chained via private in big mode, which leaks pages. Fix this by releasing the pages after building the skb in big mode.

• https://git.kernel.org/stable/c/fb32856b16ad9d5bcd75b76a274e2c515ac7b9d7 https://git.kernel.org/stable/c/f020bb63b5d2e5576acadd10e158fe3b04af67ba https://git.kernel.org/stable/c/afd92d82c9d715fb97565408755acad81573591a • CWE-119: Improper Restriction of Operations within the Bounds of a Memory Buffer •
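In big mode the driver chains a buffer's extra receive pages through page->private. A minimal sketch of the fixed path in page_to_skb(), simplified from drivers/net/virtio_net.c; give_pages() is the driver's existing helper for returning a page chain to the receive queue, and the surrounding checks are elided:

    /* Sketch, simplified: build_skb() only consumes the head page. */
    if (len > GOOD_COPY_LEN && tailroom >= shinfo_size) {
            skb = build_skb(buf, truesize);
            if (unlikely(!skb))
                    return NULL;

            skb_reserve(skb, p - buf);
            skb_put(skb, len);

            /* Fix: big mode chains follow-on pages via page->private.
             * Hand the rest of the chain back to the rx queue instead
             * of leaking it. */
            page = (struct page *)page->private;
            if (page)
                    give_pages(rq, page);
            return skb;
    }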
CVE-2021-47366 – afs: Fix corruption in reads at fpos 2G-4G from an OpenAFS server
https://notcve.org/view.php?id=CVE-2021-47366
In the Linux kernel, the following vulnerability has been resolved: afs: Fix corruption in reads at fpos 2G-4G from an OpenAFS server.

AFS-3 has two data fetch RPC variants, FS.FetchData and FS.FetchData64, and Linux's afs client switches between them when talking to a non-YFS server if the read size, the file position or the sum of the two have the upper 32 bits of the 64-bit value set. This is a problem, however, since the file position and length fields of FS.FetchData are *signed* 32-bit values.

Fix this by capturing the capability bits obtained from the fileserver when it's sent an FS.GetCapabilities RPC, rather than just discarding them, and then picking out the VICED_CAPABILITY_64BITFILES flag. This can then be used to decide whether to use FS.FetchData or FS.FetchData64 - and also FS.StoreData or FS.StoreData64 - rather than using upper_32_bits() to switch on the parameter values. This capabilities flag could also be used to limit the maximum size of the file, but all servers must be checked for that.

Note that the issue does not exist with FS.StoreData - that uses *unsigned* 32-bit values. It's also not a problem with Auristor servers, as their YFS.FetchData64 op uses unsigned 64-bit values.

This can be tested by cloning a git repo through an OpenAFS client to an OpenAFS server and then doing "git status" on it from a Linux afs client [1]. Provided the clone has a pack file that's in the 2G-4G range, the git status will show errors like:

    error: packfile .git/objects/pack/pack-5e813c51d12b6847bbc0fcd97c2bca66da50079c.pack does not match index
    error: packfile .git/objects/pack/pack-5e813c51d12b6847bbc0fcd97c2bca66da50079c.pack does not match index

This can be observed in the server's FileLog with something like the following appearing:

    Sun Aug 29 19:31:39 2021 SRXAFS_FetchData, Fid = 2303380852.491776.3263114, Host 192.168.11.201:7001, Id 1001
    Sun Aug 29 19:31:39 2021 CheckRights: len=0, for host=192.168.11.201:7001
    Sun Aug 29 19:31:39 2021 FetchData_RXStyle: Pos 18446744071815340032, Len 3154
    Sun Aug 29 19:31:39 2021 FetchData_RXStyle: file size 2400758866
    ...
    Sun Aug 29 19:31:40 2021 SRXAFS_FetchData returns 5

Note the file position of 18446744071815340032. This is the requested file position sign-extended.

• https://git.kernel.org/stable/c/b9b1f8d5930a813879278d0cbfc8c658d6a038dc https://git.kernel.org/stable/c/e66fc460d6dcf85cf12288e133a081205aebcd97 https://git.kernel.org/stable/c/b537a3c21775075395af475dcc6ef212fcf29db8 •
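The fix amounts to trusting the server's capability advertisement instead of switching on parameter width. A minimal sketch under stated assumptions: the capability bit value, the op-code constants and the afs_server field layout below are illustrative, not taken from the verbatim patch:

    /* Illustrative sketch; names and constant values are assumptions
     * based on the description above. */
    #define VICED_CAPABILITY_64BITFILES 0x01

    enum { FSFETCHDATA = 130, FSFETCHDATA64 = 65537 };

    struct afs_server_sketch {
            unsigned int capabilities; /* kept from the FS.GetCapabilities
                                        * reply rather than discarded */
    };

    /* Old scheme: pick the 64-bit op only when a value overflows 32 bits.
     * Broken, because FS.FetchData's position/length fields are signed
     * 32-bit: offsets in the 2G-4G range fit in 32 bits but go negative,
     * and the server sign-extends them (hence Pos 18446744071815340032
     * in the FileLog above). */
    static int afs_select_fetch_op(const struct afs_server_sketch *server)
    {
            /* New scheme: the server states whether it handles 64-bit
             * files; use that to choose the RPC variant up front. */
            if (server->capabilities & VICED_CAPABILITY_64BITFILES)
                    return FSFETCHDATA64;
            return FSFETCHDATA;
    }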