
CVSS: 8.8 • EPSS: 0% • CPEs: 2 • EXPL: 1

Exposure of Sensitive Information to an Unauthorized Actor in GitHub repository scrapy/scrapy prior to 2.6.1. • https://github.com/scrapy/scrapy/commit/8ce01b3b76d4634f55067d6cfdf632ec70ba304a https://huntr.dev/bounties/3da527b1-2348-4f69-9e88-2e11a96ac585 https://lists.debian.org/debian-lts-announce/2022/03/msg00021.html • CWE-200: Exposure of Sensitive Information to an Unauthorized Actor • CWE-863: Incorrect Authorization
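Since the advisory only states that versions prior to 2.6.1 are affected, a quick way to check an installation is to compare the installed version against that threshold. The snippet below is a minimal sketch, assuming the pip-installed `scrapy` package and the third-party `packaging` library for version comparison (both are assumptions, not part of the advisory).

```python
# Minimal version check; "packaging" is assumed to be available for
# PEP 440-aware comparison. Versions before 2.6.1 are affected per the
# advisory above.
import scrapy
from packaging.version import Version

FIXED = Version("2.6.1")
installed = Version(scrapy.__version__)

if installed < FIXED:
    print(f"Scrapy {installed} predates {FIXED}: affected, upgrade recommended.")
else:
    print(f"Scrapy {installed} includes the referenced fix.")
```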

CVSS: 6.5 • EPSS: 0% • CPEs: 3 • EXPL: 0

Scrapy is a high-level web crawling and scraping framework for Python. If you use `HttpAuthMiddleware` (i.e. the `http_user` and `http_pass` spider attributes) for HTTP authentication, all requests will expose your credentials to the request target. This includes requests generated by Scrapy components, such as the `robots.txt` requests sent by Scrapy when the `ROBOTSTXT_OBEY` setting is set to `True`, as well as requests reached through redirects. Upgrade to Scrapy 2.5.1 and use the new `http_auth_domain` spider attribute to control which domains are allowed to receive the configured HTTP authentication credentials. If you are using Scrapy 1.8 or a lower version and upgrading to Scrapy 2.5.1 is not an option, you may upgrade to Scrapy 1.8.1 instead. • http://doc.scrapy.org/en/latest/topics/downloader-middleware.html#module-scrapy.downloadermiddlewares.httpauth https://github.com/scrapy/scrapy/commit/b01d69a1bf48060daec8f751368622352d8b85a6 https://github.com/scrapy/scrapy/security/advisories/GHSA-jwqp-28gf-p498 https://lists.debian.org/debian-lts-announce/2022/03/msg00021.html https://w3lib.readthedocs.io/en/latest/w3lib.html#w3lib.http.basic_auth_header • CWE-200: Exposure of Sensitive Information to an Unauthorized Actor • CWE-522: Insufficiently Protected Credentials
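As a rough illustration of the mitigation described above, the sketch below shows a spider that sets `http_user`, `http_pass`, and the `http_auth_domain` attribute introduced in Scrapy 2.5.1. The spider name, credentials, domain, and URL are hypothetical placeholders.

```python
# Minimal sketch of the mitigation above, assuming Scrapy >= 2.5.1.
# Spider name, credentials, and domain are hypothetical placeholders.
import scrapy


class AuthSpider(scrapy.Spider):
    name = "auth_example"

    # Credentials picked up by HttpAuthMiddleware.
    http_user = "user"
    http_pass = "secret"

    # New in Scrapy 2.5.1: restrict the Authorization header to this domain,
    # so redirects or component requests to other hosts do not receive it.
    http_auth_domain = "example.com"

    start_urls = ["https://example.com/protected"]

    def parse(self, response):
        # Placeholder callback.
        self.logger.info("Fetched %s with status %s", response.url, response.status)
```

Without `http_auth_domain` (i.e. on versions before 2.5.1), the configured credentials are attached to every request the spider makes, regardless of target host, which is the exposure described in this advisory.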

CVSS: 7.8 • EPSS: 0% • CPEs: 1 • EXPL: 1

Scrapy 1.4 allows remote attackers to cause a denial of service (memory consumption) via large files because arbitrarily many files are read into memory, which is especially problematic if the files are then individually written in a separate thread to a slow storage resource, as demonstrated by the interaction between dataReceived (in core/downloader/handlers/http11.py) and S3FilesStore. • http://blog.csdn.net/wangtua/article/details/75228728 https://github.com/scrapy/scrapy/issues/482 • CWE-400: Uncontrolled Resource Consumption
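For context on the memory-consumption vector, one partial mitigation on affected versions is to bound how much response data is held in memory at once using standard Scrapy settings. The snippet below is a hedged sketch of such a project configuration, not the upstream fix; the specific values are illustrative assumptions, not recommendations from the report.

```python
# Illustrative settings.py entries that bound in-memory response data.
# DOWNLOAD_MAXSIZE, DOWNLOAD_WARNSIZE, and CONCURRENT_REQUESTS are standard
# Scrapy settings; they mitigate but do not remove the issue described above.

DOWNLOAD_MAXSIZE = 64 * 1024 * 1024   # abort downloads larger than 64 MiB
DOWNLOAD_WARNSIZE = 16 * 1024 * 1024  # log a warning for bodies above 16 MiB
CONCURRENT_REQUESTS = 8               # fewer simultaneous responses in memory
```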