
Conversation

@gurgunday
Member

Continues the work of utilizing the bufferPool

After:

buffers/buffer-of.js
buffers/buffer-of.js n=500000 len=0: 24,067,484.842418384
buffers/buffer-of.js n=500000 len=1: 27,773,857.034959674
buffers/buffer-of.js n=500000 len=8: 22,167,419.43528499
buffers/buffer-of.js n=500000 len=64: 8,500,130.68950935
buffers/buffer-of.js n=500000 len=256: 2,575,202.346524378
buffers/buffer-of.js n=500000 len=1024: 719,503.269332918

Before:

buffers/buffer-of.js
buffers/buffer-of.js n=500000 len=0: 19,773,331.18621321
buffers/buffer-of.js n=500000 len=1: 20,064,473.57422553
buffers/buffer-of.js n=500000 len=8: 16,945,705.958110213
buffers/buffer-of.js n=500000 len=64: 6,507,108.716946263
buffers/buffer-of.js n=500000 len=256: 1,357,259.5572280008
buffers/buffer-of.js n=500000 len=1024: 605,587.4836820365

@nodejs-github-bot
Collaborator

Review requested:

  • @nodejs/performance

@nodejs-github-bot nodejs-github-bot added buffer Issues and PRs related to the buffer subsystem. needs-ci PRs that need a full CI run. labels Oct 22, 2025
@gurgunday gurgunday added the performance Issues and PRs related to the performance of Node.js. label Oct 22, 2025
@codecov

codecov bot commented Oct 23, 2025

Codecov Report

✅ All modified and coverable lines are covered by tests.
✅ Project coverage is 88.54%. Comparing base (2e944d7) to head (6b73f3b).
⚠️ Report is 4 commits behind head on main.

Additional details and impacted files
@@            Coverage Diff             @@
##             main   #60372      +/-   ##
==========================================
+ Coverage   88.52%   88.54%   +0.02%     
==========================================
  Files         703      703              
  Lines      208222   208234      +12     
  Branches    40131    40145      +14     
==========================================
+ Hits       184320   184391      +71     
+ Misses      15905    15840      -65     
- Partials     7997     8003       +6     
Files with missing lines Coverage Δ
lib/buffer.js 100.00% <100.00%> (ø)

... and 41 files with indirect coverage changes


Member

@mcollina mcollina left a comment


lgtm

@mcollina mcollina added the request-ci Add this label to start a Jenkins CI on a PR. label Oct 24, 2025
@github-actions github-actions bot removed the request-ci Add this label to start a Jenkins CI on a PR. label Oct 24, 2025

@gurgunday gurgunday added the author ready PRs that have at least one approval, no pending requests for changes, and a CI started. label Oct 24, 2025
@aduh95
Contributor

aduh95 commented Oct 25, 2025

Benchmark GHA: https://github.com/aduh95/node/actions/runs/18809554526
Benchmark CI: https://ci.nodejs.org/view/Node.js%20benchmark/job/benchmark-node-micro-benchmarks/1747/

Results (improvements across the board)
                                       confidence improvement accuracy (*)    (**)   (***)
buffers/buffer-of.js n=500000 len=0           ***     21.11 %       ±6.75%  ±8.90% ±11.42%
buffers/buffer-of.js n=500000 len=1           ***     31.03 %       ±8.27% ±10.91% ±14.01%
buffers/buffer-of.js n=500000 len=1024        ***     28.21 %       ±7.97% ±10.50% ±13.48%
buffers/buffer-of.js n=500000 len=256         ***     89.28 %      ±11.68% ±15.40% ±19.76%
buffers/buffer-of.js n=500000 len=64            *      9.31 %       ±7.16%  ±9.44% ±12.11%
buffers/buffer-of.js n=500000 len=8           ***     29.63 %       ±7.91% ±10.43% ±13.39%

Be aware that when doing many comparisons the risk of a false-positive
result increases. In this case, there are 6 comparisons, you can thus
expect the following amount of false-positive results:
  0.30 false positives, when considering a   5% risk acceptance (*, **, ***),
  0.06 false positives, when considering a   1% risk acceptance (**, ***),
  0.01 false positives, when considering a 0.1% risk acceptance (***)
                                       confidence improvement accuracy (*)   (**)  (***)
buffers/buffer-of.js n=500000 len=0           ***     17.61 %       ±5.32% ±7.07% ±9.21%
buffers/buffer-of.js n=500000 len=1           ***     19.85 %       ±5.11% ±6.80% ±8.86%
buffers/buffer-of.js n=500000 len=1024        ***     25.33 %       ±0.36% ±0.49% ±0.64%
buffers/buffer-of.js n=500000 len=256         ***     84.14 %       ±1.22% ±1.63% ±2.13%
buffers/buffer-of.js n=500000 len=64          ***      3.58 %       ±2.02% ±2.69% ±3.50%
buffers/buffer-of.js n=500000 len=8           ***     12.67 %       ±4.17% ±5.55% ±7.23%

@gurgunday
Member Author

@aduh95 the CI didn't pick up the benchmarks, I think the category is buffers instead of buffer

@ChALkeR
Member

ChALkeR commented Oct 26, 2025

I feel like adding pooling to places which previously returned non-pooled buffers should be semver-major to be on the safe side

Too much code in the ecosystem assumes non-pooled buffers and accesses .buffer directly without an offset...
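
For illustration, a rough sketch (not from this PR) of the pattern that breaks; the 8192 figure is just the default pool size:

const pooled = Buffer.from([1, 2, 3]);             // may be carved out of the shared pool
const naive = new Uint8Array(pooled.buffer);       // views the entire pool, wrong data
const correct = new Uint8Array(pooled.buffer, pooled.byteOffset, pooled.length);
console.log(naive.byteLength, correct.byteLength); // e.g. 8192 vs 3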

@ChALkeR
Member

ChALkeR commented Oct 27, 2025

We could even make Buffer.alloc pooled, like Buffer.from already is
Upd: I was wrong, we most likely can't

That could improve perf in some use cases significantly (~3x on allocations), and would be much more noticeable than Buffer.of()

But it would definitely be a semver-major

And the only difference from Buffer.of is the amount of usage, but that goes both ways

@gurgunday
Member Author

Previously we added pooling as semver-minor (#31615), but I have no problem with semver-major.

We could even make Buffer.alloc pooled, like Buffer.from already is

I think the Buffer.alloc docs explicitly say it returns a new Buffer instance, and that's one of its differences from allocUnsafe, but maybe we still can? It's less obvious, however.
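
For reference, a quick sketch of how to observe the current difference (the 8192 figure is just the default Buffer.poolSize):

console.log(Buffer.alloc(8).buffer.byteLength);       // 8: its own zero-filled ArrayBuffer
console.log(Buffer.allocUnsafe(8).buffer.byteLength); // typically 8192: view into the shared pool
console.log(Buffer.from('x').buffer.byteLength);      // typically 8192: also pooled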

@ChALkeR
Member

ChALkeR commented Oct 27, 2025

Previously we have added pooling as semver minor (#31615)

#31615 did not add pooling; see the diff there -- the allocate() usage it replaced was already pooled

Buffer.from has been pooled since as far back as 4.x:

chalker@Nikitas-MacBook-Air % nvm use 4  
Now using node v4.9.1 (npm v2.15.11)
chalker@Nikitas-MacBook-Air % node     
> Buffer.from('x').buffer.byteLength
8192
> Buffer.from([1]).buffer.byteLength
8192

@gurgunday
Member Author

I'll let the collaborators decide – both are fine for me

@ChALkeR
Member

ChALkeR commented Oct 27, 2025

e.g. this will definitely break: https://github.com/website-local/website-scrap-engine/blob/6433593ee96fe2a386092984111a7c64599c9709/test/util.spec.ts#L80

but other places might use it indirectly; I have seen a lot of code assuming non-pooled Uint8Arrays, including in popular deps.
If anyone feeds Buffer.of output there, they will be affected.

I vote semver-major 🙃

@H4ad H4ad added the semver-major PRs that contain breaking changes and should be released in the next major version. label Oct 27, 2025
@H4ad
Member

H4ad commented Oct 27, 2025

The commit message should also be changed to be more explicit: it's adding the buffer pool to Buffer.of, and the optimization is a side effect.

@mcollina Do you think this change should be a topic for the TSC due to the impact it might have?

@H4ad H4ad added the needs-citgm PRs that need a CITGM CI run. label Oct 27, 2025
@gurgunday
Member Author

The commit message should also be changed to be more explicit: it's adding the buffer pool to Buffer.of, and the optimization is a side effect.

Done

Member

@ChALkeR ChALkeR left a comment


There are more things this could break, including our own APIs.

At the very least, documentation should be updated for this to land

node/doc/api/webstreams.md

Lines 694 to 697 in dec0213

Do not pass a pooled {Buffer} object instance in to this method.
Pooled `Buffer` objects are created using `Buffer.allocUnsafe()`,
or `Buffer.from()`, or are often returned by various `node:fs` module
callbacks. These types of `Buffer`s use a shared underlying

E.g., but not sure if limited to:

node/doc/api/buffer.md

Lines 827 to 831 in dec0213

The `Buffer` module pre-allocates an internal `Buffer` instance of
size [`Buffer.poolSize`][] that is used as a pool for the fast allocation of new
`Buffer` instances created using [`Buffer.allocUnsafe()`][], [`Buffer.from(array)`][],
[`Buffer.from(string)`][], and [`Buffer.concat()`][] only when `size` is less than
`Buffer.poolSize >>> 1` (floor of [`Buffer.poolSize`][] divided by two).

node/doc/api/buffer.md

Lines 5463 to 5465 in dec0213

`Buffer` instances returned by [`Buffer.allocUnsafe()`][], [`Buffer.from(string)`][],
[`Buffer.concat()`][] and [`Buffer.from(array)`][] _may_ be allocated off a shared
internal memory pool if `size` is less than or equal to half [`Buffer.poolSize`][].

@gurgunday
Member Author

Did you find something in Node.js code that would break? Or are you saying it just could? As you can see in the docs and the code, most Buffer methods may already use it.

I understand it's semver major and I'm OK with that, but for me one of the reasons to use Buffer methods to create a new Uint8Array is the buffer pool. Otherwise I'd just use Uint8Array static methods, etc.

But yeah, good idea to update the docs to cover .of as well.

@ChALkeR
Member

ChALkeR commented Oct 29, 2025

@gurgunday the first doc citation in my previous comment starts with

Do not pass a pooled {Buffer} object instance in to this method.

I trusted that to mean it would break, but I did not recheck.

@gurgunday
Member Author

Docs are added

I think this can land as semver major

This also gives people a good reason to use Buffer.of, like Buffer.from, instead of Uint8Array.of for more performance
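
To make that concrete, a small sketch of the distinction (assuming this PR lands; the exact offsets and sizes are illustrative):

const a = Buffer.of(1, 2, 3);     // with this PR: may be a view into the shared pool
const b = Uint8Array.of(1, 2, 3); // always backed by its own 3-byte ArrayBuffer
console.log(a.buffer.byteLength, a.byteOffset); // e.g. 8192 and a possibly non-zero offset
console.log(b.buffer.byteLength, b.byteOffset); // 3 and 0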

@ChALkeR
Member

ChALkeR commented Oct 30, 2025

More data (not against landing this, just more data):

chalker@macbook-air node % ./out/Release/node.0 benchmark/buffers/buffer-of.js
buffers/buffer-of.js n=5000000 len=0: 23,930,563.116350498
buffers/buffer-of.js n=5000000 len=1: 24,074,805.23482565
buffers/buffer-of.js n=5000000 len=2: 22,687,869.775693152
buffers/buffer-of.js n=5000000 len=4: 21,766,654.179717608
buffers/buffer-of.js n=5000000 len=8: 20,562,789.821470033
buffers/buffer-of.js n=5000000 len=64: 7,864,438.063809134
buffers/buffer-of.js n=5000000 len=256: 2,691,632.454896651
buffers/buffer-of.js n=5000000 len=1024: 984,038.7519073675
chalker@macbook-air node % ./out/Release/node.1 benchmark/buffers/buffer-of.js
buffers/buffer-of.js n=5000000 len=0: 30,019,813.0165513
buffers/buffer-of.js n=5000000 len=1: 36,636,299.42425469
buffers/buffer-of.js n=5000000 len=2: 33,512,822.424769998
buffers/buffer-of.js n=5000000 len=4: 31,435,444.563986454
buffers/buffer-of.js n=5000000 len=8: 26,835,941.7761396
buffers/buffer-of.js n=5000000 len=64: 11,101,086.213533279
buffers/buffer-of.js n=5000000 len=256: 3,698,045.9235413205
buffers/buffer-of.js n=5000000 len=1024: 1,082,960.749712068
chalker@macbook-air node % ./out/Release/node benchmark/buffers/buffer-of.js  
buffers/buffer-of.js n=5000000 len=0: 28,634,315.090605903
buffers/buffer-of.js n=5000000 len=1: 29,676,283.73727543
buffers/buffer-of.js n=5000000 len=2: 28,108,271.239398185
buffers/buffer-of.js n=5000000 len=4: 24,118,940.180194914
buffers/buffer-of.js n=5000000 len=8: 22,930,669.537990075
buffers/buffer-of.js n=5000000 len=64: 10,545,975.708399553
buffers/buffer-of.js n=5000000 len=256: 2,758,204.612396961
buffers/buffer-of.js n=5000000 len=1024: 982,744.0786832569

node.0 is main, node.1 is this branch rebased on top of main, and the last one is the patch below (still non-pooled):

 const of = (...items) => {
-  const newObj = createUnsafeBuffer(items.length);
-  for (let k = 0; k < items.length; k++)
+  const len = items.length;
+  const newObj = len <= 64 ? new FastBuffer(len) : createUnsafeBuffer(len);
+  for (let k = 0; k < len; k++)
     newObj[k] = items[k];
   return newObj;
 };

note: I increased n 10x for better consistency and added more small sizes (as that's what Buffer.of is most likely used with)

pooling improves the perf over that, but that patch is backportable and not semver-major

more than half of the perf issue here was that createUnsafeBuffer is just worse than the Uint8Array constructor on sizes <= 64 (and especially 0),
i.e. more than half of the effect on sizes up to 64 could be achieved without pooling (and it's much less likely that Buffer.of is used instead of Buffer.from on something larger than 64)

Also note: it's important not to optimize createUnsafeBuffer with that instead (except perhaps for size=0, which is fine, but useless with any of these changes)

@ChALkeR
Member

ChALkeR commented Oct 30, 2025

Hm, I wonder if always using FastBuffer there, instead of all the complex logic, is a cleaner option...

chalker@macbook-air node % ./out/Release/node.0 benchmark/buffers/buffer-of.js
buffers/buffer-of.js n=500000 len=0: 15,596,953.752723774
buffers/buffer-of.js n=500000 len=1: 21,141,872.53565048
buffers/buffer-of.js n=500000 len=2: 21,801,097.685268454
buffers/buffer-of.js n=500000 len=4: 20,528,540.953659117
buffers/buffer-of.js n=500000 len=8: 19,835,333.40934233
buffers/buffer-of.js n=500000 len=64: 9,533,304.877053829
buffers/buffer-of.js n=500000 len=65: 5,037,081.330518438
buffers/buffer-of.js n=500000 len=256: 2,695,629.0951087438
buffers/buffer-of.js n=500000 len=1024: 977,110.5349628782
buffers/buffer-of.js n=500000 len=2048: 520,336.4840793213
buffers/buffer-of.js n=500000 len=10240: 112,783.05727800346
chalker@macbook-air node % ./out/Release/node.1 benchmark/buffers/buffer-of.js
buffers/buffer-of.js n=500000 len=0: 17,825,073.639835477
buffers/buffer-of.js n=500000 len=1: 27,283,826.829551116
buffers/buffer-of.js n=500000 len=2: 31,900,216.12396424
buffers/buffer-of.js n=500000 len=4: 28,966,196.448744316
buffers/buffer-of.js n=500000 len=8: 27,717,274.048227616
buffers/buffer-of.js n=500000 len=64: 10,481,740.807513312
buffers/buffer-of.js n=500000 len=65: 10,345,710.39637748
buffers/buffer-of.js n=500000 len=256: 3,623,475.027923223
buffers/buffer-of.js n=500000 len=1024: 1,075,294.735276114
buffers/buffer-of.js n=500000 len=2048: 553,338.6656736082
buffers/buffer-of.js n=500000 len=10240: 112,197.75783996245
chalker@macbook-air node % ./out/Release/node.3 benchmark/buffers/buffer-of.js
buffers/buffer-of.js n=500000 len=0: 18,072,234.722184572
buffers/buffer-of.js n=500000 len=1: 28,449,704.478694726
buffers/buffer-of.js n=500000 len=2: 26,948,244.925483793
buffers/buffer-of.js n=500000 len=4: 24,549,566.999647174
buffers/buffer-of.js n=500000 len=8: 22,982,121.472278826
buffers/buffer-of.js n=500000 len=64: 10,335,641.481898643
buffers/buffer-of.js n=500000 len=65: 5,278,813.744975229
buffers/buffer-of.js n=500000 len=256: 2,753,454.3282300276
buffers/buffer-of.js n=500000 len=1024: 981,079.1527302328
buffers/buffer-of.js n=500000 len=2048: 522,575.97888145887
buffers/buffer-of.js n=500000 len=10240: 107,080.31746473502

From top to bottom: main, pooled, just FastBuffer.

 const of = (...items) => {
-  const newObj = createUnsafeBuffer(items.length);
-  for (let k = 0; k < items.length; k++)
+  const len = items.length;
+  const newObj = new FastBuffer(len);
+  for (let k = 0; k < len; k++)
     newObj[k] = items[k];
   return newObj;
 };

FastBuffer is faster than main on all sizes up to 64, and beyond that it's within the noise (it doesn't seem to be slower).
It is slower than the current pooled solution, though.

Perhaps it would make sense to land that in 24/25, and this in 26 on top.

@gurgunday
Member Author

Is createUnsafeBuffer zero-filling in your benchmarks?

@ChALkeR
Member

ChALkeR commented Oct 30, 2025

@gurgunday no, this is rebased on main (also, I suggest rebasing this PR on main to get the non-zero-filling change in)
The trick is that new Uint8Array(size) bypasses allocation when size <= 64
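
A rough micro-check of that claim (my own sketch, not the PR's benchmark; absolute numbers vary by machine and V8 version):

function bench(len, n = 1e6) {
  let sink = 0; // keep the allocations observable so they aren't optimized away
  const start = process.hrtime.bigint();
  for (let i = 0; i < n; i++) sink += new Uint8Array(len).length;
  return { ms: Number(process.hrtime.bigint() - start) / 1e6, sink };
}
console.log('len=64:', bench(64).ms.toFixed(1), 'ms'); // small enough for the on-heap fast path
console.log('len=65:', bench(65).ms.toFixed(1), 'ms'); // needs an external backing store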

@ChALkeR
Member

ChALkeR commented Oct 30, 2025

I checked usage.
To my surprise, Buffer.of is not only used with static sizes, Buffer.of(... is relatively common
Nah, I was looking at the wrong thing. The vast majority of usage is small static sizes.

@ChALkeR
Member

ChALkeR commented Oct 31, 2025

@gurgunday I filed #60503.
That's not a replacement for this PR, but this PR should be compared to that approach. (It will still be faster)

I also modified your benchmark to denoise it a little, but I didn't commit it to not conflict with this PR in benchmarks.
Feel free to adjust the benchmark here based on those changes (if you want!)

@gurgunday
Member Author

Rebased, here are the latest results:

branch:

buffers/buffer-of.js
buffers/buffer-of.js n=500000 len=0: 23,176,224.64213939
buffers/buffer-of.js n=500000 len=1: 28,260,428.21105159
buffers/buffer-of.js n=500000 len=8: 22,288,220.118461892
buffers/buffer-of.js n=500000 len=64: 8,651,554.224031374
buffers/buffer-of.js n=500000 len=256: 2,675,041.3293885393
buffers/buffer-of.js n=500000 len=1024: 761,538.1119104564

main:

buffers/buffer-of.js
buffers/buffer-of.js n=500000 len=0: 23,458,759.5007976
buffers/buffer-of.js n=500000 len=1: 24,023,591.166525528
buffers/buffer-of.js n=500000 len=8: 19,374,432.13539411
buffers/buffer-of.js n=500000 len=64: 8,169,940.247344614
buffers/buffer-of.js n=500000 len=256: 1,423,056.0698322074
buffers/buffer-of.js n=500000 len=1024: 665,914.9969122255

After looking at the codebase again, I think it's safe to merge as semver-major; as previously stated, it gives a good reason to prefer Buffer.of over Uint8Array.of

@ChALkeR
Member

ChALkeR commented Nov 15, 2025

The most used sizes are likely 0-8, and the improvement on 1-8 seems significant (15-20%) according to #60372 (comment)

Unless someone demonstrates that this somehow does not bring any measurable real-world perf improvement, I think this should go in

@gurgunday
Member Author

gurgunday commented Nov 23, 2025

@anonrig maybe PTAL? :)

For context: this will be semver-major, but it's a really good perf improvement and is in alignment with other Buffer methods like Buffer.from

@aduh95
Contributor

aduh95 commented Nov 23, 2025

Member

@panva panva left a comment


Is there a reason this needs to overload TypedArray.of()? I would prefer that we don't unnecessarily diverge Buffer from Uint8Array any further; it's already messy enough as it is. This might as well be a Buffer-specific static method name that doesn't conflict with TypedArray definitions.

@gurgunday
Member Author

gurgunday commented Nov 23, 2025

Is there a reason this needs to overload TypedArray.of()?

Yes, to not call the deprecated Buffer constructor
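
For illustration (a sketch of the mechanism, not code from the PR): the inherited %TypedArray%.of constructs its result via new this(length), so without an override it would go through the deprecated constructor:

// Calling the inherited implementation with Buffer as the receiver ends up
// doing new Buffer(3) internally, which should emit the DEP0005 deprecation warning.
const buf = Uint8Array.of.call(Buffer, 1, 2, 3);
console.log(buf instanceof Buffer); // true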

Edit: Well, it's up to you in the end, but I think it wouldn't be messy if we make it clear that the difference between the Buffer and Uint8Array methods is the pool, so users have a clear reason to use them. It's not like we will ever remove the pool from Buffer.from, right?

@panva
Member

panva commented Nov 23, 2025

Is there a reason this needs to overload TypedArray.of()?

Yes, to not call the deprecated Buffer constructor

You got me lost there. Please explain. Why can't a faster version of %TypedArray%.of() on a Buffer be called Buffer.pooledMuchFasterIfYouNeedItOf()?

@gurgunday
Member Author

Oh, apologies, I misread your comment; I thought you were questioning why we're overriding the method in the first place, which is to avoid deprecated Buffer constructor warnings.

Why can't a faster version of %TypedArray%.of() on a Buffer be called Buffer.pooledMuchFasterIfYouNeedItOf()?

I'm not opposed to this approach; I think it would even be better long-term to have a complete separation between pooled and non-pooled methods, but then we would have the problem of Buffer.from already returning instances from the pool. If you're down to making a breaking change to make Buffer.from non-pooled as well, I think we can move forward with this.

If we can't, though, I think continuing down the path of making Buffer static methods return pooled instances is also clearer than the inconsistency we have right now.

@panva
Member

panva commented Nov 23, 2025

I'm not down with deprecating Buffer.from. I am merely objecting to making the functional gap between Buffer and Uint8Array-inherited methods even bigger.


Labels

  • author ready: PRs that have at least one approval, no pending requests for changes, and a CI started.
  • buffer: Issues and PRs related to the buffer subsystem.
  • needs-ci: PRs that need a full CI run.
  • needs-citgm: PRs that need a CITGM CI run.
  • performance: Issues and PRs related to the performance of Node.js.
  • semver-major: PRs that contain breaking changes and should be released in the next major version.
