Currently, the following throws a TypeError:
const val = 123412341234n;
const buf = new Uint8Array(8);
buf[0] = val;
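One possible workaround is to cast to Number first, but Number can only represent integers exactly up to 2^53 - 1, so larger BigInt values are silently rounded. A minimal sketch of that rounding:

const big = 2n ** 53n + 1n;        // 9007199254740993n
Number(big) === Number(big - 1n);  // true: both round to 9007199254740992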
Because of this precision loss, simply casting to Number is not sufficient. The code for writing a 64-bit integer into a Node.js Buffer instance (which is also a Uint8Array instance) would instead look something like this:
buf[offset++] = Number(value & 0xffn);
value = value >> 8n;
buf[offset++] = Number(value & 0xffn);
value = value >> 8n;
buf[offset++] = Number(value & 0xffn);
value = value >> 8n;
buf[offset++] = Number(value & 0xffn);
value = value >> 8n;
buf[offset++] = Number(value & 0xffn);
value = value >> 8n;
buf[offset++] = Number(value & 0xffn);
value = value >> 8n;
buf[offset++] = Number(value & 0xffn);
value = value >> 8n;
buf[offset++] = Number(value & 0xffn);
This is not only verbose but also weird, because both Number and BigInt are supersets of Uint8: every byte value is exactly representable in either type, so the conversion back and forth adds nothing.
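Even folded into a loop (assuming value is declared with let), the per-byte BigInt-to-Number round trip remains:

for (let i = 0; i < 8; i++) {
  buf[offset++] = Number(value & 0xffn); // extract the low byte as a Number
  value >>= 8n;                          // shift the next byte into place
}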
The current throwing behavior is clearly not about avoiding silent truncation, since all TypedArrays (including the new BigInt64Array and BigUint64Array) still allow truncation when setting values of the matching type. For example, the following silently truncates according to the spec:
const val = 123412341234123412341234123412341234123412341234123412341234n;
const buf = new BigUint64Array(1);
buf[0] = val;
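What gets stored is the input modulo 2^64, the same result BigInt.asUintN produces:

buf[0] === BigInt.asUintN(64, val); // true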
My informal proposal: when setting an index of an Int8Array, Uint8Array, Int16Array, Uint16Array, Int32Array or Uint32Array to a BigInt value, assignment should behave the same as with Number values - that is, store the value modulo 2^8, 2^16 or 2^32, respectively (interpreted as signed for the signed variants).
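Under those semantics the opening example would succeed. A sketch of the proposed behavior (not what engines do today):

const val = 123412341234n;
const buf = new Uint8Array(8);
buf[0] = val;                    // proposed: no TypeError
buf[0] === Number(val & 0xffn);  // true: stores 242, the value modulo 2^8

const signed = new Int8Array(1);
signed[0] = 130n;                // proposed: 130 mod 2^8, reinterpreted as signed
signed[0] === -126;              // true, matching what `signed[0] = 130` does today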
Note: this is not the same as #106. That patch performed a conversion to Number, which can cause a loss of precision.
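To illustrate the difference concretely (a sketch of the two truncation orders; the value is chosen only for illustration):

const val = 2n ** 64n + 5n;
// Converting to Number first rounds away the low bits before truncation:
Number(val) % 256;    // 0, because Number(val) rounds to exactly 2 ** 64
// Truncating in BigInt arithmetic first, as proposed here, preserves them:
Number(val & 0xffn);  // 5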