Spread vs Push in Large Arrays - Javascript

Type
Blog
Published
May 7, 2025
Author
Chetan Agrawal
Memory issues are the worst in the JavaScript world. They are nasty and take a lot of time to root-cause. Combined with array operations, it's like the cherry on top of an ice cream, enough to give you a sugar high. I ran into one such problem recently and, with some help, reached a conclusion that may not be very obvious.

Conclusions

  • Spread operations are expensive when merging two large arrays.
    • Why?
      • They consume a lot more memory, because spreading two arrays together allocates a brand-new array for the merged result.
      • Those throwaway allocations trigger the garbage collector far more often when the arrays are large, burning CPU cycles.
  • They are even worse inside a loop, where a new copy of the growing array is created on every iteration.
  • Prefer push operations over spread when working with large arrays.
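Before the experiments, here is a minimal sketch of the two merge patterns being compared (the variable names are illustrative, not from the experiments below):

```javascript
// Two ways to append the contents of `incoming` onto `target`.
const target = [1, 2, 3]
const incoming = [4, 5, 6]

// Spread: allocates a brand-new array and copies both inputs into it.
const merged = [...target, ...incoming]

// Push: grows `target` in place; no new array is created.
for (const item of incoming) {
  target.push(item)
}

console.log(merged) // [ 1, 2, 3, 4, 5, 6 ]
console.log(target) // [ 1, 2, 3, 4, 5, 6 ] — mutated in place
```

On tiny arrays like these the difference is invisible; it only starts to matter at the sizes the experiments below use.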

Experiment 1: Spread operation

Code for the experiment

Nothing fancy here: a simple array merge in a loop using the spread operator.
const start = performance.now()

const logMemoryUsage = () => {
  console.log(
    "Heap snapshot",
    Date.now(),
    `${process.memoryUsage().rss / 1024 / 1024} MB`,
  )
}

logMemoryUsage()

const arrMap = { arr: [1] }

for (let i = 0; i < 1000; i++) {
  const newArr = Array(10000).fill(0)
  const arr = [...arrMap.arr, ...newArr]
  arrMap.arr = arr
  if (i % 10 === 0) {
    logMemoryUsage()
  }
}

logMemoryUsage()

// if (global && typeof global.gc === 'function') {
global.gc()
// }

console.log("time: ", performance.now() - start)
logMemoryUsage()
Run the above using the command below.
node --trace-gc --expose-gc <file_name.js>

Observations

  • Took roughly 18 seconds to complete.
  • Garbage collection was triggered very frequently.
  • Peak memory usage reached about 1 GB according to the GC logs, but around 450 MB in the heap-snapshot logs.

Experiment 2: Push operation

Code for the experiment

Instead of using spread, we now use the age-old push operation, pushing items one at a time.
Notice that we don't need to create any new array here, which saves us the extra memory usage from before.
const logMemoryUsage = () => {
  console.log(
    "Heap snapshot",
    Date.now(),
    `${process.memoryUsage().rss / 1024 / 1024} MB`,
  )
}

logMemoryUsage()

const arrMap = { arr: [1] }
const start = performance.now()

for (let i = 0; i < 1000; i++) {
  const newArr = Array(10000).fill(0)
  for (let j = 0; j < newArr.length; j++) {
    arrMap.arr.push(newArr[j])
  }
  if (i % 10 === 0) {
    logMemoryUsage()
  }
}

logMemoryUsage()
global.gc()
console.log("GC done", performance.now() - start)
logMemoryUsage()
console.log("XXXXXXXXX part 2 end")
Run the above using the command below.
node --trace-gc --expose-gc <file_name.js>

Observations

  • Took roughly 73 milliseconds to complete. (That's right!)
  • Far fewer garbage-collection triggers.
  • Peak memory usage reached around 240 MB in the heap-snapshot logs.
While it's so much easier to simply write a spread operator, it's equally hard to debug, and when used in a loop it copies the whole array on every iteration, so total allocations grow quadratically in large-scale applications.
Since this pattern depends so heavily on the GC to keep reclaiming memory, a GC that isn't running frequently enough can lead to repeated service crashes.
Production code is never as simple as the examples above, so it's wise to keep an eye out for such instances.
As a parting thought: what do you think will happen if we use spread inside the push call on the array, instead of the extra for loop? 😀
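For the curious, that variant looks like the sketch below. Try it yourself with much larger arrays before trusting it in production; each spread element becomes a separate argument to push(), which is worth thinking about.

```javascript
const arr = [1]
const newArr = Array(10000).fill(0)

// Spread inside push: every element of newArr is passed to push()
// as an individual function argument in a single call.
arr.push(...newArr)

console.log(arr.length) // 10001
```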