Benchmarking deleting an array item in JavaScript

Posted September 20th, 2016

Recently I was building a very simple JavaScript application where I needed to delete an item at a specific index in an array. Since it was a Redux + React application, the deletion had to avoid mutating the original array state.

I realise that Immutable.js is an option for solving these kinds of problems, but in the meantime I wanted to do this using plain JavaScript. I wasn't quite ready to bring Immutable.js into my project.

There are a few ways to delete an item at a specific index. In my case I can't just use a plain array.splice because splice mutates the array, so here are some options:

Option #1 - Slice and Splice

const indexToRemove = 2;
const myArray = [1, 2, 3, 4, 5];

// make a copy of the array
let myCopy = myArray.slice();
// mutate the copy: remove 1 element at indexToRemove
myCopy.splice(indexToRemove, 1);
// myCopy is now [1, 2, 4, 5]
              

Option #2 - Array.filter

const indexToRemove = 2;
const myArray = [1, 2, 3, 4, 5];

// keep every element whose index isn't the one being removed
const myCopy = myArray.filter((item, index) => index !== indexToRemove);
// myCopy is now [1, 2, 4, 5]
              

Option #3 - ES6 spread operator

const indexToRemove = 2;
const myArray = [1, 2, 3, 4, 5];

const myCopy = [
  ...myArray.slice(0, indexToRemove),  // copy the elements before the index
  ...myArray.slice(indexToRemove + 1)  // copy the elements after the index
];
// myCopy is now [1, 2, 4, 5]
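All three approaches should leave the original array untouched and produce the same result. A quick sanity check, reusing the snippets above:

const myArray = [1, 2, 3, 4, 5];
const indexToRemove = 2;

const viaSplice = (() => { const c = myArray.slice(); c.splice(indexToRemove, 1); return c; })();
const viaFilter = myArray.filter((item, index) => index !== indexToRemove);
const viaSpread = [...myArray.slice(0, indexToRemove), ...myArray.slice(indexToRemove + 1)];

console.log(viaSplice, viaFilter, viaSpread); // [1, 2, 4, 5] three times
console.log(myArray); // still [1, 2, 3, 4, 5]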
              

Now, looking at the speed of each option, filter seems terrible since it will always need to traverse the entire array to remove a single item. As for the ES6 version, I realised I actually had no idea how fast it would be in comparison to the others.

Does speed matter? If you're choosing between filter and the other options, I think it does. It's a pretty bad idea to traverse the entire array regardless of its size, since large arrays will be slowed dramatically by the filter version. I don't think this is a case of premature optimisation; it's just common sense. I wouldn't want the filter version perpetuated throughout my entire code base, so it's best not to use it even once.

I decided that as long as the ES6 version wasn't a whole lot slower than the slice and splice version, I'd be happy to use it. I created a benchmark using benchmark.js, which you can play with here. I wouldn't normally do micro-benchmarking, but this was interesting to me.
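For reference, a benchmark.js suite along these lines looks roughly like this (a minimal sketch of the setup, not the exact fiddle code):

const Benchmark = require('benchmark'); // in a fiddle, Benchmark is a global

const size = 100000;
const myArray = Array.from({ length: size }, (_, i) => i);
const indexToRemove = Math.floor(size / 2);

new Benchmark.Suite()
  .add('slice and splice', () => {
    const copy = myArray.slice();
    copy.splice(indexToRemove, 1);
  })
  .add('spread and slice', () => {
    const copy = [...myArray.slice(0, indexToRemove), ...myArray.slice(indexToRemove + 1)];
  })
  .add('filter', () => {
    const copy = myArray.filter((item, index) => index !== indexToRemove);
  })
  .on('cycle', (event) => console.log(String(event.target)))
  .run();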

The result was that the ES6 spread version is really slow. In fact, it's only a bit faster than the filter version. I only tested the benchmark in Chrome, but here are the numbers on a 100,000-item array:

slice and splice x 1,280 ops/sec ±1.69% (49 runs sampled)
spread and slice x 96.59 ops/sec ±4.27% (49 runs sampled)
filter x 85.87 ops/sec ±2.13% (52 runs sampled)
              

Muddying the waters with Babel

It occurred to me that Babel is likely doing something very different with the spread operator when compiling to ES5, so I thought I should benchmark that as well. In this version of the fiddle I've added the Babel-compiled version, which uses array.concat. It also uses a helper function, _toConsumableArray, which slows it down considerably.
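For the curious, the compiled output looks roughly like this (a sketch of typical Babel output from that era; the exact helper code varies by Babel version):

// Babel's helper: copies an array element by element (or falls back to Array.from)
function _toConsumableArray(arr) {
  if (Array.isArray(arr)) {
    for (var i = 0, arr2 = Array(arr.length); i < arr.length; i++) {
      arr2[i] = arr[i];
    }
    return arr2;
  } else {
    return Array.from(arr);
  }
}

// the spread expression becomes a concat of the two copied slices
var myCopy = [].concat(
  _toConsumableArray(myArray.slice(0, indexToRemove)),
  _toConsumableArray(myArray.slice(indexToRemove + 1))
);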

slice and splice x 1,266 ops/sec ±5.05% (43 runs sampled)
spread and slice x 99.06 ops/sec ±4.37% (49 runs sampled)
filter x 84.99 ops/sec ±2.09% (52 runs sampled)
babelified x 660 ops/sec ±3.80% (41 runs sampled)
              

Does it matter?

That's up to you; information is power. I found it an interesting exercise to compare them all. Looking at these numbers, I think the Babelified version is still slow enough that I'll fall back to slice and splice. The effort level of that approach is low, and it arguably doesn't add any complexity.
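If you do settle on slice and splice, it's easy to hide the copy-then-mutate dance behind a small helper (a minimal sketch; removeAt is just a name I've picked for illustration):

// returns a new array with the element at index removed; the input is untouched
function removeAt(array, index) {
  const copy = array.slice();
  copy.splice(index, 1);
  return copy;
}

removeAt([1, 2, 3, 4, 5], 2); // [1, 2, 4, 5]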