Sometimes we tend to over-engineer our code, just because we think it will look smarter, run faster, or be more canon-compliant. Let’s take, for example, this function that gets a value from the server and translates it into a class name to apply to an element.
angular.module('widgetTransactionsFilters', [])
  .filter('transactionType', function() {
    return function(input) {
      var strClass = 'type-0';
      switch (input) {
        case 'Payment Credit Card':
          strClass = 'type-1';
          break;
        case 'Cash Withdrawl':
          strClass = 'type-2';
          break;
        case 'Bill Payment':
          strClass = 'type-3';
          break;
        case 'Salary':
          strClass = 'type-4';
          break;
        case 'Online Transfer':
          strClass = 'type-5';
          break;
      }
      return strClass;
    };
  });
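For context, here’s a minimal sketch of how such a filter could be exercised; the $filter call and the 'Salary' input are my own assumptions for illustration, not part of the original widget:
// Hedged usage sketch: $filter('transactionType') returns the filter function.
// The 'Salary' value is an assumed example input, not real server data.
angular.module('widgetTransactionsFilters')
  .run(['$filter', function($filter) {
    var cssClass = $filter('transactionType')('Salary');
    console.log(cssClass); // logs "type-4"
  }]);
// In a template the equivalent would be something like
// ng-class="transaction.type | transactionType" (markup names assumed).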
It’s not the most elegant filter, but hey, it’s mine! As they say, the devil will find code for idle hands to refactor. In this case I decided, don’t ask me why, to change the switch for an array lookup. To make it less legible, mainly. To make any future change to the filter slightly more difficult? Who knows. In any case, this was the second version.
angular.module('widgetTransactionsFilters', [])
  .filter('transactionType', function() {
    return function(input) {
      return 'type-' +
        (['Payment Credit Card',
          'Cash Withdrawl',
          'Bill Payment',
          'Salary',
          'Online Transfer'].indexOf(input) + 1);
    };
  });
Despite my feeble attempts at AngularJS, I do know a few things about JavaScript. One of them is that most of the array methods are not particularly well optimized. So maybe it’s time to check which version is faster. Or less slow. Fortunately there’s an online tool that can help us answer this question quickly: jsperf.com.
With jsperf.com you can create a set of tests that will be run in a loop for a fixed amount of time. The speed of each test is determined by the number of iterations executed in that time. Additionally, you can run the same tests in different browsers on different platforms. This is especially useful when you’re optimizing your code for a hybrid app, where you already know the browser and the platform.
You can code your setup and add a number of tests. This is the setup:
<script>
  Benchmark.prototype.setup = function() {
    function bySwitch(input) {
      var strClass = 'type-0';
      switch (input) {
        case 'Payment Credit Card':
          strClass = 'type-1';
          break;
        case 'Cash Withdrawal':
          strClass = 'type-2';
          break;
        case 'Bill Payment':
          strClass = 'type-3';
          break;
        case 'Salary':
          strClass = 'type-4';
          break;
        case 'Online Transfer':
          strClass = 'type-5';
          break;
      }
      return strClass;
    }

    function byArray(input) {
      return 'type-' +
        (['Payment Credit Card',
          'Cash Withdrawal',
          'Bill Payment',
          'Salary',
          'Online Transfer'].indexOf(input) + 1);
    }
  };
</script>
And these are the tests, along with the results.
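Each test case simply calls one of the functions defined in the setup, something like the sketch below; the exact input string is an assumption on my part:
// Test 1: switch
bySwitch('Online Transfer');

// Test 2: array lookup
byArray('Online Transfer');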

As you can see, the array lookup is much slower than the old-fashioned switch. No surprises here, people. The only real surprise is the huge performance difference between Chrome Canary and WebKit when you run the same test in both browsers.

Corollary: be careful when you start refactoring and always test the performance of your code.
UPDATE: Vyacheslav Egorov, aka @mraleph, noticed this post and kindly redirected me to this excellent presentation on how to avoid benchmarking pitfalls caused by JIT optimization. Basically, “optimizer eats µbenchmarks for breakfast”.
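My reading of that pitfall, illustrated with a sketch of my own (not an example from the presentation): if the benchmark body always calls the function with the same string literal, the JIT can inline the call and fold the whole switch down to a constant, so the loop ends up measuring almost nothing.
// Hypothetical illustration (my assumption, not mraleph's code):
// the argument is a compile-time constant on every iteration...
bySwitch('Online Transfer');
// ...so after inlining, the optimizer can reduce the whole body to the
// constant 'type-5' and the benchmark times an (almost) empty loop.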

I’ve modified the first function to force the switch input NOT to be treated by the compiler as a constant, by changing it from switch(input) to switch(input.toString()). For clarity, only the switch expression changes; the rest of the function stays as before:
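function bySwitch(input) {
  var strClass = 'type-0';
  // .toString() keeps the engine from treating the compared value as a constant
  switch (input.toString()) {
    case 'Payment Credit Card':
      strClass = 'type-1';
      break;
    // ...the remaining cases are unchanged...
    case 'Online Transfer':
      strClass = 'type-5';
      break;
  }
  return strClass;
}
And here are the updated results.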

The difference is still there, both between browsers and between the two ways of testing for the string. But the number of iterations for each test suggests that all (or most) of the code in each of my tests is actually being executed. Or so I hope. By the way, the benchmark is located here. Feel free to use and abuse it.
So, the bottom line is: 1. don’t blindly trust microbenchmarks, 2. don’t assume anything about the language, and 3. run, don’t walk, to see mraleph’s presentation.