You can obviously approach algorithmic problems from many different angles, and I’ll try to show multiple solutions to reflect that.
Whenever there is more than one solution I’ll check the performance of each code snippet. Checking every algorithm for speed may not be the core factor in making your Front-End applications faster, but it’s a small step in a good direction :)
The assumption here is that faster is better. No surprises.
To perform my checks I’ll use the console.time('Name') method. You can check out performance.now() as well.
Please note that to have reliable measurements you need to run thousands of tests for each algorithm and then derive an average time from them. A single run is not really a valid test, as a browser’s running time varies quite a lot even for the same snippet run several times in a row.
My tests were run around 20 times each, which gave me some indication.
You’ve got two arrays with numbers. Your task is to get unique values out of each array. Then you need to create an array with those unique values.
Unique value means a value that is present in array A and absent in array B and the other way around - present in B but not present in A.
I’ll cover a very basic concept here - first I’ll create an empty array, then I’ll compare each array against the other one. If I don’t find a particular value, I’ll consider it unique and push it to the result array:
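The original snippet isn’t shown here, so this is a minimal sketch of the naive approach (the name difference and the sample inputs are my own):

```javascript
function difference(a, b) {
  const result = [];
  // Values from a that are missing from b...
  for (let i = 0; i < a.length; i++) {
    if (b.indexOf(a[i]) === -1) result.push(a[i]);
  }
  // ...and values from b that are missing from a
  for (let i = 0; i < b.length; i++) {
    if (a.indexOf(b[i]) === -1) result.push(b[i]);
  }
  return result;
}

console.log(difference([1, 2, 3, 4], [3, 4, 5])); // [1, 2, 5]
```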
I end up with an array containing only unique (according to our definition) values, so it kind of works… But what if we’ve got more than two arrays? This approach is neither efficient nor flexible…
Another strategy… Now I’ll merge the two arrays together and sort the result. Then any value that has a duplicate in the other array(s) will have the same value right next to it.
The next step is to iterate through the combined array; whenever I don’t find a duplicate neighbour, the value is unique and I push it to the results array that is returned at the end of the algorithm.
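A sketch of what this could look like (differenceSorted is a hypothetical name; it assumes duplicates only come from the other array, as described above):

```javascript
function differenceSorted(a, b) {
  // Merge and sort so duplicate values end up next to each other
  const merged = a.concat(b).sort((x, y) => x - y);
  const result = [];
  for (let i = 0; i < merged.length; i++) {
    // A value is unique when it differs from both of its neighbours
    if (merged[i] !== merged[i - 1] && merged[i] !== merged[i + 1]) {
      result.push(merged[i]);
    }
  }
  return result;
}
```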
This algorithm looks a bit better and seems more efficient. What’s still a bit over the top is that we sort all values from all arrays even if we don’t need them - a redundant operation… In addition, the conditions inside the if statement don’t look nice.
Let’s try another approach.
You might have noticed that I used two different techniques to iterate through an array - the for loop and map. There is great work out there comparing the performance of both.
for seems to be faster, or much faster for big datasets, and the syntax is friendly for entry-level developers.
On the other hand, the map syntax is very nice and clean, and the values you use are not mutated, so your initial data are safe.
I would say: if you expect a big dataset, go for the for loop; otherwise map seems to do the job.
So, to the point :) That’s the fastest and cleanest solution of the three, in my opinion. We concatenate both arrays and then filter the values: if both arrays contain a value we return false; if it’s in only one of them, the value ends up in the output array.
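A minimal sketch of this concat-and-filter approach (the name difference is mine, and I’m assuming Array.prototype.includes is available):

```javascript
function difference(a, b) {
  // Keep only the values that are NOT present in both arrays
  return a.concat(b).filter(v => !(a.includes(v) && b.includes(v)));
}

console.log(difference([1, 2, 3, 4], [3, 4, 5])); // [1, 2, 5]
```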
I would go for this solution.
I know you love the task just by reading the title :) The plan is simple: we provide a parameter value, then we check all Fibonacci values smaller than our parameter. If a Fibonacci value is an odd number, we add it to the sum, and the sum is returned at the end.
So here I’ll generate the Fibonacci sequence with a while loop and use the remainder operator % to check whether we’ve got an odd value.
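A sketch of the while-loop version (sumOddFibonacci is a name I made up for illustration):

```javascript
function sumOddFibonacci(limit) {
  let sum = 0;
  let prev = 0;
  let curr = 1;
  // Walk the sequence while the current value stays below the limit
  while (curr < limit) {
    if (curr % 2 !== 0) sum += curr; // remainder operator spots odd values
    const next = prev + curr;
    prev = curr;
    curr = next;
  }
  return sum;
}

console.log(sumOddFibonacci(10)); // 1 + 1 + 3 + 5 = 10
```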
This one is better in my opinion. First a sequence is generated with a for loop, and then we nicely reduce the array:
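Roughly, assuming the same hypothetical function name as before:

```javascript
function sumOddFibonacci(limit) {
  // Build the sequence first with a for loop...
  const fib = [0, 1];
  for (let i = 2; fib[i - 1] + fib[i - 2] < limit; i++) {
    fib.push(fib[i - 1] + fib[i - 2]);
  }
  // ...then reduce it, adding only odd values below the limit
  // (the v < limit guard keeps the seed value 1 out when limit <= 1)
  return fib.reduce((sum, v) => (v % 2 !== 0 && v < limit ? sum + v : sum), 0);
}
```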
Cleaner and faster.
This one is really cool :) You just provide an Arabic/decimal number and it’s converted to a Roman numeral.
The concept is that I create a set of decimal values and their Roman equivalents. Then I iterate through the decimal array, and as long as the parameter value is at least the current decimal, I append the matching Roman symbol to the output and subtract the decimal from the value. Have a look:
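A sketch of the lookup-table approach (toRoman is my own name for it):

```javascript
function toRoman(num) {
  // Decimal values paired with their Roman equivalents, largest first
  const decimals = [1000, 900, 500, 400, 100, 90, 50, 40, 10, 9, 5, 4, 1];
  const romans = ['M', 'CM', 'D', 'CD', 'C', 'XC', 'L', 'XL', 'X', 'IX', 'V', 'IV', 'I'];
  let output = '';
  for (let i = 0; i < decimals.length; i++) {
    // While the remaining value covers this decimal, append its Roman symbol
    while (num >= decimals[i]) {
      output += romans[i];
      num -= decimals[i];
    }
  }
  return output;
}

console.log(toRoman(2024)); // 'MMXXIV'
```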
What if your string has some HTML symbol and it needs to be replaced with HTML entity? Or maybe any other character?
I’ll try to make it as fast as I can. Basically, I need to know which HTML symbols I’m supposed to replace. I’ll use a RegExp and the string replace method for that.
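Something along these lines (escapeHtml is a hypothetical name; note & is replaced first so it doesn’t double-escape the other entities):

```javascript
function escapeHtml(str) {
  return str
    .replace(/&/g, '&amp;')
    .replace(/</g, '&lt;')
    .replace(/>/g, '&gt;')
    .replace(/"/g, '&quot;');
}

console.log(escapeHtml('<b>"hi" & bye</b>'));
// '&lt;b&gt;&quot;hi&quot; &amp; bye&lt;/b&gt;'
```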
It seems to be quite fast and effective, but come on… It looks bad, and what if I want to add more characters to my function?
Now my function will take two parameters. The first is the string I want to modify and the second is an object with a list of my entities:
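A sketch of the configurable version (replaceEntities is my own name; it assumes the entity keys are single characters with no regex metacharacters among them):

```javascript
function replaceEntities(str, entities) {
  // Build one character-class regex from the keys, e.g. /[<>]/g
  const pattern = new RegExp('[' + Object.keys(entities).join('') + ']', 'g');
  return str.replace(pattern, match => entities[match]);
}

// The entity map could just as well come from an API response
const htmlEntities = { '<': '&lt;', '>': '&gt;' };
console.log(replaceEntities('<p>hi</p>', htmlEntities)); // '&lt;p&gt;hi&lt;/p&gt;'
```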
This algorithm is a bit slower but looks so much better! I can get my entities e.g. from an API. That would be my choice.
Typical interview question:
‘Can you sum up all prime numbers smaller than a given value?’
OK - no problem :)
First I create a function findPrimes that finds all primes within a given range and outputs them as an array. I’ll use a double for loop (as it’s performant) and start from 2, because 0 and 1 are not primes.
The next step is to sum them up.
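A sketch of how this could look (findPrimes matches the name above; the summing helper sumPrimes is my own):

```javascript
function findPrimes(limit) {
  const primes = [];
  // Start at 2 - 0 and 1 are not primes
  for (let i = 2; i < limit; i++) {
    let isPrime = true;
    // Trial division: any divisor between 2 and i - 1 disqualifies i
    for (let j = 2; j < i; j++) {
      if (i % j === 0) {
        isPrime = false;
        break;
      }
    }
    if (isPrime) primes.push(i);
  }
  return primes;
}

function sumPrimes(limit) {
  return findPrimes(limit).reduce((sum, p) => sum + p, 0);
}

console.log(sumPrimes(10)); // 2 + 3 + 5 + 7 = 17
```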
Now something that hasn’t been used yet in this article - recursive function.
The concept is a null hypothesis of sorts: if a number can be divided by any number other than 1 or itself, it is not a prime. So I loop through all numbers smaller than our parameter and run that check.
Once a check is completed, I recursively check the decremented values until I reach 1:
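One way to sketch the recursion described above (isPrime and sumPrimesRecursive are hypothetical names; since the task asks for primes smaller than a given value, you would call it with limit - 1):

```javascript
function isPrime(n) {
  if (n < 2) return false;
  // Any divisor between 2 and n - 1 means n is not prime
  for (let i = 2; i < n; i++) {
    if (n % i === 0) return false;
  }
  return true;
}

// Recursively walk down from n to 1, adding each prime along the way
function sumPrimesRecursive(n) {
  if (n <= 1) return 0;
  return (isPrime(n) ? n : 0) + sumPrimesRecursive(n - 1);
}

console.log(sumPrimesRecursive(9)); // primes below 10: 2 + 3 + 5 + 7 = 17
```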
Ok that’s it for this article. I hope that you were patient enough to get to this point.
The main takeaway, in my opinion, is that as long as you’re not dealing with big datasets, readability is more important, as it makes maintenance easier.
Functions should be as flexible and testable as possible.
If you deal with more data, a for loop is a good choice for performance. Do you agree?