
Saturday, January 10, 2015

We're hiring again!

Our team is expanding and we're growing again.
We need to fill 9 positions. If you're interested (or know anyone who is), send me your resume at pavel@fusu.us


More about the things we do:
http://www.allthingsdistributed.com/2014/11/apollo-amazon-deployment-engine.html

Thursday, May 8, 2014

We're hiring!

Update:
The position has been filled, but we'll be hiring again soon!

Builder Tools team @ Amazon is hiring:

http://www.amazon.com/gp/jobs/254131

Do you think you have what it takes? We need you!
Apply online or send me your resume and we'll get you an interview.

Sunday, July 28, 2013

Finding The Median In Large Sets Of Numbers Split Across 1000 Servers

How would you find the median across a thousand servers, each holding a billion numbers?

This question invites a lot of discussion because it is quite vague and requires making some assumptions. Obviously we can't do any kind of in-memory sorting of the whole data set because we don't have enough memory; at best we might fit one server's set at a time (a billion 32-bit integers is roughly 4 GB).

The first obvious solution is an external merge sort followed by a lookup of the n/2-th element (or the average of the n/2-th and (n/2 + 1)-th elements when n is even; a small sketch of that lookup follows below). This is O(n log n), but the interviewer may be looking for an O(n) solution, which might seem impossible: determining the median appears to require a sorted set, and sorting by comparison has an n log n lower bound. But who says we need to sort by comparison?
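(The lookup sketch, assuming the merged output of the external sort can be consumed in order as an Iterator; the class and method names are mine, for illustration only.)

import java.util.Iterator;

public class MedianFromSortedStream {

    /**
     * Given the globally sorted stream produced by the external merge sort and the total
     * count n, walk to the middle element; for even n the median is the average of the
     * two middle elements.
     */
    public static double median(Iterator<Long> sorted, long n) {
        long target = (n - 1) / 2;                    // 0-based index of the lower middle element
        for (long i = 0; i < target; i++) sorted.next();
        long lower = sorted.next();
        return (n % 2 == 1) ? lower : (lower + sorted.next()) / 2.0;
    }
}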

Another solution comes from the observation that we need to find the n/2-th element of a sorted array, which generalizes to the order-statistics problem of finding the kth smallest number in an unsorted array. This algorithm is also called quickselect, or the selection algorithm; I talked about an implementation of it in this article. The problem is that it is an in-memory algorithm, and we would not have enough memory to hold all the sets, so we need a distributed version, which essentially needs a routine to move numbers between servers. We have to assume that routine is optimized and efficient, because too many factors can affect it (network latency, connection problems). This is an O(n) solution, and the distributed version of kth smallest may look like this:
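(A minimal sketch of the idea, assuming a hypothetical NumberServer interface that can count and sample values on each remote set; instead of physically moving numbers between servers, it narrows a value range around random pivots, so each round costs one counting pass per server.)

import java.util.List;

/** Hypothetical client for one remote server's set of numbers (illustrative only). */
interface NumberServer {
    long countLessThan(long value);                 // how many values are strictly below 'value'
    long countEqualTo(long value);                  // how many values equal 'value'
    long pickRandomValueBetween(long lo, long hi);  // any value in [lo, hi], or Long.MIN_VALUE if none
}

public class DistributedSelect {

    /** Returns the k-th smallest value (1-based) across all servers; the median is k = n/2. */
    public static long kthSmallest(List<NumberServer> servers, long k) {
        long lo = Long.MIN_VALUE + 1, hi = Long.MAX_VALUE;
        while (true) {
            // Pick a pivot that actually exists somewhere in the remaining value range.
            long pivot = Long.MIN_VALUE;
            for (NumberServer s : servers) {
                long candidate = s.pickRandomValueBetween(lo, hi);
                if (candidate != Long.MIN_VALUE) { pivot = candidate; break; }
            }

            // One counting round trip per server: how many values fall below / equal the pivot.
            long less = 0, equal = 0;
            for (NumberServer s : servers) {
                less += s.countLessThan(pivot);
                equal += s.countEqualTo(pivot);
            }

            if (k <= less) {
                hi = pivot - 1;            // the answer lies strictly below the pivot
            } else if (k <= less + equal) {
                return pivot;              // the pivot itself is the k-th smallest
            } else {
                lo = pivot + 1;            // the answer lies strictly above the pivot
            }
        }
    }
}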


Another solution comes from the second observation: we don't have to sort by comparison. We could build a histogram of the counts of the numbers across the servers, as in counting sort. Here we have to assume a few more things. First, we have to know something about the range of the numbers: if the range is on the order of a billion, we can store an array of about a billion cells, which is reasonable since the problem statement already lets us process a billion numbers in memory, and that is the second assumption. Again, this is an O(n) solution, because we compute the histogram by going through all the numbers once and then find the kth number (the median) by walking the histogram's buckets. In code it may look similar to this:
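(A minimal single-coordinator sketch; MAX_VALUE, the assumed exclusive upper bound on the numbers, and the idea that each server ships back its local histogram are both assumptions made for illustration.)

import java.util.List;

public class HistogramMedian {

    static final int MAX_VALUE = 1_000_000_000;   // assumed exclusive upper bound on the numbers

    /** Merges the per-server histograms (each server counts its own numbers locally). */
    public static long[] mergeHistograms(List<long[]> perServerHistograms) {
        long[] total = new long[MAX_VALUE];       // note: this array alone is ~8 GB of memory
        for (long[] histogram : perServerHistograms) {
            for (int value = 0; value < MAX_VALUE; value++) {
                total[value] += histogram[value];
            }
        }
        return total;
    }

    /** Walks the merged histogram until the cumulative count reaches k (1-based). */
    public static int kthSmallest(long[] histogram, long k) {
        long seen = 0;
        for (int value = 0; value < histogram.length; value++) {
            seen += histogram[value];
            if (seen >= k) return value;
        }
        throw new IllegalArgumentException("k is larger than the total count");
    }
}

With n the total count across all servers, the median is kthSmallest(histogram, n / 2), or the average of the n/2-th and (n/2 + 1)-th values for even n.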



We could think of more solutions if we could assume even more facts. Say we know the numbers across all servers are uniformly distributed. With this many numbers we could say the median is approximately equal to the average of all the numbers, so if we also know the range of the distribution, say 0..1 billion, computing the median becomes an O(1) operation: 0 + (1 billion - 0) / 2 = 500 million. If we don't know the range, we can approximate the answer by taking the median of the per-server medians, each found in O(n) with the selection algorithm on its own server.

If the distribution is not uniform but we know something about it, we can still estimate the median with some confidence. Of course we can find other solutions if we make other assumptions, but the ones above can probably satisfy any interviewer (or whoever asked this question).

Friday, July 19, 2013

Printing Outside Nodes (Boundary) Of A Binary Tree

Contrary to many of the solutions I found online for this problem, I'm going to say that a breadth-first search (printing the first and last element of each level) is not appropriate here: it misses leaves that sit at levels closer to the root. The solution here is a depth-first, pre-order algorithm, but let me explain the problem statement first. Given the binary search tree below, print the boundary nodes of the tree anticlockwise. Printing the boundary nodes for our tree should result in the following sequence: [60, 41, 16, 25, 42, 55, 62, 64, 70, 65, 74].

Fig. 1 - Simple Binary Search Tree

A pre-order traversal is more appropriate here because it walks the branches in the order we need; for example, it visits [60, 41, 16, 25] first, which is exactly the left boundary, so we print those nodes. The sequences that follow are only partly of interest, so we can add a flag that determines whether a node should be printed (e.g., print only leaf nodes). Out of [53, 46, 42, 55, 74, 65, 63, 62, 64, 70] we would print only the leaves [42, 55, 62, 64, 70]. We're then left with the right branch, where we also have to print the non-leaf nodes on the right edge, but in reverse order; for this we can keep a separate stack and push those nodes onto it while traversing. In code this may look like this:
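(A minimal sketch of that pre-order-with-a-flag idea; the Node class and method names are mine, not from the original post.)

import java.util.ArrayDeque;
import java.util.Deque;

public class TreeBoundary {

    static class Node {
        int value;
        Node left, right;
        Node(int value) { this.value = value; }
    }

    /** Prints the boundary anticlockwise: left edge, then leaves, then right edge bottom-up. */
    public static void printBoundary(Node root) {
        if (root == null) return;
        System.out.print(root.value + " ");
        printLeftSide(root.left, true);
        Deque<Integer> rightEdge = new ArrayDeque<>();        // stack reverses the right edge
        printRightSide(root.right, true, rightEdge);
        while (!rightEdge.isEmpty()) System.out.print(rightEdge.pop() + " ");
        System.out.println();
    }

    /** Pre-order walk of the left subtree: print left-edge nodes and every leaf. */
    private static void printLeftSide(Node node, boolean onEdge) {
        if (node == null) return;
        boolean leaf = node.left == null && node.right == null;
        if (onEdge || leaf) System.out.print(node.value + " ");
        printLeftSide(node.left, onEdge);
        printLeftSide(node.right, onEdge && node.left == null);   // edge continues right only if there is no left child
    }

    /** Pre-order walk of the right subtree: print leaves, stack the non-leaf right-edge nodes. */
    private static void printRightSide(Node node, boolean onEdge, Deque<Integer> stack) {
        if (node == null) return;
        boolean leaf = node.left == null && node.right == null;
        if (leaf) System.out.print(node.value + " ");
        else if (onEdge) stack.push(node.value);
        printRightSide(node.left, onEdge && node.right == null, stack);
        printRightSide(node.right, onEdge, stack);
    }
}

On the tree from Fig. 1, whose full pre-order is [60, 41, 16, 25, 53, 46, 42, 55, 74, 65, 63, 62, 64, 70], this prints 60 41 16 25 42 55 62 64 70 65 74, the expected boundary.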




Wednesday, July 17, 2013

Matching An Input String With A Given Dictionary

String-matching questions come up pretty often in interviews, and this is one of them. I've seen two variations online: one asks you to split the string into meaningful words, and the other to determine all the meaningful words that can be produced from sub-strings of the input.

The first one seems easy: you go through the string once and output what you've matched, thus O(n), where n is the number of characters in the input string. The second one involves trying all possible sub-words starting at each position, which is O(nk), where k is the length of the longest dictionary word.

Now another problem is how to represent the dictionary. A hash map won't work well here because every candidate sub-string would have to be hashed and looked up separately, and the map loses its efficiency; the first good fit is a trie, and that's what I used in both problems mentioned. An example of an implementation of a trie can be found here.
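(A minimal sketch of how the trie-backed matching for the second variation might look; the class and method names are mine, not from the linked implementation.)

import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class DictionaryMatcher {

    /** A minimal trie node: children per character and an end-of-word flag. */
    static class TrieNode {
        Map<Character, TrieNode> children = new HashMap<>();
        boolean isWord;
    }

    private final TrieNode root = new TrieNode();

    public void addWord(String word) {
        TrieNode node = root;
        for (char c : word.toCharArray()) {
            node = node.children.computeIfAbsent(c, k -> new TrieNode());
        }
        node.isWord = true;
    }

    /**
     * Second variation: every dictionary word that occurs as a sub-string of the input.
     * For each starting position we walk the trie at most k characters, hence O(nk).
     */
    public List<String> findAllWords(String input) {
        List<String> found = new ArrayList<>();
        for (int start = 0; start < input.length(); start++) {
            TrieNode node = root;
            for (int end = start; end < input.length(); end++) {
                node = node.children.get(input.charAt(end));
                if (node == null) break;                        // no dictionary word extends this prefix
                if (node.isWord) found.add(input.substring(start, end + 1));
            }
        }
        return found;
    }
}

The first variation can reuse the same trie, walking it from the current position and restarting after each emitted word.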




Now, can it be done faster? Yes, we could use the Aho-Corasick string-matching algorithm, which builds a finite state machine that resembles a trie. That improves the second problem from O(nk) to O(n) (plus the number of matches), but don't forget that there is pre-processing time to build the automaton. You may not be asked to implement it in an interview, but it's good to mention.

Tuesday, July 16, 2013

Re-creating A Binary Search Tree Given Its Pre-order Traversal

As a reminder, a pre-order traversal visits the root first, then the left and right subtrees; thus the first element of a pre-order traversal is always the root of the tree. It may be easier to find a quick solution by looking at this post, where I showed how to validate a tree; our interest lies in its recursive solution. We can traverse the array and create the tree in the same manner it is traversed in that solution, by maintaining a minimum and a maximum value.
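(A minimal sketch of that idea, assuming distinct int values; the class and field names are mine.)

public class PreorderToBst {

    static class Node {
        int value;
        Node left, right;
        Node(int value) { this.value = value; }
    }

    // External index into the pre-order array; see the note below on why it is not
    // passed as a plain int argument.
    private int index = 0;

    public Node rebuild(int[] preorder) {
        index = 0;
        return rebuild(preorder, Integer.MIN_VALUE, Integer.MAX_VALUE);
    }

    /** Consumes the next value only if it fits in the (min, max) window of this subtree. */
    private Node rebuild(int[] preorder, int min, int max) {
        if (index == preorder.length) return null;
        int value = preorder[index];
        if (value < min || value > max) return null;   // value belongs to an ancestor's other subtree

        index++;                                       // consume the value as this subtree's root
        Node node = new Node(value);
        node.left = rebuild(preorder, min, value);     // left subtree: values below the root
        node.right = rebuild(preorder, value, max);    // right subtree: values above the root
        return node;
    }
}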





Note:
The external index is used because passing it as a plain integer argument will not work: Java passes arguments by value, so an increment made inside a recursive call would not be visible to the caller. To avoid the external index, we could use a mutable integer class or an array of ints whose first cell holds the index value:
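(A sketch of that array-of-ints variant of the same recursion; only the way the position is carried changes.)

// Same recursion, but the current position travels in a one-element array instead of a
// field, so the increment made inside a call is visible to the caller.
private Node rebuild(int[] preorder, int[] index, int min, int max) {
    if (index[0] == preorder.length) return null;
    int value = preorder[index[0]];
    if (value < min || value > max) return null;   // value belongs to an ancestor's other subtree

    index[0]++;                                    // consume the value as this subtree's root
    Node node = new Node(value);
    node.left = rebuild(preorder, index, min, value);
    node.right = rebuild(preorder, index, value, max);
    return node;
}

The initial call becomes rebuild(preorder, new int[] {0}, Integer.MIN_VALUE, Integer.MAX_VALUE).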


Implementing a mutable integer class is fairly easy, but it's mostly re-inventing the wheel: Apache Commons already has such a class (MutableInt) that could be used here.