I’m a skeptical engineer who thinks specification and documentation are never, ever obvious. I believe the unambiguous and comprehensive stuff is the *code*, not the documentation. When I say unambiguous, I mean the computer can interpret it in only one way – though people can still easily misunderstand it.

We can observe this problem in mathematics too!

There are a lot of people out there who think that once they’ve written down the specs, all that’s left to do is the coding – coding as some sort of mechanical activity. Those people might provide some mockups. Sometimes they create step-by-step task breakdowns. Still, it’s documentation.

## Mathematics

Let’s pick a domain that works with well-defined abstractions and check whether they are unambiguous. Mathematicians work with exact definitions and formal proofs, right? Is there an exception? Let’s check the Heap data structure. Actually, let’s check the complexity of the Insert operation on the Heap data structure.

### Is Insert on Heap `O(log n)`?

#### First things first. What’s a Heap?

The Heap data structure is a tree where each element is bigger (or smaller) than the elements below it. When the bigger one comes first, it’s a max-heap. Otherwise it’s a min-heap.

This one below is a max heap.

It’s a binary heap, i.e. each node has at most two child nodes. It’s also a full heap because each level of the tree is filled as much as possible with the given number of elements.

This other heap is a max heap too:

Probably it’s a binary tree. Or a ternary tree. But definitely not a full one. Unless we consider one-ary trees.
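The max-heap property above can be made concrete in code. Here’s a small sketch that checks it for an array-based binary heap, where the children of the node at index `i` sit at indices `2*i + 1` and `2*i + 2` (the class and method names are my own, just for illustration):

```java
public class MaxHeapCheck {

    // Returns true when every parent is >= both of its children,
    // i.e. the array represents a valid binary max-heap.
    static boolean isMaxHeap(int[] a) {
        for (int i = 0; i < a.length; i++) {
            int left = 2 * i + 1;
            int right = 2 * i + 2;
            if (left < a.length && a[left] > a[i]) return false;
            if (right < a.length && a[right] > a[i]) return false;
        }
        return true;
    }

    public static void main(String[] args) {
        System.out.println(isMaxHeap(new int[]{9, 7, 8, 3, 5, 6, 4})); // true
        System.out.println(isMaxHeap(new int[]{3, 7, 8, 9, 5, 6, 4})); // false
    }
}
```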

#### Okay, then what’s `O(log n)`?

It’s pronounced as big-oh log-n.

Okay, let’s take it seriously. The `O()` (big-oh) notation is a way to measure the complexity of an algorithm. It says that the number of steps required grows at most proportionally to the function within the big-oh.

E.g. finding the maximum element in an array is `O(n)`, where n is the size of the array. We call this kind of complexity *linear*. Naively sorting an array – finding the biggest number, then the second biggest, and so on – has `O(n^2)` complexity. We call that *quadratic* (a kind of polynomial) complexity.
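The two examples can be sketched in Java (the names are my own; this variant of the naive sort picks the smallest remaining element each round, which is the same `O(n^2)` idea in ascending order):

```java
public class ComplexityExamples {

    // One pass over the array -> O(n), linear.
    static int findMax(int[] a) {
        int max = a[0];
        for (int x : a) {
            if (x > max) max = x;
        }
        return max;
    }

    // Selection sort: n rounds, each scanning up to n elements -> O(n^2).
    static void selectionSort(int[] a) {
        for (int i = 0; i < a.length; i++) {
            int best = i;
            for (int j = i + 1; j < a.length; j++) {
                if (a[j] < a[best]) best = j;
            }
            int tmp = a[i];
            a[i] = a[best];
            a[best] = tmp;
        }
    }

    public static void main(String[] args) {
        System.out.println(findMax(new int[]{3, 1, 4, 1, 5})); // 5
    }
}
```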

#### Asymptotic Upper Bound

Actually, `O()` provides an *asymptotic upper bound* for the complexity.

*Asymptotic* means that `O()` only cares about the *very long run*. In the very long run, linear algorithms will be faster than polynomial ones. E.g. if one sorting algorithm takes n^2 steps for doing whatever with an n-long array, while the other one takes 100 * n, then the second one will be faster for any n greater than 100. So `O(n)` is better than `O(n^2)`, even if the `O(n^2)` algorithm performs way better with small inputs. This CS Stack Exchange answer explains it with some pictures.

*Upper bound* means that the `O()` notation is about the worst case scenario – when the actual number of steps the algorithm takes is highest. So when people say that *quicksort is O(n * log n), but the worst case complexity is O(n^2)*, they are being mathematically incorrect. Being mathematically incorrect can mean you’re being practical. See this Linux man page for some details.

### So, what’s the insert like in a Heap?

The insert goes like this: you add the element at the bottom of the heap (tree). If it’s bigger than the element above it, you swap them. Then you swap again, if necessary, all the way up to the top. So an insert operation takes as many steps as there are levels in the heap.
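The steps above are often called “sift up”. Here’s a minimal sketch for an array-backed max-heap, assuming the usual convention that the parent of index `i` is at `(i - 1) / 2` (class and method names are my own):

```java
import java.util.ArrayList;
import java.util.List;

public class MaxHeap {

    private final List<Integer> items = new ArrayList<>();

    public void insert(int value) {
        items.add(value);                // put it at the bottom of the tree
        int i = items.size() - 1;
        while (i > 0) {
            int parent = (i - 1) / 2;
            if (items.get(i) <= items.get(parent)) {
                break;                   // heap property holds, stop climbing
            }
            // bigger than the element above it: swap and keep going up
            int tmp = items.get(i);
            items.set(i, items.get(parent));
            items.set(parent, tmp);
            i = parent;
        }
    }

    public int peekMax() {
        return items.get(0);             // the biggest element is on top
    }

    public static void main(String[] args) {
        MaxHeap heap = new MaxHeap();
        heap.insert(3);
        heap.insert(9);
        heap.insert(5);
        System.out.println(heap.peekMax()); // 9
    }
}
```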

How many levels are there within the Heap? Let’s check our heaps again:

Both of the heaps have four levels. The full binary heap stores nine nodes on these four levels, while the other one stores four nodes.

The full heap has ceiling(log n) levels for n nodes (base-2 logarithm, more precisely floor(log2 n) + 1). The not-so-full heap can degrade to n levels for n items.
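The level count is easy to check numerically; this little helper (my own, for illustration) computes floor(log2 n) + 1 by repeated halving:

```java
public class HeapLevels {

    // Number of levels in a full binary heap holding n nodes.
    // Each halving drops one level, so this is floor(log2 n) + 1.
    static int levels(int n) {
        int levels = 0;
        while (n > 0) {
            levels++;
            n /= 2;
        }
        return levels;
    }

    public static void main(String[] args) {
        System.out.println(levels(9)); // the nine-node full heap above: 4
    }
}
```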

So, mathematically speaking, the Insert operation of the Heap has `O(n)` complexity.

If you check the implementations, you’ll see that they are all implemented as a full tree, represented in an array. Probably this is why all the algorithm textbooks refer to the Heap as a data structure with `O(log n)` insert complexity.

See? This is when theory and practice have their disagreements, but they work on the issue and find a compromise.

Still, the documentation is neither unambiguous nor comprehensive. Definitely not obvious.

## Using the Heap

By the way, why is this Heap so important? It has many uses. One of them is the n-way merge: you have n sorted arrays, each m items long, and you want to merge them into one sorted array. In this case n is way smaller than m.

If you use a Heap, you can do it in `O(n * m * log n)` steps. If you instead scanned the heads of all n arrays for each output item, it would be `O(n^2 * m)` steps. This difference doesn’t really matter in practice, because n is way smaller than m, and writing to the HDD is way slower than comparing numbers.

### An example

Anyway, the Java implementation of the Heap is the PriorityQueue. Just as an exercise, this is how you can use it for the n-way merge:

First, you initialize the heap with all the inputs you have:

```java
private <T extends Comparable<T>> void initializeHeap(
        final PriorityQueue<MergeInputHolder<T>> heap, Stream<T>... inputs) {
    for (Stream<T> input : inputs) {
        final MergeInputHolder<T> holder = new MergeInputHolder<T>(input);
        if (!holder.isEmpty()) {
            heap.add(holder);
        }
    }
}
```
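The MergeInputHolder class isn’t shown in the post; here is my own guess at a minimal version matching the interface used above (`isEmpty()`, `getHead()`, `step()`), backed by an Iterator over the input Stream. Comparing holders by their heads is what lets the PriorityQueue (a min-heap) always offer the smallest head first:

```java
import java.util.Iterator;
import java.util.stream.Stream;

// Hypothetical sketch of MergeInputHolder: wraps one sorted input and
// exposes its current head element, stepping forward on demand.
class MergeInputHolder<T extends Comparable<T>>
        implements Comparable<MergeInputHolder<T>> {

    private final Iterator<T> iterator;
    private T head;

    MergeInputHolder(Stream<T> input) {
        this.iterator = input.iterator();
        step(); // load the first element, if any
    }

    boolean isEmpty() {
        return head == null;
    }

    T getHead() {
        return head;
    }

    void step() {
        head = iterator.hasNext() ? iterator.next() : null;
    }

    @Override
    public int compareTo(MergeInputHolder<T> other) {
        // Order holders by their current heads, so the heap's top
        // holder always carries the smallest available element.
        return head.compareTo(other.head);
    }
}
```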

Then, you wrap it into an iterator, then a Stream:

```java
private static class HeapMergeIterator<T extends Comparable<T>> implements Iterator<T> {

    private final PriorityQueue<MergeInputHolder<T>> queueOfStreams;

    public HeapMergeIterator(PriorityQueue<MergeInputHolder<T>> queueOfStreams) {
        this.queueOfStreams = queueOfStreams;
    }

    @Override
    public boolean hasNext() {
        return !queueOfStreams.isEmpty();
    }

    @Override
    public T next() {
        MergeInputHolder<T> holder = queueOfStreams.poll();
        T result = holder.getHead();
        holder.step();
        if (!holder.isEmpty()) {
            queueOfStreams.add(holder);
        }
        return result;
    }
}

private <T extends Comparable<T>> Stream<T> wrapIteratorToStream(final Iterator<T> iterator) {
    Iterable<T> iterable = () -> iterator;
    return StreamSupport.stream(iterable.spliterator(), false);
}
```

Finally, users can loop over it. With a Collector too:

```java
// WHEN
Stream<Integer> merged = merger.merge(inputList1.stream(), inputList2.stream());

// THEN
List<Integer> collected = merged.collect(Collectors.toList());
assertEquals(Arrays.asList(1, 2, 3, 4, 5, 6), collected);
```

You can find the full project on GitHub.