Any ideas about a serious advantage of a heap over a balanced BST?

I see that a heap can be implemented in an array, and the code is simpler. Fine, that's something.

Does anybody see any big advantage of heaps over balanced BSTs?

(If we cache the maximum in a BST, we have the same O(1) getMax operation.)
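For concreteness, here is a minimal sketch of that cached-max idea (hypothetical names, using std::set to stand in for the balanced BST):

```cpp
#include <iterator>
#include <limits>
#include <set>

// Sketch of the "cache the maximum" idea: a balanced BST (std::set,
// typically a red-black tree) plus one stored value, so getMax is O(1),
// just like peeking at the root of a max-heap.
class MaxCachedBST {
    std::set<int> tree;
    int cachedMax = std::numeric_limits<int>::min();
public:
    void insert(int x) {                 // O(log n), plus one comparison for the cache
        tree.insert(x);
        if (x > cachedMax) cachedMax = x;
    }
    int getMax() const { return cachedMax; }   // O(1)
    void popMax() {                      // O(log n): erase, then re-read the rightmost node
        tree.erase(std::prev(tree.end()));
        cachedMax = tree.empty() ? std::numeric_limits<int>::min()
                                 : *tree.rbegin();
    }
};
```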

Thanks

The size of a heap is smaller than the size of a tree: heaps can be implemented on an array with effectively no overhead for storage, whereas trees need extra storage to refer to each node's two subtrees (*). Intuitively (I haven't really thought about it hard), it seems to me that an insert into a heap should be cheaper than an insert into a tree because the heap is only semi-ordered, though the two operations are likely to be the same in big-O terms since the depth of the tree/heap will be the same.
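To make the "no storage overhead" point concrete, here is a minimal sketch (my own illustration) of the packed-in-array max-heap and its sift-up insert; a node's children live at computed indices, so no child pointers are stored at all:

```cpp
#include <cstddef>
#include <utility>
#include <vector>

// Packed-in-array max-heap: parent(i) = (i - 1) / 2, children of i at
// 2i + 1 and 2i + 2. No pointers anywhere -- just the values.
void heapInsert(std::vector<int>& heap, int x) {
    heap.push_back(x);                    // append at the next free slot
    std::size_t i = heap.size() - 1;
    while (i > 0) {                       // sift up: the heap is only semi-ordered,
        std::size_t parent = (i - 1) / 2; // so we fix a single root-to-leaf path
        if (heap[parent] >= heap[i]) break;
        std::swap(heap[parent], heap[i]);
        i = parent;
    }
}
```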

(*) For extra credit, think about how to implement a balanced binary tree on an array so that the locations of each node's subtrees can be calculated rather than stored. Is there a size savings doing it that way? Why (not)? Hint: Is a balanced binary tree guaranteed to be completely symmetrical at every node?
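(One way to make that hint concrete, as a sketch rather than the only answer: the 2i + 1 / 2i + 2 arithmetic works for a heap because a heap is a complete tree, with every level full except possibly the last. A balanced BST such as an AVL or red-black tree is only height-balanced, so its shape is irregular; a formula-based array layout must reserve a slot for every position in a full tree of that height, and the irregular shape leaves many of those slots empty.)

```cpp
#include <cstddef>

// Hypothetical helper: slots a formula-indexed array must reserve for a
// binary tree of the given height -- a full tree's node count, whether or
// not the balanced tree actually fills them. Height 0 is a single node.
std::size_t slotsNeeded(unsigned height) {
    return (std::size_t{1} << (height + 1)) - 1;
}
```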

Now, with memory being very cheap, we are often tempted to use a 'big' solution for a programming problem. In the past, memory was relatively much more expensive, so people spent a lot of time making algorithms that were both fast and small. For even more extra credit: What is the downside of using larger than necessary structures in today's world? Hint: Can it affect the speed of an algorithm that processes that larger structure? How? What happens if you want to share the structure among parallel processors?

It's mostly a question of constant-factor speed and memory allocation. A really fast heap implementation won't use the packed-in-array representation, though, since that's not cache-friendly. Instead, they'll use a tree of packed-in arrays. Either way, there's no need to worry about rebalancing or all the other ugly problems BSTs have.
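This isn't the exact tree-of-packed-arrays layout described above, but as a sketch of a simpler step in the same cache-conscious direction, here is the insert path of a d-ary heap: wider nodes make the tree shallower, and all d children of a node sit adjacent in memory:

```cpp
#include <cstddef>
#include <utility>
#include <vector>

// Illustrative d-ary max-heap: children of node i live at D*i + 1 .. D*i + D,
// contiguous in memory, and parent(i) = (i - 1) / D. Fewer levels means
// fewer cache misses per operation than the binary layout.
constexpr std::size_t D = 4;   // children per node; tune toward the cache line size

void daryHeapInsert(std::vector<int>& heap, int x) {
    heap.push_back(x);
    std::size_t i = heap.size() - 1;
    while (i > 0) {
        std::size_t parent = (i - 1) / D;
        if (heap[parent] >= heap[i]) break;
        std::swap(heap[parent], heap[i]);
        i = parent;
    }
}
```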

The code is also a lot simpler, making your program fit more easily on a floppy disk.

So, Griswolf, you're actually saying that I'm right - no qualitative advantages of a heap over a BST.

You emphasize the importance of the quantitative, non-asymptotic differences, and that's fine. I guess your last riddle meant that size is important when transferring via a slow medium like a DB, a network, etc. This makes a lot of sense.

Thank you. Gidi

So, Griswolf, you're actually saying that I'm right - no qualitative advantages of a heap over a BST.

Huh? There are qualitative advantages -- if you're dying for better performance.

I guess your last riddle meant that size is important when transferring via a slow medium like a DB, a network, etc. This makes a lot of sense.

That makes no sense at all; you can flatten a BST to a sorted array to transfer it over a network or to a DB.
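As a sketch of that flattening (with std::set standing in for the BST), an in-order walk yields the elements already sorted, so serializing for a network or a DB is a straight copy:

```cpp
#include <set>
#include <vector>

// Flatten a BST to a sorted array: std::set iterates in order, so this is
// an in-order traversal into contiguous storage, O(n). A balanced tree can
// be rebuilt from the sorted array in O(n) by recursing on the middle element.
std::vector<int> flatten(const std::set<int>& bst) {
    return std::vector<int>(bst.begin(), bst.end());
}
```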

It's mostly a question of constant-factor speed and memory allocation. A really fast heap implementation won't use the packed-in-array representation, though, since that's not cache-friendly. Instead, they'll use a tree of packed-in arrays. Either way, there's no need to worry about rebalancing or all the other ugly problems BSTs have.

The code is also a lot simpler, making your program fit more easily on a floppy disk.

Yeah, Rashakil, you're also going in the same direction.

So, Griswolf, you're actually saying that I'm right - no qualitative advantages of a heap over a BST.

No. At least in my view of the world, "small" really does beat "large" as a quality, all other things equal. But even if you count speed or some other aspect as "quality", heaps beat balanced trees sometimes.

You emphasize the importance of the quantitative, non-asymptotic differences, and that's fine. I guess your last riddle meant that size is important when transferring via a slow medium like a DB, a network, etc. This makes a lot of sense.

Thank you. Gidi

No, I'm not talking about transfers, but about outright sharing. I'm also talking about single-processor issues. Think about pointer size and what it means for memory, for CPU memory-handling subunits, etc. Rashakil's post mentioning the cache is germane. You might want to consider that Moore's law is probably on its last legs (at least in "ordinary silicon"), so we may once again find ourselves effectively limited (at a much, much larger scale) by the cost or availability of memory... Though we are already doing an end run by adding processors, which will need to share (somehow) the data they process...
