Talk:Binary heap

heap

So is it O(n log n) or O(n) after all? Sorting can't be O(n), but we aren't really doing full sorting here. Taw 00:35 Dec 12, 2002 (UTC)

Was:

It appears offhand to me that this should make this process take O(n lg n) time, but all the references I can find say it's O(n), although I can't understand their proofs. Once you have a heap, you can quickly find the node with the greatest value in it; it will be the node at the root of the tree. If you want to remove this node (as is necessary for a priority queue), in the array representation, you can swap that node's value with the value of the last node in the array (node 11 in the above example), shorten the array by 1 (removing that last node from the heap), and then restore the heap property, which still holds everywhere but at the top of the heap. The value at the top of the tree is now less than at least one of its children; swap it with the greater of its two children, and the heap property will be true at the top, but now possibly not where the top node was just swapped to; so you have to repeat the process until it is greater than either of its two children, which probably means until it reaches the bottom of the heap --- up to lg n swaps.

So removing the largest node from a heap takes O(lg n) steps.
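
For concreteness, here is a minimal Python sketch of the removal procedure described above, for a max-heap stored in a list with 0-based indexing (the function name and details are illustrative, not from the original text):

# Sketch of swap-based delete-max on an array-backed max-heap (0-based indexing).
# Assumes `heap` is a non-empty Python list satisfying the max-heap property.
def delete_max(heap):
    heap[0], heap[-1] = heap[-1], heap[0]        # swap the root with the last node
    largest = heap.pop()                         # shorten the array by one
    i, n = 0, len(heap)
    while True:
        left, right, big = 2 * i + 1, 2 * i + 2, i
        if left < n and heap[left] > heap[big]:
            big = left
        if right < n and heap[right] > heap[big]:
            big = right
        if big == i:                             # heap property restored
            break
        heap[i], heap[big] = heap[big], heap[i]  # swap with the greater child
        i = big
    return largest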



Swapping elements to adjust the heap is inefficient. Each element moves towards the top of the heap at most one place, in each sift-down operation. The difference is the same as the difference between bubble sort and insertion sort; instead of adjusting K ~= lg(N) elements with K-1 swaps (and 3*K-3 moves in all), K+1 moves will suffice. The Heapsort example in Knuth "Art of Programming" Vol II uses insertion to do the sift-down, rather than exchanging.

In the exercises after the discussion of Heapsort, Knuth vol. II also mentions "bottom-up" heap insertion (doubtless he explains it much better than I can, but here goes...):

While it is common to implement a binary heap as outlined in the article, reinsertion into a heap (replacing the value at the top of the heap with another, while preserving the heap property) can be done with rather fewer comparisons, on average. Instead of comparing the replacement value with elements in the array "on the way down", we assume all the elements will have to be moved (saving a comparison at each level of the heap), insert the element at the bottom of the heap, and then search "back up the way we came down" for the place it belongs ("spending", on average, about two additional comparisons and one additional move). Instead of

slightly less than 2 * lg(N) comparisons and slightly less than lg(N) moves,

bottom-up reinsertion requires

slightly more than lg(N) comparisons and slightly more than lg(N) moves. Using inserts rather than exchanges to adjust the heap, and implementing bottom-up reinsertion, Heapsort (see the Heapsort article) can compete with Quicksort (unless locality of reference is a major factor, or if parallel processing hardware is available) particularly if the compiler or interpreter is poor. If the compiler is good Quicksort still wins comfortably.

james_barbetti@yahoo.com
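
A rough Python sketch of the bottom-up reinsertion described above, as I read it (0-based indexing and names are illustrative, not Knuth's code): the larger child is moved up at every level without comparing it against the replacement value, and the replacement is then sifted back up from the bottom.

# Sketch: replace the root of a non-empty array-backed max-heap with `value`,
# using moves (a "hole") rather than swaps, and a bottom-up search for the
# final position of `value`.
def replace_root(heap, value):
    n, hole = len(heap), 0
    child = 2 * hole + 1
    while child < n:                      # one comparison per level: pick the larger child
        if child + 1 < n and heap[child + 1] > heap[child]:
            child += 1
        heap[hole] = heap[child]          # move the larger child up into the hole
        hole = child
        child = 2 * hole + 1
    parent = (hole - 1) // 2              # now sift `value` back up from the bottom
    while hole > 0 and heap[parent] < value:
        heap[hole] = heap[parent]
        hole = parent
        parent = (hole - 1) // 2
    heap[hole] = value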

Are you talking about insertions or deletions? You seem to be describing a deletion algorithm, but you keep talking about "reinsertions", and I'm really not sure what you mean by that. If you are describing an improved deletion algorithm, I don't think that you'd see much improvement -- it looks to me that you'll only save yourself one swap when acting on a heap in an array. I suppose what you describe might help if you used a simple binary tree to represent the heap, but it seems to me that you could easily end up with an unbalanced tree with your algorithm. And, of course, a heap implemented with binary trees isn't of any help when writing heapsort. Could you try to explain the algorithm better, or provide a link?--67.71.21.145 17:21, 18 Nov 2004 (UTC)
Yes, I'm having trouble understanding this method as well. I think I might have a handle on how it works; let me see if I have it right (I'm using "node" loosely, not necessarily designating a separate data object linked by pointers/references):
  1. We remove the node at the top of the heap, as in the traditional heap deletion procedure.
  2. We remove the last node of the heap, as in the traditional heap deletion procedure.
  3. Instead of moving this node into the place vacated by the just-removed node, and calling heap-down (the traditional implementation), we trade the last node of the heap for the node that was, until just now, that node's parent, and call heap-down on it.
  4. We still have a node that was removed from the heap, but now it is a different node. Again, we trade this node for the node that was, until just now, its parent, and call heap-down on it. If the "parent" is in fact the empty space where the top value was just removed, we place the node in that top space, restoring the shape property, and call heap-down one last time.
Is this essentially correct? -- Antaeus Feldspar 30 June 2005 00:56 (UTC)

"It can be seen as a binary tree with the additional constraint that each node is larger than all its children (for a max-heap) or smaller than all its children (for a min-heap)" - I believe heaps must also satisfy the property that they are complete trees. That is the lowest level of the tree must be filled from left to right -ToasterStrudel@comcast.net

k-ary heap

Can you guys make some pages for "k-ary heap"? I like this "binary heap" page, but I also wish to see some information for "k-ary heap". Thanks, everyone..

Don't want to be offensive, but the ASCII art sketch kinda sucks :) --84.57.224.246 13:12, 8 November 2005 (UTC)[reply]

a faster method of deleting the root

I wanted to edit the page to add this, but I don't have an svg editor.

A faster method of deleting the root is to remove it and promote its larger (max-heap) or smaller (min-heap) child. The space occupied by that child is now free, so promote one of its children, and continue recursively until the free space is pushed down to the bottom level. Move the last element of the heap into this new free position, and then percolate it up.

Here is an example (requiring fixed width font): initial heap:

        15
   11         8
 3    4     5

remove root:

        __
   11         8
 3    4     5

11 > 8 so move 11 to the root:

        11
   __         8
 3    4     5

4 > 3 so move 4 up:

        11
   4          8
 3   __     5

We have reached the bottom level. Move the last element (5) into the blank space.

        11
   4          8
 3   5     

5 > 4 so swap 5 with its parent:

        11
   5          8
 3   4     

5 < 11 so stop.

Analysis: The algorithm which replaces the root with the last leaf requires two comparisons per level, to an average depth of log_2(n). Because the element is a leaf, it will be pushed all the way back down to the bottom on average. Total cost: 2*log_2(n) comparisons and log_2(n) swaps.

The algorithm above does one comparison per level to a guaranteed depth of log_2(n), followed by one comparison per level for an average of k~=1.6 levels. Because the last element was a leaf, it does not move up the heap very far on average. Total cost: log_2(n) + k comparisons and log_2(n)+k swaps, where k~=1.6. It is twice as fast.

Also, over 90% of the time spent on heap operations goes into deleting the root (insertion is O(1) on average), so the better algorithm really will make your heaps almost twice as fast.

Another improvement: Perhaps some budding computer science student would be willing to write this up. For array based algorithms, it's really better to think in terms of copies instead of swaps. A swap requires 3 copies, but all of the algorithms above can be implemented using the same number of copies as is quoted for swaps.

Here is the example above, written as an array, with copies not swaps.

initial heap: 15 11 8 3 4 5

We have copied the root (15) and now we want to clobber it. Copy the larger child into the root: 11 11 8 3 4 5

The second 11 is really empty space. Clobber it with its larger child: 11 4 8 3 4 5

The second 4 is really empty space. We need to insert 5, but we want to delay copying it until we know its final position. To determine that, compare 5 with the parent of the "free" spot (the second 4). The parent is the first 4. 5 > 4, so clobber the free space with 4: 11 4 8 3 4 5

Compare 5 with the parent of the free space again. 11 > 5 so we stop and copy 5 into the free space: 11 5 8 3 4 5

The last 5 is really free space, but there's no point in overwriting it. It will be clobbered the next time something is added to the heap.

I know it looks inefficient (or at least, not better) on this small example, but recall the argument above. This method does one comparison per level plus one or two extra comparisons on average, and the same number of copies. On heaps with more than 3 levels it will be faster.

I hereby release this post to the public domain. I would be delighted if someone were to incorporate any of it into the article. Thank you Wikipedia people for an invaluable resource. —The preceding unsigned comment was added by Roman pearce (talkcontribs) 04:07, 19 December 2006 (UTC).[reply]
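
A minimal Python sketch of the deletion method walked through above, using copies rather than swaps (illustrative code, assuming a non-empty max-heap stored in a Python list with 0-based indexing):

# Delete the root: push the hole down by copying the larger child up at each
# level, then sift the former last element up into its final position.
def delete_root(heap):
    top = heap[0]                      # value to return
    last = heap.pop()                  # former last element, to be re-placed
    n = len(heap)
    if n == 0:
        return top
    hole, child = 0, 1
    while child < n:
        if child + 1 < n and heap[child + 1] > heap[child]:
            child += 1
        heap[hole] = heap[child]       # copy the larger child into the hole
        hole = child
        child = 2 * hole + 1
    parent = (hole - 1) // 2           # compare `last` against parents of the hole
    while hole > 0 and heap[parent] < last:
        heap[hole] = heap[parent]
        hole = parent
        parent = (hole - 1) // 2
    heap[hole] = last                  # single final copy of the delayed element
    return top

Running it on the example above (heap = [15, 11, 8, 3, 4, 5]) returns 15 and leaves heap as [11, 5, 8, 3, 4], matching the final array in the walkthrough.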

  • Unfortunately the algorithm above does not seem to work better than the usual approach of percolating the former last leaf down from the root. A profiler shows that the cost of the comparison against the node being percolated down is negligible, while descending the tree, selecting the "higher priority" child and "swapping" nodes are the big deal. Thus, traversing the tree all the way down to leaves and percolating the former last leaf up actually reduces the performance. This was on a records-and-references implementation tested on 10, 100, 1000 and 10000 nodes. SalvoIsaja (talk) 16:51, 18 October 2009 (UTC) Edit: I forgot to mention that for larger data sets (depends on the payload size, say 1000 or 10000 nodes and above) the effect of memory caching dominates even the log2(n) theoretical running time. SalvoIsaja (talk) 16:57, 18 October 2009 (UTC)[reply]

can use array-like structures when number of items is not known in advance

quote:---
This approach is particularly useful in the heapsort algorithm, where it allows the space in the input array to be reused to store the heap. However it requires allocating the array before filling it, which makes this method not that useful in priority queues implementation, where the number of tasks (heap elements) is not necessarily known in advance.
---:endquote
True for true arrays, but it's easy to have a 'growing' array simply by adding a redirection layer when reading and writing, and a redirection + test for more-space-required (plus, if more space required, actually alloc'ing that space).
Granted it's not an array any more, exactly, but it still resembles an array programmatically (even if it has to be addressed through functions/methods rather than [] syntax) and is very fast. My point is not to elaborate on how to get around the problem of fixed-size arrays, just that the problem quoted isn't a problem and the mention of a 'limitation' should be removed or amended.
Thoughts? —Preceding unsigned comment added by Water pepper (talkcontribs) 09:00, 7 January 2008 (UTC)[reply]
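
A toy Python sketch of the "growing array" idea described above (purely illustrative: Python lists already grow automatically, so the explicit capacity handling here only serves to show the redirection-plus-doubling scheme):

# A minimal growable array: reads and writes go through a thin redirection
# layer, and appends double the backing store when it runs out of space,
# giving O(1) amortized growth.
class GrowingArray:
    def __init__(self, capacity=16):
        self._data = [None] * capacity           # fixed-size backing store
        self._size = 0

    def __getitem__(self, i):                    # redirected read
        return self._data[i]

    def __setitem__(self, i, value):             # redirected write
        self._data[i] = value

    def __len__(self):
        return self._size

    def append(self, value):
        if self._size == len(self._data):        # more space required
            self._data.extend([None] * len(self._data))   # double the capacity
        self._data[self._size] = value
        self._size += 1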

Of course you can use a dynamic pseudo-array, simulated by an additional software layer. But in this case you must either re-allocate space for the growing array and copy its contents, each time you need to extend it (possibly many times), OR you need a dynamic structure (some lists, trees, etc.) and search for items to be compared or swapped, every time you simulate index addressing. Both ways impose an unacceptable overhead to the sorting routine. In fact there are dynamic structures (like lists or trees) and algorithms tailored to them, which perform as well as heapsort with arrays. So, if you need dynamic data size, use dynamic structures and appropriate algorithms. Of course it is always (well, almost always...) possible to force some algorithm onto an inappropriate data structure — but that is not a good programming practice. Don't force a specific algorithm just because you know it! --CiaPan (talk) 14:19, 7 January 2008 (UTC), some typos corrected 06:34, 8 January 2008 (UTC)[reply]
Or you dangle a new heap off an element in the heap at the cost of an extra test for each operation (in any language with objects as pointers (such as python or javascript) the cost is negligible). In my experience doing this with small blocks (say 64 entries) outperforms any purely tree-based approach (such as fib trees) (perhaps for cache reasons). It also makes it trivial to merge two heaps. --Jaded-view (talk) 20:32, 27 September 2008 (UTC)[reply]
Note that a records-and-references implementation is not much worse than an array-based implementation. I've made a C++ implementation based on nodes with pointers to left and right children (plus the level-order node index, to allow arbitrary "erase element" and "update element priority" operations), while resolving the tree algorithmically using a parent stack. Compared to my C++ implementation based on a fixed-capacity array, the "pure tree" implementation takes almost 200% more than the array implementation for pushes (but we talk of 0.03us vs. 0.09us, approx constant with the number of elements, as expected), and about 30% more than the array implementation for popping the root (0.10us to 0.44us vs. 0.14us to 0.55us for a node count ranging from 10 to 10000). As I've written in another comment, above 1000 or 10000 nodes (depending on the payload size) the effect of cache misses dominates in both implementations. For the record, a sorted doubly-linked list implementation has comparable running time up to about 50 elements, skyrocketing for larger data sets. This was on an AMD Sempron 3200+ PC using GCC 4.3 on Debian Testing. All in all, considering that a dynamically resizable array involves copying its content on resize, I think the flexibility of a records-and-references implementation is worth the little extra time for data sets of arbitrary sizes. SalvoIsaja (talk) 17:23, 18 October 2009 (UTC)[reply]

building a heap--wtf?

Are you sure it can be done in O(n)? Cuz that means you can heapsort in O(n), which of course is impossible. I don't see any references, so someone who understands math symbols should check on that section. —Preceding unsigned comment added by 20.132.68.146 (talk) 15:47, 27 August 2009 (UTC)[reply]

I was wondering the same thing. If the first step is to build a binary tree, isn't that already O(n log n)? Mr.Lobotomy (talk) 20:09, 6 September 2009 (UTC)[reply]

I'm not sure what 'it' means in 'it can be done'. If you, 20.132.68.146, mean 'building a heap with a given set of n nodes (values)', as the section's title says, then yes, _it_ can be done in linear time O(n). On the other hand if you mean heapsorting a given sequence of n values, then of course it can't be done in O(n), because the second phase of the sort algorithm performs n 'delete root' operations, half of which may cost up to log(n)−1 swaps. --CiaPan (talk) 20:34, 12 September 2009 (UTC)[reply]
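
For concreteness, a minimal Python sketch of the linear-time construction being discussed (illustrative code): every internal node is sifted down, starting from the last one, and the O(n) bound comes from the fact that most nodes sit near the bottom and therefore sift down only a few levels.

def build_max_heap(a):
    """Rearrange the list `a` into a max-heap in place, in O(n) time."""
    n = len(a)
    for i in range(n // 2 - 1, -1, -1):      # last internal node down to the root
        j = i
        while True:                          # sift a[j] down
            left, right, big = 2 * j + 1, 2 * j + 2, j
            if left < n and a[left] > a[big]:
                big = left
            if right < n and a[right] > a[big]:
                big = right
            if big == j:
                break
            a[j], a[big] = a[big], a[j]
            j = big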

Typo

"sift-up" ? Possibly "shift-up" —the preceding comment added 193.91.83.251 (talk) 19:39, 6 October 2009.

Nope. See sift in Wiktionary. --CiaPan (talk) 14:30, 21 October 2009 (UTC)[reply]

Derivation of parent/child index formulas

This whole section really seems out of place. Even if a mathematical justification that the formulas define a tree is appropriate in this article, I would think something simple and direct would be more appropriate than what's there. (e.g. observing that 'parent' and ('left child' and 'right child') are inverses in an appropriate sense, and maybe stating which range of indexes correspond to which levels of the tree) Anyone else agree? Hurkyl (talk) 13:49, 26 February 2012 (UTC)[reply]

Totally agree. Actually, the derivation of this relation can be done much more compactly. Observe that the index of the first node in layer k (layers being numbered starting at zero) is 2^k-1. Then observe that if i is the position of a node N within its layer (i=0,...,2^k-1) then the position of the left child of N within its layer is 2i. This is because all the i nodes that precede node N in layer k give rise to two children that precede the left child of N. Hence, the index of the left child is 2^{k+1}-1+2i = 2(2^k-1+i)+1 = 2a+1, where a is the index of node N. Csaba Szepesvari (talk) 03:37, 28 January 2016 (UTC)[reply]
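
For concreteness, the relations in question written as 0-based index functions, with a brute-force check of the derivation above (illustrative code):

# 0-based array indexing: node a has children 2a+1 and 2a+2, and parent (a-1)//2.
def left(a): return 2 * a + 1
def right(a): return 2 * a + 2
def parent(a): return (a - 1) // 2

# Sanity checks: the child maps invert `parent`, and layer k (k = 0, 1, 2, ...)
# occupies indices 2**k - 1 .. 2**(k+1) - 2, whose parents all lie in layer k-1.
for a in range(10000):
    assert parent(left(a)) == a and parent(right(a)) == a
for k in range(1, 12):
    layer_k = range(2**k - 1, 2**(k + 1) - 1)
    layer_k_minus_1 = range(2**(k - 1) - 1, 2**k - 1)
    assert all(parent(i) in layer_k_minus_1 for i in layer_k)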

Weak proof for O(1) average-case insertion

I edited the average to O(1) because of the article http://i.stanford.edu/pub/cstr/reports/cs/tr/74/460/CS-TR-74-460.pdf but it was undone by an anonymous user 24.54.206.152. I am redoing it and linking to the article as a reference right on the complexity summary to prevent it from being undone. Please show that the article is wrong or does not apply before undoing. — Preceding unsigned comment added by Ciro.santilli (talkcontribs) 07:21, 10 May 2015 (UTC)[reply]

The insertion section currently claims that the average insertion cost is O(1). The proof for this is very weak. It assumes a lot about the distribution of values as they are inserted in an arbitrary order into the heap. It also conflicts with the runtime complexity summary given at the top of the page. If this weak proof is to remain, perhaps a citation of a formal proof can be given? — Preceding unsigned comment added by Arakageeta (talkcontribs) 18:03, 27 March 2012 (UTC)[reply]

I am skeptical if O(1) is even possible for insertion. Using the algorithm presented, if we assume all values are from a bounded interval (e.g. 1 to 100) and we add n random integers, for any given single insertion the programme has to visit on average about ½h nodes, where h is the height of the tree at the time of the insertion, which in turn averages to log_2(n). Now obviously O(½ log_2 n) = O(log n), not O(1).
This is not explicitly written out in Introduction to Algorithms, but it has the following reader exercise 17.3-3 (2nd ed.) in the chapter on amortized analysis (note: Extract-Min means removing the root and re-heapifying):

Consider an ordinary min-heap data structure with n elements that supports the instructions Insert and Extract-Min in O(lg n) worst-case time. Give a potential function Φ such that the amortized cost of Insert is O(lg n) and the amortized cost of Extract-Min is O(1), and show that it works.

It would indeed be peculiar if the writers had put in an exercise that asks for a solution with non-optimal parameters. However, root deletion (Extract-Min) can (apparently) be shown to be constant-time with amortized analysis if first given a sequence of Inserts (obviously at least one earlier Insert is required for a successful Extract-Min, while an Insert even into an empty heap is possible...)
Could it be that someone along the way has gotten Extract-Min and Insert mixed up? --hydrox (talk) 02:11, 9 April 2012 (UTC)[reply]
Looking back now, there's a clear fallacy in my reasoning: the algorithm does not have to visit ½h levels on average – the values decrease quickly with every level it ascends (in a min-heap). It would in fact seem that most of the time (~½) the inserted node can be left on the bottom level. There's a proper mathematical proof referenced by @Cbcomp: below. --hydrox (talk) 06:25, 1 February 2014 (UTC)[reply]

Recently, an unidentified user (63.237.229.130) updated the Binary Heap page to again show that the Insertion method is O(1) on average. His/her reasoning came from this web page, which cites its source as the Wikipedia page for Binary Heap - in other words, the reasoning was circular and self-referential!

This past semester, I took a data structures course in which the class further explored the idea of insertion into a binary heap being O(1) on average. After some research and digging, we came across a paper by Thomas Porter and Istvan Simon, published at Stanford University in 1974, proving this very point.[1] They used a mathematical and formal approach by looking at a function for the average number of levels that a node inserted into an n-heap (a binary heap is an n-heap where n = 2) moves up during the insertion process and found that this function is upper-bounded by a constant value (1.6067). I do not profess to understand the whole of the mathematical proofs in the paper (it is cited above if you would like to look for yourself); however, I would say that this is definitive proof of the average complexity for binary heap insertion being O(1).

Furthermore, a member of my class created a program that simply inserted 10000 elements into a binary heap and plotted the number of operations that each insertion took. The graph he provided showed that the number of operations hovered around 2.5 for every single insertion made, which is very close to the upper bound of 2.6067 (there is one more operation than the number of levels an inserted node moves up since a final comparison must be done to make sure the node is in the correct spot). This is further empirical evidence of a constant average complexity for binary heap insertion.

--- Cbcomp (talk) 17:48, 31 January 2014 (UTC)[reply]

I've removed the claim, since it isn't quite as general as it is presented. Porter and Simon, as pointed out by Mehlhorn and Tsakalidis, "analyzed the average cost of inserting a random element into a random heap in terms of exchanges. They proved that this average is bounded by the constant 1.61. Their proof does not generalize to sequences of insertions since random insertions into random heaps do not create random heaps." QVVERTYVS (hm?) 14:08, 28 January 2016 (UTC)[reply]

I think it's widely recognized that binary heap insertion is on average O(1), worst case O(ln(N)), where the worst case is inserting a new min value. The proof can be explained as follows: 50% of all the nodes are leaves, 25% are in the level above the leaves, 12.5% are 2 levels above the leaves, etc. So when you insert a random element into a heap built from inserting random elements, there is a 50% chance it will end up being a leaf (requiring only 1 compare), a 25% chance it will be a level above a leaf (requiring 2 compares), a 12.5% chance it will be 2 levels above a leaf (requiring 3 compares), etc. When you sum these up, you find any random insert requires on average 2.6067 compares, which is independent of the number of nodes in the heap. This is O(1) on average. DonovanBaarda (talk) 00:41, 20 August 2018 (UTC)[reply]
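
A small Python sketch along the lines of the experiment mentioned above (my own illustrative code, not taken from the 1974 paper): count how many comparisons each insertion of a random key into a min-heap makes, and check that the average does not grow with the heap size.

import random

def insert_count(heap, value):
    """Insert `value` into a min-heap stored in a list; return the number of comparisons."""
    heap.append(value)
    i, comparisons = len(heap) - 1, 0
    while i > 0:
        parent = (i - 1) // 2
        comparisons += 1
        if heap[parent] <= heap[i]:                  # already in place
            break
        heap[i], heap[parent] = heap[parent], heap[i]
        i = parent
    return comparisons

heap, total, n = [], 0, 100000
for _ in range(n):
    total += insert_count(heap, random.random())
print(total / n)   # stays a small constant as n grows, which is the O(1)-on-average claim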

  1. ^ Thomas Porter; Istvan Simon. "Random Insertion into a Priority Queue Structure" (PDF). Stanford University Reports. Stanford University. p. 13. Retrieved 31 January 2014.

Building a tree

How can the number of nodes at height be ? The term is never greater than 1 for any level/height (example: , but there are 2 nodes at height 2).

188.103.194.161 (talk) 20:29, 28 May 2012 (UTC)[reply]

——

I think it’s wrong, too.

Example: (assuming we start counting at )

the number of nodes at height is (you can easily draw this)

I figured out it must be , but I’m too busy to prove it right now. Can someone verify?

Also, why is the upper bound of the sum written as ? Since the height of the heap is , it should be , shouldn’t it?

-- Lilalas (talk) 21:07, 5 January 2017 (UTC)[reply]

I have now implemented some changes correcting this. Lilalas (talk) 14:22, 6 March 2017 (UTC)[reply]
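
For reference, the bound usually cited for this analysis (I cannot tell exactly which formula the older revision used), with heights counted from the leaves (h = 0) up to the root, is:

\#\{\text{nodes of height } h\} \;\le\; \left\lceil \frac{n}{2^{h+1}} \right\rceil,
\qquad
\sum_{h=0}^{\lfloor \log_2 n \rfloor} \left\lceil \frac{n}{2^{h+1}} \right\rceil O(h)
\;=\; O\!\left( n \sum_{h=0}^{\infty} \frac{h}{2^{h}} \right) \;=\; O(n).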

Search Time

Why is search time complexity listed? Correct me if I'm wrong, but I did not believe heaps use a search operation at all. What would it be used for? Heapsort doesn't use it. Inserting won't use it (that's O(lg n) w.c.). Etc. Why is the time of an operation that, I thought, is irrelevant to a data structure listed? Furthermore, the heap overview page does not list searching as an operation for heaps, nor does it list any runtimes in this table (http://en.wikipedia.org/wiki/Heap_(data_structure)#Comparison_of_theoretic_bounds_for_variants). I'm going to remove it, as I believe it to be irrelevant information. Accurate, yes, but still an irrelevant operation for this data structure. Unless someone can correct me? Also there is a problem with the gadget used for listing the time complexities -- it does not allow for additional listings such as extract min time, decrease key time, etc. How can we fix this? Thanks 68.49.45.5 (talk) 05:51, 18 August 2012 (UTC)[reply]

possible error

min heaps used for priority queues? not max heaps? --195.7.8.195 (talk) 6 February 2013‎

No, not max heaps. Usually the higher priority is represented with lower numeric value, so it's min-heap which pushes high-priority items to the front of a queue. --CiaPan (talk) 00:25, 7 February 2013 (UTC)[reply]

heap length?

The Max-Heapify pseudo-code doesn't make any sense without an explanation of what the mysterious heap_length is. There's no concept of a heap length introduced earlier in the article. 208.54.40.174 (talk) 18:12, 14 February 2013 (UTC)[reply]

Thanks for noticing. Fixed. --hydrox (talk) 18:26, 14 February 2013 (UTC)[reply]

Is the remove pseudo-code wrong?

Nevermind, it is correct. Fmcoelho (talk) 22:37, 22 April 2013 (UTC)[reply]

Unnecessary logic in delete?

Swap with its smaller child in a min-heap and its larger child in a max-heap. <-- is this really necessary? Why can't we just always replace the left node, for example? If you read under Priority Queue at http://community.topcoder.com/tc?module=Static&d1=tutorials&d2=dataStructures, they say "If the new root node is larger than its left child, then the root is swapped with its left child, in order to maintain the property that the root is always less than its children. This continues as far as necessary down the left branch.". Which is wrong? — Preceding unsigned comment added by 62.20.115.9 (talk) 19:53, 4 November 2013 (UTC)[reply]

Community.topcoder is wrong. Imagine a root with value 7 and its children 5 and 3:
         7
        / \
       5   3
The root is greater than its children. This violates the min-heap property and needs fixing.
The rule from the Community.topcoder article says 'if the root is larger than its left child, it gets swapped with the left child', yes? Let's do that:
         5
        / \
       7   3
And we got the root greater than its right child, which still violates the heap property.
To get the root smaller than both its children you need to put the smallest value of the three to the root, so you must select the smaller of the children to swap with a root. --CiaPan (talk) 06:20, 5 November 2013 (UTC)[reply]
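
A minimal Python sketch of the rule CiaPan describes, for a min-heap stored in a list with 0-based indexing (illustrative code, not the article's pseudocode):

def sift_down(heap, i):
    # Restore the min-heap property below index i by repeatedly swapping
    # with the *smaller* child, not blindly with the left one.
    n = len(heap)
    while True:
        left, right, smallest = 2 * i + 1, 2 * i + 2, i
        if left < n and heap[left] < heap[smallest]:
            smallest = left
        if right < n and heap[right] < heap[smallest]:
            smallest = right
        if smallest == i:
            break
        heap[i], heap[smallest] = heap[smallest], heap[i]
        i = smallest

# CiaPan's example: root 7 with children 5 and 3.
h = [7, 5, 3]
sift_down(h, 0)
print(h)   # [3, 5, 7]: swapping with the smaller child (3) fixes the heap in one step
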
Thanks CiaPan, that makes sense. Should have thought of making a simple example instead of just lazily asking, sorry. Will get in touch with the author of the TopCoder article to have them correct it. — Preceding unsigned comment added by 62.20.115.9 (talk) 18:46, 5 November 2013 (UTC)[reply]

I see Topcoder's article is fixed now. --CiaPan (talk) 09:31, 26 August 2014 (UTC)[reply]

And now I can't see it at all – seems unavailable for not logged-in and I do not have an account. --CiaPan (talk) 06:57, 28 January 2016 (UTC)[reply]

Searching heaps

There was already a comment here regarding searching. The author argued that it does not make sense to list search time complexity for heaps since no search operation is defined for them.

I consider this a mistake. The thing is: The heap is a collection, and collections can be searched. Why should the heap be an exception to this rule?

Of course, the heap is not used for fast searching. But that has nothing to do with the question of whether there is a search operation defined or whether it makes sense to list its search time complexity. For example, the linked list has a search time complexity of O(n). Obviously, it is not meant for searching and will not be used for that (in general). But does that mean that there is no search operation defined on linked lists or that the WP user does not want to know its search time complexity? Obviously not.

So, in my opinion articles should always state the search time complexity for collection data structures, even if the respective collection is not meant for searching. 134.130.224.143 (talk) 08:04, 25 August 2014 (UTC)[reply]

Assignment operator

@Nbro: is the use of ← for assignment really so strange that you feel you need to explain it in a comment? If so, we'd better find a new notation, like :=. I find the comment a bit distracting.

(Related discussion.) QVVERTYVS (hm?) 20:45, 11 August 2015 (UTC)[reply]