Implement a queue in which push_rear(), pop_front() and get_min() are all constant time operations
You can implement a stack with O(1) pop(), push() and get_min(): just store the current minimum together with each element. So, for example, the stack [4, 2, 5, 1] (1 on top) becomes [(4, 4), (2, 2), (5, 2), (1, 1)].
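A minimal Python sketch of such a min-stack (the class and method names here are my own; the answer only describes the idea):

```python
class MinStack:
    """Stack of (value, running minimum) pairs, so that push, pop
    and get_min are all O(1)."""

    def __init__(self):
        self._items = []  # (value, min of the stack up to and including it)

    def push(self, value):
        current_min = min(value, self._items[-1][1]) if self._items else value
        self._items.append((value, current_min))

    def pop(self):
        return self._items.pop()[0]

    def get_min(self):
        return self._items[-1][1]
```

Pushing 4, 2, 5, 1 leaves the internal list as [(4, 4), (2, 2), (5, 2), (1, 1)], matching the example above.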
Then you can use two stacks to implement the queue: push to one stack and pop from the other; if the second stack is empty during a pop, move all elements from the first stack to the second one.
E.g., for a pop request, after moving all the elements from the first stack [(4, 4), (2, 2), (5, 2), (1, 1)], the second stack would be [(1, 1), (5, 1), (2, 1), (4, 1)]; now return the top element of the second stack.
To find the minimum element of the queue, look at the minimum stored at the top of each individual min-stack, then take the smaller of those two values. (Of course, there's some extra logic here in case one of the stacks is empty, but that's not too hard to work around.)
It will have O(1) get_min() and push(), and amortized O(1) pop().
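Putting the two ideas together, here's a hedged Python sketch of the whole queue (names like MinQueue and _push are my own; each stack entry carries its running minimum, as described above):

```python
class MinQueue:
    """Queue with O(1) push_rear/get_min and amortized O(1) pop_front,
    built from two stacks whose entries carry a running minimum."""

    def __init__(self):
        self._in = []   # push side: (value, min of this stack) pairs
        self._out = []  # pop side, same representation

    @staticmethod
    def _push(stack, value):
        m = min(value, stack[-1][1]) if stack else value
        stack.append((value, m))

    def push_rear(self, value):
        self._push(self._in, value)

    def pop_front(self):
        if not self._out:
            # Move everything across, rebuilding the running minima.
            while self._in:
                self._push(self._out, self._in.pop()[0])
        return self._out.pop()[0]

    def get_min(self):
        # The queue minimum is the smaller of the two stack minima;
        # skip whichever stack happens to be empty.
        return min(s[-1][1] for s in (self._in, self._out) if s)
```

Each element is moved from the first stack to the second at most once, which is why pop_front is amortized O(1) even though a single call can take O(n).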
Okay - I think I have an answer that gives you all of these operations in amortized O(1), meaning that any one operation could take up to O(n), but any sequence of n operations takes O(n) total time, i.e. O(1) per operation.
The idea is to store your data as a Cartesian tree. This is a binary tree obeying the min-heap property (each node is no bigger than its children) and is ordered in a way such that an inorder traversal of the nodes gives you back the nodes in the same order in which they were added. For example, here's a Cartesian tree for the sequence 2 1 4 3 5
:
  1
 / \
2   3
   / \
  4   5
It is possible to insert an element into a Cartesian tree in O(1) amortized time using the following procedure. Look at the right spine of the tree (the path from the root to the rightmost leaf formed by always walking to the right). Starting at the rightmost node, scan upward along this path until you find the first node smaller than the node you're inserting.
Change that node so that its right child is this new node, then make that node's former right child the left child of the node you just added. For example, suppose that we want to insert another copy of 2 into the above tree. We walk up the right spine past the 5 and the 3, but stop below the 1 because 1 < 2. We then change the tree to look like this:
  1
 / \
2   2
   /
  3
 / \
4   5
Notice that an inorder traversal gives 2 1 4 3 5 2, which is the sequence in which we added the values.
This runs in amortized O(1) because we can create a potential function equal to the number of nodes in the right spine of the tree. The real time required to insert a node is 1 plus the number of spine nodes we consider (call this k). Once we find the place to insert the node, the size of the spine shrinks by k - 1, since each of the k nodes we visited is no longer on the right spine, and the new node is in its place. This gives an amortized cost of 1 + k + (1 - k) = 2 = O(1), for the amortized O(1) insert. As another way of thinking about this, once a node has been moved off the right spine, it's never part of the right spine again, and so we will never have to move it again. Since each of the n nodes can be moved at most once, this means that n insertions can do at most n moves, so the total runtime is at most O(n) for an amortized O(1) per element.
To do a dequeue step, we simply remove the leftmost node from the Cartesian tree. If this node is a leaf, we're done. Otherwise, the node can only have one child (the right child), and so we replace the node with its right child. Provided that we keep track of where the leftmost node is, this step takes O(1) time. However, after removing the leftmost node and replacing it with its right child, we might not know where the new leftmost node is. To fix this, we simply walk down the left spine of the tree starting at the child we just promoted. I claim that this still runs in O(1) amortized time, because each node is visited at most once across all of these passes. To see this, note that once a node has been visited this way, the only way we could ever need to look at it again would be if it were moved from being a child of the leftmost node back onto a left spine below it. But every node visited this way becomes an ancestor of the leftmost node, so this can't happen. Consequently, each node is visited at most once during this process, and the pop runs in amortized O(1).
We can do find-min in O(1) because the Cartesian tree gives us access to the smallest element of the tree for free; it's the root of the tree.
Finally, to see that the nodes come back in the same order in which they were inserted, note that a Cartesian tree always stores its elements so that an inorder traversal visits them in insertion order. Since we always remove the leftmost node at each step, and this is the first element of the inorder traversal, we always get the nodes back in the order in which they were inserted.
In short, we get O(1) amortized push and pop, and O(1) worst-case find-min.
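Here is one way the whole structure might look in Python - a sketch under the assumption that we keep the right spine and the root-to-leftmost path in deques; the class and field names are my own:

```python
from collections import deque


class Node:
    def __init__(self, value):
        self.value = value
        self.left = None   # earlier elements
        self.right = None  # later elements


class CartesianTreeQueue:
    """Min-queue as a Cartesian tree: amortized O(1) push and pop,
    worst-case O(1) find_min (the root is always the minimum)."""

    def __init__(self):
        self.rspine = deque()  # right spine, root first
        self.lpath = deque()   # path from the root down to the leftmost node

    def push(self, value):
        node = Node(value)
        detached = None
        # Walk up the right spine past nodes larger than the new value.
        while self.rspine and self.rspine[-1].value > value:
            detached = self.rspine.pop()
        # The detached chain becomes the new node's left subtree.
        node.left = detached
        if self.rspine:
            self.rspine[-1].right = node
        else:
            # The new node is the new root; the old root (if any)
            # now hangs off its left, so prepend it to the left path.
            self.lpath.appendleft(node)
        self.rspine.append(node)

    def pop(self):
        leftmost = self.lpath.pop()
        child = leftmost.right
        if self.lpath:
            self.lpath[-1].left = child
        else:
            # The leftmost node was the root; its right child takes over.
            self.rspine.popleft()
        # Walk down the left spine from the promoted child to find
        # the new leftmost node, extending the stored path as we go.
        while child is not None:
            self.lpath.append(child)
            child = child.left
        return leftmost.value

    def find_min(self):
        return self.rspine[0].value
```

Pushing 2, 1, 4, 3, 5 builds exactly the first tree pictured above, and repeated pops return 2, 1, 4, 3, 5 in insertion order.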
If I can come up with a worst-case O(1) implementation, I'll definitely post it. This was a great problem; thanks for posting it!
Ok, here is one solution.
First we need a container that provides push_back(), push_front(), pop_back() and pop_front() in O(1). It's easy to implement with an array and two iterators: the first iterator points to the front, the second to the back. Let's call this structure a deque.
Here is pseudo-code:
class MyQueue // Our data structure
{
    deque D;   // We need 2 deque objects
    deque Min;

    push(element) // pushing element to MyQueue
    {
        D.push_back(element);
        while (Min.is_not_empty() and Min.back() > element)
            Min.pop_back();
        Min.push_back(element);
    }

    pop() // popping MyQueue
    {
        if (Min.front() == D.front())
            Min.pop_front();
        D.pop_front();
    }

    min()
    {
        return Min.front();
    }
}
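The pseudo-code above translates almost directly into Python, using collections.deque for both deques (as a small convenience of mine, pop() returns the removed element):

```python
from collections import deque


class MyQueue:
    """Queue with O(1) min(): a second deque keeps candidate minima
    in non-decreasing order, with the current minimum at its front."""

    def __init__(self):
        self._d = deque()    # the actual queue contents
        self._min = deque()  # non-decreasing candidates for the minimum

    def push(self, element):
        self._d.append(element)
        # Drop candidates that can never be the minimum again.
        while self._min and self._min[-1] > element:
            self._min.pop()
        self._min.append(element)

    def pop(self):
        element = self._d.popleft()
        if self._min[0] == element:
            self._min.popleft()
        return element

    def min(self):
        return self._min[0]
```

Since each element is pushed to and popped from the Min deque at most once, push is amortized O(1), while pop and min are worst-case O(1).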
Explanation:
For example, let's push the numbers [12, 5, 10, 7, 11, 19] into our MyQueue:
1) pushing 12
D[12]
Min[12]
2) pushing 5
D[12, 5]
Min[5] // 12 > 5, so 12 removed
3) pushing 10
D[12, 5, 10]
Min[5, 10]
4) pushing 7
D[12, 5, 10, 7]
Min[5, 7]
5) pushing 11
D[12, 5, 10, 7, 11]
Min[5, 7, 11]
6) pushing 19
D[12, 5, 10, 7, 11, 19]
Min[5, 7, 11, 19]
Now let's call pop()
we get
D[5,10,7,11,19]
Min[5,7,11,19]
The minimum is 5
Let's call pop() again
Explanation: pop will remove 5 from D, and it will pop the front element of Min too, because it equals D's front element (5).
D[10,7,11,19]
Min[7,11,19]
And the minimum is 7. :)