Hope everyone's kind to a newbie :)

I kind of "discovered" (if that's what you call it) this simple quicksort variant just today. I was wondering if anyone had seen it before. I Googled around a bit but no one seems to have described it before. Hmm.

It's based on letting quicksort and mergesort (on arrays) cover for each other's weaknesses. Both algorithms normally require extra space, but a hybrid combining the two can be done in-place. Interesting.


//merge-sorts L using S as an auxiliary array
//the elements of S are preserved (as a multiset) after the sort,
//but they may have been moved around quite a bit

function mergesort(array L, array S)

//actual sort function

function sort(array L)
    if(L.size <= 1) return
    pivot = partition(L, choose_pivot(L))
    array L1 = L.subarray(0, pivot)
    array L2 = L.subarray(pivot, L.size)
    //mergesort the smaller half using the larger half as scratch,
    //so the scratch space never overflows; then recurse on the larger half
    if(L1.size larger than L2.size)
        mergesort(L2, L1)
        sort(L1)
    else
        mergesort(L1, L2)
        sort(L2)
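To make the idea concrete, here's a minimal C++ sketch, under my own assumptions (the names `hybrid_sort`, `merge_sort_with_buffer`, and the Lomuto-style `partition` are mine, not from the post). The key trick is that the merge only ever *swaps* with the scratch region, never overwrites it, so the scratch half's elements survive as a multiset:

```cpp
#include <algorithm>
#include <cassert>
#include <vector>

// Merge-sort L[0..n) using B as scratch. B's elements are only ever
// swapped, never overwritten, so its contents are preserved as a multiset.
// Requires |B| >= n/2 (here the caller passes a whole half, which is plenty).
void merge_sort_with_buffer(int* L, int n, int* B) {
    if (n <= 1) return;
    int half = n / 2;
    merge_sort_with_buffer(L, half, B);
    merge_sort_with_buffer(L + half, n - half, B);
    // Swap the sorted left run into the buffer, then merge back by swapping.
    for (int t = 0; t < half; ++t) std::swap(L[t], B[t]);
    int i = 0, j = 0, k = half;
    while (j < half && k < n) {
        if (B[j] <= L[k]) std::swap(L[i++], B[j++]);
        else              std::swap(L[i++], L[k++]);
    }
    while (j < half) std::swap(L[i++], B[j++]);
    // If the left run ran out first, the right-run tail is already in place.
}

// Lomuto partition with a median-of-three pivot; returns the pivot's
// final index. Everything left of it is smaller, everything right is >=.
int partition(int* L, int n) {
    int mid = n / 2;
    if (L[mid] < L[0])     std::swap(L[mid], L[0]);
    if (L[n - 1] < L[0])   std::swap(L[n - 1], L[0]);
    if (L[n - 1] < L[mid]) std::swap(L[n - 1], L[mid]);
    std::swap(L[mid], L[n - 1]);  // park the median pivot at the end
    int pivot = L[n - 1], p = 0;
    for (int i = 0; i + 1 < n; ++i)
        if (L[i] < pivot) std::swap(L[i], L[p++]);
    std::swap(L[p], L[n - 1]);    // pivot into its final slot
    return p;
}

// The hybrid: partition, merge-sort the smaller side using the larger
// side's storage as scratch, then loop (i.e. tail-call) on the larger side.
void hybrid_sort(int* L, int n) {
    while (n > 1) {
        int p = partition(L, n);
        int left = p, right = n - p - 1;  // pivot at p is already placed
        if (left <= right) {
            merge_sort_with_buffer(L, left, L + p + 1);
            L += p + 1;
            n = right;
        } else {
            merge_sort_with_buffer(L + p + 1, right, L);
            n = left;
        }
    }
}
```

Since the smaller side is always merge-sorted first, the scratch region (the larger side) is always big enough, which is exactly the invariant the pseudocode's comment relies on.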

It has the usual time complexity of quicksort: depending on how you choose the pivot, you get O(n log n) average-case and O(n^2) worst-case. But if mergesort() is performed in a non-recursive manner (i.e. bottom-up style) and the recursive call to sort() is modified to use tail recursion, you end up with a sort algorithm with an O(1) worst-case memory requirement.
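The bottom-up mergesort that makes this O(1)-memory could look like the following sketch (my own code, not the poster's; it assumes the caller-supplied buffer is at least as large as the widest run, and it merges by swapping so the buffer's contents are preserved as a multiset):

```cpp
#include <algorithm>
#include <cassert>
#include <vector>

// Bottom-up (iterative) mergesort of L[0..n) using B as scratch.
// No recursion, so no stack growth; B is only swapped with, never
// overwritten. Requires |B| >= the largest run width, i.e. |B| >= n/2
// rounded up to a power of two -- |B| >= n is certainly enough.
void merge_sort_bottom_up(int* L, int n, int* B) {
    for (int width = 1; width < n; width *= 2) {
        for (int lo = 0; lo + width < n; lo += 2 * width) {
            int mid = lo + width;
            int hi = std::min(lo + 2 * width, n);
            // Swap the left run into the buffer, then swap-merge back.
            for (int t = 0; t < width; ++t) std::swap(L[lo + t], B[t]);
            int i = lo, j = 0, k = mid;
            while (j < width && k < hi) {
                if (B[j] <= L[k]) std::swap(L[i++], B[j++]);
                else              std::swap(L[i++], L[k++]);
            }
            while (j < width) std::swap(L[i++], B[j++]);
        }
    }
}
```

With this plus the tail-recursive sort() loop, the only extra memory is a handful of loop variables, which is where the O(1) worst-case claim comes from.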

It can be optimized in the usual ways: insertion sort for small merge runs, switching to heapsort/combsort when encountering worst-case partitions à la introsort, etc.
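The insertion-sort cutoff is the easiest of those to show; a minimal sketch (the threshold of 16 is an arbitrary choice of mine, real implementations tune it):

```cpp
#include <cassert>
#include <vector>

// Below this size, insertion sort typically beats merging or partitioning
// thanks to its low constant factor. 16 is a placeholder, not a tuned value.
const int CUTOFF = 16;

// Plain insertion sort for a short run L[0..n).
void small_sort(int* L, int n) {
    for (int i = 1; i < n; ++i) {
        int x = L[i], j = i - 1;
        while (j >= 0 && L[j] > x) {
            L[j + 1] = L[j];  // shift larger elements right
            --j;
        }
        L[j + 1] = x;
    }
}
```

In sort()/mergesort(), any run of size at or below CUTOFF would be handed to small_sort instead of being partitioned or merged further.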

If any of you have seen it before (preferably online, I'm kinda strapped for cash for books right now) do let me know. But I'd love to be the one to name this algorithm :D


iamthwee:

I've seen it before on daniweb just now.

I've never seen it before. But, you know, it has no advantage over Quicksort, unless it'd kill you to use the (log n)/(log 2) words of auxiliary memory needed for quicksort.

Heh, it's mostly a curiosity. C++ std::sort runs about twice as fast (what can I say, I'm not very good at coding), but it beats my handwritten optimized quicksort routine by about 20% on random lists. The profiler says the difference is mostly due to the merge function.

I have seen this algorithm before. It is mentioned in some of the Algorithms books by Robert Sedgewick.

Sounds a lot like introsort, but that uses heapsort instead of mergesort...
