For what reason are we taking time complexity into account? For what purpose?

To give us an idea of how the algorithm scales as the amount of data grows.

Because all programmers can't take fewer steps to finish an algorithm.

I'm unable to translate your meaning from that sentence.

OK, so if the time complexity increases, will that algorithm be a flop?

Not necessarily, but an algorithm with poorer time complexity must be used judiciously. For example, you wouldn't use bubble sort for a general sort function, because it has quadratic complexity and will perform badly on larger data sets. That's why any general sorting function in a production-quality library will use an algorithm that's at least O(n log n) in the average case.
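
To make the contrast concrete, here's a minimal sketch in C++ (my own illustration, not from the post above) of a textbook bubble sort next to the standard library's std::sort, which is the kind of O(n log n) routine a production library provides:

#include <algorithm>
#include <vector>

// A textbook bubble sort: O(n^2) comparisons in the average and worst case,
// because every pass may walk nearly the whole vector.
void bubbleSort(std::vector<int>& v) {
    for (std::size_t i = 0; i + 1 < v.size(); ++i)
        for (std::size_t j = 0; j + 1 < v.size() - i; ++j)
            if (v[j] > v[j + 1])
                std::swap(v[j], v[j + 1]);
}

int main() {
    std::vector<int> a = {5, 3, 8, 1, 9, 2};
    bubbleSort(a);                    // fine for a handful of elements

    std::vector<int> b = {5, 3, 8, 1, 9, 2};
    std::sort(b.begin(), b.end());    // O(n log n) on average; what general-purpose code should call
    return 0;
}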

There is an excellent PDF document on Quicksort, freely available. I got it from Googling for Quicksort.

The choice of pivot in Quicksort is crucial. The best choice I have found in many tests is (left + right)/2. It's faster than median-of-three, and it gives better sub-array distribution than choosing the pivot randomly.

At best, Quicksort is O(n log n), which is the best any comparison sort can do. A poor choice of pivot can change that quite a bit, however.

The best improvement you can make to Quicksort is to use insertion sort on the sub-arrays once they are down to a small size. Insertion sort SCREAMS if it has a small amount of data to sort and that data is already partially sorted! The improvement is about 15%, so it's VERY noticeable.

The "small size" for an i7 Intel cpu system, is between 17 and 100, in my tests. Below 17, you don't get the full benefit of using Insertion Sort, and above 100, the benefit of using Insertion Sort, begin to fade.

// For a recursive version of Quicksort, just add this to the top of the Quicksort function:

if(right - left < 50) {
   InsertionSort(array, left, right);
   return;
}
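
For context, here's a minimal sketch of how that cutoff can sit inside a complete recursive Quicksort, using the middle-of-the-range pivot mentioned above (written as left + (right - left)/2, the overflow-safe form of (left + right)/2). The partitioning details are my own illustration, not necessarily the exact code the post is based on:

// Simple insertion sort on array[left..right], inclusive bounds.
void InsertionSort(int array[], int left, int right) {
    for (int i = left + 1; i <= right; ++i) {
        int key = array[i];
        int j = i - 1;
        while (j >= left && array[j] > key) {
            array[j + 1] = array[j];
            --j;
        }
        array[j + 1] = key;
    }
}

// Hybrid Quicksort: middle-element pivot plus the insertion-sort cutoff shown above.
void Quicksort(int array[], int left, int right) {
    if (right - left < 50) {          // small sub-array: let insertion sort finish it
        InsertionSort(array, left, right);
        return;
    }

    int pivot = array[left + (right - left) / 2];
    int i = left;
    int j = right;
    while (i <= j) {                  // partition around the pivot value
        while (array[i] < pivot) ++i;
        while (array[j] > pivot) --j;
        if (i <= j) {
            int tmp = array[i];
            array[i] = array[j];
            array[j] = tmp;
            ++i;
            --j;
        }
    }
    if (left < j)  Quicksort(array, left, j);
    if (i < right) Quicksort(array, i, right);
}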

Quicksort is the fastest general-purpose sorter, and the "horrid" data scenario so frequently mentioned for Quicksort has never happened to me. The choice of (left + right)/2 for the pivot, along with using insertion sort, has undoubtedly helped with that.

Since it uses only a small amount of auxiliary storage for its sorting, it works very well when you have a huge array of data that you need to sort.

For sorting whole files that are not in memory, I use merge sort.
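
The heart of a file-based merge sort is the merge step itself. Here's a minimal sketch (the function name and the one-integer-per-line file format are my own assumptions, not taken from the post) that merges two already-sorted text files into a third:

#include <fstream>
#include <string>

// Merge two already-sorted text files (one integer per line) into one sorted output file.
void mergeSortedFiles(const std::string& inA, const std::string& inB,
                      const std::string& outPath) {
    std::ifstream a(inA), b(inB);
    std::ofstream out(outPath);

    long x, y;
    bool haveX = static_cast<bool>(a >> x);
    bool haveY = static_cast<bool>(b >> y);

    while (haveX && haveY) {
        if (x <= y) { out << x << '\n'; haveX = static_cast<bool>(a >> x); }
        else        { out << y << '\n'; haveY = static_cast<bool>(b >> y); }
    }
    while (haveX) { out << x << '\n'; haveX = static_cast<bool>(a >> x); }
    while (haveY) { out << y << '\n'; haveY = static_cast<bool>(b >> y); }
}

A full external merge sort would first split the big file into chunks that do fit in memory, sort each chunk (with Quicksort, say), write the sorted chunks out, and then merge them pairwise with a routine like this until one file remains.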

P.S. As the number of items to be sorted increases, the advantage of using Quicksort continues to grow. The efficiency of some other algorithms (merge sort, heap sort, SIS, etc.) may be the same, BUT the locality of the data continues to favor Quicksort: the data it needs is more likely to be in faster cache memory than with the other sorting algorithms.

If you haven't seen the Quicksort demo with the dancing Hungarians on YouTube, be sure to catch it. It's completely accurate in how Quicksort works and shows the divide-and-conquer method in use.

Myself, I usually use Quicksort for existing arrays that can be sorted in situ, and a head-tail-optimized insertion sort (using a binary search to find the insertion location) when I am adding elements to an empty or already-sorted array, or when I need to sort the elements of an array that must be left untouched. The head-tail optimization sped up inserting into sorted arrays tremendously for large arrays (thousands of elements).
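
As a rough illustration of that head-tail idea (my own sketch, not the poster's actual code): before doing a binary search for the insertion point, check whether the new element belongs at the back or the front of the already-sorted array, since those are the common cases when data arrives mostly in order.

#include <algorithm>
#include <vector>

// Insert value into an already-sorted vector, checking the tail and head first
// and falling back to a binary search (std::lower_bound) for the general case.
void insertSorted(std::vector<int>& v, int value) {
    if (v.empty() || value >= v.back()) {        // tail check: append
        v.push_back(value);
    } else if (value <= v.front()) {             // head check: prepend
        v.insert(v.begin(), value);
    } else {                                     // general case: binary search for the spot
        v.insert(std::lower_bound(v.begin(), v.end(), value), value);
    }
}

The binary search only finds the position quickly; the insert itself still shifts the elements after it, which is usually a fast block move for a plain array.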

If you pick the pivot at random, the running time varies between O(N^2) and O(N log N) for an input of size N.
Picking the middle element as the pivot gives Quicksort O(N log N) behavior, since Quicksort on size N then calls itself twice on sub-arrays of about size N/2, given that the array elements are random in nature.
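
To sketch the reasoning: if every partition splits the array roughly in half, the running time satisfies T(N) = 2*T(N/2) + c*N, where the c*N term is the partitioning pass. That unrolls into about log2(N) levels of c*N work each, i.e. O(N log N). If every pivot lands at an end of the array instead, the recurrence degrades to T(N) = T(N-1) + c*N, which sums to roughly c*N^2/2, i.e. O(N^2).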
