As part of an assignment, we have to write a Quicksort algorithm, and then verify its time complexity.

I am trying to time how long it takes to run the quickSort function, but it keeps returning 0.

Here is a snippet of the code

.
.
#include <time.h>
.
.
int main()
{
    clock_t startTime, endTime;
    double stopwatch;
.
.
    startTime = clock();
    quickSort(nums, 0, size-1); //sorts array
    endTime = clock();

    stopwatch = double(endTime - startTime) * 1000.0 / CLOCKS_PER_SEC; //convert clock ticks to milliseconds
    cout << "It took " << stopwatch << " milliseconds to execute quickSort" << "\n";
.
.

Even when n is quite large, it still returns 0. I'm guessing that the function completes so quickly that the measured time gets rounded down to 0.

Is there any way I can time this, in order to verify that it has time complexity of O(n log n) (in the best case)?

I've tried using the difftime(endTime, startTime) function but it gives the exact same result. I've also tried using time(NULL) instead of clock(), but again it yielded the same result.

I think that I might need to run the code 1,000 times (for instance), then divide by 1,000 and get an average, but I'm not sure how I could do this.

Thanks!

All 4 Replies

Use the Boost posix_time microsecond clock:
boost::posix_time::ptime after = boost::posix_time::microsec_clock::local_time();
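
For example, a minimal sketch of that approach (this assumes Boost.Date_Time is installed and linked; std::sort stands in for your quickSort, and the array setup is just an illustration):

#include <boost/date_time/posix_time/posix_time.hpp>
#include <algorithm>
#include <cstdlib>
#include <iostream>
#include <vector>

int main()
{
    std::vector<int> nums(1000000);
    for (std::size_t i = 0; i < nums.size(); ++i)
        nums[i] = std::rand();                  // unsorted input

    boost::posix_time::ptime before = boost::posix_time::microsec_clock::local_time();
    std::sort(nums.begin(), nums.end());        // replace with your quickSort(nums, 0, size-1)
    boost::posix_time::ptime after = boost::posix_time::microsec_clock::local_time();

    boost::posix_time::time_duration elapsed = after - before;
    std::cout << "Sort took " << elapsed.total_microseconds() << " microseconds\n";
    return 0;
}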

These sorts of tests take so little time (nanoseconds) that the clock() function cannot resolve them. Usually what we do is time a loop that calls the function in question many thousands of times. That gives a measurable total, which we can then divide by the number of iterations of the loop to get the time for a single call.
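
Something like this, for instance (only a rough sketch; the commented-out call is a placeholder for your own quickSort):

#include <ctime>
#include <iostream>

int main()
{
    const int nloops = 100000;                  // enough iterations for a measurable total

    std::clock_t start = std::clock();
    for (int i = 0; i < nloops; ++i)
    {
        // quickSort(nums, 0, size-1);          // the call being timed goes here
    }
    std::clock_t end = std::clock();

    double totalSeconds = double(end - start) / CLOCKS_PER_SEC;
    std::cout << "Average per call: "
              << (totalSeconds / nloops) * 1e6 << " microseconds\n";
    return 0;
}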

>> Even when n is quite large, it still returns 0.

How large is large? I imagine if you had a million numbers, you could probably get a timing. A thousand might be too few.

For small arrays, a quicksort will finish in literally nanoseconds on modern systems. Even a high-resolution clock only measures in microseconds, so thousands of iterations would be required just to get a measurable value. Also, since the array is already sorted after the first iteration, subsequent iterations may take even less time, so you would need to repopulate the array after each loop. That means you want an interval timer that you start before the quicksort, stop before you reset the array, and restart before the next quicksort call, so that only the sorting time is accumulated. Another disadvantage of clock() is that it wraps around, and handling that wraparound adds unnecessary complexity to your code.

So, you should be doing something like this:

create timer;
for (int i = 0; i < nloops; i++)
{
    initialize array;
    start timer;
    quicksort(...);
    stop timer;
}
read elapsed time;

I'd suggest that you look at the clock_gettime() function (and clock_getres(), to check the timer's resolution) with the CLOCK_PROCESS_CPUTIME_ID (high-resolution per-process timer) or CLOCK_THREAD_CPUTIME_ID (thread-specific timer) clock type for this purpose. These are POSIX functions found on all Linux systems. I'm not sure about Windows unless you have the POSIX APIs installed.
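
Putting it together, a rough sketch on a POSIX system (the commented-out calls are placeholders for your own array setup and quickSort; on older glibc you may also need to link with -lrt):

#include <time.h>      // clock_gettime, CLOCK_PROCESS_CPUTIME_ID (POSIX)
#include <iostream>

// nanoseconds between two timespec readings
long long diffNs(const timespec &a, const timespec &b)
{
    return (long long)(b.tv_sec - a.tv_sec) * 1000000000LL + (b.tv_nsec - a.tv_nsec);
}

int main()
{
    const int nloops = 10000;
    long long totalNs = 0;
    timespec start, stop;

    for (int i = 0; i < nloops; ++i)
    {
        // initialize the array here so every pass sorts unsorted data

        clock_gettime(CLOCK_PROCESS_CPUTIME_ID, &start);
        // quickSort(nums, 0, size-1);
        clock_gettime(CLOCK_PROCESS_CPUTIME_ID, &stop);

        totalNs += diffNs(start, stop);         // accumulate only the sorting time
    }

    std::cout << "Average per call: " << totalNs / nloops << " nanoseconds\n";
    return 0;
}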
