hello,
i'm trying to get the time of my sorting function in seconds, and i always get ZERO time.

here is what i'm using:

``````
clock_t t1, t2;
t1 = clock();
clock_t IT;
I.insertion(A, 200000);
t2 = clock();
IT = ((t2 - t1) / CLK_TCK) * 10000000000000000000;
``````

i print the IT and get ZERO.
can someone help me with that issue?

> *10000000000000000000
What's this do - apart from arithmetic overflow?

i was trying to enlarge the value by multiplying it by 1000 first, and then increased it to that huge number .. but i still got zero.
so you can ignore it.

So it's fixed then?

no .. i just told you to ignore that huge number.
i still need to get the time in seconds.
am i using the wrong methods??

Do you get different values in t1 and t2, even before your calculation?

Bear in mind that this will all be done using integer arithmetic, so you're just not going to see something like 0.02 seconds.

yeah i have different values for t1 and t2.
i also tried to cast it to float and print it out, but still the same.

(t2-t1) / CLK_TCK .. does this give seconds? or milliseconds? or something else?


You are likely to find that CLK_TCK is a very small number, i.e. there are very few clock ticks per second. On my computer there are only 200 per second.

So: There are two ways out. (a) Insert lots and lots and lots.
(b) Get a better clock.

The Linux solution is easy (this may work on Windows, but I have no way to test that):

``````
#include <iostream>
#include <sys/time.h>

struct timeval clockInit;
struct timeval clockNext;

gettimeofday(&clockInit, NULL);

// My stuff here
I.insertion(A, 200000);

gettimeofday(&clockNext, NULL);

double tval = clockNext.tv_sec - clockInit.tv_sec +
              (clockNext.tv_usec - clockInit.tv_usec) / 1e6;

std::cout << "Time Taken == " << tval << std::endl;
``````

hope that helps

p.s. If you still get zero, turn off the optimizer to see if you can get some resolution.

p.p.s. I think that your insertion command actually inserts the value 200000.

> i also tried to cast it to float and print out but still the same.
Did you try
double delta = (double)(t2-t1) / CLK_TCK;

> does this give seconds? or milli seconds ? or sth else?
Did you look up CLK_TCK in your manual page?

BTW, I thought it was CLOCKS_PER_SEC

First off I would like to apologize for my previous post.
I mistakenly thought that gettimeofday was more accurate than clock.
It is not. Sorry. Both have similar accuracy.

CLK_TCK comes from the old POSIX-1988 standard but is still included in <ctime>; the POSIX standards since 2000 say that CLOCKS_PER_SEC is required to be 1000000 (1 million).

Second, I have done a little research and found the following. On Linux there is a nanosecond-resolution clock. [It is not actually THAT accurate, but it is tied into the machine tick rate.] However, it seems that it is as good as you are going to get.

On Windows, there are a number of ugly assembler hacks (compiler-dependent) [but they have the advantage of being quicker to execute than the Linux call], and a number of functions (from windows.h) that do the same thing.

Linux solution:

``````
#include <iostream>
#include <time.h>

int main()
{
    int sum(0);
    timespec ts, te;
    clock_gettime(CLOCK_REALTIME, &ts);

    // Your stuff here (my example is about the minimum you can do)
    sum += 1;

    clock_gettime(CLOCK_REALTIME, &te);
    double TVal = 1e9 * (te.tv_sec - ts.tv_sec);
    TVal += te.tv_nsec - ts.tv_nsec;
    std::cout << "Time out (nanosec) == " << TVal << std::endl;
}
``````

Some care is needed. You need to link to librt, i.e. `g++ test.cpp -lrt`.
Note also that you want to keep the optimizer off, since it can actually remove the sum += 1 part (even if you add a use below!!). For your example that isn't a problem, since it is doing real work. However, at nanosecond resolution, calls to clock_gettime are not zero time.