I've followed an example from a website on how to use difftime(), but it's telling me that the multiplication took 0 seconds, which obviously isn't right... can anyone please explain what I'm doing wrong?

Thanks in advance!

// Matrix is my own class (definition not shown here)
#include <ctime>
#include <iostream>
using namespace std;

int main()
{
    time_t t1, t2;
    double dif;

    Matrix A(2);
    Matrix B(2);

    A.SetElement(1,1,1);
    A.SetElement(1,2,2);
    A.SetElement(2,1,3);
    A.SetElement(2,2,4);

    B.SetElement(1,1,5);
    B.SetElement(1,2,6);
    B.SetElement(2,1,7);
    B.SetElement(2,2,8);

    for (int i = 0; i < 5; i++)
    {
        time(&t1);

        A.Multiply(B);

        time(&t2);
        dif = difftime(t2, t1);

        cout << "This took " << dif << " seconds" << endl;
    }

    return 0;
}


All 5 Replies

0 seconds is probably correct. Computers will execute that code in just a few nanoseconds. Call clock() instead of time() because clock() has higher resolution. But that too may result in 0. If it does, then call that Multiply function several hundred times and take the average time.

clock_t t1, t2;
double dif;

for (int i = 0; i < 5; i++)
{
    t1 = clock();

    for (int k = 0; k < 1000; k++)   // note: k++, not i++, or the inner loop never terminates
        A.Multiply(B);

    t2 = clock();
    dif = 1000.0 * (t2 - t1) / CLOCKS_PER_SEC;   // clock() returns ticks; convert to milliseconds

    cout << "This took " << dif << " milliseconds" << endl;
}
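To actually get the per-call average the reply mentions, you can divide the batch time by the number of calls. A minimal sketch, reusing the A, B and the 1000-iteration count from the snippet above:

clock_t t1 = clock();

for (int k = 0; k < 1000; k++)
    A.Multiply(B);

clock_t t2 = clock();

// Total elapsed time in milliseconds, then averaged over the 1000 calls
double totalMs = 1000.0 * (t2 - t1) / CLOCKS_PER_SEC;
cout << "Average per Multiply(): " << totalMs / 1000.0 << " milliseconds" << endl;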

You could skip the function call and just write this:

std::cout << "Time taken == " << static_cast<double>(t2 - t1) / CLOCKS_PER_SEC;

You could wrap the static_cast part in a function.
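For example, a minimal sketch of such a wrapper (the name elapsedSeconds is hypothetical, not from the thread):

#include <ctime>

// Hypothetical helper wrapping the tick-to-seconds conversion suggested above
double elapsedSeconds(clock_t start, clock_t end)
{
    return static_cast<double>(end - start) / CLOCKS_PER_SEC;
}

// Usage: t1 = clock(); /* ...work... */ t2 = clock();
// std::cout << "Time taken == " << elapsedSeconds(t1, t2);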


Skipping the call to the Multiply function would defeat the whole purpose of the program. And difftime() returns seconds, not milliseconds, so the division you posted will not work anyway.

Brilliant, thanks for your suggestion, Ancient Dragon.

7.23.2.1 The clock function

Synopsis
1   #include <time.h>
    clock_t clock(void);

Description
2   The clock function determines the processor time used.

Returns
3   The clock function returns the implementation's best approximation to the processor time used by the program since the beginning of an implementation-defined era related only to the program invocation. To determine the time in seconds, the value returned by the clock function should be divided by the value of the macro CLOCKS_PER_SEC. If the processor time used is not available or its value cannot be represented, the function returns the value (clock_t)-1.
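As a small illustration of the quoted text, here is a minimal sketch that divides the clock() ticks by CLOCKS_PER_SEC and checks for the (clock_t)-1 error value:

#include <ctime>
#include <iostream>

int main()
{
    clock_t start = clock();
    if (start == (clock_t)-1)          // processor time not available
        return 1;

    // ... code to be timed goes here ...

    clock_t end = clock();
    // Per the standard text above: divide by CLOCKS_PER_SEC to get seconds
    std::cout << static_cast<double>(end - start) / CLOCKS_PER_SEC << " seconds\n";
    return 0;
}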

Be wary of confusing accuracy and precision.

The precision implied by CLOCKS_PER_SEC might be 1 µs, but you should not infer that the clock is accurate to 1 µs as well. It is entirely within the standard for the implementation to tick just once a second (internalClock += 1000000;), or to tick with the OS scheduler, say internalClock += 20000; (a 50 Hz tick). In the absence of a high-resolution clock (say QueryPerformanceCounter), you should do as AD suggested and run the code to be timed in a loop until it takes several seconds to run. You'll get a more accurate average run time for each iteration of the loop.

But this time will be the "average in a running system", not the ideal best case when nothing else is happening.
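For reference, a minimal Windows-only sketch using the QueryPerformanceCounter call mentioned above; doWork() is a hypothetical stand-in for the poster's A.Multiply(B):

#include <windows.h>
#include <iostream>

// Stand-in for the work being measured (hypothetical)
void doWork()
{
    volatile int x = 0;
    for (int i = 0; i < 100; ++i)
        x += i;
}

int main()
{
    LARGE_INTEGER freq, start, end;
    QueryPerformanceFrequency(&freq);   // counter ticks per second

    QueryPerformanceCounter(&start);
    for (int k = 0; k < 1000; k++)      // loop so the total is well above the timer resolution
        doWork();
    QueryPerformanceCounter(&end);

    double seconds = static_cast<double>(end.QuadPart - start.QuadPart) / freq.QuadPart;
    std::cout << "Average per call: " << seconds / 1000.0 << " seconds\n";
    return 0;
}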
