Hello....


I am running a C++ program that reads an array of size N from a file. One requirement of this project is that it report the elapsed time of each function, so the function's efficiency can be calculated for a given input.

I am attempting to use the time.h header and two clock_t variables to calculate the difference between the two times, e.g. t2 - t1. When I return this value I always get 0 seconds of run time for every function.

I am looking for some way to output the run time of each function in microseconds, so I can measure fast run times without having to input a large graph. How could I do this?


All 11 Replies

The best way to do that is to call the function many times and take the average time. For example: start the timer, call the function 1,000,000 times, get the end time, then (end - start) / 1000000 = average. Of course you'll have to use floats for all the calculations.

Do you mean a time_t variable as the timer?

time_t t1, t2;

t1 = time(0);

// call the function

t2 = time(0);

Then I display t2 - t1 and it always gives me zero.

I have been told to run the code on Unix, but it still gives me zero.

Can you explain your approach in more detail, because I didn't get it.

What are end and start?

Thanks.....

clock_t t1, t2;

t1 = clock();

// run the function 1,000,000 times

t2 = clock();

float diff = ((float)t2 - (float)t1) / 1000000.0F;

Regrettably, it's impossible to get a time interval with microsecond precision from the standard C and C++ run-time library calls. The best function in this area is clock(), but it yields ticks with the precision of the OS process scheduler's time slice (around 10-15 milliseconds in the best case).

See your OS manuals: as usual, there is a system clock function in the OS API. Be prepared to get unexpectedly long intervals (another process may be scheduled while you wait on I/O or exhaust your time slice).

So Ancient Dragon's approach is more productive (though not as universal as one might wish ;)).

I tried this code but the program hung :)

Do you have other suggestions?

Thanks...

Use fewer iterations of the loop?

It works!

It generated a number, but what exactly does this number represent?

Thanks...

If you divide the difference by CLOCKS_PER_SEC, you'd have the time in seconds for all iterations. Divide that by the number of iterations, and you'd have an approximation of the time of one iteration in seconds.

Thanks for your generous help :)

#include <iostream>
#include <conio.h>   // Windows-only, for getch()
#include <ctime>
using namespace std;

int main()
{
    clock_t t1, t2;

    t1 = clock();
    for (int i = 1; i <= 100; i++)
    {
        cout << " " << i;
        if (i % 10 == 0)
            cout << "\n";
    }
    t2 = clock();

    cout << "before process time: " << t1 << " ... after process time: " << t2;
    getch();
    return 0;
}
commented: I think the guy had his answer 18 months ago... 1. Don't give out code willy-nilly. 2. Check thread dates; don't resurrect dead threads. +0
commented: Don't reply to dead threads. -2

Get the time from the running system, and use Sleep() from windows.h to pause between updates:

#include <iostream>   // iostream.h is pre-standard; use <iostream>
#include <windows.h>  // Windows-only, for Sleep()
using namespace std;

int main()
{
    int hr = 0, min = 0, sec = 0;
    while (true)
    {
        cout << "\r" << hr << ":" << min << ":" << sec << flush;
        Sleep(1000);                          // wait one second between updates
        if (++sec == 60) { sec = 0; ++min; }  // roll seconds into minutes
        if (min == 60)   { min = 0; ++hr; }   // roll minutes into hours
    }
}
