
trouble with ctime: (clock()-start)/CLOCKS_PER_SEC

Sudo Bash

I am writing a game in which I need to know whether or not a user performs an action in one second or less. I cannot use time() because it only measures whole seconds: if the user starts the action halfway through a second, the result could be off by up to a second.

I am experimenting with clock(). I have gotten clock() to work in other programs to make a delay, but not this way, to measure time:

#include <iostream>
#include <ctime>

using namespace std;

int main()
{
	clock_t start = clock();
	cout << "press enter...";
	cin.ignore();
	clock_t finish = clock();
	double time = (finish-start)/CLOCKS_PER_SEC;
	cout << "you took " << time << " seconds\n";
	return 0;
}

This program prints 0 for time no matter how long I wait. I have tried several things, including eliminating the double time variable and testing the time variable to make sure it is exactly zero. Does anyone see what mistake I am making, or is there something I do not know about clock()?

firstPerson

Bad naming: time is already defined in <ctime>. Also, finish, start, and CLOCKS_PER_SEC are all integer types, so (finish-start)/CLOCKS_PER_SEC is integer division and the fractional result is truncated to 0; cast to a floating-point type before dividing:

#include <iostream>
#include <ctime>

using namespace std;

int main()
{
	clock_t start = clock();
	cout << "press enter...";
	cin.ignore();
	clock_t finish = clock();
	double doneTime = float(finish-start)/CLOCKS_PER_SEC; // the cast forces floating-point division
	cout << "you took " << doneTime << " seconds\n";
	return 0;
}

Salem

http://linux.die.net/man/3/clock
clock() determines (to some approximation) the amount of CPU time used by a program.
Waiting around for I/O typically does not count.

If you want to measure the amount of elapsed time, as by your watch for example, then use the time() functions.
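
You can see the difference with a quick test (a rough sketch; the exact numbers will vary from one system to another):

#include <iostream>
#include <ctime>

using namespace std;

int main()
{
	// busy work: this burns CPU time, so clock() counts it
	clock_t start = clock();
	volatile unsigned long sink = 0;
	for (unsigned long i = 0; i < 100000000UL; ++i)
		sink += i;
	cout << "busy loop: " << double(clock()-start)/CLOCKS_PER_SEC << " CPU seconds\n";

	// blocked on input: almost no CPU time, so clock() barely moves
	start = clock();
	cout << "press enter...";
	cin.ignore();
	cout << "waiting: " << double(clock()-start)/CLOCKS_PER_SEC << " CPU seconds\n";
	return 0;
}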

Sudo Bash

Is there any way to get the time() function to have greater precision? As explained in my question, I need more precision than one second. I have heard that this is implemented in the Boost libraries, but I want to stay away from that as much as possible.
Thanks for your help so far.

griswolf

Consider using gettimeofday(), which reports the time since the epoch as two fields: seconds and microseconds. To compare two readings, subtract the two fields individually, scale one difference so both are in the same unit, and add them; then convert as needed.
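
Roughly like this (an untested sketch; gettimeofday() is POSIX, so Linux/Unix only):

#include <iostream>
#include <sys/time.h>	// POSIX only

using namespace std;

int main()
{
	timeval start, finish;
	gettimeofday(&start, 0);
	cout << "press enter...";
	cin.ignore();
	gettimeofday(&finish, 0);
	// subtract the two fields separately, then combine into seconds
	double elapsed = (finish.tv_sec - start.tv_sec)
	               + (finish.tv_usec - start.tv_usec) / 1000000.0;
	cout << "you took " << elapsed << " seconds\n";
	return 0;
}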

firstPerson

> http://linux.die.net/man/3/clock
> clock() determines (to some approximation) the amount of CPU time used by a program.
> Waiting around for I/O typically does not count.
>
> If you want to measure the amount of elapsed time, as by your watch for example, then use the time() functions.

From that same man page: "The clock() function returns an approximation of processor time used by the program."

Waiting for I/O counts, as it constitutes time used by the processor for waiting.

Sudo Bash

Is gettimeofday() portable to Windows as well? I am using Linux right now, but I will want this game to be portable. It looks like SDL can measure time in milliseconds, so I may end up using that. If gettimeofday() is portable, though, it looks like it may work better.
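
If I go the SDL route, I am picturing something like this (an untested sketch, assuming SDL 1.2's SDL_GetTicks()):

#include <iostream>
#include <SDL/SDL.h>	// SDL 1.2 header path

using namespace std;

int main(int argc, char* argv[])	// SDL replaces main() on some platforms, so use this signature
{
	SDL_Init(SDL_INIT_TIMER);
	Uint32 start = SDL_GetTicks();	// milliseconds since SDL_Init()
	cout << "press enter...";
	cin.ignore();
	Uint32 finish = SDL_GetTicks();
	cout << "you took " << (finish-start)/1000.0 << " seconds\n";
	SDL_Quit();
	return 0;
}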

firstPerson

> Is gettimeofday() portable to Windows as well? I am using Linux right now, but I will want this game to be portable. It looks like SDL can measure time in milliseconds, so I may end up using that. If gettimeofday() is portable, though, it looks like it may work better.

gettimeofday() is POSIX, so it is not natively available on Windows. If you want to support Windows, check out QueryPerformanceCounter; that's what Boost uses when it's available.

EDIT: Oh I see, why not just use Boost?
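
Something along these lines (an untested sketch; see MSDN for the details):

#include <iostream>
#include <windows.h>	// Windows only

using namespace std;

int main()
{
	LARGE_INTEGER freq, start, finish;
	QueryPerformanceFrequency(&freq);	// counter ticks per second
	QueryPerformanceCounter(&start);
	cout << "press enter...";
	cin.ignore();
	QueryPerformanceCounter(&finish);
	double elapsed = double(finish.QuadPart - start.QuadPart) / freq.QuadPart;
	cout << "you took " << elapsed << " seconds\n";
	return 0;
}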

nezachem

> Waiting for I/O counts, as it constitutes time used by the processor for waiting.

Waiting for I/O uses no processor time. The processor is busy running other processes.

firstPerson

> Waiting for I/O uses no processor time. The processor is busy running other processes.

Maybe I'm confused. The clock() function returns an approximation of processor time used by the program. When an I/O operation occurs, the CPU usually hands the job off to an I/O controller and proceeds with other work until it gets interrupted, right? So during that wait period the CPU was busy doing something else, but there is still a wait period. So why shouldn't, for example,

clock_t s = clock();
cin.get();
clock_t e = clock() - s;

'e' be some nonzero span of time? I'm not saying the CPU just sits and waits for the I/O to finish, but there is still that I/O waiting time, even though the CPU does other useful work during it.

griswolf

Because the clock() function returns the time that the processor uses for this task, not the amount of elapsed time. The perceived need, when it was written, was to understand how much CPU time each particular task was using. Elapsed time has only a vague connection to that (elapsed time can never be less than CPU time).

Salem

> but there is still that I/O waiting time, even though the CPU does other useful work during it.
Yes, other useful work: incrementing the clock() of the other processes that are having useful work done.

If you want sub-second precision, then the OP needs to state which OS/compiler is being used.

gettimeofday() has microsecond precision, but the accuracy can vary from one machine to another. For example, the usec field could be stuck at zero while the sec field increments once a second, and that would still be within spec (although not so useful). Do NOT expect that the usec field will always tick once per microsecond.

Even QueryPerformanceCounter, or the "raw" RDTSC instruction, is not without issues.
http://en.wikipedia.org/wiki/Time_Stamp_Counter

Question answered by firstPerson, Salem, griswolf, and 1 other.