I am writing a game in which I need to know whether or not a user performs an action in one second or less. I cannot use time() because it only measures whole seconds: if the user starts the action halfway through a second, the measurement loses accuracy.

I am experimenting with clock(). I have gotten clock() to work in some programs to create a delay, but not this way, to measure time:

#include <iostream>
#include <ctime>

using namespace std;

int main()
{
	clock_t start = clock();
	cout << "press enter...";
	cin.ignore();
	clock_t finish = clock();
	double time = (finish-start)/CLOCKS_PER_SEC;
	cout << "you took " << time << " seconds\n";
	return 0;
}

This program returns 0 for time seemingly no matter how long I wait. I have tried several things, including eliminating the double time variable entirely and checking that the value is exactly zero. Does anyone see what mistake I am making, or is there something I do not know about clock()?

Bad naming: time() is already declared in <ctime>. Rename the variable, and cast before dividing so the integer division doesn't truncate the result to zero:

#include <iostream>
#include <ctime>

using namespace std;

int main()
{
	clock_t start = clock();
	cout << "press enter...";
	cin.ignore();
	clock_t finish = clock();
	double doneTime = float(finish-start)/CLOCKS_PER_SEC;
	cout << "you took " << doneTime << " seconds\n";
	return 0;
}

http://linux.die.net/man/3/clock
clock() determines (to some approximation) the amount of CPU time used by a program.
Waiting around for I/O typically does not count.

If you want to measure the amount of elapsed time, as by your watch for example, then use the time() functions.
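
For example, the program above rewritten with time() and difftime() measures wall-clock time, though only with one-second resolution:

#include <iostream>
#include <ctime>

using namespace std;

int main()
{
	time_t start = time(0);
	cout << "press enter...";
	cin.ignore();
	time_t finish = time(0);
	// difftime() returns the elapsed wall-clock time in seconds,
	// but time() only ticks once per second.
	cout << "you took " << difftime(finish, start) << " seconds\n";
	return 0;
}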

Is there any way to get greater precision out of the time() functions? I need better than one-second precision, as explained in my question. I have heard that this is implemented in the Boost libraries, but I want to stay away from Boost as much as possible.
Thanks for your help so far.

Consider using gettimeofday(), which reports seconds and microseconds since the epoch. To compare two readings, subtract the seconds and microseconds fields separately, then scale one to the other's units and add the differences; convert as needed.
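
A minimal sketch for POSIX systems, along those lines (timeval and gettimeofday() come from <sys/time.h>):

#include <iostream>
#include <sys/time.h>

using namespace std;

int main()
{
	timeval start, finish;
	gettimeofday(&start, 0);
	cout << "press enter...";
	cin.ignore();
	gettimeofday(&finish, 0);
	// Subtract the two fields separately, then combine:
	// scale the microsecond difference down to seconds before adding.
	double elapsed = (finish.tv_sec - start.tv_sec)
	               + (finish.tv_usec - start.tv_usec) / 1000000.0;
	cout << "you took " << elapsed << " seconds\n";
	return 0;
}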

> clock() determines (to some approximation) the amount of CPU time used by a program.
> Waiting around for I/O typically does not count.

The clock() function returns an approximation of processor time used by the program.

Waiting for I/O counts, as it constitutes time used by the processor for waiting.

Is gettimeofday() portable to Windows as well? I am using Linux right now, but I will want this game to be portable. It looks like SDL can measure time in milliseconds, so I may end up using that. If gettimeofday() is portable, it may work even better.
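
Something along these lines, if I go the SDL route (a sketch assuming the SDL headers are available as <SDL.h>; SDL_GetTicks() returns milliseconds since SDL_Init() as a Uint32):

#include <SDL.h>
#include <iostream>

using namespace std;

int main(int argc, char* argv[])
{
	if (SDL_Init(SDL_INIT_TIMER) != 0)
		return 1;
	Uint32 start = SDL_GetTicks();	// milliseconds since SDL_Init()
	cout << "press enter...";
	cin.ignore();
	Uint32 elapsed = SDL_GetTicks() - start;
	cout << "you took " << elapsed / 1000.0 << " seconds\n";
	SDL_Quit();
	return 0;
}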


If you want to support Windows, check out QueryPerformanceCounter; that's what Boost uses if it's available.

EDIT: Oh I see. Why not just use Boost, though?

> Waiting for I/O counts, as it constitutes time used by the processor for waiting.

Waiting for IO uses no processor time. The processor is busy running other processes.

> Waiting for IO uses no processor time. The processor is busy running other processes.

Maybe I'm confused. The clock() function returns an approximation of processor time used by the program. When an I/O operation occurs, the CPU usually hands the job off to an I/O processor and proceeds with other jobs until it gets interrupted, right? So during that wait period the CPU was busy doing something else, but there is still that wait period. So why shouldn't, for example,

clock_t s = clock();
cin.get();
clock_t e = clock() - s;	// elapsed CPU time, in clock ticks

'e' be some nonzero amount of time? I'm not saying that the CPU just waits for the I/O to finish; I'm saying there is this I/O waiting time, during which the CPU does other useful work.

Because the clock() function returns the time that the processor uses *for this task*, not the amount of elapsed time. The perceived need, when it was written, was to understand how much CPU time each particular task was using. Elapsed time has only a vague connection to that (elapsed time can never be less than CPU time).

> but I'm saying there is this I/O waiting time, during which the CPU does other useful work.
Yes, other useful work: incrementing the clock() of the other processes that are having useful work done.
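
A quick way to see this (a small demo, not from the thread above): a busy loop burns CPU time, so clock() advances, while blocking on input consumes almost none, no matter how long you take.

#include <iostream>
#include <ctime>

using namespace std;

int main()
{
	// Busy-waiting consumes CPU time, so clock() advances...
	clock_t start = clock();
	while (clock() - start < 2 * CLOCKS_PER_SEC)
		;	// spin until ~2 seconds of CPU time have been used
	cout << "busy loop: " << double(clock() - start) / CLOCKS_PER_SEC
	     << " seconds of CPU time\n";

	// ...but blocking on input consumes almost none.
	start = clock();
	cout << "press enter...";
	cin.ignore();
	cout << "waiting: " << double(clock() - start) / CLOCKS_PER_SEC
	     << " seconds of CPU time\n";
	return 0;
}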

If you want sub-second precision, then the OP needs to state which OS/Compiler is being used.

gettimeofday() has microsecond precision, but the accuracy can vary from one machine to another. For example, the usec field could be stuck at zero while the sec field increments once a second, and it would still be within spec (although not very useful). Do NOT expect that the usec field will always tick once per microsecond.

Even QueryPerformanceCounter, or the "raw" RDTSC instruction, is not without issues.
http://en.wikipedia.org/wiki/Time_Stamp_Counter

I also find that clock() does not give the result I want.
So I wrote a simple Timer class to do the time-duration calculation, using QueryPerformanceCounter on Windows and clock_gettime() on Linux.

Just try this: https://github.com/AndsonYe/Timer
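
In outline, such a class might look like this (a rough sketch under those assumptions, not the code behind the link; on older Linux systems clock_gettime() may require linking with -lrt):

#if defined(_WIN32)
#include <windows.h>

// Windows: QueryPerformanceCounter ticks at QueryPerformanceFrequency Hz.
class Timer
{
	LARGE_INTEGER freq_, start_;
public:
	Timer()
	{
		QueryPerformanceFrequency(&freq_);
		QueryPerformanceCounter(&start_);
	}
	double seconds() const
	{
		LARGE_INTEGER now;
		QueryPerformanceCounter(&now);
		return double(now.QuadPart - start_.QuadPart) / freq_.QuadPart;
	}
};
#else
#include <time.h>

// Linux: CLOCK_MONOTONIC gives elapsed wall time with nanosecond fields.
class Timer
{
	timespec start_;
public:
	Timer() { clock_gettime(CLOCK_MONOTONIC, &start_); }
	double seconds() const
	{
		timespec now;
		clock_gettime(CLOCK_MONOTONIC, &now);
		return (now.tv_sec - start_.tv_sec)
		     + (now.tv_nsec - start_.tv_nsec) / 1e9;
	}
};
#endif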

:)
