Does anyone know of a header file for Windows with a function that lets you execute code once every specified number of milliseconds, something that can be used like this:

if ( delay(5) ){
    // do something every 5 milliseconds
} 

If it works in microseconds, even better.

If there isn't one, does anyone know how to make one using just the elapsed time since the program started?


I'm not sure about *nix systems, but on Windows you can use Sleep(x);, where x is the number of milliseconds, after including windows.h: #include <Windows.h>

It would help if you explained a little more about what you want to do... because the if( delay() ) {} doesn't look right at first sight :)

windows.h has SetTimer(). Your program must be a Windows program (WinMain instead of main) and have a message pump for that to work.
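A rough sketch of how that fits together (my own illustration, not from the original post; the callback name and the 5 ms interval are just placeholders, and WM_TIMER resolution is typically no better than 10-15 ms):

#include <windows.h>

// Called by DispatchMessage() each time a WM_TIMER message is pumped.
VOID CALLBACK TimerProc(HWND hwnd, UINT msg, UINT_PTR idEvent, DWORD time)
{
    // do something roughly every 5 milliseconds (in practice, coarser)
}

int WINAPI WinMain(HINSTANCE, HINSTANCE, LPSTR, int)
{
    // NULL window handle: WM_TIMER is posted to this thread's message queue
    SetTimer(NULL, 0, 5, TimerProc);

    MSG msg;
    while (GetMessage(&msg, NULL, 0, 0) > 0)   // the message pump
    {
        TranslateMessage(&msg);
        DispatchMessage(&msg);   // invokes TimerProc for WM_TIMER
    }
    return 0;
}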

If you just want a console program, then you could create a thread and call Sleep() in a loop: put that thread to sleep, then execute some code each time it wakes up.
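Something along these lines (a minimal sketch using the Win32 thread API; the Worker name is mine):

#include <windows.h>

// Thread routine: sleep, wake up, do the periodic work, repeat.
DWORD WINAPI Worker(LPVOID)
{
    for (;;)
    {
        Sleep(5);   // ask for ~5 ms; expect 10-30 ms in practice
        // do something here each time the thread wakes up
    }
    return 0;
}

int main()
{
    HANDLE h = CreateThread(NULL, 0, Worker, NULL, 0, NULL);
    // ... the rest of the program runs here ...
    WaitForSingleObject(h, INFINITE);   // never returns in this sketch
    CloseHandle(h);
    return 0;
}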

The "delay" function is called Sleep(), and you get it through the <windows.h> header. Sleep() suspends the calling thread for the specified number of milliseconds. However, do not expect good precision: thread scheduling in Windows is notoriously slow and coarse, so don't expect precision better than about 10 ms to 30 ms.

The other solution is to use the clock() function in a tight loop, like so:

#include <ctime>

// Busy-waits (spins) until the requested number of microseconds has passed.
void delay_us(std::clock_t us_interval) {
  std::clock_t end_time = clock() + (us_interval * CLOCKS_PER_SEC) / 1000000;
  while( clock() < end_time ) ;
}

while( clock() < end_time ) ;

Sorry, but that's a horrible suggestion as it consumes all the cpu time leaving little or nothing for other windows programs.

Sorry, but that's a horrible suggestion as it consumes all the cpu time leaving little or nothing for other windows programs.

Yes, you are correct, it is horrible, but the operating system doesn't give you a choice if you want to sleep for a time interval below 10 ms or so, because that is basically the resolution of the Sleep function. Using timers can get you closer to a few-millisecond resolution for a nearly constant refresh rate (or frame rate) when calling a callback function. For anything below that, which is what the OP is asking for, your only choice is a tight loop (possibly with an atomic lock if you can manage it), because as soon as you use a Sleep or timer function you relinquish thread time to the scheduler, and the software-level wake-up interrupt that brings you back has a minimum latency of at least a few milliseconds. For precise timing operations (down to tens-of-kilohertz resolution, i.e., 10 to 100 microseconds), you need a hard real-time system, and Windows is not designed for that at all.
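To illustrate the tight-loop idea with a finer time base than clock(), here is a sketch using the Win32 QueryPerformanceCounter API (not mentioned above, so treat it as my own addition); it still burns a whole core while it waits:

#include <windows.h>

// Spin until the requested number of microseconds has elapsed.
void spin_wait_us(long long microseconds)
{
    LARGE_INTEGER freq, start, now;
    QueryPerformanceFrequency(&freq);   // counter ticks per second
    QueryPerformanceCounter(&start);
    const long long target = start.QuadPart + (microseconds * freq.QuadPart) / 1000000;
    do {
        QueryPerformanceCounter(&now);
    } while (now.QuadPart < target);
}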

and Windows is not designed for that at all.

Yup, and neither is *nix. That was one of the nice things about MS-DOS. There are add-ons to MS-Windows, such as this one, that provide accurate millisecond response time, but I think they cost lots of $$$.

Yup, and neither is *nix.

That's true for most *nix variants (at least the desktop / server operating systems), but remember that *nix is a very wide family, and almost all real-time operating systems are *nix variants, like QNX, LynxOS, VxWorks, FreeRTOS, and certain special versions of Linux (e.g., RTLinux). There are some slap-on frameworks for doing hard real-time on Windows, but they effectively shut down Windows kernel activity and replace it with their own kernel, which is often a POSIX-compliant micro-kernel. And yes, those cost quite a bit of money, because they usually involve strict quality and determinism guarantees.

But again, to make it clear, all my points about latencies and wait times on Windows apply, for the most part, equally to all other generic desktop operating systems. Server versions have the same problem, but I think they are faster (software interrupts are clocked faster). And, of course, the actual latency / response times will vary between the different operating systems.

I do know that the desktop Linux distributions I have worked with (Ubuntu, Red Hat and SUSE) are clocked to achieve sub-millisecond resolution on sleeping / timing / wake-ups: I have run multi-threaded apps (with thread synchronization) at up to 4 kHz on Linux kernels, so, in my experience, the resolution is at least down to the 50 to 100 microsecond range. The same caveats apply (these are not hard real-time systems), just at one or two orders of magnitude finer time resolution.
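For reference, on those Linux systems the request itself is just a nanosleep() call; this is my own minimal sketch, and the actual resolution you get depends on the kernel's timer configuration:

#include <time.h>

// Ask the kernel to sleep for the given number of microseconds.
void sleep_us(long microseconds)
{
    struct timespec req;
    req.tv_sec  = microseconds / 1000000;
    req.tv_nsec = (microseconds % 1000000) * 1000L;
    nanosleep(&req, NULL);   // may return early if interrupted by a signal
}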

To clarify things, I was looking for a function that returns true every specified number of milliseconds (or microseconds, if possible).
Or one that counts CPU clock ticks in a way that remains constant across faster and slower CPUs.

I have a piece of code that I want to execute at a constant rate regardless of CPU speed.

There is no such standard Win32 API function. Sleep() is not even consistent on the same CPU, let alone across all CPUs, because there are many other programs running and competing for CPU time. Sometimes programs do something that cannot be interrupted. If you need exact millisecond timing then don't use MS-Windows.
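You can roll an approximate version of that check yourself on top of clock(); this is just a sketch (the name interval_elapsed and the static state are mine), and it inherits all of clock()'s resolution limits:

#include <ctime>

// Returns true once each time the given number of milliseconds has passed.
bool interval_elapsed(std::clock_t interval_ms)
{
    static std::clock_t last = std::clock();
    std::clock_t now = std::clock();
    if ((now - last) * 1000 / CLOCKS_PER_SEC >= interval_ms)
    {
        last = now;
        return true;
    }
    return false;
}

It would be used the way the original question shows: if ( interval_elapsed(5) ) { /* do something every ~5 ms */ }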

while( clock() < end_time ) ;

Sorry, but that's a horrible suggestion as it consumes all the cpu time leaving little or nothing for other windows programs.

Would this result in hogging the CPU too? I use this a lot, and after reading this I am wondering how else you might do it.

#include <ctime>
#include <iostream>

using namespace std;

int main()
{
    clock_t startTime = clock();
    clock_t check = 0;
    while(1) //program main loop
    {
        check = float(clock() - startTime)/CLOCKS_PER_SEC*1000;
        while( check >= 5 ) //after 5 milliseconds have elapsed
        {
            cout << check << endl;
            startTime = clock();
            check = 0;
        }
    }

    cin.get();
    return 0;
}

I put in the check variable just to output the time in milliseconds.

I tried to edit the above post but it didn't go through, and now I'm locked out of editing it.

Anyway, the inner while() loop should be an if() statement.

Would this result in hogging the CPU too?

Yes, it has the same effect as Mike's previous post. It would be OK if you put a Sleep(1) somewhere in that loop -- but don't expect the program to be asleep for only 1 millisecond; most likely it will not wake up for about 10 milliseconds.
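For example, the earlier loop with a Sleep(1) dropped in would look roughly like this (my sketch of the suggestion, keeping the same clock()-based check):

#include <windows.h>
#include <ctime>
#include <iostream>

using namespace std;

int main()
{
    clock_t startTime = clock();
    while (true) // program main loop
    {
        Sleep(1);  // yield the CPU; expect ~10-15 ms of real sleep, not 1 ms
        clock_t check = clock_t(float(clock() - startTime) / CLOCKS_PER_SEC * 1000);
        if (check >= 5) // at least 5 milliseconds have elapsed
        {
            cout << check << endl;
            startTime = clock();
        }
    }
    return 0;
}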
