Is there any function (API or something) to get 1 ms of precision?
I have a timer, but I only get 10 ms at best.


No -- it's impossible to get that precision in operating systems such as MS-Windows and *nix. You need a real-time OS such as MS-DOS 6.X in order to do that. I recall reading about some third-party add-ons to turn MS-Windows into a real-time OS, but they cost a lot of money.

I understand... thanks for the answer, my friend.
Sorry, what is '*nix'?

*nix is a colloquialism to describe POSIX-based operating systems such as Unix or Linux.

thanks for that. thank you

On most systems, there are a number of different accessible clocks, some more precise than others. And the C++11 standard reflects that with the use of std::chrono::system_clock::now() and std::chrono::high_resolution_clock::now(). The actual precision of those clocks is dependent on the system.

I'm afraid Ancient Dragon's statement is somewhat outdated. For a long time, Windows had a fairly coarse system clock, at around 15ms precision. Now, however, the more typical precision is 1ms or 100us (micro-seconds). Similarly, Linux versions that are not ancient have a system clock set at either 4ms or 1ms precision, and you can configure that.

Furthermore, the standard high_resolution_clock is a clock that tries to exploit the finest possible precision that the system can offer, which is usually the CPU tick counts (which means nano-seconds on > 1 GHz CPUs), to provide high-resolution time values. These days, many computers and systems deliver nano-second "resolution" on that clock. This used to be unreliable, and still might be on some systems, because at that scale, the clock needs time to keep time, so to speak, and you also get issues of precision and multi-core synchronization. What happens is, the value given will be precise to the nano-second, but there can be some drift or readjustments. The intervals between those depend directly on your CPU's capabilities, but it's usually in the micro-seconds range.
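
If you are curious what tick period and guarantees your standard library advertises for these clocks, you can inspect the standard Clock::period and Clock::is_steady members; a small sketch (note that this only reports what the library advertises, not necessarily the effective hardware granularity):

#include <chrono>
#include <iostream>

int main() {
    typedef std::chrono::high_resolution_clock hr_clock;
    typedef std::chrono::system_clock sys_clock;

    // Clock::period is a std::ratio giving seconds-per-tick;
    // Clock::is_steady tells whether the clock is guaranteed never to go backwards.
    std::cout << "high_resolution_clock tick: "
              << hr_clock::period::num << "/" << hr_clock::period::den
              << " s, steady: " << (hr_clock::is_steady ? "yes" : "no") << "\n";
    std::cout << "system_clock tick: "
              << sys_clock::period::num << "/" << sys_clock::period::den
              << " s, steady: " << (sys_clock::is_steady ? "yes" : "no") << std::endl;
    return 0;
}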

For example, on my system (Linux), I get the following resolutions:

Resolution of real-time clock: 1 nanosec.
Resolution of coarse real-time clock: 4000000 nanosec.

The "coarse" clock is the "system-clock" (configured at 4ms), while the other one is the high-resolution clock (giving nano-second precision). If I look at the /proc/timer_list pseudo-file, I see the following:

Tick Device: mode:     1
Per CPU device: 0
Clock Event Device: lapic
 max_delta_ns:   257701235439
 min_delta_ns:   1800
 [....]
 event_handler:  hrtimer_interrupt

which should be understood as saying that there is some sort of high-resolution timer activity on a hardware interrupt with a minimal delta of 1800 nano-seconds (1.8 micro-seconds). So, that's probably the effective interval between reliable increments to the clock's value.
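
For reference, the resolution figures quoted above can be obtained programmatically with the POSIX clock_getres() call; a minimal sketch, assuming a Linux/POSIX system (CLOCK_REALTIME_COARSE is Linux-specific, and older glibc versions need -lrt when linking):

// Minimal sketch: query clock resolutions with clock_getres() on POSIX/Linux.
#include <time.h>
#include <cstdio>

static void print_res(const char* name, clockid_t id) {
    timespec ts;
    if (clock_getres(id, &ts) == 0)
        std::printf("Resolution of %s: %ld sec %ld nanosec.\n",
                    name, (long)ts.tv_sec, (long)ts.tv_nsec);
    else
        std::printf("%s is not supported on this system.\n", name);
}

int main() {
    print_res("real-time clock", CLOCK_REALTIME);
    print_res("monotonic clock", CLOCK_MONOTONIC);
#ifdef CLOCK_REALTIME_COARSE   // Linux-specific coarse (system-tick) clock
    print_res("coarse real-time clock", CLOCK_REALTIME_COARSE);
#endif
    return 0;
}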

Long story short, most systems today can deliver timing values down to the nano-second, but with a granularity somewhere in the micro-seconds range. If you have a C++11 capable compiler, with a complete implementation of the standard <chrono> header, you should be able to just use the std::chrono::high_resolution_clock::now() function to obtain the system's time down to nano-second values. For example, here is a little program that prints out the minimum resolution of the high-resolution clock:

#include <chrono>
#include <iostream>

int main() {

  // Take two readings back-to-back, then busy-wait until the clock
  // actually advances; the difference is the smallest observable tick.
  auto start = std::chrono::high_resolution_clock::now();
  auto end = std::chrono::high_resolution_clock::now();
  while( start >= end )
    end = std::chrono::high_resolution_clock::now();

  auto elapsed = end - start;
  std::cout << elapsed.count() << std::endl;

  return 0;
}

Which, on my system, prints out 1 nanosecond (as expected).

Otherwise, you can use Boost.Chrono, which is essentially the same library as the new standard chrono library.
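
If you go the Boost route, the code is nearly identical to the standard version above; only the namespace changes. A minimal sketch, assuming Boost is installed and you link against the boost_chrono and boost_system libraries:

#include <boost/chrono.hpp>
#include <iostream>

int main() {
    namespace bc = boost::chrono;

    // Same idea as the standard <chrono> example above: busy-wait
    // until the clock advances and print the smallest observed step.
    bc::high_resolution_clock::time_point start = bc::high_resolution_clock::now();
    bc::high_resolution_clock::time_point end = bc::high_resolution_clock::now();
    while (start >= end)
        end = bc::high_resolution_clock::now();

    // count() is in the clock's native ticks (nanoseconds on most platforms).
    std::cout << (end - start).count() << std::endl;
    return 0;
}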

Or, you can use the system calls directly: QueryPerformanceCounter on Windows, or clock_gettime on POSIX systems.
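
For the direct system-call route on Windows, the relevant functions are QueryPerformanceFrequency and QueryPerformanceCounter; a minimal sketch (Windows only, error checking omitted):

#include <windows.h>
#include <iostream>

int main() {
    LARGE_INTEGER freq, t0, t1;
    QueryPerformanceFrequency(&freq);   // counter ticks per second
    QueryPerformanceCounter(&t0);

    // ... code to be timed goes here ...

    QueryPerformanceCounter(&t1);
    double elapsed_ms = 1000.0 * (t1.QuadPart - t0.QuadPart) / freq.QuadPart;
    std::cout << "Elapsed: " << elapsed_ms << " ms" << std::endl;
    return 0;
}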

I have now tested your code:
the output is: 1 000 000
What unit is the output in?

The scale is in nano-seconds. So, this means that your clock's resolution is 1 millisecond (or 1 million nano-seconds). That's not great, but it will be just enough for you.

I'm guessing that you are using Windows, and that your implementation / compiler does not yet support a "real" high-resolution clock, so it just uses the system clock, which has a resolution of 1ms on many modern versions of Windows.

Sorry... I'm confused... so 1 000 000 is 1 ms?
I'm using Windows 7 (I don't like 8 too much, because it feels like I need a touchscreen because of its 'start menu').

What compiler are you using?

The GNU compiler... the free one.

mingw32... but I think that's GNU too.

I don't like 8 too much, because it feels like I need a touchscreen because of its 'start menu'.

Wrong, Windows 8 does not require a touch screen. Anything you can do with a touch screen you can do just as easily with the mouse.

mingw32... but I think that's GNU too.

MinGW's implementation may not use the highest-resolution clock available. You might try Boost.Chrono. IIRC, it uses QueryPerformanceCounter under the hood, which is the best you can get in terms of granularity. However, note that on Windows you can't reliably get a resolution finer than 1μs in certain increasingly common circumstances (e.g. multicore processors). But it should be sufficient for your current needs.

thanks for all to both

Digging a bit deeper, it seems that the reliability of QueryPerformanceCounter or the Linux equivalent isn't really much of a problem anymore. The whole issue had to do with multiple cores keeping separate counters and the introduction of cores that can slow down, thus slowing the progress of the counters. Both Intel and AMD processors have had counter-measures (constant_tsc and hpet / lapic interrupts for syncing) for almost a decade now. So, there is really only a small window of PCs that have those features (multi-core / variable-speed) and that are, at the same time, too old to have the counter-measures. It seems like only "fancy" PCs from 2001-2005 or so might be susceptible to this problem (the list is short). I think it's one of those issues that has taken on a life of its own; it was a short-lived concern at one point, but isn't anymore. It was only ever an issue on high-end PCs from the early 2000s running pre-Vista Windows.

As I said earlier, you shouldn't expect the QueryPerformanceCounter or Linux equivalent to be reliable down to the "advertised" resolution of 1 nano-second. But, today, it's perfectly reasonable to expect reliability down to 1 micro-second or so. That is, unless you run a deprecated operating system (pre-Vista Windows) on a deprecated machine (about 10 years old).

Sorry... I'm confused... so 1 000 000 is 1 ms?

Yes, it is:

1 milli-second (ms)  = 1,000 micro-seconds (us)  = 1,000,000 nano-seconds (ns)

And the scale of the high_resolution_clock durations is in nano-seconds, so, 1 000 000 is indeed 1 ms.
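
If you would rather let the library do these unit conversions for you, the chrono duration types already encode those factors; a small sketch using duration_cast:

#include <chrono>
#include <iostream>

int main() {
    std::chrono::milliseconds one_ms(1);

    // duration_cast converts between units: 1 ms == 1,000 us == 1,000,000 ns.
    std::cout << std::chrono::duration_cast<std::chrono::microseconds>(one_ms).count()
              << " us" << std::endl;
    std::cout << std::chrono::duration_cast<std::chrono::nanoseconds>(one_ms).count()
              << " ns" << std::endl;
    return 0;
}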

The GNU compiler... the free one.

It appears that MinGW/GCC does not have a "real" high_resolution_clock implementation. I am not too surprised, since Windows support has never been a big priority for the GNU team, for obvious reasons (Windows sucks). I guess nobody has gotten around to implementing the high_resolution_clock using the QueryPerformanceCounter functions in the GNU implementation.

Just use Boost.Chrono. Or, if you can live with that 1 ms resolution, just stay with the standard chrono library.

Yes... 1 ms is my main objective... thanks for that great information, my friends.

Can someone confirm this: 1 s = 1000 ms?

Can someone confirm this: 1 s = 1000 ms?

Yup.

thanks for all

I'm trying to make a timer to test the interval, but without success :(

#include <iostream>
#include <chrono>

using namespace std;

int main()
{

    auto start = std::chrono::high_resolution_clock::now();
    auto end = std::chrono::high_resolution_clock::now();
    int interval=50000000;
    auto elapsed=std::chrono::high_resolution_clock::now();
    while(elapsed.count()<=interval)
    {
        end = std::chrono::high_resolution_clock::now();
        elapsed= end - start;
    }
    cin.get();
    return 0;
}

Can anyone tell me what I'm doing wrong?

elapsed is declared as a time_point, not a duration. Read up on the chrono library here to see what options are available to you.

The code will compile like this (one of many possible variations):

#include <iostream>
#include <chrono>

using namespace std;

int main()
{
    auto start = chrono::high_resolution_clock::now();
    auto end = chrono::high_resolution_clock::now();
    int interval = 50000000;

    while ((end - start).count() <= interval)
    {
        end = chrono::high_resolution_clock::now();
    }

    cin.get();
}

Sorry... but I think I didn't receive the mail for your answer :(
So 1 000 000 ns = 1 ms, and 5 000 000 ns = 5 ms... so 5 000 000 * 1000 is 5 s (seconds), right???

int interval = 5000000 * 1000;

but it seems that my calculation isn't right :(

OK... I see the problem... I'm using the wrong type too.

auto interval = 5000000 * 1000;
cout << interval << endl;

the outup is:
705032704

Finally I resolved it in a different way:

#include <iostream>
#include <chrono>
using namespace std;
int main()
{
    auto start = chrono::high_resolution_clock::now();
    auto end = chrono::high_resolution_clock::now();
    auto interval = 5;
    cout << interval << endl;
    while (((end - start).count()/1000000000) <= interval)
    {
        end = chrono::high_resolution_clock::now();
    }
    cout << "hello";
    cin.get();
}

thanks for all

auto ConvertSecondsToNanoSeconds(int a)
{
    return (a*1000000000);
}

What is the chrono type to use for these big numbers?

My problem was:
- I can't use int; I must use double. If both operands are int, the compiler does the multiplication in int (which overflows) instead of double; that's why I got bad results.
- Instead of 'auto', I must use double:

double ConvertSecondsToNanoSeconds(double a)
{
    cout.precision(15); // this is to show the results as plain numbers and not with 'e'
    return (a*1000000000);
}

thanks for all

What is the chrono type to use for these big numbers?

That is none of your concern, nor is it mine. Chrono (like even the old time libraries) is designed to hide away all this nonsense about what type is actually used under the hood to store the time values. I don't know, and I don't care, and neither should you.

The kind of conversion that you are trying to do is already provided by the chrono library, and you should not attempt to do it yourself (in part, because of the possibly large numbers involved).

First of all, the chrono library has a number of "duration" types to represent various units, and they all interoperate very well. And if you need to do a conversion, you can use the duration_cast function. So, your loop that tries to wait 5 seconds can be written like this:

#include <iostream>
#include <chrono>

using namespace std;

int main()
{
    auto start = chrono::high_resolution_clock::now();
    auto end = chrono::high_resolution_clock::now();
    auto interval = chrono::seconds(5);

    while ((end - start) <= interval)
    {
        end = chrono::high_resolution_clock::now();
    }

    // here is the part where I do a cast:
    auto total_ms = chrono::duration_cast< chrono::milliseconds >(end - start);
    cout << "The time elapsed was " << total_ms.count() << " ms." << endl;

    cin.get();
}

The chrono library uses a number of different types under the hood, but mainly you should know that some are time-points (time_point) and some are durations (duration). Their internal representations and whatever else are none of your concern, and that shouldn't be a problem as long as you stick with the tools the library provides you with.

BTW, the standard does not even specify what types are used, that's how little they should matter to users.
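
To make the distinction concrete, here is a tiny sketch: now() returns a time_point, subtracting two time_points yields a duration, and duration_cast converts that duration to whatever unit you want to display:

#include <chrono>
#include <iostream>

int main() {
    // now() gives a time_point (a point on the clock's timeline).
    std::chrono::high_resolution_clock::time_point t0 =
        std::chrono::high_resolution_clock::now();
    std::chrono::high_resolution_clock::time_point t1 =
        std::chrono::high_resolution_clock::now();

    // Subtracting two time_points gives a duration (a span of time).
    std::chrono::high_resolution_clock::duration d = t1 - t0;

    // duration_cast converts it to the unit you want to display.
    std::cout << std::chrono::duration_cast<std::chrono::nanoseconds>(d).count()
              << " ns" << std::endl;
    return 0;
}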

Thanks.
Why does it print 5001 instead of 5000?

Can you give me a nice link to study the chrono library?

Off topic: why didn't I receive the mail when someone answers me?
(I have only seen it on this topic.)

Why does it print 5001 instead of 5000?

Because there is a 1 ms error. That's expected; your platform's timer has a resolution of 1 ms.

Can you give me a nice link to study the chrono library?

You can navigate the main reference site; there are many examples under the different classes and functions.

Then, you can look at the Boost.Chrono documentation. That's because Boost.Chrono was the basis for the standard chrono library, so, most of the documentation is valid for both libraries.

Why didn't I receive the mail when someone answers me?

I believe there is a slight delay. I think you are just faster than the system.
