On most systems there are a number of different accessible clocks, some more precise than others, and the C++11 standard reflects that: you can read the current time with std::chrono::system_clock::now() or std::chrono::high_resolution_clock::now(). The actual precision of those clocks depends on the system.
I'm afraid Ancient Dragon's statement is somewhat outdated. For a long time, Windows had a fairly coarse system clock, with around 15 ms precision. Today, however, the more typical precision is 1 ms or 100 us (microseconds). Similarly, Linux versions that are not ancient have a system clock with either 4 ms or 1 ms precision, and you can configure that.
Furthermore, the standard high_resolution_clock tries to exploit the finest precision the system can offer, which is usually the CPU's tick counter (meaning nanoseconds on CPUs above 1 GHz), to provide high-resolution time values. These days, many computers and systems deliver nanosecond "resolution" on that clock. This used to be unreliable, and still might be on some systems, because at that scale the clock needs time to keep time, so to speak, and you also run into issues of precision and multi-core synchronization. What happens is that the value given will be precise to the nanosecond, but there can be some drift or readjustments. The interval between those depends directly on your CPU's capabilities, but it's usually in the microseconds range.
For example, on my system (Linux), I get the following resolutions:
Resolution of real-time clock: 1 nanosec.
Resolution of coarse real-time clock: 4000000 nanosec.
The "coarse" …