How can I calculate CLOCKS_PER_SEC? It is defined as 1000 in ctime, but I don't want the defined constant; I want to calculate the actual value.

I tried the code below, but it seems to be giving me my CPU speed: on a 2.8 GHz processor it prints 2800, and on a 2.0 GHz processor it prints 2000.

So what is CLOCKS_PER_SEC and why is it defined as 1000? How can I calculate it? I'm trying to run a function every actual tick.

#include <iostream>
#include <chrono>
#include <thread>
#include <ctime>

std::chrono::time_point<std::chrono::high_resolution_clock> SystemTime()
{
    return std::chrono::high_resolution_clock::now();
}

std::uint64_t TimeDuration(std::chrono::time_point<std::chrono::high_resolution_clock> Time)
{
    // 64-bit result: a 32-bit count of nanoseconds overflows after ~4.3 seconds
    return std::chrono::duration_cast<std::chrono::nanoseconds>(SystemTime() - Time).count();
}

double ClocksPerSecond()
{
    std::uint32_t I = 0;
    double Result = 0.0;  // accumulate in floating point to avoid truncation
    auto Begin = std::chrono::high_resolution_clock::now();

    for (I = 0; I < 3; ++I)
    {
        std::this_thread::sleep_for(std::chrono::seconds(1));
        Result += TimeDuration(Begin) / 1000.0;  // nanoseconds -> microseconds
    }

    return Result / I;
}

int main()
{
    std::cout << ClocksPerSecond() / 1000 << std::endl;  // ClocksPerSecond() / 1000000;
    std::cout << CLOCKS_PER_SEC;
    return 0;
}


It seems to me that getting millisecond precision is going to be very difficult without using external hardware. Any multitasking OS is going to throw off your timing by interrupting your program to run a different task. CLOCKS_PER_SEC is actually a macro whose value changes according to the OS. You divide it into the number of clock ticks to get the number of seconds.


So what exactly do you want? Do you want to

run a function every actual tick.

or do you want to get the milliseconds?

How can I calculate CLOCKS_PER_SEC? It is defined as 1000 in ctime, but I don't want the defined constant; I want to calculate the actual value.

CLOCKS_PER_SEC is a standard macro, you can ignore it if you'd like, but you shouldn't redefine it.

So what is CLOCKS_PER_SEC and why is it defined as 1000? How can I calculate it?

CLOCKS_PER_SEC is the number of units counted by std::clock() over the span of one second. std::clock() is described by the standard as follows:

"The clock function returns the implementation’s best approximation to the processor
time used by the program since the beginning of an implementation-defined era related
only to the program invocation."

As a concrete example, one of my simple C standard libraries implements clock() as this:

/*
    @description:
        Determines the processor time used.
*/
clock_t clock(void)
{
    return _sys_getticks() - __clock_base;
}

Where __clock_base is a static object initialized to _sys_getticks() at program startup, and _sys_getticks() is defined like so:

/*
    @description:
        Retrieves the process' startup time in clock ticks.
*/
long long _sys_getticks(void)
{
    return GetTickCount64();
}

GetTickCount64() is a Win32 API function that returns the number of elapsed milliseconds since system startup, which means for this implementation clock() returns milliseconds since program startup, and CLOCKS_PER_SEC is the number of milliseconds in a second (i.e., 1000).

I'm trying to run a function every actual tick.

Define "tick". Apparently the tick of std::clock() isn't sufficient, and neither is the high-resolution result of sleeping for one second. I'd expect the latter to be closer to your system clock. But like Lucaci Andrew said, what exactly do you want?

Running a function every "tick" is unlikely to be meaningful, since the act of running a function will eat time to the point where you're unable to match ticks and function calls perfectly unless you define "tick" to be something much larger than CPU clock ticks.
