Respected Sir/Madam

I need to find out the execution time of a fragment of code.
For that I am using the
time_t
type, but it gives time in seconds.
And the value of CLOCKS_PER_SEC on my computer is 1000000.
Will you please help me get the time in milliseconds?

Thanks & regards
Srishekh

It gives you the time in seconds and you need to know how many milliseconds? Well a millisecond is a thousandth of a second ... one thousand milliseconds in a second. Did I misunderstand your question??

Respected Sir/madam
You misunderstood my question.
When the code takes 1 second or more it gives
the correct output, but when it takes less than 1 second
it gives 0 as the answer.

For example, if the code takes 0.00045 seconds, the output
should be 0.00045.
How do I get such precision?

I may be wrong, but I remember reading in a Visual Studio 6.0 book that the best resolution possible in it is 1 millisecond. So if that is the case you won't be able to measure 0.00045 seconds; it will come out as 0.

PS.
Isn't the greeting "Respected Sir/madam" a bit out of date? It drives me nuts.

The functions in time.h are only accurate to 1 second -- so no matter how you divide, you still have only 1-second accuracy. You need to use other functions, such as clock(), or the Win32 API functions GetTickCount() or GetSystemTime(), which provide more accuracy.

Do you know the resolution of those functions? As I said earlier, I remember that clock() had a resolution of only 1 ms.

If you are in Unix:
setitimer() and getitimer() have microsecond resolution on POSIX-compliant systems.

Do you know about profiling and profilers?

I need to find out the execution time of a fragment of code.

You don't need finer resolution timing. If you want to measure the thickness of a piece of paper, would you buy an expensive microscope, or would you measure a stack of 200 pieces of paper and then divide by 200?

I don't think you can use the concept of 200 pages to measure time accurately.
Let us suppose the width of a paper is 1/10th of a mm and you have a scale which can measure only up to a mm. Suppose you end up measuring 209 pages with the scale; then you'll end up with a reading that'll give you the width of each page as 20/209 ≈ 0.0957 mm instead of 0.1 mm.
The problem becomes more acute in computing, where instructions take around a nanosecond to complete.

There are lots of platform-specific APIs. On Windows this works (include <windows.h>):
GetTickCount() gives the number of milliseconds since the system was started.

The most precise unit is as precise as you can get, so Rashakil's analogy does not make sense to me. You can get less precise, but not more precise.

You don't need finer resolution timing. If you want to measure the thickness of a piece of paper, would you buy an expensive microscope, or would you measure a stack of 200 pieces of paper and then divide by 200?

In other words, run it a whole bunch of times until it runs for 10 seconds, then divide 10 by the number of times it ran. Of course, all that counting and looping will throw off your results -- after all, that code and all the calls to time() also take time. So you'll be off by only 5-15%.

I don't think you can use the concept of 200 pages to measure time accurately.

but you can wait 3 1/2 years to respond to a thread and divide that time by the number of total posts, for a rough estimate of the number of thread-posts per year.

amirite?

That's the same as the number of pointless thread bumps per month :)

uRrite. Just another case of some turkey not caring that 100 pages in on a forum might be moot thread. Nothing better to do, obviously.

And it wasn't even a good post either!