I'm doing my master's thesis on radar signal generation, and to build a scenario I need my C program to execute with microsecond timing resolution. Using time.h I can only get a resolution of about 1 millisecond. I know this is feasible in Linux, but using Windows is mandatory here. A major problem with Windows is that the resolution varies too much when other processes are running.
I have been thinking about a few different solutions:
1. Could I change the priority of the running processes so that my software gets priority on the processor?
2. Is it possible to boot into MS-DOS (on an XP PC) and access the processor's clock? Since the processor clock runs at 2.53 GHz, that theoretically means sub-nanosecond resolution. That way I could dedicate the core to my program.
3. Use an FPGA or a PIC for the timing?

Could anybody give me advice on this topic?



No, you can't get direct access to the CPU clock, or run DOS on Windows XP or above. The Windows kernel changed with Windows 2000 to the NT version, so DOS programs can now only run in a virtual machine of some kind. NTVDM.exe (NT Virtual DOS Machine) is the one I see on Windows XP all the time. VMware and DOSBox can do the same thing for you.

If I were doing something that required critical timing, I would buy a clock board. Haven't priced them in a long time, but they weren't very expensive back in the day, so they shouldn't be too costly.

Some boards, and some versions of Windows, do have better timing than the old PIC (programmable interrupt controller), but it's hit or miss, imo. With a clock board and a minimum number of services and other programs running, you have the best chance of getting reliable timing info.

You can't get good timing info from your oscilloscope?

I may have found a solution, but I still need to be sure that the clock remains stable while my script executes:


#include <stdio.h>
#include <windows.h>

int main(void)
{
	LARGE_INTEGER begin, result, einde, freq; /* The performance functions in windows.h require this format */
	float tijd; /* float for the required precision */
	QueryPerformanceFrequency(&freq); /* Counter frequency (read on the spot,
					but constant at 2.52705 GHz on this PC; the spec says 2.53 GHz) */
	QueryPerformanceCounter(&begin); /* Start the clock */
	printf("test\n"); /* A command that consumes some time */
	QueryPerformanceCounter(&einde); /* End of the timer */
	result.QuadPart = einde.QuadPart - begin.QuadPart; /* Time difference in clock ticks */
	printf("End of clock, start of clock: %f\n%f\n", (float)einde.QuadPart, (float)begin.QuadPart);
	printf("Frequency: %f\n", (float)freq.QuadPart); /* Frequency (on the spot) */
	tijd = (float)result.QuadPart / freq.QuadPart; /* Tick difference divided by the frequency */
	printf("Time in seconds: %f\n", tijd);
	return 0;
}

The oscilloscope is no problem, but since I have to drive signal generators for an electronic warfare threat scenario, timing is crucial!

Thanks for the advice. If anybody has experience in this domain, advice is more than welcome.