In my Operating Systems lectures, we were taught various scheduling algorithms like FCFS, Shortest Job First, etc.

In Shortest Job First, the scheduler is supposed to execute the processes with the least amount of burst time first. My question is: How does the scheduler calculate/estimate the burst time/execution time of a process before it is dispatched to the CPU? So, how is it able to determine which jobs are short and which jobs are long?

Also, isn't calculating the burst time of a process an overhead? Are there any ways to reduce or eliminate this overhead?

Thanks!

Hmm...have I posted this in the wrong section? I'll post it somewhere else then.

Well, from what I know, most modern operating systems do not use SJFS.
SJFS was more common in the batch computing days, when users had to explicitly submit their jobs. In that scenario, a user submitting a job was also required to estimate its run time.
Most current operating systems use a prioritized round-robin scheme. In this scheme, all processes start at the same priority. Then, if a process uses its entire time slice and does not finish, it is assumed to be a batch job and its priority decreases. Similarly, if a process uses only part of its allotted time and then tells the OS that it needs I/O, it is assumed to be interactive and its priority is increased.
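To make that demote/promote rule concrete, here is a minimal sketch in C (not from the reply above). The struct fields, the priority range, and the step size of 1 are assumptions for illustration only; real schedulers of this kind (multilevel feedback queues) are considerably more involved.

```c
/*
 * Sketch of the prioritized round-robin idea described above:
 * every process starts at the same priority, a process that burns
 * its whole time slice is treated as CPU-bound and demoted, and a
 * process that gives up the CPU early for I/O is promoted.
 */
#include <stdio.h>

#define BASE_PRIORITY 10   /* all processes start here          */
#define MIN_PRIORITY   0   /* highest priority (runs first)     */
#define MAX_PRIORITY  20   /* lowest priority                   */

struct proc {
    const char *name;
    int priority;          /* smaller number = scheduled sooner */
};

/* Called after a process is preempted or blocks. used_full_slice is 1
 * when the process consumed its entire quantum, 0 when it blocked for
 * I/O before the quantum expired. */
static void adjust_priority(struct proc *p, int used_full_slice)
{
    if (used_full_slice) {
        /* looks CPU-bound / batch-like: demote */
        if (p->priority < MAX_PRIORITY)
            p->priority++;
    } else {
        /* looks interactive (gave up the CPU for I/O): promote */
        if (p->priority > MIN_PRIORITY)
            p->priority--;
    }
}

int main(void)
{
    struct proc batch       = { "batch-job",   BASE_PRIORITY };
    struct proc interactive = { "text-editor", BASE_PRIORITY };

    /* Simulate a few scheduling rounds. */
    for (int round = 0; round < 3; round++) {
        adjust_priority(&batch, 1);        /* always uses its full slice */
        adjust_priority(&interactive, 0);  /* always blocks for I/O      */
    }

    printf("%s priority: %d\n", batch.name, batch.priority);
    printf("%s priority: %d\n", interactive.name, interactive.priority);
    return 0;
}
```

Running this, the batch job drifts to a lower priority (a larger number here) while the interactive one is promoted, which is exactly the behaviour described above.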

Thanks for the reply; that cleared a few things up. But are you sure that SJFS is never used in a modern operating system without requiring the user to estimate process runtimes? I mean, today's OSes being as complex as they are, might they not use a combination of various scheduling algos, possibly including SJFS?

Anyways, thanks again.

I am not sure... but it is very unlikely.
