Hi...

This is my first post in here... here goes...

I'm stuck trying to understand how round robin works.

I understand that round robin is an algorithm that allocates a timeslice to each process so that CPU time is shared as fairly as possible. Each process runs for up to the predetermined timeslice, and then the CPU switches to another process. The order in which processes get the CPU is determined by the order in which they arrive.
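Just to show how I currently picture the basic loop, here is a rough sketch in Python (all the names are mine, and it ignores arrival times, which is exactly the part I'm unsure about below):

from collections import deque

# Rough sketch of my understanding of round robin (names are all mine).
# Each entry is (pid, remaining_time); the waiting queue is a plain FIFO.
def round_robin(processes, timeslice):
    queue = deque(processes)   # pretend everything is already waiting
    time = 0
    while queue:
        pid, remaining = queue.popleft()
        run = min(timeslice, remaining)
        print(f"t={time}: p{pid} runs for {run}")
        time += run
        remaining -= run
        if remaining > 0:
            queue.append((pid, remaining))   # not finished: back of the queue
        else:
            print(f"t={time}: p{pid} finishes")

round_robin([(1, 5), (2, 8)], timeslice=2)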

But I am unsure what happens in the following scenario:

timeslice: 2 seconds
pid   arrivalTime   duration (in seconds)
1     0             5
2     4             8
3     6             9
4     8             10

p1 (the process with pid 1) would be using the CPU for the first 4 seconds (i.e. 2 timeslices). At the 4th second, just as p1 finishes its second timeslice, p2 arrives. The waiting queue for the CPU is empty. This is the instant where I am unsure what happens.

Question: Does p1 continue to get the next timeslice while p2 is put into the waiting queue, OR is p1 put into the waiting queue while p2 gets the CPU?

I would greatly appreciate it if someone could answer my query. I referred to a few textbooks, but their examples did not cover this specific scenario. I need to do a programming assignment on this, so I really need to understand how it works before I can begin.

Please help!!

Assuming they all have the same priority, generally any process that becomes ready for the CPU gets put at the end of the queue.

In your case, it comes down to whether p1 or p2 gets placed on the queue FIRST. Since the code that manages the processes cannot be interrupted, only ONE of them (p1 or p2) wins. Whichever one does gets the next slice. There's no "judgement" call needed.
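For instance, here's a rough sketch in Python of the two possible orderings at that exact instant (the names are mine, and both are legitimate round robin):

from collections import deque

ready = deque()

# Case A: p1's expired timeslice is handled before p2's arrival.
ready.append("p1")            # p1 goes to the back of the (empty) queue
ready.append("p2")            # p2 arrives and lines up behind it
print(ready.popleft())        # -> p1 gets the next slice

# Case B: p2's arrival is handled first.
ready.clear()
ready.append("p2")            # p2 arrives and is queued
ready.append("p1")            # then p1's expired slice puts it at the back
print(ready.popleft())        # -> p2 gets the next slice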

Yes, I forgot to say that they all have the same priority. Sorry about that.

Thank you Chainsaw for your reply.
Just to make sure that I understood what you are saying...

It does not matter whether p1 or p2 gets the next time slice. Therefore it can be either one that gets it, and it would still be within the definition of round robin, right?

right, it only matters which one arrives in the queue first.

If no one is in the queue, then you aren't really skipping anyone.

Thank you chainsaw for your reply.

Is there any specific rule/algorithm that states which one enters the queue first, since p1 has just finished using the CPU (i.e. is about to enter the queue) and p2 has just arrived (also about to enter the queue)?

E.g. a rule that says, since both are waiting to enter the queue, the one with the smaller pid gets into the queue first, or something like that??

Unless you have parallel processors, only one thing at a time can happen. The scheduler is not itself multi-threaded, so when a process is running and becomes stalled, the scheduler adds it to the queue.

Since the two kinds of stalls (timeslice up and waiting for a device) are both under the control of the scheduler itself (or the rest of the kernel), only one process can be rescheduled at a time, thus there's no need for tiebreakers.

(unless there is some unusual situation like multiple processors or the kernel itself is being scheduled by the scheduler or something)

Oh....

Thank you very much!!!

Hi again, I have another query about round robin.

There's this term DISPATCHER; I ain't too sure if that's what it's called. It's about the overhead time after each timeslice that is spent on scheduling.

(With no reference to the set of values in my earlier question.)
My query is this: according to the diagram in my textbook, even when the CPU is processing only one process, p1, there is this overhead time, DISPATCHER, imposed on the timeline after each timeslice.

E.g.
(time slice = 2.0; DISPATCHER: 10.0)
p1 is a process with a duration of 5.0; the waiting queue is empty and the CPU is ready. Assume no other process comes in.

time = 2.0: p1 arrives.
time = 2.0: CPU processes p1.
time = 4.0: Add DISPATCHER.
time = 14.0: CPU processes p1.
time = 16.0: Add DISPATCHER.
time = 26.0: CPU processes p1.
time = 27.0: Add DISPATCHER.
time = 37.0: CPU idles.
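In other words, the diagram seems to be doing something like this sketch (in Python, using the numbers from my example; the names are mine):

TIMESLICE = 2.0
DISPATCHER = 10.0

# Rough sketch of what the textbook timeline appears to show: the DISPATCHER
# overhead is added after EVERY timeslice, even with just one process.
time, remaining = 2.0, 5.0      # p1 arrives at t = 2.0 with 5.0 seconds of work
while remaining > 0:
    run = min(TIMESLICE, remaining)
    print(f"time = {time}: CPU processes p1")
    time += run
    remaining -= run
    print(f"time = {time}: add DISPATCHER")
    time += DISPATCHER
print(f"time = {time}: CPU idles")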

But IF there is only one process that needs to be processed, shouldn't there be no need for any DISPATCHER after each timeslice, since there is no switching between processes (no other processes are waiting for the CPU), right?

Or is this overhead time, DISPATCHER, always added onto the timeline regardless of whether there is one process or more than one process waiting?

'dispatcher' usually refers to the scheduler process, as it 'dispatches' other processes. So I'll assume DISPATCHER in your case means dispatcher OVERHEAD.

Yes, you might be able to design a dispatcher that says "as long as the queue is empty, let the current process run." That could shorten the overhead, but it could be hard to get rid of it.

I'm assuming you have a 'real time clock' that fires every <slice> time (maybe 100 ms?) and interrupts the current process; that interrupt is how the dispatcher/scheduler runs (it IS the interrupt driver for the real time clock). Roughly, the sequence goes like this (sketched in code after the list):

- the real time clock fires.
- the interrupt stops p1 from running and now the dispatcher is running.
- the dispatcher places the current running process at the end of the queue of tasks ready to run.
- the dispatcher looks at the front of the queue
- sees p1 is at the front
- notices that p1's context is already the context installed
- resumes running p1
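In rough Python-ish code, that whole path is something like this (just a sketch; the names are mine):

from collections import deque

ready = deque()
current = "p1"          # the process that was running when the clock fired

def on_timer_interrupt():
    # Sketch of the dispatcher running as the real-time-clock interrupt driver.
    global current
    ready.append(current)          # running process goes to the back of the queue
    nxt = ready.popleft()          # look at the front of the queue
    if nxt != current:             # only swap contexts if it's a different process
        context_switch(current, nxt)
        current = nxt
    print(f"resuming {current}")   # with only p1 around, no context switch happens

def context_switch(old, new):
    print(f"switching from {old} to {new}")   # stand-in for the real register swap

on_timer_interrupt()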

So, assuming you have that context test, the time to run the dispatcher is really a couple dozen instructions. Almost no time at all.

Maybe you could somehow turn off the realtime clock, but then you'd have to know to turn it back on as soon as some other process enters the queue.

I think the real overhead in any scheduler is the 'context switch' time: swapping the registers (including any memory bank registers) from the running process to a new process. Since you've avoided this, I suspect that in the one-process case you have outlined, a tiny tiny fraction of one percent of CPU time is lost to DISPATCHER time.

Thank you Chainsaw!

:D
