My book says that interrupt chaining is a "compromise between the overhead of a huge interrupt table and the inefficiency of dispatching to a single interrupt handler." I understand how the interrupt vector table works and why dispatching to a single interrupt handler is worse than choosing from an array. But I don't understand how it is any better to have one huge list of handlers residing in RAM than to have a smaller list of "pointers to handler lists" (I think the first option is better if it is possible to do). You'd still be using the same amount of memory, just spread out in smaller chunks in the second case... right? I'm asking because, the way it is phrased, it sounds like the issue is that we're going to run out of contiguous RAM. I don't see why that is a problem, given that the interrupt vector table is one of the first things set up when the OS boots, and it seems to me that modern machines have more than enough memory.
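To make sure I'm picturing the two layouts correctly, here's a rough sketch in C of what I mean (the sizes and names are just made up for illustration):

```c
typedef void (*isr_t)(void);

/* Layout 1: one flat vector table -- one handler pointer per interrupt number. */
isr_t flat_table[256];

/* Layout 2: a smaller first-level table whose entries point to chains of handlers. */
typedef struct handler_node {
    isr_t handler;
    struct handler_node *next;   /* handlers that share this entry are chained together */
} handler_node_t;

handler_node_t *chained_table[16];
```

Either way the handler pointers have to live somewhere, so the total memory looks about the same to me; the second version just scatters it into smaller chunks.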

If anyone could shed some light on this, it'd be much appreciated. :)



A major pro of the huge table is fast dispatch. To actually be fast, though, the table has to be stored in on-chip RAM, which is very expensive gate-wise; that is the major con. To minimize gates, some designs go to the other extreme of a single interrupt vector with a software dispatch. The drawback there is larger latency.
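Roughly, the single-vector extreme looks like this in software (the register address and names here are invented, just to illustrate the idea):

```c
#include <stdint.h>

typedef void (*isr_t)(void);

/* Handler table kept in ordinary RAM, indexed by interrupt source number. */
extern isr_t handlers[32];

/* Hypothetical memory-mapped register with one pending bit per source. */
#define IRQ_PENDING (*(volatile uint32_t *)0x40000000u)

/* The one entry point the hardware vectors to. */
void irq_entry(void)
{
    uint32_t pending = IRQ_PENDING;

    /* The dispatch the hardware isn't doing for us: scan the pending
       bits and call the matching handler. This scan is the extra latency. */
    for (int i = 0; i < 32; i++) {
        if (pending & (1u << i))
            handlers[i]();
    }
}
```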

A chained table takes fewer gates in the processor, but is slower because it needs an extra RAM access.

There are other design considerations, of course. How they are prioritized depends on the target application.

Can you elaborate? The cache is "on-chip RAM" - it sounds like you're saying that there is another cache* which stores the interrupt vector. Even if this is true, that doesn't explain how having pointers to lists of handlers saves any memory space. Are the lists of handlers stored somewhere other than the cache you're talking about? As far as the gates go, I'm afraid I don't understand. I thought accessing any location in RAM took the same amount of time.

*when I say "another" cache I mean one other than the cache that is used to bridge the gap between RAM speed and CPU speed.

Anyway, thank you for your reply.

The primary table is stored in on-chip RAM. It contains pointers to a second tier, which resides in off-chip RAM. Fetching an ISR address therefore takes a bus cycle (two, in fact). This is s l o w.
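In C-like pseudocode the two-level fetch looks something like this (purely illustrative; the real lookup is done by hardware or microcode, and all names here are made up):

```c
typedef void (*isr_t)(void);

/* Tier 1: small table held in on-chip RAM -- indexing it costs no bus cycle. */
extern isr_t *primary_table[16];   /* each entry points to a second-tier table */

/* Tier 2: the actual handler tables, living in off-chip RAM. */

void dispatch(unsigned group, unsigned line)
{
    isr_t *second_tier = primary_table[group];  /* on-chip lookup: fast */
    isr_t handler = second_tier[line];          /* off-chip lookup: a bus cycle */
    handler();                                  /* fetching the ISR itself: another one */
}
```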

The difference between the RAM used for the primary table and the cache is that the primary table is never purged. Caching it instead would mean gambling with interrupt latency.

Technically it is correct to say that this table's pages are cached yet locked (forever). In any case, the primary table eats up space which could otherwise serve as a true cache. Note that the same space could also have been spent on other things, such as a longer pipeline, a larger register file, more execution units, and so on.

Well, it really depends on what ACTUAL implementation you're talking about. There is no "one size fits all" answer.

If your processor has only one IRQ level, then you're stuck with chaining.
If you've got 256 levels, say, then you can spread out a little.
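For the single-IRQ-level case, chaining typically ends up looking something like this (a sketch only; the "did my device interrupt?" checks are per-driver and invented here):

```c
#include <stddef.h>

typedef struct chained_handler {
    int (*handler)(void);            /* returns non-zero if it claimed the interrupt */
    struct chained_handler *next;
} chained_handler_t;

static chained_handler_t *chain_head = NULL;

/* Each driver hooks itself onto the one shared line. */
void chain_register(chained_handler_t *h)
{
    h->next = chain_head;
    chain_head = h;
}

/* The single hardware vector walks the chain until a handler claims the interrupt. */
void shared_irq_entry(void)
{
    for (chained_handler_t *h = chain_head; h != NULL; h = h->next) {
        if (h->handler())
            break;
    }
}
```

With 256 levels you'd register each device on its own vector instead and skip the walk.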

Thank you guys for your helpful responses.
