My book says that interrupt chaining is a "compromise between the overhead of a huge interrupt table and the inefficiency of dispatching to a single interrupt handler." I understand how the interrupt vector table works and why dispatching to a single catch-all handler is worse than indexing into an array. What I don't understand is how a smaller table of "pointers to handler lists" is any better than one huge flat table of handlers in RAM (if the flat table is possible, it seems like the better option to me). You'd still be using roughly the same amount of memory overall; the second case just spreads it out into smaller chunks, right?

I'm asking because the phrasing makes it sound like the problem is running out of contiguous RAM. I don't see why that would be an issue, given that the interrupt vector table is one of the first things set up when the OS boots, and modern machines seem to have more than enough memory.
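To make my question concrete, here's roughly how I picture the two options. This is just a sketch in C with made-up names and sizes, not how any real kernel lays it out:

```c
#include <stdbool.h>
#include <stddef.h>

typedef void (*isr_t)(void);

/* Option 1: one huge flat table, one slot per possible interrupt source. */
#define NUM_SOURCES 4096
isr_t flat_table[NUM_SOURCES];          /* indexed directly by source id */

/* Option 2: interrupt chaining -- a small vector table whose entries
 * point to linked lists of handlers that share that vector. */
#define NUM_VECTORS 256

struct handler_node {
    bool (*handler)(void);              /* returns true if it serviced the interrupt */
    struct handler_node *next;
};

struct handler_node *chained_table[NUM_VECTORS];

/* Dispatch for the chained case: walk the list until a handler claims it. */
void dispatch(int vector) {
    for (struct handler_node *n = chained_table[vector]; n != NULL; n = n->next) {
        if (n->handler())
            break;
    }
}
```

If I have that picture right, option 2 costs extra pointer-chasing on every interrupt, so I don't see what option 1's "overhead" is supposed to be.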
If anyone could shed some light on this, it'd be much appreciated. :)