
The more modern approach is to use a microkernel that provides only the basic functionality. The rest of the operating system's work is done by external servers. Each of these servers encapsulates the policies and data structures it uses; other parts of the system can communicate with it only through a well-defined interface.

This distinction is also related to the question of where the operating system code runs. A kernel can either run as a separate entity, that is, be distinct from all processes, or be structured as a collection of routines that execute as needed within the environment of user processes. External servers are usually separate processes that run at user level, just like user applications. Routines that handle interrupts or system calls run within the context of the current process, but typically use a separate kernel stack.

Some services are delegated to daemons

In any case, some services can be carried out by independent processes, rather than being bundled into the kernel. In Unix, such processes are called daemons. Some daemons are active continuously, waiting for something to do. For example, requests to print a document are handled by the print daemon, and web servers are implemented as an http daemon that answers incoming requests from the network. Other daemons are invoked periodically, such as the daemon that provides the service of starting user applications at predefined times.
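To illustrate, a daemon is typically just an ordinary process that detaches itself from its controlling terminal and then waits for work. The following is a minimal sketch using standard POSIX calls (fork, setsid); the periodic_work function and the one-minute interval are hypothetical placeholders for whatever service the daemon provides, not part of any particular system.

```c
#include <stdlib.h>
#include <unistd.h>
#include <fcntl.h>
#include <sys/types.h>

/* Hypothetical placeholder for the daemon's actual service,
   e.g. checking a queue of print requests. */
static void periodic_work(void) {
    /* ... perform one round of the service ... */
}

int main(void) {
    pid_t pid = fork();
    if (pid < 0)
        exit(EXIT_FAILURE);        /* fork failed */
    if (pid > 0)
        exit(EXIT_SUCCESS);        /* parent exits; child continues in the background */

    setsid();                      /* start a new session, detaching from the terminal */
    chdir("/");                    /* do not keep any directory busy */

    /* redirect the standard descriptors to /dev/null */
    int fd = open("/dev/null", O_RDWR);
    dup2(fd, STDIN_FILENO);
    dup2(fd, STDOUT_FILENO);
    dup2(fd, STDERR_FILENO);

    for (;;) {                     /* daemons run indefinitely, waiting for something to do */
        periodic_work();
        sleep(60);                 /* hypothetical choice: wake up once a minute */
    }
}
```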

11.2 Monolithic Kernel Structure

Monolithic kernels may be layered, but apart from that, they tend not to be very modular. Both the code and the data structures make use of the fact that everything is directly accessible.

11.2.1 Code Structure

The operating system has many entry points

Recall that an operating system is basically a reactive program. This implies that it needs to have many entry points that get called to handle various events. These can be divided into two types: interrupt handlers and system calls. But the way these two types are used is rather different.

Interrupt handlers must be known by the hardware

Interrupts are a hardware event. When an interrupt occurs, the hardware must know what operating system function to call. This is supported by the interrupt vector. When the system is booted, the addresses of the relevant functions are stored in the interrupt vector, and when an interrupt occurs, they are called. The available interrupts are defined by the hardware, and so is the interrupt vector; the operating system must comply with this definition in order to run on this hardware.

While the entry point for handling the interrupt must be known by the hardware, it is not necessary to perform all the handling in this one function. In many cases, it is even unreasonable to do so. The reason is that interrupts are asynchronous, and may occur at a very inopportune moment. Thus many systems partition the handling of interrupts into two parts: the handler itself just stores some information about the interrupt that has occurred, and the actual handling is done later, by another function. This other function is typically invoked at the next context switch. This is a good time for practically any type of operation, as the system is in an intermediate state after having stopped one process but before starting another.

System calls cannot be known by the hardware

The repertoire of system calls provided by an operating system cannot be known by the hardware. In fact, this is what distinguishes one operating system from another. Therefore a mechanism such as the interrupt vector cannot be used. Instead, all system calls go through a single entry point. This is the function that is called when a trap instruction is issued. Inside this function is a large switch instruction that branches according to the desired system call. For each system call, the appropriate internal function is called.

But initial branching does not imply modularity

The fact that different entry points handle different events is good: it shows that the code is partitioned according to its function. However, this does not guarantee a modular structure as a whole. When handling an event, the operating system may need to access various data structures. These data structures are typically global, and are accessed directly by different code paths. In monolithic systems, the data structures are not encapsulated in separate modules that are only used via predefined interfaces.
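To make the single-entry-point mechanism concrete, the following sketch shows what such a dispatcher might look like. The system call numbers, the regs structure, and the internal functions do_read, do_write, and do_open are all hypothetical names chosen for illustration, not taken from any particular kernel.

```c
#include <stddef.h>

/* Hypothetical system call numbers; a real kernel defines these in a
   header shared with the C library. */
#define SYS_READ   0
#define SYS_WRITE  1
#define SYS_OPEN   2

/* Saved user registers at the time of the trap (hypothetical layout):
   arg0 holds the system call number, the rest hold its arguments. */
struct regs {
    long arg0, arg1, arg2, arg3;
    long retval;
};

/* Hypothetical internal implementations of each call (stubs here). */
static long do_read(int fd, void *buf, size_t count)        { return 0; }
static long do_write(int fd, const void *buf, size_t count) { return 0; }
static long do_open(const char *path, int flags)            { return 0; }

/* Single entry point reached from the trap handler.  A large switch
   branches to the internal function implementing the requested call. */
void syscall_dispatch(struct regs *r)
{
    switch (r->arg0) {
    case SYS_READ:
        r->retval = do_read((int)r->arg1, (void *)r->arg2, (size_t)r->arg3);
        break;
    case SYS_WRITE:
        r->retval = do_write((int)r->arg1, (const void *)r->arg2, (size_t)r->arg3);
        break;
    case SYS_OPEN:
        r->retval = do_open((const char *)r->arg1, (int)r->arg2);
        break;
    default:
        r->retval = -1;   /* unknown system call number */
        break;
    }
}
```

Note how the dispatcher calls the internal functions directly; nothing in this structure prevents those functions from reaching into global data structures, which is exactly why the initial branching does not by itself make the kernel modular.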

11.2.2 Data Structures