
Finally, by removing the constraint that jobs have to wait for each other, and allowing resources to be utilized instead of being left idle, more jobs can be serviced. This is expressed as a potential increase in throughput. Realization of this potential depends on the arrival of more jobs.

Exercise 49 In the M/M/1 analysis from Chapter 2, we saw that the average response time grows monotonically with utilization. Does this contradict the claims made here?

All this depends on an appropriate job mix

The degree to which multiprogramming improves system utilization depends on the requirements of the different jobs. If all the jobs are compute-bound, meaning they need a lot of CPU cycles and do not perform much I/O, the CPU will be the bottleneck. If all the jobs are I/O-bound, meaning that they only compute for a little while and then perform I/O operations, the disk will become the bottleneck. In either case, multiprogramming will not help much.

In order to use all the system components effectively, a suitable job mix is required. For example, there could be one compute-bound application, and a few I/O-bound ones. Some of the applications may require a lot of memory space, while others require only a little.

Exercise 50 Under what conditions is it reasonable to have only one compute-bound job, but multiple I/O-bound jobs? What about the other way around?

The operating system can create a suitable job mix by judicious long-term scheduling. Jobs that complement each other will be loaded into memory and executed. Jobs that contend for the same resources as other jobs will be swapped out and have to wait.

The question remains of how to classify the jobs: is a new job going to be compute-bound or I/O-bound? An estimate can be derived from the job's history. If it has already performed multiple I/O operations, it will probably continue to do so. If it has not performed any I/O, it probably will not do much in the future, but rather continue to just use the CPU.
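The history-based classification just described can be sketched in a few lines of Python. The sliding window size and the I/O-fraction threshold here are hypothetical tuning knobs chosen for illustration, not values from the text:

```python
# Sketch of a history-based job classifier for long-term scheduling.
# The window size and io_threshold are hypothetical tuning parameters.
from collections import deque

class JobHistory:
    """Track a job's recent activity as a sequence of 'cpu' and 'io' events."""
    def __init__(self, window=20):
        self.events = deque(maxlen=window)  # only the most recent events count

    def record(self, event):
        self.events.append(event)

    def classify(self, io_threshold=0.3):
        """A job whose recent activity is mostly I/O is deemed I/O-bound."""
        if not self.events:
            return "unknown"  # no history yet, so no basis for an estimate
        io_frac = sum(1 for e in self.events if e == "io") / len(self.events)
        return "io-bound" if io_frac >= io_threshold else "compute-bound"

# A long-term scheduler could then admit a complementary mix, e.g. one
# compute-bound job plus several I/O-bound ones.
job = JobHistory()
for _ in range(8):
    job.record("cpu")
for _ in range(4):
    job.record("io")
print(job.classify())  # 4/12 of recent events are I/O -> "io-bound"
```

Basing the estimate on a bounded window, rather than the whole history, lets the classification adapt when a job changes phase (for instance, from an I/O-heavy input stage to a compute-heavy processing stage).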

3.2.3 Multitasking for Concurrency

When multiple applications are active concurrently they can interact

A third reason for supporting multiple processes at once is that this allows for concurrent programming, in which the multiple processes interact to work on the same problem. A typical example from Unix systems is connecting a set of processes with pipes. The first process generates some data or reads it from a file, does some processing, and passes it on to the next process. Partitioning the computational task into a sequence of processes is done for the benefit of application structure and reduced need for buffering.

Exercise 51 Pipes only provide sequential access to the data being piped. Why does this make sense?

The use of multitasking is now common even on personal systems. Examples include:

• Multitasking allows several related applications to be active simultaneously. For example, this is especially common with desktop publishing systems: a word processor may embed a figure generated by a graphic editor and a graph generated by a spreadsheet. It is convenient to be able to switch among these applications dynamically, rather than having to close one and open another each time.

• Multitasking allows the system to perform certain tasks in the background. For example, while working with a word processor, the user may request to print the document. With multitasking, the system may prepare the print job in the background, while at the same time supporting continuous work with the word processor. As another example, a fax handling application can be activated in the background to receive a fax that arrives while the user is busy with something else.
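The pipe arrangement described above can be sketched with Python's subprocess module: each stage runs as a separate process, and the pipe connecting them lets the second stage consume data as soon as the first produces it. The two stages here (a generator and a filter) are purely illustrative:

```python
# Sketch of a Unix-style pipeline: two separate processes connected by a pipe.
import subprocess
import sys

# Stage 1: a child process that generates three lines of data.
gen = subprocess.Popen(
    [sys.executable, "-c",
     "print('pipes'); print('connect'); print('processes')"],
    stdout=subprocess.PIPE)

# Stage 2: a second process that filters lines containing the letter 'p',
# reading directly from stage 1's output pipe.
filt = subprocess.Popen(
    [sys.executable, "-c",
     "import sys; sys.stdout.writelines(l for l in sys.stdin if 'p' in l)"],
    stdin=gen.stdout, stdout=subprocess.PIPE)

gen.stdout.close()  # close our copy so stage 2 sees EOF when stage 1 exits
out, _ = filt.communicate()
print(out.decode().split())  # -> ['pipes', 'processes']
```

Note that neither stage buffers the whole data set: the pipe itself provides the (bounded) buffering between producer and consumer, which is exactly the reduced need for buffering mentioned above.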

3.2.4 The Cost

Multitasking also has drawbacks, which fall into three categories:

Overhead: in order to perform a context switch (that is, stop running one process and start another), register values have to be stored in memory and re-loaded from memory. This takes instruction cycles that would otherwise be dedicated to user applications.

Degraded performance: even when the CPU is running application code, its performance may be reduced. For example, we may see:

• Contention for resources such as memory: in order to run, multiple applications need their address spaces to be loaded into memory. If the total requirements exceed the physically available memory, this can lead to swapping or even thrashing (Section 5.4).

• Cache interference: switching among applications corrupts the cache state, leading to degraded performance due to more cache misses.

Another example is possible interference with real-time tasks, such as viewing a movie or burning a CD.

Complexity: a multitasking operating system has to deal with issues of synchronization and resource allocation (Chapter 4). If the different processes belong to different users, the system also needs to take care of security; this has been standard in Unix since the 1970s, but supporting multiple users at once still does not exist on Windows desktop systems.

However, on the bottom line, the benefits of multiprogramming generally far outweigh the costs, and it is used on practically all systems.
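The overhead of switching can be illustrated with a rough microbenchmark: two threads hand a token back and forth, forcing a switch on every hand-off. The absolute numbers depend heavily on the system and (in Python) include interpreter overhead on top of the actual context switch, so this is only a sketch of the measurement idea, not a precise figure:

```python
# Rough sketch: measure the cost of forced hand-offs between two threads.
# Each hand-off requires the scheduler to switch execution contexts; the
# time measured is cycles unavailable to useful application work.
import threading
import time

N = 10_000
a = threading.Semaphore(1)  # main thread starts holding the token
b = threading.Semaphore(0)

def pong():
    for _ in range(N):
        b.acquire()   # wait for the token from the main thread
        a.release()   # hand it back, forcing another switch

t = threading.Thread(target=pong)
t.start()

start = time.perf_counter()
for _ in range(N):
    a.acquire()
    b.release()
t.join()
elapsed = time.perf_counter() - start

per_switch = elapsed / (2 * N)  # two hand-offs per round trip
print(f"~{per_switch * 1e6:.1f} microseconds per hand-off")
```

A real context-switch measurement would use two processes communicating over a pipe and subtract the pipe's own cost, but the structure is the same: force many switches, divide the total time by their number.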

3.3 Scheduling Processes and Threads