1. Consider the following preemptive priority-scheduling algorithm based on dynamically changing priorities. Larger numbers imply higher priority. Tasks are preempted whenever there is a higher-priority task. When a task is waiting for the CPU (in the ready queue, but not running), its priority changes at a rate a:

   P(t) = P0 + a × (t − t0)

   where t0 is the time at which the task joins the ready queue and P0 is its initial priority, assigned when the task enters the ready queue or is preempted. Similarly, when the task is running, its priority changes at a rate b. The parameters a, b, and P0 can be used to obtain many different scheduling algorithms.

   a) What is the algorithm that results from P0 = 0 and b > a > 0?

   Solution: A running task gains priority faster than any waiting task, and any waiting task will have a priority greater than any task that arrives later. Therefore this will act like FIFO.

   b) What is the algorithm that results from P0 = 0 and a < b < 0?

   Solution: Any newly arrived task will have a higher priority than any waiting or running task, and so will be scheduled immediately. Once it is running, its priority will decrease more slowly than that of any waiting task, so it will keep the processor until another task arrives or it completes. Thus, this is last-in-first-out (LIFO).

2. Consider a computer system running a general-purpose workload. Measured utilizations (in terms of time, not space) are given in Figure 7.24. For each of the following changes, say what its likely impact will be on processor utilization, and explain why. Is it likely to significantly increase, marginally increase, significantly decrease, marginally decrease, or have no effect on the processor utilization?

   a) Get a faster CPU.

   Solution: The disk is the bottleneck. A faster CPU will simply spend less time computing between each disk I/O, so it will significantly decrease CPU utilization.

   b) Get a faster disk.

   Solution: The disk is the bottleneck. A faster disk will reduce the time spent waiting for the disk, allowing all other resources to be used more fully, so it will significantly increase CPU utilization.

   c) Increase the degree of multiprogramming.

   Solution: The degree of multiprogramming has little to no effect on the ratio of time that applications spend computing versus waiting for the disk, so it will have little to no effect on CPU utilization.

   d) Get a faster network.

   Solution: Tasks spend very little time waiting for the network. If that time is reduced, they will spend slightly more time waiting for the disk and CPU, but most of that (very small amount of) time will go to waiting for the disk, since it is the bottleneck. Thus, it will have little to no effect on CPU utilization.

3. The sequence of virtual pages referenced by a program has length p, with n distinct page numbers occurring in it. Let m be the number of page frames allocated to the process (all page frames are initially empty), and let n > m.

   a) What is the lower bound on the number of page faults?

   Solution: With n distinct pages referenced, there must be at least n page faults, since none of the pages are in memory initially. This bound is achieved, for example, by a reference string that references each page repeatedly before moving on to the next page: only the first reference to each page triggers a fault.

   b) What is the upper bound on the number of page faults?

   Solution: Suppose the next reference is always chosen to be a page not currently in memory. Since the n distinct pages do not all fit in memory (n > m), such a reference string exists, and every one of the p references faults, so there can be as many as p page faults. Note that these lower and upper bounds hold for any page replacement policy.
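The dynamic-priority rule in problem 1 can be checked with a small discrete-time simulation. This is only a sketch: the task names, arrival times, and service times are made up for illustration, and ties are broken by letting the current task keep the CPU (preemption only on strictly higher priority, as the problem states).

```python
# Discrete-time simulation of the dynamic-priority scheduler from problem 1.
# Each tick, the highest-priority arrived, unfinished task runs for one time
# unit; a strictly higher-priority task preempts the running one, which then
# re-enters the ready queue with its priority reset to P0 (per the problem).

def simulate(tasks, a, b, p0=0.0, horizon=100):
    """tasks: list of (name, arrival_time, service_time). Returns completion order."""
    prio = {}                                   # current priority of arrived tasks
    left = {name: s for name, _, s in tasks}    # remaining service time
    running = None
    done = []
    for t in range(horizon):
        for name, arr, _ in tasks:              # new arrivals start at P0
            if arr == t:
                prio[name] = p0
        ready = [n for n in prio if left[n] > 0]
        if not ready:
            if len(done) == len(tasks):
                break
            continue
        best = max(ready, key=lambda n: prio[n])
        if running in ready and prio[best] <= prio[running]:
            best = running                      # no preemption unless strictly higher
        elif running in ready:
            prio[running] = p0                  # preempted: priority resets to P0
        running = best
        left[running] -= 1
        if left[running] == 0:
            done.append(running)
            running = None
        for n in ready:                         # priorities drift over the tick:
            if left[n] > 0:                     # rate b while running, rate a while waiting
                prio[n] += b if n == running else a
    return done

tasks = [("A", 0, 4), ("B", 1, 4), ("C", 2, 4)]
print(simulate(tasks, a=1, b=2))    # P0 = 0, b > a > 0: FIFO -> ['A', 'B', 'C']
print(simulate(tasks, a=-2, b=-1))  # P0 = 0, a < b < 0: LIFO -> ['C', 'B', 'A']
```

With b > a > 0 the running task pulls away from the waiters and later arrivals start below everyone, so completion order matches arrival order; with a < b < 0 each newcomer (at priority 0) immediately tops the decayed priorities and runs to completion, giving LIFO.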
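Problem 2's answers can be sanity-checked with the utilization law, U_i = X × D_i (utilization = throughput × per-job service demand), under the assumption that a saturated system's throughput is capped by its bottleneck device. The demand numbers below are hypothetical, since Figure 7.24 is not reproduced here; only the disk-is-the-bottleneck shape matters.

```python
# Back-of-the-envelope check of problem 2 via the utilization law U_i = X * D_i.
# Service demands (seconds of service per job) are assumed for illustration;
# the disk dominates, matching the scenario in the problem.
demand = {"cpu": 0.20, "disk": 0.99, "network": 0.05}

def utilizations(demand):
    # At saturation, throughput is capped by the bottleneck: X <= 1 / max demand.
    x = 1.0 / max(demand.values())
    return {dev: x * d for dev, d in demand.items()}

base = utilizations(demand)
faster_cpu = utilizations(dict(demand, cpu=demand["cpu"] / 2))        # (a)
faster_disk = utilizations(dict(demand, disk=demand["disk"] / 2))     # (b)
faster_net = utilizations(dict(demand, network=demand["network"] / 2))  # (d)

print(base["cpu"])         # baseline CPU utilization
print(faster_cpu["cpu"])   # (a) halves: same throughput, half the CPU demand
print(faster_disk["cpu"])  # (b) doubles: disk bottleneck relieved, throughput rises
print(faster_net["cpu"])   # (d) unchanged: disk still caps throughput
```

Halving the CPU demand leaves throughput pinned by the disk, so CPU utilization drops in proportion; halving the disk demand (while the disk remains the bottleneck) lets throughput rise, raising CPU utilization; the network change never touches the binding constraint.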
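Problem 3's bounds can be exercised by simulating one concrete replacement policy on the two extreme reference strings. FIFO replacement is used here purely as an example (the bounds themselves hold for any policy), with illustrative values n = 5, m = 3, p = 20.

```python
# Check problem 3's page-fault bounds with a FIFO page-replacement simulator.
from collections import deque

def count_faults(refs, m):
    """Count page faults for reference string `refs` with m frames, FIFO eviction."""
    frames, order, faults = set(), deque(), 0
    for page in refs:
        if page not in frames:
            faults += 1
            if len(frames) == m:
                frames.discard(order.popleft())  # evict the oldest resident page
            frames.add(page)
            order.append(page)
    return faults

n, m = 5, 3                                      # n distinct pages, m frames, n > m

# Lower bound: reference each page repeatedly before moving on; only the
# first touch of each page faults.
best = [page for page in range(n) for _ in range(4)]
p = len(best)
print(count_faults(best, m))    # -> 5  (= n faults)

# Upper bound: cycle through all n > m pages, so under FIFO the next
# reference is never resident and every reference faults.
worst = [i % n for i in range(p)]
print(count_faults(worst, m))   # -> 20 (= p faults)
```

The cyclic string is the classic adversarial case: with n > m, FIFO evicts exactly the page that will be referenced soonest, so all p references miss, while the clustered string pays only the n compulsory misses.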