Topics for Operating Systems Comprehensive Exam
Posted Fall 2011

1. Some basic terminology
Centralized computing, distributed computing, batch processing, multitasking, multiprocessing, parallel processing, real-time system, address space, storage hierarchy, storage protection, execution states, loader, linker, address binding time (compile/link/execute), kernel, memory fragmentation (internal and external), sharing of code and data, reentrant code, virtual memory, page fault, thrashing

2. Processes and concurrent process control
· Process states and their transitions
· Representation of processes (PCB) and information maintained for each process
· Lightweight (thread) vs. heavyweight processes
· Interrupt processing and context switching
· Concurrent processes and the need for their control
· Process synchronization (mutual exclusion & general synchronization)
· Mechanisms for providing mutual exclusion (hardware-based vs. software-based, busy waiting vs. non-busy waiting)
· Software mechanisms with a single variable (semaphores, sequencers & event counts)
· Mechanisms that allow multiple variables (OR, AND, NOT synchronization)
· Strengths and weaknesses of each mechanism
· Classical problems of synchronization and their solutions using the various mechanisms

3. Higher-level concurrent programming mechanisms
· Monitors
· Serializers
· Rendezvous implemented in Ada tasks
· Open path expressions
· Rationale for the development of high-level mechanisms
· Strengths and weaknesses of each mechanism
· Classical problems of synchronization and their solutions using the various mechanisms

4. Distributed concurrency control
· Inherent problems in a distributed environment (lack of a global clock & global memory)
· Mechanisms to counter these problems (Lamport's logical clocks, vector clocks); see the sketch after this list
· Applications of these mechanisms
  o Causal relation of events
  o Causal ordering of messages
  o Global state recording
  o Termination detection
· Mutual exclusion algorithms in a distributed system
  o Non-token-based algorithms (Lamport's algorithm, the Ricart-Agrawala algorithm, Maekawa's algorithm)
  o Token-based algorithms (Suzuki-Kasami's broadcast algorithm, Singhal's heuristic algorithm, Raymond's tree-based algorithm)
  o Measures and comparison of performance (response time, synchronization delay, message traffic)
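As a study aid for the logical-clock item in topic 4, the following is a minimal sketch of Lamport's clock update rules; the class, method names, and the two-process scenario are illustrative assumptions, not code from the references below.

# Illustrative sketch of Lamport's logical clocks (names are hypothetical).
class LamportProcess:
    def __init__(self, name):
        self.name = name
        self.clock = 0

    def local_event(self):
        self.clock += 1                  # tick on every internal event
        return self.clock

    def send(self):
        self.clock += 1                  # sending a message is an event
        return self.clock                # timestamp piggybacked on the message

    def receive(self, msg_timestamp):
        # Advance past the sender's timestamp, then tick for the receive event.
        self.clock = max(self.clock, msg_timestamp) + 1
        return self.clock

p, q = LamportProcess("P"), LamportProcess("Q")
t = p.send()            # P's clock becomes 1; the message carries timestamp 1
q.local_event()         # Q's clock becomes 1
print(q.receive(t))     # prints 2: the receive is ordered after the send

The max-then-increment rule is what guarantees that if event a happened before event b then C(a) < C(b), which is the property the causal-ordering applications in topic 4 build on.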
5. Deadlock
· States & state transitions in terms of resource request/allocation
· Necessary conditions for deadlock and their relevance to deadlock prevention
· Sufficient condition for deadlock
· Deadlock detection
  o Resource allocation graphs and graph reduction
  o Difficulty with deadlock detection in systems with reusable and consumable resources
  o Efficient deadlock detection algorithms for special cases
  o Resolution when deadlock is detected
· Deadlock avoidance using the Banker's algorithm; see the sketch after this list
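For the Banker's algorithm item above, here is a minimal sketch of the safety check; the function name, data layout, and example matrices are illustrative assumptions (textbook-style numbers), not code taken from the references.

# Illustrative Banker's safety check: can every process finish in some order?
def is_safe(available, allocation, need):
    work = available[:]                      # resources currently free
    finished = [False] * len(allocation)
    sequence = []
    progress = True
    while progress:
        progress = False
        for i, (alloc_i, need_i) in enumerate(zip(allocation, need)):
            # A process can run to completion if its remaining need fits in work.
            if not finished[i] and all(n <= w for n, w in zip(need_i, work)):
                work = [w + a for w, a in zip(work, alloc_i)]  # it then releases its allocation
                finished[i] = True
                sequence.append(i)
                progress = True
    return all(finished), sequence

available  = [3, 3, 2]
allocation = [[0, 1, 0], [2, 0, 0], [3, 0, 2], [2, 1, 1], [0, 0, 2]]
need       = [[7, 4, 3], [1, 2, 2], [6, 0, 0], [0, 1, 1], [4, 3, 1]]
print(is_safe(available, allocation, need))   # (True, [1, 3, 4, 0, 2])

In the full avoidance algorithm, a resource request is granted only if provisionally applying it still leaves the system in a safe state; otherwise the requesting process waits.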
6. Distributed deadlock detection
· Difficulty with deadlock prevention and avoidance in distributed systems
· Control organizations for distributed deadlock detection (centralized, distributed, hierarchical)
· Deadlock detection
  o The possibility of detecting phantom deadlocks
  o Algorithms with centralized control (the completely centralized algorithm, the Ho-Ramamoorthy 2-phase & 1-phase algorithms)
  o Algorithms with distributed control (Obermarck's path-pushing algorithm, Chandy-Misra-Haas' edge-chasing algorithm)
  o Algorithms with hierarchical control (the Ho-Ramamoorthy algorithm)
  o Performance considerations (communication overhead, deadlock persistence time, storage overhead, processing overhead)
  o Deadlock resolution

7. Resource management & task scheduling
· Modeling of scheduling problems
  o Nonpreemptive vs. preemptive scheduling
  o Dispatcher and context switching
  o Representation of schedules (Gantt charts)
  o Scheduling algorithms (first-come-first-served, shortest-job-first, priority, highest response ratio next, round-robin, multilevel queue, multilevel feedback queue)
  o Performance measures (utilization, throughput, waiting time, response time, turnaround time)
· Probabilistic scheduling models
  o Queuing models of the scheduling problem: model parameters
    § Arrival pattern and service time distribution: M/M/1 (functional form, characteristics, rationale for their adoption in the study of scheduling in operating systems), M/G/1
    § Arrival and execution rates (state-independent and state-dependent rates)
    § Number of processors (single processor or multiprocessor)
  o Scheduling discipline
    § Sequencing independent of execution time (e.g., FIFO, LIFO, externally assigned priority)
  o Performance measures
    § Mean number of jobs in system and mean number in queue (analytic results for M/M/1 and M/G/1 systems)
    § Mean time in system and mean time in queue (analytic results for M/M/1 and M/G/1 systems)
    § Little's result and its significance; see the summary after this list
  o State-dependent arrivals and service times in Poisson queues
    § Applications (finite waiting room problem, finite source problem, multi-server system)
· Distributed scheduling and load distributing
  o Motivation for load distributing
  o Components of a load-distributing algorithm (policies on transfer, selection, location, and information)
  o Load-distributing algorithms
    § Sender-initiated
    § Receiver-initiated
    § Symmetrically initiated
    § Adaptive algorithms
    § Performance comparison of the various algorithms
    § Strengths and weaknesses of each algorithm
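The M/M/1 analytic results listed under topic 7 reduce to a few closed forms worth knowing; the summary below states the standard results (not tied to a particular reference below), with λ the arrival rate, μ the service rate, and ρ = λ/μ < 1 the utilization.

\[
  \rho = \frac{\lambda}{\mu}, \qquad
  L = \frac{\rho}{1-\rho}, \qquad
  L_q = \frac{\rho^2}{1-\rho}, \qquad
  W = \frac{1}{\mu-\lambda}, \qquad
  W_q = \frac{\rho}{\mu-\lambda}
\]

Little's result L = λW (and L_q = λW_q) ties the two views together: for example, with λ = 2 jobs/s and μ = 5 jobs/s, ρ = 0.4, W = 1/(5-2) = 1/3 s, and L = λW = 2/3 jobs in the system on average.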
8. Memory management
· Management schemes, the hardware/software support required, and the inherent problems of each
  o Single contiguous allocation (resident monitor approach)
  o Partitioned memory allocation (fixed partitions, variable partitions, fragmentation problems)
  o Paging
  o Segmentation
  o Combined systems (segmented paging, paged segmentation)
  o Virtual memory
· Virtual memory implemented with paging
  o Hardware/software support
  o Instruction execution in a virtual memory system
  o Overhead in a virtual memory system
    § Page fault rate and the effective memory access time
    § Why use associative registers for the page table
  o Page replacement
    § Algorithms (FIFO, OPT, LRU, LFU, MFU, etc.)
    § The FIFO anomaly; see the sketch after this list
    § Implementation and hardware support required
    § Stack algorithms and calculation of cost as a function of available real memory size
    § The stack updating procedure
    § Local/global replacement
  o Means to speed up page swaps
  o Thrashing
    § Locality principle
    § Methods to reduce thrashing (working set model, page fault frequency strategy)
    § Page size considerations
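The FIFO anomaly listed above can be reproduced with a short simulation; the sketch below is illustrative code (not from the references) that counts page faults under FIFO replacement. On the classic reference string it yields 9 faults with 3 frames but 10 faults with 4 frames, i.e., adding memory increases the fault count.

from collections import deque

# Illustrative FIFO page-replacement simulation (demonstrates the FIFO/Belady anomaly).
def fifo_faults(reference_string, num_frames):
    frames = deque()                      # oldest resident page at the left
    faults = 0
    for page in reference_string:
        if page not in frames:            # page fault
            faults += 1
            if len(frames) == num_frames: # memory full: evict the oldest page
                frames.popleft()
            frames.append(page)
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(fifo_faults(refs, 3))   # 9
print(fifo_faults(refs, 4))   # 10

LRU and the other stack algorithms listed above cannot exhibit this behavior, which is one reason the stack property matters.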
REFERENCES
1. M. Singhal and N. G. Shivaratri, Advanced Concepts in Operating Systems (McGraw-Hill, 1994)
2. Silberschatz, Galvin, and Gagne, Operating System Concepts (John Wiley, 2009)
3. Gary Nutt, Operating Systems (Addison-Wesley, 2004)
4. Shui Lam, CECS 526 Lecture Notes