Operating System (QUESTION BANK) (CSE)
Questions and Answers
UNIT-2
[NOTE: Some of the answers below may not cover every syllabus topic in full.]
1. What is CPU Scheduling?
- CPU scheduling is the process of determining which process or thread should be allocated the CPU (central processing unit) at any given time. It is an essential part of the operating system's task of managing processes and maximizing CPU utilization.
2. List and describe five scheduling criteria used in evaluating various scheduling policies.
- Response Time: It measures the time it takes for a process to start responding after a request is made. A shorter response time is generally preferred.
- Throughput: It represents the number of processes completed per unit of time. Higher throughput is desirable.
- Turnaround Time: It measures the time taken to execute a particular process from its arrival to its completion. Smaller turnaround time is preferable.
- Fairness: It ensures that each process gets a fair share of the CPU's processing time.
- CPU Utilization: It calculates the percentage of time the CPU is busy executing processes. Higher CPU utilization indicates better efficiency.
3. What is meant by process priority?
- Process priority refers to a numerical value assigned to a process, indicating its importance or priority level relative to other processes. A higher priority value suggests that the process should receive more CPU time and resources compared to processes with lower priorities.
4. What are the different types of priorities?
- The different types of priorities include:
1. Static Priority: Priorities assigned to processes remain constant unless explicitly changed by the system administrator or programmer.
2. Dynamic Priority: Priorities can change during runtime based on various factors such as aging, resource requirements, or response time.
3. Absolute Priority: Each process is assigned an absolute priority value, and the process with the highest priority always gets the CPU.
4. Relative Priority: Processes are assigned priority values relative to each other, allowing for prioritization within a specific range or group.
5. What role does priority play in process scheduling?
- Priority plays a crucial role in process scheduling as it determines the order in which processes are allocated CPU time. Processes with higher priorities are given preferential treatment and are scheduled to run before processes with lower priorities. It ensures that critical or time-sensitive tasks receive the necessary resources and are executed promptly.
6. Differentiate between preemptive and non-preemptive scheduling.
- Preemptive Scheduling: In preemptive scheduling, a running process can be interrupted and temporarily suspended to allocate the CPU to a higher-priority process. The interrupted process is placed back in the scheduling queue and can resume execution later.
- Non-preemptive Scheduling: In non-preemptive scheduling, a running process continues to execute until it voluntarily releases the CPU, blocks on I/O, or completes its execution. The CPU is not forcefully taken away from a process by the scheduler.
7. Explain the multiple queuing scheduling algorithm.
- The multiple queuing scheduling algorithm involves dividing processes into separate queues based on certain criteria or characteristics. Each queue is assigned a different priority or scheduling policy. Processes with higher priority or special requirements are placed in queues with higher priority, ensuring they receive preferential treatment during scheduling. This approach allows for efficient resource allocation and prioritization of processes.
8. What is context switching? Explain with an example.
- Context switching refers to the process of saving and restoring the state of a process or thread so that it can be resumed later from the same point. When a context switch occurs, the operating system saves the current process's state (registers, program counter, etc.) and loads the state of the next process to be executed.
- Example: Suppose a multitasking operating system has two processes, A and B. Process A is currently running, and an interrupt occurs, indicating that process B needs to be scheduled. The operating system performs a context switch, saves the state of process A, and loads the state of process B. Process B now starts executing from where it left off, and process A can resume execution later with its saved state.
9. What is dispatch latency? Describe the makeup of dispatch latency.
- Dispatch latency is the time the dispatcher takes to stop one process and start another running, i.e., the overhead between the scheduling decision and the moment the new process actually begins executing. It consists of the following components:
- Time spent in the operating system scheduler to make the scheduling decision.
- Time required to perform a context switch, including saving the state of the current process and loading the state of the new process.
- Time taken to update data structures and perform any necessary bookkeeping tasks related to the scheduling decision.
10. What are the CPU scheduling parameters?
- CPU scheduling parameters include:
- Burst Time: The amount of time a process requires to complete its execution.
- Arrival Time: The time at which a process enters the ready state and is available for execution.
- Priority: The relative importance assigned to a process compared to other processes.
- Quantum/Time Slice: The maximum amount of CPU time allocated to a process in preemptive scheduling algorithms like Round Robin.
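- For illustration, the parameters above can be grouped into a small record. The Python sketch below uses assumed field names that are not tied to any particular operating system:
```python
from dataclasses import dataclass

@dataclass
class Process:
    """Illustrative container for the scheduling parameters listed above
    (field names are assumptions, not from any particular OS)."""
    name: str
    arrival_time: int    # when the process enters the ready state
    burst_time: int      # CPU time the process needs to complete
    priority: int        # relative importance compared to other processes

# The quantum / time slice is a property of the scheduler, not of one process.
TIME_QUANTUM = 2  # e.g. 2 ms for a Round Robin scheduler

p1 = Process("P1", arrival_time=0, burst_time=10, priority=1)
```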
11. Explain different types of CPU scheduling algorithms.
- Different types of CPU scheduling algorithms include:
- First Come, First Served (FCFS): Processes are executed in the order they arrive.
- Shortest Job First (SJF): The process with the smallest burst time is scheduled next.
- Priority Scheduling: Processes are assigned priorities, and the highest priority process is scheduled first.
- Round Robin (RR): Each process is assigned a fixed time slice or quantum, and they take turns executing in a circular manner.
- Multilevel Queue Scheduling: Processes are divided into multiple queues, each with its own priority and scheduling algorithm.
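- As a concrete illustration of the Round Robin entry above, here is a minimal Python sketch. It assumes all processes are ready at time 0, and the function name and burst values are only illustrative, not taken from the questions below:
```python
from collections import deque

def round_robin(bursts, quantum):
    """Round Robin sketch: bursts maps process name -> CPU burst time
    (all processes assumed ready at t = 0). Returns {name: completion_time}."""
    remaining = dict(bursts)                 # CPU time still needed per process
    ready = deque(bursts)                    # circular ready queue (FIFO order)
    clock = 0
    completion = {}
    while ready:
        name = ready.popleft()
        run = min(quantum, remaining[name])  # run one quantum or until the burst ends
        clock += run
        remaining[name] -= run
        if remaining[name] == 0:
            completion[name] = clock         # process finished
        else:
            ready.append(name)               # unfinished: back to the tail of the queue
    return completion

if __name__ == "__main__":
    # Hypothetical burst times, just to show the circular order of execution.
    print(round_robin({"P1": 5, "P2": 3, "P3": 8}, quantum=2))
```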
12. What relation holds between each pair of algorithms?
a) Priority Scheduling algorithm and Shortest job first algorithm.
- Priority scheduling and shortest job first scheduling are similar in that both pick the next process according to a priority value; SJF can be seen as a special case of priority scheduling in which the priority is the inverse of the (predicted) next CPU burst, so the shortest burst has the highest priority. In general priority scheduling, priorities reflect the importance of the process, so a high-priority process may run before a process with a smaller burst time, unlike in SJF.
b) Round Robin Scheduling algorithm and Shortest job first algorithm.
- Round robin scheduling and shortest job first scheduling are different in terms of their execution order. Round robin assigns a fixed time slice to each process, ensuring fairness, whereas shortest job first scheduling prioritizes the process with the smallest burst time. The processes scheduled by round robin can have different burst times, whereas shortest job first always selects the process with the smallest burst time next.
c) Priority Scheduling algorithm and First come first serve scheduling algorithm.
- Priority scheduling and first come, first served (FCFS) scheduling differ in their prioritization logic. Priority scheduling uses process priorities to determine execution order, whereas FCFS follows the order of arrival. In priority scheduling, a higher priority process can preempt and execute before a lower priority process, while FCFS strictly adheres to the arrival order without considering priorities.
13. What are the scheduling queues and their role in process scheduling?
- Scheduling queues hold processes at the different stages of their life cycle: the job queue contains all processes in the system, the ready queue holds the processes in main memory that are ready to execute, and the device (I/O) queues hold processes waiting for a particular I/O device. The schedulers select processes from these queues, so the queues determine which processes compete for the CPU at any given moment.
- Now consider the following set of processes:
```
Process   Arrival Time   Burst Time   Priority
P1        0              10           1
P2        2              1            3
P3        3              2            5
P4        4              1            1
P5        5              5            2
```
a) Draw Gantt charts showing the execution of these processes using FCFS, SJF, RR, and priority scheduling schemes. (TQ = 2 ms)
b) Compute the waiting and turnaround time for each process for each of the above schemes.
NOTE: Try to solve these problems yourself. I will try to upload the answers to this question as soon as possible.
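As a cross-check (not a substitute for drawing the Gantt charts by hand), the minimal Python sketch below covers only the FCFS part of this problem, using the relations turnaround = completion - arrival and waiting = turnaround - burst; the function name is illustrative:
```python
def fcfs(processes):
    """FCFS sketch: processes is a list of (name, arrival, burst) tuples.
    Returns {name: (waiting_time, turnaround_time)}."""
    clock = 0
    results = {}
    for name, arrival, burst in sorted(processes, key=lambda p: p[1]):  # serve in arrival order
        start = max(clock, arrival)          # CPU may sit idle until the process arrives
        completion = start + burst
        turnaround = completion - arrival    # turnaround = completion - arrival
        waiting = turnaround - burst         # waiting = turnaround - burst
        results[name] = (waiting, turnaround)
        clock = completion
    return results

if __name__ == "__main__":
    # Process set from question 13: (name, arrival time, burst time)
    q13 = [("P1", 0, 10), ("P2", 2, 1), ("P3", 3, 2), ("P4", 4, 1), ("P5", 5, 5)]
    for name, (wt, tat) in fcfs(q13).items():
        print(f"{name}: waiting = {wt}, turnaround = {tat}")
```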
14. Consider the following set of processes:
```
Process   Arrival Time   Burst Time   Priority
P1        0              7            5
P2        3              10           4
P3        4              5            3
P4        5              3            1
P5        7              4            2
```
a) Draw Gantt charts showing the execution of these processes using FCFS, SJF, RR, and priority scheduling schemes. (TQ = 4 ms)
b) Compute the waiting and turnaround time for each process for each of the above schemes.
NOTE: Try to solve these problems yourself. I will try to upload the answers to this question as soon as possible.
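Similarly, the sketch below covers only the non-preemptive SJF part of this problem; the function name is illustrative and ties between equal bursts are broken arbitrarily by Python's min:
```python
def sjf_nonpreemptive(processes):
    """Non-preemptive SJF sketch: processes is a list of (name, arrival, burst).
    Returns {name: (waiting_time, turnaround_time)}."""
    pending = sorted(processes, key=lambda p: p[1])   # sort by arrival time
    clock = 0
    results = {}
    while pending:
        # Among processes that have already arrived, pick the shortest burst;
        # if none has arrived yet, advance the clock to the next arrival.
        ready = [p for p in pending if p[1] <= clock]
        if not ready:
            clock = pending[0][1]
            continue
        name, arrival, burst = min(ready, key=lambda p: p[2])
        pending.remove((name, arrival, burst))
        completion = clock + burst
        results[name] = (completion - arrival - burst, completion - arrival)  # (waiting, turnaround)
        clock = completion
    return results

if __name__ == "__main__":
    # Process set from question 14: (name, arrival time, burst time)
    q14 = [("P1", 0, 7), ("P2", 3, 10), ("P3", 4, 5), ("P4", 5, 3), ("P5", 7, 4)]
    for name, (wt, tat) in sjf_nonpreemptive(q14).items():
        print(f"{name}: waiting = {wt}, turnaround = {tat}")
```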
15. What is multilevel queue scheduling? Explain with the help of a diagram.
- Multilevel queue scheduling is a scheduling policy where processes are divided into separate queues, each with a different priority level. Each queue can use its own scheduling algorithm. The queues are arranged in a hierarchy, with the highest priority queue at the top and lower priority queues below. Processes in higher priority queues are executed first, and if the queue is empty, the scheduler moves to the next lower priority queue.
- Here's a diagram illustrating the concept:
```
---------------------
| High Priority |
---------------------
| Medium Priority |
---------------------
| Low Priority |
---------------------
```
16. What is multilevel feedback queue scheduling?
- Multilevel feedback queue scheduling is an extension of multilevel queue scheduling where processes can move between queues based on their behavior and requirements. Each queue has a different priority, and processes start in the highest priority queue. If a process uses up its allotted time quantum in a higher priority queue, it is moved to a lower priority queue. Conversely, if a process waits too long in a lower priority queue, it can be promoted to a higher priority queue. This allows for dynamic adjustment of priorities based on process behavior.
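- The demotion idea can be sketched in a few lines of Python. The sketch below assumes all processes are ready at time 0 and omits promotion/aging for brevity; the queue count, quanta, and burst values are made up for illustration:
```python
from collections import deque

def mlfq(bursts, quanta):
    """Multilevel feedback queue sketch (demotion only, no aging/promotion):
    bursts maps name -> CPU burst; quanta[i] is the time slice of queue i,
    with queue 0 the highest priority. All processes assumed ready at t = 0."""
    queues = [deque() for _ in quanta]
    remaining = dict(bursts)
    for name in bursts:
        queues[0].append(name)               # every process starts in the top queue
    clock = 0
    completion = {}
    while any(queues):
        level = next(i for i, q in enumerate(queues) if q)   # highest non-empty queue
        name = queues[level].popleft()
        run = min(quanta[level], remaining[name])
        clock += run
        remaining[name] -= run
        if remaining[name] == 0:
            completion[name] = clock
        else:
            # Used its full quantum without finishing: demote it
            # (unless it is already in the lowest-priority queue).
            queues[min(level + 1, len(queues) - 1)].append(name)
    return completion

if __name__ == "__main__":
    # Hypothetical bursts; the last queue's large quantum makes it behave like FCFS.
    print(mlfq({"P1": 20, "P2": 3, "P3": 9}, quanta=[4, 8, 1000]))
```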
17. What is starvation and aging in an operating system?
- Starvation refers to a situation where a process is unable to make progress or receive the necessary resources to complete its execution. It often occurs when a low-priority process is continuously preempted or blocked by higher-priority processes, leading to an unfair distribution of resources.
- Aging is a technique used to prevent starvation. It involves gradually increasing the priority of a process as it waits in the system for an extended period. Aging ensures that processes that have been waiting for a long time eventually receive the necessary resources by gradually boosting their priority level. This prevents processes from being indefinitely starved and promotes fairness in resource allocation.
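- As a rough illustration of aging, the Python sketch below assumes a higher numeric value means higher priority and uses a made-up waiting-time threshold; a real scheduler would track each process's waiting time internally:
```python
def apply_aging(processes, waiting_threshold, boost=1):
    """Aging sketch: processes is a list of dicts with 'priority' and 'waiting' keys
    (higher priority value = more important in this sketch). Any process that has
    waited longer than the threshold gets its priority boosted, preventing starvation."""
    for proc in processes:
        if proc["waiting"] > waiting_threshold:
            proc["priority"] += boost        # gradually raise priority of long waiters
            proc["waiting"] = 0              # reset the wait counter after the boost
    return processes

if __name__ == "__main__":
    ready = [
        {"name": "P1", "priority": 1, "waiting": 120},   # long-waiting low-priority process
        {"name": "P2", "priority": 5, "waiting": 3},
    ]
    print(apply_aging(ready, waiting_threshold=100))
```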