Static vs Dynamic Scheduling:

  In static scheduling, the execution order or priorities are fixed before the system runs (at design or compile time). In dynamic scheduling, the OS makes scheduling decisions at runtime based on the current system state.

Preemptive vs Non-Preemptive Scheduling:

  In non-preemptive scheduling, a running thread keeps the CPU until it blocks or finishes; in preemptive scheduling, the OS can take the CPU away at any time.

Preemptive Scheduler:

  1. OS moves a thread from the Running state to the Ready state:

    In preemptive scheduling, the operating system can interrupt a running thread and move it back to the ready state to allow another thread to execute.

  2. Preemption is needed to guarantee fairness:

    Preemption ensures that no single thread monopolizes the CPU. By interrupting long-running threads, the OS can allocate CPU time to other threads, maintaining fairness in resource distribution.

  3. Preemption needs interrupts:

    Preemption relies on hardware interrupts to signal the OS to stop the current thread and switch to another. These interrupts can be triggered by timers, external events, or system calls.

  4. Preemption helps meet deadlines:

    In real-time systems, preemption allows high-priority tasks to interrupt lower-priority ones, ensuring critical tasks meet their deadlines and system requirements.
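The four points above can be sketched as a tick-based simulation (using hypothetical task data, with each tick standing in for a timer interrupt): at every tick the scheduler re-picks the highest-priority ready task, so a high-priority arrival preempts the running low-priority task.

```python
# Sketch of preemptive priority scheduling. Hypothetical tasks: lower
# "priority" number means higher priority. Each loop iteration models one
# timer interrupt, which is the preemption point.

def preemptive_priority(tasks):
    """tasks: list of dicts with name, arrival, burst, priority."""
    remaining = {t["name"]: t["burst"] for t in tasks}
    timeline = []
    time = 0
    while any(remaining.values()):
        # Tasks that have arrived and still have work left.
        ready = [t for t in tasks if t["arrival"] <= time and remaining[t["name"]] > 0]
        if not ready:
            time += 1
            continue
        # Re-chosen every tick: a newly arrived high-priority task preempts.
        current = min(ready, key=lambda t: t["priority"])
        timeline.append(current["name"])
        remaining[current["name"]] -= 1
        time += 1
    return timeline

tasks = [
    {"name": "low",  "arrival": 0, "burst": 4, "priority": 2},
    {"name": "high", "arrival": 2, "burst": 2, "priority": 1},
]
# "low" runs until "high" arrives at t=2, is preempted, then resumes.
print(preemptive_priority(tasks))
```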

Scheduler Criteria:

  1. Throughput:

    Measures the number of tasks completed per unit time. Higher throughput indicates better performance.

  2. Turnaround Time:

    The total time taken for a task to complete, from submission to completion. Lower turnaround time is desirable.

  3. Response Time:

    The time from when a request is made to when the system starts responding. Important for interactive systems.

  4. CPU Utilization:

    The percentage of CPU cycles actively used for processing tasks. Higher utilization means better resource usage.


  5. Wait Time:

    The time a task spends waiting in the ready queue before execution. Lower wait time improves system responsiveness.
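The criteria above can be computed for a concrete schedule. A minimal sketch, using hypothetical arrival and burst times under non-preemptive FCFS: turnaround = completion - arrival, wait = turnaround - burst, and response = start - arrival (which equals wait when there is no preemption).

```python
# Per-task scheduling metrics for a non-preemptive FCFS run.
# Tasks are hypothetical (arrival, burst) pairs, assumed sorted by arrival.

def fcfs_metrics(tasks):
    time = 0
    results = []
    for arrival, burst in tasks:
        start = max(time, arrival)        # may idle until the task arrives
        completion = start + burst
        turnaround = completion - arrival # total time in the system
        results.append({
            "turnaround": turnaround,
            "wait": turnaround - burst,   # time spent in the ready queue
            "response": start - arrival,  # same as wait for non-preemptive FCFS
        })
        time = completion
    return results

for r in fcfs_metrics([(0, 5), (1, 3), (2, 1)]):
    print(r)
```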

List of Scheduling Algorithms:

  1. First-Come, First-Served (FCFS):
    • Tasks are executed in the order they arrive. Simple but can lead to long wait times for later tasks.
  2. Shortest Job Next (SJN):
    • Executes the task with the shortest execution time first. Minimizes average turnaround time but requires knowledge of task durations in advance.
  3. Priority Scheduling:
    • Tasks are executed based on priority. In the preemptive variant, higher-priority tasks preempt lower-priority ones. Can lead to starvation of low-priority tasks.
  4. Round Robin (RR):
    • Each task gets a fixed time slice (quantum) in a cyclic order. Ensures fairness but may increase context-switching overhead.
  5. Multilevel Queue Scheduling:
    • Tasks are divided into queues based on priority or type (e.g., system tasks vs. user tasks). Each queue may use a different scheduling algorithm.
  6. Multilevel Feedback Queue Scheduling:
    • Similar to multilevel queue scheduling but allows tasks to move between queues based on behavior (e.g., CPU-bound vs. I/O-bound).
  7. Earliest Deadline First (EDF):
    • Executes tasks based on their deadlines. Common in real-time systems to ensure deadlines are met.
  8. Rate-Monotonic Scheduling (RMS):
    • A fixed-priority algorithm where tasks with shorter periods are given higher priority. Used in real-time systems.
  9. Shortest Remaining Time (SRT):
    • Preemptive version of SJN, where the task with the shortest remaining execution time is executed next.

  Each algorithm has trade-offs and is suited for specific types of systems (e.g., real-time, batch processing, or interactive systems). Video walkthroughs on YouTube help build intuition for each algorithm.
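A small sketch contrasting two of the algorithms above, using hypothetical burst times with all jobs arriving at t=0: running the shortest jobs first (SJN) gives a lower average wait time than FCFS arrival order, at the cost of needing burst times up front.

```python
# Average wait time for jobs run back-to-back in a given order.
# Burst times are hypothetical; all jobs are assumed to arrive at t=0.

def avg_wait(bursts):
    total_wait, elapsed = 0, 0
    for b in bursts:
        total_wait += elapsed  # each job waits for everything before it
        elapsed += b
    return total_wait / len(bursts)

bursts = [6, 8, 7, 3]            # arrival order
print(avg_wait(bursts))          # FCFS: run in arrival order
print(avg_wait(sorted(bursts)))  # SJN: shortest job first
```

Sorting by burst length moves short jobs ahead of long ones, so fewer jobs pile up behind a long-running one.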

Quanta:

The time quantum in Round Robin (RR) scheduling significantly impacts performance. If the quantum is extremely large, RR behaves like First-Come, First-Served (FCFS). If the quantum is extremely small, RR achieves true processor sharing but increases overhead due to frequent context switching. Smaller quanta lead to more time spent on context switching, reducing efficiency. To optimize RR, the time quantum must be much greater than the context-switch time to balance responsiveness and overhead.
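The trade-off above can be made concrete by counting dispatches (each a context-switch opportunity) for the same jobs under different quanta. The numbers below are hypothetical: a tiny quantum multiplies the switch count, while a quantum larger than any burst degenerates to one dispatch per job, i.e., FCFS behavior.

```python
from collections import deque

# Count dispatches in a Round Robin run: each time slice handed out is one
# dispatch, and every dispatch after the first on a CPU implies a context
# switch. Burst times and quanta are hypothetical.

def rr_dispatches(bursts, quantum):
    queue = deque(bursts)
    dispatches = 0
    while queue:
        b = queue.popleft()
        if b > quantum:
            queue.append(b - quantum)  # unfinished job rejoins the queue
        dispatches += 1
    return dispatches

bursts = [10, 10, 10]
for q in (1, 5, 100):
    print(q, rr_dispatches(bursts, q))
```

With quantum 1 the jobs above need 30 dispatches; with quantum 5, only 6; with quantum 100 (larger than any burst), 3, exactly one per job as in FCFS.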
