

UNIT: - 02

PROCESSES

02.01 The Process Concept
02.02 Systems Programmer’s view of Processes
02.03 The Operating System view of Processes
02.04 Operating System Services for Process Management
02.05 Scheduling algorithms
02.06 Performance Evaluation.



Q. What is Process Management?

        1.   A process is a program in execution. A program does nothing unless its instructions are executed by a CPU.
        2.   Compiling a program, using a word processor, or sending output to a printer are all processes (i.e. a running program is called a process).
        3.   A process requires resources, including CPU time, memory, files, and I/O devices, to accomplish its task.
        4.   The CPU fetches and executes the instructions of a process one after another from main memory.
        5.   The operating system is responsible for the following activities in connection with process management (a minimal sketch of process creation and deletion is given after this list):
                    i.    Process creation and deletion.
                   ii.    Process suspension and resumption.
                  iii.    Provision of mechanisms for process synchronization and process communication.
                   iv.    Deadlock handling.
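
For illustration, the following is a minimal sketch (assuming a POSIX/Linux system; the program run by the child, "ls", is just an example) of how the process creation and deletion services look to a programmer, using the fork(), exec and wait() system calls:

/* Minimal sketch of process creation and deletion on a POSIX system. */
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void) {
    pid_t pid = fork();                   /* process creation: duplicate the caller     */
    if (pid < 0) {
        perror("fork failed");
        exit(1);
    } else if (pid == 0) {
        /* child process: replace its image with another program */
        execlp("ls", "ls", "-l", (char *)NULL);
        perror("exec failed");            /* reached only if exec fails                 */
        exit(1);
    } else {
        int status;
        waitpid(pid, &status, 0);         /* parent waits; the child's PCB is reclaimed */
        printf("child %d finished\n", (int)pid);
    }
    return 0;
}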


02.01 The Process Concept: -
         A process is an instance of a program in execution. A process is not the same as a "program":
1)  A program is a passive entity: the text of executable code residing on disk.
2)  A process is basically a program in execution.
3)  The execution of a process must progress in a sequential fashion.
4)  A process is an active entity ready for execution, which includes a program counter, a stack and a data section.
5)  When a program is loaded into memory, it becomes a process.
6)  A process basically consists of the following sections (shown in the figure below): -

[Figure: layout of a process in memory, showing the stack, heap, data and text sections]

             1.   Program code (text section): - The text section comprises the compiled program code.
             2.   Current activity: - Represented by the value of the program counter and the contents of the processor's registers. (The heap is the section of memory that is dynamically allocated during run time.)
             3.   Process stack: - The stack is used for local variables; space on the stack is reserved for local variables when they are declared.
             4.   Data section: - The data section stores global and static variables. (A small C sketch of these sections follows below.)
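
As a rough illustration (variable names are made up and exact placement is compiler and system dependent), the C fragment below labels which section each kind of object normally lives in:

#include <stdio.h>
#include <stdlib.h>

int global_counter = 0;              /* data section: global variable                 */
static int file_scope = 10;          /* data section: static variable                 */

void work(void) {                    /* the function's code lives in the text section */
    int local = 5;                   /* stack: local variable                         */
    int *dynamic = malloc(sizeof(int));   /* heap: dynamic memory allocation          */
    if (dynamic != NULL) {
        *dynamic = local + global_counter + file_scope;
        printf("value = %d\n", *dynamic);
        free(dynamic);
    }
}

int main(void) {
    work();
    return 0;
}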


02.01.02 The Process Control Block: -
       1.  The Process Control Block (PCB, also called Task Control Block) is a data structure in the operating system containing the information needed to manage a particular process.
       2.  Each process control block is identified by an integer process ID (PID).
       3.  For each process there is a Process Control Block, which stores the following information (a simplified struct sketch is given after the list of fields):

[Figure: fields of a Process Control Block]

Process state: - New, ready, running, waiting, etc. are the possible process states.
Program counter: - Indicates the address of the next instruction to be executed for this process.
CPU registers: - These need to be saved and restored when a process is swapped in and out of the CPU.
CPU-scheduling information: - Such as priority information and pointers to scheduling queues.
Memory-management information: - Includes the values of the base and limit registers, and the page tables or segment tables.
Accounting information: - Includes the amount of CPU time consumed, account numbers, time limits, etc.
I/O status information: - The I/O devices allocated to the process, the list of open files, etc.
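
The fields above can be pictured as a structure. The sketch below is a simplified, hypothetical PCB for illustration only; a real kernel keeps far more state (for example, Linux uses a large task_struct):

/* Simplified, hypothetical Process Control Block (illustration only). */
typedef enum { NEW, READY, RUNNING, WAITING, TERMINATED } proc_state_t;

typedef struct pcb {
    int            pid;              /* integer process ID                       */
    proc_state_t   state;            /* process state                            */
    unsigned long  program_counter;  /* address of the next instruction          */
    unsigned long  registers[16];    /* saved CPU registers (count is arbitrary) */
    int            priority;         /* CPU-scheduling information               */
    struct pcb    *next_in_queue;    /* pointer into a scheduling queue          */
    unsigned long  base, limit;      /* memory-management information            */
    unsigned long  cpu_time_used;    /* accounting information                   */
    int            open_files[16];   /* I/O status: open file descriptors        */
} pcb_t;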

02.03 The Operating System view of Processes: -

The operating system views a process in terms of the following states: -

[Figure: process state transition diagram (new, ready, running, waiting, terminated)]

   i.      New: - The process is in the stage of being created.

  ii.      Ready: - The process has all the resources it needs to run, but the CPU is not currently working on this process's instructions.

 iii.      Running: - The CPU is working on this process's instructions.

 iv.      Waiting: - The process cannot run at the moment because it is waiting for some resource to become available or for some event to occur.

  v.      Terminated: - The process has completed its execution and can be discarded; its PCB and resources are reclaimed.



02.04 Operating System Services for Process Management: -
1. Process management is an integral part of any modern operating system.


An operating system provides an environment for the execution of programs. The following are the various services provided by an operating system (a small sketch of the I/O and file-system services, using POSIX calls, is given after this list): -
• Program Execution: The operating system must provide the ability to load a program into memory and to run it.
• I/O Operation: A running program may require I/O, which may involve a file or a specific I/O device.
• File System Manipulation: The operating system must provide the capability to read, write, create and delete files on behalf of user programs, and must keep every file maintained correctly.
• Communication: Processes may share information either via shared memory or by message passing, in which packets of information are moved between processes by the operating system.
• Error Detection: For errors such as arithmetic overflow, access to an illegal memory location, or a process using too much CPU time, the operating system should take the appropriate action.
• Resource Allocation: When multiple users are logged on to the system, resources must be allocated to each of them. Among the various processes, the operating system uses CPU scheduling algorithms to determine which process is allocated which resource.
• Accounting: The operating system keeps track of which users use how much and what kinds of computer resources.
• Protection: The protection of hardware as well as software is the responsibility of the operating system.
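
As a small example of the I/O and file-system services (a sketch assuming POSIX system calls; the file name is made up), a user process asks the operating system to create, write and close a file:

#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>

int main(void) {
    /* ask the OS to create/open a file (file-system manipulation) */
    int fd = open("example.txt", O_CREAT | O_WRONLY | O_TRUNC, 0644);
    if (fd < 0) {
        perror("open");                   /* error detection: the OS reports the failure */
        return 1;
    }
    const char msg[] = "hello from a user process\n";
    write(fd, msg, sizeof msg - 1);       /* I/O operation carried out by the OS */
    close(fd);
    return 0;
}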


02.05 Scheduling algorithms: -
1.   CPU scheduling is used to maximize the CPU utilization obtained with multiprogramming.
2.   Scheduling is a fundamental function of an operating system. Scheduling is the method by which threads, processes or data flows are given access to system resources.
3.   The need for a scheduling algorithm arises from the requirement for most modern systems to perform multitasking.

02.05.01 CPU Scheduler: -
1.   The task of scheduling the processes in memory that are ready to execute, and allocating them to the CPU is performed by the CPU scheduler.
2.   CPU scheduling decisions may take place when a process:
a.   Switches from running to waiting state.
b.   Switches from running to ready state.
c.    Switches from waiting to ready.
d.   Terminates.
3.   It is not necessary that a ready queue is always a first-in first-out (FIFO) queue. It can be implemented as a FIFO queue, a priority queue, a tree, or simply an unordered linked list.
4.   There are three types of scheduler: -

a.   Long-term scheduler: - Also known as the job scheduler, it selects a process from the new state and transfers it to the ready state. It is responsible for preparing a process to get the attention of the CPU.

b.   Short-term scheduler: - It is responsible for giving the attention of the CPU to the process selected from the ready queue.

c.   Medium-term scheduler: - It is responsible for managing suspended (swapped-out) or interrupted processes.


02.05.02 Preemptive and Non-preemptive scheduling: -

1.   In non-preemptive scheduling, once the CPU has been allocated to a process, the process keeps the CPU until it releases it voluntarily, either by terminating or by switching to the waiting state (e.g. for I/O).
2.   In preemptive scheduling, the CPU can be taken away from the running process before it finishes its CPU burst, for example when a process with a shorter remaining time or a higher priority arrives, or when the time quantum expires.

02.05.03 Some important terms to remember: -
 
Dispatcher: -
1.   The dispatcher is the component that actually switches execution from one process to another.
2.   It acts on the decisions of the scheduler.
3.   The dispatcher's work involves:
a.   Switching context
b.   Switching to user mode
c.   Jumping to the proper location in the user program to restart that program

Arrival Time (AT): - The time at which a process enters the ready state (arrives in the ready queue).
Waiting Time (WT): - The time spent by the process in the ready queue waiting to get the attention of the CPU.
WT = TAT - BT
CPU time, scheduling time or burst time (BT): - The time for which a process actually runs (executes) on the CPU.
I/O waiting time (I/O burst time): - The time spent in the completion of an I/O request.
Completion Time (CT): - The point in time at which a process finishes its execution.
CT = TAT + AT
Turnaround Time (TAT): - The total time required to complete the execution of a process, from the ready state to the terminated state.
TAT = CT - AT
Response Time (RT): - The time from the submission of a process request until the CPU produces its first response to that request.
(A small worked example using these formulas is given below.)
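
The following is a small worked example (the arrival, burst and completion times are made up) that simply applies TAT = CT - AT and WT = TAT - BT:

#include <stdio.h>

int main(void) {
    /* hypothetical process: arrives at t = 2, needs 5 units of CPU,
       and under some schedule finishes at t = 12 */
    int at = 2, bt = 5, ct = 12;

    int tat = ct - at;    /* turnaround time: 12 - 2 = 10 */
    int wt  = tat - bt;   /* waiting time:    10 - 5 = 5  */

    printf("TAT = %d, WT = %d\n", tat, wt);
    return 0;
}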












02.05.04 Scheduling Algorithms: -
               i.        Scheduling algorithms are used for distributing resources among processes which request them simultaneously and asynchronously.
              ii.        The main purposes of scheduling algorithms are to minimize resource starvation and to ensure fairness among the processes utilizing the resources.
             iii.        Scheduling deals with the problem of deciding which of the requests is to be allocated resources.
             iv.        There are many different scheduling algorithms; some of them are as follows: -
a.   FCFS (First Come First Serve)
b.   SJN or SJF (Shortest Job Next / Shortest Job First)
c.   Priority Scheduling
d.   Round Robin Scheduling
e.   Shortest Remaining Time First
f.    Multilevel Queue Scheduling
g.   Largest Remaining Time First (LRTF)



1.          First Come First Serve: -
                           i.        First Come First Serve is the scheduling of processes in which processes leave the queue in the order in which they arrive.
                          ii.        It is based on a simple FIFO queue.
                         iii.        The FCFS scheduling algorithm is non-preemptive.
                         iv.        Once the CPU has been allocated to a process, that process keeps the CPU until it releases the CPU, either by terminating or by requesting I/O.
                          v.        This algorithm is problematic for time-sharing systems, where each user needs a share of the CPU at regular intervals. (A small simulation sketch follows.)
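
A minimal FCFS sketch for three hypothetical processes (burst times made up, all assumed to arrive at time 0): waiting and turnaround times are obtained simply by accumulating the burst times in arrival order.

#include <stdio.h>

int main(void) {
    int bt[] = {24, 3, 3};            /* hypothetical burst times, in arrival order */
    int n = 3, elapsed = 0;
    float total_wt = 0, total_tat = 0;

    for (int i = 0; i < n; i++) {
        int wt  = elapsed;            /* each process waits for everything before it */
        int tat = elapsed + bt[i];
        printf("P%d: WT = %d, TAT = %d\n", i + 1, wt, tat);
        total_wt  += wt;
        total_tat += tat;
        elapsed   += bt[i];
    }
    printf("average WT = %.2f, average TAT = %.2f\n", total_wt / n, total_tat / n);
    return 0;
}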

2.        Shortest Job First: -
i.         Shortest Job First associates with each process the length of its next CPU burst.
ii.        Whichever process requires the least time to complete its execution is processed first.
iii.       When the CPU becomes free, it is given to the process that has the shortest next CPU burst.
iv.       If two processes in the ready queue have the same CPU burst length, FCFS is used to decide between them.
v.        This algorithm gives the minimum average waiting time.
vi.       There are two schemes for Shortest Job First (a non-preemptive sketch follows this list): -
a.           Non-preemptive: - once the CPU is given to a process, it cannot be preempted until it completes its CPU burst.
b.          Preemptive: - if a new process arrives with a CPU burst length less than the remaining time of the currently executing process, preempt. This scheme is known as Shortest-Remaining-Time-First (SRTF).
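
A sketch of non-preemptive SJF with all processes assumed to arrive at time 0 (burst times made up): sort the burst times in ascending order and then accumulate waiting times exactly as in FCFS.

#include <stdio.h>
#include <stdlib.h>

static int cmp_int(const void *a, const void *b) {
    return *(const int *)a - *(const int *)b;
}

int main(void) {
    int bt[] = {6, 8, 7, 3};               /* hypothetical burst times, all arrive at t = 0 */
    int n = 4, elapsed = 0;
    float total_wt = 0;

    qsort(bt, n, sizeof bt[0], cmp_int);   /* shortest burst first */
    for (int i = 0; i < n; i++) {
        total_wt += elapsed;               /* waiting time of this process */
        elapsed  += bt[i];
    }
    printf("average waiting time = %.2f\n", total_wt / n);
    return 0;
}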
3.        Priority Scheduling: -
i.         A priority is associated with each process in the queue, and the process with the highest priority is processed first.
ii.        SJF is an example of a priority algorithm.
iii.       Priorities are defined: -
a.           Internally: using some measurable quantity to compute the process's priority.
b.          Externally: defined priorities are set by criteria that are external to the operating system, such as the importance of the process or the type and amount of funds being paid for computer use.
iv.       It is also of two types: -
a.           Non-preemptive: - simply puts the new process at the head of the ready queue.
b.          Preemptive: - preempts the CPU if the newly arrived process has a higher priority than the currently running process.
v.        If a process waits indefinitely (starvation), the blockage is removed with a technique named aging, which gradually increases the priority of that particular process (a small sketch of aging follows).
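
A minimal sketch of non-preemptive priority scheduling with the aging idea from point (v); the priorities, burst times and the "+1 after every completion" aging rule are all made up. Here a larger number means a higher priority.

#include <stdio.h>

#define N 3

int main(void) {
    int priority[N] = {5, 1, 3};   /* hypothetical priorities (larger = more urgent) */
    int burst[N]    = {2, 2, 2};   /* hypothetical burst times                       */
    int done = 0, now = 0;

    while (done < N) {
        int best = -1;
        for (int i = 0; i < N; i++)        /* pick the highest-priority unfinished process */
            if (burst[i] > 0 && (best < 0 || priority[i] > priority[best]))
                best = i;

        printf("t=%d: run P%d (priority %d)\n", now, best + 1, priority[best]);
        now += burst[best];                /* run it to completion (non-preemptive)        */
        burst[best] = 0;
        done++;

        for (int i = 0; i < N; i++)        /* aging: boost every process still waiting     */
            if (burst[i] > 0)
                priority[i]++;
    }
    return 0;
}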
4.          Round Robin Scheduling: -
i.             It is designed especially for time-sharing systems and is sometimes called the processor-sharing approach.
ii.            A time quantum is used with this algorithm, typically ranging from 10 to 100 milliseconds.
iii.          The ready queue is treated as a circular queue.
iv.          The ready queue is kept as a FIFO queue of processes.
v.           The scheduler picks the first process from the queue, sets a timer to interrupt after one time quantum, and dispatches the process.
vi.          One of two things will then happen (a small simulation sketch follows this list): -
a.           If the process has a CPU burst of less than one time quantum, the process itself will release the CPU for the next process.
b.          Otherwise, if the burst is longer than one time quantum, the timer will go off and cause an interrupt; a context switch is performed and the process is put at the tail of the ready queue.
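
A minimal round-robin sketch (the burst times and the time quantum of 4 are made up; all processes are assumed to arrive at time 0): each pass over the array gives every unfinished process at most one quantum, and a process that finishes within its quantum releases the CPU early.

#include <stdio.h>

int main(void) {
    int remaining[] = {24, 3, 3};   /* hypothetical remaining burst times */
    int n = 3, quantum = 4, now = 0, done = 0;

    while (done < n) {
        for (int i = 0; i < n; i++) {
            if (remaining[i] <= 0)
                continue;
            int slice = remaining[i] < quantum ? remaining[i] : quantum;
            now += slice;                      /* run for one quantum, or less if it finishes */
            remaining[i] -= slice;
            if (remaining[i] == 0) {
                printf("P%d completes at t=%d\n", i + 1, now);
                done++;
            }
        }
    }
    return 0;
}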

5.         Shortest Remaining Time First: -
  i.             It is also known as Shortest Remaining Time (SRT).
 ii.            It is the preemptive version of shortest-job-next scheduling.
iii.           In this scheduling algorithm, the process with the smallest amount of time remaining until completion is selected to execute next. Because the running process always has the shortest remaining time, short processes are handled very quickly.

6.         Multilevel Queue Scheduling: -
i.            If processes can easily be classified into groups, we may use a multilevel queue scheduling (MLQ) algorithm.
ii.           We partition the ready queue into separate queues.
iii.          Processes are assigned to one queue depending on their properties (e.g. memory size, process type).
iv.         Each queue can then use a different process-scheduling algorithm.
v.          In addition, scheduling must be done between the queues.
vi.         This is commonly implemented as fixed-priority scheduling among the queues.
vii.        The possibility of starvation can occur with this algorithm.
7.         Largest Remaining Time First Scheduling: -


i.            In this scheduling, the process whose remaining time is the greatest is selected for execution first.
ii.           This scheduling may be non-preemptive or preemptive.
iii.          When largest-remaining-time-first scheduling works in non-preemptive mode, it behaves like largest-job-first scheduling.









