
What Are Process States In Operating System?

Process state in an operating system includes various stages. This article will help you understand every stage in the lifecycle of a process in detail.
Shreeya Thakur

Table of content: 

  • 5 states of a process
  • Two-State Process Model
  • Difference between a Program and a Process
  • The Different Process States: Details
  • Program Execution in OS
  • Context Switching
  • Operations on a process
  • Degree of Multiprogramming
  • Process vs. Thread
  • Summing Up

In an operating system, when we run any program or software, it becomes a process. In simple words, a process is nothing but a program in execution. During its life cycle, every process passes through a number of stages. A process may or may not pass through every possible stage, but its basic life cycle is usually described in terms of the following five phases:

5 states of a process

  • New
  • Ready
  • Running
  • Wait/Block
  • Termination

Now, let’s try to understand these phases through a pictorial representation called a process state diagram.

Process States Overview

Two-State Process Model

This model describes a process as being in only two states:

  1. Running: The process is currently being executed on the CPU.
  2. Not Running: The process is anywhere other than the CPU, for example, waiting for some I/O operation to complete or waiting to be scheduled. A newly created process enters the system in the Not Running state and is dispatched to Running when the CPU becomes available.
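
To make this model concrete, here is a minimal sketch in C of the two-state idea: a process structure holds one of the two states, and a dispatcher toggles between them. The `dispatch` and `pause_process` helper names are purely illustrative and are not part of any real operating system API.

```c
/* Minimal sketch of the two-state model: a process is either RUNNING or
 * NOT_RUNNING, and a dispatcher toggles it between the two states.
 * Illustrative only; the helper names are hypothetical. */
#include <stdio.h>

typedef enum { NOT_RUNNING, RUNNING } two_state_t;

typedef struct {
    int pid;
    two_state_t state;
} process_t;

/* The dispatcher picks a Not Running process and gives it the CPU. */
void dispatch(process_t *p)      { p->state = RUNNING; }

/* The process leaves the CPU (time slice over, waiting for I/O, etc.). */
void pause_process(process_t *p) { p->state = NOT_RUNNING; }

int main(void) {
    process_t p = { .pid = 1, .state = NOT_RUNNING };  /* newly created */
    dispatch(&p);        /* Not Running -> Running */
    pause_process(&p);   /* Running -> Not Running (e.g. waiting for I/O) */
    printf("pid %d is %s\n", p.pid,
           p.state == RUNNING ? "Running" : "Not Running");
    return 0;
}
```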

Difference between a Program and a Process

A program is just a collection of instructions or a piece of code. A process, on the other hand, is a program in execution: it is not merely the program code but also includes the program counter, process stack, register contents, and so on. Before execution, programs are stored in secondary memory; when their turn for execution comes, they are brought into main memory and become processes. For this reason, a program is called a passive entity and a process an active entity.

The Different Process States: Details

1. New: In this phase, the process is still essentially a program residing in secondary memory; it has just been admitted to the system and placed in the job queue.

2. Ready: In this phase, the program has been selected for execution, is transformed into a process, and is placed in the ready queue. The ready queue is a place in main (primary) memory where all the processes that are about to go to the running state wait for their turn. By default, the process that arrives first runs first.

3. Running: In this phase, a ready process is picked from the ready queue, assigned to one of the CPU cores, and its instructions start executing one by one.

4. Wait/Block: In this phase, the process has to step out of the Running phase, which usually happens for one of two common reasons:

  • First, during execution the CPU encounters an instruction that demands some resource or some user intervention, i.e., some input from the user. The process waits for the input, and as soon as it receives it, it moves back to the ready state.
  • Second, the CPU receives a higher-priority request that demands the current execution be stopped immediately so that the higher-priority process can proceed. In such scenarios, the process is said to have entered the blocked state. From here, the process may go either to the ready state or to the suspend wait state.

5. Suspend wait: When a process has been blocked, either while waiting for some resource or because a higher-priority process arrived, and it remains waiting for a long time, it is swapped out of main memory and moves to the suspend wait phase.

6. Suspend ready: When a process in the suspend wait state finishes its input-output request, or when the higher-priority process that displaced it completes its execution and the CPU becomes available again, the process moves to the suspend ready phase.

7. Termination: A process enters this phase once it has finished its execution entirely and has finally stopped. The contents of its process control block (PCB) are erased, because the process now goes back to being a program and is no longer a process.
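
The states described above can be pictured as a simple enumeration together with the transitions between them. The following C sketch is only illustrative: the state names and transition helpers (`admit`, `dispatch`, `block`, and so on) are made up for explanation and do not correspond to any real kernel's data structures.

```c
/* Illustrative sketch of the process states described above, plus a few
 * transition helpers. Names are for explanation only. */
#include <stdio.h>

typedef enum {
    STATE_NEW,            /* admitted, still residing in secondary memory  */
    STATE_READY,          /* in the ready queue, waiting for the CPU       */
    STATE_RUNNING,        /* instructions being executed on a CPU core     */
    STATE_WAIT,           /* blocked on I/O or displaced by higher priority*/
    STATE_SUSPEND_WAIT,   /* blocked and swapped out of main memory        */
    STATE_SUSPEND_READY,  /* I/O finished but still swapped out            */
    STATE_TERMINATED      /* finished; PCB about to be reclaimed           */
} proc_state_t;

typedef struct { int pid; proc_state_t state; } pcb_t;

void admit(pcb_t *p)       { p->state = STATE_READY;         } /* New -> Ready            */
void dispatch(pcb_t *p)    { p->state = STATE_RUNNING;       } /* Ready -> Running        */
void block(pcb_t *p)       { p->state = STATE_WAIT;          } /* Running -> Wait/Block   */
void swap_out(pcb_t *p)    { p->state = STATE_SUSPEND_WAIT;  } /* Wait -> Suspend wait    */
void io_complete(pcb_t *p) { p->state = STATE_SUSPEND_READY; } /* Suspend wait -> ready   */
void resume(pcb_t *p)      { p->state = STATE_READY;         } /* Suspend ready -> Ready  */
void terminate(pcb_t *p)   { p->state = STATE_TERMINATED;    } /* Running -> Terminated   */

int main(void) {
    pcb_t p = { .pid = 42, .state = STATE_NEW };
    admit(&p); dispatch(&p); block(&p); swap_out(&p);
    io_complete(&p); resume(&p); dispatch(&p); terminate(&p);
    printf("pid %d ended in state %d\n", p.pid, p.state);
    return 0;
}
```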

Program Execution in OS

  1. A program is fetched from secondary memory and placed in the job queue; this is called the new state.

  2. From the job queue, the process is brought to the ready queue. This task is done by the long-term scheduler (also called the job scheduler).

  3. From the ready queue, the process is now ready to be executed by the CPU. The processes present in the ready queue are handled by the short-term CPU scheduler.

  4. After scheduling, one process is picked and sent for execution; this is the running state. Every process has a definite burst time associated with it, which is the total CPU time needed for its entire execution.

  • The most common scheduling algorithm is the Round Robin technique. Here, a process executes for a fixed time slice (also called a time quantum), after which the CPU switches to the next process in the ready queue; the current process gets its next chance only after all the other processes in the ready queue have run for their time slice. This switch is known as context switching and is handled by the context switcher of the OS. A toy simulation of this technique is sketched after this list.
  • Another type of scheduling algorithm processes the processes according to their priority: the higher the priority, the sooner the execution. In non-preemptive priority scheduling, the running process is not switched out in the middle of its execution. This is called Priority Scheduling.
  5. In between the execution of a process, if the process needs any input from the user, it moves out of the running state and into the waiting state until the input arrives. Its program counter is saved in its PCB so that it can resume later, and the CPU moves on to the next scheduled process.

  6. If a process is waiting for access to some device, it resides in that device's queue while in the waiting state. If it does not get access for a long time, it moves further from the waiting state to the suspend-wait state, and it returns (via the suspend-ready state) once it gets access.

  7. Now, when the process has finished its input-output task, it is brought back to the ready state, where it again waits for its turn to be processed.

  8. Once a process finishes its entire execution, it moves to the termination state, after which it is no longer called a process.
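
To see how the Round Robin technique mentioned above behaves, here is a toy simulation in C. The number of processes, their burst times, and the time quantum are made-up values; real schedulers maintain an actual queue and far more bookkeeping.

```c
/* Toy round-robin simulation: each process runs for at most one time
 * quantum per pass, then waits until every other process has had its
 * turn, until its burst time is used up. All values are illustrative. */
#include <stdio.h>

#define NPROC   3
#define QUANTUM 2   /* time slice in arbitrary ticks */

int main(void) {
    int burst[NPROC] = { 5, 3, 4 };   /* remaining CPU ticks per process */
    int remaining = NPROC;
    int tick = 0;

    while (remaining > 0) {
        for (int p = 0; p < NPROC; p++) {
            if (burst[p] == 0) continue;              /* already terminated */
            int run = burst[p] < QUANTUM ? burst[p] : QUANTUM;
            printf("t=%2d: P%d runs for %d tick(s)\n", tick, p, run);
            tick += run;
            burst[p] -= run;                          /* context switch point */
            if (burst[p] == 0) {
                printf("t=%2d: P%d terminates\n", tick, p);
                remaining--;
            }
        }
    }
    return 0;
}
```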

Context Switching

It is a mechanism for storing and restoring the state, or context, of the CPU in the process's PCB so that the execution of a process can be resumed from the same point at a later time. This technique is what allows an operating system to provide multitasking.

When the scheduler switches the CPU from executing one process to executing another, the state of the currently running process is stored in its PCB, and the second process resumes from the point where it had stopped earlier.


The following data of a process is stored in its PCB when it is switched out:

  • Program Counter
  • Scheduling information
  • Base and limit register value
  • Contents of the CPU registers
  • Process state
  • I/O State information
  • Accounting information
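
A context switch can be pictured as copying these fields out of the CPU into the outgoing process's PCB and then loading the incoming process's PCB back into the CPU. The structure and `context_switch` function below are a simplified, hypothetical sketch; real PCBs contain many more fields, and the actual register save/restore is done in architecture-specific assembly.

```c
/* Simplified, hypothetical PCB layout holding the fields listed above,
 * and a sketch of what a context switch saves and restores. */
#include <stdio.h>
#include <stdint.h>
#include <string.h>

typedef struct {
    uint64_t program_counter;      /* where to resume execution          */
    uint64_t registers[16];        /* general-purpose register contents  */
    uint64_t base_reg, limit_reg;  /* base and limit register values     */
    int      state;                /* Ready, Running, Waiting, ...       */
    int      priority;             /* scheduling information             */
    int      open_files[16];       /* I/O state information              */
    uint64_t cpu_time_used;        /* accounting information             */
} pcb_t;

/* cpu_regs_t is a stand-in for the real CPU register file. */
typedef struct { uint64_t pc; uint64_t regs[16]; } cpu_regs_t;

void context_switch(cpu_regs_t *cpu, pcb_t *out, pcb_t *in) {
    /* Save the outgoing process's CPU context into its PCB... */
    out->program_counter = cpu->pc;
    memcpy(out->registers, cpu->regs, sizeof cpu->regs);

    /* ...then restore the incoming process's context from its PCB. */
    cpu->pc = in->program_counter;
    memcpy(cpu->regs, in->registers, sizeof in->regs);
}

int main(void) {
    cpu_regs_t cpu = { .pc = 0x1000 };
    pcb_t p1 = {0}, p2 = { .program_counter = 0x2000 };
    context_switch(&cpu, &p1, &p2);   /* p1 is saved, p2 is restored */
    printf("CPU resumes at pc=0x%lx\n", (unsigned long)cpu.pc);
    return 0;
}
```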

Operations on a process

1. Creation: This operation involves bringing a program from secondary memory into the ready queue, thereby making it a process.

2. Execution: This operation involves executing the process that has been selected by the scheduler.

3. Schedule: This operation involves selecting one out of the many processes present in the ready queue. There are three types of schedulers present in the operating system, which are as follows:

  • Long-term scheduler: Schedules processes from new state to ready state
  • Medium-term scheduler: Swaps processes out to the suspended states and resumes (swaps in) suspended processes back to the ready state.
  • Short-term scheduler: Schedules processes from ready to running state.

4. Delete: This operation involves killing a process that has completed its entire execution. Once the process is killed, all its corresponding data in the process control block is deleted as well.
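
On POSIX systems, the creation, execution, and deletion operations can be observed directly through the fork(), exec(), and wait() system calls. The sketch below simply runs `ls -l` in a child process and waits for it to terminate; it illustrates the operations above, not how the scheduler itself is implemented.

```c
/* Minimal POSIX sketch of process operations: fork() creates a child
 * process, execlp() replaces it with a new program, and the parent
 * waits until the child terminates and its PCB can be reclaimed. */
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void) {
    pid_t pid = fork();                 /* creation: child process is born */
    if (pid < 0) {
        perror("fork");
        exit(EXIT_FAILURE);
    }
    if (pid == 0) {
        /* Child: execution of another program (here: `ls -l`). */
        execlp("ls", "ls", "-l", (char *)NULL);
        perror("execlp");               /* only reached if exec fails */
        _exit(EXIT_FAILURE);
    }
    /* Parent: wait until the child terminates (deletion of the child). */
    int status;
    waitpid(pid, &status, 0);
    printf("child %d finished with status %d\n", (int)pid, WEXITSTATUS(status));
    return 0;
}
```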

Degree of Multiprogramming

When we perform process scheduling, more than one process in the system gets a chance to execute concurrently. The number of processes present in main memory at a time is called the degree of multiprogramming. Since the system has a fixed processing capacity, the rate at which processes are created should equal the rate at which processes leave the system in order to keep the degree of multiprogramming stable. For example, if four processes are admitted per second, roughly four processes should also complete per second; otherwise the number of resident processes keeps growing or shrinking.
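
The following toy C snippet illustrates the point with made-up numbers: as long as the admission rate equals the completion rate, the degree of multiprogramming stays constant.

```c
/* Toy illustration: the degree of multiprogramming stays stable only if,
 * over time, processes are admitted at the same rate at which they leave.
 * All rates below are made-up numbers. */
#include <stdio.h>

int main(void) {
    double created_per_sec  = 4.0;   /* processes admitted per second   */
    double finished_per_sec = 4.0;   /* processes terminating per second */
    int degree = 10;                 /* processes currently in main memory */

    for (int t = 1; t <= 5; t++) {
        degree += (int)(created_per_sec - finished_per_sec);
        printf("after %d s: degree of multiprogramming = %d\n", t, degree);
    }
    return 0;
}
```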

Process vs. Thread

A process is a sequence of instructions for a particular task. To carry out that task, the process often has to perform multiple sub-tasks, and to perform these sub-tasks we use the concept of a thread. A thread takes ownership of one sub-task of a process, so a process can reach its termination state only once all of its sub-tasks have been completed by the corresponding threads.

A process can thus be subdivided into sub-tasks called threads, and multiple threads can run in parallel when possible (i.e., when the threads are not dependent on each other). This gives the effect of parallel processing and is termed multi-threading.
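
As a concrete illustration, the following C sketch uses POSIX threads: one process splits the task of summing the numbers 1 to 1000 across four threads, which share the process's memory (the `partial` array) and must all finish before the process can terminate. The slice sizes are arbitrary values chosen for illustration.

```c
/* Minimal POSIX threads sketch: one process splits a task into sub-tasks,
 * each handled by a thread. The threads share the process's address space,
 * unlike separate processes. */
#include <stdio.h>
#include <pthread.h>

#define NTHREADS 4

static long partial[NTHREADS];      /* shared between all threads */

static void *worker(void *arg) {
    long id = (long)arg;
    long sum = 0;
    /* Each thread sums its own slice of 1..1000 (a stand-in sub-task). */
    for (long i = id * 250 + 1; i <= (id + 1) * 250; i++)
        sum += i;
    partial[id] = sum;
    return NULL;
}

int main(void) {
    pthread_t tid[NTHREADS];
    for (long i = 0; i < NTHREADS; i++)
        pthread_create(&tid[i], NULL, worker, (void *)i);

    long total = 0;
    for (long i = 0; i < NTHREADS; i++) {
        pthread_join(tid[i], NULL);  /* the process terminates only after */
        total += partial[i];         /* all sub-tasks (threads) finish    */
    }
    printf("sum 1..1000 = %ld\n", total);   /* expect 500500 */
    return 0;
}
```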

Difference between a Process and a Thread

  • Process: It is basically a program in its execution state. Thread: A thread is a subdivision of a process.
  • Process: A process refers to a bigger task and hence takes longer to terminate. Thread: Being a subdivision of a process, a thread is a smaller unit and takes less time to terminate.
  • Process: Processes running in parallel on multiple CPU cores are isolated from one another. Thread: Threads of the same process running in parallel are not isolated from one another; they share the process's resources.
  • Process: Switching between processes requires intervention by the operating system kernel. Thread: Switching between threads of the same process is much cheaper; for user-level threads, it can be done without involving the kernel at all.
  • Process: Inter-process communication is less efficient. Thread: Inter-thread communication is more efficient.

Summing Up

Process state names are not standardized terms across operating systems; they simply describe the basic life path of an executing program. Processes are further subdivided into threads, which play a very significant role in executing a process faster.

