Several processes may be associated with the same program; for example, opening several instances of the same program often means more than one process is being executed.

1.2 What is a thread?

In computer science, a thread of execution is the smallest sequence of programmed instructions that can be managed independently by an operating system scheduler. A thread is often described as a light-weight process. The implementation of threads and processes differs from one operating system to another, but in most cases a thread is contained inside a process.
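As a concrete illustration of threads living inside one process, the following minimal Python sketch (the function and variable names are our own, not from any particular OS API) creates several threads that all see the same memory:

```python
import threading

def run_workers(n):
    """Spawn n threads inside the current process; all share the 'hits' list."""
    hits = []                   # one object, visible to every thread in the process
    lock = threading.Lock()     # serialise access to the shared structure

    def worker(i):
        with lock:
            hits.append(i)      # each thread writes into the same memory

    threads = [threading.Thread(target=worker, args=(i,)) for i in range(n)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()                # wait for every thread to finish
    return sorted(hits)

print(run_workers(4))           # -> [0, 1, 2, 3]
```

Because all four threads appended to the same list, the process's memory is visibly shared; separate processes would each have received their own copy.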
Multiple threads can exist within the same process and share resources such as memory, while different processes do not share these resources (Lewis, Bill, and Daniel J. Berg, 1997). In particular, the threads of a process share the latter's instructions (its code) and its context (the values that its variables reference at any given moment).

1.3 What is the relationship between processes, threads and operating systems?

An operating system (OS) is a collection of software that manages computer hardware resources and provides common services for computer programs.
The operating system is an essential component of the system software in a computer system. Application programs usually require an operating system to function (Stallings, William, 2005). In computing, a process is an instance of a computer program that is being executed. It contains the program code and its current activity (Birrell, Andrew D, 1989). Depending on the operating system (OS), a process may be made up of multiple threads of execution that execute instructions concurrently. A computer program is a passive collection of instructions; a process is the actual execution of those instructions.

2.0 ANATOMY OF A PROCESS

2.1 Process states

In a multitasking computer system, processes may occupy a variety of states. These distinct states may not actually be recognized as such by the operating system kernel; however, they are a useful abstraction for the understanding of processes (Nelson, 1991). The following typical process states are possible on computer systems of all kinds. In most of these states, processes are "stored" in main memory.

i. Created (also called New)
When a process is first created, it occupies the "created" or "new" state.
In this state, the process awaits admission to the "ready" state. This admission will be approved or delayed by a long-term, or admission, scheduler. Typically in most desktop computer systems this admission is approved automatically, but for real-time operating systems it may be delayed. In a real-time system, admitting too many processes to the "ready" state may lead to oversaturation and overcontention for the system's resources, leading to an inability to meet process deadlines (Tanenbaum, Andrew S, 1995).

ii. Ready and waiting
A "ready" or "waiting" process has been loaded into main memory and is awaiting execution on a CPU (to be context switched onto the CPU by the dispatcher, or short-term scheduler). There may be many "ready" processes at any one point of the system's execution; for example, in a one-processor system, only one process can be executing at any one time, and all other "concurrently executing" processes will be waiting for execution (Boykin, Joseph, David Kirschen, Alan Langerman, and Susan LoVerso, 1994). A ready queue or run queue is used in computer scheduling.
Modern computers are capable of running many different programs or processes at the same time. However, the CPU is only capable of handling one process at a time. Processes that are ready for the CPU are kept in a queue of "ready" processes. Other processes that are waiting for an event to occur, such as loading information from a hard drive or waiting on an internet connection, are not in the ready queue (Tanenbaum, Andrew S, 1995).

iii. Running
A process moves into the running state when it is chosen for execution. The process's instructions are executed by one of the CPUs (or cores) of the system.
There is at most one running process per CPU or core (Yves Bekkers and Jacques Cohen, 1992).

iv. Blocked (Waiting)
A blocked process is waiting on some event (such as I/O operation completion or a signal). A process may be blocked for various reasons, such as when it has exhausted the CPU time allocated to it or is waiting for an event to occur (Yves Bekkers and Jacques Cohen, 1992).

v. Terminated
A process may be terminated, either from the "running" state by completing its execution or by explicitly being killed. In either of these cases, the process moves to the "terminated" state.
If a process is not removed from memory after entering this state, it may become a zombie process (Yves Bekkers and Jacques Cohen, 1992).

2.2 Additional process states

Two additional states are available for processes in systems that support virtual memory. In both of these states, processes are "stored" in secondary memory (typically a hard disk).

i. Swapped out and waiting (also called suspended and waiting)
In systems that support virtual memory, a process may be swapped out, that is, removed from main memory and placed in virtual memory by the mid-term scheduler. From here the process may be swapped back into the waiting state.
Lee Sergeant, T. and B. Furthermore (1992).

ii. Swapped out and blocked (also called suspended and blocked)
Processes that are blocked may also be swapped out. In this event the process is both swapped out and blocked, and may be swapped back in again under the same circumstances as a swapped out and waiting process (although in this case the process will move to the blocked state, and may still be waiting for a resource to become available) (Nelson, Greg (editor), 1991).

2.3 Process Priorities

There are some applications that hog too much memory even when you don't need them.
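The process lifecycle described in sections 2.1 and 2.2 can be sketched as a small transition table. This is a deliberate simplification (real kernels differ in the exact states and transitions they implement), and the state names are the ones used in this document:

```python
# Simplified transition table for the process states of sections 2.1 and 2.2.
TRANSITIONS = {
    "new":             {"ready"},
    "ready":           {"running", "swapped_waiting"},
    "running":         {"ready", "blocked", "terminated"},
    "blocked":         {"ready", "swapped_blocked"},
    "swapped_waiting": {"ready"},
    "swapped_blocked": {"blocked", "swapped_waiting"},
    "terminated":      set(),    # no way out except removal from memory
}

def can_move(src, dst):
    """Return True if a process may move directly from state src to state dst."""
    return dst in TRANSITIONS.get(src, set())

print(can_move("new", "ready"))         # -> True
print(can_move("terminated", "ready"))  # -> False
```

The empty set for "terminated" captures the point above: once terminated, a process only leaves the table when it is removed from memory.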
This greatly lowers the responsiveness of your computer and makes other, more important applications run slower than they are supposed to. Sometimes you are not even aware that a process related to some third-party application keeps running in the background, eating up memory. You realize something is wrong only when the computer either gets too slow or goes entirely into non-responding mode (Lee Sergeant, T. and B. Furthermore, 1992). Process Prioritizer is an application for Windows that allows you to adjust the priority of running applications.
This allows you to stop memory-intensive applications interfering with others. The application lets you add applications to different priority settings, such as Real-Time, High, Normal and Idle. You can also view a list of currently active processes and choose a custom Operating Mode. Keep reading to find out more about Process Prioritizer.

2.4 Context switching

In computing, a context switch is the process of storing and restoring the state (context) of a process so that execution can be resumed from the same point at a later time.
This enables multiple processes to share a single CPU and is an essential feature of a multitasking operating system (Tanenbaum, Andrew S, 1995). What constitutes the context is determined by the processor and the operating system. Context switches are usually computationally intensive, and much of the design of operating systems is aimed at optimizing the use of context switches. Switching from one process to another requires a certain amount of time for administration: saving and loading registers and memory maps, updating various tables and lists, etc.

2.5 Process Relationships

In a concurrent environment, processes have two basic relationships: competition and cooperation. Processes compete with each other for allocation of system resources to execute their instructions. In addition, a collection of related processes that collectively represent a single logical application cooperate with each other. The operating system must support both relations. For competition, there must be proper resource allocation and protection in address generation.
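Competition for a shared resource can be illustrated with a semaphore. In this hedged sketch, threads stand in for competing processes, and the semaphore models a single exclusively-allocated resource (say, a printer); the names are illustrative only:

```python
import threading

printer = threading.Semaphore(1)   # the contested resource: one holder at a time
log = []
log_lock = threading.Lock()

def job(name):
    with printer:                  # compete for, then exclusively hold, the resource
        with log_lock:
            log.append(name)       # record that this "process" got its turn

threads = [threading.Thread(target=job, args=(f"p{i}",)) for i in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(sorted(log))                 # -> ['p0', 'p1', 'p2']
```

All three competitors eventually run, but never two at once; which one wins each round is up to the scheduler, which is exactly the allocation problem the operating system must manage.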
(Arnold, Ken and James Gosling, 1998). We distinguish between independent processes and cooperating processes. A process is independent if it cannot affect or be affected by other processes executing in the system. Features of independent processes include the following:

i. Their state is not shared in any way by any other process.
ii. Their execution is deterministic, i.e., the results of execution depend only on the input values.
iii. Their execution is reproducible, i.e., the results of execution will always be the same for the same input.
iv. Their execution can be stopped and restarted without any negative effect.

Cooperating process: In contrast to independent processes, cooperating processes can affect or be affected by other processes executing in the system (Robbins, Kay, 1996). They are characterized by:

i. Their states are shared by other processes.
ii. Their execution is not deterministic, i.e., the results of execution depend on the relative execution sequence and cannot be predicted in advance.
iii. Their execution is irreproducible, i.e., the results of execution are not always the same for the same input.

2.6 Process resources

In general, a computer system process consists of (or is said to 'own') the following resources (Mary Norton, 1997):

i. An image of the executable machine code associated with a program.
ii. Memory (typically some region of virtual memory), which includes the executable code, process-specific data (input and output), a call stack (to keep track of active subroutines and/or other events), and a heap to hold intermediate computation data generated during run time.
iii. Operating system descriptors of resources that are allocated to the process, such as file descriptors (Unix terminology) or handles (Windows), and data sources and sinks.
iv. Security attributes, such as the process owner and the process's set of permissions (allowable operations).
v. Processor state (context), such as the content of registers, physical memory addressing, etc. The state is typically stored in computer registers when the process is executing, and in memory otherwise.

The operating system holds most of this information about active processes in data structures called process control blocks.
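The per-process bookkeeping just listed is typically gathered in a process control block. A minimal sketch follows; the field names are an illustrative subset, not any particular kernel's layout:

```python
from dataclasses import dataclass, field

@dataclass
class ProcessControlBlock:
    """Illustrative subset of the per-process resources listed above."""
    pid: int
    state: str = "new"                    # created/ready/running/blocked/terminated
    program_counter: int = 0              # part of the processor state (context)
    registers: dict = field(default_factory=dict)
    open_files: list = field(default_factory=list)   # descriptors / handles
    owner: str = "nobody"                 # security attributes

pcb = ProcessControlBlock(pid=1)
print(pcb.state)    # -> new
```

A real kernel stores these blocks in a table so the scheduler and dispatcher can find and update any process's context quickly.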
Any subset of these resources, but typically at least the processor state, may be associated with each of the process's threads in operating systems that support threads or 'daughter' processes (Mary Norton, 1997). The operating system keeps its processes separated and allocates the resources they need, so that they are less likely to interfere with each other and cause system failures (e.g., deadlock or thrashing). The operating system may also provide mechanisms for inter-process communication to enable processes to interact in safe and predictable ways.

3.0 MULTITASKING

3.1 What is Multitasking?

Multitasking is the ability of a computer to run more than one program, or task, at the same time. Multitasking contrasts with single-tasking, where one process must entirely finish before another can begin (Keith E, 2011). A multitasking operating system is one in which multiple processes, also called tasks, can execute (i.e., run) on a single computer seemingly simultaneously and without interfering with each other (Keith E, 2011).
That is, each process has the illusion that it is the only process on the computer and that it has exclusive access to all the services of the operating system. The concurrently running processes can represent different programs, different parts of a single program, or different instances of a single program. The total number of processes (or programs) that can run on the system at any time depends on several factors, including the size of the memory, the speed of the CPU (central processing unit) and the size of the programs (Keith E, 2011).
All processes are fully protected from each other, just as the kernel (i.e., the core of an operating system) on a well-designed system is protected from all processes, so that a crash (i.e., a halt to functioning) in one process or program will not cause another program or the entire system to crash. However, processes communicate with each other when necessary (Adair D, 2010). Multitasking is the ability to execute more than one task at the same time, a task being a program. The terms multitasking and multiprocessing are often used interchangeably, although multiprocessing implies that more than one CPU is involved.
In multitasking, only one CPU is involved, but it switches from one program to another so quickly that it gives the appearance of executing all of the programs at the same time (Dislikes, 2009).

3.2 Levels of Multitasking

3.2.1 Session-level multitasking

A session is a semi-permanent interactive information interchange, also known as a dialogue, a conversation or a meeting, between two or more communicating devices, or between a computer and a user. A session is set up or established at a certain point in time (a process called session establishment) and torn down at a later point in time.
An established communication session may involve more than one message in each direction. A session is typically, but not always, stateful, meaning that at least one of the communicating parties needs to save information about the session history in order to be able to communicate, as opposed to stateless communication, where the communication consists of independent requests with responses (Dislikes, 2009). Session-level multitasking is multitasking between different user sessions: a user of a computer system may be performing more than one task or session at a time.

3.2.2 Process-level multitasking

Process-level multitasking occurs when different users are using a process at the same time, i.e., each person running a thread of execution within the process (Keith E, 2011).

3.2.3 Multithreading

Multithreading is the ability of a program or an operating system process to manage its use by more than one user at a time, and even to manage multiple requests by the same user, without having to have multiple copies of the program running in the computer.
Each user request for a program or system service (and here a user can also be another program) is kept track of as a thread with a separate identity. As programs work on behalf of the initial request for that thread and are interrupted by other requests, the status of work on behalf of that thread is kept track of until the work is completed.

3.3 Cooperative and preemptive multitasking

Cooperative
Cooperative multitasking is a form of multitasking where it is the responsibility of the currently running task to give up the processor to allow other tasks to run.
This contrasts with preemptive multitasking, where the task scheduler periodically suspends the running task and restarts another. Cooperative multitasking requires the programmer to place calls at suitable points in his code to allow his task to be descheduled, which is not always easy if there is no obvious top-level main loop or if some routines run for a long time. If a task does not allow itself to be descheduled, all other tasks on the system will appear to "freeze" and will not respond to user action (Adair D, 2010).
The advantage of cooperative multitasking is that the programmer knows where the program will be descheduled and can make sure that this will not cause unwanted interaction with other processes. Under preemptive multitasking, the scheduler must ensure that sufficient state is saved and restored for each process so that they will not interfere. Thus cooperative multitasking can have lower overheads than preemptive multitasking because of the greater control it offers over when a task may be descheduled (Adair, 2010).
Cooperative multitasking is used in RISC OS, Microsoft Windows and Macintosh System 7 (1995-03-20).

Preemptive
In computing, preemption is the act of temporarily interrupting a task being carried out by a computer system, without requiring its cooperation, and with the intention of resuming the task at a later time. Such a change is known as a context switch. It is normally carried out by a privileged task or part of the system known as a preemptive scheduler, which has the power to preempt, or interrupt, and later resume, other tasks in the system (Keith E, 2011).
The term preemptive multitasking is used to distinguish a multitasking operating system, which permits preemption of tasks, from a cooperative multitasking system, wherein processes or tasks must be explicitly programmed to yield when they do not need system resources. In simple terms: preemptive multitasking involves the use of an interrupt mechanism which suspends the currently executing process and invokes a scheduler to determine which process should execute next. Therefore, all processes will get some amount of CPU time.
In preemptive multitasking, the operating system kernel can also initiate a context switch to satisfy the scheduling policy's priority constraint, thus preempting the active task. In general, preemption means "prior seizure of". When a high-priority task seizes the currently running task, this is known as preemptive scheduling (Dislikes, 2009). According to Dislikes (2009), preemptive multitasking allows the computer system to more reliably guarantee each process a regular "slice" of operating time.
It also allows the system to rapidly deal with important external events, like incoming data, which might require the immediate attention of one or another process. Preemptive multitasking is multitasking in which a computer operating system uses some criterion to decide how long to allocate to any one task before giving another task a turn to use the operating system. The act of taking control of the operating system from one task and giving it to another task is called preempting (Keith E, 2011). A common criterion for preempting is simply elapsed time (this kind of system is sometimes called time sharing or time slicing).
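Time slicing can be sketched with a toy round-robin simulation. The task names, burst times and quantum below are illustrative, and the model ignores context-switch cost:

```python
from collections import deque

def round_robin(burst_times, quantum):
    """Simulate time-sliced scheduling.

    burst_times maps task name -> total CPU time needed; returns the sequence
    of (task, slice_length) pairs in the order the scheduler runs them.
    """
    queue = deque(burst_times.items())          # the ready queue
    timeline = []
    while queue:
        task, remaining = queue.popleft()
        ran = min(quantum, remaining)           # run for at most one quantum
        timeline.append((task, ran))
        if remaining - ran > 0:
            queue.append((task, remaining - ran))  # preempted: back of the queue
    return timeline

print(round_robin({"A": 3, "B": 2}, quantum=2))
# -> [('A', 2), ('B', 2), ('A', 1)]
```

Task A is preempted after one elapsed quantum even though it is not finished, which is exactly the "elapsed time" criterion described above.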
In some operating systems, some applications can be given higher priority than other applications, giving the higher-priority programs control as soon as they are initiated and perhaps longer time slices. Example: a receive interrupt handler for a serial port writes data to a mailbox. If a task is waiting at the mailbox, it is immediately activated by the scheduler under preemptive scheduling. Under cooperative scheduling, however, the task is only brought into the state "Ready". A task switch does not immediately take place; after the interrupt handler has completed, the task that was interrupted continues to run.
Such a "pending" task switch is performed by the kernel at some later time, as soon as the active task calls the kernel.

4.0 THREADS

4.1 Multithreading Definition

Multithreading is a type of execution model that allows multiple threads to exist within the context of a process such that they execute independently but share their process resources. A thread maintains a list of information relevant to its execution, including the priority schedule, exception handlers, a set of CPU registers, and stack state in the address space of its hosting process (Arrant J, Adjusts l, 2012).

4.2 Similarities and differences between threads and processes

Similarities:
i. Both threads and processes are used to accomplish different tasks in the operating system.
ii. Both threads and processes have their own resources: stack, registers, memory, program counters and open files.

Differences:
1. A process is heavyweight, or resource-intensive; a thread is lightweight, taking fewer resources than a process.
2. Process switching needs interaction with the operating system; thread switching does not need to interact with the operating system.
3. In multiple-processing environments, each process executes the same code but has its own memory and file resources; all threads can share the same set of open files and child processes.
4. If one process is blocked, then no other process can execute until the first process is unblocked; while one thread is blocked and waiting, a second thread in the same task can run.
5. Multiple processes without threads use more resources; multithreaded processes use fewer resources.
6. In multiple processes, each process operates independently of the others;
one thread can read, write or change another thread's data.

4.3 Advantages and Disadvantages of threads

Advantages
The most obvious advantage of this technique is that a user-level threads package can be implemented on an operating system that does not support threads (Clay Breshears, 2009). Some other advantages are:
i. User-level threads do not require modification to the operating system.
ii. Simple representation: each thread is represented simply by a PC, registers, stack and a small control block, all stored in the user process address space.
iii. Simple management: creating a thread, switching between threads and synchronization between threads can all be done without intervention of the kernel.
iv. Fast and efficient: thread switching is not much more expensive than a procedure call.

Disadvantages
i. There is a lack of coordination between threads and the operating system kernel. Therefore, the process as a whole gets one time slice, irrespective of whether the process has one thread or 1000 threads within it. It is up to each thread to relinquish control to other threads (Clay Breshears, 2009).
ii. User-level threads require non-blocking system calls, i.e., a multithreaded kernel. Otherwise, the entire process will block in the kernel, even if there are runnable threads left in the process. For example, if one thread causes a page fault, the whole process blocks.

4.4 Importance of multithreading in the era of multicore processor chips
i. Responsiveness – One thread may provide rapid response while other threads are blocked or slowed down doing intensive calculations.
ii. Resource sharing – By default threads share common code, data, and other resources, which allows multiple tasks to be performed simultaneously in a single address space.
(Brian Bailey, Grant Martin, 2009).
iii. Economy – Creating and managing threads (and context switches between them) is much faster than performing the same tasks for processes.
iv. Scalability, i.e., utilization of multiprocessor architectures – A single-threaded process can only run on one CPU, no matter how many are available, whereas the execution of a multithreaded application may be split among the available processors (Brian Bailey, Grant Martin, 2009). (Note that single-threaded processes can still benefit from multiprocessor architectures when there are multiple processes contending for the CPU, i.e., when the load average is above some threshold.)

4.5 Information about threads

In most modern computer systems, each thread has a reserved region of memory referred to as its stack. Stacks in computer architectures are regions of memory where data is added or removed in a last-in-first-out manner (Laborer, 1995). When a function executes, it may add some of its state data to the top of the stack; when the function exits, it is responsible for removing that data from the stack.
At a minimum, a thread's stack is used to store the location of function calls in order to allow return statements to return to the correct location, but programmers may further choose to explicitly use the stack. If a region of memory lies on the thread's stack, that memory is said to have been allocated on the stack.

4.6 Thread creation

Creating and terminating threads. Routines:
pthread_create (thread, attr, start_routine, arg)
pthread_exit (status)
pthread_cancel (thread)
pthread_attr_init (attr)
pthread_attr_destroy (attr)

Creating threads:
- Initially, your main() program comprises a single, default thread.
- All other threads must be explicitly created by the programmer (Nichols, Bradford, Dick Buttlar, and Jacqueline Proulx Farrell, 1996). pthread_create creates a new thread and makes it executable. This routine can be called any number of times from anywhere within your code.
- pthread_create arguments:
  - thread: an opaque, unique identifier for the new thread, returned by the subroutine.
  - attr: an opaque attribute object that may be used to set thread attributes. You can specify a thread attributes object, or NULL for the default values.
  - start_routine: the C routine that the thread will execute once it is created.
  - arg: a single argument that may be passed to start_routine. It must be passed by reference as a pointer cast of type void. NULL may be used if no argument is to be passed.
- The maximum number of threads that may be created by a process is implementation dependent. Programs that attempt to exceed the limit can fail or produce wrong results.
- Querying and setting your implementation's thread limit is platform specific; on Linux, for example, one can query the default (soft) limits and then set the maximum number of processes (including threads) to the hard limit.
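The create/join pattern behind pthread_create has direct analogues in higher-level languages. The following Python sketch (our own names, not the C API) mirrors the pthread_create/pthread_join sequence described above:

```python
import threading

def create_and_join(arg):
    """Mimic pthread_create/pthread_join: run start_routine in a new thread."""
    results = []

    def start_routine(x):       # the routine the new thread executes
        results.append(x * 2)

    # analogue of pthread_create(&t, NULL, start_routine, &arg)
    t = threading.Thread(target=start_routine, args=(arg,))
    t.start()
    t.join()                    # analogue of pthread_join: wait for termination
    return results

print(create_and_join(21))      # -> [42]
```

As with pthread_create, the creating thread continues independently; join() is only needed because we want the result before returning.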
- Then verify that the limit has been overridden (Scott J. Norton, Mark D. DiPasquale, 1997).

4.7 Thread stack

Each thread has a reserved region of memory referred to as its stack. Features of the thread stack: because stack data is added and removed in a last-in-first-out manner, stack-based memory allocation is very simple and typically faster than heap-based memory allocation (also known as dynamic memory allocation) (Gregory R. Andrews, 2000). Another feature is that memory on the stack is automatically, and very efficiently, reclaimed when the function exits, which can be convenient for the programmer if the data is no longer required.
If, however, the data needs to be kept in some form, then it must be copied from the stack before the function exits. Therefore, stack-based allocation is suitable for temporary data, or data which is no longer required after the creating function exits (Richard H. Carver, Kuo-Chung Tai, 2005). A thread's assigned stack size can be as small as a few dozen kilobytes. Allocating more memory on the stack than is available can result in a crash due to stack overflow. Some processor families, such as the x86, have special instructions for manipulating the stack of the currently executing thread.
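The finiteness of a stack can be demonstrated by recursing until the limit is hit. Note this is a hedged analogy: Python raises RecursionError at a configurable frame limit before the real machine stack overflows, so the exact depth below is interpreter- and platform-dependent:

```python
import sys

def max_depth(n=0):
    """Recurse until the interpreter's stack-depth limit is reached."""
    try:
        return max_depth(n + 1)
    except RecursionError:      # the (finite) stack budget has been exhausted
        return n

sys.setrecursionlimit(2000)     # keep the demonstration small and fast
print(max_depth() > 0)          # -> True
```

In C, the same experiment with a deeply recursive function would typically crash with a genuine stack overflow rather than a catchable exception, which is why stack sizes matter when creating threads.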
(Richard H. Carver, Kuo-Chung Tai, 2005). Other processor families, including PowerPC and MIPS, do not have explicit stack support, but instead rely on convention and delegate stack management to the operating system's Application Binary Interface (ABI).

4.8 Thread control

A Thread Control Block (TCB) is a data structure in the operating system kernel which contains thread-specific information needed to manage the thread. The TCB is the manifestation of a thread in an operating system (Edward L, 2008). Examples of information contained within a TCB are:
i.
Stack pointer: points to the thread's stack in the process.
ii. Program counter.
iii. State of the thread (running, ready, waiting, start, done).
iv. Thread's register values.
v. Pointer to the Process Control Block (PCB) of the process the thread lives on.

The Thread Control Block acts as a library of information about the threads in a system. Specific information is stored in the thread control block highlighting important information about each thread (Bill Lewis, Daniel J. Berg, 2000).

4.9 Thread priorities

Threads have three priorities associated with them by the system:
i. a priority
ii.
a maximum priority
iii. a scheduled priority

The scheduled priority is used to make scheduling decisions about the thread. It is determined from the priority by the policy (for timesharing, this means adding an increment derived from CPU usage). The priority can be set under user control, but may never exceed the maximum priority. Changing the maximum priority requires presentation of the control port for the thread's processor set; since the control port for the default processor set is privileged, users cannot raise their maximum priority to unfairly compete with other users on that set.
Newly created threads obtain their priority from their task and their maximum priority from the thread (Marin Aegean, William D. Roomer, 1989). Conceptually it is true that threads run concurrently, but in practice they often do not. Most computer configurations have a single CPU, so threads actually run one at a time in such a way as to simulate concurrency. The execution of multiple threads on a single CPU, in some order, is called scheduling. The Java runtime supports a very simple, deterministic scheduling algorithm known as fixed-priority scheduling.
This algorithm schedules threads based on their priority relative to other runnable threads (Mary Norton, 1997). When a Java thread is created, it inherits its priority from the thread that created it. You can also modify a thread's priority at any time after its creation using the setPriority() method. Thread priorities range between MIN_PRIORITY and MAX_PRIORITY (constants defined in class Thread). At any given time, when multiple threads are ready to be executed, the runtime system chooses the runnable thread with the highest priority for execution.
Only when that thread stops, yields, or becomes not runnable for some reason will a lower-priority thread start executing. If there are two threads of the same priority waiting for the CPU, the scheduler chooses between them in a round-robin fashion (Arrant Shall, Adjusts Garish, 2012). The Java runtime system's thread scheduling algorithm is also preemptive. If at any time a thread with a higher priority than all other runnable threads becomes runnable, the runtime system chooses the new higher-priority thread for execution. The new higher-priority thread is said to preempt the other threads.
(Clay Breshears, 2009). The Java runtime system's thread scheduling scheme can be summed up with this simple rule: at any given time, the highest-priority runnable thread is running. Sometimes we have "selfish" threads which take over the CPU and cause other threads to wait a long time before getting a chance to run. Some systems fight selfish thread behavior with a strategy known as time-slicing. Time-slicing comes into play when there are multiple runnable threads of equal priority and those threads are the highest-priority threads competing for the CPU.
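The fixed-priority rule ("the highest-priority runnable thread is running") can be sketched as a pure selection function. This is an illustrative model, not Java's scheduler; round-robin rotation among equal priorities is deliberately not modelled here:

```python
def pick_next(threads):
    """Pick the next thread to run under fixed-priority scheduling.

    threads is a list of (name, priority, state) tuples; only "runnable"
    threads are eligible, and the highest priority among them wins.
    """
    runnable = [t for t in threads if t[2] == "runnable"]
    if not runnable:
        return None                         # nothing eligible to run
    return max(runnable, key=lambda t: t[1])[0]

threads = [("gc", 1, "runnable"), ("ui", 8, "runnable"), ("io", 5, "blocked")]
print(pick_next(threads))   # -> ui
```

Note that "io" loses despite a middling priority because it is blocked: state filtering happens before priority comparison, matching the rule stated above.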
(Arrant Shall, Adjusts Garish, 2012).

4.10 Thread states

When you are programming with threads, understanding the life cycle of a thread is very valuable. While a thread is alive, it is in one of several states. Invoking the start() method does not mean that the thread gets access to the CPU and starts executing straight away; several factors determine how it will proceed.

i. New state
After the creation of the Thread instance, but before the start() method has been invoked, the thread is in this state. At this point, the thread is considered not alive.