Essay on Operating Systems and Related Concepts
1. Operating System Concepts and Discussion
A computer’s hardware (the CPU, memory and I/O devices) provides the basic computing resources, and it is the operating system (OS) that controls and coordinates their use. The OS provides the environment within which programs are executed, exploiting the resources of one or more processors to provide common services for computer programs and a set of services to the user. Five OS concepts (process management, memory management, storage management, distributed systems and case studies) are explored briefly in this report.
From the user view, an OS directed towards a single-user experience should be designed mostly for ease of use. In other cases, however, such as an OS for workstations, the design must compromise between individual usability and resource utilisation.
From the system view, the OS is a resource allocator, intimately involved with the hardware. A computer has many resources at its disposal, such as I/O devices and file-storage space, and it is the OS that acts as the manager of these resources.
The fundamental goal of computer systems is to execute user programs and to make solving user problems easier, and it is in pursuit of this goal that systems have evolved over the years to become both more efficient and more reliable.
The first computers used batch operating systems, which were time-consuming and depended on professional operator–machine interaction, unlike today’s direct user–machine interaction. These systems were later replaced with time-shared operating systems, which relied on direct user–machine interaction through a printing terminal. These in turn gave way to personal computers with graphical user interfaces (GUIs), the first commercially successful system being the Apple Macintosh, marking an evolution of the hardware as well. (Bpastudio.csudh.edu, n.d.)
Processes are programs in execution: an operation that takes the single sequential thread of execution provided by the program and performs the designated task, using an associated set of system resources (mediated by the kernel). A process is dependent on a program and is created for that program when the OS runs it. The OS assigns an identifier (see Table 1) to the new process and allocates it space. The process control block (PCB) is then initialised, and the OS sets the appropriate linkages and creates or expands data structures. Multiple processes may run simultaneously for the same program. The main difference between programs and processes is that a program is a piece of code stored on the HDD (hard disk drive), whereas a process is a RAM copy of that program.
The OS creates and manages the PCB, which is constructed for the program and contains the process elements. A process comprises a variety of elements in addition to the program itself, including two essential components: an executable program (program code, which may be shared with other processes executing the same program) and the data associated with that code (needed by the program when the code is executed). During execution, the PCB created for the process uniquely characterises it through the elements portrayed in Table 1 below.
Process states, mentioned in Table 1, label the current stage of a process. These states are internal data through which the OS supervises and controls processes. A process’s information, such as its priority number or its stage (e.g. waiting for the completion of a certain I/O event), determines which state it is in. When a program is run, the process transitions between states such as the ‘new’, ‘ready’ and ‘running’ states.
Figure 1 below portrays the different states a process transitions through when a program is executed. When a process is first created, it is in the ‘new’ (or ‘not running’) state. When the CPU becomes available, the dispatcher (the program that gives the CPU a process to run) selects a task waiting in the queue in the ‘ready’ state. The process then transitions to the ‘running’ state, meaning it is currently being executed, and finally to the ‘terminated’ state when it has been fully executed and/or removed from RAM.
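The life cycle above can be sketched as a small state machine. This is an illustrative simplification, not a real kernel data structure: the state names and the transition table are assumptions chosen to match the states described in this section.

```python
# A minimal sketch of the process-state transitions described above.
# The state names and transition table are simplifications for illustration.

VALID_TRANSITIONS = {
    "new": {"ready"},                          # admitted by the long-term scheduler
    "ready": {"running"},                      # dispatched when the CPU becomes free
    "running": {"ready", "waiting", "terminated"},
    "waiting": {"ready"},                      # awaited I/O event has completed
    "terminated": set(),                       # no transitions out
}

class Process:
    def __init__(self, pid):
        self.pid = pid
        self.state = "new"

    def transition(self, new_state):
        if new_state not in VALID_TRANSITIONS[self.state]:
            raise ValueError(f"illegal transition {self.state} -> {new_state}")
        self.state = new_state

p = Process(1)
for s in ("ready", "running", "terminated"):
    p.transition(s)
print(p.state)  # terminated
```

Note that a ‘new’ process cannot jump straight to ‘running’: it must first be admitted to the ready queue and then dispatched, mirroring Figure 1.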
CPU scheduling is controlled by components known as schedulers, which may be short-term, medium-term or long-term. The short-term scheduler manages access to the CPU for processes that reside in RAM. The long-term scheduler, by contrast, decides which processes are moved from HDD to RAM to be run, since processes are created on the HDD in the new state; the HDD may hold many new-state processes, of which only some can be moved into RAM. The medium-term scheduler (MTS) handles ‘swapping’. Swapping occurs when RAM is full and a higher-priority process becomes present on the HDD: the MTS must decide which process in RAM is swapped out to the HDD, partly or fully, to make room for the higher-priority process. Although disk I/O is the slowest form of I/O, swapping enhances the overall performance of the OS by keeping RAM occupied with processes that are ready to run.
Processes may be executed concurrently, but concurrent access to shared data can result in data inconsistency. A critical section is a section of code within a process that accesses shared variables, and only one process may be executing its critical section at any one time; mechanisms that enforce this are vital for an OS. Concurrency also raises several design issues, for example process communication, sharing and competing for resources, and synchronisation of the activities of multiple processes. Semaphores are one mechanism for keeping shared data consistent; used properly by programmers, they enforce synchronisation. This simple mechanism can admit multiple processes at once by giving each critical section its own semaphore. However, deadlocks may arise when semaphores block processes waiting for a limited resource.
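A minimal sketch of a semaphore protecting a critical section, using threads to stand in for processes (an assumption for the sake of a runnable example):

```python
import threading

counter = 0
sem = threading.Semaphore(1)   # binary semaphore guarding one critical section

def worker():
    global counter
    for _ in range(100_000):
        sem.acquire()          # wait (P): enter the critical section
        counter += 1           # shared-variable update: only one thread at a time
        sem.release()          # signal (V): leave the critical section

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 400000
```

Without the acquire/release pair, interleaved updates could lose increments; with it, every one of the 4 × 100,000 increments is applied, illustrating how a semaphore forces synchronised access to shared data.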
Deadlocks are the permanent blocking of a set of processes that communicate with each other or compete for system resources, caused by conflicting needs for resources (e.g. main/secondary memory, a reusable resource, and messages, a consumable resource) by two or more processes. The blocking is permanent because the event each blocked process awaits can only be triggered by another process in the set, which is itself blocked. Two examples of deadlocks are memory request deadlock (MRD) and consumable resource deadlock (CRD). MRD occurs when the combined memory requests of two processes exceed the amount of available memory. In a CRD, both processes attempt to receive messages from each other; each receive blocks, and deadlock occurs. Table 2 below suggests four conditions under which deadlock can occur.
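One classic way a deadlock forms with reusable resources is a circular wait: each process holds one lock and waits for the other’s. The sketch below (an illustration, not taken from the essay’s Table 2) shows the dangerous pattern in comments and one common remedy, acquiring locks in a single global order so a cycle can never form:

```python
import threading

lock_a, lock_b = threading.Lock(), threading.Lock()

# Deadlock-prone pattern: thread 1 takes A then B while thread 2 takes B then A.
# If each grabs its first lock before the other releases, both block forever.
# Remedy sketched here: always acquire in one global order (by object id),
# which breaks the circular-wait condition.

def transfer(first, second, work):
    ordered = sorted((first, second), key=id)   # global acquisition order
    for lock in ordered:
        lock.acquire()
    try:
        work()
    finally:
        for lock in reversed(ordered):
            lock.release()

results = []
t1 = threading.Thread(target=transfer, args=(lock_a, lock_b, lambda: results.append(1)))
t2 = threading.Thread(target=transfer, args=(lock_b, lock_a, lambda: results.append(2)))
t1.start(); t2.start()
t1.join(); t2.join()
print(sorted(results))  # [1, 2]
```

Both threads request the same two locks in opposite textual order, yet the program always terminates, because the sorted acquisition order removes the possibility of a cycle.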
Memory management is vital in multiprogramming systems and is intended to satisfy the requirements seen in Table 3. The physical organisation of memory includes both main memory and secondary memory. Because the user cannot accurately predict how a process will be split across the two levels of memory, partitioning is introduced. Two types of partitioning are briefly explored here. Fixed partitioning splits physical memory into partitions of predetermined size; a process belongs to one partition, which can only hold data relating to a single process. Dynamic partitioning allocates partitions only when processes are brought into RAM, sized according to the requirements of each process; unlike fixed partitions, dynamic partitions are not of a fixed size.
Virtual memory (VM) is considerably bigger than physical memory, and because less I/O is required per process, swapping becomes quicker. Paging manages how resources are shared in VM: memory is split into fixed-size frames, which are storage spaces for pages. Pages are parts of an executing process that represent a logical unit of memory, and they are only swapped in from disk to main memory when needed. Processes in VM are managed via swapping mechanisms, while memory tables keep track of both virtual and physical memory.
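The page-table lookup at the heart of paging can be sketched in a few lines. The page size and the page-table contents below are made-up values for illustration; a page mapped to `None` stands for a page still on disk, whose access would trigger a page fault and a swap-in:

```python
PAGE_SIZE = 4096  # assumed page/frame size for this sketch (bytes)

# Hypothetical page table: virtual page number -> physical frame number.
# None means the page is currently on disk, not in a RAM frame.
page_table = {0: 5, 1: None, 2: 9}

def translate(virtual_addr):
    page, offset = divmod(virtual_addr, PAGE_SIZE)   # split address into page + offset
    frame = page_table.get(page)
    if frame is None:
        raise LookupError(f"page fault: page {page} must be swapped in from disk")
    return frame * PAGE_SIZE + offset                # same offset inside the frame

print(translate(2 * PAGE_SIZE + 100))  # 9*4096 + 100 = 36964
```

The offset within the page is unchanged by translation; only the page number is replaced by a frame number, which is why frames and pages must be the same fixed size.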
A file is an entry in a directory; examples of its attributes include name, identifier, index (an organised list used to locate file records, where file information is uniquely stored in a record), type, size, etc. A file has a simple record structure of fixed- or variable-length lines, but may itself have a complex structure. The OS regards files as byte sequences, providing the maximum amount of flexibility, and uses methods such as hashing and indexing to maintain location accuracy in the file system. (Tanenbaum and Bos, 2015) For example, when file information needs retrieving, the OS uses hash values from a hash table to determine which record is needed, then locates and retrieves it.
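A toy version of that hash-based lookup, with Python’s dict (itself a hash table) standing in for the directory’s hash table; the record fields are illustrative, not a real file-system layout:

```python
# Toy directory: file name -> attribute record, located by hashing the name.
directory = {}

def add_file(name, size, ftype):
    directory[name] = {"identifier": hash(name), "size": size, "type": ftype}

def lookup(name):
    # Hashing the name locates the record in O(1) on average,
    # instead of scanning every directory entry linearly.
    return directory[name]

add_file("report.txt", 2048, "text")
print(lookup("report.txt")["size"])  # 2048
```

The point of the sketch is the access pattern: the name is hashed once, and the hash value leads directly to the record holding the file’s attributes.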
Information protection and security is a very important and serious concept for the OS. When considering the protection and security of information, the CIA model is commonly used; its security principles are confidentiality, integrity and availability. OS security threats either actively make the computer misbehave (e.g. taking control of a user’s Web browser to make it execute malicious code) or passively try to steal information (e.g. an adversary trying to break encryption to get to the data). (Tanenbaum and Bos, 2015)
I/O devices are resource components of a computer utilised by processes. The OS requires access to I/O devices and is responsible for controlling their use; examples include printers, keyboards, digital cameras, HDDs, etc. The OS decides when an I/O device can be used by a program being executed. These devices exchange information with the OS and the external environment via the system bus, the component that provides communication among processors, memory and I/O modules. Interrupt controllers use the system bus to communicate interrupts: for example, when a key is pressed on the keyboard, an interrupt is generated to deliver the key-press event to the OS. Interrupts are provided to improve processor utilisation, because I/O devices are much slower than processors: a processor that issues a command to an I/O module would otherwise be forced to pause and wait for the device, which is wasteful.
2. Scheduling Algorithms and Process Management
The operating system is responsible for controlling use of the computer’s resources. As the CPU cannot directly access the HDD (hard disk drive), programs must be moved to main memory by the OS via the scheduler. The OS decides how much processor time is devoted to the execution of a particular user program, again via the scheduler. The term “resource manager” describes one of the fundamental roles of an operating system, since the OS manages (schedules) the interactions of resources via algorithms.
The scheduler is responsible for organising access to the CPU, and every process has a PCB (process control block) containing a series of attributes that the scheduler can use to determine priority. The process manager needs to arrange the execution of processes (a process being the active unit of a program, i.e. executing program code) to promote the most efficient use of resources and to avoid disruptions such as competition and deadlock. The most efficient policy should: maximise throughput (run as many jobs as possible in a limited period); minimise response time (move processes quickly into and out of the system); minimise waiting time (move processes quickly out of the ready queue); maximise CPU efficiency (keep the CPU engaged at all times); and ensure fairness (give each process equal access to CPU and I/O time). Two scheduling strategies exist: the pre-emptive scheduling policy and the non-pre-emptive scheduling policy. CPU scheduling algorithms are only applied to processes in RAM that are in the ready state, not the I/O (waiting) state. A pre-emptive policy interrupts the processing of a job that is already running and replaces it with a process of higher priority that has moved into RAM; once the higher-priority process has been executed, the scheduling algorithm re-evaluates all the processes in RAM. A non-pre-emptive policy functions without such interrupts: if a higher-priority process arrives in RAM while an existing process is running, the running process is left until it completes, and only then does the scheduling algorithm re-evaluate all the processes in RAM. The two approaches differ in CPU utilisation, starvation behaviour and flexibility.
Although both approaches utilise the CPU, pre-emptive scheduling keeps the CPU busier thanks to its greater flexibility, at the cost of the overhead of switching process states. Both can suffer from starvation: under pre-emptive scheduling, low-priority processes are more likely to starve, while under non-pre-emptive scheduling, processes with short CPU bursts may starve while long-burst processes are running. Examples of pre-emptive scheduling include Shortest Remaining Time First (SRTF) and Round Robin; examples of non-pre-emptive scheduling include First Come First Served (FCFS) and Shortest Job First (SJF). (McHoes and Flynn, 2006)
SJF and SRTF are similar scheduling algorithms: both favour the process in the ready state that will take the shortest amount of CPU time. In a non-pre-emptive kernel this is known as SJF: all processes waiting in the ready queue are evaluated by the algorithm, and whichever is deemed shortest moves to the front of the line to be executed. SRTF is the pre-emptive counterpart: if a process arriving in the ready queue has a shorter remaining time than the process currently running, the kernel pre-empts the running process and runs the shorter one instead. Both schedulers minimise waiting time, assuming CPU burst time is accurately predicted; however, both can cause process starvation. (Barker, 2014)
The PCB (process control block) is a set of elements that schedulers use to characterise processes and guide resource allocation. Elements such as identifier, priority, context data and I/O status information all play a role in deciding which process to execute. SJF and SRTF use the state element to help with resource allocation. A process can be in one of several states: running (currently being executed), ready (prepared to be executed), waiting (waiting for resources), hold (just created) and exit (executed and released by the OS). To determine a process’s current state, the scheduler looks at the job’s information, such as its register contents (if the job has been interrupted and is waiting) and its process status word (the current instruction counter and register contents when the job is not running).
Table 4 below is an example of an SJF algorithm calculation. The table shows five processes and their arrival, burst, completion, turn-around and waiting times; the latter three are calculated as explained below.
Process ID | Arrival Time | Burst Time | Completion Time | Turn Around Time (TAT) | Waiting Time
1          | 1            | 7          | 8               | 7                      | 0
2          | 2            | 5          | 16              | 14                     | 9
3          | 3            | 1          | 9               | 6                      | 5
4          | 4            | 2          | 11              | 7                      | 5
5          | 5            | 8          | 24              | 19                     | 11
TOTAL      |              |            |                 | 53                     | 30
AVERAGE    |              |            |                 | 10.6                   | 6.0
Schedule Length = 23; Throughput = 0.22
Here we look at the process timings for SJF in detail. Completion Time is the time at which a process was completely executed and removed from RAM, read from the Gantt chart (Figure 3). For example, Process 1’s completion time is eight (it arrived at one and had a burst time of seven) and Process 3’s completion time is nine (it had a burst time of one and began executing at eight).
TAT is the total time in which the process was inside the RAM (the time taken for a process to progress from arrival to completion). This is calculated by subtracting the Arrival Time from the Completion Time. For example, Process 2’s TAT is 14, as 16 (Completion Time) minus two (Arrival Time) equals 14.
Waiting time is the total time in which a process is idle in the RAM. This is calculated by subtracting the process’ burst time from the TAT. For example, Process 5’s Waiting Time is eleven as nineteen (TAT) minus eight (Burst Time) equals eleven.
Schedule Length is the total time the algorithm (SJF) spends scheduling the processes. Its value (23) is a single subtraction: the Arrival Time of the first process taken from the Completion Time of the final process, i.e. 24 (Process 5’s Completion Time) minus one (Process 1’s Arrival Time) equals 23. Throughput is the number of processes executed per unit time, calculated by dividing the number of processes (five) by the schedule length (23), giving the value 0.22.
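The whole of Table 4 can be reproduced by simulating non-pre-emptive SJF directly; the sketch below uses the same arrival and burst times and derives completion, turn-around and waiting times exactly as described above:

```python
def sjf(processes):
    """Non-pre-emptive SJF. processes: list of (pid, arrival, burst)."""
    pending = sorted(processes, key=lambda p: p[1])   # order by arrival time
    time, results = 0, {}
    while pending:
        ready = [p for p in pending if p[1] <= time]  # arrived, waiting in ready queue
        if not ready:                                 # CPU idle until next arrival
            time = pending[0][1]
            continue
        job = min(ready, key=lambda p: p[2])          # shortest burst goes first
        pending.remove(job)
        pid, arrival, burst = job
        time += burst                                 # runs to completion (no pre-emption)
        completion = time
        tat = completion - arrival                    # TAT = completion - arrival
        results[pid] = (completion, tat, tat - burst) # waiting = TAT - burst
    return results

table = sjf([(1, 1, 7), (2, 2, 5), (3, 3, 1), (4, 4, 2), (5, 5, 8)])
print(table[1])  # (8, 7, 0)
print(table[5])  # (24, 19, 11)
```

Running it reproduces every row of Table 4: Process 1 completes at 8, then the shortest waiting jobs run in the order P3, P4, P2, P5, which is exactly the Gantt-chart order used in the worked examples.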
3. Scenario
Ten years ago, operating systems were not as at risk of cyber-attacks as they are today: attacks have become more advanced, mounted by well-organised criminal networks with vast computing resources. Computer systems administrators deal with the upkeep of a network’s technical issues, performance and security. (Sokanu.com, n.d.) Windows as an OS has consistently worked at hardening its systems, stopping external programs from accessing the OS’s memory space or that of other apps. However, Windows is targeted more frequently than other systems due to the large number of Windows-based systems on the market.
Linux, on the other hand, is open source (individual coders can review the code for issues and improvements) and has fewer exploitable security flaws known to the information-security world, making it the more secure option for the company. For a company, data security is a high priority, as a breach will affect the business (i.e. accounts, financial business information, etc.). If Linux were used on the office computers, the cost of training staff to be confident with it would be considerable; however, the systems administrator can be expected to have prior hands-on experience with Linux, requiring no training cost for that role.
Despite that, Windows seems the greater benefit as a system for the office staff’s PCs. Windows is readily available, is the most popular system used across companies, and supports the programs frequently used in offices. This makes it cost-effective, as staff are most likely already well experienced with Windows and will not require training. Using Windows on the office machines also means compatibility is not an issue when networking with outside customers and clients.
References
Barker, S. (2014). Lecture 5: CPU Scheduling.
Bpastudio.csudh.edu. (n.d.). Operating system evolution. [online] Available at: http://bpastudio.csudh.edu/fac/lpress/471/hout/misc/osgenerations.htm [Accessed 14 Nov. 2018].
McHoes, A. and Flynn, I. (2006). Understanding Operating Systems. 4th ed.
Silberschatz, A., Gagne, G. and Galvin, P. (2004). Operating System Concepts. 7th ed. John Wiley & Sons.
Sokanu.com. (n.d.). What Does a Computer Systems Administrator do?. [online] Available at: https://www.sokanu.com/careers/computer-systems-administrator/ [Accessed 14 Nov. 2018].
Tanenbaum, A. and Bos, H. (2015). Modern Operating Systems. 4th ed. Pearson.