
Essay: Development of computing technology

Published: 25 April 2020; last modified: 22 July 2024.

Ever since the inception of the computer, every iteration has needed a processor to handle the logic the system was designed for. These processors could handle only a single task at a time until the introduction of multi-core processors, which overcome this limitation by combining two or more processing units into a single processor device, allowing the processor to work on multiple tasks at once.
[Figure: basic diagram of a multi-core processor system]
This report will first look at how the multi-core processor improved upon previous iterations and came to prominence, then at how the operating system interacts with different multi-core processor structures, followed by a more detailed look at the interaction between Windows 7 and multi-core processing. Finally, it considers the future of multi-core processing and possible replacements for the technology.

Rise of the Multi-Core Processor

The emergence of multicore processing began in the early 2000s. Until then, the performance of single-core CPUs could be improved by increasing transistor density or raising clock speeds. However, there were limits to the number of transistors that could be built on a single core.
More transistors mean higher power consumption, which in turn means more heat. Mitigating this required more cooling and more power to drive it, making the performance gains economically unfeasible. This led manufacturers to look for alternative ways to improve performance, culminating in the adoption of the multicore processor architecture.[18]
Instead of making a single processor ever faster to handle more demanding tasks, multicore processors have several CPUs built in that can share their cache. A task can be split up and executed in parallel, with each core processing part of the work, so that CPU time is not wasted.
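As an illustration of this split-and-share idea, the following sketch divides a summation across several worker processes, which a multicore operating system can schedule onto separate cores. The chunk count of four is an arbitrary choice for the example, not a recommendation.

```python
from multiprocessing import Pool

def partial_sum(chunk):
    # Each worker process sums only its own slice of the data.
    return sum(chunk)

if __name__ == "__main__":
    data = list(range(1_000_000))
    # Split the work into four interleaved chunks, one per worker;
    # on a multicore CPU the OS can run the workers on separate cores.
    chunks = [data[i::4] for i in range(4)]
    with Pool(processes=4) as pool:
        partials = pool.map(partial_sum, chunks)
    total = sum(partials)
    print(total)  # identical to sum(data), but computed in parallel
```

The result is the same as the sequential sum; only the division of labour changes, which is exactly the parallelism the essay describes.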
The world's first multi-core processor was created by IBM in 2001 [17], aiming to compete with its rivals at the time, Sun Microsystems and HP, by doubling performance at half the cost. Called the IBM POWER4, it combined two 64-bit microprocessor cores in a single device and extended the PowerPC architecture. It had a clock speed of 1.1–1.3 GHz, and the later enhanced POWER4+ reached 1.9 GHz.
Around 2005–2006, manufacturers such as Intel and AMD were offering dual-core CPUs such as AMD's Athlon 64 X2 3800+ and Intel's Core 2 Duo E6320. Moore's Law observed that the number of transistors was doubling every 24 months, and the next few generations of CPUs released by Intel and AMD showed the law still holding, with eight-core CPUs arriving around 2009–2010.
Another notable development came in 2007, when Nvidia and the company then known as ATI (later acquired by AMD) began adopting multi-core architecture through GPU (graphics processing unit) based parallel computing. CUDA was Nvidia's platform for parallel computing, which allowed GPUs to be used for general-purpose computation. Until then, GPUs were aimed mainly at gaming, where they accelerated applications requiring intensive vector-based processing. CUDA opened GPU processing to other workloads such as computational finance.

Operating Systems with Multi-Core Support

Before exploring how different operating systems utilise multi-core processors, we must understand the relationship between the operating system and processors.
As processors become more and more powerful, software such as the operating system can easily hold back their performance. Most applications that are run spend a significant amount of time inside the operating system kernel, so without proper support for the processor, the operating system will bottleneck the performance of the computer. For the processor to reach its full potential, the operating system must ensure scalability[15].
Several mechanisms have been implemented within various operating systems to lessen this bottleneck. For example, when running an application on Linux, the kernel's memory allocator will first use the memory available to the core running the application (the local core)[15]. Only when that local memory is exhausted will the allocator turn to another core[15]. This keeps applications running efficiently and ensures that cores share the workload only when necessary.
When discussing the relationship between modern multi-core processors and operating systems, we must also look at two implementations of a multi-core processor in a computer system – symmetric multi-processing (SMP) and asymmetric multi-processing (AMP). Both of these methods of implementation directly affect how the operating system must interact with the multi-core processor[14].
SMP is implemented by connecting two or more processors to a single main memory, with the operating system treating the processors as equals: no processor is restricted to particular tasks. In a multi-core processor, each core is treated as its own processor. The cores also share a common design, referred to as a homogeneous multi-core design, and can communicate with one another through the operating system. SMP is generally useful for systems that simply need to split the workload of applications across more CPU power[13].
AMP is implemented similarly to SMP, with either multiple processors or a multi-core processor, but each core can have its own unique structure. This is referred to as a heterogeneous multi-core design. Each core may or may not run its own operating system, and these may even be completely different operating systems. For example, in a quad-core system, two cores might run no operating system at all, while one core runs Linux and another runs an operating system specially designed to handle a financial workload. This allows each core in the multi-core processor to be specialised for a certain task within the system[13].
SMP is more common for general use, such as desktop computers, whereas AMP is used in much more specialised systems. AMP systems generally do not share memory between cores and have a main "master" core that handles the majority of operating system tasks, such as task scheduling.
The main problem with the multicore processor was that operating systems were not fully ready to deal with processors with many cores. Applications consist of one or more processes, each in turn consisting of one or more threads. A thread is an individual unit of execution, and it cannot be split between multiple processors or between processor cores. In the Windows architecture, the operating system maintains control over all processor cores.
Whenever a thread wants to access an item that might be claimed by another thread, it must take a lock to ensure that only one thread at a time can modify the item. Managing multiple threads within an individual program is a complex task, and poorly written applications can grind to a halt as threads wait for each other to finish. Multicore-aware software, by contrast, can be broken into multiple threads assigned to separate cores without losing performance.[21]
Prior to Windows 7 (July 2009 – January 2015), when a thread needed to acquire a lock, its request had to go through a global locking mechanism called the "kernel dispatcher lock". Because this lock was unique and global, it handled potentially thousands of requests from all processors on which Windows ran, and as a result it became a major bottleneck.
The Windows 7 operating system included built-in software called "the scheduler" that was smart about assigning software tasks. The scheduler knew which cores were in use and which were busiest, and assigned tasks or threads to an unused or under-utilised core. It also prioritised more important threads or applications as needed. And thanks to built-in tools, programmers could write applications that worked well with multiple cores.[22]
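The core idea of steering work toward the least-busy core can be sketched as a toy "least-loaded" scheduler. This is an illustrative model only, not Windows 7's actual algorithm; the thread costs and core count are made up for the example.

```python
def assign_threads(thread_costs, num_cores):
    """Toy least-loaded scheduler: each incoming thread goes to the
    core with the smallest current load, a rough analogue of how a
    multicore-aware scheduler steers work toward idle cores."""
    loads = [0] * num_cores   # running total of work assigned per core
    placement = []            # which core each thread landed on
    for cost in thread_costs:
        core = loads.index(min(loads))  # pick the least-busy core
        placement.append(core)
        loads[core] += cost
    return placement, loads

placement, loads = assign_threads([5, 3, 4, 2, 1], num_cores=2)
print(placement)  # [0, 1, 1, 0, 0]
print(loads)      # [8, 7]
```

The final loads end up nearly balanced, which is the scheduler's goal: no core sits idle while another is overloaded.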
Earlier versions of Windows desktop graphics weren't designed for multi-core CPUs. A program often had to wait until a previous application finished any graphical tasks it needed to perform before it could access the Windows desktop. With Windows 7, each application communicates with the Windows graphics subsystem, which manages access to the underlying graphics hardware, so programs no longer have to wait in line. Gaming graphics also got a boost, as Microsoft's programming interface for games, DirectX 11, was re-engineered to better support multiple CPU cores.[22]

The Future

Moore's Law states that the number of transistors on a chip doubles roughly every two years, but this will not be the case forever. Experts expect the law to hold for perhaps another decade, because a point will come where transistors cannot be shrunk any further to fit on a chip of the same size.
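The scale of this doubling is easy to underestimate, so a short worked example helps. Starting from roughly 2,300 transistors (Intel's 4004 from 1971, a commonly cited baseline) and doubling every two years:

```python
def moores_law(transistors_start, year_start, year_end, doubling_years=2):
    # Transistor count after repeated doubling every `doubling_years` years.
    doublings = (year_end - year_start) / doubling_years
    return transistors_start * 2 ** doublings

# 40 years = 20 doublings: 2,300 * 2**20, i.e. roughly 2.4 billion
# transistors by 2011, in line with real chips of that era.
print(moores_law(2_300, 1971, 2011))
```

Twenty doublings turn thousands of transistors into billions, which is why even a slowdown in the doubling rate has such large consequences for the industry.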
Research over the last decade into materials for making transistors smaller and more efficient has raised the possibility of carbon nanotubes or graphene replacing silicon semiconductors, as they are more thermally and electrically efficient.
Because current computing technology is rapidly approaching this ceiling, researchers have been exploring entirely different ways for computers to operate. The emerging technologies under research include quantum, neuromorphic, and photonic computing.
Quantum computing introduces a fundamental shift away from binary, which is replaced by quantum bits. Quantum bits, or qubits, are built from subatomic particles that can act like binary digits but can also represent both binary values at the same time. Today's quantum computers are much like computers in the early stages of computing: large and far from their potential power. Such machines promise massive parallelism, with many operations carried out simultaneously. This could be problematic for current security systems, because it becomes possible to crack encryption algorithms far more efficiently.
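The "both values at once" idea can be made concrete with a toy simulation. A qubit's state is a pair of amplitudes for 0 and 1, and a Hadamard gate puts a definite bit into an equal superposition; this is a classical simulation for illustration only, not how a real quantum computer is programmed.

```python
import math

# A qubit's state is a pair of amplitudes (a, b) for the values 0 and 1;
# measuring it yields 0 with probability a**2 and 1 with probability b**2.
def hadamard(state):
    a, b = state
    s = 1 / math.sqrt(2)
    # The Hadamard gate turns a definite bit into an equal superposition.
    return (s * (a + b), s * (a - b))

zero = (1.0, 0.0)             # the classical bit 0
superposed = hadamard(zero)   # now "both 0 and 1 at once"
prob0 = superposed[0] ** 2
print(round(prob0, 3))        # 0.5: equal chance of measuring 0 or 1

# Hadamard is its own inverse, so applying it twice restores the bit:
back = hadamard(superposed)
print(round(back[0], 3), round(back[1], 3))  # 1.0 0.0
```

Simulating n qubits classically needs 2**n amplitudes, which is precisely why quantum hardware is expected to outpace classical machines on certain problems.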
Optical computing uses the photons in light to represent binary values. Electric signals in wires are slower than light, making light a quicker medium for data transfer, and whereas wires take up physical space in a computer, light requires only a clear path, so optical computers could occupy less space than today's machines. Electric wires also generate heat due to their resistance, a problem far less severe with light. One issue with optical computing is that the wavelength of light is very large on an atomic scale. To combat this, engineers have used surface plasmons, electron oscillations with quantum properties that allow a signal to travel like a photon rather than through conventional wires.
Neuromorphic computing introduces a new processor architecture that uses a network of digital "neurons" communicating in parallel, similarly to our brains. Electric signals are sent through these neurons, and each neuron knows which neuron to pass a signal to based on the signal's properties. These processors are more energy efficient and can carry out operations for AI algorithms more efficiently than current processors. Designs for neuromorphic processors have existed since the 1980s, but they were not feasible because programs had to be physically built into the processor and could not be changed with a programming or assembly language.
