Processors are measured in terms of clock speed. The clock is an electronic oscillator that produces a signal to synchronise the operation of the processor.
In general terms, the greater the clock speed, the faster instructions are carried out. Clock speed is usually measured in GHz (gigahertz). However, the processing of a single instruction typically takes more than one clock cycle. Therefore, a processor with a clock speed of 3.6GHz does not mean that 3.6 billion instructions are processed per second.
Clock speed is a theoretical maximum; a system is unlikely to sustain the highest stated figure. Forcing a processor to run at a higher clock speed than recommended (overclocking) could cause it to overheat.
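The point that clock speed does not equal instructions per second can be made concrete with a short sketch. The figures below are assumed for illustration: a 3.6GHz clock and an average of two clock cycles per instruction (real values vary by processor and workload).

```python
# Illustrative sketch (assumed figures): effective throughput is the
# clock speed divided by the average clock cycles per instruction (CPI).

clock_speed_hz = 3.6e9          # 3.6 GHz processor, as in the text
avg_cycles_per_instruction = 2  # assumed CPI; varies by workload

instructions_per_second = clock_speed_hz / avg_cycles_per_instruction
print(f"{instructions_per_second / 1e9:.1f} billion instructions per second")
```

With these assumed figures the processor completes 1.8 billion instructions per second, half its 3.6 billion clock cycles per second.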
Computers with more than one processing unit (core) are called ‘multicore’. For example:
Dual-core = two processing units
Quad-core = four processing units
Hexa-core = six processing units
Octo-core = eight processing units
Generally speaking, the more cores a computer has, the more instructions it can execute at the same time. As a result, the computer will perform more efficiently than computers with the same type of processor but fewer cores.
Having a quad-core instead of a dual-core processor (both running at the same speed) does not mean that the number of instructions that can be processed in the same time frame will double
The quad-core will still achieve a significant improvement, but less than double, because data and instructions need to be fed to the cores appropriately: the computer system must spend time organising which cores receive which data and instructions
The efficiency of a multicore processor depends on the nature of the required task, i.e. if it is possible to divide a computation into subtasks that can be processed in parallel (one task per core at the same time)
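The idea of dividing a computation into subtasks, one per core, can be sketched with Python's multiprocessing module. The workload here (summing ranges of numbers) is an assumed example; note that splitting the data into chunks is exactly the organising overhead described above.

```python
# A minimal sketch of dividing a computation into parallel subtasks,
# one per core, using Python's multiprocessing module.
from multiprocessing import Pool, cpu_count

def subtask(chunk):
    # Each core processes its own chunk of the data independently.
    return sum(chunk)

def parallel_sum(numbers):
    cores = cpu_count()
    # Split the data so each core receives its own portion --
    # this organising step is part of the overhead described above.
    chunks = [numbers[i::cores] for i in range(cores)]
    with Pool(cores) as pool:
        partial_sums = pool.map(subtask, chunks)
    return sum(partial_sums)

if __name__ == "__main__":
    print(parallel_sum(list(range(1000))))  # same answer as sum(range(1000))
```

This only pays off because summing chunks is easy to divide; a task whose steps each depend on the previous result could not be split this way.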
The larger the cache, the more instructions can be queued and carried out
Storing instructions in cache reduces the amount of time it takes to access that instruction and pass it to a CPU core
If a system does not use caching, there is an increased need for accessing the main memory, thereby increasing the time it takes for an operation to be carried out
The faster the cache, the faster an instruction is fetched to the processor
Having L2 cache as part of the circuitry of each core reduces the time it takes for instructions and data to pass through the system registers, increases the speed of processing and allows instructions to be carried out more efficiently
The greater the number of cache levels, the more efficient the system
The combined increase in speed and memory size means that more data is held nearer to the CPU
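The benefit of caching in the points above can be illustrated with an average-access-time calculation. The access times and hit rate below are assumed figures, not from the text, but are typical orders of magnitude: cache access is far faster than main-memory access.

```python
# Illustrative sketch (assumed figures): average memory access time
# with a cache, compared with going to main memory every time.

cache_time_ns = 1     # assumed cache access time
memory_time_ns = 100  # assumed main-memory access time
hit_rate = 0.95       # assumed fraction of accesses served from cache

# Hits are served quickly from cache; misses must go to main memory.
average_ns = hit_rate * cache_time_ns + (1 - hit_rate) * memory_time_ns
print(f"With cache:    {average_ns:.2f} ns on average")
print(f"Without cache: {memory_time_ns} ns every time")
```

Even with only 95% of accesses served from cache, the average access time drops from 100ns to under 6ns in this sketch, which is why reducing main-memory accesses speeds up operations so markedly.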
The width of the data bus determines the number of bits that can be transferred to or from the processor in one operation (i.e. at the same time, in one pass). The wider the data bus, the better the processor performance. This is because the greater the width of the data bus, the more data can be transferred between the internal components simultaneously.
If the width of the data bus is n bits, then n bits can be transferred between the internal components in one operation.
The majority of computer systems have a data bus width equal to the system word length, e.g. if a computer system uses a 16-bit word then the data bus has a width of 16 bits. This means that the contents of each addressable memory location can be transmitted in one go.
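The effect of bus width on transfer count can be shown with a small sketch. The 64-bit block of data and the bus widths below are assumed examples: if n bits move per operation, a wider bus needs fewer operations for the same data.

```python
# Sketch of the relation between data-bus width and bus operations:
# moving the same block of data takes fewer passes on a wider bus.

def transfers_needed(data_bits, bus_width_bits):
    # Each bus operation moves bus_width_bits in one pass;
    # ceiling division accounts for any partial final transfer.
    return -(-data_bits // bus_width_bits)

print(transfers_needed(64, 16))  # 16-bit bus: 4 operations
print(transfers_needed(64, 32))  # 32-bit bus: 2 operations
```

Doubling the bus width halves the number of operations here, which is the performance benefit the text describes.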
The width of the address bus determines the number of bits that can be used to form an address of a memory location. The greater the width of the address bus, the more memory locations can be addressed. Therefore, the processor benefits from a larger main memory to access data and instructions, which also improves processor performance as it reduces reliance on slower virtual memory.
If the width of the address bus is n bits, then there are 2^n memory addresses available for the main memory. This means that the processor can access 2^n distinct memory locations using these addresses.
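The 2^n relationship can be checked with a short calculation. The bus widths below are assumed examples; the 32-bit case gives the familiar 4GiB limit when each location holds one byte.

```python
# Sketch of the 2^n rule: an n-bit address bus can form 2^n
# distinct addresses, one per memory location.

def addressable_locations(address_bus_width_bits):
    return 2 ** address_bus_width_bits

print(addressable_locations(16))  # 65,536 locations
print(addressable_locations(32))  # 4,294,967,296 locations
                                  # (4 GiB if each holds one byte)
```

Widening the address bus by a single bit doubles the amount of memory that can be addressed.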