The CPU (central processing unit) plays a crucial role in our computers. It acts as both the brain and the heart of any computing device. Every operation the computer performs passes through the CPU. To put it simply, the CPU is the component that understands every instruction.
The GPU (graphics processing unit), on the other hand, is like the eye of the computer. It helps the CPU create the images that are then displayed on the screen in real time. But the differences don't end there.
Today, we will discuss the major differences between the GPU and the CPU, and also tackle common misconceptions about these two computer components.
Central Processing Unit (CPU)
As mentioned, the CPU is like the brain and heart of the computer. It knows every function and executes it at the request of other computer components.
The CPU processes instructions through a basic cycle of steps:
- Fetch – During this step, the CPU checks the program counter to see which instruction to run next. The program counter gives the CPU an address in memory where the instruction is located, and the CPU fetches it from there.
- Decode – After fetching the instruction, the CPU decodes it. Executable programs are compiled down to binary machine instructions, and the decoder works out which operation each instruction represents.
- Execute – After decoding the instruction, the CPU executes it. Calculations are handled by its arithmetic logic unit (ALU), which is responsible for all mathematical and logical operations. During this stage, the CPU can also move data from one memory location to another, or jump to a different address.
- Repeat Cycle – Once execution completes, the processor returns to the program counter to find the next instruction to run. This cycle repeats as long as further instructions are queued for decoding and execution.
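The cycle above can be sketched as a toy interpreter. The instruction set, encoding, and register below are invented for illustration; a real CPU decodes binary machine code, not Python tuples.

```python
# A toy model of the fetch-decode-execute cycle described above.
def run(program):
    pc = 0                    # program counter
    acc = 0                   # a single accumulator register
    while pc < len(program):
        instr = program[pc]   # FETCH: read the instruction at the pc address
        op, arg = instr       # DECODE: split into operation and operand
        if op == "ADD":       # EXECUTE: the ALU performs the arithmetic
            acc += arg
        elif op == "JMP":     # ...or jump to a different address
            pc = arg
            continue
        elif op == "HALT":
            break
        pc += 1               # REPEAT: advance to the next instruction
    return acc

print(run([("ADD", 2), ("ADD", 3), ("HALT", None)]))  # 5
```

Each loop iteration is one pass through fetch, decode, execute, and repeat, exactly in the order the bullets describe.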
The number of cycles a CPU can perform is measured in hertz (Hz): one hertz is one cycle per second. Modern computer speeds are usually quoted in gigahertz (GHz), where one gigahertz is one billion cycles per second. This rating is also referred to as "clock speed."
Although a higher clock speed used to mean a faster processor, that is no longer a safe assumption for modern CPUs. Advances in chip design let newer processors do more work per cycle. For example, a 3 GHz Intel Core i5 will not necessarily be faster than an Intel Core i7 running at 2.80 GHz.
Beyond clock speed, many other factors influence a CPU's performance, such as CPU architecture, cache memory, word length (bandwidth), core count, and bus speed.
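A quick back-of-the-envelope calculation shows why clock speed alone is misleading: peak throughput also depends on core count and instructions per cycle (IPC). The IPC figures below are illustrative assumptions, not measured values for any real chip.

```python
# Rough peak throughput: clock x cores x instructions per cycle (IPC).
def peak_ops_per_sec(clock_ghz, cores, ipc):
    return clock_ghz * 1e9 * cores * ipc

older = peak_ops_per_sec(3.0, 4, 1)  # hypothetical 3 GHz quad-core, IPC 1
newer = peak_ops_per_sec(2.8, 4, 2)  # hypothetical 2.8 GHz quad-core, IPC 2

print(newer > older)  # True: the lower-clocked chip can still be faster
```

The chip with the lower clock wins because it retires twice as many instructions per cycle, which is essentially the i5-versus-i7 comparison above.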
Graphics Processing Unit (GPU)
Older computers didn't have a GPU (at least not in the form we have now), because they didn't have much in the way of graphics to render to a display.
The first use of graphics processing chips can be traced back to 1970s arcade game boards. RAM for frame buffers was expensive at the time, so the best solution for arcade game manufacturers was to use video chips to composite data together as the display was being scanned out to the monitor.
The ARTC HD63484, the first CMOS graphics processor for PCs, was released by Hitachi in 1984. It could display resolutions up to 4K in monochrome mode and was used in many graphics cards throughout the 1980s. Since then, many manufacturers have created their own versions of the graphics processor.
It wasn't until 1994 that we heard the term "GPU," which Sony used to describe the graphics processing chip inside its PlayStation console. However, it was Nvidia that popularized the term, calling its GeForce 256 "the world's first GPU." It was the first single-chip processor to integrate transform, lighting, triangle setup/clipping, and rendering engines.
Basically, the GPU works at the signal of the CPU, rendering the instructions the CPU has decoded. To put it simply, the CPU knows what happens when a car hits a tree, and it is up to the GPU to render the outcome of that collision.
1. Are the GPU and the Graphics Card the Same?
Well, sort of. A graphics card will not work without a GPU. If the CPU is the brain of the whole computer, the GPU is the brain of the graphics card.
This is where things get a little different. There are two types of graphics processor: integrated and dedicated (also referred to as discrete).
Integrated graphics is an on-board graphics processor soldered onto the motherboard or built into the CPU. It uses a portion of the computer's system RAM instead of having its own dedicated memory. Integrated graphics is almost always less powerful than dedicated graphics, but it is also more power efficient.
Since it is built into the motherboard (or the CPU) itself, it can never be removed for an upgrade. However, many computers allow the BIOS to disable the integrated chip so the system can take advantage of the better performance of a dedicated graphics processor. This dedicated graphics processor is called a graphics card, or video card.
Note that while both integrated and dedicated graphics processors have been called graphics cards, today the term usually refers to a dedicated graphics processing system.
While it works much like integrated graphics, a graphics card is more powerful because it has its own hardware. A graphics card is a complete board with several components: its own graphics processing unit (GPU), random access memory (RAM), and digital-to-analog converter (DAC).
To sum up, the GPU is the dedicated graphics processor inside the graphics card. In modern usage, a graphics card is a graphics processing system that is not built into the motherboard and has its own hardware (dedicated or discrete graphics). However, integrated graphics can also be referred to as a graphics card, and other sources may call the graphics card a video card.
2. Do a System-on-Chip (SoC) and a CPU Work the Same Way?
On computers, we have a CPU and a GPU. On smartphones, on the other hand, we have what we call a system-on-chip (SoC). A CPU is referred to as a "microprocessor," while an SoC is sometimes referred to as a "microcontroller."
The CPU is the heart and brain of the SoC. An SoC is an integrated circuit that contains all the components needed to make the system run: its own integrated RAM, ROM, GPU, CPU, and many other small processors that make our smartphones and tablets work.
A modern SoC like the Qualcomm Snapdragon 855, for example, integrates a cellular modem into the chip, along with a digital signal processor (DSP), an image signal processor (ISP), an artificial intelligence engine (AIE), Wi-Fi, Bluetooth, NFC, GPS, and all sorts of sensors. These, alongside the CPU and GPU, make up the SoC. Nonetheless, the CPU inside an SoC works the same way as the one in a computer.
3. How Do the CPU and GPU Operate Together?
Both the CPU and GPU are essential components of our computers. While they differ in most of their operations, neither can work without the other.
Let's put it this way: when we play Assassin's Creed on a computer, every time we move our character, pull off an amazing parkour move, or take down an enemy, it is the CPU that calculates those movements, because it is the one that understands the physics. However, to render these images, the CPU needs help from the GPU, which is the one that can translate those instructions into images.
In other words, the CPU processes the instructions and inputs from the player and tells the GPU to render those instructions as images. A better CPU means faster operational cycles, and a better GPU means swifter render times, which translates into higher frame rates when gaming.
In terms of flexibility, the GPU can only do a fraction of what the CPU is capable of. The CPU has a much larger instruction set than the GPU despite having only a handful of cores, and it can also run at faster clock speeds.
While the GPU is more limited in the kinds of operations it can perform, it does that work at incredible speed. A GPU uses hundreds or thousands of cores to make time-sensitive calculations for thousands of pixels at a time, making it possible to display complex 3D graphics.
Nvidia's RTX 2080, for example, features 2944 shading cores, meaning it can perform 2944 operations per clock cycle. In comparison, a CPU like the Intel Core i5 or Core i7 of the same era executes only four simultaneous instructions per clock cycle, while a budget Intel Core i3 manages just two. This is one of the reasons a good GPU is also used for cryptocurrency mining.
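A crude model makes the difference concrete: if each core handles one pixel per cycle, a chip with thousands of slow cores finishes a per-pixel job in far fewer cycles than a chip with four fast ones. The core counts below echo the figures above, but the one-pixel-per-core-per-cycle assumption is a simplification for illustration only.

```python
# How many cycles to touch every pixel of a frame, given a core count?
def cycles_needed(pixels, cores):
    # each core handles one pixel per cycle; ceiling division
    return -(-pixels // cores)

pixels = 1920 * 1080                       # one full-HD frame
gpu_cycles = cycles_needed(pixels, 2944)   # RTX 2080-class: 2944 cores
cpu_cycles = cycles_needed(pixels, 4)      # quad-core CPU: 4 lanes

print(gpu_cycles < cpu_cycles)  # True: the GPU needs far fewer cycles
```

This is why thousands of simple cores beat a few fast ones for embarrassingly parallel work like shading pixels or hashing in cryptocurrency mining.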