Friday, December 18, 2020

GPU (Graphics Processing Unit)

 If we think of a central processing unit (CPU) as the logical thinking section of a computer’s silicon brain, then the graphics processing unit (GPU) is its creative side, helping render graphical user interfaces into visually attractive icons and designs rather than reams of black and white lines. 

While many CPUs come with some form of integrated GPU to ensure that Windows can be displayed on a connected screen, there are myriad more intensive graphics-based tasks, such as video rendering and computer-aided design (CAD), that often require a dedicated or discrete GPU, typically in the form of a graphics card. 

When it comes to the latter, Nvidia and AMD are the two main players in the graphics card arena, while Intel’s own Iris Plus and UHD integrated GPUs tend to carry out a lot of lightweight work in laptops without dedicated graphics. On the mobile side, the likes of Qualcomm and MediaTek provide lightweight GPUs for handheld devices, though these often come in system-on-a-chip (SoC) designs where the GPU sits on the same chip as the CPU and other core mobile chipset components. 

It can be easy to think of a GPU as something only people keen on playing PC games are interested in, but a GPU provides a lot more than just graphical grunt.

What does a GPU do?

"GPU" became a popular term for the component that powers graphics on a machine in the 1990s when it was coined by chip manufacturer Nvidia. The company's GeForce range of graphics cards was the first to be popularised and ensured related technologies such as hardware acceleration, programmable shading, and stream processing were able to evolve.

While the task of rendering basic objects, like an operating system's desktop environment, can usually be handled by the limited graphics processing functionalities built into the CPU, some more strenuous workloads require the extra horsepower, which is where a dedicated GPU comes in.

In short, a GPU is a processor that is specially designed to handle intensive graphics rendering tasks.

Computer-generated graphics - such as those found in videogames or other animated media - require each separate frame to be individually 'drawn' by the computer, which takes a large amount of processing power.

Most high-end desktop PCs will feature a dedicated graphics card, which occupies one of the motherboard's PCIe slots. These usually have their own dedicated memory allocation built into the card, which is reserved exclusively for graphical operations. Some particularly advanced PCs will even use two GPUs hooked up together to provide even more processing power.
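
As a rough, hedged illustration of that dedicated memory, the sketch below queries the VRAM on any discrete Nvidia card present; it assumes PyTorch is installed and a CUDA-capable GPU is available, neither of which the article itself requires.

    # Sketch: list each CUDA GPU and its dedicated memory (VRAM).
    # Assumes PyTorch is installed and an Nvidia card is present.
    import torch

    if torch.cuda.is_available():
        for i in range(torch.cuda.device_count()):
            props = torch.cuda.get_device_properties(i)
            # total_memory is the card's own VRAM, separate from system RAM
            print(f"{props.name}: {props.total_memory / 1024**3:.1f} GB dedicated memory")
    else:
        print("No dedicated CUDA GPU found; graphics fall back to integrated/CPU rendering.")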

Laptops, meanwhile, often carry mobile chips that are smaller and less powerful than their desktop counterparts. This allows manufacturers to fit an otherwise bulky GPU into a slimmer chassis, at the expense of some of the raw performance offered by desktop cards.

What are GPUs used for?

GPUs are most commonly used to drive high-quality gaming experiences, producing life-like digital graphics and super-slick rendering. However, there are also several business applications that rely on powerful graphics chips.

3D modeling software like AutoCAD, for example, uses GPUs to render models. Because the people who work with this kind of software tend to make multiple small changes in a short period of time, the PC they're working with needs to be able to re-render the model quickly.

Video editing is another common use case; while some powerful CPUs can handle basic video editing, if you're working with large numbers of high-resolution files - particularly 4K or 360-degree video - a high-end GPU is a must-have in order to transcode the files at a reasonable speed.
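
As a hedged sketch of what GPU-assisted transcoding can look like in practice, the snippet below shells out to ffmpeg and asks for Nvidia's NVENC hardware encoder rather than a CPU-only encode. It assumes an ffmpeg build with CUDA/NVENC support, and the file names are placeholders.

    # Sketch: offload a transcode to the GPU via ffmpeg's NVENC encoder.
    # Assumes ffmpeg was built with CUDA/NVENC support; file names are placeholders.
    import subprocess

    subprocess.run([
        "ffmpeg",
        "-hwaccel", "cuda",      # decode on the GPU where possible
        "-i", "input_4k.mp4",    # placeholder source file
        "-c:v", "h264_nvenc",    # encode on the GPU instead of a CPU encoder
        "output.mp4",            # placeholder output file
    ], check=True)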

GPUs are often favored over CPUs for use in machine learning too, as they can carry out far more operations in parallel in a given period of time than CPUs can. This makes them better suited to training neural networks, given the sheer volume of data those networks need to process.
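
A minimal sketch of why this matters, assuming PyTorch and a CUDA-capable card are available: the large matrix multiplications that neural networks repeat constantly can be placed on either device with a single argument, and on the GPU the many multiply-adds run in parallel.

    # Sketch: time the same large matrix multiply on the CPU and, if present, the GPU.
    # Assumes PyTorch is installed; the GPU path needs a CUDA-capable card.
    import time
    import torch

    def timed_matmul(device: str) -> float:
        a = torch.randn(4096, 4096, device=device)
        b = torch.randn(4096, 4096, device=device)
        if device == "cuda":
            torch.cuda.synchronize()   # make sure setup work has finished
        start = time.perf_counter()
        _ = a @ b                      # the core operation of a neural network layer
        if device == "cuda":
            torch.cuda.synchronize()   # wait for the GPU to actually finish
        return time.perf_counter() - start

    print(f"CPU: {timed_matmul('cpu'):.3f} s")
    if torch.cuda.is_available():
        print(f"GPU: {timed_matmul('cuda'):.3f} s")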

Not all GPUs are created equal, however. Manufacturers like AMD and Nvidia commonly produce specialized enterprise versions of their chips, which are designed specifically with these kinds of applications in mind and come with more in-depth support.


How a GPU works

CPU and GPU architectures are also differentiated by the number of cores. A core is essentially a processor within the processor. Most CPUs have between four and eight cores, though some have up to 32. Each core can process its own tasks, or threads. Because some processors have multithreading capability -- in which the core is divided virtually, allowing a single core to process two threads -- the number of threads can be much higher than the number of cores, which can be useful in video editing and transcoding. CPUs typically run two threads (independent streams of instructions) per core, while GPUs can have four to 10 threads per core -- spread across hundreds or thousands of much simpler cores.
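
A rough way to see those numbers on a given machine, assuming Python with PyTorch and an Nvidia card (an assumption, not something the article specifies), is to compare the CPU's logical core count with the GPU's count of streaming multiprocessors, each of which schedules many threads at once:

    # Sketch: compare CPU core count with the GPU's streaming multiprocessor count.
    # Assumes PyTorch is installed and a CUDA-capable Nvidia GPU is present.
    import os
    import torch

    print(f"CPU logical cores: {os.cpu_count()}")

    if torch.cuda.is_available():
        props = torch.cuda.get_device_properties(0)
        # Each streaming multiprocessor runs many threads concurrently,
        # which is where the GPU's massive parallelism comes from.
        print(f"GPU '{props.name}': {props.multi_processor_count} streaming multiprocessors")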

CPUs and the End of Moore’s Law

With Moore’s law winding down, GPUs, invented by NVIDIA in 1999, came just in time.

Moore's Law

Moore’s law posits that the number of transistors that can be crammed into an integrated circuit will double about every two years. For decades, that’s driven a rapid increase of computing power. That law, however, has run up against hard physical limits.
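
As a quick worked example of that doubling rule: if transistor counts double roughly every two years, a decade gives about five doublings, or roughly a 32x increase. The short calculation below uses a hypothetical starting count purely for illustration.

    # Sketch: projected transistor growth under Moore's law (doubling every ~2 years).
    base_transistors = 1_000_000     # hypothetical starting count
    years = 10
    doublings = years / 2
    projected = base_transistors * 2 ** doublings
    print(f"After {years} years: ~{projected:,.0f} transistors ({2 ** doublings:.0f}x growth)")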

GPUs offer a way to continue accelerating applications — such as graphics, supercomputing and AI — by dividing tasks among many processors. Such accelerators are critical to the future of semiconductors, according to John Hennessy and David Patterson, winners of the 2017 A.M. Turing Award and authors of Computer Architecture: A Quantitative Approach, the seminal textbook on microprocessors.
