Monday, March 15, 2021

VPN

VPN (Virtual Private Network):

These days most of us use a VPN for one purpose or another. So what exactly is a VPN? Every time we browse the internet through an ISP (Internet Service Provider), our traffic and IP address pass through the ISP's systems, which means the ISP can log which sites we visit and which details we enter, and it can share that information with the government. That is also why sites blocked in our country show up as blocked in our browsers. On some platforms this can be worked around with IP-spoofing techniques, but on Android phones and everyday laptops the practical answer is a VPN, which routes your traffic through a server in a different country or region so that your connection appears to originate there. With your real IP address hidden, neither the ISP nor most hackers can see which websites you visit or the credentials you enter, letting you stay largely anonymous.

For the best free option, use Proton VPN, which has both Android and desktop versions and is available for Windows, macOS, and Linux.

Monday, December 21, 2020

SSD vs HDD

A hard disk drive (HDD) is an old-school storage device that uses mechanical platters and a moving read/write head to access data. A solid-state drive (SSD) is a newer, faster type of device that stores data on instantly-accessible memory chips.

HDDs: 

An enclosure contains a series of platters covered by a ferromagnetic coating. The direction of the magnetization represents the individual bits. Data is read and written by a head (similar to the way vinyl record albums work) that moves extremely fast from one area of the disk to another. Since all of these pieces are “mechanical,” the hard disk is the slowest component of any computer – and the most fragile.

SSD: 

These newer types of disks store information on flash memory, which consists of individual memory cells storing bits that are instantly accessible by the controller. 

Why are SSDs useful for laptops?

While lower-priced laptops still come with traditional hard drives (it’s one way for manufacturers to minimize their costs), most mid-range to high-end machines come with an SSD.

Due to their non-mechanical nature, SSDs require less power, which translates into better battery life.

They’re also shock-resistant. Hard disks have moving parts. If you drop your laptop, chances are that the read/write head of an old-school hard drive is in motion, which could lead to data failure. This doesn’t apply to SSDs.

It isn’t always an either/or choice. In some cases, you find “hybrid” computers. The system partition that contains the operating system, application programs, and the most-used files is installed on an SSD. Other data, such as movies, photos, and documents, are stored on a traditional HDD, which is larger and less expensive.

How much faster are solid-state drives compared to hard disk drives?

The speed difference is significant. SSDs are extremely fast in all areas, but the speed difference is more pronounced when performing certain tasks, such as:

Sequential read/write operations: Copying and moving huge files (such as movies) is where the difference is most apparent. An old-school HDD copies at 30-150 MB per second (MB/s), while a typical SATA SSD manages about 500 MB/s and newer NVMe SSDs reach 3,000-3,500 MB/s. At those rates, copying a 20 GB movie finishes in roughly 40 seconds on a SATA SSD, or under 10 seconds on NVMe, while a hard disk needs at least two minutes.
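These throughput figures are easy to sanity-check with a little arithmetic. A minimal Python sketch (the drive speeds are illustrative round numbers, not benchmarks of any particular model):

```python
def copy_seconds(size_gb, mb_per_s):
    """Seconds to copy a file of `size_gb` gigabytes at `mb_per_s` MB/s.
    Uses 1 GB = 1000 MB, as drive vendors do."""
    return size_gb * 1000 / mb_per_s

print(copy_seconds(20, 150))   # fast HDD:  ~133 s (over two minutes)
print(copy_seconds(20, 500))   # SATA SSD:  40 s
print(copy_seconds(20, 3500))  # NVMe SSD:  ~5.7 s
```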

Small “4K” read/write operations: Most of the time, when you run Windows (or macOS), open programs, or browse the web, you’re actually opening and manipulating thousands of smaller files, which are stored in small blocks of data (usually sized at 4K). The faster your disk can read (and write) these 4K blocks, the faster and snappier your system seems. With HDDs, the speed ranges from 0.1 to 1.7 megabytes per second (MB/s). SSDs and NVMe SSDs, however, operate at much faster speeds of 50-250 MB/s in 4K reads/writes.

What’s the lifespan of an SSD?

There are lots of myths surrounding SSD life spans, and the assumptions go back to the early days of SSDs in the 1990s and early 2000s. It is true that SSD cells have a limited lifespan, but today this is not really an issue.

In theory, the more data written to a cell, the faster it wears out. Nowadays, an SSD cell survives about 3,000 write cycles, which doesn’t sound like much at first. But thanks to the principle of wear leveling, the SSD controller makes sure that write operations are spread evenly across all cells in order to minimize “cell death.” Additionally, modern SSDs contain spare cells that will replace cells that go bad. This is called bad block management, and it’s why the larger the SSD, the longer its lifespan.

However, even if you were to write data to an SSD constantly, 24 hours a day, it would still take decades before the drive eventually dies.
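The back-of-the-envelope math behind that claim, assuming the roughly 3,000 write cycles per cell mentioned above and ideal wear leveling (the daily-write figure is a hypothetical example):

```python
def ssd_lifespan_years(capacity_gb, write_cycles=3000, gb_written_per_day=50):
    """Rough endurance estimate: total writable data divided by daily writes.
    Assumes perfect wear leveling, so every cell wears evenly."""
    total_writable_gb = capacity_gb * write_cycles
    return total_writable_gb / gb_written_per_day / 365

print(round(ssd_lifespan_years(500)))   # a 500 GB drive at 50 GB/day: ~82 years
print(round(ssd_lifespan_years(1000)))  # doubling capacity doubles endurance
```

Even at ten times that write load the drive would outlast the rest of the machine, which is why endurance is rarely a practical concern today.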

What about capacity differences between HDDs and SSDs?

If you are concerned about how much information you can store on each type of drive, be reassured. There are no differences in storage capacity. You can get HDDs and SSDs in similar sizes. Usually, the range is 128 GB to 2 TB. And if you need to radically free up space, you can easily format any hard drive, internal or external — no matter if it's an HDD or SSD.

Is an HDD or an SDD better for gaming?

Given the huge amounts of data a game has to shuffle back and forth (loading levels, character models, etc), an SSD helps games load and run faster. You’ll also experience less stutter when playing games, as the rest of your PC doesn’t need to wait for game data to load – which can give you quite an advantage, especially in the eSports arena.

Here’s a simple example: Loading the world of GTA V takes about 25 seconds on my Samsung 970 Evo Plus SSD, compared to more than two minutes on an old mechanical hard disk. A game-changer.

Friday, December 18, 2020

GPU (Graphics Processing Unit)

 If we think of a central processing unit (CPU) as the logical thinking section of a computer’s silicon brain, then the graphics processing unit (GPU) is its creative side, helping render graphical user interfaces into visually attractive icons and designs rather than reams of black and white lines. 

While many CPUs come with some form of integrated GPU to ensure that Windows can be displayed on a connected screen, there is a myriad of more intensive graphics-based tasks, such as video rendering and computer-aided design (CAD), that often require a dedicated or discrete GPU, usually in the form of a graphics card. 

When it comes to the latter, Nvidia and AMD are the two main players in the graphics card arena, while Intel’s own Iris Plus and UHD integrated GPUs tend to carry out a lot of light-weight work in laptops without dedicated graphics. On the mobile side, the likes of Qualcomm and MediaTek provide lightweight GPUs for handheld devices, though these often come in system-on-a-chip (SoC) designs where the GPU is on the same chip as the CPU and other core mobile chipset components. 

It can be easy to think of a GPU as something only people keen on playing PC games are interested in, but a GPU provides a lot more than just graphical grunt.

What does a GPU do?

"GPU" became a popular term for the component that powers graphics on a machine in the 1990s when it was coined by chip manufacturer Nvidia. The company's GeForce range of graphics cards was the first to be popularised and ensured related technologies such as hardware acceleration, programmable shading, and stream processing were able to evolve.

While the task of rendering basic objects, like an operating system's desktop environment, can usually be handled by the limited graphics processing functionalities built into the CPU, some more strenuous workloads require the extra horsepower, which is where a dedicated GPU comes in.

In short, a GPU is a processor that is specially designed to handle intensive graphics rendering tasks.

Computer-generated graphics - such as those found in videogames or other animated mediums - require each separate frame to be individually 'drawn' by the computer, which requires a large amount of power.

Most high-end desktop PCs will feature a dedicated graphics card, which occupies one of the motherboard's PCIe slots. These usually have their own dedicated memory allocation built into the card, which is reserved exclusively for graphical operations. Some particularly advanced PCs will even use two GPUs hooked up together to provide even more processing power.

Laptops, meanwhile, often carry mobile chips that are smaller and less powerful than their desktop counterparts. This allows manufacturers to fit a GPU into a slimmer chassis, at the expense of some of the raw performance offered by desktop cards.

What are GPUs used for?

GPUs are most commonly used to drive high-quality gaming experiences, producing life-like digital graphics and super-slick rendering. However, there are also several business applications that rely on powerful graphics chips.

3D modeling software like AutoCAD, for example, uses GPUs to render models. Because the people that work with this kind of software tend to make multiple small changes in a short period of time, the PC they're working with needs to be able to quickly re-render the model.

Video editing is another common use-case; while some powerful CPUs can handle basic video editing, if you're working with large amounts of high-resolution files - particularly 4K or 360-degree video - a high-end GPU is a must-have in order to transcode the files at a reasonable speed.

GPUs are often favored over CPUs for use in machine learning too, as they can process more functions in a given period of time than CPUs. This makes them better-suited to creating neural networks, due to the volume of data they need to deal with.

Not all GPUs are created equal, however. Manufacturers like AMD and Nvidia commonly produce specialized enterprise versions of their chips, which are designed specifically with these kinds of applications in mind and come with more in-depth support provided.


How a GPU works

CPU and GPU architectures are also differentiated by the number of cores. The core is essentially the processor within the processor. Most CPUs have between four and eight cores, though some have up to 32 cores. Each core can process its own tasks or threads. Because some processors have multithreading capability -- in which the core is divided virtually, allowing a single core to process two threads -- the number of threads can be much higher than the number of cores. This can be useful in video editing and transcoding. CPUs can run two threads (independent instructions) per core (the independent processor unit). GPUs can have four to 10 threads per core.

CPUs and the End of Moore’s Law

With Moore’s law winding down, GPUs, invented by NVIDIA in 1999, came just in time.

Moore's Law

Moore’s law posits that the number of transistors that can be crammed into an integrated circuit will double about every two years. For decades, that’s driven a rapid increase of computing power. That law, however, has run up against hard physical limits.

GPUs offer a way to continue accelerating applications — such as graphics, supercomputing and AI — by dividing tasks among many processors. Such accelerators are critical to the future of semiconductors, according to John Hennessy and David Patterson, winners of the 2017 A.M. Turing Award and authors of Computer Architecture: A Quantitative Approach, the seminal textbook on microprocessors.

Thursday, December 17, 2020

Intel (i3, i5, i7, i9)

A processor is like a computer’s brain, and Intel® Core™ processors are among the most powerful. They have multiple cores for more power and smoother multi-tasking.

There are 4 main categories: i3, i5, i7, and i9. Each has numerous spec options, but broadly speaking, an i5 outperforms an i3, an i7 outperforms an i5, and so on.

What is the difference between Intel Generations?

Each year the Intel Core processors are updated. 2018’s updates are known as the 8th generation.

Not sure what generation of processor your PC has? The first number after the hyphen in its model number gives it away – for example, an Intel Core i7-7820HQ is 7th gen.

8th Generation Intel Core processors

Intel’s 8th Generation of processors has moved with the times and delivered some exciting new features, including:

Incredible VR experiences

High-quality 4K UHD content

Two more cores – up to 6 instead of the original 4

Introduction of the super-powered Intel Core i9 and X-series.

 Core i3: Everyday users – web browsing, Word, and media streaming

If you’re after a laptop for everyday computing tasks like web browsing, video streaming, and office-type work, the i3 is great.

The latest 8th Gen has up to 4 cores – twice as many as the 7th Gen – making day-to-day use a lot slicker. It even lets you watch content in immersive 4K and 360° viewing.

With a Core i3 you can:

Browse multiple webpages smoothly. Work in Word or Excel. Stream movies and TV shows from Netflix in HD. Listen to music on Spotify. Multi-task efficiently with Intel Hyper-Threading technology.

Core i5: Multitaskers who want extra power for work, creativity, and gaming

With a Core i5 you can:

Smoothly multitask - work on spreadsheets, stream music, and browse the web. Work on complicated tasks – like rendering big Excel files. Edit in Photoshop and sketch in Illustrator. Create, share, and watch 4K content. Play intensive PC games – see more of Intel's gaming processors. Benefit from faster repeated tasks thanks to the large cache size. Stream from multiple sites. Get a temporary boost when using demanding programs with Intel Turbo Boost Technology 2.0. 

Core i7 - Video editors, designers, and gamers who crave power

If you are advanced in video editing or 3D modeling you’ll need a very fast processor and good graphics – so look at an i7. It’s perfect for the demands of gaming fans too. With 6 physical cores and the technology to create another 6 virtual cores, it has enough power for the most demanding tasks.

With a Core i7 you can:

Encode video more efficiently. Work smoothly in 3D modeling programs. Smoothly edit in Photoshop and sketch in Illustrator. Watch and edit 4K UHD content and 360° videos. Work productively with demanding creative programs – each core uses 2 ‘threads’ rather than 1 with Intel's Hyper-Threading technology. Benefit from faster repeated tasks thanks to the large cache size. Get a temporary power boost when using demanding programs with Intel Turbo Boost Technology 2.0.

Core i9 – Extreme gaming, mega-tasking, and high-end content creation

The newest addition to the Intel family, the Core i9 X-Series, is Intel’s most powerful processor with 18 cores and 36 threads. And with updated Intel® Turbo Boost Max Technology 3.0, it elevates everything you do to new heights.

i9 turns your PC into a studio, producing breathtaking 4K or 360° videos, stunning photos, or high-quality music. Gamers won’t be disappointed either – this is the ultimate tool for virtual reality gaming.

With a Core i9 you can:

Produce, edit, and share 4K UHD content and 360° videos. Work smoothly in 3D modeling programs. Produce and edit high-quality music. Smoothly edit in Photoshop and sketch out in Illustrator. Enjoy the ultimate gaming and VR experience. Work more productively when using demanding creative programs. Benefit from faster repeated tasks thanks to the large cache size.

Intel Core X-series Processor family

X-series variations of the Intel Core range are now available. These unlocked versions of the original platforms are supercharged and allow for overclocking. They’re designed to scale to your performance needs, delivering options between 4 and 18 cores for extreme performance and the latest technological advancements while providing headroom for the future.


CPU Performance Measures (Cache)

Cache memory is a chip-based computer component that makes retrieving data from the computer's memory more efficient. It acts as a temporary storage area that the computer's processor can retrieve data from easily. This temporary storage area, known as a cache, is more readily available to the processor than the computer's main memory source, typically some form of DRAM.

Cache memory is sometimes called CPU (central processing unit) memory because it is typically integrated directly into the CPU chip or placed on a separate chip that has a separate bus interconnect with the CPU. Therefore, it is more accessible to the processor and able to increase efficiency, because it's physically close to the processor.

In order to be close to the processor, cache memory needs to be much smaller than main memory. Consequently, it has less storage space. It is also more expensive than the main memory, as it is a more complex chip that yields higher performance.

What it sacrifices in size and price, it makes up for in speed. Cache memory operates 10 to 100 times faster than RAM, requiring only a few nanoseconds to respond to a CPU request.

The name of the actual hardware that is used for cache memory is high-speed static random access memory (SRAM). The name of the hardware that is used in a computer's main memory is dynamic random access memory (DRAM).

Cache memory is not to be confused with the broader term cache. Caches are temporary stores of data that can exist in both hardware and software. Cache memory refers to the specific hardware component that allows computers to create caches at various levels of the network.

Types of cache memory

Cache memory is fast and expensive. Traditionally, it is categorized as "levels" that describe its closeness and accessibility to the microprocessor. There are three general cache levels:

L1 cache, or primary cache, is extremely fast but relatively small and is usually embedded in the processor chip as CPU cache.

L2 cache, or secondary cache, is often more capacious than L1. The L2 cache may be embedded on the CPU, or it can be on a separate chip or coprocessor and have a high-speed alternative system bus connecting the cache and CPU. That way it doesn't get slowed by traffic on the main system bus.

Level 3 (L3) cache is a specialized memory developed to improve the performance of L1 and L2. L1 or L2 can be significantly faster than L3, though L3 is usually double the speed of DRAM. With multicore processors, each core can have a dedicated L1 and L2 cache, but they can share an L3 cache. When the L3 cache references an instruction, that instruction is usually promoted to a higher level of cache.

In the past, L1, L2, and L3 caches have been created using combined processor and motherboard components. Recently, the trend has been toward consolidating all three levels of memory caching on the CPU itself. That's why the primary means for increasing cache size has begun to shift from the acquisition of a specific motherboard with different chipsets and bus architectures to buying a CPU with the right amount of integrated L1, L2, and L3 cache.

Contrary to popular belief, implementing flash or more dynamic RAM (DRAM) on a system won't increase cache memory. This can be confusing since the terms memory caching (hard disk buffering) and cache memory are often used interchangeably. Memory caching, using DRAM or flash to buffer disk reads, is meant to improve storage I/O by caching data that is frequently referenced in a buffer ahead of slower magnetic disk or tape. Cache memory, on the other hand, provides read buffering for the CPU.

Cache memory mapping

Caching configurations continue to evolve, but cache memory traditionally works under three different configurations:

Direct mapped cache has each block mapped to exactly one cache memory location. Conceptually, a direct-mapped cache is like rows in a table with three columns: the cache block that contains the actual data fetched and stored, a tag with all or part of the address of the data that was fetched, and a flag bit that shows the presence in the row entry of a valid bit of data.

Fully associative cache mapping is similar to direct mapping in structure but allows a memory block to be mapped to any cache location rather than to a prespecified cache memory location as is the case with direct mapping.

Set associative cache mapping can be viewed as a compromise between the direct mapping and fully associative mapping in which each block is mapped to a subset of cache locations. It is sometimes called N-way set associative mapping, which provides for a location in main memory to be cached to any of "N" locations in the L1 cache.
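To make the direct-mapped case concrete, here is a toy Python model (the line and block sizes are arbitrary small values chosen for illustration). Each memory block maps to exactly one line, so a second block that happens to map to the same line evicts the first — the conflict miss that associative designs avoid:

```python
class DirectMappedCache:
    """Toy direct-mapped cache: tracks tags only, no data payload."""

    def __init__(self, num_lines=8, block_size=16):
        self.num_lines = num_lines
        self.block_size = block_size
        self.tags = [None] * num_lines   # None stands in for a cleared valid bit

    def access(self, address):
        block = address // self.block_size
        index = block % self.num_lines   # the one line this block may occupy
        tag = block // self.num_lines    # identifies which block is resident
        if self.tags[index] == tag:
            return "hit"
        self.tags[index] = tag           # miss: fetch the block, evict old tag
        return "miss"

c = DirectMappedCache()
print(c.access(0))    # miss (cold cache)
print(c.access(4))    # hit  (same 16-byte block as address 0)
print(c.access(128))  # miss (block 8 maps to the same line, evicting block 0)
print(c.access(0))    # miss (conflict: block 0 was just evicted)
```

The last two accesses show why fully associative and set associative mappings exist: with more candidate locations per block, those conflict misses become hits.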

Data writing policies

Data can be written to memory using a variety of techniques, but the two main ones involving cache memory are:

Write-through. Data is written to both the cache and main memory at the same time.

Write-back. Data is only written to the cache initially. Data may then be written to the main memory, but this does not need to happen and does not inhibit the interaction from taking place.

The way data is written to the cache impacts data consistency and efficiency. For example, when using write-through, more writing needs to happen, which causes latency upfront. When using write-back, operations may be more efficient, but data may not be consistent between the main and cache memories.

One way a computer determines data consistency is by examining the dirty bit in memory. The dirty bit is an extra bit included in memory blocks that indicates whether the information has been modified. If data reaches the processor's register file with an active dirty bit, it means that it is not up to date and there are more recent versions elsewhere. This scenario is more likely to happen in a write-back scenario because the data is written to the two storage areas asynchronously.
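A minimal Python sketch of the two policies and the dirty bit (the class and method names here are invented for illustration, not from any real library):

```python
class WriteThroughCache:
    """Every write goes to the cache and main memory at the same time."""
    def __init__(self):
        self.cache, self.memory = {}, {}

    def write(self, addr, value):
        self.cache[addr] = value
        self.memory[addr] = value          # memory is always consistent


class WriteBackCache:
    """Writes touch only the cache; memory is updated on eviction."""
    def __init__(self):
        self.cache, self.memory = {}, {}   # cache maps addr -> (value, dirty)

    def write(self, addr, value):
        self.cache[addr] = (value, True)   # dirty bit set: memory is now stale

    def evict(self, addr):
        value, dirty = self.cache.pop(addr)
        if dirty:                          # write back only modified blocks
            self.memory[addr] = value


wt = WriteThroughCache()
wt.write(7, "a")          # memory already holds "a"

wb = WriteBackCache()
wb.write(7, "a")          # memory is still stale here
wb.evict(7)               # only now does memory receive "a"
```

The write-back version trades the upfront latency of write-through for a window in which cache and memory disagree, which is exactly the inconsistency the dirty bit tracks.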

Specialization and functionality

In addition to instruction and data caches, other caches are designed to provide specialized system functions. According to some definitions, the L3 cache's shared design makes it a specialized cache. Other definitions keep the instruction cache and the data cache separate and refer to each as a specialized cache.

Translation lookaside buffers (TLBs) are also specialized memory caches whose function is to record virtual address to physical address translations.

Still, other caches are not, technically speaking, memory caches at all. Disk caches, for instance, can use DRAM or flash memory to provide data caching similar to what memory caches do with CPU instructions. If data is frequently accessed from the disk, it is cached into DRAM or flash-based silicon storage technology for faster access time and response.

Specialized caches are also available for applications such as web browsers, databases, network address binding, and client-side Network File System protocol support. These types of caches might be distributed across multiple networked hosts to provide greater scalability or performance to an application that uses them.

Locality

The ability of cache memory to improve a computer's performance relies on the concept of locality of reference. Locality describes various situations that make a system more predictable. Cache memory takes advantage of these situations to create a pattern of memory access that it can rely upon.

There are several types of locality. Two key ones for the cache are:

Temporal locality. This is when the same resources are accessed repeatedly in a short amount of time.

Spatial locality. This refers to accessing various data or resources that are near each other.
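Both kinds of locality can be demonstrated with a tiny cache simulation (a fully associative LRU cache with made-up sizes). Sequential accesses hit often because neighbors share a block, while widely strided accesses that never reuse a block get no hits at all:

```python
from collections import OrderedDict

def count_hits(addresses, block_size=8, cache_blocks=4):
    """Count hits in a tiny fully associative LRU cache of whole blocks."""
    cache = OrderedDict()                  # insertion order tracks recency
    hits = 0
    for addr in addresses:
        block = addr // block_size         # neighboring addresses share a block
        if block in cache:
            hits += 1
            cache.move_to_end(block)       # mark as most recently used
        else:
            cache[block] = True
            if len(cache) > cache_blocks:
                cache.popitem(last=False)  # evict the least recently used block
    return hits

print(count_hits(list(range(64))))             # sequential scan: 56 hits of 64
print(count_hits([i * 8 for i in range(64)]))  # one access per block: 0 hits
```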

Performance

Cache memory is important because it improves the efficiency of data retrieval. It stores program instructions and data that are used repeatedly in the operation of programs or information that the CPU is likely to need next. The computer processor can access this information more quickly from the cache than from the main memory. Fast access to these instructions increases the overall speed of the program.

Aside from its main function of improving performance, cache memory is a valuable resource for evaluating a computer's overall performance. Users can do this by looking at cache's hit-to-miss ratio. Cache hits are instances in which the system successfully retrieves data from the cache. A cache miss is when the system looks for the data in the cache, can't find it, and looks somewhere else instead. In some cases, users can improve the hit-miss ratio by adjusting the cache memory block size -- the size of data units stored.  
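The same hit/miss bookkeeping exists in software caches too. Python's standard functools.lru_cache exposes it directly, which makes for a quick demonstration of a hit-to-miss ratio:

```python
from functools import lru_cache

@lru_cache(maxsize=128)
def fib(n):
    """Naive Fibonacci; the cache turns it from exponential to linear time."""
    return n if n < 2 else fib(n - 1) + fib(n - 2)

fib(20)
info = fib.cache_info()       # CacheInfo with hit and miss counters
hit_ratio = info.hits / (info.hits + info.misses)
print(info.hits, info.misses, round(hit_ratio, 2))
```

Without the cache, fib(20) would make over 20,000 calls; with it, every value is computed exactly once and the rest are hits.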

Improved performance and the ability to monitor performance are not just about improving general convenience for the user. As technology advances and is increasingly relied upon in mission-critical scenarios, having speed and reliability becomes crucial. Even a few milliseconds of latency could potentially lead to enormous expenses, depending on the situation.

Cache vs. main memory

DRAM serves as a computer's main memory, holding the data and program instructions that the CPU retrieves from storage and works on. Both DRAM and cache memory are volatile memories that lose their contents when the power is turned off. DRAM is installed on the motherboard, and the CPU accesses it through a bus connection.

DRAM is usually about half as fast as L1, L2, or L3 cache memory, and much less expensive. It provides faster data access than flash storage, hard disk drives (HDD), and tape storage. It came into use in the last few decades to provide a place to store frequently accessed disk data to improve I/O performance.

DRAM must be refreshed every few milliseconds. Cache memory, which also is a type of random access memory, does not need to be refreshed. It is built directly into the CPU to give the processor the fastest possible access to memory locations and provides nanosecond speed access time to frequently referenced instructions and data. SRAM is faster than DRAM, but because it's a more complex chip, it's also more expensive to make.

Cache vs. virtual memory

A computer has a limited amount of DRAM and even less cache memory. When a large program or multiple programs are running, it's possible for memory to be fully used. To compensate for a shortage of physical memory, the computer's operating system (OS) can create virtual memory.

To do this, the OS temporarily transfers inactive data from DRAM to disk storage. This approach increases virtual address space by using active memory in DRAM and inactive memory in HDDs to form contiguous addresses that hold both an application and its data. Virtual memory lets a computer run larger programs or multiple programs simultaneously, and each program operates as though it has unlimited memory.

In order to copy virtual memory into physical memory, the OS divides memory into page files or swap files that contain a certain number of addresses. Those pages are stored on a disk and when they're needed, the OS copies them from the disk to main memory and translates the virtual memory address into a physical one. These translations are handled by a memory management unit (MMU).
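The translation step the MMU performs can be sketched in a few lines of Python (4 KB is a typical page size; the page table contents below are hypothetical):

```python
PAGE_SIZE = 4096  # 4 KB, a common page size

def translate(vaddr, page_table):
    """Map a virtual address to a physical one via a simple page table."""
    page = vaddr // PAGE_SIZE        # which virtual page the address is in
    offset = vaddr % PAGE_SIZE       # position inside that page
    if page not in page_table:
        raise LookupError(f"page fault: virtual page {page} not resident")
    frame = page_table[page]         # physical frame holding that page
    return frame * PAGE_SIZE + offset

table = {0: 5, 1: 2}                 # virtual page -> physical frame
print(translate(4100, table))        # page 1, offset 4 -> 2*4096 + 4 = 8196
```

A real MMU does this in hardware, with the TLB mentioned earlier caching the most recent translations; the LookupError stands in for the page fault that makes the OS fetch the page from disk.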

Implementation and history

Mainframes used an early version of cache memory, but the technology as it is known today began to be developed with the advent of microcomputers. With early PCs, processor performance increased much faster than memory performance, and memory became a bottleneck, slowing systems.

In the 1980s, the idea took hold that a small amount of more expensive, faster SRAM could be used to improve the performance of the less expensive, slower main memory. Initially, the memory cache was separate from the system processor and not always included in the chipset. Early PCs typically had from 16 KB to 128 KB of cache memory.

With 486 processors, Intel added 8 KB of memory to the CPU as Level 1 (L1) memory. As much as 256 KB of external Level 2 (L2) cache memory was used in these systems. Pentium processors saw the external cache memory double again to 512 KB on the high end. They also split the internal cache memory into two caches: one for instructions and the other for data.

Processors based on Intel's P6 microarchitecture, introduced in 1995, were the first to incorporate L2 cache memory into the CPU and enable all of a system's cache memory to run at the same clock speed as the processor. Prior to the P6, L2 memory external to the CPU was accessed at a much slower clock speed than the rate at which the processor ran and slowed system performance considerably.

Early memory cache controllers used a write-through cache architecture, where data written into cache was also immediately updated in RAM. This approach minimized data loss, but also slowed operations. With later 486-based PCs, the write-back cache architecture was developed, where RAM isn't updated immediately. Instead, data is stored on cache, and RAM is updated only at specific intervals or under certain circumstances where data is missing or old.

Tuesday, December 15, 2020

CPU Performance Measures (Cores)

A CPU core is an individual processor within the CPU. In the old days, every processor had just one core that could focus on one task at a time. Today, CPUs have between two and 18 cores, each of which can work on a different task.

A core can work on one task, while another core works on a different task, so the more cores a CPU has, the more efficient it is. Many processors, especially those in laptops, have two cores, but some laptop CPUs (known as mobile CPUs), such as Intel’s 8th Generation processors, have four. You should shoot for at least four cores in your machine if you can afford it.

Most processors can use a process called simultaneous multithreading or if it’s an Intel processor, Hyper-threading (the two terms mean the same thing) to split a core into virtual cores, which are called threads. For example, AMD CPUs with four cores use simultaneous multithreading to provide eight threads, and most Intel CPUs with two cores use Hyper-threading to provide four threads.

Some apps take better advantage of multiple threads than others. Lightly-threaded apps, like games, don't benefit from a lot of cores, while most video editing and animation programs can run much faster with extra threads.
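The core-to-thread arithmetic is simple, and Python can also report what the operating system sees — os.cpu_count() returns logical processors, i.e. threads. A small sketch:

```python
import os

def logical_threads(physical_cores, threads_per_core=2):
    """With SMT/Hyper-Threading, each physical core exposes
    `threads_per_core` hardware threads to the operating system."""
    return physical_cores * threads_per_core

print(logical_threads(4))   # quad-core chip with Hyper-Threading -> 8 threads
print(logical_threads(2))   # dual-core -> 4 threads
print(os.cpu_count())       # logical processors visible on this machine
```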

Note: Intel also uses the term “Core” to brand some of its CPUs (ex: Intel Core i7-7500U processor). Of course, Intel CPUs (and all CPUs) that do not have the Core branding use cores as well. And the numbers you see in an Intel Core (or other) processor name are not a direct indication of how many cores the CPU has. For example, the Intel Core i7-7500U processor does not have seven cores.

Threading:

A thread is a virtual version of a CPU core. To create a thread, Intel CPUs use hyper-threading, and AMD CPUs use simultaneous multithreading, or SMT for short (they’re the same thing). These are both names for the process of breaking up physical cores into virtual cores (threads) to increase performance.


CPU Performance Measures (Clock Speed)

A CPU's clock speed represents how many cycles per second it can execute. Clock speed is also referred to as clock rate, processor frequency, and CPU frequency. It is measured in gigahertz (GHz), which refers to billions of cycles per second.

A CPU’s clock speed is an indicator of its performance and how rapidly it can process data (move individual bits). A higher frequency (bigger number) suggests better performance in common tasks, such as gaming. A CPU with a higher clock speed is generally better if all other factors are equal, but a mixture of clock speed, how many instructions the CPU can process per cycle (known as instructions per clock, or IPC for short), and the number of cores all help determine overall performance.
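That interplay between clock speed, IPC, and core count can be put into a crude formula. A hedged sketch (the chip figures are made up, and real performance also depends on memory, thermals, and how parallel the workload is):

```python
def instructions_per_second(ghz, ipc, cores):
    """Crude peak estimate: cycles/second * instructions/cycle * cores."""
    return ghz * 1e9 * ipc * cores

# A lower-clocked chip with more cores and better IPC can still come out ahead:
six_core = instructions_per_second(3.5, 8, 6)    # 1.68e11 instructions/s
quad_core = instructions_per_second(4.2, 6, 4)   # ~1.01e11 instructions/s
print(six_core > quad_core)
```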

Note that clock speed differs from the number of cores a CPU has; cores help you deal with less common, time-consuming workloads. Clock speed is also not to be confused with bus speed, which tells you how fast a PC can communicate with outside peripherals or components, like the mouse, keyboard, and monitor.

Most modern CPUs operate on a range of clock speeds, from the minimum "base" clock speed to a maximum "turbo" speed (which is higher/faster). When the processor encounters a demanding task, it can raise its clock speed temporarily to get the job done faster. However, higher clock speeds generate more heat and, to keep themselves from dangerously overheating, processors will "throttle" down to a lower frequency when they get too warm. A better CPU cooler will lead to higher sustainable speeds.

When buying a PC, its clock speed is a good measurement of performance, but it’s not the only one to consider when deciding if a PC is fast enough for you. Other factors include, again, bus speed and core count, as well as the hard drive, RAM, and SSD (solid-state drive).

You can reach faster clock speeds through a process called overclocking, which is generally done on gaming processors.