Monday, December 21, 2020

SSD vs HDD

A hard disk drive (HDD) is an old-school storage device that uses mechanical platters and a moving read/write head to access data. A solid-state drive (SSD) is a newer, faster type of device that stores data on instantly-accessible memory chips.

HDDs: 

An enclosure contains a series of platters covered by a ferromagnetic coating. The direction of the magnetization represents the individual bits. Data is read and written by a head (similar to the way vinyl record albums work) that moves extremely fast from one area of the disk to another. Since all of these pieces are “mechanical,” the hard disk is the slowest component of any computer – and the most fragile.

SSD: 

These newer types of disks store information on flash memory, which consists of individual memory cells storing bits that are instantly accessible by the controller. 

Why are SSDs useful for laptops?

While lower-priced laptops still come with traditional hard drives (it’s one way for manufacturers to minimize their costs), most mid-range to high-end machines come with an SSD.

Due to their non-mechanical nature, SSDs require less power, which translates into better battery life.

They’re also shock-resistant. Hard disks have moving parts. If you drop your laptop, chances are that the read/write head of an old-school hard drive is in motion, which could lead to data loss. This doesn’t apply to SSDs.

It isn’t always an either/or choice. In some cases, you find “hybrid” computers. The system partition that contains the operating system, application programs, and the most-used files is installed on an SSD. Other data, such as movies, photos, and documents, are stored on a traditional HDD, which is larger and less expensive.

How much faster are solid-state drives compared to hard disk drives?

The speed difference is significant. SSDs are extremely fast in all areas, but the speed difference is more pronounced when performing certain tasks, such as:

Sequential read/write operations: Copying and moving huge files (such as movies) is where the difference is most apparent. An old-school HDD copies at around 30-150 MB per second (MB/s), while the same action runs at about 500 MB/s on a normal SATA SSD, or even 3,000-3,500 MB/s on newer NVMe SSDs. At those speeds, copying a 20 GB movie finishes in roughly 40 seconds on a SATA SSD (and under 10 seconds on a fast NVMe drive), while a hard disk needs at least two minutes.
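
As a sanity check on those figures, the transfer times work out with simple division. This is a quick sketch; the throughput numbers are the ballpark figures quoted above, not benchmarks of any particular drive:

```python
# Transfer time = file size / sustained throughput. The speeds are the
# ballpark figures quoted above, not benchmarks of any particular drive.

FILE_SIZE_MB = 20 * 1000  # a 20 GB movie, in megabytes

drives = {
    "HDD (150 MB/s)": 150,
    "SATA SSD (500 MB/s)": 500,
    "NVMe SSD (3500 MB/s)": 3500,
}

for name, mb_per_s in drives.items():
    print(f"{name}: {FILE_SIZE_MB / mb_per_s:.0f} s")
# HDD: ~133 s, SATA SSD: ~40 s, NVMe SSD: ~6 s
```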

Small “4K” read/write operations: Most of the time, when you run Windows (or macOS), open programs, or browse the web, you’re actually opening and manipulating thousands of smaller files, which are stored in small blocks of data (usually sized at 4K). The faster your disk can read (and write) these 4K blocks, the faster and snappier your system seems. With HDDs, the speed ranges from 0.1 to 1.7 megabytes per second (MB/s). SSDs and NVMe SSDs, however, operate at much faster speeds of 50-250 MB/s in 4K reads/writes.

What’s the lifespan of an SSD?

There are lots of myths surrounding SSD life spans, and the assumptions go back to the early days of SSDs in the 1990s and early 2000s. It is true that SSD cells have a limited lifespan, but today this is not really an issue.

In theory, the more data written to a cell, the faster it wears out. Nowadays, an SSD cell survives about 3,000 write cycles, which doesn’t sound like much at first. But thanks to the principle of wear leveling, the SSD controller makes sure that write operations are spread evenly across all cells in order to minimize “cell death.” Additionally, modern SSDs contain spare cells that will replace cells that go bad. This is called bad block management, and it’s why the larger the SSD, the longer its lifespan.
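The idea behind wear leveling can be sketched in a few lines. This is a toy model, not any real controller's algorithm: it simply steers every write to the least-worn cell, so wear accumulates evenly instead of killing one cell early.

```python
# Toy wear-leveling sketch (not a real SSD controller's algorithm):
# every write goes to the least-worn cell so wear spreads evenly.

CELL_COUNT = 8
ENDURANCE = 3000          # write cycles each cell is assumed to survive

wear = [0] * CELL_COUNT   # writes absorbed by each cell so far

def write_block(data):
    """Write to the least-worn cell, mimicking wear leveling."""
    target = wear.index(min(wear))   # pick the cell with the fewest writes
    wear[target] += 1
    return target

# Exhaust the drive's total endurance: 8 cells x 3000 cycles of writes.
for _ in range(CELL_COUNT * ENDURANCE):
    write_block(b"...")

# Every cell ends up evenly worn instead of one cell dying early.
print(wear)  # [3000, 3000, 3000, 3000, 3000, 3000, 3000, 3000]
```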

However, even if you were to constantly write data onto an SSD, 24 hours a day, you’d still have decades until the drive eventually dies. 

What about capacity differences between HDDs and SSDs?

If you are concerned about how much information you can store on each type of drive, be reassured. There are no differences in storage capacity. You can get HDDs and SSDs in similar sizes. Usually, the range is 128 GB to 2 TB. And if you need to radically free up space, you can easily format any hard drive, internal or external — no matter if it's an HDD or SSD.

Is an HDD or an SSD better for gaming?

Given the huge amounts of data a game has to shuffle back and forth (loading levels, character models, etc), an SSD helps games load and run faster. You’ll also experience less stutter when playing games, as the rest of your PC doesn’t need to wait for game data to load – which can give you quite an advantage, especially in the eSports arena.

Here’s a simple example: Loading the world of GTA V takes about 25 seconds on my Samsung 970 Evo Plus SSD, compared to more than two minutes when using an old mechanical hard disk. A game-changer.

Friday, December 18, 2020

GPU (Graphics Processing Unit)

 If we think of a central processing unit (CPU) as the logical thinking section of a computer’s silicon brain, then the graphics processing unit (GPU) is its creative side, helping render graphical user interfaces into visually attractive icons and designs rather than reams of black and white lines. 

While many CPUs come with some form of integrated GPU to ensure that Windows can be displayed on a connected screen, there is a myriad of more intensive graphics-based tasks, such as video rendering and computer-aided design (CAD), that often require a dedicated or discrete GPU, typically in the form of a graphics card. 

When it comes to the latter, Nvidia and AMD are the two main players in the graphics card arena, while Intel’s own Iris Plus and UHD integrated GPUs tend to carry out a lot of light-weight work in laptops without dedicated graphics. On the mobile side, the likes of Qualcomm and MediaTek provide lightweight GPUs for handheld devices, though these often come in system-on-a-chip (SoC) designs where the GPU is on the same chip as the CPU and other core mobile chipset components. 

It can be easy to think of a GPU as something only people keen on playing PC games are interested in, but a GPU provides a lot more than just graphical grunt.

What does a GPU do?

"GPU" became a popular term for the component that powers graphics on a machine in the 1990s when it was coined by chip manufacturer Nvidia. The company's GeForce range of graphics cards was the first to be popularised and ensured related technologies such as hardware acceleration, programmable shading, and stream processing were able to evolve.

While the task of rendering basic objects, like an operating system's desktop environment, can usually be handled by the limited graphics processing functionalities built into the CPU, some more strenuous workloads require the extra horsepower, which is where a dedicated GPU comes in.

In short, a GPU is a processor that is specially designed to handle intensive graphics rendering tasks.

Computer-generated graphics - such as those found in videogames or other animated mediums - require each separate frame to be individually 'drawn' by the computer, which requires a large amount of power.

Most high-end desktop PCs will feature a dedicated graphics card, which occupies one of the motherboard's PCIe slots. These usually have their own dedicated memory allocation built into the card, which is reserved exclusively for graphical operations. Some particularly advanced PCs will even use two GPUs hooked up together to provide even more processing power.

Laptops, meanwhile, often carry mobile GPUs, which are smaller and less powerful than their desktop counterparts. This allows laptops to fit an otherwise bulky GPU into a smaller chassis, at the expense of some of the raw performance offered by desktop cards.

What are GPUs used for?

GPUs are most commonly used to drive high-quality gaming experiences, producing life-like digital graphics and super-slick rendering. However, there are also several business applications that rely on powerful graphics chips.

3D modeling software like AutoCAD, for example, uses GPUs to render models. Because the people that work with this kind of software tend to make multiple small changes in a short period of time, the PC they're working with needs to be able to quickly re-render the model.

Video editing is another common use-case; while some powerful CPUs can handle basic video editing, if you're working with large amounts of high-resolution files - particularly 4K or 360-degree video - a high-end GPU is a must-have in order to transcode the files at a reasonable speed.

GPUs are often favored over CPUs for use in machine learning too, as they can process more functions in a given period of time than CPUs. This makes them better-suited to creating neural networks, due to the volume of data they need to deal with.

Not all GPUs are created equal, however. Manufacturers like AMD and Nvidia commonly produce specialized enterprise versions of their chips, which are designed specifically with these kinds of applications in mind and come with more in-depth support provided.


How a GPU works

CPU and GPU architectures are differentiated by the number of cores. A core is essentially a processor within the processor. Most CPUs have between four and eight cores, though some have up to 32. Each core can process its own tasks, or threads. Because some processors have multithreading capability -- in which the core is divided virtually, allowing a single core to process two threads -- the number of threads can be much higher than the number of cores. This can be useful in video editing and transcoding. CPUs typically run two threads (independent instruction streams) per core, while GPUs can have four to 10 threads per core.

CPUs and the End of Moore’s Law

With Moore’s law winding down, GPUs, invented by NVIDIA in 1999, came just in time.

Moore's Law

Moore’s law posits that the number of transistors that can be crammed into an integrated circuit will double about every two years. For decades, that’s driven a rapid increase of computing power. That law, however, has run up against hard physical limits.

GPUs offer a way to continue accelerating applications — such as graphics, supercomputing and AI — by dividing tasks among many processors. Such accelerators are critical to the future of semiconductors, according to John Hennessy and David Patterson, winners of the 2017 A.M. Turing Award and authors of Computer Architecture: A Quantitative Approach, the seminal textbook on microprocessors.

Thursday, December 17, 2020

Intel (i3, i5, i7, i9)

 A processor is like a computer’s brain - and Intel® Core™ processors are the most powerful. They have multiple cores for more power and smoother multi-tasking.

There are 4 main categories: i3, i5, i7, and i9. Each has numerous spec options, but broadly speaking, i5 is superior to i3, i7 to i5, and so on.

What is the difference between Intel Generations?

Each year the Intel Core processors are updated. 2018’s updates are known as the 8th generation.

Not sure what generation of processor your PC has? The number after the hyphen in its model number should give you a clue – for example, an Intel Core i7-7820HQ is 7th gen.

8th Generation Intel Core processors

Intel’s 8th Generation of processors has moved with the times and delivered some exciting new features. Including:

Incredible VR experiences

High-quality 4K UHD content

Two more cores – up to 6 instead of the original 4

Introduction of the super-powered Intel Core i9 and X-series.

 Core i3: Everyday users – web browsing, Word, and media streaming

If you’re after a laptop for everyday computing tasks like web browsing, video streaming, and office-type work, the i3 is great.

The latest 8th Gen has up to 4 cores – twice as many as the 7th Gen – making day-to-day use a lot slicker. It even lets you watch content in immersive 4K and 360° viewing.

With a Core i3 you can:

Browse multiple webpages smoothly. Work in Word or Excel. Stream movies and TV shows from Netflix in HD. Listen to music on Spotify. Multi-task efficiently with Intel Hyper-Threading technology.

Core i5 – Multitaskers, casual gamers, and everyday creatives

With a Core i5 you can:

Smoothly multitask - work on spreadsheets, stream music, and browse the web. Work on complicated tasks – like rendering big Excel files. Edit in Photoshop and sketch in Illustrator. Create, share, and watch 4K content. Play intensive PC games – see more of Intel's gaming processors. Benefit from faster repeated tasks thanks to the large cache size. Stream from multiple sites. Get a temporary boost when using demanding programs with Intel Turbo Boost Technology 2.0. 

Core i7 - Video editors, designers, and gamers who crave power

If you do advanced video editing or 3D modeling, you’ll need a very fast processor and good graphics – so look at an i7. It’s perfect for the demands of gaming fans too. With 6 physical cores and Hyper-Threading technology to create another 6 virtual cores, it has enough power for the most demanding tasks.

With a Core i7 you can:

Encode video more efficiently. Work smoothly in 3D modeling programs. Smoothly edit in Photoshop and sketch in Illustrator. Watch and edit 4K UHD content and 360° videos. Work productively with demanding creative programs – each core uses 2 ‘threads’ rather than 1 with Intel's Hyper-Threading technology. Benefit from faster repeated tasks thanks to the large cache size. Get a temporary power boost when using demanding programs with Intel Turbo Boost Technology 2.0.

Core i9 – Extreme gaming, mega-tasking, and high-end content creation

The newest addition to the Intel family, the Core i9 X-Series, is Intel’s most powerful processor with 18 cores and 36 threads. And with updated Intel® Turbo Boost Max Technology 3.0, it elevates everything you do to new heights.

i9 turns your PC into a studio, producing breathtaking 4K or 360° videos, stunning photos, or high-quality music. Gamers won’t be disappointed either – this is the ultimate tool for virtual reality gaming.

With a Core i9 you can:

Produce, edit, and share 4K UHD content and 360° videos. Work smoothly in 3D modeling programs. Produce and edit high-quality music. Smoothly edit in Photoshop and sketch in Illustrator. Enjoy the ultimate gaming and VR experience. Work more productively when using demanding creative programs. Benefit from faster repeated tasks thanks to the large cache size.

Intel Core X-series Processor family

X-series variations of the Intel Core range are now available. These unlocked versions of the original platforms are supercharged and allow for overclocking. They’re designed to scale to your performance needs by delivering options from 4 to 18 cores for extreme performance and the latest technological advancements, while providing headroom for the future.


CPU Performance Measures (Cache)

Cache memory is a chip-based computer component that makes retrieving data from the computer's memory more efficient. It acts as a temporary storage area that the computer's processor can retrieve data from easily. This temporary storage area, known as a cache, is more readily available to the processor than the computer's main memory source, typically some form of DRAM.

Cache memory is sometimes called CPU (central processing unit) memory because it is typically integrated directly into the CPU chip or placed on a separate chip that has a separate bus interconnect with the CPU. Therefore, it is more accessible to the processor and able to increase efficiency, because it's physically close to the processor.

In order to be close to the processor, cache memory needs to be much smaller than main memory. Consequently, it has less storage space. It is also more expensive than the main memory, as it is a more complex chip that yields higher performance.

What it sacrifices in size and price, it makes up for in speed. Cache memory operates between 10 to 100 times faster than RAM, requiring only a few nanoseconds to respond to a CPU request.

The name of the actual hardware that is used for cache memory is high-speed static random access memory (SRAM). The name of the hardware that is used in a computer's main memory is dynamic random access memory (DRAM).

Cache memory is not to be confused with the broader term cache. Caches are temporary stores of data that can exist in both hardware and software. Cache memory refers to the specific hardware component that allows computers to create caches at various levels of the network.

Types of cache memory

Cache memory is fast and expensive. Traditionally, it is categorized as "levels" that describe its closeness and accessibility to the microprocessor. There are three general cache levels:

L1 cache, or primary cache, is extremely fast but relatively small and is usually embedded in the processor chip as CPU cache.

L2 cache, or secondary cache, is often more capacious than L1. The L2 cache may be embedded on the CPU, or it can be on a separate chip or coprocessor and have a high-speed alternative system bus connecting the cache and CPU. That way it doesn't get slowed by traffic on the main system bus.

Level 3 (L3) cache is a specialized memory developed to improve the performance of L1 and L2. L1 and L2 can be significantly faster than L3, though L3 is usually double the speed of DRAM. With multicore processors, each core can have a dedicated L1 and L2 cache, but they can share an L3 cache. When data or an instruction is found in the L3 cache, it is usually promoted to a higher level of cache.

In the past, L1, L2, and L3 caches have been created using combined processor and motherboard components. Recently, the trend has been toward consolidating all three levels of memory caching on the CPU itself. That's why the primary means for increasing cache size has begun to shift from the acquisition of a specific motherboard with different chipsets and bus architectures to buying a CPU with the right amount of integrated L1, L2, and L3 cache.

Contrary to popular belief, implementing flash or more dynamic RAM (DRAM) on a system won't increase cache memory. This can be confusing since the terms memory caching (hard disk buffering) and cache memory are often used interchangeably. Memory caching, using DRAM or flash to buffer disk reads, is meant to improve storage I/O by caching data that is frequently referenced in a buffer ahead of slower magnetic disk or tape. Cache memory, on the other hand, provides read buffering for the CPU.

Cache memory mapping

Caching configurations continue to evolve, but cache memory traditionally works under three different configurations:

Direct mapped cache has each block mapped to exactly one cache memory location. Conceptually, a direct-mapped cache is like rows in a table with three columns: the cache block that contains the actual data fetched and stored, a tag with all or part of the address of the data that was fetched, and a flag bit that shows the presence in the row entry of a valid bit of data.

Fully associative cache mapping is similar to direct mapping in structure but allows a memory block to be mapped to any cache location rather than to a prespecified cache memory location as is the case with direct mapping.

Set associative cache mapping can be viewed as a compromise between the direct mapping and fully associative mapping in which each block is mapped to a subset of cache locations. It is sometimes called N-way set associative mapping, which provides for a location in main memory to be cached to any of "N" locations in the L1 cache.
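To make the direct-mapped case concrete, here is a minimal sketch in Python. The line count, block size, and stored strings are invented for illustration; a real cache does this in hardware:

```python
# A minimal direct-mapped cache: each memory block can live in exactly one
# cache line, chosen by block_number % LINE_COUNT. Each line keeps a valid
# flag and a tag recording which block currently occupies it.

LINE_COUNT = 4
BLOCK_SIZE = 64  # bytes per block

cache = [{"valid": False, "tag": None, "data": None} for _ in range(LINE_COUNT)]

def access(address):
    block = address // BLOCK_SIZE
    index = block % LINE_COUNT        # the single line this block may use
    tag = block // LINE_COUNT
    line = cache[index]
    if line["valid"] and line["tag"] == tag:
        return "hit"
    # Miss: fetch from main memory, replacing whatever the line held.
    line.update(valid=True, tag=tag, data=f"block {block}")
    return "miss"

print(access(0))     # miss (cold cache)
print(access(16))    # hit: address 16 is in the same 64-byte block
print(access(1024))  # miss: same line (index 0), different tag, so block 0 is evicted
```

A fully associative or set associative cache differs only in how many candidate lines the `index` step yields.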

Data writing policies

Data can be written to memory using a variety of techniques, but the two main ones involving cache memory are:

Write-through. Data is written to both the cache and main memory at the same time.

Write-back. Data is only written to the cache initially. Data may then be written to the main memory, but this does not need to happen and does not inhibit the interaction from taking place.

The way data is written to the cache impacts data consistency and efficiency. For example, when using write-through, more writing needs to happen, which causes latency upfront. When using write-back, operations may be more efficient, but data may not be consistent between the main and cache memories.

One way a computer determines data consistency is by examining the dirty bit in memory. The dirty bit is an extra bit included in memory blocks that indicates whether the information has been modified. If data reaches the processor's register file with an active dirty bit, it means that it is not up to date and there are more recent versions elsewhere. This scenario is more likely to happen in a write-back scenario because the data is written to the two storage areas asynchronously.
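The difference between the two policies, and the role of the dirty bit, can be sketched like this. The dicts standing in for cache and main memory are purely illustrative:

```python
# Toy sketch of the two write policies; plain dicts stand in for the
# cache and main memory (DRAM).

memory = {}  # "main memory"
cache = {}   # address -> {"value": ..., "dirty": bool}

def write_through(addr, value):
    cache[addr] = {"value": value, "dirty": False}
    memory[addr] = value                            # memory updated immediately

def write_back(addr, value):
    cache[addr] = {"value": value, "dirty": True}   # memory NOT updated yet

def flush(addr):
    """Later, the controller writes dirty lines back to main memory."""
    line = cache[addr]
    if line["dirty"]:
        memory[addr] = line["value"]
        line["dirty"] = False

write_through(0x10, "a")
write_back(0x20, "b")
print(memory.get(0x10))  # a    (consistent right away)
print(memory.get(0x20))  # None (stale until the dirty line is flushed)
flush(0x20)
print(memory.get(0x20))  # b
```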

Specialization and functionality

In addition to instruction and data caches, other caches are designed to provide specialized system functions. According to some definitions, the L3 cache's shared design makes it a specialized cache. Other definitions keep the instruction cache and the data cache separate and refer to each as a specialized cache.

Translation lookaside buffers (TLBs) are also specialized memory caches whose function is to record virtual address to physical address translations.

Still, other caches are not, technically speaking, memory caches at all. Disk caches, for instance, can use DRAM or flash memory to provide data caching similar to what memory caches do with CPU instructions. If data is frequently accessed from the disk, it is cached into DRAM or flash-based silicon storage technology for faster access time and response.

Specialized caches are also available for applications such as web browsers, databases, network address binding, and client-side Network File System protocol support. These types of caches might be distributed across multiple networked hosts to provide greater scalability or performance to an application that uses them.

Locality

The ability of cache memory to improve a computer's performance relies on the concept of locality of reference. Locality describes various situations that make a system more predictable. Cache memory takes advantage of these situations to create a pattern of memory access that it can rely upon.

There are several types of locality. Two key ones for the cache are:

Temporal locality. This is when the same resources are accessed repeatedly in a short amount of time.

Spatial locality. This refers to accessing various data or resources that are near each other.
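A quick demonstration of spatial locality: scanning a 2-D structure row by row touches neighboring elements in order, while scanning column by column jumps around in memory. In a language with flat arrays, like C, the row-major scan is dramatically faster; in CPython the gap is smaller but often still measurable:

```python
# Spatial locality demo: both loops sum the same 1,000,000 elements,
# but the row-major scan visits memory in order while the column-major
# scan hops between rows. Timings vary by machine; the point is the
# access pattern, not the absolute numbers.
import time

N = 1000
grid = [[0] * N for _ in range(N)]

start = time.perf_counter()
row_sum = sum(grid[i][j] for i in range(N) for j in range(N))  # row-major
row_time = time.perf_counter() - start

start = time.perf_counter()
col_sum = sum(grid[i][j] for j in range(N) for i in range(N))  # column-major
col_time = time.perf_counter() - start

print(f"row-major: {row_time:.3f}s  column-major: {col_time:.3f}s")
```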

Performance

Cache memory is important because it improves the efficiency of data retrieval. It stores program instructions and data that are used repeatedly in the operation of programs or information that the CPU is likely to need next. The computer processor can access this information more quickly from the cache than from the main memory. Fast access to these instructions increases the overall speed of the program.

Aside from its main function of improving performance, cache memory is a valuable resource for evaluating a computer's overall performance. Users can do this by looking at cache's hit-to-miss ratio. Cache hits are instances in which the system successfully retrieves data from the cache. A cache miss is when the system looks for the data in the cache, can't find it, and looks somewhere else instead. In some cases, users can improve the hit-miss ratio by adjusting the cache memory block size -- the size of data units stored.  
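The hit-to-miss ratio translates into an expected access time with a standard back-of-the-envelope formula, average memory access time (AMAT). The latencies below are hypothetical round numbers, not measurements:

```python
# AMAT = hit time + miss rate x miss penalty.
# The latencies are hypothetical round numbers, not measurements.

hits, misses = 950, 50
miss_rate = misses / (hits + misses)

cache_latency_ns = 1    # assumed cache hit time
dram_latency_ns = 100   # assumed penalty for going to main memory

amat = cache_latency_ns + miss_rate * dram_latency_ns
print(f"hit ratio: {1 - miss_rate:.0%}, average access: {amat:.0f} ns")
# hit ratio: 95%, average access: 6 ns
```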

Improved performance and the ability to monitor performance are not just about improving general convenience for the user. As technology advances and is increasingly relied upon in mission-critical scenarios, having speed and reliability becomes crucial. Even a few milliseconds of latency could potentially lead to enormous expenses, depending on the situation.

Cache vs. main memory

DRAM serves as a computer's main memory, holding the data and instructions the processor works on after they're retrieved from storage. Both DRAM and cache memory are volatile memories that lose their contents when the power is turned off. DRAM is installed on the motherboard, and the CPU accesses it through a bus connection.

DRAM is considerably slower than L1, L2, or L3 cache memory, and much less expensive. It provides faster data access than flash storage, hard disk drives (HDD), and tape storage. Over the last few decades, it has also been used as a place to store frequently accessed disk data to improve I/O performance.

DRAM must be refreshed every few milliseconds. Cache memory, which also is a type of random access memory, does not need to be refreshed. It is built directly into the CPU to give the processor the fastest possible access to memory locations and provides nanosecond speed access time to frequently referenced instructions and data. SRAM is faster than DRAM, but because it's a more complex chip, it's also more expensive to make.

Cache vs. virtual memory

A computer has a limited amount of DRAM and even less cache memory. When a large program or multiple programs are running, it's possible for memory to be fully used. To compensate for a shortage of physical memory, the computer's operating system (OS) can create virtual memory.

To do this, the OS temporarily transfers inactive data from DRAM to disk storage. This approach increases virtual address space by using active memory in DRAM and inactive memory in HDDs to form contiguous addresses that hold both an application and its data. Virtual memory lets a computer run larger programs or multiple programs simultaneously, and each program operates as though it has unlimited memory.

In order to copy virtual memory into physical memory, the OS divides memory into page files or swap files that contain a certain number of addresses. Those pages are stored on a disk and when they're needed, the OS copies them from the disk to main memory and translates the virtual memory address into a physical one. These translations are handled by a memory management unit (MMU).
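The translation step can be sketched as follows. The page size and page-table entries are made up for illustration; a real MMU does this in hardware, usually with multi-level page tables and a TLB:

```python
# Toy MMU translation: split the virtual address into a page number and
# an offset, look the page up, and splice the physical frame back in.
# PAGE_SIZE and the page-table entries are invented for illustration.

PAGE_SIZE = 4096
page_table = {0: 7, 1: 3, 2: None}  # virtual page -> physical frame (None = paged out)

def translate(virtual_addr):
    page, offset = divmod(virtual_addr, PAGE_SIZE)
    frame = page_table.get(page)
    if frame is None:
        raise RuntimeError("page fault: the OS must load this page from disk")
    return frame * PAGE_SIZE + offset

print(hex(translate(0x1004)))  # virtual page 1 -> frame 3 -> 0x3004
```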

Implementation and history

Mainframes used an early version of cache memory, but the technology as it is known today began to be developed with the advent of microcomputers. With early PCs, processor performance increased much faster than memory performance, and memory became a bottleneck, slowing systems.

In the 1980s, the idea took hold that a small amount of more expensive, faster SRAM could be used to improve the performance of the less expensive, slower main memory. Initially, the memory cache was separate from the system processor and not always included in the chipset. Early PCs typically had from 16 KB to 128 KB of cache memory.

With 486 processors, Intel added 8 KB of memory to the CPU as Level 1 (L1) memory. As much as 256 KB of external Level 2 (L2) cache memory was used in these systems. Pentium processors saw the external cache memory double again to 512 KB on the high end. They also split the internal cache memory into two caches: one for instructions and the other for data.

Processors based on Intel's P6 microarchitecture, introduced in 1995, were the first to incorporate L2 cache memory into the CPU and enable all of a system's cache memory to run at the same clock speed as the processor. Prior to the P6, L2 memory external to the CPU was accessed at a much slower clock speed than the rate at which the processor ran and slowed system performance considerably.

Early memory cache controllers used a write-through cache architecture, where data written into the cache was also immediately updated in RAM. This approach minimized data loss, but also slowed operations. With later 486-based PCs, the write-back cache architecture was developed, where RAM isn't updated immediately. Instead, data is stored in the cache, and RAM is updated only at specific intervals or under certain circumstances where data is missing or old.

Tuesday, December 15, 2020

CPU Performance Measures (Cores)

A CPU core is a CPU’s processor. In the old days, every processor had just one core that could focus on one task at a time. Today, CPUs have between two and 18 cores, each of which can work on a different task.

A core can work on one task, while another core works on a different task, so the more cores a CPU has, the more efficient it is. Many processors, especially those in laptops, have two cores, but some laptop CPUs (known as mobile CPUs), such as Intel’s 8th Generation processors, have four. You should shoot for at least four cores in your machine if you can afford it.

Most processors can use a process called simultaneous multithreading or, if it’s an Intel processor, Hyper-Threading (the two terms mean the same thing) to split a core into virtual cores, which are called threads. For example, AMD CPUs with four cores use simultaneous multithreading to provide eight threads, and most Intel CPUs with two cores use Hyper-Threading to provide four threads.

Some apps take better advantage of multiple threads than others. Lightly-threaded apps, like games, don't benefit from a lot of cores, while most video editing and animation programs can run much faster with extra threads.

Note: Intel also uses the term “Core” to brand some of its CPUs (ex: Intel Core i7-7500U processor). Of course, Intel CPUs (and all CPUs) that do not have the Core branding use cores as well. And the number you see in an Intel Core (or other) processor’s name does not directly correlate with how many cores the CPU has. For example, the Intel Core i7-7500U processor does not have seven cores.

Threading:

A thread is a virtual version of a CPU core. To create a thread, Intel CPUs use hyper-threading, and AMD CPUs use simultaneous multithreading, or SMT for short (they’re the same thing). These are both names for the process of breaking up physical cores into virtual cores (threads) to increase performance.
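From software, what the operating system usually reports is the logical processor (thread) count rather than the physical core count. For example, in Python:

```python
# os.cpu_count() reports logical processors (hardware threads). On a
# 4-core CPU with Hyper-Threading/SMT enabled, this is typically 8.
import os

logical = os.cpu_count()
print(f"logical processors (hardware threads): {logical}")
```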


CPU Performance Measures (Clock Speed)

A CPU's clock speed represents how many cycles per second it can execute. Clock speed is also referred to as clock rate, processor frequency, and CPU frequency. It is measured in gigahertz (GHz), which refers to billions of cycles per second.

A PC’s clock speed is an indicator of its performance and how rapidly a CPU can process data (move individual bits). A higher frequency (bigger number) suggests better performance in common tasks, such as gaming. A CPU with higher clock speed is generally better if all other factors are equal, but a mixture of clock speed, how many instructions the CPU can process per cycle (also known as instructions per clock cycle/clock, or IPC for short) and the number of cores the CPU has all help determine overall performance.
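How clock speed, IPC, and core count combine can be illustrated with rough arithmetic. All figures below are hypothetical:

```python
# Back-of-the-envelope throughput estimate combining clock speed, IPC,
# and core count. The figures are hypothetical; real performance also
# depends on memory, caches, and how well the workload parallelizes.

def instructions_per_second(cores, ghz, ipc):
    return cores * ghz * 1e9 * ipc

older = instructions_per_second(cores=2, ghz=3.5, ipc=1)   # higher clock
newer = instructions_per_second(cores=4, ghz=3.0, ipc=2)   # lower clock

print(f"newer chip: ~{newer / older:.1f}x the throughput despite a lower clock")
```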

Note that clock speed differs from the number of cores a CPU has; cores help you deal with less common, time-consuming workloads. Clock speed is also not to be confused with bus speed, which tells you how fast a PC can communicate with outside peripherals or components, like the mouse, keyboard, and monitor.

Most modern CPUs operate on a range of clock speeds, from the minimum "base" clock speed to a maximum "turbo" speed (which is higher/faster). When the processor encounters a demanding task, it can raise its clock speed temporarily to get the job done faster. However, higher clock speeds generate more heat and, to keep themselves from dangerously overheating, processors will "throttle" down to a lower frequency when they get too warm. A better CPU cooler will lead to higher sustainable speeds.

When buying a PC, its clock speed is a good measurement of performance, but it’s not the only one to consider when deciding if a PC is fast enough for you. Other factors include, again, bus speed and core count, as well as the hard drive, RAM, and SSD (solid-state drive).

You can reach faster clock speeds through a process called overclocking. Overclocking is generally used on gaming processors.

Sunday, December 13, 2020

Processor

 A processor is an integrated electronic circuit that performs the calculations that run a computer. A processor performs arithmetical, logical, input/output (I/O), and other basic instructions that are passed from an operating system (OS). Most other processes are dependent on the operations of a processor.

The terms processor, central processing unit (CPU), and microprocessor are commonly used as synonyms. Although most people use the word “processor” interchangeably with “CPU” nowadays, this is technically incorrect, since the CPU is just one of the processors inside a personal computer (PC).

The Graphics Processing Unit (GPU) is another processor, and even some hard drives are technically capable of performing some processing.

Processors are found in many modern electronic devices, including PCs, smartphones, tablets, and other handheld devices. Their purpose is to receive input in the form of program instructions and execute enormous numbers of calculations to provide the output the user interacts with.

A processor includes an arithmetic logic unit (ALU) and a control unit (CU); its capability is measured in terms of the following:

The number of instructions it can process at a given time.

The maximum number of bits it can handle per instruction.

Its relative clock speed.

Every time an operation is performed on a computer, such as when a file is changed or an application is opened, the processor must interpret the instructions of the operating system or software. Depending on its capabilities, these operations run quicker or slower, and this has a big impact on what is called the “processing speed” of the CPU.

Each processor consists of one or more individual processing units called “cores”. Each core processes instructions from a single computing task at a certain speed, called the “clock speed” and measured in gigahertz (GHz). Since increasing clock speed beyond a certain point became too difficult technically, modern computers now have several processor cores (dual-core, quad-core, etc.) that work together to process instructions and complete multiple tasks at the same time.

Modern desktop and laptop computers also have a separate processor, the GPU, to handle graphics rendering and send output to the display. Because the GPU is specifically designed for this task, computers can handle graphics-intensive applications, such as video games, much more efficiently.

A processor is made of four basic elements: the arithmetic logic unit (ALU), the floating-point unit (FPU), registers, and cache memory. The ALU and FPU carry out basic and advanced arithmetic and logic operations on numbers; the results are sent to the registers, which also store instructions. Caches are small, fast memories that hold copies of frequently used data and act similarly to random access memory (RAM).

The CPU carries out its operations through the three main steps of the instruction cycle: fetch, decode, and execute.

Fetch: the CPU retrieves instructions, usually from RAM.

Decode: a decoder converts the instruction into signals to the other components of the computer.

Execute: the now decoded instructions are sent to each component so that the desired operation can be performed.
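The fetch-decode-execute cycle above can be sketched as a toy interpreter loop. The three-instruction machine, its opcodes, and the sample program are all invented for illustration; a real CPU does this in hardware, billions of times per second.

```python
# Minimal fetch-decode-execute loop for a made-up 3-instruction machine.
# The instruction set and program are invented for illustration.

memory = [
    ("LOAD", 5),    # put 5 in the accumulator
    ("ADD", 3),     # add 3 to it
    ("HALT", None), # stop
]

accumulator, pc, running = 0, 0, True
while running:
    opcode, operand = memory[pc]   # fetch: read the instruction at the PC
    pc += 1
    if opcode == "LOAD":           # decode + execute: dispatch on the opcode
        accumulator = operand
    elif opcode == "ADD":
        accumulator += operand
    elif opcode == "HALT":
        running = False

print("accumulator =", accumulator)  # accumulator = 8
```

The program counter (`pc`) advancing after each fetch is what lets the loop walk through memory one instruction at a time, exactly as described in the three steps above.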


Thursday, December 10, 2020

Keyboard Shortcuts

Ctrl+Z: Undo

Ctrl+W: Close

This closes not only the active window but also the current browser tab.

Ctrl+A: Select all

Alt+Tab: Switch apps

This baby is one of the classic Windows shortcuts, and it can be hugely useful when you’re running multiple applications. Just press Alt+Tab and you’ll be able to quickly flick through all your open windows.

Alt+F4: Close apps

Another old-school shortcut, Alt+F4 shuts down active apps so you can skip the process of hunting down their on-screen menus. Don’t worry about losing unsaved work with this command—it will prompt you to save your documents before closing them.


Windows navigation shortcuts

Win+D: Show or hide the desktop

This keyboard combo minimizes all your open windows, bringing your home screen into view. If you store rows and rows of files and shortcuts on your desktop, Win+D will let you access them in moments.

Win+left arrow or Win+right arrow: Snap windows

Snapping a window simply opens it on one side of the screen (left or right, depending on which arrow you hit). This allows you to compare two windows side-by-side and keeps your workspace organized.

Win+Tab: Open the Task view

Like Alt+Tab, this shortcut lets you switch apps, but it does so by opening an updated Windows application switcher. The latest version shows thumbnails of all your open programs on the screen.

Tab and Shift+Tab: Move backward and forward through options

When you open a dialog box, these commands move you forward (Tab) or backward (Shift+Tab) through the available options, saving you a click. If you’re dealing with a dialog box that has multiple tabs, hit Ctrl+Tab or Ctrl+Shift+Tab to navigate through them.

Ctrl+Esc: Open the Start menu

If you’re using a keyboard that doesn’t have a Windows key, this shortcut will open the Start menu. Otherwise, a quick tap of the Windows key will do the same thing. From there, you can stay on the keyboard and navigate the Start menu with the cursor keys, Tab, and Shift+Tab.

F2: Rename

Simply highlight a file and hit F2 to give it a new name. This command also lets you edit the text in other programs—tap F2 in Microsoft Excel, for example, and you’ll be able to edit the contents of the cell you’re in.

F5: Refresh

While you’re exploring the function key row, take a look at F5. This key will refresh a page—a good trick when you’re using File Explorer or your web browser. After the refresh, you’ll see the latest version of the page you’re viewing.

Win+L: Lock your computer

Keep your computer safe from any prying eyes by using this keyboard combo right before you step away. Win+L locks the machine and returns you to the login screen, so any snoops will need your user account password to regain access.

Win+I: Open Settings

Any time you want to configure the way Windows works, hit this keyboard shortcut to bring up the Settings dialog. Alternatively, use Win+A to open up the Action Center panel, which shows notifications and provides quick access to certain settings.

Win+S: Search Windows

The Windows taskbar has a handy search box that lets you quiz Cortana or sift through your applications and saved files. Jump straight to it with this keyboard shortcut, then type in your search terms.

Win+PrtScn: Save a screenshot

No need to open a dedicated screenshot tool: Win+PrtScn grabs the whole screen and saves it as a PNG file in a Screenshots folder inside your Pictures folder. At the same time, Windows will also copy the image to the clipboard. If you don’t want to snap the whole screen, the Alt+PrtScn combination will take a screenshot of just the active window, but it will only copy this image to the clipboard, so you won’t get a saved file.

Ctrl+Shift+Esc: Open the Task Manager

The Task Manager is your window into everything running on your Windows system, from the open programs to the background processes. This shortcut will call up the Task Manager, no matter what application you’re using.

Win+C: Start talking to Cortana

This shortcut puts Cortana in listening mode, but you must activate it before you can give it a whirl. To do so, open Cortana from the taskbar search box, click the cog icon, and turn on the keyboard shortcut. Once you’ve enabled the shortcut, hit Win+C whenever you want to talk to the digital assistant. You can do this instead of, or in addition to, saying, "Hey Cortana."

Win+Ctrl+D: Add a new virtual desktop

Virtual desktops create secondary screens where you can stash some of your open applications and windows, giving you extra workspace. This shortcut lets you create one. Once you have, click the Task View button to the right of the taskbar search box to switch from one desktop to another. Or stick with shortcuts: Win+Ctrl+arrow will cycle through your open desktops, and Win+Ctrl+F4 will close whichever one you're currently viewing and shift your open windows and apps to the next available virtual desktop.

Win+X: Open the hidden menu

Windows has a hidden Start menu, called the Quick Link menu, that allows you to access all the key areas of the system. From here, you can jump straight to Device Manager to review and configure any hardware, such as printers or keyboards, that are currently attached to the system. Or you can quickly bring up the PowerShell command prompt window to access advanced Windows commands.

About RAM (Random Access Memory)

Your computer’s system memory includes physical memory, called Random Access Memory (RAM), and virtual memory. System memory works as temporary storage, unlike hard drives, where you store data permanently. RAM temporarily holds data while a program or file is in use; once you turn off the system, everything running in it is cleared.

When you start a program, the processor issues a command to retrieve it from the hard drive. Once the files are retrieved, the system needs a workspace in which to display and manipulate that data. RAM works as this digital countertop: your system places programs in RAM temporarily so they can run. To understand more about RAM and its functions, let’s have a look at this article.

What is RAM?

Random-access memory, or RAM, is a kind of computer data storage that holds frequently used program instructions to increase the general speed of a computer. A RAM device allows data items to be read or written in almost the same amount of time irrespective of the data’s physical location inside the memory. Because data can be accessed in any order directly from its physical location, RAM can find any specific piece of information almost instantly. By contrast, non-random-access storage types like hard disks and CD-RWs usually read and write data in a predetermined order due to mechanical limitations such as media rotation speeds and arm movement, so the read and write time can vary significantly with the physical location of the data on the recording medium.

RAM is used as the main memory of the computer. It is considered volatile memory, since the information stored in RAM is lost when power is removed. The CPU (central processing unit) uses RAM while the computer is running to store information and access it quickly; it doesn’t store anything permanently. In today’s technology, RAM devices use integrated circuits to store information, which makes RAM a relatively expensive form of storage but allows very quick access to data.

Why memory is called random-access?

Unlike serial-access storage, which reads and writes data in a predetermined order, RAM can read or write information from the memory in any order. In other words, it has direct access to any location (cell) in memory, as long as you know the “address” of that location. This speeds up data retrieval because the CPU can jump to any location without starting at the first one and walking through the data until it finds the right item, which is how serial access works.
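The contrast between the two access patterns can be sketched with a Python list standing in for RAM. Given an address (index), random access jumps straight to the cell; the "serial" version walks every cell from the start, like a tape. The data values are arbitrary.

```python
# Random vs. serial access, illustrated. A Python list plays the role of
# RAM; indexing is the "address".

ram = [10, 20, 30, 40, 50]

def random_access(address):
    return ram[address]               # one step, regardless of where the data is

def serial_access(address):
    steps = 0
    for i, value in enumerate(ram):   # must pass every earlier cell first
        steps += 1
        if i == address:
            return value, steps

print(random_access(4))   # 50 -- direct jump to the last cell
print(serial_access(4))   # (50, 5) -- had to visit all 5 cells to get there
```

With only five cells the difference is trivial, but on a disk holding billions of sectors, "walk there mechanically" versus "jump straight to the address" is exactly the speed gap the text describes.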

What is the main function of RAM in a computer?

The main function of Random-access memory or RAM is to act as temporary storage of data and program instructions that can be accessed quickly by the CPU when required. Let’s discuss the primary functions of RAM in a computer.


Reading files: This is one of the primary jobs RAM performs. The files on your hard drive are scattered across different physical locations, so to access them the drive must move its mechanical read/write arm back and forth and wait for the spinning platters to reach the correct position. Even though the platters spin at high speed (thousands of rotations per minute), this process causes a serious delay when reading files. To lessen the slowdown, your system keeps a copy of a file in RAM once it has first been read from the drive. Because RAM has no moving parts and runs far faster, it loads that information quickly during subsequent uses.

To recover some precious disk space and reduce the load on RAM while searching for files, you can use PC cleaner tools to remove unnecessary files. This will help you keep your data clean and organized.

Temporary storage: RAM also provides temporary storage for data while a program or file is in use. In addition to holding files read from the hard drive, RAM stores the data that programs are actively using, data that does not need to be kept permanently. Keeping this working data in RAM lets the system run fast and efficiently, improving its speed and responsiveness.

Apart from these functions, more RAM can improve your system’s speed and performance significantly. You can add RAM, within the limits of your system configuration, for better results.
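The "RAM as a cache for slow disk reads" idea above can be sketched in a few lines. This is a hedged toy model: the dict stands in for RAM, `read_from_disk` and its artificial delay are invented to imitate a spinning hard drive, and no real files are touched.

```python
# Why caching file data in RAM helps: first read hits the slow "disk",
# repeat reads are served from the fast in-memory cache.
# `read_from_disk` and its delay are invented for illustration.

import time

ram_cache = {}

def read_from_disk(path):
    time.sleep(0.05)                 # pretend seek + rotational delay
    return f"contents of {path}"

def read_file(path):
    if path not in ram_cache:        # first read: go to the slow disk
        ram_cache[path] = read_from_disk(path)
    return ram_cache[path]           # later reads: served from RAM

read_file("movie_list.txt")          # slow (hits the "disk")
print(read_file("movie_list.txt"))   # fast (served from the cache)
```

Real operating systems do essentially this automatically with their file-system cache, which is why opening the same program a second time is usually much faster than the first.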

Motherboard

The motherboard is the most important part of the computer (as the name “mother” suggests). As for its dimensions, ATX (Advanced Technology Extended) is the most common form factor. Here are the components connected to the motherboard.

Mouse and Keyboard:

In the early days of computing, these I/O devices were connected using PS/2 connectors. Today they have largely been replaced by USB.

USB(Universal Serial Bus):

As mentioned above, I/O devices can be connected using USB. In the early days, a PS/2 input device could only be connected while the system was shut down. USB changed this: devices can be plugged in while the system is running, and with USB-C almost anything can be connected without switching the machine off.

Parallel Port:

Most older printers use a special connector called a parallel port. A parallel port carries data on more than one wire, as opposed to a serial port, which uses only one. The parallel port uses a 25-pin female DB connector. Parallel ports are supported by the motherboard either through a direct connection or through a dongle.

CPU Chip:

The central processing unit, also called the microprocessor, performs all the calculations that take place inside a PC. CPUs come in a variety of shapes and sizes. Modern CPUs generate a lot of heat and thus require a cooling fan or heat sink. The cooling device (such as a cooling fan) is usually removable, although some CPU manufacturers sell the CPU with a fan permanently attached.

RAM Slots:

Random-access memory (RAM) stores programs and data currently being used by the CPU. RAM is measured in units called bytes. RAM has been packaged in many different ways; the most common current package is called a 168-pin DIMM (Dual Inline Memory Module).

IDE(Integrated Drive Electronics) Controller:

Industry standards define two common types of hard drives: EIDE (Enhanced IDE) and SCSI (Small Computer System Interface). The majority of PCs use EIDE drives; SCSI drives show up in high-end PCs such as network servers or graphical workstations. An EIDE drive connects to the motherboard via a 2-inch-wide, 40-pin ribbon cable. The IDE controller is responsible for controlling the hard drive.

PCI(Peripheral Component Interconnect) slot:

Intel introduced the Peripheral Component Interconnect bus protocol. The PCI bus is used to connect I/O devices (such as a NIC (Network Interface Controller) or a RAID (Redundant Array of Inexpensive Disks) controller) to the main logic of the computer. The PCI bus has replaced the ISA bus.

ISA slot:

ISA (Industry Standard Architecture) is the older standard expansion bus architecture. The motherboard may contain a few slots for ISA-compatible cards.

CMOS Battery:

To provide the CMOS with power when the computer is turned off, all motherboards come with a battery. These batteries mount on the motherboard in one of three ways: the obsolete external battery, the most common onboard battery, or the built-in battery.

AGP slot:

If you have a modern motherboard, you will almost certainly notice a single connector that looks like a PCI slot but is slightly shorter and usually brown. You probably also have a video card inserted into this slot. This is an Accelerated Graphics Port (AGP) slot.

Wednesday, December 9, 2020

Computer Components

 Hi All,

A fresh series for computer gamers and anyone planning to buy a new computer. This series will educate readers about the components inside a computer and which ones should be in yours. Please follow and support.

Sunday, December 6, 2020

About TENET

 

What is Tenet?

It’s a word, and a gesture. And, as The Protagonist (John David Washington) comes to learn, Tenet is the name given to the secret organisation tasked with preventing the temporal apocalypse.

When does Tenet take place?

Today, tomorrow, yesterday, and a number of points in-between. Overall, it’s set (roughly) in the present day, but the start of the film isn’t necessarily the start of the story – The Protagonist’s investigation into Sator (Kenneth Branagh) goes forward a few days, and then the story reverses on itself as the characters ‘invert’ themselves into the past, resulting in a climactic battle that takes place around the same time as the beginning. Ultimately, the film ends on the same day it begins – the film's timeline, just like its title, is a palindrome, you see.

What’s the Sator Square, and what’s that got to do with Tenet?

The Sator Square is a famous palindromic square that dates back about two thousand years. It consists of the words Sator, Arepo, Tenet, Opera, and Rotas, and reads the same backward, forward, top to bottom, or left to right. All of those words appear in the film: Tenet is the title, Sator the villain, an opera the opening sequence, Arepo the forger, and Rotas the name of Sator’s company.

What are the Soviet ‘Closed Cities’?

Closed Cities – including Stalsk-12, where Branagh’s Sator first made contact with the future as a young man, and where the final battle takes place – date back primarily to the Cold War, and were cities in which sensitive operations were carried out and to which travel was restricted. Essentially invisible, they had no direct address or transport links and were often referred to by codes based on adjacent settlements. Within the closed cities were classified installations (most notably related to the Soviet nuclear program) and housing for the people who worked there and their families. Closed cities appeared on no maps and their existence was a closely guarded secret.

What is a Free Port?

A Free Port is essentially a place where goods can remain outside the normal customs and tax jurisdictions of the country they’re in. Goods arriving at a free port are either taxed lightly or not at all and all goods there are generally regarded as being still outside the country. Needless to say, this makes them an appealing destination for illegal or controlled goods – or, you know, hiding experimental time-reversing technology like an Inversion Turnstile.

Who is The Protagonist?

Never directly named, The Protagonist is played by John David Washington and begins the film as a CIA agent before being recruited into Tenet, becoming their most important asset. But as the film reaches its climax, it turns out he has a much greater role in the organization: further on in his own future timeline, The Protagonist becomes the founder of Tenet in the past. Still with us?

What is Inversion?

Inversion is a process whereby an object (or person) has its entropy reversed, essentially flipping its chronology so that from that point on it travels backward in time instead of forward. The process is achieved via a Turnstile: a temporal reversal engine that has a distinct entrance and exit, ensuring the object/person doesn't accidentally come into contact with its past/future self and cause the universe to implode. Which would – according to our notes – be bad.

What’s a Temporal Pincer Movement?

It’s a time-bending tactical technique for missions: you approach it moving forward in time, and then also approach it in reverse moving backward from the future – each side using the knowledge that the other side gained from having already experienced it. Except, both sides are actually experiencing it simultaneously. It’s… confusing. But it works.

What is the Algorithm?

It’s essentially the film’s MacGuffin. As Priya (Dimple Kapadia) reveals to The Protagonist, the Algorithm is a device from the future that would completely invert the entropy of our world, causing the ‘backwards’ reality to dominate ours and essentially overwrite it. It was created in the future by a scientist who all too quickly realised the error of her ways. To stop the Algorithm from being deployed, she split it into nine pieces and inverted them all to hide them in the past. She then killed herself so no one could force her to recreate it.

Why does the future want to destroy the past?

Apparently, environmental damage has become so great in the future that the Earth is no longer habitable. The only way for the future to survive, then, is to reverse time and continue their existence in the past. Given that we’re the ones who melted the ice caps and poisoned the air, they’re not all that broken up about killing us in the process.

But won’t overwriting the past cause the future to no longer exist?

And here we start delving into the pitfalls of time travel, in this case, the grandfather paradox and the problem of causality. Tenet seems to adhere to a form of time travel called the ontological paradox, where you can alter the past but only because it’s already been altered, because you always altered it and are thus playing out your part in the timeline, completing the causal loop. It’s what Hermione did with the Time-Turner in Harry Potter And The Prisoner Of Azkaban: yes, she went back in time, but she was always in two places at once and so she fulfilled what would always have been the future and didn’t (as happens in the likes of The Terminator and Back To The Future) change the timeline, causing a different future to come about.

Does this involve parallel worlds?

No. The many worlds theory simply states that every conceivable outcome plays out in infinite alternate realities, so any change you make will create a separate reality and have no bearing on the future that was once your past. Tenet doesn’t use that model.

Hang on. But surely erasing the whole of the past would squash this whole causal loop thing you’re talking about?

It would, which is where Tenet deviates from that model, and indeed from physics as a whole. The simple answer is ‘because of the Algorithm’, in the same way that Avengers: Endgame gets away with quite a lot by saying ‘because Infinity Stones’. The Algorithm, by reversing the entropy of our entire reality, breaks our understanding of traditional causality and timelines.

What was Sator’s master plan?

Having been recruited and paid (in inverted gold) by contacts in the future, Sator had gathered eight of the nine pieces of the Algorithm before the film begins. Now, Sator’s goal is to acquire the final segment and activate the device. Since he is dying of pancreatic cancer, he plans to activate it via a dead man’s switch (his Fitbit, essentially), meaning that when he dies, all of our forwards reality dies with him. It’s a final ‘fuck you’ to the world and is all rather overly dramatic and Blofeldian.

What on Earth was going on during the car chase in Tallinn?

After they steal the ‘plutonium’, which in reality is a piece of the Algorithm, Neil (Robert Pattinson) and The Protagonist are ambushed by Sator, who pursues them in an inverted car. Sator threatens Kat (Elizabeth Debicki) and demands the Algorithm. A second inverted car un-crashes and pulls alongside them containing (unbeknownst to the audience at that point) the future Protagonist from a few hours hence. The present Protagonist gives Sator an empty case and throws the Algorithm into the new car with his future self.

Okay, but what on Earth was going on during that fight at the end?

Having located the site where Sator’s forces are assembling the Algorithm at Stalsk-12, team Tenet is split into two groups: the red team attacking the site, and the blue team inverting from just after the conflict and attacking the same site from the future while traveling backward. Yep, it’s a Temporal Pincer Movement! Thus you have inverted and un-inverted Tenet soldiers fighting against Sator’s men, while The Protagonist and Ives (played by a near-unrecognizable Aaron Taylor-Johnson), both on the red team, head down into the bunker to nab the Algorithm. To complicate matters, Neil starts on the blue team but un-inverts partway through the mission via a Turnstile at the site so that he can rescue Ives and The Protagonist from the bunker explosion. Not only that, but Future Neil gets involved as well – he re-inverts after the battle and heads back to the bunker to ensure the mission is a success, being the ‘dead’ body behind the locked gate that appears to spring to life just in time to unlock the gate and save The Protagonist and Ives.

Was Neil part of Tenet all along?

Yes. As Neil confirms at the end of the movie, he’s been working for Future Protagonist from the start. Indeed, the two are good friends before the film begins, having met in Neil’s past but The Protagonist’s future. It’s hinted at when Neil knows The Protagonist’s drink order in Mumbai – but the reason why he knew that isn’t made clear until the film’s final moments.

When did the Protagonist meet Neil?

After the events of the film, The Protagonist clearly spends a lot of time inverted and travels back to the past in order to found and set Tenet in motion (completing the causal loop). At some point long before the film starts, he meets and recruits a younger Neil, setting him in motion to later meet and assist his younger self.

Who saved The Protagonist at The Opera?

That was an inverted Neil — something we only realize at the end of the film when we recognize the medallion on his backpack (also seen on the ‘dead’ body behind the locked gate and on Neil at the end of the film). When exactly Neil fits in this little jaunt to the past is unclear, but we can assume it happens before he meets up with The Protagonist for the ‘first’ time in Mumbai.

So did Neil die at the end?

Yes. Having inverted and then un-inverted during the final battle, Neil’s final task is to invert himself once again and go back into the bunker so that he can unlock the gate and die, saving The Protagonist and Ives as he does so. He has no choice in the matter – he’s already done it, and is simply playing out events that have already occurred.