Integrated GPUs: all about enabling and disabling them. How to monitor GPU usage in Windows Task Manager, and how to check whether viewing GPU performance is supported

In 2016, hopes for a full-fledged generational change in GPUs finally came true. That change had long been held back by the lack of manufacturing capacity for chips with significantly higher transistor density and clock frequencies than the proven 28 nm process allowed. The 20 nm technology we hoped for two years ago proved commercially unviable for chips as large as discrete GPUs. Since TSMC and Samsung, which could act as contractors for AMD and NVIDIA, did not use FinFETs at 20 nm, the potential gain in performance per watt over 28 nm was modest enough that both companies chose to wait for the mass adoption of the 14/16 nm nodes, which do use FinFETs.

However, the years of anxious waiting have passed, and now we can evaluate how GPU manufacturers have used the capabilities of the updated process technology. As practice has once again shown, "nanometers" by themselves do not guarantee high energy efficiency, so the new architectures of NVIDIA and AMD turned out to be very different in this respect. Additional intrigue comes from the fact that the companies no longer use the services of a single foundry (TSMC), as in past years. AMD chose GlobalFoundries to produce Polaris GPUs on 14 nm FinFET technology. NVIDIA, for its part, still works with TSMC, whose 16 nm FinFET process is used for all Pascal chips except the low-end GP107, which is made by Samsung. It was Samsung's 14 nm FinFET line that GlobalFoundries once licensed, so the GP107 and its rival Polaris 11 give us a convenient opportunity to compare the engineering achievements of AMD and NVIDIA on a similar manufacturing base.

However, let's not dive into technical details prematurely. In broad strokes, the two companies' new-generation GPU lineups look like this. NVIDIA has created a full line of Pascal accelerators based on three consumer-grade GPUs: GP107, GP106 and GP104. However, the flagship slot, which will surely be filled by a card named GeForce GTX 1080 Ti, is currently vacant. The candidate for this position is a card with the GP102 processor, so far used only in the "prosumer" accelerator NVIDIA TITAN X. And finally, NVIDIA's main pride is the GP100 chip, which the company apparently does not even intend to bring to gaming products, reserving it for Tesla compute accelerators.

AMD's successes are more modest so far. Two processors of the Polaris family were released, and the products based on them belong to the lower and middle categories of gaming video cards. The upper echelons will be occupied by the upcoming Vega family of GPUs, which is expected to feature a comprehensively upgraded GCN architecture (whereas Polaris in this respect is not that different from the 28 nm Fiji and Tonga chips).

NVIDIA Tesla P100 and new TITAN X

Through the efforts of Jensen Huang, NVIDIA's permanent head, the company now positions itself as a maker of general-purpose computing processors no less than of gaming GPUs. A signal that NVIDIA is taking the supercomputing business more seriously than ever is the division of its Pascal GPU line into gaming chips on the one hand and computing chips on the other.

Once the 16nm FinFET process came online at TSMC, NVIDIA put its first efforts into releasing the GP100 supercomputer chip, which debuted ahead of the Pascal line of consumer products.

The distinctive features of the GP100 are an unprecedented number of transistors (15.3 billion) and shader ALUs (3,840 CUDA cores). It is also the first accelerator equipped with HBM2 memory (16 GB), mounted together with the GPU on a silicon substrate. The GP100 is used in the Tesla P100 accelerators, which were initially limited to supercomputers because of a special form factor with the NVLINK bus, but NVIDIA later released the Tesla P100 in the standard PCI Express expansion-card format.

Initially, experts assumed that the P100 could appear in gaming video cards. NVIDIA apparently did not deny this possibility, because the chip has a full-fledged pipeline for rendering 3D graphics. But it is now clear that it is unlikely to ever go beyond the computing niche. For graphics, NVIDIA has a related product - the GP102, which has the same set of shader ALUs, texture mapping units and ROPs as the GP100, but lacks the ballast of a large number of 64-bit CUDA cores, not to mention other architectural changes (fewer schedulers, reduced L2 cache, etc.). The result is a more compact (12 billion transistors) core, which, together with the abandonment of HBM2 memory in favor of GDDR5X, allowed NVIDIA to distribute the GP102 to a wider market.

For now, the GP102 is reserved for the prosumer accelerator TITAN X (not to be confused with the GeForce GTX TITAN X based on the Maxwell-architecture GM200 chip), which is positioned as a board for reduced-precision calculations (in the range from 8 to 32 bits, of which 8- and 16-bit are NVIDIA's favorites for deep learning) even more than for games, although wealthy gamers can buy the video card for $1,200. Indeed, in our gaming tests the TITAN X does not justify its cost with its 15-20 percent advantage over the GeForce GTX 1080, but overclocking comes to the rescue: if we compare an overclocked GTX 1080 and TITAN X, the latter is 34% faster. Still, the new gaming flagship based on the GP102 will most likely have fewer active computing units or lose support for some computing functions (or both).

Overall, releasing massive GPUs like the GP100 and GP102 early in the 16nm FinFET process is a major achievement for NVIDIA, especially considering the challenges the company faced in the 40nm and 28nm phases.

NVIDIA GeForce GTX 1070 and 1080

NVIDIA rolled out the GeForce 10 series of gaming accelerators in its usual sequence, from the most powerful models to more budget-friendly ones. The GeForce GTX 1080 and the other Pascal-architecture gaming cards released subsequently showed most clearly that NVIDIA fully realized the capabilities of the 14/16 nm FinFET process to make chips denser and more energy efficient.

In addition, by creating Pascal, NVIDIA not only increased performance in various computational tasks (as shown by the example of the GP100 and GP102), but also supplemented the Maxwell chip architecture with functions that optimize graphics rendering.

Let us briefly note the main innovations:

  • improved color compression with ratios up to 8:1;
  • the Simultaneous Multi-Projection function of the PolyMorph Engine geometry unit, which allows up to 16 projections of the scene geometry to be created in one pass (for VR and for multi-display NVIDIA Surround configurations);
  • the ability to interrupt (preemption) during the execution of a draw call (in rendering) and of a command stream (in compute), which, together with the dynamic distribution of GPU computing resources, provides full support for asynchronous computing (Async Compute) - an additional source of performance in games running under the DirectX 12 API and of reduced latency in VR.

The last point is especially interesting, since Maxwell chips were technically compatible with asynchronous computing (working simultaneously on a compute and a graphics command queue), but performance in this mode left much to be desired. Pascal's asynchronous computing works as intended, allowing games to load the GPU more efficiently with a separate thread for physics calculations (though admittedly, on NVIDIA chips the problem of fully loading the shader ALUs is not as acute as on AMD GPUs).

The GP104 processor, used in the GTX 1070 and GTX 1080, is the successor to the GM204 (the second-tier chip of the Maxwell family), but NVIDIA achieved such high clock frequencies that the GTX 1080 outperforms the GTX TITAN X (based on a larger GPU) by 29% on average, all within a more conservative thermal envelope (180 vs 250 W). Even the GTX 1070, which is cut down far more heavily than the GTX 970 was relative to the GTX 980 (and which uses GDDR5 memory instead of the GTX 1080's GDDR5X), is still 5% faster than the GTX TITAN X.

NVIDIA has updated the display controller in Pascal, which is now compatible with the DisplayPort 1.3/1.4 and HDMI 2.0b interfaces and can therefore output an image at increased resolution or refresh rate over a single cable: up to 5K at 60 Hz or 4K at 120 Hz. 10/12-bit color representation provides support for high dynamic range (HDR) on the few screens that have this capability. The dedicated Pascal hardware unit can encode and decode HEVC (H.265) video at resolutions up to 4K, 10-bit color (12-bit for decoding) and 60 Hz.

Finally, Pascal has eliminated the limitations inherent in the previous version of the SLI bus. The developers raised the frequency of the interface and released a new, two-channel bridge.

You can read more about these features of the Pascal architecture in our GeForce GTX 1080 review. However, before moving on to the other new products of the past year, it is worth mentioning that in the GeForce 10 series NVIDIA is, for the first time, releasing reference-design cards throughout the entire lifespan of the corresponding models. They are now called Founders Edition and are sold above the retail prices recommended for partner graphics cards. For example, the GTX 1070 and GTX 1080 have recommended prices of $379 and $599 (already higher than the GTX 970 and GTX 980 at their launch), while the Founders Edition versions are priced at $449 and $699.

GeForce GTX 1050 and 1060

The GP106 chip brought the Pascal architecture to the mainstream segment of gaming accelerators. Functionally it is no different from the older models, and in the number of computing units it is half a GP104. Unlike the GM206 (which was half of the GM204), however, the GP106 uses a 192-bit memory bus. In addition, NVIDIA removed the SLI connectors from the GTX 1060 board, upsetting fans of gradual video-subsystem upgrades: when this accelerator exhausts its capabilities, you can't add a second video card to it (except in those DirectX 12 games that can distribute the load between GPUs, bypassing the driver).

The GTX 1060 originally featured 6 GB of GDDR5, a fully enabled GP106 chip, and retailed for $249/$299 (partner cards and Founders Edition, respectively). But NVIDIA then released a video card with 3 GB of memory and a recommended price of $199, which also has a reduced number of computing units. Both video cards have an attractive 120 W TDP and are similar in performance to the GeForce GTX 970 and GTX 980.

The GeForce GTX 1050 and GTX 1050 Ti belong to the lowest category mastered by the Pascal architecture. But no matter how modest they may look compared to their older brothers, NVIDIA has made the greatest step forward in the budget niche. The GTX 750/750 Ti, which occupied it before, belong to the first iteration of the Maxwell architecture, so the GTX 1050/1050 Ti, unlike other accelerators of the Pascal family, have advanced not one, but one and a half generations. With a significantly larger GPU and higher-clocked memory, the GTX 1050/1050 Ti improves performance over its predecessors more than any other member of the Pascal series (90% difference between the GTX 750 Ti and GTX 1050 Ti).

And although the GTX 1050/1050 Ti consume a little more power (75 versus 60 W), they still fit within the power standards for PCI Express cards that do not have an additional power connector. NVIDIA did not release low-end accelerators in the Founders Edition format, but recommended retail prices were $109 and $139.

AMD Polaris: Radeon RX 460/470/480

AMD's response to Pascal was the Polaris family of chips. The Polaris line currently includes only two chips, which form the basis of three video cards (Radeon RX 460, RX 470 and RX 480) that additionally vary in the amount of onboard RAM. As the model numbers make obvious, the upper echelon of performance in the Radeon 400 series remains unoccupied; AMD will have to fill it with products based on Vega silicon. Back in the 28 nm era, AMD acquired the habit of testing innovations on relatively small chips and only then introducing them into flagship GPUs.

It should be noted right away that in AMD's case the new family of graphics processors is not identical to a new version of the underlying GCN (Graphics Core Next) architecture, but reflects a combination of the architecture and other product features. For GPUs built on the new process technology, AMD abandoned the various "islands" of the code names (Northern Islands, Southern Islands, and so on) and now denotes them with the names of stars.

Nevertheless, the GCN architecture in Polaris received another, third update, thanks to which (along with the transition to the 14 nm FinFET process technology) AMD significantly increased performance per watt.

  • The Compute Unit, the elementary form of organizing shader ALUs in GCN, has undergone a number of changes related to instruction prefetching and caching, and access to the L2 cache, which together increased the specific performance of the CU by 15%.
  • There is now support for half-precision calculations (FP16), which are used in computer vision and machine learning programs.
  • GCN 1.3 provides direct access to the instruction set (ISA) of the stream processors, through which developers can write extremely low-level and fast code, as opposed to the DirectX and OpenGL shader languages, which are abstracted from the hardware.
  • Geometry processors are now capable of eliminating zero-size polygons or polygons that have no pixels in the projection early in the pipeline, and have an index cache that reduces resource consumption when rendering small, duplicate geometry.
  • The L2 cache has been doubled in size.

In addition, AMD engineers have worked hard to get Polaris to run at as high a frequency as possible. The GPU frequency is now controlled with minimal latency (latency less than 1 ns), and the voltage curve of the card is adjusted every time the PC is booted in order to take into account the variation in parameters between individual chips and the aging of silicon during operation.

However, the transition to the 14 nm FinFET process was not smooth sailing for AMD. The company was indeed able to increase performance per watt by 62% (judging by the results of the Radeon RX 480 and Radeon R9 380X in gaming tests and the cards' TDPs). However, Polaris's maximum frequencies do not exceed 1266 MHz, and only a few of AMD's manufacturing partners have achieved more with additional work on the cooling and power systems. On the other hand, GeForce video cards still hold the lead in performance per watt that NVIDIA achieved back in the Maxwell generation. It seems that at this first stage AMD was unable to unlock all the capabilities of the new-generation process technology, or that the GCN architecture itself already needs deep modernization - a task that has been left to the Vega chips.

Polaris-based accelerators occupy the price range from $109 to $239 (see the table), although in response to the GeForce GTX 1050/1050 Ti, AMD reduced the prices of the two lower cards to $100 and $170, respectively. At the moment there is a similar balance of power between the competing products in each price/performance category: the GeForce GTX 1050 Ti is faster than the Radeon RX 460 with 4 GB of RAM, the GTX 1060 with 3 GB of memory is faster than the RX 470, and the full-fledged GTX 1060 is ahead of the RX 480. At the same time, the AMD video cards are cheaper, which keeps them popular.

AMD Radeon Pro Duo

A report on the past year in discrete GPUs would not be complete if we ignored one more of the "red" video cards. Since AMD had not yet released a flagship single-GPU video adapter to replace the Radeon R9 Fury X, the company had one proven move left for conquering new frontiers: installing two Fiji chips on one board. This card, whose release AMD repeatedly postponed, nevertheless went on sale shortly before the GeForce GTX 1080, but it fell into the professional Radeon Pro category and was positioned as a platform for creating games for VR.

For gamers, at $1,499 (more expensive than a pair of Radeon R9 Fury X cards at launch), the Radeon Pro Duo is of no interest, and we did not even have the opportunity to test this card. It's a pity, because from a technical point of view the Radeon Pro Duo looks intriguing. The card's nameplate TDP increased by only 27% compared to the Fury X, even though AMD reduced the processors' peak frequencies by just 50 MHz. AMD has already managed to release a successful dual-processor video card in the past, the Radeon R9 295X2, so the specifications declared by the manufacturer do not raise much skepticism.

What to expect in 2017

The main expectations for the coming year are related to AMD. NVIDIA will most likely limit itself to releasing a flagship gaming card based on the GP102 under the name GeForce GTX 1080 Ti and, perhaps, fill another vacancy in the 10th GeForce series - GTX 1060 Ti. Otherwise, the Pascal line of accelerators has already been formed, and the debut of the next architecture, Volta, is planned only for 2018.

As in the CPU space, AMD has put all its efforts into developing a truly breakthrough GPU microarchitecture, while Polaris is just a staging post on the way there. Presumably, in the first quarter of 2017 the company will for the first time release its best silicon, Vega 10, to the mass market (along with it, or later, one or more lower-end chips in the line). The most reliable evidence of its capabilities was the announcement of the MI25 computing card in the Radeon Instinct line, positioned as an accelerator for deep-learning tasks. Judging by the specifications, it is based on none other than Vega 10. The card develops 12.5 TFLOPS of processing power in single-precision (FP32) calculations, more than the GP102-based TITAN X, and is equipped with 16 GB of HBM2 memory. The TDP of the video card is within 300 W. The processor's real performance can only be guessed at, but it is known that Vega will bring the largest-scale update to the GPU microarchitecture since the release of the first GCN-based chips five years ago. It should significantly improve performance per watt and make more efficient use of the shader ALUs' processing power in gaming applications (something AMD chips have traditionally struggled with).

There are also rumors that AMD engineers have now mastered the 14 nm FinFET process technology and the company is ready to release the second version of Polaris video cards with a significantly lower TDP. It seems to us that if this is true, then the updated chips would rather go into the Radeon RX 500 line than receive increased indexes in the existing 400 series.

Appendix. Current lines of discrete video adapters from AMD and NVIDIA

Manufacturer: AMD

| | Radeon RX 460 | Radeon RX 470 | Radeon RX 480 | Radeon R9 Nano | Radeon R9 Fury | Radeon R9 Fury X |
|---|---|---|---|---|---|---|
| GPU | | | | | | |
| Name | Polaris 11 | Polaris 10 | Polaris 10 | Fiji XT | Fiji PRO | Fiji XT |
| Microarchitecture | GCN 1.3 | GCN 1.3 | GCN 1.3 | GCN 1.2 | GCN 1.2 | GCN 1.2 |
| Process technology | 14 nm FinFET | 14 nm FinFET | 14 nm FinFET | 28 nm | 28 nm | 28 nm |
| Transistors, million | 3,000 | 5,700 | 5,700 | 8,900 | 8,900 | 8,900 |
| Clock frequency, MHz (Base Clock / Boost Clock) | 1,090 / 1,200 | 926 / 1,206 | 1,120 / 1,266 | — / 1,000 | — / 1,000 | — / 1,050 |
| Shader ALUs | 896 | 2,048 | 2,304 | 4,096 | 3,584 | 4,096 |
| Texture mapping units | 56 | 128 | 144 | 256 | 224 | 256 |
| ROPs | 16 | 32 | 32 | 64 | 64 | 64 |
| RAM | | | | | | |
| Bus width, bits | 128 | 256 | 256 | 4,096 | 4,096 | 4,096 |
| Chip type | GDDR5 SDRAM | GDDR5 SDRAM | GDDR5 SDRAM | HBM | HBM | HBM |
| Clock, MHz (per-pin bandwidth, Mbit/s) | 1,750 (7,000) | 1,650 (6,600) | 1,750 (7,000) / 2,000 (8,000) | 500 (1,000) | 500 (1,000) | 500 (1,000) |
| Size, MB | 2,048 / 4,096 | 4,096 | 4,096 / 8,192 | 4,096 | 4,096 | 4,096 |
| I/O bus | PCI Express 3.0 x8 | PCI Express 3.0 x16 | PCI Express 3.0 x16 | PCI Express 3.0 x16 | PCI Express 3.0 x16 | PCI Express 3.0 x16 |
| Performance | | | | | | |
| Peak FP32 performance, GFLOPS (at maximum specified frequency) | 2,150 | 4,940 | 5,834 | 8,192 | 7,168 | 8,602 |
| FP64/FP32 performance | 1/16 | 1/16 | 1/16 | 1/16 | 1/16 | 1/16 |
| RAM bandwidth, GB/s | 112 | 211 | 196 / 224 | 512 | 512 | 512 |
| Image output | | | | | | |
| Interfaces | DL DVI-D, HDMI 2.0b, DisplayPort 1.3/1.4 | DL DVI-D, HDMI 2.0b, DisplayPort 1.3/1.4 | DL DVI-D, HDMI 2.0b, DisplayPort 1.3/1.4 | HDMI 1.4a, DisplayPort 1.2 | HDMI 1.4a, DisplayPort 1.2 | HDMI 1.4a, DisplayPort 1.2 |
| TDP, W | <75 | 120 | 150 | 175 | 275 | 275 |
| Suggested retail price at release (USA, before tax), $ | 109 / 139 | 179 | 199 / 229 | 649 | 549 | 649 |
| Suggested retail price at release (Russia), rub. | 8,299 / 10,299 | 15,999 | 16,310 / 18,970 | ND | ND | ND |
Manufacturer: NVIDIA

| | GeForce GTX 1050 | GeForce GTX 1050 Ti | GeForce GTX 1060 3 GB | GeForce GTX 1060 | GeForce GTX 1070 | GeForce GTX 1080 | TITAN X |
|---|---|---|---|---|---|---|---|
| GPU | | | | | | | |
| Name | GP107 | GP107 | GP106 | GP106 | GP104 | GP104 | GP102 |
| Microarchitecture | Pascal | Pascal | Pascal | Pascal | Pascal | Pascal | Pascal |
| Process technology | 14 nm FinFET | 14 nm FinFET | 16 nm FinFET | 16 nm FinFET | 16 nm FinFET | 16 nm FinFET | 16 nm FinFET |
| Transistors, million | 3,300 | 3,300 | 4,400 | 4,400 | 7,200 | 7,200 | 12,000 |
| Clock frequency, MHz (Base Clock / Boost Clock) | 1,354 / 1,455 | 1,290 / 1,392 | 1,506 / 1,708 | 1,506 / 1,708 | 1,506 / 1,683 | 1,607 / 1,733 | 1,417 / 1,531 |
| Shader ALUs | 640 | 768 | 1,152 | 1,280 | 1,920 | 2,560 | 3,584 |
| Texture mapping units | 40 | 48 | 72 | 80 | 120 | 160 | 224 |
| ROPs | 32 | 32 | 48 | 48 | 64 | 64 | 96 |
| RAM | | | | | | | |
| Bus width, bits | 128 | 128 | 192 | 192 | 256 | 256 | 384 |
| Chip type | GDDR5 SDRAM | GDDR5 SDRAM | GDDR5 SDRAM | GDDR5 SDRAM | GDDR5 SDRAM | GDDR5X SDRAM | GDDR5X SDRAM |
| Clock, MHz (per-pin bandwidth, Mbit/s) | 1,750 (7,000) | 1,750 (7,000) | 2,000 (8,000) | 2,000 (8,000) | 2,000 (8,000) | 1,250 (10,000) | 1,250 (10,000) |
| Size, MB | 2,048 | 4,096 | 3,072 | 6,144 | 8,192 | 8,192 | 12,288 |
| I/O bus | PCI Express 3.0 x16 | PCI Express 3.0 x16 | PCI Express 3.0 x16 | PCI Express 3.0 x16 | PCI Express 3.0 x16 | PCI Express 3.0 x16 | PCI Express 3.0 x16 |
| Performance | | | | | | | |
| Peak FP32 performance, GFLOPS (at maximum specified frequency) | 1,862 | 2,138 | 3,935 | 4,373 | 6,463 | 8,873 | 10,974 |
| FP64/FP32 performance | 1/32 | 1/32 | 1/32 | 1/32 | 1/32 | 1/32 | 1/32 |
| RAM bandwidth, GB/s | 112 | 112 | 192 | 192 | 256 | 320 | 480 |
| Image output | | | | | | | |
| Interfaces | DL DVI-D, DisplayPort 1.3/1.4, HDMI 2.0b | DL DVI-D, DisplayPort 1.3/1.4, HDMI 2.0b | DL DVI-D, DisplayPort 1.3/1.4, HDMI 2.0b | DL DVI-D, DisplayPort 1.3/1.4, HDMI 2.0b | DL DVI-D, DisplayPort 1.3/1.4, HDMI 2.0b | DL DVI-D, DisplayPort 1.3/1.4, HDMI 2.0b | DL DVI-D, DisplayPort 1.3/1.4, HDMI 2.0b |
| TDP, W | 75 | 75 | 120 | 120 | 150 | 180 | 250 |
| Suggested retail price at release (USA, before tax), $ | 109 | 139 | 199 | 249 / 299 (partner cards / Founders Edition) | 379 / 449 (partner cards / Founders Edition) | 599 / 699 (partner cards / Founders Edition) | 1,200 |
| Suggested retail price at release (Russia), rub. | 8,490 | 10,490 | ND | 18,999 / — (Founders Edition / partner cards) | ND / 34,990 (Founders Edition / partner cards) | ND / 54,990 (Founders Edition / partner cards) | ND |
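As a cross-check, the peak FP32 figures in both tables follow from two other rows: each shader ALU performs one fused multiply-add (two floating-point operations) per clock at the boost frequency. A minimal sketch in Python, using values from the tables above:

```python
def peak_fp32_gflops(shader_alus: int, boost_mhz: int) -> float:
    """Peak FP32 rate: 2 operations (one fused multiply-add) per ALU per clock."""
    return 2 * shader_alus * boost_mhz / 1000  # MHz -> GFLOPS

# Values taken from the tables above.
print(round(peak_fp32_gflops(2560, 1733)))  # GeForce GTX 1080 -> 8873
print(round(peak_fp32_gflops(2304, 1266)))  # Radeon RX 480    -> 5834
```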

The Windows 10 Task Manager contains detailed GPU monitoring tools. You can view per-application and system-wide GPU usage, and Microsoft promises that Task Manager's figures will be more accurate than those of third-party utilities.

How it works

These GPU features were added in the Fall Creators Update for Windows 10, also known as Windows 10 version 1709. If you're using Windows 7, 8, or an older version of Windows 10, you won't see these tools in your Task Manager.

Windows uses newer features of the Windows Display Driver Model to pull information directly from the GPU scheduler (VidSch) and the video memory manager (VidMm) in the WDDM graphics kernel, which are responsible for the actual allocation of resources. It shows very accurate data no matter which API an application uses to access the GPU - Microsoft DirectX, OpenGL, Vulkan, OpenCL, NVIDIA CUDA, AMD Mantle, or anything else.

That is why Task Manager only displays GPUs on systems compliant with WDDM 2.0. If you don't see the GPU data, your system's GPU is probably using an older driver type.

You can check which WDDM version your GPU driver uses by pressing Windows+R, typing "dxdiag" into the box, and then pressing Enter to open the DirectX Diagnostic Tool. Go to the "Display" tab and look to the right of "Model" in the "Drivers" section. If you see a WDDM 2.x driver here, your system is compatible. If you see a WDDM 1.x driver, your GPU is incompatible.
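If you need to run this check on more than one machine, the same information can be scraped from dxdiag's plain-text report (the /t flag is a documented dxdiag option). A sketch, assuming a recent Windows build whose report contains "Driver Model: WDDM x.y" lines:

```python
import re
import subprocess
import tempfile
import time
from pathlib import Path

report = Path(tempfile.gettempdir()) / "dxdiag_report.txt"
report.unlink(missing_ok=True)

# dxdiag /t writes the report asynchronously, so wait for the file to appear.
subprocess.run(["dxdiag", "/t", str(report)], check=True)
for _ in range(60):
    if report.exists() and report.stat().st_size > 0:
        break
    time.sleep(1)

text = report.read_text(encoding="utf-8", errors="ignore")
for model in re.findall(r"Driver Model:\s*(WDDM\s*[\d.]+)", text):
    print(model)  # WDDM 2.x -> Task Manager GPU monitoring is supported
```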

How to View GPU Performance

This information is available in Task Manager, although it is hidden by default. To get to it, open Task Manager by right-clicking any empty space on the taskbar and selecting "Task Manager", or by pressing Ctrl+Shift+Esc on the keyboard.

Click the "More details" button at the bottom of the window " Task Manager" if you see the standard simple view.

If the GPU column is not showing up in Task Manager, then in the full view, on the "Processes" tab, right-click any column header and enable the "GPU" option. This adds a GPU column that lets you see the percentage of GPU resources each application is using.

You can also enable the "GPU engine" option to see which GPU engine an application is using.

The overall GPU usage of all applications on your system appears at the top of the GPU column. Click the GPU column to sort the list and see which applications are using your GPU the most at the moment.

The number in the GPU column is the highest usage the application has across all engines. So, for example, if an application uses 50% of the GPU's 3D engine and 2% of the GPU's video decode engine, the GPU column will simply display 50%.
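In other words, the per-process figure is a maximum across engines, not a sum. A toy illustration with hypothetical numbers:

```python
# Hypothetical per-engine utilization for a single process.
engine_usage = {"3D": 0.50, "Video Decode": 0.02, "Copy": 0.01}

# Task Manager's GPU column reports the busiest engine, not the total.
gpu_column = max(engine_usage.values())
print(f"GPU column: {gpu_column:.0%}")  # -> GPU column: 50%
```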

The "GPU engine" column shows, for each application, which physical GPU and which engine it is using - for example, whether it is using the 3D engine or the video decode engine. You can determine which GPU corresponds to a given number by checking the "Performance" tab, which we discuss in the next section.

How to view an application's video memory usage

If you are wondering how much video memory an application is using, go to the Details tab of the Task Manager. On the Details tab, right-click any column header and select "Select columns". Scroll down and enable the "GPU", "GPU engine", "Dedicated GPU memory" and "Shared GPU memory" columns. The first two are also available on the Processes tab, but the last two memory options are only available on the Details tab.

The "Dedicated GPU memory" column shows how much memory an application is using on your GPU. If your PC has a discrete NVIDIA or AMD graphics card, this is part of its VRAM, that is, how much of the physical memory on your graphics card the application is using. If you have an integrated graphics processor, a portion of regular system memory is reserved exclusively for your graphics hardware, and this column shows how much of that reserved memory the application is using.

Windows also allows applications to store some data in regular system DRAM. The "Shared GPU memory" column shows how much of the computer's normal system RAM the application is currently using for video purposes.

You can click any of these columns to sort by it and see which application is using the most resources. For example, to see the applications using the most video memory on your GPU, click the "Dedicated GPU memory" column.
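The same per-process memory figures can also be sampled from the command line via Windows performance counters; a hedged sketch, assuming an English-language Windows 10 1709+ where the "GPU Process Memory" counter set is present:

```python
import subprocess

# Take one sample of each per-process video memory counter.
# The counter set and names are assumptions for an English-language system.
for counter in (r'\GPU Process Memory(*)\Dedicated Usage',
                r'\GPU Process Memory(*)\Shared Usage'):
    result = subprocess.run(['typeperf', counter, '-sc', '1'],
                            capture_output=True, text=True)
    print(result.stdout)
```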

How to Track Overall GPU Resource Usage

To track overall GPU resource usage statistics, go to the "Performance" tab and look for "GPU" at the bottom of the sidebar. If your computer has multiple GPUs, you will see several GPU entries here.

If you have multiple linked GPUs - using a feature like NVIDIA SLI or AMD CrossFire - you will see them identified by a "#" in their names.

Windows displays GPU usage in real time. By default, Task Manager tries to display the four most interesting engines according to what is happening on your system. For example, you will see different graphs depending on whether you are playing 3D games or encoding video. However, you can click any of the names above the charts and select any of the other available engines.
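These are the same per-engine figures exposed through Windows performance counters, so you can log them from a script as well; a sketch under the same assumptions as above:

```python
import subprocess

# Take 5 one-second samples of every GPU engine's utilization and print
# them as CSV rows. The "GPU Engine" counter set exists on Windows 10
# 1709+; the exact name assumes an English-language system.
cmd = ['typeperf', r'\GPU Engine(*)\Utilization Percentage', '-sc', '5']
subprocess.run(cmd)
```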

The name of your GPU also appears in the sidebar and at the top of this window, making it easy to check which graphics hardware is installed in your PC.

You will also see graphs of dedicated and shared GPU memory usage. Shared GPU memory usage refers to how much of the system's total memory is being used for GPU tasks; this memory can serve both normal system tasks and video tasks.

At the bottom of the window you will see information such as the version number of the installed video driver, its release date, and the physical location of the GPU in your system.

If you'd like to view this information in a smaller window that is easier to keep on screen, double-click anywhere inside the GPU view, or right-click anywhere inside it and select the "Graph summary view" option. You can restore the full window by double-clicking in the panel, or by right-clicking in it and unchecking "Graph summary view".

You can also right-click the graph and select "Change graph to" > "Single engine" to view a graph for just one GPU engine.

To keep this window permanently visible on your screen, click "Options" > "Always on top".

Double-click inside the GPU panel again and you will get a minimal window that you can position anywhere on the screen.

The integrated graphics processor plays an important role for gamers and undemanding users alike.

The quality of games, movies, online video playback and images depends on it.

Principle of operation

The graphics processor is built into the computer's motherboard - that is what "integrated graphics" means.

As a rule, it is used to remove the need to install a discrete graphics adapter.

This technology helps reduce the cost of the finished product. In addition, due to the compactness and low power consumption of such processors, they are often installed in laptops and low-power desktop computers.

Thus, integrated graphics processors have filled this niche so much that 90% of laptops on US store shelves have such a processor.

Instead of having its own memory like a regular video card, integrated graphics use the computer's RAM.

True, this solution somewhat limits the device's performance, since the CPU and the graphics processor use the same memory bus.

This "neighborhood" therefore affects how tasks are performed, especially when working with complex graphics and during gameplay.

Kinds

Integrated graphics solutions fall into three groups:

  1. Shared memory graphics - a device that shares the main system RAM with the CPU. This significantly reduces cost and improves energy efficiency, but degrades performance. Accordingly, for those who work with complex programs, integrated graphics processors of this type are most likely unsuitable.
  2. Discrete graphics - a video chip and one or two video memory modules soldered onto the system board. With this technology, image quality improves significantly and working with 3D graphics gives the best results. True, you will have to pay a lot for this, and if you are looking for a processor that is powerful in every respect, the cost can be incredibly high. In addition, your electricity bill will rise slightly: the power consumption of discrete GPUs is higher than usual.
  3. Hybrid discrete graphics - a combination of the two previous types, made possible by the PCI Express bus. Memory is accessed both through the soldered video memory and through the system RAM. With this, manufacturers wanted a compromise solution, but it still does not eliminate all the shortcomings.

Manufacturers

As a rule, the manufacture and development of integrated graphics processors is handled by the big companies - Intel, AMD and NVIDIA - but many small enterprises are also involved in this area.

Enable

The integrated video card is enabled in the BIOS, and this is not difficult to do. Look for Primary Display or Init Display First. If you don't see anything like that, look for Onboard, PCI, AGP or PCI-E (it all depends on the buses present on the motherboard).

By choosing PCI-E, for example, you enable the PCI-Express video card and disable the built-in integrated one.

Thus, to enable the integrated video card, you need to find the appropriate parameters in the BIOS. Often the activation process is automatic.

Disable

It is better to disable it in the BIOS. This is the simplest and most unpretentious option, suitable for almost all PCs. The only exceptions are some laptops.

Again, search for Peripherals or Integrated Peripherals in the BIOS if you are working on a desktop.

For laptops, the name of the function is different, and not the same everywhere. So just find something related to graphics. For example, the necessary options can be placed in the Advanced and Config sections.

Disabling is also carried out in different ways. Sometimes it’s enough to just click “Disabled” and put the PCI-E video card first in the list.

If you are a laptop user, do not be alarmed if you cannot find a suitable option; your laptop may simply not have this function. For all other devices the rules are simple: however the BIOS itself may look, the substance is the same.

If you have two video cards and both are shown in Device Manager, then the matter is quite simple: right-click one of them and select "Disable". Keep in mind, however, that the display may go dark. This will most likely happen.

However, this too is a solvable problem. It is usually enough to restart the computer.

Make all subsequent settings on the remaining card. If this method does not work, roll back your actions using Safe Mode. You can also fall back on the previous method - through the BIOS.

Two programs - NVIDIA Control Panel and Catalyst Control Center - let you configure the use of a specific video adapter.

They are the most trouble-free option compared to the other two methods: the screen is unlikely to turn off, and you won't accidentally break any settings through the BIOS either.

For NVIDIA all settings are in the 3D section.

You can select a preferred video adapter for the entire operating system as well as for particular programs and games.

In the Catalyst software, the same function is located under the "Power" option, in the "Switchable Graphics" sub-item.

So switching between GPUs is a breeze.

There are different methods: in particular, through programs and through the BIOS. Turning one or another integrated graphics adapter on or off may be accompanied by some glitches, mainly related to the image.

It may go dark or simply become distorted. Nothing should affect the files on the computer itself, unless you changed something in the BIOS.

Conclusion

As a result, integrated graphics processors are in demand due to their low cost and compactness.

You will have to pay for this with the level of performance of the computer itself.

In some cases integrated graphics are simply necessary, while discrete processors are better suited to working with three-dimensional images.

In addition, the industry leaders are Intel, AMD and Nvidia. Each of them offers its own graphics accelerators, processors and other components.

The latest popular models are Intel HD Graphics 530 and AMD A10-7850K. They are quite functional, but have some flaws. In particular, this applies to power, performance and cost of the finished product.

You can enable or disable a graphics processor with a built-in core yourself through the BIOS, utilities and various programs, but the computer can also easily do it for you. It all depends on which video card the monitor is connected to.

Modern devices use a graphics processor, also referred to as a GPU. What is it, and what is its principle of operation? A GPU (Graphics Processing Unit) is a processor whose main task is to process graphics and floating-point calculations. The GPU offloads the main processor when it comes to heavy games and applications with 3D graphics.

What is this?

The GPU creates graphics, textures and colors. A CPU has a few cores that run at high clock speeds, while a graphics card has many cores that operate mostly at lower speeds. They perform pixel and vertex calculations; vertex processing mostly revolves around coordinate-system transformations. The graphics processor handles these tasks by creating a three-dimensional space on the screen in which objects move.
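To make the "vertex calculations in a coordinate system" concrete, here is a toy version of the per-vertex work: projecting triangle vertices from 3D space onto the screen plane with a single matrix multiply (the matrix and points are made-up values for illustration):

```python
import numpy as np

# Three vertices of a triangle in homogeneous coordinates (x, y, z, w).
vertices = np.array([[ 0.0,  1.0, 2.0, 1.0],
                     [ 1.0, -1.0, 2.0, 1.0],
                     [-1.0, -1.0, 2.0, 1.0]])

# A minimal perspective projection with focal length 1 (an assumption).
projection = np.array([[1.0, 0.0, 0.0, 0.0],
                       [0.0, 1.0, 0.0, 0.0],
                       [0.0, 0.0, 1.0, 0.0],
                       [0.0, 0.0, 1.0, 0.0]])

clip = vertices @ projection.T      # the same multiply, once per vertex
screen = clip[:, :2] / clip[:, 3:]  # perspective divide
print(screen)                       # 2D positions on the screen plane
```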

Principle of operation

What does a GPU do? It handles graphics processing in 2D and 3D formats. Thanks to the GPU, a computer completes important tasks faster and more easily. The GPU's peculiarity is that it maximizes calculation speed: its architecture is designed so that it processes visual information more efficiently than a computer's central CPU.

The GPU is responsible for placing three-dimensional models in the frame. In addition, it filters the triangles they consist of: it determines which are visible and discards those hidden behind other objects. It draws light sources and determines how those sources affect color. The graphics processor then creates the image and outputs it to the user's screen.
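One of the visibility tests described above, back-face culling, fits in a few lines (occlusion by other objects additionally requires a depth buffer; the triangle and camera position here are made-up values):

```python
import numpy as np

def is_front_facing(tri: np.ndarray, camera: np.ndarray) -> bool:
    """A triangle whose normal points away from the camera is discarded."""
    a, b, c = tri
    normal = np.cross(b - a, c - a)
    return float(np.dot(normal, camera - a)) > 0.0

tri = np.array([[0.0, 0.0, 0.0],
                [1.0, 0.0, 0.0],
                [0.0, 1.0, 0.0]])
camera = np.array([0.0, 0.0, 5.0])
print(is_front_facing(tri, camera))  # True -> keep this triangle
```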

Efficiency

What determines how effectively a GPU works? Temperature. One of the common problems with PCs and laptops is overheating: it is the main reason a device and its components fail early. GPU problems begin when the processor temperature exceeds 65 °C. In this case, users notice that the processor starts working more weakly and skips clock cycles in order to bring the elevated temperature down on its own.

The 65-80 °C temperature range is critical: the system may perform an emergency reboot and the computer can shut down on its own. It is important for the user to ensure that the GPU temperature does not exceed 50 °C. A temperature of 30-35 °C is considered normal at idle, and 40-45 °C under hours-long load. The lower the temperature, the higher the computer's performance. The motherboard, video card, case and hard drives each have their own temperature tolerances.
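The thresholds from this paragraph can be summarized in a toy helper (the band boundaries are the figures quoted above; real tolerances vary from card to card):

```python
def classify_gpu_temp(celsius: float) -> str:
    """Map a GPU temperature to the bands described above."""
    if celsius <= 35:
        return "normal (idle)"
    if celsius <= 45:
        return "normal (hours-long load)"
    if celsius <= 65:
        return "elevated: check the cooling"
    if celsius <= 80:
        return "critical: throttling, emergency reboot possible"
    return "overheating: shut down and service the cooling system"

print(classify_gpu_temp(42))  # -> normal (hours-long load)
```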

Many users are also concerned with how to reduce the processor's temperature in order to increase its efficiency. First you need to find the cause of the overheating. It could be a clogged cooling system, dried-out thermal paste, malware, processor overclocking or buggy BIOS firmware. The simplest thing a user can do is replace the thermal paste on the processor itself. In addition, the cooling system needs to be cleaned. Experts also advise installing a powerful cooler, improving the air circulation in the system unit and increasing the fan speed of the graphics adapter's cooler. All computers and GPUs follow the same temperature-reduction scheme. It is important to monitor the device and clean it in time.

Specifics

The graphics processor is located on the video card; its main task is processing 2D and 3D graphics. If a GPU is installed in the computer, the device's main processor is relieved of extra work and therefore runs faster. The GPU's main feature is that its primary goal is to increase the speed at which objects and textures - that is, graphical information - are calculated. The GPU's architecture allows it to process visual information far more efficiently; a general-purpose processor cannot do this as well.

Kinds

What is a graphics processor? It is a component of the graphics subsystem. There are two kinds of chip: integrated and discrete. Experts say the discrete kind copes better with its task: it is installed as a separate module, since it stands out in power, but it requires good cooling. Almost all computers have an integrated graphics processor. It is built into the CPU to keep energy consumption several times lower. It cannot compare with discrete chips in power, but it too shows good results.

Computer graphics

What is this? It is the name of the field of activity in which computer technology is used to create images and process visual information. Modern computer graphics, including scientific visualization, lets you process results graphically, build diagrams, graphs and drawings, and carry out various kinds of virtual experiments.

Technical products are created using constructive graphics. There are other types of computer graphics:

  • animated;
  • multimedia;
  • artistic;
  • advertising;
  • illustrative.

From a technical point of view, computer graphics are two-dimensional and 3D images.

CPU and GPU: the difference

What is the difference between these two designations? Many users know that the central processor and the graphics processor (described above) perform different tasks. In addition, they differ in their internal structure. CPUs and GPUs share many similar features, yet they are made for different purposes.

The CPU executes a specific chain of instructions in a short period of time. It is designed to form several chains at once, split the instruction stream into many parts, execute them and merge them back into one in a specific order. An instruction in a thread depends on the ones before it, which is why the CPU contains a small number of execution units and prioritizes execution speed and minimal downtime. All this is achieved with a pipeline and cache memory.

The GPU has a different important function: rendering visual effects and 3D graphics. It works more simply: it receives polygons as input, performs the necessary logical and mathematical operations, and outputs pixel coordinates. The GPU's work involves handling a large stream of similar tasks. Its peculiarity is that it commands enormous throughput, although its individual cores run slowly compared with a CPU's. Modern GPUs also have more than 2,000 execution units, and the two processors differ in how they access memory: graphics, for example, does not need a large cache, while GPUs offer greater memory bandwidth. Put simply, the CPU makes decisions in accordance with the program's tasks, while the GPU performs many identical calculations.
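The contrast can be sketched in a few lines: a sequential, dependency-laden loop stands in for the CPU style, while a vectorized NumPy call stands in for thousands of identical GPU lanes (an analogy, not actual GPU code):

```python
import numpy as np

a = np.random.rand(1_000_000).astype(np.float32)
b = np.random.rand(1_000_000).astype(np.float32)

# CPU style: one chain of instructions, each step depending on the last.
total = 0.0
for i in range(1000):            # just a slice, to keep the loop short
    total += float(a[i] * b[i])

# GPU style: the same multiply applied to every element at once.
products = a * b
print(total, products[:3])
```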