How to test a computer power supply: our methodology and notes on power supply design

This article describes the methodology we use for testing power supplies. Until now, individual parts of this description have been scattered across our various power supply reviews, which is inconvenient for anyone who wants to quickly familiarize themselves with the methodology in its current state.

This material is updated as the methodology develops and improves, so some of the methods described here may not have been used in our older power supply reviews - that only means the method in question was developed after the corresponding article was published. A list of changes made to this article can be found at the end.

The article falls fairly neatly into three parts: in the first, we briefly list the parameters we check and the conditions under which we check them, and explain the technical meaning of those parameters. In the second, we go over a number of terms that power supply manufacturers often use for marketing purposes and explain what they really mean. The third part will interest those who want a closer look at the technical details of how our power supply test bench is built and operates.

Our guiding document in developing the methodology described below was the ATX12V Power Supply Design Guide, the latest version of which can be found at FormFactors.org. It has since been incorporated into a more general document entitled Power Supply Design Guide for Desktop Platform Form Factors, which covers not only ATX units but also other form factors (CFX, TFX, SFX, and so on). Although the PSDG is not formally a mandatory standard for all power supply manufacturers, we assume a priori that, unless explicitly stated otherwise, any computer power supply - that is, a unit in regular retail sale and intended for general use, as opposed to models specific to a particular manufacturer's computers - must comply with the PSDG requirements.

You can view the test results for specific power supply models in our catalog: "Catalog of tested power supplies".

Visual inspection of the power supply

Naturally, the first stage of testing is a visual inspection of the unit. Besides aesthetic pleasure (or, conversely, disappointment), it gives us a number of quite telling indicators of product quality.

First, of course, is the quality of the case: metal thickness, rigidity, assembly features (for example, the case may be made of thin steel yet fastened with seven or eight screws instead of the usual four), the quality of the unit's paintwork...

Second, the quality of the internal assembly. All power supplies passing through our laboratory are opened, examined inside, and photographed. We do not dwell on small details and do not list every part found in the unit along with its ratings - that would certainly lend the articles a scientific appearance, but in practice it is mostly meaningless. However, if a unit is built around a relatively unusual design, we try to describe it in general terms and explain why the designers might have chosen it. And, of course, if we notice any serious workmanship flaws - sloppy soldering, for example - we will certainly mention them.

Third, the unit's label specifications. In the case of inexpensive products, these often allow some conclusions about quality - for example, if the total power stated on the label turns out to be clearly greater than the sum of the products of the currents and voltages listed there.
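This label sanity check is easy to express in code. The figures below are hypothetical; note that a group-regulated unit may legitimately state a combined limit below the per-rail sum - the red flag is only the opposite case, a claimed total above what the rails can add up to:

```python
# Sanity check of a (hypothetical) power supply label: the claimed total power
# should not exceed the sum of the per-rail voltage * current products.
def label_is_plausible(rails, claimed_total_w):
    """rails: list of (voltage_v, max_current_a) pairs from the label."""
    rail_sum = sum(v * i for v, i in rails)
    return claimed_total_w <= rail_sum

# Example label: +12 V / 25 A, +5 V / 30 A, +3.3 V / 28 A.
# The per-rail sum is 12*25 + 5*30 + 3.3*28 = 542.4 W, so a claimed
# "550 W" total would be suspicious.
rails = [(12.0, 25.0), (5.0, 30.0), (3.3, 28.0)]
print(label_is_plausible(rails, 550))
```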


We also, of course, list the cables and connectors provided on the unit and state their lengths. We write the latter as a sum in which the first number is the distance from the power supply to the first connector, the second number is the distance between the first and second connectors, and so on. For the cable shown in the figure above, the entry would read: "removable cable with three power connectors for SATA hard drives, length 60+15+15 cm."
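The notation is trivial to generate programmatically; the small helper below is purely illustrative:

```python
# Format a cable description in the article's "sum" notation: the first
# number is the distance from the PSU to the first connector, the rest
# are distances between adjacent connectors.
def cable_note(kind, segments_cm):
    return f"{kind}, length {'+'.join(str(s) for s in segments_cm)} cm"

print(cable_note("removable cable with three power connectors for SATA hard drives",
                 [60, 15, 15]))
```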

Full power operation

The most intuitive and therefore most popular characteristic among users is the power supply's total output power. The label states the so-called continuous power, that is, the power the unit can deliver indefinitely. Sometimes a peak power is listed next to it - as a rule, the unit can sustain it for no more than a minute. Some less scrupulous manufacturers state only the peak power, or state the continuous power but only at room temperature - meaning that inside a real computer, where the air is warmer than in the room, the permissible power of such a unit is lower. According to the ATX12V Power Supply Design Guide, the fundamental document on computer power supplies, a unit must deliver its stated load power at air temperatures up to 50 °C - and some manufacturers explicitly mention this temperature to avoid ambiguity.

In our tests, however, operation at full power is checked under mild conditions - at room temperature, about 22...25 °C. The unit runs at the maximum permissible load for at least half an hour; if nothing untoward happens during this time, the test is considered passed.

At the moment, our test rig allows us to fully load units rated at up to 1350 W.

Cross-load characteristics

Although a computer power supply produces several different voltages simultaneously - the main ones being +12 V, +5 V, and +3.3 V - in most models the first two share a common regulator. It aims at the arithmetic mean of the two controlled voltages - a scheme known as "group stabilization".

Both the drawback and the advantage of this design are obvious: on the one hand, lower cost; on the other, the voltages depend on each other. Say we increase the load on the +12 V bus: the voltage sags, and the regulator tries to "pull" it back up - but since it stabilizes +5 V at the same time, both voltages rise. The regulator considers the situation corrected when the average deviation of the two voltages from nominal is zero - which in this case means +12 V ends up slightly below nominal and +5 V slightly above; raise the first and the second immediately rises too, lower the second and the first drops with it.
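The feedback behavior described above can be sketched with a toy numerical model - not a real circuit; the droop resistance, loop iteration, and load values are made up purely to show the coupling effect:

```python
# Toy model of "group stabilization": a single feedback loop regulates the
# average relative deviation of the +12 V and +5 V outputs, so the pair can
# only be correct on average, not individually. droop_ohm is an invented
# per-rail sag under load.
def group_regulated_outputs(load_12_a, load_5_a, droop_ohm=0.02):
    v12, v5 = 12.0, 5.0
    out12, out5 = v12, v5
    for _ in range(200):                      # iterate the loop to equilibrium
        out12 = v12 - droop_ohm * load_12_a   # each rail sags under its own load
        out5 = v5 - droop_ohm * load_5_a
        # single error signal: average relative deviation of both rails
        error = ((out12 - 12.0) / 12.0 + (out5 - 5.0) / 5.0) / 2
        v12 -= 12.0 * error                   # one correction scales both rails
        v5 -= 5.0 * error
    return out12, out5

# Heavy +12 V load, light +5 V load: +12 V settles low, +5 V settles high.
print(group_regulated_outputs(30.0, 1.0))
```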

Of course, unit designers make some effort to mitigate this problem - and the easiest way to judge how effective they have been is with so-called cross-load characteristic graphs (CLC for short).

An example CLC graph


The horizontal axis shows the load on the +12 V bus of the unit under test (if it has several lines at this voltage, the total load on them), and the vertical axis shows the combined load on the +5 V and +3.3 V buses. Each point on the graph thus corresponds to a particular balance of load between these buses. For clarity, we not only outline the zone in which the unit's output voltages stay within permissible limits, but also color-code their deviation from nominal - from green (deviation under 1%) to red (deviation of 4 to 5%). A deviation of more than 5% is considered unacceptable.
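The color coding can be expressed as a simple lookup. The green and red thresholds are those given in the text; the intermediate hues are our assumption, added only to make the gradation concrete:

```python
# Map a measured voltage to the color used on a cross-load graph.
# <1% and 4-5% bands are from the text; intermediate names are illustrative.
def deviation_color(measured, nominal):
    dev = abs(measured - nominal) / nominal * 100.0  # deviation in percent
    if dev < 1.0:
        return "green"
    if dev < 2.0:
        return "light green"
    if dev < 3.0:
        return "yellow"
    if dev < 4.0:
        return "orange"
    if dev <= 5.0:
        return "red"
    return "out of spec"  # more than 5% is unacceptable

print(deviation_color(11.95, 12.0))  # ~0.4% deviation -> "green"
```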

For instance, in the graph above we can see that the +12 V voltage of the tested unit (the graph was plotted specifically for that voltage) holds up well: a large part of the graph is green, and it only turns red under a strong load imbalance towards the +5 V and +3.3 V buses.

In addition, the graph is bounded on the left, bottom, and right by the unit's minimum and maximum permissible loads - while the ragged upper edge is where the voltages exceeded the 5% limit. According to the standard, the power supply can no longer be used for its intended purpose in that load region.

The region of typical loads on a CLC graph


Of course, it also matters a great deal in which region of the graph the voltages deviate most strongly from nominal. The picture above marks the region of power consumption typical of modern computers: all of their most power-hungry components (video cards, processors...) are now fed from the +12 V bus, so the load on it can be very large. On the +5 V and +3.3 V buses, in effect, only hard drives and motherboard components remain, so their consumption rarely exceeds a few tens of watts even in computers that are very powerful by modern standards.

Comparing the above graphs of two units makes it clear that the first turns red only in a region irrelevant to modern computers, while the second, alas, does the opposite. So although both units showed broadly similar results over the full load range, in practice the first is preferable.

Since we monitor all three main buses of the power supply - +12 V, +5 V, and +3.3 V - during the test, the cross-load results in our articles are presented as an animated three-frame image, each frame corresponding to the voltage deviation on one of these buses.

Recently, power supplies with independent regulation of the output voltages have also become increasingly widespread; in them, the classic circuit is supplemented with additional regulators built around so-called saturable-core (magnetic amplifier) circuits. Such units show far less interdependence between output voltages - as a rule, their CLC graphs abound in green.

Fan speed and temperature rise

The effectiveness of a unit's cooling system can be judged from two angles - noise and heating. Achieving good results on both counts at once is problematic: cooling can be improved by installing a more powerful fan, but then we lose on noise - and vice versa.

To evaluate a unit's cooling effectiveness, we step its load from 50 W up to the maximum permissible, giving the unit 20...30 minutes to warm up at each step - during this time its temperature settles at a constant level. After warm-up, we measure the fan speed with a Velleman DTO2234 optical tachometer, and the temperature difference between the cool air entering the unit and the heated air leaving it with a Fluke 54 II two-channel digital thermometer.
Ideally, both numbers should be minimal. If both the temperature rise and the fan speed are high, the cooling system is poorly designed.
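Each step of this procedure boils down to recording one row of data - load, fan speed, and the inlet/outlet temperature difference. The numbers below are invented for illustration:

```python
# Build the cooling-test table described above.
# steps: list of (load_w, fan_rpm, t_in_c, t_out_c) measurements.
def cooling_table(steps):
    return [(load, rpm, round(t_out - t_in, 1))
            for load, rpm, t_in, t_out in steps]

# Two hypothetical load steps: ΔT and rpm should both stay as low as possible.
for load, rpm, dt in cooling_table([(50, 1150, 23.0, 28.5),
                                    (400, 2300, 23.0, 36.0)]):
    print(f"{load} W: {rpm} rpm, ΔT = {dt} °C")
```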

Virtually all modern units, of course, have variable fan speed control - but in practice both the initial speed (that is, the speed at minimum load; it is very important, since it determines the unit's noise when the computer is idle and the video card and processor fans are spinning at their minimum) and the shape of the speed-versus-load curve can vary greatly. In bottom-end power supplies, for example, a single thermistor often regulates the fan with no additional circuitry - in that case the speed may change by only 10...15%, which hardly qualifies as regulation at all.

Many power supply manufacturers specify either a noise level in decibels or a fan speed in revolutions per minute. Both are often accompanied by a clever marketing trick: the noise or speed is measured at an ambient temperature of 18 °C. The resulting figure is usually very pretty (say, a 16 dBA noise level) but meaningless, since in a real computer the air will be 10...15 °C warmer. Another trick we have come across is specifying, for a unit fitted with two different fans, the characteristics of only the slower one.

Output voltage ripple

The operating principle of a switch-mode power supply - and all computer power supplies are switch-mode - is based on running a step-down transformer at a frequency far higher than that of the AC mains, which allows the transformer's dimensions to be reduced many times over.

The AC mains voltage (50 or 60 Hz, depending on the country) is rectified and smoothed at the unit's input, then fed to a transistor switch that converts the DC voltage back to AC, but at a frequency three orders of magnitude higher - 60 to 120 kHz, depending on the power supply model. That voltage goes to a high-frequency transformer, which steps it down to the values we need (12 V, 5 V...), after which it is rectified and smoothed again. Ideally the unit's output voltage should be strictly constant - but in reality, of course, the alternating high-frequency component cannot be smoothed out completely. The standard requires that the peak-to-peak span (the distance from minimum to maximum) of the residual output ripple at maximum load not exceed 50 mV on the +5 V and +3.3 V buses and 120 mV on the +12 V bus.
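The ripple check itself reduces to comparing the peak-to-peak span of each oscillogram with the standard's limit. The sample values below are hypothetical:

```python
# ATX limits on peak-to-peak output ripple at maximum load, as cited above.
RIPPLE_LIMIT_MV = {"+12V": 120, "+5V": 50, "+3.3V": 50}

def ripple_ok(rail, samples_mv):
    """samples_mv: oscillogram samples of one rail, in millivolts.
    The 'span' is the distance from the minimum to the maximum sample."""
    span = max(samples_mv) - min(samples_mv)
    return span <= RIPPLE_LIMIT_MV[rail]

# Hypothetical +12 V oscillogram: span of 95 mV, within the 120 mV limit.
print(ripple_ok("+12V", [11980, 12040, 12075, 11995]))
```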

When testing a unit, we capture oscillograms of its main output voltages at maximum load with a Velleman PCSU1000 two-channel oscilloscope and present them as a combined graph:


The top trace corresponds to the +5 V bus, the middle one to +12 V, and the bottom one to +3.3 V. In the picture above, the maximum permissible ripple values are conveniently marked on the right: as you can see, in this power supply the +12 V bus fits within them easily, the +5 V bus barely, and the +3.3 V bus not at all. The tall, narrow spikes on the latter's oscillogram tell us that the unit fails to filter out the highest-frequency noise - usually a consequence of insufficiently good electrolytic capacitors, whose effectiveness drops significantly as frequency rises.

In practice, ripple exceeding the permissible limits can hurt the computer's stability and cause interference with sound cards and similar equipment.

Efficiency

Whereas above we considered only the power supply's output parameters, measuring efficiency also brings its input into play: what percentage of the power drawn from the mains the unit converts into power delivered to the load. The difference, of course, goes into uselessly heating the unit itself.

The current version of the ATX12V standard, 2.2, sets a lower bound on efficiency: at least 72% at rated load, 70% at maximum load, and 65% at light load. There are also the standard's recommended figures (80% efficiency at rated load) and the voluntary "80 Plus" certification program, which requires at least 80% efficiency at any load from 20% of maximum up to full load. The new Energy Star 4.0 certification program contains the same requirements as "80 Plus".
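These requirements are easy to encode as simple threshold checks; the sample efficiency figures are hypothetical:

```python
# Minimum efficiencies (percent) required by ATX12V 2.2 as cited above,
# plus the stricter 80 Plus criterion (>=80% from 20% load to full load).
ATX_MIN = {"light": 65.0, "rated": 72.0, "max": 70.0}

def meets_atx(eff):
    """eff: dict with the same keys as ATX_MIN, values in percent."""
    return all(eff[k] >= ATX_MIN[k] for k in ATX_MIN)

def meets_80plus(points):
    """points: (load_fraction, efficiency_percent) pairs along the curve."""
    return all(e >= 80.0 for f, e in points if f >= 0.20)

# A hypothetical unit that clears ATX12V 2.2 but would fail 80 Plus.
print(meets_atx({"light": 68.0, "rated": 79.0, "max": 77.0}))
print(meets_80plus([(0.2, 78.0), (0.5, 81.0), (1.0, 79.5)]))
```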

In practice, a power supply's efficiency depends on the mains voltage: the higher it is, the better the efficiency; the difference between 110 V and 220 V mains is about 2%. Unit-to-unit variation in component parameters can account for another 1...2% between samples of the same model.

During our tests, we step the load on the unit from 50 W up to the maximum possible and, at each step, after a short warm-up, measure the power the unit draws from the mains - the ratio of the load power to the power drawn from the mains gives us the efficiency. The result is a graph of efficiency versus load.
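The calculation at each step is a one-liner; the measurement pairs below are invented for illustration and are not from any real unit:

```python
# Efficiency at each load step is simply load power over mains power.
def efficiency_curve(measurements):
    """measurements: list of (load_w, mains_w) pairs, one per load step."""
    return [(load, 100.0 * load / mains) for load, mains in measurements]

# Hypothetical data showing the typical shape: efficiency climbs,
# peaks, then slowly declines towards full load.
steps = [(50, 71.4), (200, 243.9), (400, 487.8), (730, 912.5)]
for load, eff in efficiency_curve(steps):
    print(f"{load:4d} W: {eff:.1f}%")
```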


As a rule, the efficiency of a switch-mode power supply climbs quickly as the load grows, reaches a maximum, and then slowly declines. This nonlinearity has an interesting consequence: efficiency-wise, it is usually slightly better to buy a unit whose rated power matches the load. Take a unit with a large power headroom, and a small load will land in the part of the curve where efficiency has not yet peaked (for example, a 200 W load on the graph of the 730 W unit shown above).

Power factor

As is well known, two kinds of power can be considered in an AC circuit: active and reactive. Reactive power arises in two cases - either when the load current is out of phase with the mains voltage (that is, the load is inductive or capacitive), or when the load is nonlinear. A computer power supply is a textbook example of the second case: unless additional measures are taken, it draws current from the mains in short, tall pulses coinciding with the peaks of the mains voltage.

The problem is this: while active power is entirely converted into work in the unit (by which we here mean both the energy delivered to the load and the unit's own heating), reactive power is not actually consumed at all - it is returned in full to the mains. It simply shuttles back and forth between the power plant and the unit, so to speak. But it heats the wires connecting them no less than active power does... which is why engineers try to get rid of reactive power as far as possible.

The most effective means of suppressing reactive power is a circuit known as active PFC (power factor correction). In essence it is a switch-mode converter designed so that its instantaneous current draw is directly proportional to the instantaneous mains voltage - in other words, it is deliberately made linear, and therefore consumes only active power. From the A-PFC's output, the voltage feeds the power supply's main switching converter - the very one whose nonlinearity previously made the load reactive; but since that converter now runs from DC, its linearity no longer matters: it is reliably decoupled from the mains and can no longer affect it.

To express the relative amount of reactive power, the notion of power factor is used: the ratio of active power to apparent power (the combination of the active and reactive components, often called total power). In an ordinary power supply it is about 0.65, while in a unit with A-PFC it is 0.97...0.99 - that is, A-PFC reduces the reactive power to almost nothing.
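A minimal sketch of the calculation, combining the active and reactive components geometrically into the apparent power; the reactive figures below are illustrative, chosen only to land near the 0.65 and 0.97...0.99 values quoted above:

```python
import math

# Power factor: active power over apparent power, where apparent power is
# the geometric combination of the active and reactive components.
def power_factor(active_w, reactive_var):
    apparent_va = math.hypot(active_w, reactive_var)
    return active_w / apparent_va

print(round(power_factor(300.0, 350.0), 2))  # a unit without PFC: ~0.65
print(round(power_factor(300.0, 50.0), 2))   # a unit with A-PFC: ~0.99
```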

Users and even reviewers sometimes confuse power factor with efficiency - both describe how effective a power supply is, but mixing them up is a serious mistake. Power factor describes how effectively the unit uses the AC mains - what fraction of the power flowing through it the unit actually puts to work - while efficiency describes how effectively the power drawn from the mains is converted into power delivered to the load. The two are unrelated, because, as noted above, the reactive power that determines the power factor is simply not converted into anything in the unit; the notion of "conversion efficiency" does not apply to it, so it has no effect whatsoever on efficiency.

Generally speaking, A-PFC benefits not the user but the utility companies, since it cuts the load the computer's power supply places on the grid by more than a third - and with a computer on every desk, that adds up to very noticeable numbers. Meanwhile, for the average home user it makes virtually no difference whether the power supply has A-PFC or not, even in terms of the electricity bill: household meters, at least for now, register only active power. So manufacturers' claims about how A-PFC helps your computer are nothing but the usual marketing noise.

A side benefit of A-PFC is that it is easily designed to work across the full 90 to 260 V range, yielding a universal power supply that runs on any mains without a manual voltage switch. Moreover, while units with mains voltage switches can work in two ranges, 90...130 V and 180...260 V, but cannot be run between 130 and 180 V, a unit with A-PFC covers all these voltages seamlessly. As a result, if for some reason you must operate from an unstable mains supply that frequently sags below 180 V, a unit with A-PFC will either let you do without a UPS altogether or noticeably extend its battery runtime.

That said, A-PFC by itself does not guarantee full-range operation - it may be designed for 180...260 V only. This is sometimes seen in units intended for Europe, since dropping full-range support makes the A-PFC slightly cheaper.

Besides active PFC, passive PFC is also found in units. It is the simplest form of power factor correction: just a large inductor connected in series with the power supply's input. Its inductance slightly smooths the current pulses the unit draws, reducing the degree of nonlinearity. The effect of P-PFC is very small - the power factor rises from 0.65 to only 0.7...0.75 - but while adding A-PFC requires a serious redesign of the unit's high-voltage circuitry, a P-PFC choke can be dropped into any existing power supply without the slightest difficulty.

In our tests we determine the unit's power factor using the same procedure as for efficiency, gradually raising the load from 50 W to the maximum permissible. The resulting data is plotted on the same graph as the efficiency.

Working in tandem with a UPS

Unfortunately, the A-PFC described above has not only advantages but also one drawback: some implementations cannot work properly with uninterruptible power supplies. At the moment the UPS switches to battery, such an A-PFC abruptly increases its current draw, the UPS's overload protection trips, and it simply shuts down.

To assess the soundness of the A-PFC implementation in each particular unit, we connect it to an APC SmartUPS SC 620VA and check their joint operation in two modes - first on mains power, then on battery. In both cases the load on the unit is gradually increased until the UPS's overload indicator lights up.

If the power supply is UPS-compatible, the permissible load is usually 340...380 W on mains power and a little less, about 320...340 W, on battery. Moreover, if the power was higher at the moment of switchover to battery, the UPS lights its overload indicator but does not shut down.

If the unit suffers from the problem described above, the maximum power at which the UPS will run it on battery drops noticeably below 300 W, and when that is exceeded, the UPS shuts down completely - either right at the moment of switching to battery or five to ten seconds later. If you plan to buy a UPS, you had better avoid such a unit.

Fortunately, units incompatible with UPSes are becoming rarer. For example, while FSP Group's PLN/PFN series units had such problems, the subsequent GLN/HLN series fixed them entirely.

If you already own a unit that cannot work properly with a UPS, there are two options (besides modifying the unit itself, which calls for solid electronics skills): replace either the unit or the UPS. The former is generally cheaper, since the UPS would have to be bought with a very large power reserve, or even be an online type - which, to put it mildly, is not cheap and is hardly justified for home use.

Marketing noise

Besides the technical characteristics that can and should be verified in testing, manufacturers love to cover power supplies with beautiful inscriptions about the technologies used in them. Their meaning is sometimes distorted, sometimes trivial; sometimes the technologies concern only the unit's internal circuitry, chosen for manufacturability or cost reasons, and do not affect its "external" parameters at all. In other words, beautiful labels are often mere marketing noise - white noise carrying no useful information. Most of these claims make little sense to test experimentally, but below we attempt to list the main and most common ones so that our readers can better understand what they are dealing with. If you think we have missed something characteristic, do not hesitate to tell us - we will gladly extend the article.

Dual +12V output circuits

In the good old days, power supplies had one bus per output voltage - +5 V, +12 V, +3.3 V, plus a couple of negative voltages - and the maximum power per bus did not exceed 150...200 W; only in some particularly powerful server units could the load on the five-volt bus reach 50 A, that is, 250 W. With time, however, the situation changed: the total power consumed by computers kept growing, and its distribution between the buses shifted towards +12 V.

In the ATX12V 1.3 standard, the recommended +12 V bus current reached 18 A... and that is where the problems began. Not with increasing the current - there was no particular difficulty there - but with safety. According to the EN 60950 standard, the maximum power available through connectors freely accessible to the user must not exceed 240 VA: higher powers, it is held, are likely to lead to various unpleasant consequences, such as fire, in the event of a short circuit or equipment failure. On a 12-volt bus that power is reached at 20 A, and the power supply's output connectors obviously count as freely accessible to the user.

As a result, when the permissible +12 V load current had to be raised further, the developers of the ATX12V standard (i.e. Intel) decided to split this bus into several, 18 A each (the 2 A difference was left as a small margin). This was done purely for safety reasons; there are no other reasons whatsoever behind the decision. The immediate consequence is that the power supply does not actually need more than one +12 V rail at all - it merely needs to trip its protection whenever any of its 12 V connector groups is loaded beyond 18 A. That is all. The simplest implementation is to fit several shunts inside the unit, each connected to its own group of connectors; if the current through any shunt exceeds 18 A, the protection trips. Thus, on the one hand, no single connector group can deliver more than 18 A × 12 V = 216 VA, while on the other, the combined power drawn from different connectors can exceed that figure. The wolves are fed and the sheep are safe.
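The shunt scheme described above can be sketched as follows. The 18 A trip threshold is from the standard; the shunt resistance and voltage readings are invented for illustration:

```python
# Sketch of shunt-based per-connector-group overcurrent sensing: the
# supervisor chip measures the voltage drop across each shunt and trips
# protection when the implied current exceeds 18 A.
SHUNT_OHMS = 0.005   # illustrative shunt resistance
TRIP_AMPS = 18.0     # per-"rail" limit from the ATX12V standard

def group_currents(shunt_voltages_v):
    """Since the shunt resistance is known, the drop across each shunt
    unambiguously yields the current through it: I = V / R."""
    return [v / SHUNT_OHMS for v in shunt_voltages_v]

def protection_trips(shunt_voltages_v):
    return any(i > TRIP_AMPS for i in group_currents(shunt_voltages_v))

# Two "virtual rails" at 15 A and 19 A: the combined 34 A is fine,
# but the 19 A group exceeds its own limit, so protection fires.
print(protection_trips([0.075, 0.095]))
```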

So, in fact, power supplies with two, three, or four genuinely separate +12 V rails are practically never found in nature. Simply because there is no need: why cram a heap of extra parts into an already crowded unit when you can get by with a couple of shunts and a simple chip that monitors the voltage across them (and since the shunts' resistance is known, that voltage immediately and unambiguously yields the current flowing through each shunt)?

Of course, the marketing departments of power supply manufacturers could not pass up such a gift - and now power supply boxes proclaim how two +12 V lines help increase power and stability. And if there are three lines...

That would be fine if it ended there. The latest fashion is power supplies in which the lines are, as it were, separated - and yet, as it were, not. How so? Very simply: as soon as the current on one of the lines reaches the cherished 18 A, the overload protection is... switched off. As a result, the sacred inscription "Triple 12V Rails for unprecedented power and stability" stays on the box, while next to it, in the same font, one can add some nonsense about how, when needed, all three lines merge into one. Nonsense, because, as explained above, they were never separate in the first place. Grasping the full depth of this "new technology" from a technical standpoint is utterly impossible: in effect, the absence of one technology is being sold to us as the presence of another.

Of the cases known to us so far, Topower and Seasonic - and, accordingly, the brands selling their units under other names - have distinguished themselves in promoting this "self-disabling protection" to the masses.

Short circuit protection (SCP)

Protection against a short circuit on the unit's outputs. Mandatory per the ATX12V Power Supply Design Guide - which means it is present in every unit claiming to comply with the standard, even those without an "SCP" inscription on the box.

Overpower (overload) protection (OPP)

Protection against overloading the unit in terms of total power across all outputs. Mandatory.

Overcurrent protection (OCP)

Protection against overload (short of an outright short circuit) on any individual output of the unit. Present in many but not all units, and not necessarily on all outputs. Not mandatory.

Overtemperature protection (OTP)

Protection against overheating of the unit. Relatively uncommon and not mandatory.

Overvoltage protection (OVP)

Protection against output overvoltage. Mandatory, but in effect designed for a serious malfunction of the unit: it triggers only when an output voltage exceeds nominal by 20...25%. In other words, if your unit puts out 13 V instead of 12 V, you should replace it as soon as possible, but its protection is not obliged to kick in - it is intended for more critical situations that threaten immediate damage to the equipment connected to the unit.

Undervoltage protection (UVP)

Protection against output undervoltage. Too low a voltage, unlike too high, does not have fatal consequences for the computer, but it can cause malfunctions - in the operation of a hard drive, say. Again, the protection fires when a voltage sags by 20...25%.

Nylon sleeve

Soft braided nylon sleeves into which the power supply's output wires are gathered - they make it a little easier to route the wires inside the system unit and keep them from tangling.

Unfortunately, many manufacturers have progressed from the undoubtedly sound idea of nylon sleeves to thick plastic tubes, often supplemented with shielding and a layer of paint that glows under ultraviolet light. The glowing paint is a matter of taste, but power supply wires need shielding about as much as a fish needs an umbrella. Thick tubes, meanwhile, make the cables stiff and inflexible, which not only hampers routing them in the case but outright endangers the power connectors, which bear considerable force from cables resisting a bend.

This is often done supposedly to improve the cooling of the system unit - but, I assure you, sleeving the power supply wires has very little effect on the airflow inside the case.

Dual core CPU support

In fact, nothing more than a beautiful label. Dual-core processors do not require any special support from the power supply.

SLI and CrossFire support

Another beautiful label, indicating the presence of a sufficient number of video card power connectors and the ability to produce power considered sufficient to power an SLI system. Nothing more.

Sometimes the unit's manufacturer receives a corresponding certificate from a video card maker, but this means nothing beyond the aforementioned availability of connectors and high power - and the latter often significantly exceeds the needs of a typical SLI or CrossFire system. After all, the manufacturer needs to somehow justify to buyers the purchase of an insanely powerful unit, so why not do it by putting the "SLI Certified" label only on that model?

Industrial class components

Once again, a beautiful label! As a rule, industrial-grade components mean parts that operate over a wide temperature range - but honestly, why put a chip into a power supply that can operate from -45 °C when the unit will never be exposed to such cold?

Sometimes industrial components mean capacitors rated for operation at temperatures up to 105 °C, but here everything is also rather banal: capacitors in the output circuits of a power supply, which heat up on their own and sit next to hot chokes, are always rated for a maximum temperature of 105 °C anyway - otherwise their service life would be too short. Of course, the temperature inside the power supply is much lower than 105 °C, but the problem is that any increase in temperature shortens a capacitor's service life - and the higher the capacitor's maximum permissible operating temperature, the smaller the effect of heating on its lifespan.

The input high voltage capacitors operate at almost ambient temperature, so the use of slightly cheaper 85-degree capacitors does not affect the life of the power supply in any way.

Advanced double forward switching design

Luring the buyer with beautiful, but completely incomprehensible words is a favorite pastime of marketing departments.

In this case, we are talking about the topology of the power supply, that is, the general principle by which its circuit is built. There are quite a few different topologies - besides the actual two-transistor single-ended forward converter (double forward converter), computer units also use single-transistor single-ended forward converters, as well as half-bridge push-pull forward converters. All these terms are of interest only to electronics specialists; for the average user they mean essentially nothing.

The choice of a specific power supply topology is determined by many factors - the availability and price of transistors with the necessary characteristics (which differ significantly between topologies), transformers, control chips... For example, the single-transistor forward version is simple and cheap, but requires a high-voltage transistor and high-voltage diodes at the output of the unit, so it is used only in inexpensive low-power units (the cost of high-voltage, high-power diodes and transistors is too high). The half-bridge push-pull version is a little more complicated, but the voltage across its transistors is half as high... In general, it is mainly a question of the availability and cost of the necessary components. For example, we can confidently predict that sooner or later synchronous rectifiers will appear in the secondary circuits of computer power supplies - there is nothing particularly new in this technology, it has been known for a long time; it is simply still too expensive, and the benefit it provides does not cover the cost.

Double transformer design

The use of two power transformers is found in high-power units (usually from a kilowatt upward). As in the previous paragraph, this is a purely engineering decision which in itself has no noticeable effect on the unit's characteristics - it is simply more convenient in some cases to distribute the considerable power of a modern unit across two transformers, for example when a single full-power transformer will not fit within the unit's height. Nevertheless, some manufacturers present the two-transformer topology as providing greater stability, reliability, and so on, which is not really true.

RoHS (Restriction of Hazardous Substances)

An EU directive restricting the use of a number of hazardous substances in electronic equipment from July 1, 2006. Lead, mercury, cadmium, hexavalent chromium and two brominated compounds were banned - for power supplies this means, first of all, a transition to lead-free solders. On the one hand, we are of course all for the environment and against heavy metals - but, on the other hand, an abrupt transition to new materials can have very unpleasant long-term consequences. Many remember the story of the Fujitsu MPG hard drives, in which the mass failure of Cirrus Logic controllers was caused by packaging them in cases made of a new "eco-friendly" compound from Sumitomo Bakelite: its components promoted the migration of copper and silver and the formation of bridges between tracks inside the chip package, which led to almost guaranteed failure of the chip after a year or two of operation. The compound was discontinued, the participants in the story exchanged a pile of lawsuits, and the owners of the data that died along with the hard drives could only watch from the sidelines.

Equipment used

Of course, the first priority when testing a power supply is to check its operation at various load powers, up to the maximum. For a long time, reviewers used ordinary computers for this purpose, installing the unit under test in them. This scheme has two main drawbacks: first, the power drawn from the unit cannot be controlled in any flexible way; second, it is difficult to adequately load units with a large power reserve. The second problem has become especially pronounced in recent years, as power supply manufacturers race toward ever higher maximum power, so that the capabilities of their products far exceed the needs of a typical computer. One could argue that since a computer does not need more than 500 W, there is little point in testing units at higher loads - but since we formally test products with higher rated power, it would be strange not to verify their performance over the entire permissible load range.

To test power supplies in our laboratory, we use an adjustable load with program control. The system relies on a well-known property of insulated-gate field-effect transistors (MOSFETs): they limit the current through the drain-source circuit depending on the gate voltage.

Shown above is the simplest current-stabilizer circuit based on a field-effect transistor: by connecting the circuit to a power supply with an output voltage of +V and turning the knob of variable resistor R1, we change the voltage at the gate of transistor VT1, thereby changing the current I flowing through it - from zero to the maximum (determined by the characteristics of the transistor and/or the power supply under test).

However, this circuit is far from perfect: as the transistor heats up, its characteristics drift, which means the current I will also change even though the control voltage at the gate remains constant. To combat this problem, we add a second resistor R2 and an operational amplifier DA1 to the circuit:

When the transistor is open, current I flows through its drain-source circuit and through resistor R2. The voltage across the latter, by Ohm's law, is U = R2*I. This voltage is fed to the inverting input of operational amplifier DA1; the non-inverting input of the same op-amp receives the control voltage U1 from variable resistor R1. Any operational amplifier connected this way tries to keep the voltages at its two inputs equal; it does so by changing its output voltage, which in our circuit goes to the gate of the field-effect transistor and thereby regulates the current flowing through it.

Say resistance R2 = 1 Ohm and we set the voltage at resistor R1 to 1 V: the op-amp will then adjust its output voltage so that 1 volt also drops across resistor R2 - accordingly, current I settles at 1 V / 1 Ohm = 1 A. If we set R1 to 2 V, the op-amp responds by setting I = 2 A, and so on. If the current I, and with it the voltage across resistor R2, changes because the transistor heats up, the op-amp immediately adjusts its output voltage to bring them back.
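
The relationship described above reduces to I = U1 / R2. A tiny numeric sketch of the ideal-op-amp case, using the example values from the text:

```python
# Numeric sketch of the feedback loop described above: the op-amp drives
# the MOSFET gate until the voltage across sense resistor R2 equals the
# control voltage U1, so the load current settles at I = U1 / R2.
R2 = 1.0  # ohms, current-sense resistor (value from the article's example)

def load_current(u1_volts: float, r2_ohms: float = R2) -> float:
    """Steady-state current forced by an ideal op-amp loop."""
    return u1_volts / r2_ohms

print(load_current(1.0))  # 1 V control voltage -> 1 A, as in the text
print(load_current(2.0))  # 2 V control voltage -> 2 A
```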

As you can see, we have obtained an excellent controlled load that lets us smoothly change the current from zero to maximum by turning a single knob; once set, the value is maintained automatically for as long as needed, and the whole thing is also very compact. Such a scheme is, of course, an order of magnitude more convenient than a bulky set of low-resistance resistors connected in groups to the power supply under test.

The maximum power dissipated by a transistor is determined by its thermal resistance, the maximum permissible temperature of the crystal and the temperature of the radiator on which it is installed. Our installation uses International Rectifier IRFP264N transistors (PDF, 168 kbytes) with a permissible crystal temperature of 175 °C and a crystal-to-heatsink thermal resistance of 0.63 °C/W, and the cooling system of the installation allows us to keep the temperature of the radiator under the transistor within 80 °C (yes, the fans required for this are quite noisy...). Thus, the maximum power dissipated by one transistor is (175-80)/0.63 = 150 W. To achieve the required power, parallel connection of several loads described above is used, the control signal to which is supplied from the same DAC; You can also use parallel connection of two transistors with one op-amp, in which case the maximum power dissipation increases by one and a half times compared to one transistor.
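
The dissipation estimate above is simply P = (Tdie_max - Theatsink) / Rth. Written out as a sketch with the figures from the text:

```python
# The article's dissipation estimate, written out: maximum power a
# transistor can dissipate given the die temperature limit, the heatsink
# temperature, and the die-to-heatsink thermal resistance.
def max_dissipation(t_die_max: float, t_heatsink: float, rth: float) -> float:
    """P = dT / Rth, in watts."""
    return (t_die_max - t_heatsink) / rth

# IRFP264N figures from the text: 175 C die limit, 80 C heatsink, 0.63 C/W.
p_one = max_dissipation(175, 80, 0.63)
print(round(p_one, 1))        # ~150 W per transistor, as in the text

# Two transistors sharing one op-amp raise the limit ~1.5x, per the article.
print(round(p_one * 1.5, 1))
```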

There is only one step left to a fully automated test bench: replace the variable resistor with a computer-controlled DAC - and we will be able to adjust the load programmatically. By connecting several such loads to a multi-channel DAC and immediately installing a multi-channel ADC that measures the output voltages of the unit under test in real time, we will get a full-fledged test system for testing computer power supplies over the entire range of permissible loads and any combinations of them:

The photo above shows our test system in its current form. On the top two blocks of radiators, cooled by powerful fans of standard size 120x120x38 mm, there are load transistors for 12-volt channels; a more modest radiator cools the load transistors of the +5 V and +3.3 V channels, and in the gray block, connected by a cable to the LPT port of the control computer, the above-mentioned DAC, ADC and related electronics are located. With dimensions of 290x270x200 mm, it allows you to test power supplies with a power of up to 1350 W (up to 1100 W on the +12 V bus and up to 250 W on the +5 V and +3.3 V buses).


To control the stand and automate some of the tests, a special program was written, a screenshot of which is shown above. It allows you to:

manually set the load on each of the four available channels:

first channel +12 V, from 0 to 44 A;
second channel +12 V, from 0 to 48 A;
channel +5 V, from 0 to 35 A;
channel +3.3 V, from 0 to 25 A;

monitor the voltage of the tested power supply on the specified buses in real time;
automatically measure and plot cross-load characteristics (CLC) for a specified power supply;
automatically measure and plot graphs of the efficiency and power factor of the unit depending on the load;
plot graphs of the unit's fan speed versus load in semi-automatic mode;
calibrate the installation in semi-automatic mode in order to obtain the most accurate results.

Of particular value, of course, is the automatic plotting of CLC graphs: they require measuring the unit's output voltages at every permissible combination of loads, which means a very large number of measurements - performing such a test manually would demand considerable perseverance and a surplus of free time. Based on the rated specifications entered for the unit, the program builds a map of its permissible loads and then traverses it at a given interval, at each step measuring the voltages produced by the unit and plotting them on a graph; the whole process takes 15 to 30 minutes, depending on the unit's power and the measurement step - and, most importantly, requires no human intervention.



Efficiency and power factor measurements


To measure the unit's efficiency and power factor, additional equipment is used: the unit under test is connected to the 220 V mains through a shunt, and a Velleman PCSU1000 oscilloscope is connected to the shunt. Its screen thus shows the waveform of the current drawn by the unit, from which we can calculate the power it consumes from the mains and - knowing the load power we have set on the unit - its efficiency. The measurements run fully automatically: the PSUCheck program described above receives all the necessary data directly from the oscilloscope's software, which is connected to the computer via USB.

For maximum accuracy, the unit's output power is measured taking voltage sag into account: say, if under a 10 A load the +12 V bus drops to 11.7 V, the corresponding term in the efficiency calculation will be 10 A * 11.7 V = 117 W.
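
As a sketch, the output-power and efficiency calculation might look like this. All numbers here are illustrative, not measurements from the article; only the 11.7 V / 10 A term comes from the example above.

```python
# Sketch of the efficiency calculation described above: output power is
# summed from actually measured rail voltages and set load currents, then
# divided by the power drawn from the mains.
def output_power(rails):
    """rails: list of (measured_voltage, load_current) tuples."""
    return sum(v * i for v, i in rails)

# The article's example term: the +12 V rail sagging to 11.7 V at 10 A.
# The +5 V and +3.3 V figures and the mains-side power are assumed.
rails = [(11.7, 10.0), (5.02, 8.0), (3.31, 5.0)]
p_out = output_power(rails)
p_in = 210.0  # watts measured on the mains side (assumed figure)

print(round(p_out, 2))            # 173.71 W
print(f"{p_out / p_in:.1%}")      # efficiency
```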


Oscilloscope Velleman PCSU1000


The same oscilloscope is also used to measure the ripple range of the power supply's output voltages. Measurements are made on the +5 V, +12 V and +3.3 V buses at the maximum permissible load on the unit, the oscilloscope is connected using a differential circuit with two shunt capacitors (this is the connection recommended in ATX Power Supply Design Guide):



Peak-to-peak measurement


The oscilloscope used is a two-channel one; accordingly, the ripple amplitude can be measured on only one bus at a time. To get a complete picture, we repeat the measurements three times, and the three resulting oscillograms - one for each of the three monitored buses - are combined into one picture:


The oscilloscope settings are indicated in the lower left corner of the picture: in this case, the vertical scale is 50 mV/div, and the horizontal scale is 10 μs/div. As a rule, the vertical scale is unchanged in all our measurements, but the horizontal scale can change - some blocks have low-frequency ripples at the output, for which we present another oscillogram, with a horizontal scale of 2 ms/div.

The speed of the unit's fans - depending on the load on it - is measured in a semi-automatic mode: the Velleman DTO2234 optical tachometer we use does not have an interface with a computer, so its readings have to be entered manually. During this process, the load power on the unit changes in steps from 50 W to the maximum permissible; at each step, the unit is kept for at least 20 minutes, after which the rotation speed of its fan is measured.


At the same time, we measure the increase in temperature of the air passing through the block. Measurements are carried out using a Fluke 54 II two-channel thermocouple thermometer, one of the sensors of which determines the air temperature in the room, and the other - the temperature of the air leaving the power supply. For greater repeatability of results, we attach the second sensor to a special stand with a fixed height and distance to the unit - thus, in all tests, the sensor is in the same position relative to the power supply, which ensures equal conditions for all testing participants.

The final graph simultaneously displays the fan speeds and the difference in air temperatures - this allows, in some cases, to better assess the nuances of the operation of the unit’s cooling system.

When needed, a Uni-Trend UT70D digital multimeter is used to verify measurement accuracy and calibrate the installation. Calibration uses an arbitrary number of measurement points spread across the available range - in other words, for voltage calibration an adjustable power supply is connected, and its output voltage is changed in small steps from 1...2 V up to the maximum the installation measures on the given channel. At each step, the exact voltage shown by the multimeter is entered into the stand's control program, from which the program computes a correction table. This calibration method provides good measurement accuracy over the entire available range of values.
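
A correction table like this can be applied, for example, by linear interpolation between calibration points. The following is a minimal assumed sketch, not the stand's actual code; the table values are invented for illustration.

```python
# Assumed sketch: applying a calibration correction table by linear
# interpolation between (adc_reading, multimeter_reference) pairs.
import bisect

# Invented calibration pairs, as might be collected with the multimeter.
TABLE = [(1.00, 1.02), (5.00, 5.06), (12.00, 12.10), (20.00, 20.13)]

def corrected(reading: float) -> float:
    xs = [x for x, _ in TABLE]
    ys = [y for _, y in TABLE]
    if reading <= xs[0]:
        return ys[0]
    if reading >= xs[-1]:
        return ys[-1]
    # Find the calibration segment containing the reading and interpolate.
    i = bisect.bisect_right(xs, reading)
    x0, x1 = xs[i - 1], xs[i]
    y0, y1 = ys[i - 1], ys[i]
    return y0 + (y1 - y0) * (reading - x0) / (x1 - x0)

print(round(corrected(12.0), 2))   # 12.1
print(round(corrected(8.5), 2))    # halfway between the 5 V and 12 V points
```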

List of changes in testing methodology


10/30/2007 – first version of the article

Nowadays, many devices are powered by external power supplies - adapters. When a device stops showing signs of life, you first need to determine which part is at fault: the device itself or its power supply.
First of all, an external inspection. Look for signs of a fall, a damaged cord, and so on.

After the external inspection of the device being repaired, the first thing to do is check the power supply and what it outputs. It does not matter whether it is a built-in power supply or an adapter. Simply measuring the voltage at the power supply's output is not enough - a small load is needed. Without a load it may show 5 volts, while under even a light load it drops to 2 volts.

An incandescent lamp of a suitable voltage does a good job as a load. The voltage is usually written on the adapter. For example, take the power adapter from a router: 5.2 volts, 1 amp. We connect a 6.3-volt, 0.3-amp light bulb and measure the voltage. For a quick check, the bulb alone is enough: if it lights up, the power supply is working. It is rare for the voltage to differ greatly from the norm.

A lamp with a higher current may prevent the power supply from starting, so a low-current load is sufficient. I have a set of different lamps hanging on the wall for testing.

1, 2. For testing computer power supplies, higher and lower power respectively.
3. Small 3.5-volt and 6.3-volt lamps for checking power adapters.
4. A 12-volt automotive lamp for testing relatively powerful 12-volt power supplies.
5. A 220-volt lamp for testing television power supplies.
6. Two garlands of lamps missing from the photo: two 6.3-volt lamps for testing 12-volt power supplies, and three 6.3-volt lamps for testing 19-volt laptop power adapters.

If you have a meter, it is better to check the voltage under load.

If the light does not come on, it is better to first check the device with a known-good power supply, if one is available, because power adapters are usually made non-separable, and repairing one means prying it open - you can hardly call that disassembly.
An additional sign of a malfunctioning power supply can be a whistle from the power supply unit or the powered device itself, which usually indicates dry electrolytic capacitors. Tightly closed enclosures contribute to this.

Power supplies built into devices are checked the same way. In old TVs, a 220-volt lamp is soldered in place of the horizontal-scan stage, and its glow tells you whether the supply works. The load lamp is connected partly because some (built-in) power supplies can produce a significantly higher voltage than required when run without a load.

When choosing a computer, most users pay attention to parameters such as the number of cores and the processor's clock speed, how many gigabytes of RAM it has, how spacious the hard drive is, and whether the video card can handle the recently released Sims 4.

And they completely forget about the power supply unit (PSU), which is a great pity. After all, it is the "iron heart" of the computer, delivering through its wires the electricity needed to power every part of the machine while converting alternating current to direct current. A PSU failure means the whole machine stops working. That is why, when choosing a computer with the desired configuration, the quality and power of the power supply are also worth taking into account.

If one fine day the computer stops showing signs of life when you try to turn it on, that is a signal that the power supply urgently needs to be checked. Almost any user can do this at home in several ways.

It is never possible to say unequivocally that it is the power supply that has failed. There is only a list of characteristic signs suggesting that the computer's malfunctions are related specifically to the power supply:

The causes of such problems may be:

  • Unfavorable environmental conditions - accumulation of dust, high humidity and air temperature.
  • Absence or systematic interruption of voltage in the network.
  • Poor quality of connections or power supply elements.
  • An increase in temperature inside the system unit due to a failure of the ventilation system.

As a rule, the power supply unit is a fairly strong part, and it does not break very often. If you notice at least one of the symptoms described above in your computer, the power supply will need to be checked first.

Functionality testing methods

To be sure that the computer power supply is faulty and to determine exactly how the problem can be fixed, it is best to check this part comprehensively, using several methods in succession.

Stage one - checking voltage transmission

To measure the voltage transfer in a computer's power supply, the so-called paperclip method is used. The verification procedure is as follows:

The fact that the power supply is turned on does not mean that it is in full working order. The next stage of testing allows us to determine whether the part has other problems that are not yet visible to the eye.

Stage two - checking with a multimeter

Using this device, you can find out whether the alternating voltage of the network is converted into direct voltage and whether it is transmitted to the components of the device. This is done as follows:

Such a diagnostic device can also be used to check the PSU's capacitors and resistors. To check a capacitor, set the multimeter to continuity/resistance mode on the 2 kOhm range. When the device is connected correctly, the capacitor will begin charging; readings rising above 2 MOhm indicate the part is serviceable. A resistor is checked in resistance-measurement mode: a discrepancy between the manufacturer's declared resistance and the measured value indicates a fault.

Stage three - visual inspection of the part

If a special measuring device is not at hand, then you can carry out additional diagnostics of the power supply without using parts of the system unit and the network. How to check the power supply without a computer:

  1. Unscrew the power supply from the system unit case.
  2. Disassemble the part by unscrewing several mounting bolts.
  3. If you find swollen capacitors, this clearly indicates that the power supply is broken and needs to be replaced. You can also simply “revive” the old part by resoldering the capacitors with exactly the same ones.

Along the way, you should remove all contaminants from the disassembled power supply, lubricate the cooler, reassemble it and conduct another performance test.

Test software for power element

Sometimes, to check the serviceability of the power supply, it is not necessary to remove it from the system unit at all. You can download a program that will test the PSU for problems itself. It is important to understand that such software is just an additional diagnostic measure that helps pinpoint the location of the malfunction (which may, for example, be caused by the processor or a driver) and eliminate it effectively.

The OCCT program is used to check the power supply. How exactly to work with it:

At the end of testing, the program produces a detailed report on the failures and errors it detected, allowing the user to decide on further actions.

Computer malfunction can manifest itself in different ways. Sometimes these are regular reboots, sometimes they freeze, and sometimes the computer simply refuses to turn on. In such situations, the first suspect is the computer's power supply, because all other components of the computer depend on it, and if there is something wrong with it, the computer will not work normally. Therefore, when troubleshooting, the first thing you need to do is check the computer's power supply for functionality. In this article we will tell you exactly about this.

Warning: Performing the following procedures may result in electric shock and therefore requires experience working with electricity.

Turning on the power supply

The simplest check of a computer's power supply is simply turning it on. If the power supply does not turn on, there is nothing further to check: the unit must be sent for repair, or you will have to look for the cause of the failure yourself.

To check whether the power supply works, remove it from the computer and turn it on without connecting it to the motherboard. This excludes the influence of other components and tests the power supply alone.

To do this, you need to look at the motherboard power cable that comes from the power supply and find the green wire there. This wire must be connected to any of the black wires. This can be done using a paperclip or a small piece of wire (photo below).

You also need to connect some device to the power supply - for example, an optical disk drive or an old unneeded hard drive (photo below). This is done so the power supply is not switched on without a load, which can lead to its failure.

After the green wire is connected to a black wire and the load device is attached, the power supply can be turned on. Simply plug it into the mains and press the power switch on its housing (if it has one). If the cooler then starts spinning, the power supply is working and should be producing the required voltages.

Checking the power supply with a tester

Once the power supply has turned on, you can proceed to the next stage of the check. At this stage we verify the voltages it does or does not produce. Take the tester, set it to DC voltage measurement mode, and check the voltages between the orange and black wires, between red and black, and between yellow and black (photo below).

A fully functional power supply should produce the following voltages (tolerance ±5%):

  • 3.3 Volts for orange wire;
  • 5 Volts for the red wire;
  • 12 Volts for the yellow wire;
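
A hedged sketch of this tolerance check, using the wire colors and nominals from the list above (the helper function itself is purely illustrative):

```python
# Illustrative check of the +/-5% tolerance from the list above: for each
# wire color, the measured voltage must fall within 5% of nominal.
RAILS = {"orange": 3.3, "red": 5.0, "yellow": 12.0}
TOLERANCE = 0.05  # +/-5%, per the text

def rail_ok(color: str, measured: float) -> bool:
    nominal = RAILS[color]
    return abs(measured - nominal) <= nominal * TOLERANCE

print(rail_ok("yellow", 11.8))  # True: within 12 V +/- 0.6 V
print(rail_ok("red", 4.6))      # False: below the 4.75 V lower limit
```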

Visual check of the power supply

Another way to check the power supply is a visual inspection. To do this, completely disconnect the power supply and disassemble it (photo above).

After disassembling the power supply, examine its board and fan. Make sure there are no bulging capacitors on the board and that the fan can rotate freely.

Many PC owners encounter various errors and malfunctions in their computer, but cannot determine the cause of the problem. In this article, we will look at the main methods for diagnosing a computer, allowing you to independently identify and fix various problems.

Keep in mind that high-quality diagnostics of a computer can take the whole day; set aside a day in the morning specifically for this, and do not start everything in the late afternoon.

Be warned: I will go into detail, as if for beginners who have never disassembled a computer, in order to flag all the nuances that could cause problems.

1. Disassembling and cleaning the computer

When disassembling and cleaning your computer, do not rush, do everything carefully so as not to damage anything. Place components in a pre-prepared safe place.

It is not advisable to start diagnostics before cleaning, since you will not be able to identify the cause of the malfunction if it is caused by clogged contacts or the cooling system. Additionally, diagnostics may fail to complete due to repeated failures.

Unplug the system unit from the outlet at least 15 minutes before cleaning, so that the capacitors have time to discharge.

Perform disassembly in the following sequence:

  1. Disconnect all wires from the system unit.
  2. Remove both side covers.
  3. Disconnect the power connectors from the video card and remove it.
  4. Remove all memory sticks.
  5. Disconnect and remove cables from all drives.
  6. Unscrew and remove all drives.
  7. Disconnect all power supply cables.
  8. Unscrew and remove the power supply.

There is no need to remove the motherboard, processor cooler, or case fans; you can also leave the DVD drive if it works normally.

Carefully blow off the system unit and all components separately with a powerful stream of air from a vacuum cleaner without a dust bag.

Carefully remove the cover from the power supply and blow it out without touching the electrical parts and the board with your hands or metal parts, as there may be voltage in the capacitors!

If your vacuum cleaner cannot blow, only suck, the job will be a little harder. Clean it well beforehand so that it pulls as strongly as possible. When cleaning, a soft-bristled brush attachment is recommended.

You can also use a soft brush to remove stubborn dust.

Thoroughly clean the processor cooler heatsink, having first examined where and how much it is clogged with dust, as this is one of the common causes of processor overheating and PC crashes.

Also make sure that the cooler mount is not broken, the clamp is not opened and the radiator is securely pressed to the processor.

Be careful when cleaning fans, do not let them spin too much and do not bring the vacuum cleaner attachment close if it does not have a brush, so as not to knock off the blade.

After cleaning, do not rush to put everything back together, but move on to the next steps.

2. Checking the motherboard battery

The first thing I do after cleaning, so as not to forget later, is check the battery on the motherboard and reset the BIOS at the same time. To pull the battery out, press the latch with a flat screwdriver in the direction shown in the photo, and it will pop out on its own.

After this, you need to measure its voltage with a multimeter, optimally if it is within 2.5-3 V. The initial battery voltage is 3 V.

If the battery voltage is below 2.5 V, then it is advisable to change it. The voltage of 2 V is critically low and the PC is already starting to fail, which manifests itself in resetting the BIOS settings and stopping at the beginning of the PC boot with a prompt to press F1 or some other key to continue booting.
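
The thresholds from this paragraph, collected into a tiny illustrative helper (the verdict strings are my own, not from any tool):

```python
# The battery voltage thresholds described in the text: ~3 V when new,
# replace below 2.5 V, critically low at 2 V. Verdict strings are invented.
def battery_verdict(voltage: float) -> str:
    if voltage >= 2.5:
        return "OK"
    if voltage > 2.0:
        return "replace soon"
    return "critically low - replace now"

print(battery_verdict(2.9))   # OK
print(battery_verdict(2.3))   # replace soon
print(battery_verdict(1.9))   # critically low - replace now
```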

If you don’t have a multimeter, you can take the battery with you to the store and ask them to check it there, or just buy a replacement battery in advance, it’s standard and very inexpensive.

A clear sign of a dead battery is the date and time on the computer constantly disappearing.

The battery needs to be changed in a timely manner, but if you don’t have a replacement on hand right now, then simply do not disconnect the system unit from the power supply until you change the battery. In this case, the settings should not be lost, but problems may still arise, so do not delay.

While the battery is out is a good time to do a full BIOS reset. This clears not only the BIOS settings, which can also be done through the Setup menu, but also the so-called volatile CMOS memory, which stores the parameters of all devices (processor, memory, video card, etc.).

Errors in CMOS often cause the following problems:

  • computer won't turn on
  • turns on every other time
  • turns on and nothing happens
  • turns on and off by itself

I remind you that before resetting the BIOS, the system unit must be unplugged from the outlet, otherwise the CMOS will be powered by the power supply and nothing will work.

To reset the BIOS, use a screwdriver or other metal object to close the contacts in the battery connector for 10 seconds; this is usually enough to discharge the capacitors and completely clear the CMOS.

A sign that a reset has occurred will be an erroneous date and time, which will need to be set in the BIOS the next time you boot the computer.

4. Visual inspection of components

Carefully inspect all capacitors on the motherboard for swelling or leaks, especially in the processor socket area.

Sometimes capacitors swell down instead of up, causing them to tilt as if they were just slightly bent or unevenly soldered.

If any capacitor is swollen, send the motherboard for repair as soon as possible and ask to have all the capacitors resoldered, including those located next to the swollen ones.

Also inspect the capacitors and other elements of the power supply; there should be no swelling, drips, or signs of burning.

Inspect the drive contacts for oxidation.

They can be cleaned with an eraser; after that, be sure to replace the cable or power adapter that was used to connect the disk, since it is already damaged and most likely caused the oxidation.

In general, check all the cables and connectors so that they are clean, have shiny contacts, and are tightly connected to the drives and motherboard. All cables that do not meet these requirements must be replaced.

Check that the wires from the front panel of the case to the motherboard are connected correctly.

It is important that polarity be observed (plus to plus, minus to minus), since the front panel has a common ground and reversed polarity will cause a short circuit, which can make the computer behave erratically (turn on intermittently, shut down or reboot by itself).

Where the plus and minus are on the front-panel contacts is indicated on the board itself, in the printed manual, and in the electronic manual on the manufacturer's website. The connectors of the front-panel wires also indicate plus and minus: typically the white wire is negative, and the positive pin may be marked with a triangle on the plastic connector.

Many even experienced assemblers make a mistake here, so check.

5. Checking the power supply

If the computer did not turn on at all before cleaning, do not rush to reassemble it; the power supply must be checked first. In any case, checking the power supply won't hurt; it may well be the reason the computer is crashing.

Check the power supply fully assembled, without opening its housing, to avoid electric shock, short circuits, or accidentally damaging the fan.

To test the power supply, short the single green wire in the motherboard connector to any black one (a paperclip works). This signals to the power supply that it is connected to a motherboard; otherwise it will not turn on.

Then plug the power supply into the surge protector and press the button on it. Don't forget that the power supply itself may also have an on/off button.

A spinning fan is the sign that the power supply has turned on. If the fan does not spin, the unit may be faulty and need replacement.

In some silent power supplies, the fan may not start spinning immediately, but only under load; this is normal and can be checked while operating the PC.

Use a multimeter to measure the voltage between the contacts in the connectors for peripheral devices.

They should be approximately within the following ranges:

  • 12 V (yellow-black) – 11.7-12.5 V
  • 5 V (red-black) – 4.7-5.3 V
  • 3.3 V (orange-black) – 3.1-3.5 V

If any voltage is missing or falls far outside the specified limits, the power supply is faulty. It is best to replace it with a new one, but if the computer itself is inexpensive, repair is an option; power supplies can often be repaired easily and cheaply.
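
The tolerance check above is easy to mechanize; a sketch using the article's practical ranges (note that the formal ATX tolerance is ±5%, slightly different from these figures):

```python
# Acceptable ranges from the list above, keyed by nominal rail voltage.
PSU_RANGES = {
    12.0: (11.7, 12.5),  # yellow-black
    5.0: (4.7, 5.3),     # red-black
    3.3: (3.1, 3.5),     # orange-black
}

def rail_ok(nominal: float, measured: float) -> bool:
    """Return True if a measured rail voltage is inside the allowed range."""
    low, high = PSU_RANGES[nominal]
    return low <= measured <= high
```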

The power supply starting up and showing normal voltages is a good sign, but by itself it does not prove the unit is good, since failures can be caused by voltage sags or ripple under load. That is determined in the later stages of testing.

6. Checking power contacts

Be sure to check all electrical contacts from the outlet to the system unit. The socket must be modern (suitable for a European plug), reliable and not loose, with clean elastic contacts. The same requirements apply to the surge protector and the cable from the computer's power supply.

Contact must be reliable, plugs and connectors must not dangle, spark, or be oxidized. Pay close attention to this, since poor contact is often the cause of failure of the system unit, monitor and other peripheral devices.

If you have doubts about the quality of the outlet, surge protector, or the power cable of the system unit or monitor, replace them as quickly as possible to avoid damaging the computer. Do not delay or economize here, as repairing a PC or monitor will cost much more.

Also, poor contact is often the cause of PC malfunctions, which are accompanied by a sudden shutdown or reboot with subsequent failures on the hard drive and, as a result, disruption of the operating system.

Failures can also be caused by sags or surges in the 220 V mains, especially in detached housing and remote parts of the city. In that case, failures may occur even when the computer is idle. Try measuring the voltage at the outlet immediately after the computer spontaneously turns off or restarts, and watch the readings for a while. This way you can identify prolonged sags, from which a line-interactive UPS with a stabilizer will save you.
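
If you jot down a series of outlet readings, flagging sags is trivial; a sketch assuming a 220 V nominal and a hypothetical 10% threshold (not a standard figure):

```python
def find_sags(readings_v, nominal=220.0, tolerance=0.10):
    """Return the readings that drop more than `tolerance` below nominal.

    A sketch: real monitoring would timestamp each reading; the 10%
    threshold here is an assumption for illustration.
    """
    floor = nominal * (1.0 - tolerance)
    return [v for v in readings_v if v < floor]
```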

7. Assembling and turning on the computer

After cleaning and inspecting the PC, carefully reassemble it and carefully check that you have connected everything you need. If the computer refused to turn on before cleaning or turned on only once, then it is advisable to connect the components one by one. If there were no such problems, then skip the next section.

7.1. Step-by-step PC assembly

First, connect the main power connector and the CPU power connector to the motherboard with the processor installed. Do not insert the RAM or the video card, and do not connect any disks.

Turn on the PC; if the motherboard is fine, the processor cooler's fan should spin up. Also, if a speaker is connected to the motherboard, a beep code usually sounds, indicating the absence of RAM.

Memory installation

Turn off the computer with a short (or, if that doesn't work, long) press of the power button on the system unit and insert one stick of RAM into the colored slot closest to the processor. If all the slots are the same color, just use the one closest to the processor.

Make sure that the memory stick is inserted evenly until it stops and that the latches snap into place, otherwise it may be damaged when you turn on the PC.

If the computer starts with one stick of memory and there is a speaker, a code usually sounds indicating the absence of a video card (if there is no integrated graphics). If the beep code indicates problems with the RAM, try inserting another stick in the same slot. If the problem persists or there is no other stick, move the stick to another slot nearby. If there are no error beeps, everything is probably fine; continue on.

Turn off the computer and insert the second memory stick into the slot of the same color. If the motherboard has 4 slots of the same color, then follow the instructions for the motherboard so that the memory is in the slots recommended for dual-channel mode. Then turn it on again and check whether the PC turns on and what sound signals it makes.

If you have 3 or 4 memory sticks, simply insert them one at a time, turning the PC off and on each time. If the computer does not start with a particular stick or produces a memory error code, that stick is faulty. You can also check the motherboard's slots by moving a known-good stick between them.

Some motherboards have a red indicator that lights up on memory problems, and sometimes a seven-segment display with an error code that is explained in the motherboard manual.

If the computer starts, then further memory testing occurs at another stage.

Installing a video card

It's time to test the video card by inserting it into the top PCI-E x16 slot (or AGP for older PCs). Don't forget to connect additional power to the video card with the appropriate connectors.

With a video card installed, the computer should start normally, with no beeps or with a single beep indicating that the self-test completed successfully.

If the PC does not turn on or emits a video card error code, then it is most likely faulty. But don't rush to conclusions, sometimes you just need to connect a monitor and keyboard.

Connecting a monitor

Turn off the PC and connect the monitor to the video card (or to the motherboard if there is no video card). Make sure the connector is seated firmly at both the video card and the monitor; stiff connectors sometimes do not go in all the way, which is a common reason for a blank screen.

Turn on the monitor and make sure that the correct signal source is selected on it (the connector to which the PC is connected, if there are several of them).

Turn on the computer; a graphical splash screen and text messages from the motherboard should appear on the screen. Usually this is a prompt to enter the BIOS (for example with F1), or a message about a missing keyboard or boot device; that is normal.

If the computer turns on silently but nothing appears on the screen, something is most likely wrong with the video card or monitor. The video card can only be checked by moving it to a working computer. The monitor can be connected to another working PC or device (laptop, player, tuner, etc.). Don't forget to select the correct signal source in the monitor settings.

Connecting a keyboard and mouse

If everything is fine with the video card and monitor, then move on. Connect the keyboard first, then the mouse, one at a time, turning the PC off and on each time. If the computer freezes after connecting a keyboard or mouse, it means they need to be replaced - it happens!

Connecting drives

If the computer starts with the keyboard and mouse, begin connecting the hard disks one by one. First connect the secondary drive, the one without the operating system (if you have one).

Don't forget that in addition to connecting interface cable The connector from the power supply also needs to be connected to the motherboard and the disk.

Then turn on the computer; if it gets as far as the BIOS messages, everything is fine. If the PC does not turn on, freezes, or shuts itself off, the disk's controller is faulty; the disk needs to be replaced, or repaired if you need to save the data.

Turn off the computer and connect the DVD drive (if any) with its interface cable and power. If problems appear after this, the drive has a power-circuit fault and should be replaced; repairing it usually makes no sense.

Finally, connect the main system disk and prepare to enter the BIOS for initial setup before starting the operating system. Turn on the computer, and if everything is fine, move on to the next step.

When you turn on your computer for the first time, go to the BIOS. Usually, the Delete key is used for this, less often others (F1, F2, F10 or Esc), which is indicated in the prompts at the beginning of the boot.

On the first tab, set the date and time, and on the “Boot” tab, select your hard drive with the operating system as the first boot device.

On older motherboards with a classic BIOS it may look like this.

On more modern ones with a UEFI graphical shell it is a little different, but the meaning is the same.

To exit the BIOS and save the settings, press F10. Don't get distracted; watch the operating system boot all the way through to notice any problems.

After the PC has finished booting, check whether the fans of the processor cooler, power supply and video card are working, otherwise there is no point in further testing.

Some modern video cards may not turn on the fans until a certain temperature of the video chip is reached.

If one of the case fans is not working, it is not a big deal; just plan to replace it in the near future and don't let it distract you now.

8. Error analysis

This is where diagnostics properly begin; everything described above was just preparation, which by itself may resolve many of the problems, and without which there would be no point in starting the tests.

8.1. Enabling Memory Dumps

If blue screens of death (BSOD) appear while your computer is running, they can make troubleshooting much easier. A prerequisite is having memory dumps saved (or at least the error codes written down by hand).

To check or enable the dump recording function, press the "Win + R" key combination on your keyboard, enter "sysdm.cpl" in the line that appears, and press OK or Enter.

In the window that appears, go to the "Advanced" tab and, in the "Startup and Recovery" section, click the "Settings" button.

The "Write debugging information" field should be set to "Small memory dump".

If so, then you should already have dumps of previous errors in the "C:\Windows\Minidump" folder.

If this option was not enabled, no dumps were saved; enable it at least now so you can analyze any errors that recur.
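
For reference, the same GUI option maps to a registry value under CrashControl; a sketch of the equivalent .reg fragment (0 = none, 1 = complete, 2 = kernel, 3 = small memory dump), to be applied at your own risk:

```reg
Windows Registry Editor Version 5.00

; Equivalent of choosing "Small memory dump" in the Startup and Recovery dialog.
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\CrashControl]
"CrashDumpEnabled"=dword:00000003
```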

Memory dumps may fail to be written during serious crashes that cause an immediate reboot or shutdown of the PC. Also, some system-cleaning utilities and antivirus programs can delete them, so disable any system-cleaning features for the duration of the diagnostics.

If there are dumps in the specified folder, move on to analyzing them.

8.2. Memory dump analysis

To analyze memory dumps and identify what leads to the failures, there is a wonderful utility, "BlueScreenView", which you can download along with the other diagnostic utilities in the "" section.

This utility shows files in which a failure occurred. These files belong to the operating system, device drivers, or some program. Accordingly, based on the file’s ownership, you can determine which device or software caused the failure.

If you cannot boot the computer in normal mode, try booting into safe mode by holding the F8 key right after the motherboard's graphical splash screen or BIOS text messages disappear.

Go through the dumps and see which files appear most often as the culprits; they are highlighted in red. Right-click one of these files and open its Properties.

In our case, it is easy to determine that the file belongs to the nVidia video card driver and most of the errors were caused by it.

In addition, some dumps contained the file "dxgkrnl.sys", whose very name shows it belongs to DirectX, which is directly related to 3D graphics. So the video card is the most likely culprit, and it should be subjected to thorough testing, which we will also cover.

In the same way, you can determine that the fault is caused by a sound card, network card, hard drive, or some program that penetrates deeply into the system, such as an antivirus. For example, if a disk fails, the controller driver will crash.

If you cannot determine which driver or program a particular file belongs to, then look for this information on the Internet by the file name.
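
The by-eye matching described above can be sketched as a small lookup table; the prefixes and labels below are illustrative examples of my own, not an exhaustive reference:

```python
# Illustrative mapping from driver-file name prefixes to likely subsystems.
KNOWN_CULPRITS = {
    "nv": "nVidia video driver",
    "ati": "AMD/ATI video driver",
    "dxgkrnl": "DirectX graphics kernel (often video-related)",
    "ntfs": "file system / disk",
    "tcpip": "network stack",
}

def guess_culprit(filename: str) -> str:
    """Guess which subsystem a crashing driver file belongs to."""
    name = filename.lower()
    for prefix, subsystem in KNOWN_CULPRITS.items():
        if name.startswith(prefix):
            return subsystem
    return "unknown - search the file name on the Internet"
```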

If failures occur in the sound card driver, the card has most likely failed. If it is integrated, you can disable it in the BIOS and install a discrete one. The same goes for the network card. Network failures, however, can also have other causes; they are often resolved by updating the network card driver or by connecting to the Internet through a router.

In any case, do not make hasty conclusions until the diagnostics are completely completed; maybe your Windows is simply faulty or a virus has entered, which can be solved by reinstalling the system.

Also in the BlueScreenView utility you can see the error codes and inscriptions that were on blue screen. To do this, go to the “Options” menu and select the “Blue Screen in XP Style” view or press the “F8” key.

After that, switching between errors, you will see how they looked on the blue screen.

You can also search the Internet for the possible cause by the error code, but going by which files the crashes occur in is easier and more reliable. To return to the previous view, press F6.

If the errors keep involving different files and different error codes, that is a sign of possible RAM problems, which can crash anything at all. We will diagnose the RAM first.

9. Testing RAM

Even if you think the problem is not in the RAM, check it first anyway. Sometimes a machine has several faults at once, and if the RAM is failing, diagnosing everything else is quite difficult because of the frequent crashes.

Running the memory test from a boot disk is a must for accurate results: testing from within Windows on a faulty PC is difficult.

In addition, "Hiren's BootCD" contains several alternative memory tests in case "Memtest 86+" does not start, plus many more useful utilities for testing hard drives, video memory, and so on.

You can download the "Hiren's BootCD" image in the same place as everything else, in the "" section. If you don't know how to burn such an image to a CD or DVD properly, refer to the article where we covered it; everything here is done exactly the same way.

Set the BIOS to boot from the DVD drive or use the Boot Menu as described in, boot from Hiren's BootCD and run Memtest 86+.

Testing can last from 30 to 60 minutes, depending on the speed and amount of RAM. At least one full pass must complete, after which the test starts a second round. If the memory is fine, then after the first pass (Pass 1) there should be no errors (Errors 0).

After this, testing can be interrupted using the “Esc” key and the computer will reboot.

If there were errors, you will have to test each stick separately, removing all the others, to determine which one is faulty.

If a bad stick is still under warranty, photograph the screen with a camera or smartphone and show it to the warranty department of the store or service center (although in most cases this is not necessary).

In any case, it is not advisable to keep using a PC with faulty memory or to continue diagnostics before replacing it, since all sorts of baffling errors will keep appearing.

10. Preparation for component tests

Everything else, apart from the RAM, is tested from within Windows. Therefore, to exclude the operating system's influence on the test results, it is advisable to do a clean installation of Windows, a temporary one if necessary.

If that is hard for you or you have no time, you can try testing on the old system. But if the failures stem from problems in the operating system, a driver, a program, a virus, or the antivirus (i.e. in software), hardware testing will not reveal it, and you may go down the wrong path. On a clean system you can see exactly how the computer behaves and completely rule out the software side.

Personally, I always do everything as expected from start to finish as described in this article. Yes, it takes a whole day, but if you ignore my advice, you can struggle for weeks without identifying the cause of the problem.

The quickest and easiest thing to test is the processor, unless of course there are obvious signs that the problem is in the video card, which we will talk about below.

If your computer starts to slow down some time after turning it on, freezes when watching videos or playing games, suddenly reboots or turns off under load, then there is a possibility of the processor overheating. In fact, this is one of the most common causes of such problems.

At the cleaning and visual inspection stage, you should have made sure that the processor cooler is not clogged with dust, its fan spins, and the heatsink is pressed securely against the processor. I also hope you did not remove the cooler during cleaning, since that requires replacing the thermal paste, which I will discuss later.

We will use "CPU-Z" for a stress test that heats up the processor, and "HWiNFO" to monitor its temperature. Although for temperature monitoring it is better to use the motherboard's own utility; it is more accurate. ASUS, for example, has "PC Probe".

To begin, it is a good idea to find the maximum allowable case temperature of your processor (T CASE). For example, for my Core i7-6700K it is 64 °C.

You can find it on the manufacturer's website via an Internet search. This is the critical temperature at the heat spreader (under the processor's lid), the maximum allowed by the manufacturer. Do not confuse it with the core temperature, which is usually higher and is also shown in some utilities. We will therefore go by the processor's overall temperature as reported by the motherboard, not by the core sensors.

In practice, for most older processors the critical temperature at which failures begin is 60 °C. The most modern processors can run at 70 °C, which is already critical for them. The actual stable temperature for your processor can be found in tests on the Internet.

So, we launch both utilities – “CPU-Z” and “HWiNFO”, find the processor temperature sensor (CPU) in the motherboard indicators, run the test in “CPU-Z” with the “Stress CPU” button and observe the temperature.

If after 10-15 minutes of the test the temperature stays 2-3 degrees below the critical value for your processor, there is nothing to worry about. But if you were seeing failures under high load, it is better to run this test for 30-60 minutes. If the PC freezes or reboots during the test, you should think about improving the cooling.
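
The pass/fail logic above, with its 2-3 degree margin, can be written down explicitly; a sketch (the function name and default margin are my own):

```python
def cpu_temp_verdict(measured_c: float, t_case_c: float, margin_c: float = 3.0) -> str:
    """Compare a stress-test temperature against the CPU's T_CASE limit.

    T_CASE is model-specific (e.g. 64 C for the Core i7-6700K mentioned
    above); the margin mirrors the 2-3 degree headroom suggested in the text.
    """
    if measured_c >= t_case_c:
        return "overheating - improve cooling"
    if measured_c > t_case_c - margin_c:
        return "borderline - run a longer test, consider better cooling"
    return "OK"
```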

Please note that a lot also depends on the temperature in the room; it is possible that in cooler conditions the problem will not appear, but in hotter conditions it will immediately make itself felt. So you always need cooling with a reserve.

If your CPU is overheating, check whether your cooler is suited to it. If not, it must be replaced; no tricks will help here. If the cooler is powerful enough but falls just a little short, replace the thermal paste with a more effective one; as a bonus, the cooler may end up seated better when reinstalled.

Among inexpensive but very good thermal pastes, I can recommend Arctic MX-4.

Apply it in a thin layer, having first removed the old paste with a dry cloth and then with cotton wool soaked in alcohol.

Replacing thermal paste will give you a gain of 3-5 °C; if this is not enough, then simply install additional case fans, at least the most inexpensive ones.

14. Disk testing

This is the longest step after the RAM test, so I prefer to leave it for last. To begin, you can run a speed test of all the drives with the "HDTune" utility, which you can download in the "" section. This sometimes reveals freezes when the disk is accessed, which indicates problems with it.

Look at the SMART parameters, where the disk's "health" is displayed: there should be no red lines, and the overall disk status should be "OK".

You can download a list of the main SMART parameters and what they are responsible for in the “” section.

A full surface test can be performed with the same utility under Windows. The process may take 2-4 hours depending on the size and speed of the disk (roughly an hour for every 500 GB). When the test finishes, there should not be a single bad block, which are highlighted in red.

A single such block is an unequivocal death sentence for the disk and a 100% warranty case. Save your data quickly and replace the disk; just don't tell the service center that you dropped your laptop.

You can surface-check both regular hard drives (HDD) and solid-state drives (SSD). The latter have no real surface, but if an HDD or SSD freezes at the same point every time during the test, the electronics are most likely faulty; it needs to be replaced or repaired (the latter is unlikely to be worthwhile).

If you are unable to diagnose a disk under Windows, the computer crashes or freezes, then try doing this using the MHDD utility from the Hiren’s BootCD boot disk.

Problems with the controller (electronics) and the disk surface lead to error windows in the operating system, short-term and complete freezes of the computer. Typically these are messages about the inability to read a particular file and memory access errors.

Such errors can be mistaken for RAM problems, while the disk may well be to blame. Before panicking, try updating the disk controller driver or, conversely, rolling back to the original Windows driver, as described in .

15. Testing the optical drive

To check an optical drive, it is usually enough to simply burn a verification disc. For example, using the “Astroburn” program, it is in the “” section.

After burning a disc with a message about successful verification, try copying its entire contents on another computer. If the disk is readable and the drive reads other disks (except for hard-to-read ones), then everything is fine.

Among the drive problems I have encountered are electronics faults that freeze the computer completely or prevent it from turning on, failures of the tray mechanism, a dirty laser-head lens, and head damage from improper cleaning. In most cases everything is solved by replacing the drive; fortunately they are inexpensive, and even drives that have sat unused for several years die from dust.

16. Checking the case

The case sometimes breaks too: a button sticks, the front-panel wiring comes loose, or a USB connector shorts out. All this can lead to unpredictable PC behavior and is solved by careful inspection, cleaning, a tester, a soldering iron, and other simple means.

The main thing is that nothing is shorting; a dead indicator light or connector can be a symptom of this. If in doubt, disconnect all the wires from the case's front panel and try using the computer that way for a while.

17. Checking the motherboard

Often, checking the motherboard comes down to checking all the other components. If every component individually works normally and passes its tests, and the operating system has been reinstalled, yet the computer still crashes, the problem may be the motherboard. Here I cannot help you; only an experienced electronics engineer can diagnose it and identify, say, a chipset or processor-socket fault.

The exception is a failed sound or network chip, which can be worked around by disabling it in the BIOS and installing a separate expansion card. The capacitors on a motherboard can be resoldered, but replacing, say, the northbridge is usually not worthwhile, since it is expensive and there are no guarantees; it is better to buy a new motherboard straight away.

18. If all else fails

Of course, it is always better to find the problem yourself and decide on the best solution, since some unscrupulous repairmen will try to pull the wool over your eyes and fleece you.

But it may happen that you follow all the recommendations and still cannot identify the problem; it has happened to me. In that case the fault is most often in the motherboard or the power supply: there may be a microcrack in the PCB that makes itself felt only occasionally.

In that case there is nothing for it: take the whole system unit to a reasonably reputable computer company. Don't bring in components one at a time if you are not sure which is at fault, or the issue may never be resolved. Let them sort it out, especially if the computer is still under warranty.

Computer store technicians usually have it easy: they have plenty of spare components on hand, so they just swap something and see whether the problem goes away, quickly and simply locating the fault. They also have enough time to run the tests.
