What is the difference between server RAM and regular memory? Is there really a difference? How to find out the type of RAM.

More and more people run into RAM that is incompatible with their computer: they install the memory, but it does not work and the computer will not turn on. Many users simply do not know that there are several types of memory, or which type suits their computer and which does not. In this guide, I will briefly describe, from personal experience, the types of RAM and where each is used.

Don't know what the U in a RAM marking means, or the E, the R, or the F? These letters indicate the memory type: U (Unbuffered), E (ECC, error-correcting memory), R (Registered memory), F (FB-DIMM, Fully Buffered DIMM). Now let's look at all these types in more detail.
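As a quick illustration, the suffix convention just described can be captured in a small lookup table. This is a hypothetical helper, not any standard tool; markings with no trailing letter are treated as plain unbuffered, as in the examples below.

```python
# Decode the trailing letter of a DDR marking such as "DDR3 PC-10600R".
MEMORY_TYPES = {
    "U": "Unbuffered (regular desktop DIMM)",
    "E": "Unbuffered with ECC",
    "R": "Registered (server DIMM, always with ECC)",
    "F": "FB-DIMM (Fully Buffered, server only)",
    "S": "SO-DIMM (laptop module)",
}

def memory_type(marking: str) -> str:
    """Return the memory type implied by the marking's last letter.

    A marking that ends in a digit (no suffix letter) is treated
    as ordinary unbuffered memory, matching the convention above.
    """
    suffix = marking.strip()[-1].upper()
    return MEMORY_TYPES.get(suffix, MEMORY_TYPES["U"])
```

For example, `memory_type("DDR2 PC-6400E")` yields "Unbuffered with ECC", while `memory_type("DDR3 PC-10600")` falls back to the unbuffered default.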

Types of memory used in computers:

1. Unbuffered memory. Regular memory for ordinary desktop computers, also called UDIMM. A memory stick usually has 2, 4, 8, or 16 memory chips on one or both sides. The marking of such memory usually ends with the letter U (Unbuffered) or has no letter at all, for example DDR2 PC-6400, DDR2 PC-6400U, DDR3 PC-8500U, or DDR3 PC-10600. Laptop memory markings end with the letter S, apparently an abbreviation of SO-DIMM, for example DDR2 PC-6400S. A photo of unbuffered memory can be seen below.

2. Error-correcting memory (ECC memory). Regular unbuffered memory with error correction. Such memory is usually installed in brand-name computers sold in Europe (NOT servers); its advantage is greater reliability in operation, since most memory errors that do occur can be corrected on the fly without losing data. Typically, each stick of such memory has 9 or 18 memory chips: one or two chips more than usual. Most regular (non-server) computers and motherboards can handle ECC memory. The marking usually ends with the letter E (ECC), for example DDR2 PC-4200E, DDR2 PC-6400E, DDR3 PC-8500E, or DDR3 PC-10600E. A photo of unbuffered ECC memory can be seen below.
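The chip counts above follow directly from the bus width: a non-ECC rank presents 64 data bits, and ECC adds another 8 check bits, so with common x8 chips you get 8 versus 9 chips per rank (with x4 chips, 16 versus 18). A minimal sketch of that arithmetic:

```python
def chips_per_rank(chip_width_bits: int, ecc: bool) -> int:
    """Number of DRAM chips in one rank of a DIMM.

    A rank presents a 64-bit data bus; ECC widens it to 72 bits by
    adding 8 check bits (hence one extra chip per 8 with x8 parts).
    """
    bus_width = 72 if ecc else 64
    if bus_width % chip_width_bits:
        raise ValueError("bus width must divide evenly among chips")
    return bus_width // chip_width_bits

# x8 chips: 8 without ECC, 9 with; x4 chips: 16 and 18 respectively,
# which matches the 9- and 18-chip sticks described in the text.
```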

The difference between memory with ECC and memory without ECC can be seen in the photo:

Although most boards on sale support this memory, it is better to check compatibility with your specific board and processor before purchasing. From personal experience, 90-95% of motherboards and processors can handle ECC memory. Among those that cannot: boards based on the Intel G31, G33, G41, G43, and 865PE chipsets. All Intel processors starting from the first-generation Core work with ECC memory, regardless of the motherboard. With AMD processors, almost all motherboards work with ECC memory, apart from rare cases of individual incompatibility.

3. Registered memory. A SERVER memory type. It is almost always produced with ECC (error correction) and with a "buffer" chip. The buffer chip increases the maximum number of memory sticks that can be attached to the bus without overloading it; we will not delve deeper into the theory here. Recently, the terms "buffered" and "registered" have become almost interchangeable; to simplify: registered memory = buffered memory. This memory works ONLY on server motherboards designed to handle memory with a buffer chip.

Typically, registered ECC sticks have 9, 18, or 36 memory chips plus another 1, 2, or 4 "buffer" chips (these usually sit in the center and differ in size from the memory chips). The marking of such memory usually ends with the letter R (Registered), for example DDR2 PC-4200R, DDR2 PC-6400R, DDR3 PC-8500R, or DDR3 PC-10600R. The marking of registered (server, buffered) memory also usually contains the abbreviation REG. A photo of buffered (registered) memory with ECC can be seen below.

Remember! Registered memory with ECC is guaranteed NOT to work on regular motherboards. It works only in servers!

4. FB-DIMM (Fully Buffered DIMM) is a memory standard designed to improve the reliability, speed, and density of the memory subsystem. In traditional memory standards, the data lines run from the memory controller directly to the data lines of each DRAM module (sometimes through buffer registers, one register chip per 1-2 memory chips). As the channel width or data transfer rate increases, signal quality on the bus deteriorates and bus routing becomes more complicated, which limits memory speed and density. FB-DIMM takes a different approach to these problems. It is a further development of the registered-module idea: the Advanced Memory Buffer buffers not only the address signals but also the data, and it uses a serial bus to the memory controller instead of a parallel one.

An FB-DIMM has 240 pins and is the same length as other DDR DIMMs, but the keying notches differ. It is suitable for server platforms only.

FB-DIMM specifications, like other memory standards, are published by JEDEC.

Intel used FB-DIMM memory in systems with Xeon 5000 and 5100 series processors and later (2006-2008). FB-DIMM memory is supported by the 5000, 5100, 5400, and 7300 server chipsets, and only with Xeon processors based on the Core microarchitecture (socket LGA771).

In September 2006, AMD also abandoned plans to use FB-DIMM memory.

If you find it difficult to choose memory for your computer, check with the seller, giving them your motherboard and processor models.

P.S.: Recently, another cheap and curious type of memory has appeared; I call it the "Chinese counterfeit". For those who haven't encountered it yet, here is how to spot it. You can always recognize this memory by its contacts: they are usually oxidized, and even if cleaned, within a month or two they oxidize again, turn cloudy and dirty, and the memory may malfunction or stop working altogether. There isn't even a hint of gold on the contacts. Another difference from original memory is that it works only on certain motherboards or processors, for example ONLY on AMD, or strictly on a few chipsets, and the list of those chipsets is very short. What the secret of this "memory" is, I still do not know, but many people buy it: after all, it is 40-50% cheaper than comparable memory. Most surprisingly, a new "Chinese counterfeit" stick usually costs less than original used memory :) I won't even discuss its reliability and longevity; everything is clear there.

27.06.2018

Blog of Dmitry Vassiyarov.

What is the difference between server RAM and regular memory? Is there really a difference?

Good day, my dear readers; I am glad to be in touch with you again. Today's topic can hardly be called popular, since it does not concern ordinary computers. And yet the question of how server RAM differs from regular RAM increasingly worries ordinary users.

I would attribute this to unsuccessful upgrade attempts based on the logical assumption that hardware built for equipment operating 24/7 must be better made and more reliable.

But in fact, server hardware consists of highly specialized components. So let's figure this out.

A server differs significantly from a regular work or gaming computer in the responsibility of the tasks it handles, so the requirements for its hardware are fundamentally different.

Server equipment that operates 24 hours a day must be not just reliable but fault-tolerant. In server DDR memory this is achieved in several ways.

Hardware support

In particular, servers use registered memory, which differs from regular memory by an additional chip that acts as a buffer. The chip is smaller than the memory chips and sits in the center of the stick, so such a module is easy to recognize. Typically, one buffer chip is installed for every 8 memory chips. What is it for?

The fact is that on modern motherboards the RAM controller is an integral part of the processor. When accessing several memory modules simultaneously, it is subjected to serious current loads (caused by changes in the electrical capacitance of the chips during write-read cycles), so it needs reliable protection. This is exactly what the buffer on a registered server module provides. Without it, a server processor could easily fail under intensive load.

Software method

While information is being read from the memory chips, an error may occur due to external factors. Don't be surprised: neutrons from cosmic rays, or powerful electromagnetic radiation, can easily flip the state of a memory bit.

To minimize the consequences of such events, the ECC (Error-Correcting Code) function is used, which also appears in some individual modifications of conventional memory. Its algorithm can independently detect and correct errors using mathematical processing of the digital code. Need I say how important this is for stable server operation?
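To make the principle concrete, here is a toy sketch of a Hamming(7,4) code, the classic scheme behind single-error correction. Real ECC DIMMs use a wider SECDED code (64 data bits plus 8 check bits), but the idea is the same: parity bits place any single flipped bit at a unique "syndrome" position, so it can be flipped back.

```python
def hamming74_encode(d):
    """Encode 4 data bits into a 7-bit Hamming codeword.
    Parity bits occupy positions 1, 2 and 4 (1-based)."""
    p1 = d[0] ^ d[1] ^ d[3]
    p2 = d[0] ^ d[2] ^ d[3]
    p3 = d[1] ^ d[2] ^ d[3]
    return [p1, p2, d[0], p3, d[1], d[2], d[3]]

def hamming74_decode(c):
    """Fix at most one flipped bit, then return the 4 data bits.
    The syndrome is the 1-based position of the bad bit (0 = no error)."""
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 + 2 * s2 + 4 * s3
    if syndrome:                 # non-zero syndrome: flip the faulty bit back
        c[syndrome - 1] ^= 1
    return [c[2], c[4], c[5], c[6]]

# A cosmic-ray-style single-bit flip is corrected transparently:
word = [1, 0, 1, 1]
damaged = hamming74_encode(word)
damaged[4] ^= 1                  # flip one bit in transit
assert hamming74_decode(damaged) == word
```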

I would like to draw readers' attention right away to the marking of server memory. You may know that modules with ECC are designated by the letter "E". But that alone does not make a module a server one.

Remember: server memory is registered memory, and ECC is its mandatory companion. A server memory stick is marked with "R" or "REG", which stands for "Registered". (The fully buffered FB-DIMM is a separate, also server-only, type of RAM.)

It is also worth adding that the fault tolerance of server RAM is ensured not only by the methods above. In addition, it undergoes special testing that simulates long-term operation (heating to 100 °C) under intense load. After that, the memory modules are checked for compatibility with various server software and hardware platforms. This makes it possible to identify defective modules quickly. If there are more of them than allowed (2 sticks per 10,000 pieces), the entire batch is rejected.

Differences That Matter

As you can see, the reliability of server RAM is simply amazing, and it is quite natural that some users want to use it in a regular computer. But, my dear friends, there are several nuances here, and I want you to know about them:

  • Exchanging data through the buffer costs the processor extra clock cycles, and the ECC algorithm also takes additional processing time. As a result, server memory runs noticeably slower;

  • You understand perfectly well that the extra chips and the high quality and reliability requirements significantly affect the final cost of the product. The price of server memory is therefore much higher than that of regular memory;
  • And finally, the main question for those who want to know: will registered memory work on a regular motherboard? It will, but not on every one. Some motherboards, both server and gaming ones, are designed to work with a memory buffer; this technology allows the amount of RAM to be increased significantly without putting extra load on the processor. So always check your motherboard's technical specifications, and perhaps you will be able to install reliable server memory in your PC.

You now know how server RAM differs from regular memory. The differences are not numerous, but they are significant. With this I conclude my story and say goodbye to you. I hope to please you with new interesting articles soon.

See you and all the best to you!

Due to its specific nature, the server topic is a relatively rare guest on the covers of IT periodicals. If you look at the quantitative ratio of "server" to "desktop" information on the Internet, you get roughly 1:20. As for even subtler matters, such as organizing a storage system for enterprise-scale servers, such knowledge is very hard to come by. Today we will touch on perhaps one of the quietest, yet still present, areas of the professional components market: server memory.

Since the phrase "server memory" is itself a rather loose term that covers too many different things, we decided to combine a "general education" part with an assessment of the economic viability of this market segment.

What is server memory? To make the conversation more concrete, it is worth settling the terminology first and explaining what this publication means by the phrase. So, server memory (hereinafter referred to as SP) means memory modules with parity control and error correction, as well as additional functionality for greater stability (registered, buffered memory), built to standards different from those used in desktop products and certified for use in servers from A-brands. We will not pretend this definition belongs in a textbook, but it seems to us to capture the essence.

The second question worth discussing is how server memory differs from desktop memory, beyond the extra bits, registers, and buffers mentioned above.

Differences between server and desktop memory

Manufacturers

The number of brands in this market is much more modest than in the desktop field. To avoid confusion about who makes what and for whom, let us divide the companies producing SP into several subcategories.

A-brands: memory created for specific server manufacturers, with their unique markings applied to the chips and modules. All first-tier vendors (HP, Dell, etc.) usually take this approach to increase overall profits. In this case, it is probably not worth dwelling on who specifically produces a given module and/or chip (it could be any company that has won a tender or holds direct contracts). The most important thing to know is that modules sold, for example, under the Dell brand are guaranteed compatible with Dell servers and will work in them.

Chip A-brands (manufacturers of memory chips and of modules based on them): companies producing both the chips and the memory modules under their own brand. This category includes Micron, Samsung, Hynix, and Qimonda (formerly Infineon). By and large, they regulate the memory market as a whole, since together they produce about 70% of DRAM chips. All of these companies offer lines of server modules of every standard. Naturally, being tied directly to production lets them offer more competitive prices than companies focused exclusively on server brands, but there is another side to the coin: difficulties with certification. For example, a change of chip series or generation changes the markings (sometimes only of the chips, sometimes of the modules), which forces server manufacturers to run new tests to determine whether the new memory is suitable for their systems. There are situations when two essentially different products (both original) are on the market under identical labels, which causes plenty of headaches for server builders.

Modular A-brands (manufacturers of modules based on third-party chips): the most common category, including such well-known names as Kingston, Corsair, Transcend, and Apacer. In fact, such companies are often called "testers" where SP is concerned, because their engineers spend most of their time checking modules for compatibility with the server platforms available on the market. The result is in many respects similar to SP from the server A-brands; moreover, such companies have far fewer labeling problems. The end consumer or server builder can therefore easily find out that a module with a given marking is suitable for a given server, regardless of whose chips it is built on.

All three approaches have their pros and cons, but we should not expect any changes in the lineup or positioning of brands in the SP market in the near future.

Fault tolerance

Where SP is concerned, the rather desktop term "reliability" is usually replaced by "fault tolerance", which reflects the meaning more accurately. Since such equipment must run non-stop 99.9% of the time from the moment it is put into operation, its manufacturing and testing use much more stringent approaches than for desktop products.

For example, the "artificial aging" technology reveals manufacturing defects within two days: during testing, server modules are heated to 100 °C, which quickly brings them to a condition corresponding to two months of operation. Next in the test suite, which heavily loads the memory subsystem, comes a check of module compatibility with various server platforms, taking about another day. As a result, SP enters the sales channel with a defect rate of about 0.02% (one module per five thousand).

In this section it is worth saying a few words about the concept of a "successful model", which again comes from the desktop world. It is well known that there are "successful" video cards that work equally well on any platform, "successful" hard drives compatible with most controllers, and "successful" memory modules that run stably with almost any motherboard. With SP, everything is exactly the opposite. The main criterion guiding the manufacturer is the absence of "unsuccessful" models, because a well-made SP stick must work everywhere, always, and without reservations. So if a product from brand X does not work with a server from brand Y, that SP will most likely not be released to production until the cause is found. Naturally, no one can afford a similar approach for mass-market computer components.

Selection criteria

When choosing memory while assembling or upgrading a computer, the end buyer is usually guided by the following criteria: brand (which also implies the warranty), price, and test results. In other words, what matters is who the manufacturer is, what timings and frequencies can be achieved, and how much it all costs. One step up, for a system-assembly company, price comes first: because of fierce market competition, such a company usually picks the cheapest of the problem-free options.

With SP, the situation is different: integrators are, as a rule, guided primarily by their experience operating particular modules in particular platforms, and they focus on one (at most two) suppliers able to strictly meet conditions such as stable supply and, should technical issues arise, their quick resolution with the manufacturer. Price is, of course, an important component, but far from the determining one. If SP from one brand turns out to be 20% more expensive than another but provides better compatibility, the choice will most likely fall on it.

What determines the demand for memory: opinions of server manufacturers

Evgeniy Bobruiko

product manager for server hardware at Everest

In accordance with company policy, we try to offer the optimal solution for specific applications, always with a small margin for the future. If the client has “outgrown” his configuration, then in many cases it is more expedient for him to purchase a more modern and powerful server, and transfer the old one to other, less responsible and labor-intensive tasks.

Server modernization services are in very low demand in our company (about 2% of systems undergo it). After all, a prematurely arising problem of improving the memory subsystem indicates either a significant increase in the client’s streaming tasks (for example, due to an increase in staff), or an error by the integrator who proposed an ineffective solution.

As for memory types, FB-DIMM has excellent performance for streaming data, which is in great demand when working with databases (especially OLAP), and it also allows a significant amount of RAM to be installed in a server. On the other hand, it has high latency, which is not always good. DDR2 has lower latency but also lower bandwidth. It is precisely to combat high latency that Intel has a good "weapon": Woodcrest's large cache, shared between the two cores.

In general, it seems to me that FB-DIMM has very good prospects, since DDR2, due to the parallel method of data transfer, is approaching the “ceiling” as the frequency increases. However, I do not rule out that at first Intel may make a budget version of the system logic on DDR2.

Andrey Tishchenko

manager at the Entry company

In my opinion, upgrading servers is a thankless task, since the entire component base becomes obsolete and many problems cannot be solved by simply adding memory. Most of our customers are growing companies that are constantly updating their server fleet; they adapt older models for less resource-intensive applications without upgrading. Therefore, requests to install additional memory are rare for us; such sales do not exceed 2-3% of the total supply.

I must say that there is no competition between DDR2 and FB-DIMM as such - there is a planned transition of the two main chipmakers to a new architecture. The popularity of a particular memory type depends on who weathers this period more easily. Intel has taken a big step forward by pushing the limits of the shared processor bus and reducing the power consumption of new CPUs. However, betting on FB-DIMM may work against the company in the near future. Compared to DDR2, its characteristics are worse: latency, power consumption, and heat dissipation are higher, and the modules cost about 10% more.

AMD, with the transition to Socket 1207 (Rev F) and the adaptation of the memory controller on the core for DDR2, retains the ability to work with memory at the core frequency, use the most popular standard, and the scalability of multiprocessor multicore platforms. In the future, the company plans to switch to FB-DIMM memory, but sees its trump cards elsewhere. Figuratively speaking, if Intel constantly stimulates developers of new types of memory and maintains a high rate of evolution of DDR-DDR2-FB-DIMMs, then AMD is more focused on meeting the price/performance/power consumption criteria.

Igor Przhegarlinsky

commercial director of Onyx company

If previously most servers had a 32-bit operating system installed, which imposed restrictions on the amount of memory up to 4 GB, then with the transition to 64-bit OS the limit will increase to 32 GB (for 1-4 processor servers based on Windows Server 2003 Standard Edition ).

With the release of new processors from Intel (Dempsey and Woodcrest) and AMD (Socket F), FB-DIMM and DDR2-667 ECC Reg memory types are entering the market. Both chipmakers have long since abandoned support for a single standard, and competition between memory types will eventually come down to competition between chip manufacturers.

The bulk of server memory today is used in new systems - the share of modules sold for upgrades does not exceed 5%.

Performance

For a very long time, SP lagged significantly behind desktop memory in speed. Suffice it to recall that the DDR400 standard, which became widespread in desktop systems and laptops, came to servers only with the advent of the Opteron. Even modern chipsets for powerful workstations long continued to use registered buffered DDR266.

The next leap in professional systems based on Intel was the use of DDR2-400 - this despite the fact that desktop DDR2 started at 533 MHz. For reference, Itanium 2-based servers generally used only DDR200 with 128-bit access to provide the necessary bandwidth. The reasons why platform creators chose lower frequencies are clear: to increase reliability and to reduce the load on the memory controller integrated into the chipset.

Today, server technology is also required to perform at maximum speed, which forces the use of the most modern standards that are not inferior to, and sometimes even superior to, those existing for desktop systems. Pay attention to the specifications of the latest server platforms - the total performance of the northbridge buses for Intel chipsets and integrated controllers for Opteron can exceed the fantastic 30 GBps mark. And most importantly, at such speeds it is necessary to ensure 24/7 operation when executing very resource-intensive applications.

Perhaps the situation with the memory subsystem is even more complicated. Today's standards are FB-DIMM 667 MHz for Intel and registered buffered DDR2-667 with double parity for the Opteron. The amount of memory needed for comfortable work keeps growing, and, as is known, the probability of an error occurring in RAM rises quickly as its volume increases. As a result, manufacturers of SP modules and, above all, of SP chips have to find ways to keep reliability no lower than with the previously used DDR200/266 standards, while volumes now reach 32 GB, frequencies climb to 667 MHz, and the cost difference must stay within 40% of desktop modules.
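The relationship between capacity and error probability is easy to sketch. Assuming independent bit errors at some fixed per-bit probability over a given period (the rate below is purely illustrative, not a measured figure), the chance of at least one error grows rapidly with module volume:

```python
def p_any_error(bits: int, p_bit: float) -> float:
    """Probability of at least one bit error among `bits` independent
    bits, each failing with probability `p_bit` over some fixed period."""
    return 1.0 - (1.0 - p_bit) ** bits

GIB_BITS = 8 * 2**30   # bits in one GiB
P_BIT = 1e-12          # illustrative per-bit error probability, not a real FIT rate

# Larger capacity -> higher chance that at least one bit flips somewhere.
for gib in (4, 8, 16, 32):
    print(f"{gib} GiB: {p_any_error(gib * GIB_BITS, P_BIT):.3f}")
```

The exact numbers depend entirely on the assumed per-bit rate; the point is only the monotonic growth with volume, which is what drives ECC adoption as capacities rise.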

Price

Although the SP market is not subject to the strong fluctuations often observed in the desktop segment, it is nevertheless quite dynamic, and competitive methods such as dumping, marketing promotions, and OEM supplies at reduced prices are all in use. Today, SP modules cost on average 20-50% more than desktop modules of similar capacity. On the one hand, that is not small; on the other, recall the more complex technical implementation (additional chips for parity control and buffering), the battery of tests after the product leaves the production line, certification by the server manufacturer and, of course, the lifetime warranty. As a result, a vendor's earnings on SP modules are hardly much greater than on desktop sticks. And the modern market demands constant price reductions: FB-DIMMs began getting cheaper literally from the day they appeared on the open market. If at first FB-DIMM cost roughly twice as much as the DDR2 and DDR standards, the difference is now much more modest, at about 30-40%.

What determines the demand for memory: opinions of suppliers

Elena Krivoshienko

Head of Sales Department of Memory Modules and Flash Products at Kyiv-TEK

According to our data, the volume of the Ukrainian server memory segment accounts for 10-13% of the memory market as a whole, which in financial terms equals approximately $5-7 million per year.

Over the past months of 2006, the sales structure of server modules has changed somewhat. DDR and DDR2 are sold in approximately the same volumes, and recently quite a lot of DDR modules have been ordered for upgrading servers. As for FB-DIMMs, today their share in the total sales structure is no more than 2%.

In quantitative terms, the server memory market, according to our forecasts, will increase. It can be assumed that in the next six months the sales structure will shift towards DDR2. Regarding FB-DIMM modules, I believe that their share will gradually increase, although in the next six months it is unlikely to be significant. Solutions using this type of memory are still very expensive - primarily due to the cost of processors.

Dmitry Borovsky

general manager of TNG company

Due to the difficult political situation this year, the corporate segment of the IT market has slowed down quite significantly. This has led to a decrease in the supply of complex solutions based on servers. There is hope that the situation will change at the end of the year, but it is difficult to guarantee that during the “hot” season the demand for components, including server memory, will be fully satisfied. Based on this, the annual volume of the Ukrainian server memory market can only be estimated approximately - according to our forecasts, it will be from 40 to 60 thousand modules, and in financial terms - from 4 to 5 million dollars.

I note that in 2006, sales of DDR333 server memory sharply decreased, and almost the same thing is happening with DDR400 - their share is now less than 5%. The main product today is DDR2-400 modules, and the demand for FB-DIMMs is due to the availability of motherboards for them on the market. Deliveries of FB-DIMMs began only in the third quarter, and so far they account for less than 5% of our sales (however, there is an obvious upward trend in this indicator).

In general, there is no particular difference between trends in the domestic and international markets. For systems in which fault tolerance is one of the most important factors, manufacturers prefer memory from well-established vendors. Perhaps the only difference is that large global brands mainly use first-tier server memory in their products - from Samsung, Hynix, Micron - while on the local market there is significant demand for modules from Corsair, Kingston, etc.

Andrey Semenovsky

Nebesa company manager

The server memory segment is just a part of the server market, although due to the high cost of modules it plays a more prominent role than memory in the desktop market. If for desktop PCs the release of successful memory models around which marketing is built is important, then for servers it is the absence of unsuccessful ones. Problems with the availability and quality of server memory can lead to costs that are many times higher than the price of the modules, which means that the responsibility for delivery times, declared model compatibility and quality in this case is an order of magnitude higher. And given the low profitability of server memory sales, one can imagine how difficult this segment is from a business point of view.

Among the features of the domestic market, I would note the fairly high potential of Ukrainian server builders. Memory consumption is about 6-8 thousand modules per quarter, and the average price of a stick is close to $150.

Changes in the structure of memory sales follow changes in the model ranges of the platform manufacturers (primarily Intel and AMD). By the time FB-DIMM was released, the sales ratio of DDR2 to DDR was 70:30. With the advent of fully buffered memory, a transition period began: demand for platforms with DDR2 support is noticeably decreasing, while those with FB-DIMM are not yet being purchased very actively due to the rather high price and the immaturity of the technology. Notably, sales of DDR increased during this period - their share in the supply structure approached 50%.

Victor Shcherbyak

Head of Sales Department at ASBIS Ukraine

The release of new server operating systems has made it possible to significantly increase the maximum memory capacity in the system (early OS versions had significant limitations on this parameter). Therefore, module sales both in unit and monetary terms have increased. According to our data, the server memory segment today accounts for about 5% of the entire market for these products.

The demand for DDR has noticeably decreased with the advent of new server platforms. The largest sales now come from DDR2 modules; sales of FB-DIMMs are also growing, although the share of the latter is still insignificant. According to my estimates, sales volumes of both types of memory will most likely be equal by the end of the year.

In general, servers are expected every year to offer greater scalability, flexibility, capacity and speed, as well as reliability and protection of investment. Building on new technologies, server memory manufacturers release modules that enable these improvements.

Ukrainian realities

According to information available from integrators, about 10,000 servers are sold in Ukraine per year (to clarify: not ordinary computers acting as servers, but systems using server memory). Even if this figure is optimistic, the point is that roughly 20,000 server memory modules ("sticks") are sold per year, of which about a third go to upgrading previously assembled systems. Taking an average module cost of $100, the total volume of the server memory segment in our country is approximately $2 million. Moreover, no more than five companies offer such products, and they work mainly with vendors such as Kingston, Samsung and Hynix, of which the first two share about 70% of the Ukrainian market.
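The back-of-envelope arithmetic above can be reproduced directly. A minimal sketch; all figures are the estimates quoted in the text, not independent data, and the variable names are my own:

```python
# Reproduce the article's market estimate for the server memory segment.
servers_per_year = 10_000         # servers sold in Ukraine per year (estimate)
modules_sold = 20_000             # server memory modules sold per year
upgrade_share = 1 / 3             # about a third goes to upgrading old systems
avg_module_price_usd = 100        # assumed average module price

segment_volume_usd = modules_sold * avg_module_price_usd
modules_for_upgrades = round(modules_sold * upgrade_share)

print(segment_volume_usd)         # 2000000 -> the ~$2M quoted above
print(modules_for_upgrades)       # 6667
```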

Some readers may find the information in this article too "basic", but we believe it will do no harm for the lead topic of this issue, dedicated to server memory. We invite you to learn more about the technical aspects of these products in the following material.


Hello, Geektimes! Popular belief says that the neighbor's grass is always greener, and that the computers meticulous enterprises purchase for their needs are more reliable and faster than retail models flavored with marketing. A whole caste of enthusiasts hunts for server components and idolizes the performance of enterprise-class hardware. Let's figure out whether large organizations are really splashing around in an "IT paradise", or whether the geeks have created an idol out of thin air.


There are no barriers for enthusiasts, especially when those barriers are erected by cunning marketers who have divided all electronics into corporate and consumer! Advertising aimed at ordinary users appeals to the mysterious "user experience" ("The camera of this smartphone delivers professional-quality pictures!"), while the cliche that professionals don't use junk has been exploited for ages. So if you are after the notorious "professional equipment" and quality of service, you should ask for enterprise-class hardware and service, right?

The motives guiding restless enthusiasts lie on the surface: even though consumer technology evolves faster thanks to buyers' appetites, "battle-hardened" corporate-class components should clearly be more reliable, and even cheaper on the secondary market. Some geeks game on workstation video cards and assemble powerful, "eternal" home PCs from server hardware! So does it make sense to try your luck?

And, of course, there is some sense in such an undertaking, but in bringing corporate "attributes" home you can easily get stuck: at best you overpay for unclaimed functionality, at worst you end up worse off than with the options available to retail buyers. Let's figure out what the catch is in using hardware designed for corporations.

A server chip can be a gaming chip too. Intel Xeon in home PCs

The first thing technology enthusiasts borrow from the enterprise segment is server processors. Not exotic ones, but the most "understandable" ones, based on the x86 architecture. This pleasure is not cheap, so the "Xeon farmers", relatively speaking, fall into two camps with slightly different priorities in PC building:


Xeon - not originally meant for games and benchmark "racing", but sometimes useful there

Enthusiasts focused on high-end components. This is the level where mainstream versions of the Intel Core i7 are no longer enough, and a look at the LGA 2011 platform (of any generation) suggests that the "supercharged" Core i7s offer "the same eggs" as the Xeons, only fewer of them.

And since we're talking about price: there have been times when eight-core Xeons turned out to be a third cheaper and noticeably "cooler" than six-core Core i7 Extreme Editions. For example, this was the case after the debut of Intel Haswell-E chips in 2014: firstly, the price difference between the eight-core Core i7-5960X and the "civilian" quad-core i7-4790K was a measly 15%, and secondly, the junior eight-core server Xeon E5-2609 v4 cost about 30% less than the candidate from the Haswell-E camp. At the same time, unlike the "plain" Core i7, the Xeon has a lower TDP and carries no integrated graphics, which are useless to enthusiasts anyway.

At the same time, all three models have tons of L3 cache, and although the Xeon's clock speed is lower, faith that "there is no such thing as a superfluous core" and that "very soon games will be optimized to run fast on 8 or more cores" dies hard. Thrifty speed lovers talk themselves into it, drop a junior Xeon into an Intel X99 board and... never admit to anyone how things actually stand in games.

The truth is that four cores, topped up with Hyper-Threading, are almost always more effective in games than eight low-frequency "pots" in a Xeon, which cannot even be overclocked (locked multiplier, near-zero headroom on the bus).

"Kulibins" (inveterate tinkerers) who want to modernize an old platform at minimal cost. For example, to replace an old Core 2 Duo processor, they buy not an aging Core 2 Quad but a much cooler, higher-frequency quad-core Xeon X5460, which, with a simple adapter, can be installed not in a Socket 771 server motherboard but in a "civilian" Socket 775 one.

The main thing in this scenario is to take care of high-quality cooling (server "stones" sport a TDP of about 120 W versus 95 W for standard quad-cores), but in the end this upgrade from a very old platform to a "tolerably old" one pays off, especially since on some motherboards the processor can be overclocked up to 4 GHz.

And after all, "Xeons" do have advantages that compensate for their multi-core sluggishness in games! For example, support for multiprocessor configurations, with which video/music/photo encoding and CAD modeling run much faster than on a top-end Core i7 Extreme. Support for registered ECC memory allows errors to be corrected on the fly, which comes in handy when uptime matters (it's a server!). Support for huge amounts of RAM and a huge number of cores also helps when a server needs to process incoming connections as quickly as possible. But all of this is almost useless in a home PC.
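The on-the-fly error correction that ECC memory performs in hardware can be sketched in software. Below is a toy Python implementation of a Hamming(7,4) code, the classic single-error-correcting scheme that ECC memory is conceptually based on; the function names are my own, and nothing here reflects an actual DIMM's circuitry.

```python
# Toy Hamming(7,4) code: 4 data bits + 3 parity bits, able to locate
# and correct any single flipped bit - the principle behind ECC DIMMs.

def encode(d):
    """d is a list of 4 bits; returns a 7-bit codeword (positions 1..7)."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4          # parity over positions 1, 3, 5, 7
    p2 = d1 ^ d3 ^ d4          # parity over positions 2, 3, 6, 7
    p3 = d2 ^ d3 ^ d4          # parity over positions 4, 5, 6, 7
    return [p1, p2, d1, p3, d2, d3, d4]

def decode(c):
    """Corrects at most one flipped bit, then returns the 4 data bits."""
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 + 2 * s2 + 4 * s3   # 1-based position of the bad bit
    c = c[:]
    if syndrome:
        c[syndrome - 1] ^= 1          # flip the erroneous bit back
    return [c[2], c[4], c[5], c[6]]

word = [1, 0, 1, 1]
damaged = encode(word)
damaged[4] ^= 1                        # simulate a single-bit memory error
assert decode(damaged) == word         # the error is corrected transparently
```

Real ECC DIMMs use a wider code (typically 8 check bits per 64 data bits) and do this correction inside the memory controller, invisibly to software - exactly the "on the fly" behavior described above.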

What a home PC does benefit from is many cores at high frequencies. If those conditions are met, the processor is compatible with the LGA 2011 or LGA 2011-3 platform, and it costs less than a "plain" Core i7, there is a point in buying one. Otherwise, it is better either to get by with a mainstream quad-core with eight threads, or to build a workstation for specific use cases (rendering, encoding).


High-frequency Intel Xeons (if they are cheaper than mainstream CPUs) can be a good help not only in work, but also in games (source: ferra.ru)

Mow down frags on a workstation with hacked NVIDIA drivers

If gaming on a server processor happens in spite of, rather than because of, the installed hardware, then graphics cards meant for 3D modeling and design have historically been quite good in gaming disciplines. In the AMD-NVIDIA confrontation, even the scenarios for "misusing" video accelerators have differed: "red" gaming video cards were until recently in great demand among miners, while the NVIDIA Quadro has historically been coaxed into retraining as a gaming card.


Professional NVIDIA Quadro video cards are significantly more productive than their gaming counterparts

And the Quadro is quite suitable for the purpose. The fact is that a gaming GeForce is most often a professional video card with some GPU pipelines disabled (for reasons ranging from marketing to chip binning), sold at a more affordable price. For example, the new professional Quadro P6000 contains the "fullest" version of the GP102 graphics chip and for this reason outperforms the cool gaming GeForce GTX 1080 by almost 20%, and invariably leaves behind the mighty Titan X based on the same Pascal architecture.

In general, a peculiar sport has long since formed among fans of NVIDIA video cards: using hardware modifications to bring a GeForce closer to a Quadro (for example, the GTX 680 can be made to resemble the Quadro K5000), while gamers, on the contrary, try to cross a hedgehog with a snake, "picking at" drivers to make professional video cards run faster in shooters, racing games and adventures. Such activity is hardly "playing as intended", but one can only envy the enthusiasts' persistence.

Mobile workstations show a funny pattern: in games, almost every mobile NVIDIA Quadro accelerator is the equal of a gaming GeForce one class lower, while in CAD disciplines it is a couple of levels cooler than that same GeForce.


Performance of mobile NVIDIA Quadro in comparison with GeForce analogues (source: msi.com)

For example, the Quadro M2000M performs at the level of the GeForce GTX 960M in games, but as soon as it comes to simulation, it "jumps" in results to GeForce GTX 980M territory. Roughly the same ratio holds for other Quadro models: in games the M5000M competes with the GTX 980M, and the M1000M with the 950M.


NVIDIA Quadro M6000 compared to the fastest gaming graphics cards
(source: techgage.com)

Ice cream for the kids, flowers for the lady: priorities in corporate memory and storage

Server RAM is incompatible with home PC motherboards not because someone decided so "to spite" end customers. Server RAM is simply designed a little differently: it places a register between the memory chips and the system memory controller, reducing the electrical load on the controller and allowing more modules per memory channel.

In other words, the additional chips and the ability to automatically detect and correct errors greatly increase this memory's fault tolerance, but also its cost. So don't be surprised to find that even low-frequency (by DDR4 standards) modules cost 50% or more above their "household" counterparts: the inhuman endurance requirements of systems running 24/7 have noticeably reshaped server RAM. In everyday use it will be neither faster nor more efficient than its "civilian" counterparts, so for high performance look to gaming kits - for example, HyperX Savage if you need easily overclockable memory for gamers, or HyperX Predator if you want to squeeze the maximum out of the RAM subsystem. For standard frequencies, the budget Kingston ValueRAM is great: reliable, install it once and forget it.
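As a reminder of the marking letters explained at the start of the article (U, E, R, F, S), here is a small illustrative Python helper that maps a module's marking suffix to its memory type; the function and dictionary names are my own, not any vendor's API:

```python
# Map a module marking's suffix letter (or trailing digit) to the memory
# type, following the letter scheme described at the start of the article.
SUFFIX_TO_TYPE = {
    "U": "Unbuffered (regular desktop DIMM)",
    "E": "Unbuffered with ECC",
    "R": "Registered (server memory)",
    "F": "FB-DIMM (fully buffered, server memory)",
    "S": "SO-DIMM (laptop memory)",
}

def classify_marking(marking: str) -> str:
    last = marking.strip().upper()[-1]
    if last.isdigit():
        # No letter at all (e.g. "DDR3 PC-10600") also means unbuffered
        return SUFFIX_TO_TYPE["U"]
    return SUFFIX_TO_TYPE.get(last, "unknown marking")

print(classify_marking("DDR2 PC-6400E"))   # Unbuffered with ECC
print(classify_marking("DDR3 PC-10600"))   # Unbuffered (regular desktop DIMM)
```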


A server processor in a home PC can be useful, but instead of register memory it is better to purchase a standard DDR3/DDR4 kit

Enterprise-class SSDs have also been "tuned" toward reliability. They can, for example, flexibly manage the reserve (over-provisioned) capacity to suit the controller's needs: the larger the reserve, the lower the wear on the cells and the higher the drive's endurance. Add a host of algorithms effective under harsh operating conditions, especially for data safety when the drive loses power unexpectedly. The firmware is tuned for minimal latency under multi-user access and strives for stable performance even under an abnormally large volume of read and write operations. A home computer will never generate such a load, even if you "torture" the SSD with torrents. On the other hand, industrial SSDs are no record-setters in typical operations either: ordinary SATA drives will become "morally" obsolete in capacity long before their cells exhaust the available rewrite cycles - verified by a long-term comparative test involving HyperX models. And the speed records, at the same level of reliability, long ago passed to NVMe drives implemented in one of the newfangled form factors "on top" of PCI Express. In the Kingston/HyperX line, the "king of the hill" was and remains the Predator SSD PCI-E.


The longevity benefit of purchasing an enterprise-class SSD does not compare to the performance benefits of a PCI-e gaming drive.
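The over-provisioning trade-off described above can be put into numbers. A minimal sketch with purely illustrative capacities, not any specific product's specification:

```python
# Over-provisioning (OP): the share of raw flash reserved beyond the
# user-visible capacity, expressed relative to the usable capacity.
def over_provisioning_pct(raw_gb: float, usable_gb: float) -> float:
    return (raw_gb - usable_gb) / usable_gb * 100

# Hypothetical consumer drive: 512 GB raw flash, 480 GB exposed (~7% OP).
# Hypothetical enterprise drive: same flash, only 400 GB exposed (28% OP),
# trading capacity for lower cell wear and steadier sustained writes.
print(round(over_provisioning_pct(512, 480), 1))   # 6.7
print(round(over_provisioning_pct(512, 400), 1))   # 28.0
```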

If you can’t, but really want to, then you can

Enterprise-class hardware is not so different from its "civilian" counterparts as to be considered unsuitable for a home PC; you just always need to ask whether the game is worth the candle. The situation is as follows:

Buying a platform with registered error-correcting (ECC) memory for home use is a bad idea. The excessive durability does not make up for the expensive components, the average (compared with gaming analogues) performance will not please you, and prices for server memory are noticeably higher than for an ordinary DDR3/DDR4 module.

Enterprise-class drives in a home computer are needed if you are paranoid, extremely worried about the safety of data in the event of power outages, and are worried about the reliability of modern SSDs in general. Organization-oriented drives will allow you to “maximize” reliability indicators so that your soul can be at peace.

A server processor for games is an interesting and quite effective idea, but only when we are talking about a model that is cheaper than mainstream analogues and, most importantly, high-frequency. Or about upgrading an old computer to a server CPU at "little cost", that is, for almost nothing. And ideally, the platform should be borrowed from the "regular" Extreme series of mass-market processors.

Professional video cards do an excellent job not only with modeling, but also with games. But it should be remembered that in mobile workstations (with a “stifled” TDP) a professional middle-class video accelerator will be able to compete in gaming disciplines only with budget-class gaming video cards. And desktop professional video cards, in turn, although fast in all work scenarios, are exorbitantly expensive, and are certainly not suitable as an economical option for “work and play.”

Be that as it may, you cannot skimp on high-quality and fast RAM... But today you can! We remind you that from February 2 to February 20, all memory kits