Technical optimization solutions for servers: optimizing server infrastructure and the main parameters of technical optimization

Recently I received a request for help in setting up a dedicated server running an online store on 1C-Bitrix. The reason for the request was the slow operation of the site.
We looked at the site, and indeed some pages took more than a minute to load! The first thing that came to mind was the suboptimal performance of components written by another developer, but that alone could not explain delays of this magnitude...

Initial data: a Xeon-based server with 2 GB of memory and RAID; OS: FreeBSD; 1C-Bitrix: Site Management, Business edition.

Well, let's try to rectify the situation somehow...
Let me say right away that this article is not a manual for the module, just a real-life case of using it. Perhaps it will be useful to someone.

After the audit, the following main problems were identified:
1. A PHP accelerator needs to be installed on the server.
2. On the /price/ page, the “nvisions:menu.sections” component has serious problems: it generates a database query that takes almost a minute to execute. This is the main reason for the long page load time and for a large share of the server load.
3. The database is slow (687 write queries per second is very low); the problem may be in the server configuration. The tables need to be converted to InnoDB and InnoDB needs to be tuned.
4. The file system is not very fast. This may be due to hardware features of the server (for example, the RAID controller), but in principle the site should still work acceptably at this speed.
5. There is a problem in the site template (links to non-existent files); the links need to be removed, because handling them consumes a lot of resources.
6. A two-tier architecture needs to be configured on the server (static content served by nginx). This will significantly reduce the load on Apache, stabilize memory consumption under load, and therefore speed up the site and increase the reliability of the project as a whole.

Let's analyze the information from the 1C-Bitrix performance module:

The figure clearly shows problems with the database server; most likely the settings are not optimal for a dedicated server.
The number of file operations is also suspiciously low.


There are obvious problems with the code or components on the /price/index.php page.
The generation time for /bitrix/urlrewrite.php is suspiciously long – let's look further:

Yes, here is the source of the problems: the template contains a link to a non-existent image. This generates a 404 error and forces Apache to process the error and build a full-fledged error page.

The same problem affects all pages of the site that use the problematic template:


And here are the problematic components on the page:


The menu component has caching disabled.
Page summary:

That is the quick analysis. Notice how handily the performance module tells you “where the problems are.” Let's start fixing them:

We added the image that was being linked to. We only added the image and did not remove the links, because there were many of them, including in third-party components. On this page we also disabled the problematic third-party component (nsvision:menu.sections), since its purpose was not clear (after disabling it, nothing changed visually).
Result:


urlrewrite.php is no longer called on every hit



As you can see, performance has doubled (!).

Let's move on:
Let's move on: installing eAccelerator. I won't describe here how the accelerator is installed; if needed, this information can always be found on the Internet.
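For reference, a minimal sketch of the php.ini settings involved (the shared-memory size and cache path are illustrative, not taken from the server in question):

    extension = "eaccelerator.so"
    eaccelerator.enable = "1"
    eaccelerator.optimizer = "1"
    eaccelerator.shm_size = "16"                 ; shared memory in MB
    eaccelerator.cache_dir = "/tmp/eaccelerator" ; must exist and be writable
    eaccelerator.check_mtime = "1"               ; recompile when a file changes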







Result after installing eAccelerator: another two-fold increase in performance.

Let's move on: optimizing the database (converting the tables to InnoDB and tuning the settings).
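A hedged sketch of what this step usually looks like (the database name and memory values are assumptions for a 2 GB machine, not the exact settings used here). First, generate the ALTER statements for the MyISAM tables:

    SELECT CONCAT('ALTER TABLE ', table_name, ' ENGINE=InnoDB;')
    FROM information_schema.tables
    WHERE table_schema = 'sitedb' AND engine = 'MyISAM';

Then set reasonable InnoDB parameters in my.cnf and restart MySQL:

    [mysqld]
    innodb_buffer_pool_size        = 512M   # the main InnoDB cache
    innodb_log_file_size           = 128M   # remove old ib_logfile* before changing
    innodb_flush_log_at_trx_commit = 2      # flush the log once per second
    innodb_file_per_table          = 1      # a separate tablespace per table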


As can be seen from the performance module test, the speed of the database has increased significantly.
Overall site performance after optimizing the database, however, remained unchanged, possibly because of the slow file system.

UPDATE:
Performance Module Recommendations.
Following the recommendations of the module, we disable the open_basedir restriction; since the server is dedicated to our project alone, we assume that security as a whole will not be compromised.
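On a dedicated server that simply means leaving the directive empty in php.ini (shown below as an illustration); on shared hosting it should stay in place:

    ; php.ini
    open_basedir =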

Result:


The result, as they say, speaks for itself.

All that remains is to rewrite the “crooked” components and the project will fly.

We also installed and configured nginx as a front-end proxy for Apache. I am not including screenshots, because the numbers remained virtually unchanged, but by subjective assessment the pages now load a couple of times faster.
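The two-tier setup boils down to a configuration along these lines (a minimal sketch; the document root, domain and Apache port are placeholders):

    server {
        listen      80;
        server_name www.example.com;

        # static files are served by nginx directly from disk
        location ~* \.(gif|jpe?g|png|ico|css|js)$ {
            root    /home/www/site;
            expires 30d;
        }

        # everything else is passed to Apache listening on 8080
        location / {
            proxy_pass       http://127.0.0.1:8080;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }
    }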

The template still takes quite a long time to generate (its generation time is almost the same as that of the system kernel); apparently, the previous developer's code was written suboptimally. There is no time, budget or desire to untangle someone else's code: it is easier, faster and cheaper to write your own from scratch.

In general: the performance module is a very useful and convenient tool for debugging the operation of a project and a server. Thanks to its developers for that.

P.S. Personally, I have little experience with Linux, and this was my first close acquaintance with FreeBSD. I was surprised that after installing some software the config files are completely empty (MySQL, for example). I was pleased with how easy it is to install software from the ports.

", direction "Data transmission systems".

Before going into the technical intricacies of WAN optimization, let's figure out what it is and what it is intended for.

Recently, the migration of IT structures to a decentralized computing model has become evident, in which companies distribute their processing centers around the world. As a result, the volume of data and the number of IT resources stored outside of corporate data centers (DCs) has increased, and department heads are now looking for ways to consolidate their IT infrastructure. Enterprises have realized the benefits that consolidation brings in terms of reducing infrastructure complexity, reducing costs, improving resource utilization, and protecting data.

Centralizing resources and data demonstrates the benefits described above, but there are various pitfalls that organizations planning to optimize their IT infrastructure should keep in mind. One of the problems they will face is slower application performance. The popularity of the distributed computing model was largely driven by the need to keep IT resources as close as possible to distributed network users to ensure maximum performance. Consolidating servers in a central location reverses the resource allocation pattern and therefore degrades the performance of many applications.

To solve the problem, organizations are expanding the capacity of WAN links in an attempt to reduce response times. Then they discover that expanding the channels has virtually no (or minimal) impact on the speed of applications, since the problem lies in the large delay in data transmission over the channel and the use of protocols that are ineffective for working with WAN. In addition, expanding bandwidth outside of Moscow may not be cost-effective overall. And it is precisely for such tasks that WAN channel optimization equipment is used.

Globally, such WAN optimization solutions can reduce costs for organizations in several ways:

    reduce the cost of communication channel bandwidth. In fact, organizations will be able to do without purchasing additional bandwidth, which is a key condition for many companies when starting projects to implement WAN optimizers;

    consolidate infrastructure in a data center. Companies will be able to remove a significant part of the IT infrastructure (file and mail servers, software distribution servers, SharePoint portals, tape drives, etc.) from remote offices without loss of performance and manageability;

    simplify the remote office infrastructure. Some manufacturers offer a software platform in their devices that allows users to host some of the services remaining after data center consolidation (for example, print server, DHCP server, file services) directly on the optimization device. This makes it possible to further reduce operating costs.

What is WAN optimization? A network application optimization solution relies on the client-server architecture and the session-based nature of network applications; its main task is to optimize application sessions. Essentially, it is a set of devices, installed in the data center and in each regional (local) office of the company, that improves application performance. They pass all traffic through themselves, “intercepting” and optimizing application sessions.

There are a number of manufacturers offering solutions in the field of optimizing traffic transmission over long WAN channels. The most famous of them on the Russian market include Riverbed (with its SteelHead product), Cisco (WAAS product), Juniper (WXC product) and BlueCoat (ProxySG product).

The optimization process in the equipment they offer is based on roughly the same mechanisms: data compression, caching, TCP protocol optimization, and optimization of the operating logic of the business applications themselves.

All the application optimization mechanisms under consideration use session segmentation, dividing the session between client and server into three segments: between the workstation and the optimization device, between the devices across the WAN, and between the optimization device and the server in the data center. In the first and third segments the session runs over the LAN, so the shortcomings of TCP do not affect application latency; the second segment is optimized by adjusting TCP behavior. As a result, both the WAN transmission delay and the application response time are kept to the necessary minimum. Let's look at the mechanisms that, in one form or another, underlie every optimizer manufacturer's solution.

Compression mechanisms speed up data transfer by increasing the amount of useful information transferred per unit of time. Data transmitted over the network is most often presented in a non-optimal format and is unreasonably large. With the active use in application development of XML and other text-based formats for representing information, developers no longer have to worry about data representation. This increases the speed and ease of development, but it also means that essentially unstructured data is transmitted over the network, introducing a large amount of redundancy into the traffic.

Traffic compression eliminates this drawback. Application optimization engines use a lossless data compression algorithm (such as Lempel-Ziv) together with an algorithm that eliminates duplicate blocks. The combination of the two achieves a high degree of lossless compression, ensuring fast transmission of information even over relatively low-speed channels.

Compression functionality, in one form or another, is found in almost every modern router, and in fact this is where modern optimizers began. Network administrators often believe that this alone is the notorious “optimization” and convince their managers that there is no need to purchase special devices. This is where they are wrong, as we will see later.

Caching Mechanisms also help reduce the amount of transmitted traffic. In a distributed network, situations often arise when all employees of a company need to transfer the same data. For example, when updating software products or anti-virus software databases, transmitting requests from company management, multimedia files and training programs, public document libraries. Using optimization devices allows this information to be cached, that is, transmitted once over the WAN, and subsequently provided to each user locally (from the hard drive of the nearest optimization device), rather than from a remote global resource.

An important difference from conventional caching devices is that optimizers break information into parts/blocks and save them to the hard drive. The practical benefit is that if some of the information in a previously transferred file changes (for example, a slide or picture is inserted into a document), only the change is transferred, not the entire file. The mechanisms for dynamically dividing the transmitted information into blocks and tracking changes are proprietary and not disclosed. As for the details, manufacturers use two approaches. The distinguishing feature of the first is unification: when one file is transferred to different branches, only one copy of the file is stored on the central optimizer for all remote optimization devices. In the second approach, the hard disk space is divided dynamically in proportion to the number of remote offices (remote optimizers), and if one file is transferred to all branches, a copy of it appears in each disk segment “responsible” for its branch.

Obviously, the caching mechanism works in tandem with the compression mechanism. It is thanks to these two mechanisms that optimizer manufacturers show impressive graphs where the optimization ratio reaches 150-200x. We obtained similar figures when sending the same large file multiple times: after the first transfer it was stored in the device cache, and afterwards only kilobytes of references pointing to the file's location on the hard drive were transferred. A logical question immediately arises: what is the capacity of the hard drive, and can external storage be connected to the optimizers? Some manufacturers have mentioned the possibility of such equipment appearing (but it would be intended exclusively for installation in the data center).

TCP optimization mechanisms work at the transport level. This was the main “battlefield” of optimizer manufacturers before they began to “climb” to higher (application) levels. The TCP transport protocol was developed back in 1980 and has not undergone major changes since, while data transmission technologies have changed significantly. When packets are lost, standard TCP sharply reduces its speed, roughly by half, and then increases it again only linearly, in small steps. Therefore even a relatively small packet loss rate (2-3% is considered normal) leads to frequent and sharp drops in effective network speed.
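A rough back-of-the-envelope calculation (illustrative numbers) shows why latency alone throttles standard TCP even on a wide channel:

    maximum TCP throughput ≈ window size / round-trip time (RTT)

    Example: a 10 Mbit/s WAN link with RTT = 100 ms and a standard 64 KB window:
        64 KB / 0.1 s = 640 KB/s ≈ 5.2 Mbit/s
    i.e. barely half the channel can be used, no matter how much bandwidth is added.
    To fill the 10 Mbit/s link, the window would have to be at least
        10 Mbit/s × 0.1 s = 1 Mbit ≈ 125 KB,
    and every loss event cuts the usable window roughly in half again.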

When a loss occurs, the optimized TCP implementation reduces the speed not by half but only by a few percent, and a single lost packet barely reduces the speed at all. Thus the network application optimization solution primarily increases the speed of information transfer: the improved behavior of the TCP protocol ensures maximum utilization of the channel bandwidth.

Application-level optimization mechanisms accelerate the business applications themselves over WAN channels. Unfortunately, the implementation of some protocols in popular products is far from perfect. In particular, the CIFS (Common Internet File System) protocol, actively used in Microsoft networks, creates an excessive volume of service messages (delivery confirmations, device readiness checks, etc.). In a local network this overhead does not add a significant delay, but in a distributed network it becomes significant. Optimization devices can process most of these unimportant messages locally, without sending them over the WAN, reducing both the amount of traffic and the response time of a number of network application functions such as network printing and access to file services. Today it is precisely in this area that manufacturers compete. The most frequently optimized protocols are CIFS, NFS, MAPI, video, HTTP, SSL and Windows printing. This “gentleman's set” is present in the portfolio of almost any manufacturer, but each optimizes it in its own way.

From all of the above, it follows that traffic from the source to the recipient passes through at least two optimization devices, and each of them processes it up to the application level.

It’s not hard to guess that all optimizers work with TCP-based applications, which means the rest of the traffic passes through without optimization. The same can be said about encrypted traffic (the exception, perhaps, is SSL - many optimizers can “break” the session, optimize the traffic, and encrypt it back).

Companies with a distributed structure that want to reduce costs for telecom operators may be interested in such a solution. This can manifest itself both in the case of using per-megabyte tariffs (the effect is obvious) and in the case of unlimited ones (switching to lower-speed tariff plans). Today, perhaps, this is the most interesting purpose for using such devices. Other bonuses, not so obvious and transparent, may be: consolidation of servers, reduction in the number of IT personnel in remote offices, increased productivity due to increased application speed.

In the fight for interest in optimizers, manufacturers also offer the ability to optimize the work of mobile employees by installing specialized software on laptops, and the ability to run virtual servers on the optimizer in a remote office. The laptop software shares its code with the software on the optimizers themselves, i.e. the laptop essentially becomes an optimizer.

In addition to companies with a distributed structure, this solution may also be of interest to operators who can provide companies with optimization services (for example, rental). Such services are becoming popular in Europe.

The most common optimization solution is, of course, Cisco WAAS. Good vendor marketing and a solid solution and development strategy do their job. With the advent of the affordable and reliable WAVE series, Cisco's position has become even stronger.

Juniper's WXC solution differs in that all traffic is encapsulated in a UDP tunnel, i.e. optimization is applied to all traffic. This approach certainly has its advantages; among them I would include a fairly high “hospital average” optimization ratio across all traffic (based on testing with one large customer).

Riverbed came to Russia not long ago but is actively developing its partner network. It has significant advantages over competing solutions (for example, a well-designed caching mechanism and application optimization), but the high price of the solution is still holding back the growth of its popularity.

Summarizing all of the above, I would like to note that WAN optimization is an interesting solution, quite transparent for the business, but unfortunately it has not yet seen much demand among Russian companies. In the implementations we have done, traffic was reduced by 2-3.5 times on average and application responses were noticeably accelerated. For example, one of our customers, working over satellite links, saved about 20 hours of waiting for responses over a month of testing. For our own company, implementing this solution allowed us to halve our network traffic costs and increase the speed of corporate applications by an average of 1.7 times, while the investment in the project paid back in only 3 months.

In any case, if you are interested, it is best to first test the solution for about a month. Only based on the results of such testing will it be possible to say how effective the implementation of optimizers is in relation to a specific network. It is best to involve experienced system integrators to develop a solution, conduct testing and installation.

Effective SEO can be undone by a single annoying mistake in the technical optimization of a site: search engine robots will not be able to index the resource correctly or understand its structure, and users will not find the information they need. All this, in turn, leads to low ranking of the site.

Technical website optimization is a set of measures that are aimed at adjusting the technical aspects of the resource in order to improve its interaction with search engine robots. Technical optimization allows for the fastest and most complete indexing of site pages.

5 main technical optimization parameters

1. Robots.txt file

It is important to note that the robots.txt file must be located in the root directory of every resource. This is the first file that search engine robots request when they visit the site, and it stores instructions for them.

This file specifies the site's indexing parameters: which pages should be included in the search database and which should be excluded. In addition, it can specify directives both for all search engine robots at once, and for the robots of each search engine separately. You can learn more about compiling this file and setting it up on the Yandex webmaster help website.
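A minimal robots.txt sketch (the disallowed paths and domain are illustrative; the real directives depend on the site's structure):

    User-agent: *
    Disallow: /bitrix/
    Disallow: /search/

    User-agent: Yandex
    Disallow: /bitrix/
    Disallow: /search/
    Host: site.ru

    Sitemap: http://site.ru/sitemap.xml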

You can check the file in the Yandex.Webmaster service, menu item “Analysis of robots.txt” (https://webmaster.yandex.ru/robots.xml).

2. Sitemap - site map

A site map is one of the pages of the resource, whose content resembles the table of contents of a regular book. This page is used as a navigation element. The site map contains a complete list of sections and/or all pages of the resource.

An HTML sitemap is needed by users to quickly and easily find information, and XML is needed by search engines to improve site indexing.

With the help of a site map, search robots see the entire structure and index new pages faster.
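For reference, a minimal XML sitemap looks roughly like this (URLs, dates and priorities are illustrative):

    <?xml version="1.0" encoding="UTF-8"?>
    <urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
      <url>
        <loc>http://site.ru/</loc>
        <lastmod>2013-01-15</lastmod>
        <changefreq>weekly</changefreq>
        <priority>1.0</priority>
      </url>
      <url>
        <loc>http://site.ru/katalog/</loc>
        <changefreq>daily</changefreq>
        <priority>0.8</priority>
      </url>
    </urlset>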

Checking the site map (https://webmaster.yandex.ru/sitemaptest.xml)

An example of a correct sitemap in .html format:

3. Redirects (redirections)

A redirect is used to send website visitors from one page to another. There are many situations in which redirects are needed:

  1. Changing the domain name of the site.
  2. Merging site mirrors. Many sites do not have a 301 redirect configured from the domain with www in the address to the domain without www, or vice versa.

Redirects are set up in the .htaccess file. Since search engines may consider site.ru and www.site.ru to be different sites, duplicate pages may appear in the index, which will create difficulties with ranking in search results.
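A typical .htaccess sketch for merging the www mirror into the main domain (site.ru is a placeholder):

    RewriteEngine On
    RewriteCond %{HTTP_HOST} ^www\.site\.ru$ [NC]
    RewriteRule ^(.*)$ http://site.ru/$1 [R=301,L]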

Main 3xx (redirect) status codes:

  • 300 - Multiple Choices (several options to choose from);
  • 301 - Moved Permanently (moved forever);
  • 302 - Found (moved temporarily);
  • 303 - See Other (the requested resource can be found at another address);
  • 304 - Not Modified (the content has not changed - this applies to pictures, style sheets, etc.);
  • 305 - Use Proxy (access must go through a proxy);
  • 306 - Unused (reserved, not in use).

Useful service for determining page responses: http://www.bertal.ru/

4. Consistent page URL format

It is important to check that the addresses of all pages on the site are written consistently. For example, all pages across the site should end with a trailing slash: http://site.ru/katalog/ and http://site.ru/products/ . If some pages look like http://site.ru/katalog and others like http://site.ru/products/, this is incorrect.
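One possible way to enforce a trailing slash is a 301 rule in .htaccess (a sketch; the file-exists condition keeps real files such as images untouched - test before using on a live site):

    RewriteEngine On
    RewriteCond %{REQUEST_FILENAME} !-f
    RewriteRule ^(.*[^/])$ /$1/ [R=301,L]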

It will be convenient to check the addresses of internal resource pages for errors after creating a site map.

5. Site errors

When any page on a site is loaded, a request is sent to the server, which responds with an HTTP status code and loads (or does not load) the page.

Basic status codes:

  • 200 - the page is fine;
  • 404 - non-existent page;
  • 503 - server is temporarily unavailable.

Handling of the “404 error” is one of the most important technical aspects of optimization and must be set up correctly.

If a page exists but the server reports a 404 error when it is requested, the page will not be indexed by search engines. Conversely, if non-existent pages do not return a 404, a large number of pages with the same text may end up in the index, which has an extremely negative effect on ranking.

You can check status codes using http://www.bertal.ru/ or Yandex.Webmaster.
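The same check can also be done from the command line with curl (URLs are examples):

    curl -s -o /dev/null -w "%{http_code}\n" http://site.ru/katalog/       # expect 200
    curl -s -o /dev/null -w "%{http_code}\n" http://site.ru/no-such-page/  # expect 404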

We have considered only the main parameters of the technical improvement of the site, which you need to pay attention to first. If you find such errors on your website or have difficulty eliminating them, contact only a professional SEO company.

There are several methods you can use to increase server performance, but the best is optimization.

Operating system optimization (FreeBSD)

  • Transition to 7.x is useful for multi-core systems, since the new ULE 3.0 scheduler and jemalloc become available. If you are running a legacy 6.x system and it cannot cope with the load, it is time to switch to 7.x.
  • Transition to 7.2 will allow you to increase KVA, get better default sysctl values and use superpages. FreeBSD 8.0 is already in preparation and should raise performance further.
  • Transition to amd64 makes it possible to increase KVA and shared memory beyond 2 GB. Room for growth is necessary, because databases keep getting bigger and demand more memory.
  • Offloading the network subsystem in FreeBSD will also help. This is done in two stages: tuning the ifconfig parameters and the sysctl.conf/loader.conf settings (a sketch follows this list). At the preparation stage, check what the network card can do. Drivers from Yandex can increase throughput by using multiple threads and are useful on multi-core machines. For a cheap network card, polling is the best solution. The latest updated FreeBSD 7 tuning guide will help here.
  • FreeBSD copes well with a huge number of files thanks to caching of file names in directories (dirhash). A hash-table lookup quickly finds the required file. Although the default memory limit for it is only about 2 MB, it can be raised via vfs.ufs.dirhash_maxmem as long as the current usage (vfs.ufs.dirhash_mem) keeps approaching it.
  • Soft updates, gjournal and mount options matter for the new terabyte drives: they perform excellently, but after a power failure their fsck takes a very long time, so use soft updates or journaling via gjournal.
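A sketch of the /etc/sysctl.conf side of such tuning (values are illustrative starting points, not universal recommendations; the interface name is an example):

    kern.ipc.somaxconn=4096          # listen queue for a busy web server
    net.inet.tcp.sendspace=65536     # default socket send buffer
    net.inet.tcp.recvspace=65536     # default socket receive buffer
    vfs.ufs.dirhash_maxmem=16777216  # allow more memory for directory name hashing

    # polling for a simple NIC is enabled per interface, e.g.:
    #   ifconfig em0 polling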

Frontend optimization (nginx)

This can be classified as premature optimization, although it will help improve the overall response time of the site. Among the standard tweaks, it is worth paying attention to reset_timedout_connection, sendfile, tcp_nopush and tcp_nodelay (a configuration sketch follows the list below).

  • Accept filters are a technology that hands a connection from the kernel to the process only when new data has arrived or a complete, valid HTTP request has been received. These filters help relieve the server when there is a huge number of connections.
  • nginx caching is flexible and can be applied to fastcgi or proxy backends. Anyone can make smart use of caching in their project.
  • AIO is very useful for certain server load patterns, because it saves response time while reducing the number of workers. Recent versions of nginx allow aio to be used in tandem with sendfile.
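The directives mentioned above translate into nginx.conf fragments roughly like this (values are examples; the accept filter line applies to FreeBSD with accf_http loaded):

    http {
        sendfile                  on;
        tcp_nopush                on;
        tcp_nodelay               on;
        reset_timedout_connection on;
        keepalive_timeout         30;

        server {
            # hand the connection to nginx only when a full HTTP request has arrived
            listen 80 accept_filter=httpready;
        }
    }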

Backend optimization

  • APC is an opcode cacher that reduces load by caching compiled code in RAM. Its locking deserves attention, since it can become a bottleneck - which is why many people switch from APC to eAccelerator; it is worth replacing the default locking with spinlocks or a pthread mutex. The "hint" values should be raised if there is a huge number of .php files or if the APC user cache is used heavily (see the sketch after this list). APC fragmentation is a sign that you are using APC inappropriately: it cannot evict entries by TTL or LRU on its own.
  • PHP 5.3 brings a noticeable performance gain, so it is worth upgrading your PHP version, although the list of deprecated functions may scare many.
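An illustrative php.ini fragment for APC (directive names are from the APC extension; the values are assumptions that depend on the code base):

    extension = apc.so
    apc.shm_size = 64             ; shared memory in MB (newer APC also accepts "64M")
    apc.num_files_hint = 10000    ; expected number of .php files
    apc.user_entries_hint = 10000 ; expected number of user-cache entries
    apc.stat = 0                  ; skip stat() on every request; reset the cache on deploy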

Database optimization

There are a lot of ideas on the Internet for improving MySQL performance, because every web project sooner or later runs into limits on memory, disk or CPU. Simple recipes will not solve such problems; it is worth spending time on profilers (dtrace, systemtap and oprofile) and on additional tooling. You need not only to be able to use indexes, sorting and grouping well, but also to know how all of this works inside MySQL, to know the advantages and disadvantages of the different storage engines, and to understand the Query Cache and EXPLAIN.

There are several ways to optimize MySQL without even touching the code: roughly half of server tuning can be done semi-automatically with the tuning-primer, mysqltuner and mysqlsla utilities.

  • Transition to 5.1 provides many advantages, among which it is worth highlighting optimizer improvements, partitioning, the InnoDB plugin and row-based replication. To speed up their sites, some thrill-seekers are already testing version 5.4.
  • Switching to InnoDB provides many benefits. It is ACID-compliant, so every operation is executed within a transaction. It uses row-level locking, which lets many threads read and write simultaneously, isolated from each other.
  • The built-in MySQL cache, the Query Cache, is quite difficult to understand, so many users apply it irrationally or simply turn it off. For it, more does not mean better, so there is no point in maximizing this subsystem. The Query Cache does not parallelize well, so on a machine with more than eight cores it may only slow everything down rather than reduce page load times. The cache entries related to a table are invalidated whenever that table is modified, which means the Query Cache gives a positive effect only with well-designed tables.
  • Indexes can be harmful both for SELECT (if they are missing) and for INSERT/UPDATE (if there are superfluous ones). An index that is no longer used still takes up memory and slows down data modification. A simple SQL query helps deal with such indexes (see the sketch below).
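As a minimal illustration (the table and index names are made up), reviewing the indexes on a table and dropping one that is no longer used looks like this:

    SHOW INDEX FROM orders;                      -- list the indexes and their cardinality
    ALTER TABLE orders DROP INDEX idx_old_status; -- remove the one nothing queries anymore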

PostgreSQL

Postgres is quite a versatile system: it belongs to the Enterprise class (Skype runs on it just fine), yet it can even be installed on a mobile phone. Of its roughly 200 available parameters, about 45 are the basic ones responsible for tuning.

You can find a lot of useful information on tuning Postgres on the Internet, but some articles are already outdated, so check the publication date and be careful with texts that use the vacuum_mem parameter, which newer versions have replaced with maintenance_work_mem. Advanced programmers will find many high-quality treatises; below we list only the basics that will help the average user improve their project (a short postgresql.conf sketch follows).
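Illustrative starting points for postgresql.conf on a dedicated database host (the values are rules of thumb, not recommendations for any specific server):

    shared_buffers = 1GB           # roughly 25% of RAM is a common rule of thumb
    effective_cache_size = 3GB     # what the OS page cache is expected to hold
    work_mem = 16MB                # per-sort / per-hash memory
    maintenance_work_mem = 256MB   # replaces the old vacuum_mem
    checkpoint_segments = 16       # fewer, larger checkpoints (pre-9.5 setting)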

  • Indexes are where PostgreSQL always comes first, whereas in MySQL they occupy the last positions; this is explained by the enormous capabilities of PostgreSQL indexes. The programmer must understand them well and know when to use which: GiST, GIN, hash or B-tree, as well as partial, multicolumn and expression indexes.
  • pgBouncer or one of its alternatives should be installed on the database server first of all. Without a connection pooler, each connection spawns a separate process that consumes RAM. That seems harmless, but with more than 200 connections even a very powerful server struggles to keep up. pgBouncer solves this problem (a pgbouncer.ini sketch follows this list).
  • pgFouine is an indispensable program; it can safely be called the PHP-based analogue of mysqlsla. In tandem with Playr, it allows query optimization to be carried out in difficult conditions on staging servers.
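A minimal pgbouncer.ini sketch (database name, paths and limits are placeholders):

    [databases]
    mydb = host=127.0.0.1 port=5432 dbname=mydb

    [pgbouncer]
    listen_addr = 127.0.0.1
    listen_port = 6432
    auth_type = md5
    auth_file = /usr/local/etc/pgbouncer/userlist.txt
    pool_mode = transaction       ; a connection returns to the pool after each transaction
    max_client_conn = 1000
    default_pool_size = 20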

Unloading the database

To optimize the operation of the database and increase its performance, you should use it as little as possible.

  • SphinxQL lets the Sphinx search daemon be used as if it were a MySQL server. To do this, you only need to create sphinx.conf (a sketch follows this list), add an indexer entry to cron and point the application at the new “database”; the code hardly needs to change. Switching to Sphinx will improve search speed and quality and let you forget about MyISAM and its full-text search (FTS).
  • Non-RDBMS storage lets you avoid using a relational database where it is not needed; you can choose, for example, Hive or Oracle. Key-value storage, thanks to its speed, is used to cache result sets selected from relational databases. Owners of large PHP projects can use the opcode cacher's user cache to store custom data: even globally significant data can reliably be kept there, because it takes up little space, hardly occupies memory, and can be fetched very quickly. But if, in a large project, a block of global data is written to only one machine, traffic to it grows and it starts to slow down badly. To solve this, store global variables in the opcode cacher or clone them across all servers, adding exceptions to the consistent-hashing algorithm.
  • Encodings are another effective way to unload a database. UTF-8 is an excellent choice, but Russian text in UTF-8 takes up a lot of space, so for a monolingual audience it is worth thinking first about a more rational choice of encoding.
  • Asynchrony will help reduce the response time of an application or website and significantly reduce the load on the server itself. Batched requests are executed much faster than the usual single ones. For huge projects you can use message queues such as RabbitMQ, Apache ActiveMQ or ZeroMQ, and for small ones plain cron is enough.
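A rough sphinx.conf sketch for a MySQL-backed index with the SphinxQL listener enabled (connection details, field list and paths are placeholders):

    source products
    {
        type      = mysql
        sql_host  = localhost
        sql_user  = site
        sql_pass  = secret
        sql_db    = sitedb
        sql_query = SELECT id, name, description FROM products
    }

    index products
    {
        source = products
        path   = /var/data/sphinx/products
    }

    searchd
    {
        listen   = 9306:mysql41   # speak the MySQL protocol, i.e. SphinxQL
        log      = /var/log/sphinx/searchd.log
        pid_file = /var/run/sphinx/searchd.pid
    }

    # cron entry to rebuild the index, for example every 10 minutes:
    #   */10 * * * * /usr/local/bin/indexer --all --rotate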

Additional applications for optimization

  • SSHGuard or one of its alternatives is standard practice for ssh. Anti-brute-force protection helps shield the server from bot attacks.
  • Xtrabackup from Percona is an excellent MySQL backup tool with a lot of settings. Still, the ideal solution is ZFS clones: they are created very quickly, and to restore the database it is enough to change the file paths in the MySQL configuration. Clones let you restore the system from scratch.
  • Moving mail to another host will save traffic and IOPS if your server is simply bombarded with spam.
  • Integration with third-party software helps offload the MySQL server. For example, you can use an smtp/imap connection for message exchange instead of storing messages in the database, which takes little memory; to build a chat it is enough to take a Jabber server as the basis and add a JavaScript client. Such systems, built as adapters to off-the-shelf products, scale well.
  • Monitoring is a very important component, because nothing can be optimized without detailed analysis. You need to watch performance metrics, free resources and delays; Zabbix, Cacti, Nagios and other tools will help with this. A web performance test measures the loading speed of a site or project, so it is very helpful for monitoring. When tuning a server for performance, remember that only a thorough analysis will reveal all the problems and allow proper optimization.

If you didn’t understand half of what was written, it doesn’t matter.