Gilev test 8.3 results. Standard load test. What do the test results mean?

A mandatory step in any implementation of a new information system, or change to an existing one, is assessing the required system performance and planning the computing resources needed to deliver it. There is currently no exact general solution to this problem, and even if, despite its complexity and cost, such an algorithm were offered by some vendor, even small changes in hardware, software versions, system configuration, or the number and typical behavior of users would introduce significant errors.

However, there are a number of ways to estimate the software and hardware configuration required to achieve the needed performance. All of these methods can be used during selection, but the consumer must understand their applicability and limitations.

Most existing performance evaluation methods rely on some type of testing.

There are two main types of testing: component and integral.

Component testing covers individual components of a solution, from the performance of processors or storage subsystems to the performance of the server as a whole, but without the payload of a particular business application.

The integral approach assesses the performance of the solution as a whole, both its software and hardware parts. Either the business application that will be used in the final solution or a model application that emulates typical business processes and loads can be used for this purpose.

The green zone of the graph, together with the conventionally chosen reference indicators on the right, allows a generalized, cross-platform assessment of what counts as “good” performance.

How to interpret your test results

You received a certain performance (speed) index as a result. Whether the result is good or bad, it is the result of the PLATFORM running on your hardware. In the client-server version, it reflects a complex chain of requests passing through various subsystems. You get the total actual result, which is determined by the bottleneck in the system; there is always a bottleneck.

In other words, DBMS settings, OS settings, and hardware all contribute to the overall result.

Which server is better

This test, performed on a specific server, gives a result that reflects the combined effect of the hardware, operating system settings, database, etc. Nevertheless, a high result on specific server hardware means that, under comparable conditions, the same result will be obtained on identical hardware. The test is a free tool that helps you compare 1C:Enterprise installations under Windows and Linux and across the three different DBMSs supported by the 1C:Enterprise 8 platform.

Test safety

The test is completely safe. It does not cause a server “crash” (there is no “stress” algorithm) and does not require any preparatory steps, even on a production server. No confidential data is recorded in the test results. Information about CPU, RAM and HDD parameters is collected; device serial numbers are not. You can easily verify all of this - the test code is 100% open. No information can be sent without your knowledge.

Classification: TPC-A-local Throughput / TPC-1C-GILV-A

The test belongs to the class of universal, integral, cross-platform tests. It is applicable to both the file and client-server deployment options of 1C:Enterprise and works with all DBMSs supported by 1C.

This universality allows you to make a general performance assessment without being tied to a specific standard configuration of the platform.

On the other hand, this means that for a custom project the test only provides a preliminary assessment; accurate sizing still requires specialized load testing.

Download test

The test is non-commercial and can be downloaded free of charge for both platform 8.2 and platform 8.3.

Technical details

What happens in the test during “one” operation cycle?

Features of using the test on a PostgreSQL database

In the postgresql.conf configuration file, set the standard_conforming_strings parameter to 'off' (add or change the line standard_conforming_strings = off and reload PostgreSQL for it to take effect).

How to measure hardware load

It should be noted that the test itself already performs part of this measurement. For a more detailed picture, I recommend Mark Russinovich’s Process Explorer utility.

The figure shows an example of measurement for the file version.

Accounting and management accounting products from 1C are the most widespread in the Russian Federation. Thousands of companies run their business on standard and specialized 1C configurations. With such massive use, questions regularly arise about optimizing the software budget and using resources wisely. Debate continues around the server side of such deployments, in particular which operating system to base the 1C server on and which DBMS to entrust with processing the 1C databases. In our tests we will try to answer these questions.

Test participants

MS Windows Server operating system and MS SQL Server DBMS

  • The 1C company openly positions this combination as the primary working model; accordingly, 1C products are built primarily for it
  • The SharedMemory protocol is available for direct high-speed data exchange
  • Official technical support and service contracts are available
  • There is a knowledge base and a wealth of information on installing and fine-tuning 1C + MS SQL

Unix operating system and PostgreSQL DBMS

  • The system is completely free (apart from the license for the 1C:Enterprise server)
  • Many parameters that improve DBMS performance can be flexibly configured
  • 1C has announced support for the PostgreSQL DBMS in its products
  • Database replication is available

Of course, the cost of the project, fault tolerance and technical support are important criteria when choosing an information system for 1C. However, there is a factor that in most cases radically influences decision making - speed.

Since there is a great deal of technical literature on these two systems available online, one could argue at length over comparative tables that, depending on the goals, highlight the benefits of one product or the other, or debate how unique any given parameter is among hundreds of similar ones and how it affects the end result. But theory without practice is dead, so in this article we skip the theory and go straight to the facts, testing in practice the performance of both systems with a reasonable level of recommended settings and in various server architecture options (see Table 2).

Test methods

In our tests we rely on two methods of generating synthetic load and simulating user work in 1C: the Gilev test (TPC-1C) and a specialized test built with the “Test Center” tool from the 1C:KIP instrumentation toolkit, using custom user scenarios.

Gilev test (TPC-1C)

The Gilev test belongs to the class of universal cross-platform load tests. It can be used for both the file and client-server architectures of 1C:Enterprise. The test measures the amount of work done per unit of time in a single thread and is suitable for assessing the speed of single-threaded workloads: interface rendering, the overhead of the virtual environment, re-posting documents, month-end closing procedures, payroll calculation, and so on. Its universality allows a summary performance assessment without being tied to one platform configuration. The test result is an overall score for the measured 1C system, expressed in conventional units.
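For intuition, here is a minimal sketch of the idea behind such a single-threaded throughput index. This is not Gilev's actual algorithm or scoring formula; the reference operation and the scaling constant are illustrative assumptions.

```python
import time

def run_reference_operation() -> int:
    # Placeholder workload standing in for one "create and post a document" step.
    total = 0
    for i in range(100_000):
        total += i * i
    return total

def throughput_index(duration_s: float = 5.0, scale: float = 1.0) -> float:
    """Count how many reference operations fit into a time window and
    convert the rate into an index in conventional units."""
    done = 0
    start = time.perf_counter()
    while time.perf_counter() - start < duration_s:
        run_reference_operation()
        done += 1
    elapsed = time.perf_counter() - start
    return scale * done / elapsed  # operations per second, scaled

if __name__ == "__main__":
    print(f"Index: {throughput_index():.2f} conventional units")
```

A higher index simply means more work completed per unit of time on the whole stack (platform, OS, DBMS, hardware), which matches how the result above is interpreted.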

Specialized test using “Test Center” from the 1C:KIP instrumentation toolkit

Test Center is a tool for running multi-user load tests of systems based on 1C:Enterprise 8 (see Figure 1). With it you can simulate the work of a company without involving real users, which makes it possible to evaluate the applicability, performance and scalability of an information system under realistic conditions. The tool is itself a configuration that provides a mechanism for controlling the testing process. To test an infobase, the Test Center configuration must be integrated into the configuration of the tested base by comparing and merging the configurations. As a result of the merge, the objects and common modules required for Test Center to operate are added to the metadata of the tested database.

Figure 1. How the 1C:KIP “Test Center” works

Thus, using the 1C:KIP tools and the data available in real production 1C databases, a programmer creates a complete automated test script based on the documents and catalogs that are key for the given configuration (request for spending funds, order to a supplier, sale of goods and services, etc.). When the script is run, Test Center automatically replays the multi-user activity described in it: it creates the required number of virtual users (according to the list of roles) and starts performing their actions.
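As a rough illustration of the replay idea, here is a small sketch of virtual users performing scripted actions in parallel. The roles, document names and timings are invented for the example; a real Test Center scenario is defined inside the 1C configuration itself, not in Python.

```python
import random
import threading
import time

# Illustrative scenario: each role posts a few document types in order.
SCENARIO = {
    "Role 1": ["Buyer's invoice", "Receipt of goods", "Sales of goods"],
    "Role 2": ["Money orders", "Buyer returns"],
}

def post_document(role: str, doc_type: str) -> float:
    """Stand-in for creating and posting one test document; returns seconds."""
    start = time.perf_counter()
    time.sleep(random.uniform(0.01, 0.05))  # placeholder for the real work
    return time.perf_counter() - start

def virtual_user(role: str, actions: list[str], results: list[tuple]) -> None:
    for doc_type in actions:
        results.append((role, doc_type, post_document(role, doc_type)))

results: list[tuple] = []
threads = [threading.Thread(target=virtual_user, args=(role, actions, results))
           for role, actions in SCENARIO.items()]
for t in threads:
    t.start()
for t in threads:
    t.join()
for role, doc, seconds in results:
    print(f"{role}: {doc} took {seconds:.3f} s")
```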

Test parameters

When setting up testing scenarios to reliably simulate the simultaneous work of a large number of users, the following parameters are set for each document type (see Table 1):

  • Document – the specific document in the working database on which load testing will be based
  • Launch priority – determines the order in which tests are launched for each document type
  • Number of documents – the volume of test documents generated
  • Pause, seconds – the delay before starting a series of tests within one document type
  • Number of lines in the document – an informational indicator of how “heavy” the test document is, which affects processing time and resource load

Tests are performed in 3 iterations and the results are recorded in a table. The resulting measurements, expressed in seconds, therefore reflect the performance of the 1C databases realistically and objectively, under conditions as close as possible to real ones (see Tables 3.1 and 3.2).

Table 1. Test scenario parameters

Role | Document | Launch priority | Number of documents | Pause, seconds | Number of lines in document
Role 1 | Buyer's invoice | 1 | 25 | 51 | 62
Role 1 | Receipt of goods | 2 | 25 | – | 80
Role 1 | Sales of goods | 3 | 25 | – | 103
Role 1 | Money orders | 4 | 25 | – | 1
Role 1 | Buyer returns | 5 | 25 | – | 82
Role 2 | Buyer's invoice | 5 | 10 | 65 | 79
Role 2 | Receipt of goods | 1 | 22 | – | 80
Role 2 | Sales of goods | 2 | 25 | – | 103
Role 2 | Money orders | 3 | 25 | – | 1
Role 2 | Buyer returns | 4 | 25 | – | 75
Role 3 | Buyer's invoice | 4 | 15 | 45 | 76
Role 3 | Receipt of goods | 5 | 26 | – | 80
Role 3 | Sales of goods | 1 | 52 | – | 103
Role 3 | Money orders | 2 | 26 | – | 1
Role 3 | Buyer returns | 3 | 32 | – | 90
Role 4 | Buyer's invoice | 3 | 45 | 38 | 70
Role 4 | Receipt of goods | 4 | 30 | – | 80
Role 4 | Sales of goods | 5 | 30 | – | 103
Role 4 | Money orders | 1 | 20 | – | 1
Role 4 | Buyer returns | 2 | 20 | – | 86
Role 5 | Buyer's invoice | 2 | 30 | 73 | 76
Role 5 | Receipt of goods | 3 | 30 | – | 80
Role 5 | Sales of goods | 4 | 30 | – | 103
Role 5 | Money orders | 5 | 18 | – | 1
Role 5 | Buyer returns | 1 | 18 | – | 91
Role 6 | Buyer's invoice | 1 | 40 | 35 | 86
Role 6 | Receipt of goods | 2 | 40 | – | 80
Role 6 | Sales of goods | 3 | 40 | – | 103
Role 6 | Money orders | 4 | 40 | – | 1
Role 6 | Buyer returns | 5 | 40 | – | 88
Role 7 | Buyer's invoice | 5 | 25 | 68 | 80
Role 7 | Receipt of goods | 1 | 25 | – | 80
Role 7 | Sales of goods | 2 | 25 | – | 103
Role 7 | Money orders | 3 | 25 | – | 1
Role 7 | Buyer returns | 4 | 25 | – | 90
Role 8 | Buyer's invoice | 3 | 25 | 62 | 87
Role 8 | Receipt of goods | 4 | 25 | – | 80
Role 8 | Sales of goods | 5 | 25 | – | 103
Role 8 | Money orders | 1 | 25 | – | 1
Role 8 | Buyer returns | 2 | 25 | – | 92
Role 9 | Buyer's invoice | 2 | 20 | 82 | 82
Role 9 | Receipt of goods | 4 | 20 | – | 80
Role 9 | Sales of goods | 5 | 20 | – | 103
Role 9 | Money orders | 1 | 20 | – | 1
Role 9 | Buyer returns | 3 | 20 | – | 98
Role 10 | Buyer's invoice | 4 | 50 | 2 | 92
Role 10 | Receipt of goods | 1 | 50 | – | 80
Role 10 | Sales of goods | 2 | 50 | – | 103
Role 10 | Money orders | 5 | 50 | – | 1
Role 10 | Buyer returns | 3 | 50 | – | 98

Table 2. Test bench specifications

No. | Role of the system | CPU / vCPU | RAM | Disk subsystem
1 | Terminal server / virtual machine for test management | 4 cores, 2.9 GHz | 16 GB | Intel SATA SSD, RAID1
2 | Scenario 1. 1C server + DBMS, hardware | Intel Xeon E5-2690, 16 cores | 96 GB | Intel SATA SSD, RAID1
3 | Scenario 2. 1C server + DBMS, virtual | 16 cores, 2.9 GHz | 64 GB | Intel SATA SSD, RAID1
4 | Scenario 3. 1C server, virtual | 16 cores, 2.9 GHz | 32 GB | Intel SATA SSD, RAID1
5 | Scenario 4. DBMS server, virtual | 16 cores, 2.9 GHz | 32 GB | Intel SATA SSD, RAID1

Software:
  • Microsoft Windows Server 2016 Data Center
  • Microsoft Windows Server 2016 Standard
  • Microsoft SQL Server 2016 SP1 (13.0.4001.0)
  • Hyper-V hypervisor
  • 1C:Enterprise server 8.3.10.2667
  • CentOS 7.4.1708 (x64)
  • PostgreSQL 9.6.5 + patch PostgreSQL 9.6.5-4.1C

1C configurations:
  • Single-threaded synthetic test of the 1C:Enterprise platform + multi-threaded disk write test (2.1.0.7), Vyacheslav Valerievich Gilev; size 0.072 GB
  • Enterprise Accounting KORP, edition 3.0 (3.0.52.39); application: thin client; interface option: Taxi; size 9.2 GB
  • Trade Management, edition 11 (11.3.4.21); platform: 1C:Enterprise 8.3 (8.3.10.2667); mode: server (compression: enhanced); application: thin client; localization: infobase Russian (Russia), session Russian (Russia); interface option: Taxi; size 11.8 GB

Table 3.1. Test results using the Gilev test (TPC-1C). The highest value is considered optimal

Table 3.2. Test results using the specialized 1C:KIP test. The lowest value is considered optimal

Operating system: Microsoft Windows Server (first four result columns), Unix-class operating system (last two result columns).

List of tests (average value over a series of 3 runs) | Hardware server 1C+DBMS, SharedMemory protocol | Virtual server 1C+DBMS, SharedMemory protocol | 1C hardware server and DBMS hardware server, TCP-IP protocol | Virtual server 1C and virtual server DBMS, TCP-IP protocol | Unix-class OS | Unix-class OS

Conducting 1C:KIP tests on an existing database, Enterprise Accounting configuration
Turnover balance sheet | 1.741 sec | 2.473 sec | 2.873 sec | 2.522 sec | 13.866 sec | 9.751 sec
Carrying out the return of goods from customers | 0.695 sec | 0.775 sec | 0.756 sec | 0.781 sec | 0.499 sec | 0.719 sec
Carrying out payment orders | 0.048 sec | 0.058 sec | 0.063 sec | 0.064 sec | 0.037 sec | 0.065 sec
Conducting technical training | 0.454 sec | 0.548 sec | 0.535 sec | 0.556 sec | 0.362 sec | 0.568 sec
Sales of goods and services | 0.667 sec | 0.759 sec | 0.747 sec | 0.879 sec | 0.544 sec | 0.802 sec
Posting an invoice for payment | 0.028 sec | 0.037 sec | 0.037 sec | 0.038 sec | 0.026 sec | 0.038 sec
Calculation of cost estimates | 3.071 sec | 3.657 sec | 4.094 sec | 3.768 sec | 15.175 sec | 10.68 sec

Conducting 1C:KIP tests on an existing database, Trade Management configuration
Carrying out and returning from the client | 2.192 sec | 2.113 sec | 2.070 sec | 2.418 sec | 1.417 sec | 1.494 sec
Carrying out and returning goods to the supplier | 1.446 sec | 1.410 sec | 1.359 sec | 1.467 sec | 0.790 sec | 0.849 sec
Posting a customer order | 0.355 sec | 0.344 sec | 0.335 sec | 0.361 sec | 0.297 sec | 0.299 sec
Conducting a recount of goods | 0.140 sec | 0.134 sec | 0.131 sec | 0.144 sec | 0.100 sec | 0.097 sec
Conducting admission to technical specifications | 1.499 sec | 1.438 sec | 1.412 sec | 1.524 sec | 1.097 sec | 1.189 sec
Implementation of specifications | 1.390 sec | 1.355 sec | 1.308 sec | 1.426 sec | 1.093 sec | 1.114 sec
Carrying out RKO | 0.759 sec | 0.729 sec | 0.713 sec | 0.759 sec | 0.748 sec | 0.735 sec
  1. In the specialized 1C test, “data reading and complex calculation” operations, such as “Turnover balance sheet” and “Calculation of cost estimates”, run several times faster on Microsoft’s MS SQL DBMS.
  2. For “data recording and document posting” operations, the PostgreSQL DBMS optimized for 1C shows the best result in most tests.
  3. Gilev’s synthetic test also shows an advantage for PostgreSQL. This is because the synthetic test is based on measuring the speed of creating and posting certain types of documents, which likewise falls under “data recording and document posting” operations.

Having finished with the cross-platform comparison, let's move on to comparisons within each system:

  1. As expected, 1C tests on a hardware platform show better results than on a virtual one. The difference in the results of the specialized 1C test is small in both cases, which points to gradual optimization by hypervisor vendors.
  2. Also as expected, using shared memory (SharedMemory) speeds up data exchange between the 1C server and the DBMS, so those results are slightly better than the scheme where the two services communicate over the network via the TCP-IP protocol.

We can conclude that with correct configuration of 1C and the DBMS, significant results can be achieved even on free software. When designing a new IT infrastructure for 1C, therefore, you need to take into account the expected load on the system, the type of operations that predominate in the database, the available budget, the availability of a specialist in a less common DBMS, the need for integration with external services, and so on. Based on this, the required solution can then be selected.

Read the continuation of testing.

Task: a server for the 1C server and MS SQL 2008 DBMS roles, sized for 50 users.

Following the server specialist's recommendations, we assemble the hardware:

Platform: IBM x3650 M3
Processor: Intel Xeon E5506, 1 pc.
RAM: 4 modules of 4 GB each
Disks: 3 × 146 GB SAS in RAID 5

Software used:

OS MS Windows 2008 x64
DBMS MS SQL 2008 x64
Server 1C 8.2 x64

Test environment: load testing was performed using the 1C 8.2 configuration “Standard load test”.

Test progress:

On the local server, 1C client sessions were launched in agent mode and in testing mode.
In the test configuration, the initial number of emulated standard 1C users creating and deleting documents and reports was set to 20, with the number of users increased by 20 after each test run.

Initially (without user connections), the DBMS occupied 569 MB of RAM (two databases were created: the 1C 8.2 UPP configuration and the test configuration); the memory occupied by the system was 2.56 GB.
During testing (up to 110 users), up to 12 GB of memory was allocated to the DBMS; one 1C test session occupies 55 MB (55 MB × 200 ≈ 11 GB). For comparison, one real user session (a 1C client application) takes about 300-500 MB; this figure is for a user working in the standard 1C:Trade or 1C:UPP configuration. The 1C server service (rphost) uses practically no RAM, since it only relays requests from the client side to the DBMS (by default, TCP port 1541 is used, and TCP 475 for the 1C protection server).
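A rough memory-sizing sketch based on the figures quoted above; the per-session sizes are the averages observed here and will vary between deployments.

```python
def dbms_memory_gb(sessions: int, mb_per_test_session: float = 55.0) -> float:
    # Each emulated test session was observed to add about 55 MB on the DBMS side.
    return sessions * mb_per_test_session / 1024

def client_memory_gb(users: int, mb_per_real_client: float = 400.0) -> float:
    # Real 1C client sessions were observed at roughly 300-500 MB each.
    return users * mb_per_real_client / 1024

print(f"DBMS memory for 200 test sessions: {dbms_memory_gb(200):.1f} GB")   # ~10.7 GB
print(f"Client memory for 50 real users:   {client_memory_gb(50):.1f} GB")  # ~19.5 GB
```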

CPU resource usage was shared between the 1C server service (rphost) and the DBMS service (sqlservr). With a load of 40 users, rphost took 37% of the CPU power, sqlservr took 30%. With a load of 60 users, rphost occupied 47% of the CPU power, sqlservr occupied 29%.

While the created documents were being deleted, the sqlservr service wrote to the disk subsystem at up to 6.5 MB/s (about 52 Mbit/s).

The network load between the 1C server and the DBMS (on the local loopback interface) was 10 Mbit/s.
Test result produced by the 1C test configuration:

Parameters: Run test 000000006 from 05/24/2012 12:44:16
Standard load test, version 2.0.4.11
Start of testing 05/23/2012 12:36:39. Running time: 57.1 minutes.
Test conditions
"Server 1C: Enterprise: test
Infobase name: testcenter_82
Virtual users: TEST,"

Conclusions:

The server configuration can be scaled down, since the current one is 100% redundant for 50 users.
Testing should be repeated using a second server to run the emulated users and to check the network load; the expected load is 10 Mbit/s.
The 1C architecture consists of 4 blocks: the 1C server, the DBMS, the 1C protection server and the 1C client. In this test, all of these functions ran on a single server.

When there is a heavy load on the 1C server, there are the following recommendations:

Separate the roles of the 1C server, the DBMS server, the 1C protection server and the 1C client applications (for better performance, it is preferable to run the 1C client applications on a terminal server).
On the DBMS server, use the following storage layout: the OS on RAID 1, DBMS data files (.mdf, .ndf) on a separate RAID 0, log files (.ldf) on another separate RAID 0, and temporary files and the swap file on a separate disk.

Computers participating in the tests (conventional names), with descriptions (disks are listed only for the database):

(note: there is a 1 Gbit network between the servers)

1) IT33 - desktop with a 4-core Core i5 at 2.8 GHz, 3 GB DDR3, one 7200 rpm HDD.

2) REAL - THE MOST POWERFUL, or so I thought)) 8 Xeon cores at 3 GHz, 48 GB DDR2, RAID10 on SSDs

3) REAL2 - 8 Xeon cores at 2 GHz, 22 GB DDR2, RAID10 on 10,000 rpm SAS hard drives

Tests were carried out in Gilev's 1C configuration:

"SQL Server" ---> "1C Server" ---> "Evaluation" + "Name of client computer (if not specified, then it is the same one in the list)"

>1) REAL2--->REAL2--->25.64 (TCP--SQL)
>2) REAL2--->REAL2--->26.32 (SQL--Shared Memory)

>3) REAL2--->REAL2--->25.64 (SQL--Shared Memory) + IT33 (client) - the network from the client to the servers is 10 Mbit

>4) REAL2--->REAL2--->24.27 (SQL--Shared Memory) + REAL (client) - hmm.. strange, a 1 Gbit network... why are there fewer parrots (score units)..
>5) REAL2--->REAL2--->37.59 (File)

** **** **************************
>1) REAL--->REAL--->8.73 (TCP--SQL)

>2) REAL--->REAL2--->11.99 (TCP--SQL) --- this is already starting to make me think))

>3) REAL--->REAL--->17.48 (File)

** **** ******************************

>1) IT33--->IT33--->26.88 (TCP--SQL)
>2) IT33--->IT33--->34.72 (SQL--Shared Memory)
>3) IT33--->IT33--->59.52 (File)

Results:

I looked at the test results... turned them over this way and that)) and then it dawned on me (I measured the RAM speed):

what determines the speed of 1C 8.x (note that these results were obtained in SINGLE-USER mode, but for the client-server option with multi-user work I think these factors will also have a considerable influence) -

So 1C speed is affected by: CPU bus frequency + RAM frequency

----> which determine the WRITE and READ speeds of RAM, and these underlie the performance of 1C 8.x.

The computers that took the podium places for 1C operating speed))

1) IT33--->IT33--->59.52 (File)

DDR3 RAM (read 11089 MB/s, write 7047 MB/s) ------ as I expected, the difference from the servers is significant

2) REAL2--->REAL2--->37.59 (File)
- DDR2 RAM (read = 3474 MB/s, write = 2068 MB/s)

3) REAL--->REAL--->17.48 (File)
- DDR2 RAM (read = 1737 MB/s, write = 1042 MB/s) - as it turned out, the speed is exactly 2 times lower than on REAL2,

probably due to the enabled virtual cores (Hyper-Threading); we will most likely disable it.

CONCLUSIONS:

The highest operating speed of 1C 8.x is achieved:

I) for the file option (which I personally am not interested in)

A) by running the client (of any kind) on a computer with fast RAM (for example, a terminal server with the database stored on it).

II) for the client-server option

1) 1C thick clients on a terminal server - a plus

2) 1C thin clients - it makes little difference where they run... but it is advisable to configure access via "http://".
3a) "SQL server" + "1C:Enterprise server" (in Shared Memory mode) on one machine with the highest RAM write/read speed and the highest CPU core clock frequency, plus fast disks

Clarifications:

- Shared Memory support appeared in the platform starting with version 8.2.17 (ATTENTION: check the configuration's compatibility mode with earlier platform versions); on earlier platforms Named Pipes is used, which also shows good results))

- For RAID on SSD drives it is advisable to use RAID10 for fault tolerance, while keeping the write penalty in mind:

for example, RAID10 on 4 disks has a write penalty of 2, so write throughput = 4/2 = 2 disks' worth; there is no read penalty.
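A small helper illustrating the write-penalty arithmetic above. The penalty values are the commonly quoted ones per RAID level; per-disk throughput and the read/write mix are parameters you would substitute for your own hardware.

```python
# Commonly quoted write penalties per RAID level.
WRITE_PENALTY = {"RAID0": 1, "RAID1": 2, "RAID10": 2, "RAID5": 4, "RAID6": 6}

def effective_write_capacity(disks: int, level: str) -> float:
    """How many disks' worth of write throughput the array delivers."""
    return disks / WRITE_PENALTY[level]

def effective_iops(disks: int, per_disk_iops: float, level: str,
                   read_share: float) -> float:
    """Approximate usable IOPS for a given read/write mix (reads carry no penalty)."""
    write_share = 1.0 - read_share
    raw = disks * per_disk_iops
    return raw / (read_share + write_share * WRITE_PENALTY[level])

# Example from the text: RAID10 on 4 disks -> 4 / 2 = 2 disks' worth of writes.
print(effective_write_capacity(4, "RAID10"))              # 2.0
print(effective_iops(4, 5000, "RAID10", read_share=0.7))  # mixed-workload estimate
```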

You can also further increase the reliability and the stability of SSD speed by not using the entire disk capacity.

example (raising the reliability of a desktop SSD to the level of a server SSD):

if you take, for example, a 120 GB Intel 520 series SSD, allocate 81 GB and leave the rest of the space unallocated,

then about 32% of the SSD's capacity is given over to over-provisioning in addition to the hidden 8% it already has; in total we get about 40%.

The difference between the server-grade Intel 710 series SSD and the desktop Intel 320 series lies precisely in over-provisioning: over 40% for the Intel 710 versus 8% for the Intel 320.
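The over-provisioning arithmetic above, as a worked example. The hidden 8% factory reserve is taken from the text; exact figures differ between SSD models.

```python
def overprovisioning_pct(total_gb: float, allocated_gb: float,
                         hidden_pct: float = 8.0) -> float:
    # Share of capacity deliberately left unallocated, plus the factory reserve.
    user_reserved_pct = (total_gb - allocated_gb) / total_gb * 100
    return user_reserved_pct + hidden_pct

# 120 GB Intel 520, 81 GB allocated -> about 32% + 8% hidden = roughly 40%.
print(f"{overprovisioning_pct(120, 81):.1f}% total over-provisioning")  # 40.5%
```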

If there are many 1C clients, from 100 upward:

1) With current Ethernet network technologies it is NOT advisable to separate the "SQL server" and the "1C server".

For example, because of latency on a Gigabit Ethernet network, the real exchange rate with SQL is about 30 MB/s, which is not enough even for one user working intensively with the database.

2) In essence, the "1C server" is an "object DBMS" (multidimensional objects), while "SQL" is a "relational DBMS" (flat, tabular data storage)

=> the SQL database stores a FLAT projection of 1C objects; the 1C server assembles an object from this projection, works with it, and on completion lays it out flat again and stores it in SQL.

As a result, you have to forgo splitting "SQL" and the "1C server" across two physical servers; instead, you can take full advantage of NUMA nodes (this must be supported by the OS and by the processors themselves).
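A toy illustration of the "flat projection" idea: the object document lives as rows in relational tables, and the application server assembles and flattens it. The table and field names here are invented for the example and have nothing to do with 1C's real storage format.

```python
# Flat relational projection of one "document" object: a header row plus line rows.
header_rows = [{"doc_id": 1, "customer": "ACME", "date": "2012-05-24"}]
line_rows = [
    {"doc_id": 1, "item": "Bolt", "qty": 100},
    {"doc_id": 1, "item": "Nut", "qty": 100},
]

def assemble(doc_id: int) -> dict:
    """What the application server conceptually does: gather flat rows into one object."""
    header = next(h for h in header_rows if h["doc_id"] == doc_id)
    lines = [row for row in line_rows if row["doc_id"] == doc_id]
    return {**header, "lines": lines}

def flatten(doc: dict) -> tuple[list[dict], list[dict]]:
    """And the reverse: lay the object out flat again before writing it back."""
    header = {k: v for k, v in doc.items() if k != "lines"}
    return [header], [dict(line, doc_id=doc["doc_id"]) for line in doc["lines"]]

document = assemble(1)    # object form, as the application server works with it
print(document)
print(flatten(document))  # flat form, as it would be stored in SQL
```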


3b) Splitting the SQL server and the 1C server onto separate machines: with current Ethernet technologies (for example, Gigabit) this is NOT practical
- SQL goes to the server with the highest RAM write/read speed and the highest CPU core clock frequency
- several PHYSICAL servers in a 1C cluster, each with the highest RAM write/read speed and the highest CPU core clock frequency; it is also advisable to use RAID on SSD disks

Results of the TPC-1C load test of 1C performance according to Gilev for a configuration with a file database:

Server performance is assessed not by workload or CPU queues, but by the ability to perform a certain number of operations per unit of time.
Contention for resources such as the processor reduces the speed of operations; response time is made up of:

  • operation time
  • equipment waiting time
  • time of logical waits like locks

The key characteristic is the speed of the operation.
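A small numeric sketch of how those wait components add up into response time, and hence into single-threaded operations per unit of time; the millisecond figures are made up purely for illustration.

```python
operation_ms = 20.0   # pure execution time of the operation
hw_wait_ms = 5.0      # waiting for hardware (disk, CPU queue, ...)
lock_wait_ms = 15.0   # logical waits such as locks

response_ms = operation_ms + hw_wait_ms + lock_wait_ms
ops_per_second = 1000.0 / response_ms
print(f"Response time {response_ms:.0f} ms -> {ops_per_second:.1f} operations per second")
```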

Note. For the processor, the most significant characteristic is its clock frequency, not its load. Below is a screenshot of the test results.

Assessing system performance and planning the computing resources needed for it is a mandatory step in any implementation of a new IT system or change to an existing one.

Our test uses the integral approach described above.

As a result, we obtained a certain performance (speed) index. This is the result of the platform as a whole running on our hardware. In the client-server version it reflects a complex chain of requests passing through various subsystems; the total actual result is determined by the bottleneck in the system. DBMS settings, OS settings and hardware all affect the overall performance of the system.

The test evaluates the amount of work done per unit of time in a single thread and is suitable for assessing the speed of single-threaded loads, including the speed of interface rendering, the overhead of maintaining a virtual environment (if any), re-posting documents, month-end closing, payroll calculation, etc.