Installing and configuring PostgreSQL for 1C on Windows. Connecting an external data source

This article does not claim to be a complete treatment of all PostgreSQL configuration options, and the comparative testing does not cover every database operating mode. Those interested are advised to study the book linked below.

Introduction

I've worked a lot with PostgreSQL and consider it an excellent DBMS. I have a multi-gigabyte production database (not 1C) that instantly processes huge amounts of data. PostgreSQL makes excellent use of indexes, copes well with parallel workloads, offers excellent stored-procedure functionality and good administration and performance tools out of the box, and the community has created useful utilities. But I was surprised to learn that many 1C administrators have a poor opinion of PostgreSQL: that it is slow, barely outperforms the file version of the database, and only MSSQL can save the situation.

After researching the question, I found many step-by-step "PostgreSQL for dummies" installation articles, both for Linux and for Windows. The vast majority describe the installation only up to "installed - let's create a database" and do not touch configuration at all. The rest mention configuration only at the level of "set these values", with virtually no explanation of why.

And while the "one-button installation" approach works for MSSQL and many Windows products in general, it unfortunately does not apply to PostgreSQL. The default settings limit its memory use so severely that it can be installed even on a calculator without interfering with the rest of the software. PostgreSQL must be configured for the specific system, and only then can it show its best. In especially severe cases you can tune PostgreSQL, the database, and the file system to match each other, but that applies more to Linux systems, where there are more opportunities to customize everything.

It should be recalled that the stock PostgreSQL build from the DBMS developers is not suitable for 1C; only builds compiled from sources patched by 1C will do. Ready-made compatible builds are offered by 1C (via ITS disks and an account for those with a support subscription) and by EterSoft.

Testing was carried out in a Windows environment, but all the configuration recommendations are platform-independent and apply to any OS.

Testing and comparison

When testing, I did not set out to run tests in all operating modes and scenarios; this was only a rough check that the configuration was successful.

For testing I used the following configuration:
Host machine: Win7, Core i5-760 2.8GHz, 4 cores, 12GB RAM, VMWare 10
Virtual machine: Win7 x64, 2 cores, 4GB RAM, a separate physical HDD for the database (not an SSD)
MSSQL Express 2014
PostgreSQL EterSoft 9.2.1
1C 8.3.5.1383

The test database was a .dt upload of 780MB.
After restoring the database:
the 1CD file size in the file version is 10GB,
the PostgreSQL database size is 8GB,
the MSSQL database size is 6.7GB.

For the test I used a query selecting counterparty contracts (21k) with additional attributes pulled from various registers; in effect, a separate selection from the registers was made for each contract. I took the configuration that was at hand - a heavily modified one based on Accounting 3.0.

During testing, I ran the query with one and two clients several times until stable results were obtained, ignoring the first runs.

Testing with one client:

Querying the file version on the host with the database on an SSD - 31s
Querying the file version in the virtual machine (on the HDD) - 46s
Querying the MSSQL database - 25s on the first pass or 9s thereafter (apparently depending on the freshness of the DBMS cache; the DBMS process consumed roughly 1.3GB of memory)
Querying PostgreSQL with default settings - 43s (memory consumption did not exceed 80MB per connection)
Querying optimized PostgreSQL - 21s (memory consumption was 120MB per connection)

Testing with two clients:

Querying the file version on the host with the database on an SSD - 34s each
Querying the file version in the virtual machine (on the HDD) - 56s each
Querying the MSSQL database - 50s or 20s each (apparently depending on the freshness of the DBMS cache)
Querying PostgreSQL with default settings - 60s each
Querying optimized PostgreSQL - 40s each

Testing notes:

  1. After adding a third core, the PostgreSQL and MSSQL variants ran the "two clients" test at almost the performance of the "one client" test, i.e. they parallelized successfully. What prevented them from parallelizing on two cores remains a mystery to me.
  2. MSSQL grabbed a lot of memory at once; PostgreSQL required significantly less in all modes and released almost all of it immediately after completing the query.
  3. MSSQL runs as a single process. PostgreSQL launches a separate process per connection plus service processes. This lets even the 32-bit variant use memory effectively when serving multiple clients.
  4. Raising PostgreSQL's memory settings above the values shown below did not noticeably increase performance.
  5. The first runs always took longer than subsequent ones; I took no precise measurements, but subjectively MSSQL warmed up faster.

Configuring PostgreSQL

There is an excellent book in Russian about configuring and optimizing PostgreSQL (linked below); every elephant lover should bookmark it. It describes many techniques for optimizing the DBMS and for building fault-tolerant and distributed systems. Here we will look at something useful to everyone: configuring memory usage. PostgreSQL will not use more memory than the settings allow, and with the default settings it uses a bare minimum. At the same time, you should not grant more memory than is actually available, or the system will start using the swap file, with all the dire consequences for server performance. A number of tips for setting up PostgreSQL are also given on the ITS disk.

On Windows, PostgreSQL configuration files are located in the installation directory in the Data directory:

  • postgresql.conf - the main file with DBMS settings
  • pg_hba.conf - client access settings. In particular, here you can specify which users may connect to which databases from which IP addresses, and whether the user's password must be checked, and if so, by what method.
  • pg_ident.conf - mapping of system usernames to internal ones (most users are unlikely to need it)

The files are plain text and can be edited in Notepad. Lines starting with # are comments and are ignored.

Parameters related to memory capacity can be supplemented with suffixes kB, MB, GB - kilobytes, megabytes, gigabytes, for example, 128MB. Parameters describing time intervals can be supplemented with suffixes ms, s, min, h, d - milliseconds, seconds, minutes, hours, days, for example, 5min

If you have forgotten the postgres password, that's no problem: you can add a line to pg_hba.conf:

host all all 127.0.0.1/32 trust

and then connect as any user (for example, postgres) to the DBMS on the local machine at 127.0.0.1 without a password check.
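For example, a session like the following (a minimal sketch; the host and user are the ones from the line above) lets you set a new password once trust is in place:

psql -h 127.0.0.1 -U postgres
ALTER USER postgres WITH PASSWORD 'new_password';
\q

After that, change trust back to md5 in pg_hba.conf and restart the service.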

Optimizing memory usage

Memory usage settings are located in postgresql.conf

Optimal parameter values depend on the amount of free RAM, the size of the database and of individual database objects (tables and indexes), the complexity of queries (in principle, assume the queries will be fairly complex - multiple joins per query is the typical scenario), and the number of simultaneously active clients. Incidentally, PostgreSQL stores tables and indexes in separate files (<PG installation directory>\data\base\<database ID>\), so object sizes can be estimated directly. You can also connect with the bundled pgAdmin utility, expand "Schemas" - "public", and generate a statistics report for the "Tables" element.
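Alternatively, a query like this against the standard system catalog (a sketch, not from the original article) lists the ten largest tables together with their indexes:

SELECT relname, pg_size_pretty(pg_total_relation_size(oid)) AS total_size
FROM pg_class
WHERE relkind = 'r'
ORDER BY pg_total_relation_size(oid) DESC
LIMIT 10;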

Below are approximate values from which to start tuning. After the initial setup, it is recommended to run the server under a production-like load and monitor memory consumption; depending on the results, the parameter values may need adjusting.

When setting up the server for testing, I relied on the following calculation:
there is 4GB of RAM in total. The consumers are the Windows OS, the 1C server, PostgreSQL, and the system disk cache. I assumed that up to 2.5GB of RAM could be allocated to the DBMS.

Values ​​can be specified with the suffixes kB, MB, GB (values ​​in kilobytes, megabytes or gigabytes). After changing the values, you need to restart the PostgreSQL service.

shared_buffers - Shared server buffer

The size of the read and write cache shared by all PostgreSQL connections. If data is not in the cache, it is read from disk (possibly via the OS cache).

If the buffer size is insufficient to store frequently used work data, then it will be constantly written and read from the OS cache or from disk, which will have an extremely negative impact on performance.

But this is not all the memory needed for operation; you should not set too large a value, otherwise no memory will be left for actually executing client queries (and the more of them, the higher the consumption) or for the OS and other applications, such as the 1C server process. The server also relies on the OS cache and tries not to keep in its buffer what is most likely already cached by the system.

In the test I used:

shared_buffers = 512MB

work_mem - memory for sorting, data aggregation, etc.

Allocated per query, possibly several times over for complex queries. If there is not enough memory, PostgreSQL will use temporary files. If the value is too large, RAM may be over-committed and the OS will start swapping, with a corresponding drop in performance.

One recommendation is to take the available memory minus shared_buffers and divide by the number of simultaneously executing queries. For complex queries the divisor should be increased, i.e. the result reduced. For the case at hand, with 5 active users: (2.5GB - 0.5GB (shared_buffers)) / 5 = 400MB. If the DBMS considers the queries complex enough, or if more users appear, the value will have to be reduced.

For simple queries small values suffice - up to a couple of megabytes - but complex queries (the typical scenario for 1C) need more. Recommendation: with 1-4GB of RAM, use values of 32-128MB. In the test I used:

work_mem = 128MB

maintenance_work_mem - memory for garbage collection commands, statistics collection, and index creation.

It is recommended to set the value to 50-75% of the size of the largest table or index, while leaving enough memory for the system and applications to run. Values greater than work_mem are recommended. In the test I used:
maintenance_work_mem = 192MB

temp_buffers - a buffer for temporary objects, mainly temporary tables.

A value of around 16MB is usually enough. In the test I used:
temp_buffers = 32MB

effective_cache_size - the approximate size of the file system's disk cache.

The planner uses this value when building a query plan to estimate the likelihood of finding data in the cache (with fast random access) rather than on a slow disk. In Windows, the current amount of memory allocated to the cache can be seen in Task Manager.
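For the 4GB test VM described above, a plausible setting (an assumed illustration, not a value from the original test) follows the idea of "memory left for the OS cache after shared_buffers and the applications":

# assumed example for the 4GB test VM; check Task Manager for the real cache size
effective_cache_size = 1GB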

Autovacuum - "garbage collection"

PostgreSQL, as a typical representative of "versioned" DBMSs (as opposed to locking ones), does not itself block tables and records against reading transactions when data changes (in the case of 1C, the 1C server does that itself). Instead, a copy of the modified record is created, which becomes visible to subsequent transactions, while existing ones continue to see the data that was current at the start of their transaction. As a result, outdated data accumulates in tables - previous versions of changed records.

For the DBMS to reuse the freed space, "garbage collection" must be performed - determining which records are no longer in use. This can be done explicitly with the SQL command VACUUM, or you can wait for the table to be processed by the automatic garbage collector, AUTOVACUUM. Up to a certain version, garbage collection was also tied to statistics gathering (the planner uses data on record counts and on the distribution of indexed values to build an optimal query plan).

On the one hand, garbage collection has to be done so that tables do not bloat and disk space is used efficiently. On the other hand, a garbage collection that starts suddenly puts extra load on the disk and tables, increasing query execution times. Automatic statistics gathering has a similar effect (it can be started explicitly with ANALYZE, or together with garbage collection as VACUUM ANALYZE). Although PostgreSQL improves these mechanisms from version to version to minimize their impact on performance (for example, in early versions garbage collection completely blocked access to the table, while since version 9.0 VACUUM has been sped up), there is something to configure here.

You can completely disable autovacuum with the following parameter:

autovacuum = off

For autovacuum to work, the track_counts = on parameter is also required; without it autovacuum will not run.

Both options are enabled by default. Incidentally, autovacuum cannot be disabled completely: even with autovacuum = off, it will sometimes start (after a large number of transactions).

Note: VACUUM usually does not shrink the table file; it only marks freed areas as available for reuse. If you want to physically release excess space and minimize disk usage, you need the VACUUM FULL command. This variant blocks access to the table while it runs and is not typically required. More on using VACUUM can be found in the documentation (in English).

If Autovacuum is not completely disabled, you can configure its influence on query execution using the following parameters:

autovacuum_max_workers - the maximum number of cleanup processes running in parallel.

autovacuum_naptime - the minimum interval between autovacuum runs. The default is 1 minute. You can increase it; then, with frequently changing data, analysis will run less often.

autovacuum_vacuum_threshold, autovacuum_analyze_threshold - the number of changed or deleted records in a table needed to trigger garbage collection (VACUUM) or statistics gathering (ANALYZE). The default is 50.

autovacuum_vacuum_scale_factor, autovacuum_analyze_scale_factor - a fraction of the table size in records that is added to autovacuum_vacuum_threshold and autovacuum_analyze_threshold respectively. The defaults are 0.2 (i.e. 20% of the record count) and 0.1 (10%).

Consider a table of 10,000 records. With default settings, after 50 + 10000*0.1 = 1050 changed or deleted records, statistics gathering (ANALYZE) starts, and after 2050 changes, garbage collection (VACUUM) starts.

If you increase threshold and scale_factor, maintenance processes will run less often, but small tables can grow significantly. If the database consists mostly of small tables, the overall increase in disk consumption can be significant, so increase these values wisely.

Thus, it may make sense to increase the autovacuum_naptime interval and slightly raise threshold and scale_factor. On loaded databases, an alternative may be to raise scale_factor significantly (a value of 1 lets tables "bloat" to twice their size) and to schedule a daily VACUUM ANALYZE during periods of minimal database load.
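A sketch of such a "relaxed autovacuum" configuration for postgresql.conf (illustrative values only, not a universal recommendation):

autovacuum_naptime = 5min
autovacuum_vacuum_scale_factor = 0.4
autovacuum_analyze_scale_factor = 0.2

Combined with a nightly VACUUM ANALYZE scheduled externally, this shifts most of the maintenance work to off-peak hours.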

default_statistics_target - sets the amount of statistics gathered by the ANALYZE command. The default is 100. Larger values increase ANALYZE's execution time but let the planner build more efficient query plans. There are recommendations to raise it to 300.

AUTOVACUUM's performance can be throttled so that it runs longer but loads the system less.

vacuum_cost_page_hit - the "penalty" for processing a block that is in shared_buffers. Associated with the need to lock access to the buffer. Default: 1.

vacuum_cost_page_miss - the "penalty" for processing a block on disk. Associated with locking the buffer, looking the data up, and reading it from disk. Default: 10.

vacuum_cost_page_dirty - the "penalty" for modifying a block. Associated with the need to flush the modified data to disk. Default: 20.

vacuum_cost_limit - the maximum total "penalty" after which the collection process may be "frozen" for the duration of vacuum_cost_delay. Default: 200.

vacuum_cost_delay - how long to "freeze" the garbage collection process once vacuum_cost_limit is reached. Default: 0ms.

autovacuum_vacuum_cost_delay - the "freeze" time of the garbage collection process for autovacuum. Default: 20ms. If set to -1, the vacuum_cost_delay value is used.

autovacuum_vacuum_cost_limit - the maximum total "penalty" for autovacuum. Default: -1, meaning the vacuum_cost_limit value is used.

Reportedly, using vacuum_cost_page_hit = 6, vacuum_cost_limit = 100, autovacuum_vacuum_cost_delay = 200ms reduces AUTOVACUUM's impact by up to 80% but triples its execution time.

Configuring disk writes

When a transaction commits, PostgreSQL first writes the data to a special write-ahead transaction log (WAL), and to the database only after the log data is guaranteed to have been written to disk. The default mechanism is fsync: PostgreSQL forcibly flushes the log from the OS disk cache to disk, and only after the log has been written successfully is the client told that the transaction committed. The transaction log makes it possible to complete the transaction or restore the database if a failure occurs while the data is being written.

In busy systems with heavy write volumes, it may make sense to move the transaction log to a separate physical disk (but not to another partition of the same disk!). To do this, stop the DBMS, move the pg_xlog directory elsewhere, and create a symbolic link in the old location, for example with the junction utility. Far Manager can also create links (Alt-F6). Make sure the new location grants access rights to the user PostgreSQL runs as (usually postgres).
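A minimal sketch of the relocation on Windows (the service name, drive letter, and paths are assumptions for illustration; junction is the Sysinternals utility mentioned above):

net stop postgresql-9.2
move "C:\Program Files\PostgreSQL\9.2\data\pg_xlog" "D:\pg_xlog"
junction "C:\Program Files\PostgreSQL\9.2\data\pg_xlog" "D:\pg_xlog"
net start postgresql-9.2

On Vista and later, the built-in mklink /J command can create the junction instead.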

With a large number of data-modifying operations, you may need to increase checkpoint_segments, which controls how much data may wait to be transferred from the log into the database itself. The default is 3. Note that space is reserved for the log according to the formula (checkpoint_segments * 2 + 1) * 16MB, so a value of 32 already requires more than 1GB of disk space.

PostgreSQL flushes data from the OS file cache to disk when each writing transaction completes. On the one hand, this guarantees that the data on disk is always current; on the other hand, with many transactions, performance drops. You can disable fsync completely by specifying:

fsync=off
full_page_writes = off

This should only be done if you trust your hardware and UPS (uninterruptible power supply) 100%. Otherwise, in the event of a system crash, you risk a destroyed database. In any case, a RAID controller with a battery backing the cache of unwritten data would not hurt either.

A reasonable alternative may be the parameter

synchronous_commit = off

In this case, after a successful commit response, some time may pass before the transaction is safely written to disk. In a sudden power-off the database will not be destroyed, but the most recent transactions may be lost.

If you do not disable fsync completely, you can specify the synchronization method in the wal_sync_method parameter. The article on the ITS disk refers to the pg_test_fsync utility, but I could not find it in my PostgreSQL build. According to 1C, in their case the open_datasync method proved optimal on Windows (and it appears to be the one used by default).

If many small writing transactions are used (in the case of 1C this might be a mass update of a catalog outside a transaction), a combination of the parameters commit_delay (transaction commit delay in microseconds, default 0) and commit_siblings (default 5) can help. When enabled, a commit may be delayed by commit_delay if at least commit_siblings transactions are executing at that moment. The results of all completed transactions are then written together, optimizing disk writes.
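An illustrative postgresql.conf fragment (example values, not a recommendation) that delays each commit by up to 1ms when at least five other transactions are in flight:

commit_delay = 1000
commit_siblings = 5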

Other parameters that affect performance

wal_buffers - the amount of memory within shared_buffers for transaction log data. Recommendation: with 1-4GB of available memory, use 256kB-1MB. The documentation states that the value -1 sizes it automatically based on shared_buffers.

random_page_cost - the "cost" of a random read, used when searching data by index. Default: 4.0. The unit is the cost of sequential data access. For fast disk arrays, and especially SSDs, it makes sense to lower the value; PostgreSQL will then use indexes more actively.

The book linked below covers other tunable parameters. It is also strongly recommended to read the PostgreSQL documentation on each specific parameter.

Change parameters in the QUERY TUNING section - especially those forbidding the planner to use specific scan methods - only if you fully understand what you are doing. It is very easy to optimize one kind of query and ruin the performance of all the others. The effect of most parameters in this section depends on the data in the database, the queries against it (i.e., among other things, the 1C version used), and the DBMS version.

Conclusion

PostgreSQL is a powerful DBMS in capable hands, but it requires careful configuration. It can be used with 1C to get decent performance, and its being free is a very nice bonus.

Criticism and additions to this article are welcome.

Useful links

http://postgresql.leopard.in.ua/ - the website of the book "Working with PostgreSQL: configuration and scaling", in my opinion the most complete and understandable guide to configuring and administering PostgreSQL

http://etersoft.ru/products/postgre - a 1C-compatible PostgreSQL build for Windows and for various Linux distributions and versions. For those who have no ITS subscription or need a Linux version not offered on v8.1c.ru.

http://www.postgresql.org/docs/9.2/static/ - official documentation on PostgreSQL (in English)

Articles from the ITS disk on setting up PostgreSQL

Article edit history

  • 01/29/2015 - initial version published
  • 01/31/2015 - the article was supplemented with a section on AUTOVACUUM, a link to the original documentation was added.

In the future, I intend to test the operation of the DBMS in the mode of adding and changing data.

We will install the build from the Postgres Professional company. On the page with the version for 1C:Enterprise we find instructions for installing the latest PostgreSQL on CentOS 7.

Let's connect the repositories and install PostgreSQL 9.6:

sudo rpm -ivh http://1c.postgrespro.ru/keys/postgrespro-1c-centos96.noarch.rpm
sudo yum makecache
sudo yum install postgresql-pro-1c-9.6

Basic PostgreSQL setup

We initialize service databases with Russian localization:

su postgres
/usr/pgsql-9.6/bin/initdb -D /var/lib/pgsql/9.6/data --locale=ru_RU.UTF-8
exit
service postgresql-9.6 initdb

Start the PostgreSQL service and add it to startup:

systemctl enable postgresql-9.6
systemctl start postgresql-9.6
systemctl status postgresql-9.6

We set a password for the postgres user in order to be able to connect to the server remotely:

su - postgres
psql
ALTER USER postgres WITH ENCRYPTED PASSWORD 'yourpassword';
\q
exit

Open the client access settings file for editing:

mcedit /var/lib/pgsql/9.6/data/pg_hba.conf

in the file that opens, uncomment and change the lines:

change: host all all 127.0.0.1/32 ident
to: host all all 127.0.0.1/32 md5

change: host all all 0.0.0.0/0 ident
to: host all all 0.0.0.0/0 md5

Optimizing PostgreSQL settings (postgresql.conf) for 1C:Enterprise

Here are the settings for PostgreSQL running in an ESXi 6.5 virtual machine.

Resources allocated for the VM:

processor - 8 vCPU;

memory - 48 GB;

disk for OS - 50 GB on LUN hardware RAID1 from SAS HDD;

disk for database - 170 GB on software RAID1 from SSD

disk for logs - 100 GB on software RAID1 from SSD

To edit the settings, run the command:

mcedit /var/lib/pgsql/9.6/data/postgresql.conf

Parameters that we are going to change and that are commented out must first be uncommented.

CPU

autovacuum_max_workers = 4

autovacuum_max_workers = NCores/4..2, but not less than 4

Number of autovacuum processes. The general rule is that the more write requests, the more processes. On a read-only database, one process is enough.

ssl=off

Turn off encryption. For secure data centers, encryption is meaningless, but leads to increased CPU load

Memory

shared_buffers = 12GB

shared_buffers = RAM/4

The amount of memory PostgreSQL allocates for the shared page cache. This memory is shared by all PostgreSQL processes. The operating system caches data itself, so there is no need to dedicate all available RAM to this cache.

temp_buffers = 256MB

The maximum number of pages for temporary tables, i.e. the upper limit on the size of temporary tables in each session.

work_mem = 64MB

work_mem = RAM/32..64 or 32MB..128MB

Memory limit for processing one query. This memory is individual to each session. Theoretically, the maximum memory required equals max_connections * work_mem; in practice this does not happen, because most sessions are almost always idle. This advisory value is used by the planner: it tries to predict the memory a query will need, and if that exceeds work_mem, it tells the executor to create a temporary table right away. work_mem is not a limit in the strict sense: the planner may misjudge, and a query may take more memory, perhaps several times more. The value can be reduced by monitoring the number of temporary files created:
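One standard way to do such monitoring (a sketch; log_temp_files is a stock parameter whose threshold is in kilobytes) is to log every temporary file created and watch the server log:

log_temp_files = 0

A query against the pg_stat_database view, shown later in this article, gives the same information in aggregate form.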

maintenance_work_mem = 2GB

maintenance_work_mem = RAM/16..32 or work_mem * 4 or 256MB..4GB

Memory limit for maintenance tasks such as statistics collection (ANALYZE), garbage collection (VACUUM), index creation (CREATE INDEX), and adding foreign keys. The memory allocated for these operations should be comparable to the physical size of the largest index on disk.
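To find the largest indexes, a catalog query such as this can help (a sketch using standard system tables):

SELECT relname, pg_size_pretty(pg_relation_size(oid)) AS index_size
FROM pg_class
WHERE relkind = 'i'
ORDER BY pg_relation_size(oid) DESC
LIMIT 5;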

effective_cache_size = 36GB

effective_cache_size = RAM - shared_buffers

An estimate of the file system cache size. Increasing the parameter increases the planner's inclination to choose IndexScan plans. And that is good.

Discs

effective_io_concurrency = 5

An estimate of how many simultaneous requests the disk system can service at once. For a single disk, 1; for RAID, 2 or more.

random_page_cost = 1.3

random_page_cost = 1.5-2.0 for RAID, 1.1-1.3 for SSD

The cost of reading a random page (default 4). The shorter the disk system's seek time, the smaller (but > 1.0) this parameter should be. Too large a value increases PostgreSQL's inclination to choose whole-table scan plans (PostgreSQL considers reading the whole table sequentially cheaper than random index reads). And that is bad.

autovacuum=on

Enables autovacuum.

autovacuum_naptime = 20s

The sleep time of the autovacuum process. Too large a value means tables will not be vacuumed in time, and as a result bloat and the size of tables and indexes will grow. Too small a value causes useless extra load.

bgwriter_delay = 20ms

The sleep time between disk write cycles of the background writer process. This process is responsible for syncing pages in shared_buffers with disk. Too large a value increases the load on the checkpoint process and on the session-serving processes (backends). Too small a value will fully load one of the cores.

bgwriter_lru_multiplier = 4.0

bgwriter_lru_maxpages = 400

Parameters controlling the write intensity of the background writer. In one cycle, bgwriter writes no more than was written in the previous cycle multiplied by bgwriter_lru_multiplier, and no more than bgwriter_lru_maxpages.

synchronous_commit = off

Disables disk synchronization at commit time. This creates a risk of losing the last few transactions (within 0.5-1 seconds), but database integrity is guaranteed: there are no gaps in the commit chain. And it significantly increases performance.

wal_keep_segments = 256

wal_keep_segments = 32..256

The maximum number of WAL segments between checkpoints. Checkpoints that are too frequent cause a significant write load on the disk subsystem. Each segment is 16MB.

wal_buffers = 16 MB

The amount of shared memory used to buffer WAL data not yet written to disk. The default value of -1 means a size equal to 1/32 (about 3%) of shared_buffers, but no less than 64kB and no more than one WAL segment (usually 16MB). The value can be set manually if the automatic choice is too small or too large; any positive value below 32kB is treated as 32kB. This parameter can only be set at server startup.

The contents of WAL buffers are written to disk when each transaction is committed, so very large values ​​are unlikely to provide much benefit. However, a value of at least a few megabytes can improve writing performance on a busy server when many clients are committing transactions at once. Autotuning, which operates at the default value (-1), selects reasonable values ​​in most cases.

default_statistics_target = 1000

Sets the default statistics target limit that applies to columns for which ALTER TABLE SET STATISTICS has not specified individual limits. The higher the value set, the longer it takes to run ANALYZE, but the higher the quality of the scheduler's estimates can be. The default value for this parameter is 100.

checkpoint_completion_target = 0.9

The degree to which the checkpoint is "smeared" over time. The write rate during a checkpoint is adjusted so that the checkpoint finishes in the time elapsed since the previous checkpoint multiplied by checkpoint_completion_target.

min_wal_size = 4GB
max_wal_size = 8GB

min_wal_size = 512MB..4GB
max_wal_size = 2 * min_wal_size

The minimum and maximum size of WAL files. Analogous to checkpoint_segments.

fsync=on

Disabling this option increases performance but creates a significant risk of losing all data on a sudden power-off. Attention: if the RAID has a cache operating in write-back mode, check that the RAID controller's cache battery exists and works! Otherwise data written to the RAID cache can be lost on power-off, and as a result PostgreSQL cannot guarantee data integrity.

row_security = off

Disables row-level security checks.

enable_nestloop = off

Enables or disables the planner's use of nested-loop join plans. Nested loops cannot be eliminated completely, but with this option off the planner will not use the method if other methods are applicable. This setting is on by default.

Locks

max_locks_per_transaction = 256

Maximum number of index/table locks in one transaction

Settings for the 1C platform

standard_conforming_strings = off

Allow \ character to be used for escaping

escape_string_warning = off

Don't warn about using the \ character for escaping

Security Settings

Let's make sure that the PostgreSQL server is visible only to the 1C: Enterprise server installed on the same machine.

listen_addresses = 'localhost'

If the 1C:Enterprise server is installed on another machine, or you need to connect to the DBMS server with the pgAdmin snap-in, specify that machine's address instead of localhost.

Database storage

PostgreSQL, like almost any DBMS, is sensitive to the disk subsystem, so to increase DBMS performance we will place the PostgreSQL system, the logs, and the databases themselves on different disks.

Stopping the server

systemctl stop postgresql-9.6

We transfer logs to a 120GB SSD:

mv /var/lib/pgsql/9.6/data/pg_xlog /raid120
mv /var/lib/pgsql/9.6/data/pg_clog /raid120
mv /var/lib/pgsql/9.6/data/pg_log /raid120

ln -s /raid120/pg_xlog /var/lib/pgsql/9.6/data/pg_xlog
ln -s /raid120/pg_clog /var/lib/pgsql/9.6/data/pg_clog
ln -s /raid120/pg_log /var/lib/pgsql/9.6/data/pg_log

We will also transfer the directory with the databases:

mv /var/lib/pgsql/9.6/data/base /raid200

ln -s /raid200/base /var/lib/pgsql/9.6/data/base

Let's start the server and check its status

systemctl start postgresql-9.6
systemctl status postgresql-9.6

Using PostgreSQL as a database server on the Windows platform is not very popular, but the need usually arises when you have to save money on Microsoft products. There are also specialized applications that work best with PostgreSQL. For 1C there is a modified PostgreSQL build which, the developers assure us, gives performance and fault tolerance comparable to MSSQL. Whether that is really so, let's check in practice :)

1. Install PostgreSQL

Download the latest 64-bit PostgreSQL build 9.1.2-1.1C from the 1C website, unpack the archive, and run the MSI package (the one without "int" in the name; it is not large).

Click Start.
Leave the installation options as default.

Set a password for the postgres user, under which the service will run. Click Next. If you are installing PostgreSQL for the first time, the wizard will prompt you to create the postgres user.

At the database initialization stage, select UTF8 encoding. Set the login and password for the internal postgres user. Attention! The password of the PostgreSQL service user and the password of the internal PostgreSQL database user must not be the same. The password must be at least four characters long. If you plan to run the 1C server and PostgreSQL on different machines, check the "Support connections from any IP, not just localhost" checkbox. Click Next, and Next again. :)

Click Next two more times and wait for the installation to complete.

Then go to Start\All Programs\PostgreSQL 9.1.2-1.1C(x64) and launch the pgAdmin III administration utility. Let's try to connect to the database, entering the password specified during installation.
And we get the error: "Error connecting to the server: FATAL: password authentication failed for user 'postgres'".

Quite unexpected, considering that the password was typed correctly. I decided to tinker with pg_hba.conf, but at first glance everything is fine there.

# TYPE  DATABASE  USER  ADDRESS  METHOD
# IPv4 local connections:
host  all  postgres  ::1/128  md5
host  all  postgres  127.0.0.1/32  md5
host  all  postgres  192.168.1.0/24  md5

I decided to change the authorization method from md5 to trust, restarted the service, and tried to connect again. This time I got a message saying that a newer version of pgAdmin is available on the pgAdmin website. After updating pgAdmin, the connection to the database succeeded!!?!! I remember that md5 caused no such problems before; apparently the glitch really is related to the old pgAdmin version.
Now we can create a database for 1C's needs, or let 1C itself do it :)

2. Installing 1C:Enterprise 8.2.

For installation, select the following components: 1C:Enterprise, 1C:Enterprise server, Web server extension modules, 1C:Enterprise server administration.
At the "Install 1C:Enterprise as a service" stage, set the password for the user USR1CV82.
Click Next and watch the installation progress :) During installation, the user USR1CV82 must be granted the following rights:

Log on as a service, Log on as a batch job. You can check this under Local Computer Policy\Computer Configuration\Windows Settings\Security Settings\Local Policies\User Rights Assignment.

Open the 1C:Enterprise server administration snap-in. We can see that the cluster is up and listening on port 1541, and our server is present on the "Working servers" tab. Now we can add a database to the 1C server. To do this, go to the "Information bases" tab, right-click, and choose New - Information base. Set the parameters needed to connect to the PostgreSQL server and click OK. Launch 1C:Enterprise and choose to add an existing infobase on the server.
Next, set the connection parameters, click "Next" and finally "Finish".
The database can also be created directly from 1C:Enterprise: at startup, select "Create a new information base".

To connect 1C clients to the server from the outside and operate the database server on the firewall, the following ports must be open:

Server agent (ragent) - tcp:1540
Main cluster manager (rmngr) - tcp:1541
Network port range for dynamic distribution of worker processes - tcp:1560-1591
PostgreSQL - tcp:5432

Let's create the rule through the standard interface or with the command:

netsh advfirewall firewall add rule name="1Cv8-Server" dir=in action=allow protocol=TCP localport=1540,1541,5432,1560-1590 enable=yes profile=ANY remoteip=ANY interfacetype=LAN

Now we can easily launch the 1C:Enterprise client from another computer and add the existing infobase newdb. Don't forget about licenses and software/hardware protection. Now we can download the Gilev test and measure the performance of our system.

On VirtualBox with 1GB of memory and a Dual-Core 2.6GHz CPU, with 1C release 319 the Gilev test gives 11.42 points, about the same as on CentOS. On release 16.362 it gives slightly more - 11.60 points. Optimizing the settings with the EnterpriseDB Tuning Wizard gave no noticeable gain (11.66 and 11.62), though it may be beneficial overall. :)

3. Routine work on the PostgreSQL server.

Backup.

Launch the pgAdmin III administration utility and right-click on the desired database. Select "Backup".
Select the format (Custom (compression level 0-9), Tar, Plain, Directory). The "custom" format at any compression level compresses best, then "directory", then "plain", and finally "tar". Specify UTF8 encoding and the role name postgres. Leave all additional options at their defaults and click the "Backup" button. The "Messages" field shows a list of all performed operations with a completion code: 0 means success. There you can also see how to run the same operation from the command line.

"C:\Program Files (x86)\pgAdmin III\1.16\pg_dump.exe" --host 192.168.1.200 --port 5432 --username "postgres" --role "postgres" --no-password --format custom --blobs --compress 9 --encoding UTF8 --verbose --file "G:\Backups\gilev_dump.backup" "newdb"

Accordingly, the automated backup script that we add to the scheduler might look something like this:

"C:\Program Files (x86)\pgAdmin III\1.16\pg_dump.exe" --host 192.168.1.200 --port 5432 --username "postgres" --role "postgres" --no-password --format custom --blobs --compress 9 --encoding UTF8 --verbose --file "G:\Backups\gilev_dump_%date:~0.2%_%date:~3.2%_%date:~6.4% .backup" "newdb"

Recovery.

To restore, select the database into which we want to restore the backup data - preferably an empty one. Right-click it, select "Restore", set the backup file and the role name postgres, and click "Restore".
Using the command line:

"C:\Program Files (x86)\pgAdmin III\1.16\pg_restore.exe" --host 192.168.1.200 --port 5432 --username "postgres" --dbname "testdb" --role "postgres" --no -password --verbose "G:\Backups\gilev_dump_26_09_2012.backup"

where testdb is an empty database into which the backup archive is restored.

Maintenance Operations:

VACUUM Command:

Sequentially cleans all tables of the currently connected database, deletes temporary data, and frees disk space. Most often the VACUUM command is run precisely to obtain the maximum free disk space and to speed up data access.

VACUUM - marks the space occupied by old record versions as free. As a rule, this variant of the command does not shrink the file containing the table, but it keeps the file from growing uncontrollably, fixing it at some acceptable level. While VACUUM is running, parallel access to the table being processed remains possible. There are several additional variants: VACUUM FULL, VACUUM FREEZE, and VACUUM ANALYZE.

VACUUM FULL attempts to remove all old versions of records and, accordingly, reduce the size of the file containing the table. This command option completely locks the table being processed.

VACUUM FREEZE - whereas VACUUM FULL removes "garbage" from tables and moves records so the tables sit compactly on disk in the fewest fragments - which takes a long time and blocks the records - VACUUM FREEZE simply removes "garbage" without moving the records themselves, so it is faster and does not block writes. Nowadays this option is superseded by autovacuum - automatic garbage collection configured in postgresql.conf, plus several additional options that extend its functionality:

autovacuum = on  # Enables automatic garbage collection.
log_autovacuum_min_duration = -1  # Zero logs all autovacuum actions; -1 (the default) disables logging. For example, at 250ms, all autovacuum and analyze actions running 250ms or longer are logged. Enabling this setting can be useful for tracking autovacuum. This option can only be set in postgresql.conf or on the server command line.
autovacuum_naptime = 10min  # Interval after which the database is checked for the need for garbage collection. By default this happens once a minute.
autovacuum_vacuum_threshold = 1800  # Threshold of deleted and changed records in any table beyond which garbage collection (VACUUM) starts.
autovacuum_analyze_threshold = 900  # Threshold of inserted, deleted, and changed records in any table beyond which analysis (ANALYZE) starts.
autovacuum_vacuum_scale_factor = 0.2  # Fraction of changed and deleted records relative to the table size above which garbage collection triggers.
autovacuum_analyze_scale_factor = 0.1  # Same as the previous variable, but for analysis.
VACUUM ANALYZE - if the database has tables whose data is never changed or deleted, only added, then a separate ANALYZE command can be used for such tables. It is also worth using this command for an individual table after adding many records to it.

ANALYZE command:

Updates information about the distribution of data in a table. The optimizer uses this information to choose the fastest query execution plan. Typically the command is used as VACUUM ANALYZE.

REINDEX command (reindexing):

Used to rebuild existing indexes. It makes sense in cases of:

- index corruption;

- continuous growth of its size.

The second case needs explanation. An index, like a table, contains blocks with old record versions. PostgreSQL cannot always reuse those blocks, so the index file gradually grows. If the data in the table changes frequently, it can grow quite fast. If you notice such behavior in an index, schedule periodic runs of the REINDEX command. Note: like VACUUM FULL, REINDEX completely locks the table, so it must be run when server load is minimal.
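Example invocations (the object names are placeholders):

REINDEX INDEX my_index;
REINDEX TABLE my_table;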

The question of which DBMS - PostgreSQL or MS SQL - is optimal for 1C has been the subject of many articles. In this article we will look at optimization steps for both. Each DBMS vendor has its own configuration recommendations, and 1C has its own as well. It should be noted that depending on the hardware, the server configuration, and the number of users creating different loads, the details of optimizing the DBMS for 1C and implementing the recommendations may change.

Setting up PostgreSQL for 1C

Experience running 1C databases on PostgreSQL has shown that the best performance of 1C with PostgreSQL has been achieved on Linux, so it is advisable to use it. But whatever the operating system, remember that the default settings applied when installing PostgreSQL are only meant to get the DBMS server started. Production use with those settings is out of the question! The step after launch is therefore to optimize PostgreSQL for 1C:

  • First, disable energy saving (otherwise response latency from the database may grow unpredictably) and prohibit swapping of shared memory.
  • Configure the basic parameters of the DBMS server (the recommendations are described in sufficient detail both on the vendor's official website and by 1C, so we will focus only on the most important ones).
  • 1C's standard recommendations suggest disabling HyperThreading. But testing Postgres-pro on servers with SMT (simultaneous multithreading) enabled showed different results.
  • shared_buffers = RAM/4 is the default recommendation, but the SQL Server example suggests that the more memory allocated to the DBMS, the better its performance (with page flushing disabled): the more data pages held in RAM, the fewer disk accesses. So why such a small cache? The answer is simple: if shared_buffers is large, some unused pages get swapped to disk. But how do you track the moment the swapping stops and the parameter is optimal? To reach the optimal shared_buffers value, raise it in production daily (if possible) by a fixed increment and watch for the moment pages start being flushed to disk (swap starts growing).
  • In addition, a large setting suffers from working with many small pages, 8kB in size by default, which increases overhead. What can be done about this when optimizing for 1C? PostgreSQL 9.4 introduced the huge_pages parameter, which can be enabled, but only on Linux. By default huge pages are 2048kB. Support for them must also be enabled in the OS. Thus, by optimizing the storage structure you can achieve a larger shared_buffers value.
  • work_mem = RAM/32..64 or 32MB..128MB sets the amount of memory each session may use for internal sorts, merges, and similar operations before resorting to temporary files. If the limit is exceeded, the server uses temporary files on disk, which can significantly slow query processing. The parameter applies when executing ORDER BY, DISTINCT, merge joins, etc.
  • Alternatively, this parameter can be calculated as (shared_buffers shared memory - memory for other programs) / number of active connections. The value can be lowered by monitoring the number of temporary files created; statistics on the size and number of temporary files are available in the pg_stat_database system view (see the example query after this list).
  • effective_cache_size = RAM - shared_buffers. The main purpose of this parameter is to tell the query optimizer which data access method to prefer: a full scan or an index scan. The higher the value, the higher the likelihood of index scans. The server does not take into account that between queries the data may stay in memory, so that the next query need not fetch it from disk.
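An example of the temporary-file query mentioned above (a sketch using the standard pg_stat_database view):

SELECT datname, temp_files, temp_bytes
FROM pg_stat_database
ORDER BY temp_bytes DESC;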
Installing PostgreSQL

    Installing 1C with PostgreSQL under Windows is a fairly simple process. When running the installation package you must specify UTF-8 encoding; in fact that is the only interesting nuance, and no further PostgreSQL configuration for 1C 8.3 is required on Windows. Installing and configuring PostgreSQL for 1C on Linux can cause a number of difficulties. To overcome them, let's walk through bringing up PostgreSQL (using distributions from the leading Russian vendor Postgres-Pro and from 1C) on an Ubuntu 16.04 x64 server.

    Installation of 1C distribution kits for PostgreSQL DBMS

    1. Download the required PostgreSQL DBMS distribution;

    2. Upload PostgreSQL to the server;

    3. Unpack the PostgreSQL DBMS installer with the command:

    tar -xvf postgresql-9.4.2-1.1C_amd64_deb.tar.bz2

    4. Before installing the PostgreSQL DBMS distribution, check that the required locale is present in the system (ru_RU.UTF-8 by default):
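    A typical check (assuming the standard locale tooling) might be:

    locale -a | grep ru_RU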


    5. If the system PostgreSQL will run on was installed with a language other than Russian, create the new locales:

    locale-gen ru_RU
    update-locale LANG=ru_RU.UTF8
    dpkg-reconfigure locales

    6. If the required locale is present, set it as the default:

    locale -a
    nano /etc/default/locale
    (replace the contents with LANG=ru_RU.UTF-8)

    7. After rebooting, install the packages needed for our PostgreSQL version:

    apt-get install libxslt1.1 ssl-cert

    8. The PostgreSQL 9.4.2-1.1C package is linked against the libicu48 version of the libicu package. The required version is no longer in the repository, but it can be downloaded;

    9. Download it and place it in the directory where the downloaded PostgreSQL files are stored;

    10. In the directory with the PostgreSQL files, perform the installation by entering the following commands in sequence:

    cd <path to the folder with the files>
    dpkg -i libicu48_4.8.1.1-3ubuntu0.6_amd64.deb
    dpkg -i libpq5_9.4.2-1.1C_amd64.deb
    dpkg -i postgresql-client-common_154.1.1C_all.deb
    dpkg -i postgresql-common_154.1.1C_all.deb
    dpkg -i postgresql-client-9.4_9.4.2-1.1C_amd64.deb
    dpkg -i postgresql-9.4_9.4.2-1.1C_amd64.deb
    dpkg -i postgresql-contrib-9.4_9.4.2-1.1C_amd64.deb

    11. Done. The PostgreSQL DBMS distribution is installed.

    Installing PostgreSQL-Pro distributions

    To install the server, you must run the following commands in succession:

    sudo sh -c "echo "deb http:// 1c.postgrespro.ru/deb/ $(lsb_release -cs) main" > /etc/apt/sources.list.d/postgrespro-1c.list" wget --quiet -O - ​​http:// 1c.postgrespro.ru/keys/GPG-KEY-POSTGRESPRO-1C-92 | sudo apt-key add - && sudo apt-get update sudo apt-get install postgresql-pro-1c-9.4

    To allow access to the server, edit the parameters in the pg_hba.conf file:

    cd <path to the pg_hba.conf directory>
    cp pg_hba.conf pg_hba.conf.old
    bash -c 'echo "local all postgres trust" > pg_hba.conf'
    bash -c 'echo "host all all all md5" >> pg_hba.conf'

    The file itself has the following structure:


    The file is well documented, but in English. Let's briefly review the main parameters:

    • local - local connection only via a unix socket
    • host - TCP/IP connection
    • hostssl - SSL-encrypted connection over TCP/IP (the server must be built with SSL support and the ssl parameter must be set)
    • hostnossl - unencrypted connection over TCP/IP
    • trust - admit without authentication
    • reject - refuse without authentication
    • password - request the password in clear text
    • md5 - request the password as an MD5 hash
    • ldap - verify the username and password via an LDAP server
    • radius - verify the username and password via a RADIUS server
    • pam - verify the username and password via a pluggable service

    More detailed information can be found in the documentation for the PostgreSQL product.

    root@NODE2:/home/asd# service --status-all | grep postgres
     [ - ]  postgresql
    root@NODE2:/home/asd# service postgresql start
    root@NODE2:/home/asd# service --status-all | grep postgres
     [ + ]  postgresql

    After completing the basic installation, you need to adjust the server configuration file postgresql.conf according to the specifics of PostgreSQL, the 1C server, and the Ubuntu server configuration.

    Optimization of 1C for MS SQL Server

    Install the latest updates for SQL Server.

    The operating system reserves space and fills it with zeros, which takes quite a long time in the following events:

    • Database creation;
    • Adding data files, transaction log, to an existing database;
    • Increasing the size of an existing file (including Autogrow operations);
    • Restoring databases or file groups.

    This problem is solved by granting the account under which the server runs the "Perform volume maintenance tasks" right in the local security policy.

    If possible, place the TempDB database (used especially intensively in the RCSI managed-lock mode) and the transaction logs on separate disks.

    On the server running SQL Server, the power plan should be set to "High Performance".

    The folder with the database files should not be compressed.

    On the server's "Memory" tab, set the minimum to 50% of total memory. Calculate the maximum with one of the formulas:

    • Maximum memory = Total volume - size according to OS - size for 1C (If it exists, having previously measured the memory used with counters) or
    • Maximum Memory = Total Size – (1024* Total Size/16384).

    We limit the parallelism parameter "Max degree of parallelism" by setting it to 1.
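    A sketch of the same setting applied from T-SQL (standard sp_configure calls; "max degree of parallelism" is an advanced option):

    EXEC sp_configure 'show advanced options', 1;
    RECONFIGURE;
    EXEC sp_configure 'max degree of parallelism', 1;
    RECONFIGURE;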

    We update statistics on a schedule. Starting with SQL Server 2008, updating statistics causes query recompilation and thus clears the procedure cache, so there is no need for a separate procedure to clear the procedure cache.

    We periodically reindex tables and defragment indexes.

    We establish a correct backup policy. If you do not need to restore to the latest moment before a crash, and the last 5 or more minutes are not critical for your business, set the recovery model to "Simple": this will significantly speed up writes. The main thing is that the differential backup completes within the allotted time.

    We improve TempDB I/O by creating additional data files. With fewer than 8 logical processors, it is recommended to create one data file per logical processor. With more than 8 logical processors, create 8 data files and, increasing in steps of 4, be sure to assess the load on TempDB.
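    A sketch of adding one such file (the logical name, path, and sizes are assumptions for illustration):

    ALTER DATABASE tempdb
    ADD FILE (NAME = tempdev2, FILENAME = 'T:\TempDB\tempdb2.ndf', SIZE = 1024MB, FILEGROWTH = 256MB);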

    1 Nov 2012. The advantages of using freely distributed software are obvious. Unfortunately, so are the disadvantages: there is no official support, and the documentation is often contradictory, incomplete, and scattered across different sources. This article will help you understand the process of installing PostgreSQL for 1C:Enterprise 8 while avoiding the pitfalls not described in the official documentation.

    Required components for installation

    The PostgreSQL DBMS is distributed free of charge and is included in the delivery of the 1C application server. The 1C:Enterprise 8 application server comes in two versions: 32-bit and 64-bit. PostgreSQL works with both.

    So, we have distribution kits on hand:

    • Postgre: postgresql-9_1_2-1_1Cx64.zip, kindly provided by 1C.
    • Distribution of the 1C:Enterprise application server for Windows x64, version 8.2.16.368.

    It would seem nothing could be simpler: just launch and install. Easy! But installing in standard mode imposes one small limitation: the cluster will end up in the "Program Files" folder. Not everyone will like that. Let's consider two installation options: simple and advanced.

    The article is divided into 5 sections:

    1) Installation of 1C server.

    2) Install PostgreSQL in a standard form, sufficient to run 1C without additional settings.

    3) Install PostgreSQL and select the cluster storage folder.

    4) Creation of a new 1C information base.

    5) Specifying the folder for storing database files on the DBMS server.

    Be sure to read the entire article before installation!

    Installation of 1C application server

    We launch setup.exe from the folder with the 1C server distribution kit.

    If you install the application server not as a service, you will have to start it manually each time; this option is rarely needed. We install it as a service and decide which user it runs as. For security reasons, it is better to create the separate user USR1CV82 than to let the service run with full rights.

    After installing the application server, the system will prompt you to install the HASP protection key driver. We agree:

    We are waiting for a message:

    If the message is different, there are most likely “tails” left in the system from previous installations of HASP drivers. Delete them all and try again.

    Done, we have installed the 1C:Enterprise 8 application server successfully.

    Installing PostgreSQL in a standard form, sufficient to run 1C without additional settings

    Run "postgresql-9.1.2-1.1C(x64).msi"

    You don't have to change the installation options; 1C will work. Click Next.

    PostgreSQL, like the 1C server, can itself create the user that the service will run under. Note that if you specify an account with administrator rights, the service will not work correctly. Be sure to create a new user.

    Next installation window.

    We initialize the cluster. If the database server and the 1C application server are on different computers, check the "Support connections from any IP" box; otherwise leave it as is. Be sure to specify UTF8 encoding. Create the DBMS superuser. Next...

    For the initial work we don’t need anything additional, uncheck the box and complete the installation.

    The result of our efforts is a ready-to-use PostgreSQL. If we are happy with the databases living in Program Files\PostgreSQL\9.1.2-1.1C\data, we stop here, create databases, and enjoy the process. More often, however, databases live on purpose-built disk arrays rather than on the system disk. To configure the data location, read the following section before installing.

    Installing Postgre with choosing a cluster storage location

    We proceed to install Postgre and perform all the steps until we are prompted to initialize the cluster:

    Uncheck "Initialize database cluster" and click "Next".

    Yes, we are sure.

    Uncheck “Run Stack Builder upon exit” and complete the installation.

    1. It is necessary to give full rights to the folder in which we installed PostgreSQL, usually this is C:\Program Files\PostgreSQL

    2. Launch cmd. If you are doing this on Windows 7, run it as Administrator.

    3. Create a folder where the cluster will be stored. For example d:\postgredata.

    md d:\postgredata

    4. We initialize the cluster manually, indicating the path where it will be located.

    "C:\Program Files\PostgreSQL\9.1.2-1.1C\bin\initdb.exe" -D d:\postgredata --locale=Russian_Russia --encoding=UTF8 -U postgres

    5. Remove the PostgreSQL service that was installed during installation.

    sc delete pgsql-9.1.2-1.1C-x64

    Where pgsql-9.1.2-1.1C-x64 is the name of the service. If you don’t know the name exactly, you can look at the properties of the “PostgreSQL Database Server...” service (Start - Control Panel - Administrative Tools - Services)

    6. Create a new service pointing to our cluster:

    "C:\Program Files\PostgreSQL\9.1.2-1.1C\bin\pg_ctl" register -N pgsql -U postgresql -P password -D d:/postgredata

    7. Now open Services (Start - Control Panel - Administrative Tools - Services) and start our service.

    Creating a new 1C database on a server with PostgreSQL

    There are several ways to create the database. You can try creating it via pgAdmin3 or the 1C server administration console, but you will face a host of incomprehensible questions and a pile of errors whose answers you will spend a long time hunting for. Leave that to the experts. Our goal is a working database with minimal effort; let's describe the easiest way to achieve it.

    We launch the 1C client.

    Click "Add...".

    We invent a name for the database, select "On the 1C:Enterprise server", and continue.

    1C:Enterprise server cluster - localhost if we are creating the database on the same computer where the 1C server is installed, or the name of the 1C application server otherwise.

    Name of the infobase in the cluster - this name will later be specified when connecting from other computers.

    DBMS type - select PostgreSQL.

    Database server - the name of the PostgreSQL server. If we are creating the database on the same server, also specify localhost.

    Database name - a database with this name will be created in PostgreSQL.

    User, password - the user we specified as the superuser when installing PostgreSQL. Be sure to check the "Create a database if it does not exist" checkbox.

    The question arises: where will the database be physically stored? In the Base folder of the specified cluster. And if we don't want it to sit where all the other databases are? There is nothing we can do about that yet; we simply create the database and move on. Next...

    Specifying the database storage folder

    So, we have created a base. In most cases, this is where the installation ends. However, if there are many databases, and there are several disk arrays for different groups of databases, you need to indicate where the databases should be physically located. To do this, run pgAdmin3 from Start – Programs – PostgreSQL. Connect to our server.

    When you first connect, Postgre will ask for a password for the postgres user (which we created during installation).

    We create a new TableSpace, this will be the folder in which our databases will be stored.

    Specify the storage location for the database files and click OK.

    Now we open the properties of the previously created database, the location of which we want to change.

    Change the Tablespace property. After clicking "OK", the database files will be moved automatically. Done! We hope this article was useful. If so, leave comments and share the link to this page. Thank you!