Thread synchronization tools in Windows OS (critical sections, mutexes, semaphores, events). Synchronization objects in Windows. Synchronizing processes using events.

Lecture No. 9. Synchronizing processes and threads

1. Goals and means of synchronization.

2. Synchronization mechanisms.

1.Goals and means of synchronization

There is a fairly broad class of operating system tools that ensure mutual synchronization of processes and threads. The need for thread synchronization arises only in a multiprogram operating system and is associated with the joint use of hardware and information resources of the computer system. Synchronization is necessary to avoid races and deadlocks when exchanging data between threads, sharing data, and accessing the processor and I/O devices.

In many operating systems, these tools are called interprocess communication tools (IPC), which reflects the historical primacy of the concept of “process” in relation to the concept of “thread”. Typically, IPC tools include not only interprocess synchronization tools, but also interprocess data exchange tools.

The execution of a thread in a multiprogramming environment is always asynchronous. It is very difficult to say with complete certainty what stage of execution a process will be at at a given point in time. Even in single-program mode it is not always possible to accurately estimate the time it takes to complete a task: in many cases this time depends significantly on the source data, which affect the number of loop iterations, the direction of program branching, the execution time of I/O operations, and so on. Since the source data may differ each time the task is launched, the execution time of individual stages, and of the task as a whole, is a highly uncertain value.


Even more uncertain is the execution time of a program in a multiprogramming system. The moments when threads are interrupted, the time they spend in queues for shared resources, the order in which threads are selected for execution - all these events are the result of a confluence of many circumstances and can be interpreted as random. At best, one can estimate the probabilistic characteristics of a computational process, for example, the probability of its completion over a given period of time.

Thus, threads in the general case (when the programmer has not taken special measures to synchronize them) flow independently, asynchronously with each other. This is true both for threads in the same process executing a common program code, and for threads in different processes, each executing its own program.

Any interaction between processes or threads is related to their synchronization, which consists in coordinating their speeds: a thread is suspended until a certain event occurs and is then activated when that event occurs. Synchronization is at the core of any thread interaction, whether it involves resource sharing or data exchange. For example, the receiving thread should access data only after the sending thread has placed them in the buffer; if the receiving thread accesses the data before they reach the buffer, it must be suspended.

When sharing hardware resources, synchronization is also absolutely necessary. When, for example, an active thread needs access to a serial port that another thread (currently in the waiting state) is using in exclusive mode, the OS suspends the active thread and does not activate it until the port it needs becomes free. Synchronization with events external to the computer system, for example the reaction to pressing the Ctrl+C key combination, is often necessary as well.

Every second, hundreds of events occur in the system related to the allocation and release of resources, and the OS must have reliable and efficient means that would allow it to synchronize threads with events occurring in the system.

To synchronize application program threads, the programmer can use both their own synchronization tools and techniques and those of the operating system. For example, two threads of one application process can coordinate their work using a global Boolean variable that both can access directly: the variable is set to one when some event occurs, for example when one thread produces the data the other needs to continue working. However, in many cases the synchronization facilities provided by the operating system in the form of system calls are more effective, or even the only possible ones. Thus, threads belonging to different processes have no way to interfere with each other's execution: without the mediation of the operating system they can neither suspend each other nor notify each other that an event has occurred. Synchronization tools are used by the operating system not only to synchronize application processes, but also for its internal needs.
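To make the idea concrete, here is a minimal sketch of such flag-based coordination between two threads of one process (the names dataReady and sharedData and the busy-wait loop are our illustration, not part of the lecture):

#include <windows.h>
#include <stdio.h>

/* Global flag shared by both threads: 0 = data not ready, 1 = ready.
   volatile keeps the compiler from caching it in a register. */
volatile LONG dataReady = 0;
int sharedData;

DWORD WINAPI Producer(LPVOID p)
{
    sharedData = 42;   /* produce the data the other thread needs */
    dataReady = 1;     /* signal readiness through the global variable */
    return 0;
}

int main(void)
{
    HANDLE h = CreateThread(NULL, 0, Producer, NULL, 0, NULL);
    while (!dataReady)
        ;              /* naive busy wait on the flag */
    printf("got %d\n", sharedData);
    CloseHandle(h);
    return 0;
}

Note that the waiting thread burns processor time while it polls the flag; the OS-provided tools discussed below avoid exactly this.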

Typically, operating system developers put a wide range of synchronization tools at the disposal of application and system programmers. These tools can form a hierarchy, in which more complex tools are built on the basis of simpler ones, and can also be functionally specialized: tools for synchronizing threads of one process, tools for synchronizing threads of different processes when exchanging data, and so on. Often the functionality of different synchronization system calls overlaps, so that a programmer can use any of several calls to solve one problem, depending on personal preference.


The need for synchronization and races

Neglecting synchronization issues in a multi-threaded system can lead to incorrect solution of the problem or even system crash. Consider, for example (Fig. 4.16), the task of maintaining a database of clients of a certain enterprise. Each client is assigned a separate record in the database, which contains, among other fields, the Order and Payment fields. The program that maintains the database is designed as a single process that has several threads, including thread A, which enters information about orders received from customers into the database, and thread B, which records in the database information about customer payments for invoices. Both of these threads work together on a common database file using the same algorithms, which include three steps.

1. Read the client's record from the database file into a buffer.

2. Enter the new value in the Order field (for thread A) or the Payment field (for thread B).

3. Write the modified record back to the database file.


Fig. 4.17. The influence of the relative speeds of threads on the result of solving the problem

Critical section

An important concept in thread synchronization is that of a "critical section" of a program. A critical section is a part of a program whose execution result can change unpredictably if the variables used in that part are changed by other threads while its execution is not yet complete. A critical section is always defined with respect to certain critical data, uncoordinated changes to which may lead to undesirable effects. In the previous example the critical data were the records of the database file. Every thread working with critical data must have a corresponding critical section defined. Note that in different threads the critical section generally consists of different sequences of instructions.

To eliminate the effect of races on critical data, it is necessary to ensure that only one thread is in the critical section associated with that data at any time. It does not matter whether this thread is in an active or suspended state. This technique is called mutual exclusion. The operating system uses different ways to implement mutual exclusion. Some methods are suitable for mutual exclusion when only threads of one process enter the critical section, while others can provide mutual exclusion for threads of different processes.

The simplest, and at the same time most inefficient, way to ensure mutual exclusion is for the operating system to allow a thread to disable all interrupts while it is in a critical section. However, this method is practically never used: it is dangerous to entrust control of the system to a user thread, which may occupy the processor for a long time, and if the thread crashes inside the critical section the entire system crashes with it, because interrupts would never be re-enabled.

2. Synchronization mechanisms.

Blocking Variables

To synchronize threads of one process, an application programmer can use global blocking variables. The programmer works with these variables, to which all threads of the process have direct access, without resorting to OS system calls.

Before entering its critical section, a thread checks the blocking variable in a loop: if the variable shows that the resource is busy, the check is repeated; if the resource is free, the thread sets the variable to the "busy" value and starts working with the resource. Note that the check and the setting of the blocking variable must be performed as a single indivisible action, otherwise two threads could both find the resource free and enter the critical section simultaneously. For this purpose many processors provide an atomic test-and-set instruction, or the operating system could use a mechanism which would disable interrupts throughout the entire check-and-set operation.

Implementing mutual exclusion in the manner described above has a significant drawback: during the time that one thread is in the critical section, another thread that needs the same resource, having access to the processor, will continuously poll the blocking variable, wasting the processor time allocated to it, which could be used to execute some other thread. To eliminate this drawback, many operating systems provide special system calls for working with critical sections.
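For illustration, here is a blocking variable built on the Win32 InterlockedExchange function, which performs the check-and-set atomically (our own sketch; AcquireLock and ReleaseLock are hypothetical names). The while loop is exactly the wasteful polling described above:

#include <windows.h>

volatile LONG lockVar = 0;   /* 0 = resource free, 1 = resource busy */

void AcquireLock(void)
{
    /* Atomically write 1 into lockVar and obtain its previous value;
       repeat until the previous value was 0, i.e. the resource was free. */
    while (InterlockedExchange(&lockVar, 1) != 0)
        ;   /* busy wait: this spends the thread's entire time slice polling */
}

void ReleaseLock(void)
{
    InterlockedExchange(&lockVar, 0);   /* mark the resource free again */
}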

Figure 4.19 shows how these functions implement mutual exclusion in the Windows NT operating system. Before starting to modify critical data, a thread issues the EnterCriticalSection() system call. This call first checks, as in the previous case, the blocking variable reflecting the state of the critical resource. If the resource is busy (F(D) = 0), then unlike the previous case the call does not perform cyclic polling but puts the thread into the waiting state and makes a note that this thread should be activated when the resource becomes available. The thread currently using the resource must, after leaving the critical section, execute the LeaveCriticalSection() system function, whereupon the blocking variable takes the value corresponding to the free state of the resource (F(D) = 1), and the operating system looks through the queue of threads waiting for this resource and moves the first thread in the queue to the ready state.

Overhead costs" href="/text/category/nakladnie_rashodi/" rel="bookmark">OS overhead costs for implementing the function of entering and exiting the critical section may exceed the savings obtained.

Semaphores

A generalization of blocking variables is the so-called Dijkstra semaphore. Instead of binary variables, Dijkstra proposed using variables that can take non-negative integer values. Such variables, used to synchronize computing processes, are called semaphores.

To work with semaphores, two primitives are introduced, traditionally denoted P and V. Let the variable S represent a semaphore. Then the actions V(S) and P(S) are defined as follows.

* V(S): the variable S is increased by 1 as a single indivisible action: the fetch, increment, and store cannot be interrupted, and S is not accessible to other threads while the operation is being performed.

* P(S): S is decreased by 1 if possible. If S = 0, so that S cannot be decreased while remaining in the range of non-negative integers, the thread calling operation P waits until such a decrease becomes possible. A successful check and decrement is likewise an indivisible operation.

No interruptions are allowed during the execution of the V and P primitives.

In the special case where the semaphore S can take only the values 0 and 1, it turns into a blocking variable, which for this reason is often called a binary semaphore. The P operation can potentially put the thread executing it into the waiting state, while the V operation may, under some circumstances, wake another thread that was suspended by a P operation.

Let's look at the use of semaphores using a classic example: the interaction of two threads running in multiprogramming mode, one of which writes data to a buffer pool, while the other reads data from that pool. Let the buffer pool consist of N buffers, each of which can contain one record. In general, the writer thread and the reader thread can have different speeds and access the buffer pool with varying intensity: in one period the write speed may exceed the read speed, in another the reverse. To work together correctly, the writer thread must pause when all buffers are full and wake up when at least one buffer is freed; conversely, the reader thread must pause when all buffers are empty and wake up when at least one record appears.

Let's introduce two semaphores: e, the number of empty buffers, and f, the number of filled buffers; in the initial state e = N and f = 0. Then the operation of the threads with the common buffer pool can be described as follows (Fig. 4.20).

The writer thread first performs a P(e) operation, by which it checks whether there are any empty buffers in the buffer pool. In accordance with the semantics of the P operation, if the semaphore e is equal to 0 (that is, there are no free buffers at the moment), then the writer thread enters the waiting state. If the value of e is a positive number, then it reduces the number of free buffers, writes data to the next free buffer, and then increases the number of occupied buffers with the operation V(f). The reader thread acts in a similar way, with the difference that it starts by checking for full buffers, and after reading the data, it increases the number of free buffers.
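A possible rendering of this scheme with Win32 semaphores standing in for the abstract P and V operations (the buffer size N, the ring-buffer indexing, and the thread bodies are our assumptions; with one writer and one reader the in and out indices need no additional protection):

#include <windows.h>

#define N 8

static int buffers[N];
static int in = 0, out = 0;
static HANDLE e;   /* counts empty buffers, initially N */
static HANDLE f;   /* counts filled buffers, initially 0 */

DWORD WINAPI Writer(LPVOID p)
{
    int item = 0;
    for (;;) {
        WaitForSingleObject(e, INFINITE);   /* P(e): wait for an empty buffer */
        buffers[in] = item++;               /* write into the next free buffer */
        in = (in + 1) % N;
        ReleaseSemaphore(f, 1, NULL);       /* V(f): one more filled buffer */
    }
}

DWORD WINAPI Reader(LPVOID p)
{
    for (;;) {
        WaitForSingleObject(f, INFINITE);   /* P(f): wait for a filled buffer */
        /* ... consume buffers[out] ... */
        out = (out + 1) % N;
        ReleaseSemaphore(e, 1, NULL);       /* V(e): one more empty buffer */
    }
}

int main(void)
{
    e = CreateSemaphore(NULL, N, N, NULL);  /* e = N in the initial state */
    f = CreateSemaphore(NULL, 0, N, NULL);  /* f = 0 in the initial state */
    CreateThread(NULL, 0, Writer, NULL, 0, NULL);
    CreateThread(NULL, 0, Reader, NULL, 0, NULL);
    Sleep(1000);    /* let the threads run for a while, then exit */
    return 0;
}

If several writers or several readers shared the pool, the in and out indices themselves would become critical data, which is where the binary semaphore b discussed below comes in.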

DIV_ADBLOCK860">

A semaphore can also be used as a blocking variable. In the example discussed above, in order to eliminate collisions when working with a shared memory area, we will assume that writing to and reading from the buffer are critical sections. We will ensure mutual exclusion using the binary semaphore b (Fig. 4.21). Both threads, after checking the availability of buffers, must check the availability of the critical section.


Fig. 4.22. The occurrence of deadlocks during program execution

NOTE

Deadlocks must be distinguished from simple queues, although both arise when resources are shared and look similar in appearance: a thread is suspended, waiting for a resource to become free. However, a queue is a normal phenomenon, an inherent sign of high resource utilization when requests arrive randomly: a queue appears when a resource is unavailable at the moment but will be released after some time, allowing the thread to continue executing. A deadlock, as its name suggests, is in a sense an unresolvable situation. A necessary condition for a deadlock to occur is that a thread needs several resources at once.

In the examples considered, the deadlock was formed by two threads, but more threads can block each other mutually. Figure 2.23 shows a distribution of resources Ri among several threads Tj that has led to deadlock. The arrows indicate the threads' resource requirements: a solid arrow means that the corresponding resource has been allocated to the thread, and a dotted arrow connects a thread to a resource that it needs but that cannot yet be allocated because it is occupied by another thread. For example, thread T1 needs resources R1 and R2 to perform its work, of which only one, R1, has been allocated, while resource R2 is held by thread T2. None of the four threads shown in the figure can continue its work, since none has all the resources it needs.

The inability of threads to complete the work they have started because of deadlocks reduces the performance of the computing system. Therefore, much attention is paid to the problem of preventing deadlocks. In the event that a deadlock does occur, the system must provide the operator with a means to recognize a deadlock and distinguish it from a normal blockage caused by temporary unavailability of resources. Finally, if a deadlock is diagnosed, means are needed to break it and restore the normal computing process.

The owner" href="/text/category/vladeletc/" rel="bookmark">the owner, setting it to an unsignaled state, and enters the critical section. After the thread has completed work with the critical data, it “gives up” the mutex, setting it into the signaled state. At this moment, the mutex is free and does not belong to any thread. If any thread is waiting for it to be released, then it becomes the next owner of this mutex, at the same time the mutex goes into the non-signaled state.

An event object (in this case the word "event" is used in a narrow sense, as a designation of a specific type of synchronization object) is usually used not to access data, but to notify other threads that some actions have completed. Let, for example, in some application, work is organized in such a way that one thread reads data from a file into a memory buffer, and other threads process this data, then the first thread reads a new portion of data, and other threads process it again, and so on. At the start of execution, the first thread sets the event object to a non-signaled state. All other threads have made a call to Wait(X), where X is an event pointer, and are in a suspended state, waiting for that event to occur. As soon as the buffer is full, the first thread reports this to the operating system by calling Set(X). The operating system scans the queue of waiting threads and activates any threads that are waiting for this event.

Signals

A signal allows a task to respond to an event whose source may be the operating system or another task. Signals cause a task to be interrupted and predetermined actions to be executed. Signals can be generated synchronously, that is, as a result of the work of the process itself, or sent to a process by another process, that is, generated asynchronously. Synchronous signals most often come from the processor's interrupt system and indicate illegal actions of the process trapped by the hardware, such as division by zero, an addressing error, or a memory protection violation.

An example of an asynchronous signal is a signal from a terminal. Many operating systems provide for prompt removal of a process from execution. To do this, the user can press a certain key combination (Ctrl+C, Ctrl+Break), as a result of which the OS generates a signal and sends it to the active process. The signal can arrive at any time during a process's execution (that is, it is asynchronous), requiring the process to terminate immediately. In this case, the response to the signal is the unconditional completion of the process.

A set of signals can be defined in the system. The program code of the process that received the signal can either ignore it, or respond to it with a standard action (for example, exit), or perform specific actions defined by the application programmer. In the latter case, it is necessary to provide special system calls in the program code, with the help of which the operating system is informed which procedure should be performed in response to the receipt of a particular signal.
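As a sketch of that last case, the standard C signal function registers the procedure to be executed when a particular signal arrives (the handler body is our illustration; calling printf from a handler is not formally guaranteed to be safe, but is adequate for a demonstration):

#include <signal.h>
#include <stdio.h>
#include <stdlib.h>

/* Invoked asynchronously when the user presses Ctrl+C (SIGINT). */
void OnInterrupt(int sig)
{
    printf("caught signal %d, terminating\n", sig);
    exit(1);
}

int main(void)
{
    signal(SIGINT, OnInterrupt);   /* tell the OS which procedure to run */
    for (;;)
        ;                          /* work until interrupted */
}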

Signals provide logical communication between processes and between processes and users (terminals). Since sending a signal requires knowledge of the process identifier, interaction via signals is possible only between related processes that can obtain information about each other's identifiers.

In distributed systems consisting of multiple processors, each of which has its own RAM, locking variables, semaphores, signals, and other similar shared memory-based features are unsuitable. In such systems, synchronization can only be achieved through message exchange.

A process is an instance of a program loaded into memory. This instance can create threads, each of which is a sequence of instructions to be executed. It is important to understand that it is not processes that run, but threads.

Moreover, any process has at least one thread, called the main thread of the application.

Since there are almost always many more threads than there are physical processors to execute them, the threads are not actually executed simultaneously, but in turn (processor time is distributed between the threads). But switching between them happens so often that they seem to be running in parallel.

Depending on the situation, a thread can be in one of three states. First, a thread can be executing, when it is allocated CPU time; that is, it can be in the active state. Second, it can be inactive, waiting to be allocated the processor, that is, in the ready state. And there is a third, also very important, state: the blocked state. A blocked thread is not allocated any processor time at all. Typically a thread is blocked while waiting for some event; when the event occurs, the thread automatically moves from the blocked state to the ready state. For example, suppose one thread performs calculations and another must wait for the results in order to save them to disk. The second could use a loop like "while(!isCalcFinished) continue;", but it is easy to verify in practice that while this loop runs the processor is 100% busy (this is called active waiting). Such loops should be avoided whenever possible, and here the blocking mechanism provides invaluable help: the second thread can block itself until the first thread signals an event indicating that the calculation is complete.
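Here is a minimal sketch of that pattern with a Win32 event object replacing the isCalcFinished flag (all names are our own illustration); the waiting thread blocks instead of spinning:

#include <windows.h>

HANDLE hCalcFinished;   /* auto-reset event, created non-signaled */

DWORD WINAPI Calculator(LPVOID p)
{
    /* ... perform the calculations ... */
    SetEvent(hCalcFinished);    /* wake the thread that saves the results */
    return 0;
}

DWORD WINAPI Saver(LPVOID p)
{
    /* Blocks without consuming processor time,
       unlike while(!isCalcFinished) continue; */
    WaitForSingleObject(hCalcFinished, INFINITE);
    /* ... save the results to disk ... */
    return 0;
}

int main(void)
{
    HANDLE t1, t2;
    hCalcFinished = CreateEvent(NULL, FALSE, FALSE, NULL);
    t1 = CreateThread(NULL, 0, Calculator, NULL, 0, NULL);
    t2 = CreateThread(NULL, 0, Saver, NULL, 0, NULL);
    WaitForSingleObject(t1, INFINITE);   /* threads themselves can also */
    WaitForSingleObject(t2, INFINITE);   /* serve as synchronization objects */
    return 0;
}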

Synchronizing threads in Windows OS

Windows implements preemptive multitasking: at any time the system can interrupt the execution of one thread and transfer control to another. Earlier, in Windows 3.1, an approach called cooperative multitasking was used: the system waited until a thread itself transferred control to it, which is why, if one application hung, the whole computer had to be rebooted.

All threads belonging to the same process share some common resources - such as RAM address space or open files. These resources belong to the entire process, and therefore to each of its threads. Therefore, each thread can work with these resources without any restrictions. But... If one thread has not yet finished working with some shared resource, and the system switches to another thread using the same resource, then the result of the work of these threads can be extremely different from what was intended. Such conflicts can also arise between threads belonging to different processes. Whenever two or more threads share any shared resource, this problem occurs.

Example: unsynchronized threads. If the display thread is temporarily paused, the background thread that fills the array continues to run, so a printed line may contain a mixture of values from different passes.

#include <windows.h>
#include <stdio.h>

int a[5];
HANDLE hThr;
unsigned long uThrID;

void Thread(void* pParams)
{
    int i, num = 0;
    while (1)
    {
        for (i = 0; i < 5; i++)
            a[i] = num;
        num++;
    }
}

int main(void)
{
    hThr = CreateThread(NULL, 0, (LPTHREAD_START_ROUTINE)Thread, NULL, 0, &uThrID);
    while (1)
        printf("%d %d %d %d %d\n", a[0], a[1], a[2], a[3], a[4]);
    return 0;
}

This is why a mechanism is needed to allow threads to coordinate their work with shared resources. This mechanism is called the thread synchronization mechanism.

This mechanism is a set of operating system objects that are created and managed programmatically, are common to all threads in the system (some are shared by threads belonging to the same process), and are used to coordinate access to resources. Resources can be anything that can be shared by two or more threads - a disk file, a port, a database entry, a GDI object, and even a global program variable (which can be accessed by threads belonging to the same process).

There are several synchronization objects, the most important of which are the mutex, the critical section, the event, and the semaphore. Each of these objects implements its own synchronization method. Also, processes and threads themselves can be used as synchronization objects (when one thread waits for another thread or process to complete); as well as files, communication devices, console input, and change notifications.

Any synchronization object can be in the so-called signaled state. For each type of object this state has a different meaning. Threads can check the current state of an object and/or wait for a change in this state and thus coordinate their actions. It is guaranteed that when a thread works with synchronization objects (creates them, changes their state), the system will not interrupt its execution until it completes this action. Thus, all final operations with synchronization objects are atomic (indivisible).

Working with synchronization objects

To create one or another synchronization object, a special WinAPI function of type Create... is called (for example, CreateMutex). This call returns a handle to an object (HANDLE) that can be used by all threads belonging to this process. It is possible to access a synchronization object from another process - either by inheriting a handle to this object, or, preferably, by using a call to the object's opening function (Open...). After this call, the process will receive a handle that can later be used to work with the object. An object, unless it is intended to be used within a single process, must be given a name. The names of all objects must be different (even if they are of different types). For example, you cannot create an event and a semaphore with the same name.

Using an existing handle to an object, you can determine its current state. This is done with the so-called wait functions. The most commonly used is WaitForSingleObject. It takes two parameters: the object handle and a timeout in ms. The function returns WAIT_OBJECT_0 if the object is in the signaled state, WAIT_TIMEOUT if the timeout expired, and WAIT_ABANDONED if a mutex object was not released before its owning thread exited. If the timeout is specified as zero, the function returns the result immediately; otherwise it waits for the specified amount of time. If the object's state becomes signaled before this time expires, the function returns WAIT_OBJECT_0; otherwise it returns WAIT_TIMEOUT. If the symbolic constant INFINITE is given as the timeout, the function waits indefinitely until the object's state becomes signaled.

A very important fact is that calling a wait function blocks the current thread: while a thread is in the waiting state, it is not allocated any CPU time.
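A short illustrative fragment of checking the result of a wait (hObject is assumed to be a handle to some synchronization object):

DWORD r = WaitForSingleObject(hObject, 5000);   /* wait up to 5000 ms */
switch (r) {
case WAIT_OBJECT_0:   /* the object became signaled within the timeout */
    break;
case WAIT_TIMEOUT:    /* 5 seconds passed, the object is still non-signaled */
    break;
case WAIT_ABANDONED:  /* a mutex whose owner exited without releasing it */
    break;
}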

Critical sections

A critical section object helps the programmer isolate the section of code in which a thread accesses a shared resource, and prevents simultaneous use of the resource. Before using the resource, the thread enters the critical section (calls the EnterCriticalSection function). If any other thread then tries to enter the same critical section, its execution pauses until the first thread leaves the section by calling LeaveCriticalSection. Critical sections are used only for threads of a single process; the order in which waiting threads enter the critical section is not defined.

There is also a TryEnterCriticalSection function, which checks whether the critical section is currently occupied. Using it, a thread waiting for access to a resource need not block and can perform some useful work instead.
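An illustrative fragment of that technique (cs is assumed to be an initialized CRITICAL_SECTION, and DoSomethingUseful is a hypothetical helper):

/* Instead of blocking in EnterCriticalSection, poll and do other work. */
while (!TryEnterCriticalSection(&cs)) {
    DoSomethingUseful();        /* the resource is busy; use the time */
}
/* ... work with the shared resource ... */
LeaveCriticalSection(&cs);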

Example. Synchronizing threads using critical sections.

#include <windows.h>
#include <stdio.h>

CRITICAL_SECTION cs;
int a[5];
HANDLE hThr;
unsigned long uThrID;

void Thread(void* pParams)
{
    int i, num = 0;
    while (1)
    {
        EnterCriticalSection(&cs);
        for (i = 0; i < 5; i++)
            a[i] = num;
        num++;
        LeaveCriticalSection(&cs);
    }
}

int main(void)
{
    InitializeCriticalSection(&cs);
    hThr = CreateThread(NULL, 0, (LPTHREAD_START_ROUTINE)Thread, NULL, 0, &uThrID);
    while (1)
    {
        EnterCriticalSection(&cs);
        printf("%d %d %d %d %d\n", a[0], a[1], a[2], a[3], a[4]);
        LeaveCriticalSection(&cs);
    }
    return 0;
}

Mutual exclusions

Mutual exclusion objects (mutexes, mutex - from MUTual EXclusion) allow you to coordinate the mutual exclusion of access to a shared resource. The signal state of an object (i.e., the "set" state) corresponds to a point in time when the object does not belong to any thread and can be "captured". Conversely, the "reset" (non-signal) state corresponds to the moment when some thread already owns this object. Access to an object is granted when the thread that owns the object releases it.

Two (or more) threads can create a mutex with the same name by calling the CreateMutex function. The first thread actually creates a mutex, and the next ones get a handle to an already existing object. This allows multiple threads to obtain a handle to the same mutex, freeing the programmer from having to worry about who actually creates the mutex. If this approach is used, it is advisable to set the bInitialOwner flag to FALSE, otherwise there will be some difficulty in determining the actual creator of the mutex.
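If the creator does need to be identified, the documented behavior of CreateMutex helps: for every caller after the first, GetLastError returns ERROR_ALREADY_EXISTS. A short sketch (the mutex name is arbitrary):

HANDLE hMutex = CreateMutex(NULL, FALSE, "MyAppMutex");
if (hMutex != NULL && GetLastError() == ERROR_ALREADY_EXISTS) {
    /* the mutex already existed: we received a handle to it,
       and some other thread or process was the actual creator */
}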

Multiple threads can obtain a handle to the same mutex, which makes interprocess communication possible. The following mechanisms can be used for this:

  • A child process created using the CreateProcess function can inherit a mutex handle if the lpMutexAttributes parameter was specified when creating the mutex with the CreateMutex function.
  • A thread can obtain a duplicate of an existing mutex using the DuplicateHandle function.
  • A thread can specify the name of an existing mutex when calling the OpenMutex or CreateMutex functions.

To declare a mutex as belonging to the current thread, you must call one of the wait functions. The thread that owns the object can acquire it again as many times as it wants (this will not lead to self-deadlock), but it must release it the same number of times using the ReleaseMutex function.
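An illustrative fragment of such repeated acquisition (hMutex is assumed to be a previously created mutex handle):

WaitForSingleObject(hMutex, INFINITE);   /* first acquisition */
WaitForSingleObject(hMutex, INFINITE);   /* second: the owner is not blocked */
/* ... work with the protected resource ... */
ReleaseMutex(hMutex);
ReleaseMutex(hMutex);   /* only now may other threads acquire the mutex */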

To synchronize the threads of one process, it is more efficient to use critical sections.

Example. Synchronizing threads using mutexes.

#include <windows.h>
#include <stdio.h>

HANDLE hMutex;
int a[5];
HANDLE hThr;
unsigned long uThrID;

void Thread(void* pParams)
{
    int i, num = 0;
    while (1)
    {
        WaitForSingleObject(hMutex, INFINITE);
        for (i = 0; i < 5; i++)
            a[i] = num;
        num++;
        ReleaseMutex(hMutex);
    }
}

int main(void)
{
    hMutex = CreateMutex(NULL, FALSE, NULL);
    hThr = CreateThread(NULL, 0, (LPTHREAD_START_ROUTINE)Thread, NULL, 0, &uThrID);
    while (1)
    {
        WaitForSingleObject(hMutex, INFINITE);
        printf("%d %d %d %d %d\n", a[0], a[1], a[2], a[3], a[4]);
        ReleaseMutex(hMutex);
    }
    return 0;
}

Events

Event objects are used to notify waiting threads that an event has occurred. There are two types of events - with manual and automatic reset. Manual reset is performed by the ResetEvent function. Manual reset events are used to notify multiple threads at once. When using an auto-reset event, only one waiting thread will receive the notification and continue executing; the rest will continue to wait.

The CreateEvent function creates an event object, SetEvent - sets the event to the signal state, ResetEvent - resets the event. The PulseEvent function sets an event, and after the threads waiting for this event resume (all of them in case of manual reset and only one in case of automatic reset), resets it. If there are no waiting threads, PulseEvent simply resets the event.
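A short sketch contrasting the two kinds of reset (our illustration; hStart is an arbitrary name):

/* Manual-reset event: one SetEvent releases ALL waiting threads, and the
   event stays signaled until ResetEvent is called explicitly. */
HANDLE hStart = CreateEvent(NULL, TRUE /* manual reset */, FALSE, NULL);
/* ... several threads call WaitForSingleObject(hStart, INFINITE) ... */
SetEvent(hStart);     /* every waiter resumes */
ResetEvent(hStart);   /* arm the event again for the next round */
/* With bManualReset = FALSE the same SetEvent would release only one
   waiter, and the event would reset itself automatically. */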

Example. Synchronizing threads using events.

#include <windows.h>
#include <stdio.h>

HANDLE hEvent1, hEvent2;
int a[5];
HANDLE hThr;
unsigned long uThrID;

void Thread(void* pParams)
{
    int i, num = 0;
    while (1)
    {
        WaitForSingleObject(hEvent2, INFINITE);
        for (i = 0; i < 5; i++)
            a[i] = num;
        num++;
        SetEvent(hEvent1);
    }
}

int main(void)
{
    hEvent1 = CreateEvent(NULL, FALSE, TRUE, NULL);   /* auto-reset, initially signaled */
    hEvent2 = CreateEvent(NULL, FALSE, FALSE, NULL);  /* auto-reset, initially non-signaled */
    hThr = CreateThread(NULL, 0, (LPTHREAD_START_ROUTINE)Thread, NULL, 0, &uThrID);
    while (1)
    {
        WaitForSingleObject(hEvent1, INFINITE);
        printf("%d %d %d %d %d\n", a[0], a[1], a[2], a[3], a[4]);
        SetEvent(hEvent2);
    }
    return 0;
}

Semaphores

A semaphore object is essentially a mutex with a counter. The object can be "acquired" by up to a certain number of threads; after that, further "acquisition" is impossible until one of the threads that previously "acquired" the semaphore releases it. Semaphores are used to limit the number of threads working with a resource simultaneously. The maximum number of threads is passed to the object during initialization, and each "acquisition" decreases the semaphore's counter. The signaled state corresponds to a counter value greater than zero; when the counter is zero, the semaphore is considered non-signaled (reset).

The CreateSemaphore function creates a semaphore object, specifying its initial and maximum values; OpenSemaphore returns a handle to an existing semaphore. A semaphore is acquired using the wait functions, which decrease its value by one; ReleaseSemaphore releases the semaphore, increasing its value by the number specified in a parameter.

Example. Synchronizing threads using semaphores.

#include <windows.h>
#include <stdio.h>

HANDLE hSem;
int a[5];
HANDLE hThr;
unsigned long uThrID;

void Thread(void* pParams)
{
    int i, num = 0;
    while (1)
    {
        WaitForSingleObject(hSem, INFINITE);
        for (i = 0; i < 5; i++)
            a[i] = num;
        num++;
        ReleaseSemaphore(hSem, 1, NULL);
    }
}

int main(void)
{
    hSem = CreateSemaphore(NULL, 1, 1, "MySemaphore1");  /* initial = 1, maximum = 1 */
    hThr = CreateThread(NULL, 0, (LPTHREAD_START_ROUTINE)Thread, NULL, 0, &uThrID);
    while (1)
    {
        WaitForSingleObject(hSem, INFINITE);
        printf("%d %d %d %d %d\n", a[0], a[1], a[2], a[3], a[4]);
        ReleaseSemaphore(hSem, 1, NULL);
    }
    return 0;
}

Protected access to variables

There are a number of functions that allow you to work with global variables from any thread without worrying about synchronization, because these functions take care of it themselves: their execution is atomic. These are InterlockedIncrement, InterlockedDecrement, InterlockedExchange, InterlockedExchangeAdd, and InterlockedCompareExchange. For example, InterlockedIncrement atomically increments the value of a 32-bit variable by one, which is convenient for various counters.
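For example, a sketch of a shared counter protected only by InterlockedIncrement (the names are our illustration):

#include <windows.h>

volatile LONG counter = 0;

/* Any thread may call this without additional synchronization:
   the increment is performed as one atomic operation. */
void CountEvent(void)
{
    InterlockedIncrement(&counter);
}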

To obtain complete information about the purpose, use and syntax of all WIN32 API functions, you must use the MS SDK help system included in the Borland Delphi or CBuilder programming environments, as well as MSDN, supplied as part of the Visual C programming system.

Threads can be in one of several states:

    Ready – in the pool of threads awaiting execution;

    Running – executing on the processor;

    Waiting – also called idle or suspended: the thread is waiting, and the wait ends with the thread either starting to execute (the Running state) or entering the Ready state;

    Terminated – execution of all the thread's instructions has completed. The thread can be deleted later; if it is not deleted, the system can reset it to its initial state for later reuse.

Thread synchronization

Running threads often need to communicate in some way. For example, if multiple threads are trying to access some global data, then each thread needs to protect the data from being changed by another thread. Sometimes one thread needs to know when another thread will complete a task. Such interaction is mandatory between threads of both the same and different processes.

Thread synchronization is a general term referring to the process of interaction and interconnection of threads. Note that synchronizing threads requires the operating system itself to act as an intermediary: threads cannot interact with each other without its participation.

In Win32, there are several methods for synchronizing threads. It happens that in a particular situation one method is more preferable than another. Let's take a quick look at these methods.

Critical sections

One method for synchronizing threads is to use critical sections. This is the only thread synchronization method that does not require the Windows kernel. (The critical section is not a kernel object.) However, this method can only be used to synchronize threads of a single process.

A critical section is a section of code that can only be executed by one thread at a time. If the code used to initialize an array is placed in a critical section, then other threads will not be able to enter that section of code until the first thread has finished executing it.

Before using a critical section, you must initialize it using the Win32 API procedure InitializeCriticalSection(), which is defined (in Delphi) as follows:

procedure InitializeCriticalSection(var lpCriticalSection: TRTLCriticalSection); stdcall;

The lpCriticalSection parameter is a record of type TRTLCriticalSection that is passed by reference. The exact definition of the TRTLCriticalSection record does not matter much, since you are unlikely to ever need to look at its contents. All you have to do is pass an uninitialized record in the lpCriticalSection parameter, and the procedure fills it in immediately.

After the record has been filled in, you can create a critical section in the program by placing a portion of its code between calls to the EnterCriticalSection() and LeaveCriticalSection() procedures. These procedures are defined as follows:

procedure EnterCriticalSection(var lpCriticalSection: TRTLCriticalSection); stdcall;

procedure LeaveCriticalSection(var lpCriticalSection: TRTLCriticalSection); stdcall;

The lpCriticalSection parameter passed to these procedures is nothing more than the record initialized by the InitializeCriticalSection() procedure.

The EnterCriticalSection procedure checks whether some other thread is already executing the critical section of its program associated with the given critical-section object. If not, the thread receives permission to execute its critical code, or rather it is not prevented from doing so. If so, the requesting thread is put into a waiting state and a record of the request is made. Because such records need to be kept, the critical-section object is a data structure.

When the LeaveCriticalSection procedure is called by the thread that currently has permission to execute its critical section of code associated with the given critical-section object, the system checks whether another thread is in the queue waiting for that object to be released. If so, the system removes the waiting thread from the waiting state, and it continues its work (in the time slices allocated to it).

When you are finished working with the TRTLCriticalSection record, you must free it by calling the DeleteCriticalSection() procedure, which is defined as follows:

procedure DeleteCriticalSection(var lpCriticalSection: TRTLCriticalSection); stdcall;

Sometimes, when working with multiple threads or processes, it becomes necessary to synchronize the execution of two or more of them. The most frequent reason is that two or more threads may require access to a shared resource that cannot be given to several threads at once. A shared resource is a resource that can be accessed simultaneously by several running tasks.

The mechanism that ensures synchronization is called access restriction. The need for it also arises when one thread is waiting for an event generated by another thread: naturally, there must be some way to suspend the first thread until the event occurs, after which the thread continues its execution.

There are two general states a task can be in. First, the task can be executing (or ready to execute as soon as it gains access to the processor). Second, the task can be blocked; in that case its execution is suspended until the resource it needs is freed or a certain event occurs.

Windows has special services that allow access to shared resources to be restricted in a controlled way, because without the help of the operating system a single process or thread cannot determine by itself whether it has sole access to a resource. The Windows operating system contains a procedure that, as one uninterruptible operation, checks and, if possible, sets a resource-access flag. In the language of operating system developers, this is called a test-and-set operation. The flags used to provide synchronization and to control access to resources are called semaphores. The Win32 API provides support for semaphores and other synchronization objects, and the MFC library also includes support for these objects.

Synchronization objects and MFC classes

The Win32 interface supports four types of synchronization objects, all of them based in one way or another on the concept of a semaphore.

The first type of object is the semaphore itself, or classic (standard) semaphore. It allows a limited number of processes and threads to access a single resource. In this case, access to the resource is either completely limited (one and only one thread or process can access the resource in a certain period of time), or only a small number of threads and processes receive simultaneous access. Semaphores are implemented using a counter whose value decreases when a semaphore is allocated to a task, and increases when the task releases a semaphore.

The second type of synchronization object is the exclusive (mutex) semaphore. It is designed to restrict access to a resource completely, so that only one process or thread can access the resource at any given time. In fact, it is a special case of a semaphore.

The third type of synchronization object is the event, or event object. It is used to block access to a resource until some other process or thread declares that the resource may be used; thus, this object signals that a required event has occurred.

Using the fourth type of synchronization object, you can prohibit several threads from executing certain sections of program code at the same time. To do this, those sections must be declared a critical section. When one thread enters such a section, other threads are prohibited from doing the same until the first thread leaves it.

Critical sections, unlike other types of synchronization objects, are used only to synchronize threads within the same process. Other types of objects can be used to synchronize threads within a process or to synchronize processes.

In MFC, the synchronization mechanism provided by the Win32 interface is supported by the following classes, which are derived from the CSyncObject class:

    CCriticalSection - implements a critical section;

    CEvent - implements an event object;

    CMutex - implements an exclusive semaphore (mutex);

    CSemaphore - implements a classic semaphore.

In addition to these classes, MFC also defines two auxiliary synchronization classes: CSingleLock and CMultiLock. They control access to a synchronization object and contain the methods used to acquire and release such objects. The CSingleLock class controls access to a single synchronization object, and CMultiLock to several objects. In what follows we consider only the CSingleLock class.

Once any synchronization object is created, access to it can be controlled using the class CSingleLock. To do this, you must first create an object of type CSingleLock using the constructor:

CSingleLock(CSyncObject* pObject, BOOL bInitialLock = FALSE);

The first parameter is a pointer to a synchronization object, for example a semaphore. The second parameter determines whether the constructor should attempt to acquire this object: if it is non-zero, the acquisition is attempted, otherwise it is not. If the acquisition is attempted, the thread that created the CSingleLock object may be stopped until the corresponding synchronization object is released by the Unlock method of the CSingleLock class.

Once an object of type CSingleLock has been created, access to the object pointed to by pObject can be controlled using two methods of the CSingleLock class: Lock and Unlock.

The Lock method is intended to gain access to the synchronization object. The thread that calls it is suspended until the method completes, that is, until access to the resource is obtained. A parameter value determines how long the function will wait for access to the required object. Each time the method completes successfully, the value of the counter associated with the synchronization object is decreased by one.

The Unlock method releases the synchronization object, allowing other threads to use the resource. In the first overload of the method, the value of the counter associated with the object is increased by one. In the second overload, the first parameter determines how much this value should be increased, and the second parameter points to the variable into which the previous counter value will be written.

When working with the CSingleLock class, the general procedure for controlling access to a resource is as follows:

    create an object derived from CSyncObject (for example, a semaphore) that will be used to control access to the resource;

    using the created synchronization object, create an object of type CSingleLock;

    to gain access to the resource, call the Lock method;

    access the resource;

    call the Unlock method to release the resource.

The following describes how to create and use semaphores and event objects. Once you understand these concepts, you can easily learn and use two other types of synchronization objects: critical sections and mutexes.

Hello! Today we will continue to consider the features of multi-threaded programming and talk about thread synchronization.

What is "synchronization"? Outside the programming realm, it refers to some kind of setup that allows two devices or programs to work together. For example, a smartphone and a computer can be synchronized with a Google account, and a personal account on a website can be synchronized with accounts on social networks so that you can log in with them. Thread synchronization has a similar meaning: it is setting up how threads interact with each other.

In previous lectures our threads lived and worked separately from each other. One was counting something, the second was sleeping, the third was displaying something on the console, but they did not interact. In real programs such situations are rare: several threads may actively work with, say, the same set of data and change something in it. This creates problems. Imagine that multiple threads write text to the same place, for example a text file or the console. That file or console then becomes a shared resource. The threads do not know of each other's existence, so they simply write whatever they manage to write in the time the thread scheduler allocates to them. In a recent lecture of the course we had an example of what this leads to, let's remember it: the reason lay in the fact that the threads worked with a shared resource, the console, without coordinating their actions with each other. If the thread scheduler allocated time to Thread-1, it instantly wrote everything to the console; what the other threads had or had not yet managed to write did not matter. The result, as you saw, was disastrous.

Therefore, in multi-threaded programming a special concept was introduced: the mutex (from "MUTual EXclusion"). The task of a mutex is to provide a mechanism so that only one thread has access to an object at a given time. If Thread-1 has acquired the mutex of object A, other threads cannot access the object to change anything in it; until object A's mutex is released, the remaining threads are forced to wait.

A real-life example: imagine that you and 10 strangers are taking part in a training session. You need to take turns expressing ideas and discussing things. Since you are seeing each other for the first time, to avoid constantly interrupting one another and sliding into hubbub, you use the "talking ball" rule: only the person holding the ball may speak. This way the discussion turns out adequate and fruitful. A mutex, in essence, is such a ball: if an object's mutex is in the hands of one thread, other threads cannot access the object. You don't need to do anything to create a mutex: it is already built into the Object class, which means every object in Java has one.

How the synchronized operator works

Let's get acquainted with a new keyword: synchronized. It marks a certain piece of our code. If a block of code is marked with the synchronized keyword, the block can be executed by only one thread at a time.

Synchronization can be applied in different ways. For example, you can create an entire synchronized method:

public synchronized void doSomething() {
    // ...method logic
}

Or write a block of code synchronized on some object:

public class Main {

    private Object obj = new Object();

    public void doSomething() {
        synchronized (obj) {
            // ...
        }
    }
}

The meaning is simple: if one thread enters a block of code marked with the word synchronized, it instantly acquires the object's mutex, and all other threads that try to enter the same block or method are forced to wait until the previous thread finishes its work and releases the monitor.

By the way, in the course lectures you have already seen examples of synchronized, but they looked different:

public void swap() {
    synchronized (this) {
        // ...method logic
    }
}

The topic is new to you, and at first there will inevitably be some confusion with the syntax, so remember right away, to avoid getting confused later when writing methods: these two ways of writing mean the same thing:

public void swap() {
    synchronized (this) {
        // ...method logic
    }
}

public synchronized void swap() {
    // ...method logic
}

In the first case you create a synchronized block of code immediately upon entering the method; it is synchronized on the this object, that is, on the current object. In the second case you put the word synchronized on the entire method, and there is no longer any need to explicitly indicate an object on which synchronization is carried out: once the whole method is marked, it will automatically be synchronized for all objects of the class. Let's not delve into a discussion of which way is better; for now, choose what you like best :)

The main thing to remember: a method should be declared synchronized only when all the logic inside it must be executed by one thread at a time. For example, in this case it would be a mistake to make the doSomething() method synchronized:

public class Main {

    private Object obj = new Object();

    public void doSomething() {
        // ...some logic available to all threads

        synchronized (obj) {
            // logic that is available to only one thread at a time
        }
    }
}

As you can see, part of the method contains logic that does not require synchronization. The code there can be executed by several threads simultaneously, and all the critical places are confined to a separate synchronized block.

And one more moment. Let's look under the microscope at our name-swap example from the lecture:

public void swap() {
    synchronized (this) {
        // ...method logic
    }
}

Pay attention: synchronization is carried out on this, that is, on a specific MyClass object. Imagine we have 2 threads (Thread-1 and Thread-2) and just one object, MyClass myClass. In this case, if Thread-1 calls the myClass.swap() method, the object's mutex becomes busy, and Thread-2, when trying to call myClass.swap(), hangs waiting for the mutex to become free. If we have 2 threads and 2 MyClass objects, myClass1 and myClass2, our threads can easily execute the synchronized methods simultaneously on different objects. The first thread executes:

myClass1.swap();

The second executes:

myClass2.swap();

In this case the synchronized keyword inside the swap() method does not affect the operation of the program, since synchronization is performed on a specific object, and here we have 2 objects. The threads do not create problems for each other, because the two objects have 2 different mutexes, and their acquisition is independent.

Features of synchronization in static methods

What should you do if you need to synchronize a static method?

class MyClass {

    private static String name1 = "Olya";
    private static String name2 = "Lena";

    public static synchronized void swap() {
        String s = name1;
        name1 = name2;
        name2 = s;
    }
}

It is unclear what will play the role of the mutex here. After all, we have already established that every object has a mutex, but the problem is that to call the static method MyClass.swap() we don't need any objects: the method is static! So, what next? :/ Actually, there is no problem with this. The creators of Java took care of everything :) If a method containing critical "multithreaded" logic is static, synchronization is carried out on the class. For greater clarity, the above code can be rewritten as:

class MyClass {

    private static String name1 = "Olya";
    private static String name2 = "Lena";

    public static void swap() {
        synchronized (MyClass.class) {
            String s = name1;
            name1 = name2;
            name2 = s;
        }
    }
}

In principle, you could have guessed this yourself: since there are no objects, the synchronization mechanism must somehow be "wired into" the classes themselves. And that is how it is: you can also synchronize on classes.