Asynchronous file I/O. Terms: synchronous and asynchronous data input/output. Example: using a wait timer.

An application programmer doesn't have to think about things like how system programs work with device registers. The system hides details of low-level work with devices from applications. However, the difference between organizing I/O by polling and by interrupts is also reflected at the level of system functions, in the form of functions for synchronous and asynchronous I/O.

Executing a synchronous I/O function involves starting an I/O operation and waiting for that operation to complete. Only after the I/O is complete does the function return control to the calling program.

Synchronous I/O is the most familiar way for programmers to work with devices. Standard programming language input/output routines work this way.

Calling an asynchronous I/O function means only starting the corresponding operation. After this, the function immediately returns control to the calling program without waiting for the operation to complete.

Consider, for example, asynchronous data input. Clearly, the program cannot access the data until it is sure the input is complete. But it is quite possible for the program to do other work in the meantime rather than stand idle waiting.

Sooner or later the program must start working with the input data, but it must first make sure that the asynchronous operation has completed. For this purpose, operating systems provide tools that can be divided into three groups.

· Waiting for the operation to complete. This is like “the second half of a synchronous operation.” The program first started the operation, then performed some extraneous actions, and now waits for the end of the operation, as with synchronous input/output.

· Checking the completion of the operation. In this case, the program does not wait but merely checks the status of the asynchronous operation. If the I/O has not yet completed, the program is free to go off and do other work for a while.

· Assignment of a completion procedure. In this case, when starting an asynchronous operation, the user program gives the system the address of a user procedure or function that the system should call once the operation completes. The program itself need no longer track the progress of the I/O; the system will remind it at the right time by calling the specified function. This method is the most flexible, since the user can perform arbitrary actions in the completion procedure.

In a Windows application, all three ways of completing asynchronous operations are available. The classic UNIX API lacks asynchronous I/O functions, but the same asynchronous effect can be achieved in another way, by running an additional process.

Asynchronous I/O can improve performance in some cases and provides additional functionality. Without its simplest form, "keyboard input without waiting," numerous computer games and simulators would be impossible. At the same time, the logic of a program using asynchronous operations is more complex than that of one using synchronous operations.

What is the connection mentioned above between synchronous/asynchronous operations and the methods of organizing input/output discussed in the previous paragraph? Answer this question yourself.

Asynchronous I/O using multiple threads

Overlapped and extended I/O allow I/O to be performed asynchronously within a single thread, although the OS creates its own threads to support this functionality. Methods of this type were used, in one form or another, in many earlier operating systems to support limited forms of asynchronous operation on single-threaded systems.

However, Windows provides multithreading support, so it is possible to achieve the same effect by performing synchronous I/O operations on multiple, independently running threads. These capabilities were demonstrated earlier with the multithreaded servers and the grepMT program (Chapter 7). Moreover, threads provide a conceptually consistent and presumably much simpler way to perform asynchronous I/O operations. An alternative to the methods used in Programs 14.1 and 14.2 would be to give each thread its own file handle, so that each thread could synchronously process every fourth record.

This way of using threads is demonstrated by the atouMT program, which is not given in the book but is included in the material posted on the Web site. atouMT not only runs under any version of Windows, it is also simpler than either of the two asynchronous I/O versions of the program, since accounting for resource use is less complicated. Each thread simply maintains its own buffers on its own stack and loops through a sequence of synchronous read, convert, and write operations. Program performance nonetheless remains at a fairly high level.

Note

The program atouMT.c, which can be found on the Web site, contains comments about several possible pitfalls that await you when multiple threads are allowed to access the same file simultaneously. In particular, all the individual file handles must be created with the CreateFile function rather than the DuplicateHandle function.

Personally, I prefer to use multi-threaded file processing rather than asynchronous I/O. Threads are easier to program and provide better performance in most cases.

There are two exceptions to this general rule. The first of these, as shown earlier in this chapter, concerns situations in which there may only be one outstanding operation, and a file descriptor can be used for synchronization purposes. A second, more important exception occurs in the case of asynchronous I/O completion ports, which will be discussed at the end of this chapter.


The task that issued the I/O request is placed by the supervisor into the state of waiting for the completion of the ordered operation. When the supervisor receives a message from the completion section that the operation has finished, it puts the task into the ready state, and the task continues its work. This situation corresponds to synchronous I/O, which is standard in most operating systems. To increase the speed of application execution, it was proposed to use asynchronous I/O where necessary.

The simplest version of asynchronous output is so-called buffered output to an external device, in which data from the application is transferred not directly to the I/O device but to a special system buffer. In this case, the output operation is logically considered complete for the application immediately, and the task need not wait for the actual transfer of data to the device to finish. The actual output of data from the system buffer is handled by the I/O supervisor; a special system process, at the supervisor's direction, is responsible for allocating the buffer from the system memory area. So, for the case considered, output will be asynchronous if, first, the I/O request indicated the need for data buffering and, second, the I/O device supports such asynchronous operations and this is noted in its UCB.

Asynchronous data input can be organized as well. To do this, however, it is necessary not only to allocate a memory area for temporary storage of the data read from the device and to associate the allocated buffer with the task that ordered the operation, but also to split the request for the I/O operation into two parts (two requests). The first request specifies the read operation, much as with synchronous I/O. However, the type (code) of the request is different, and the request specifies at least one additional parameter: the name (code) of the system object that the task receives in response to the request and that identifies the allocated buffer. Having received the name of the buffer (we will conventionally call this system object a buffer, although different operating systems use other terms for it, for example, class), the task continues its work.
It is very important to emphasize here that, as a result of an asynchronous data-input request, the task is not placed by the I/O supervisor into the state of waiting for the I/O operation to complete, but remains in the running or ready-to-run state. Some time later, having executed the code the programmer provided, the task issues a second request to complete the I/O operation. In this second request to the same device, which of course has a different code (or request name), the task specifies the name of the system object (the buffer for asynchronous data input) and, if the read operation has succeeded, immediately receives the data from the system buffer. If the data has not yet been completely transferred from the external device to the system buffer, the I/O supervisor switches the task into the state of waiting for the completion of the I/O operation, and from then on everything resembles ordinary synchronous data input.

Typically, asynchronous I/O is provided in most multiprogramming OSes, especially if the OS supports multitasking via a thread mechanism. However, even if there is no explicit asynchronous I/O, you can implement its ideas yourself by organizing an independent thread for data output.

I/O hardware can be considered a collection of hardware "processors" capable of working in parallel with one another and with the central processor(s). So-called external processes run on such "processors." For example, for an external device (an I/O device), an external process might be a set of operations that move the print head, advance the paper one position, change the ink color, or print characters. External processes, using the I/O hardware, interact both with each other and with ordinary "software" processes running on the central processor. An important fact is that the execution speed of external processes differs significantly (sometimes by an order of magnitude or more) from that of ordinary ("internal") processes. For normal operation, external and internal processes must be synchronized. To smooth out the effect of this strong speed mismatch, the buffering mentioned above is used. Thus, we can speak of a system of parallel interacting processes (see Chapter 6).

Buffers are a critical resource with respect to the internal (software) and external processes that interact through them as they run in parallel. Through a buffer (or buffers), data is either sent from some process to an addressed external one (an output operation to an external device) or transferred from an external process to some software process (a read operation). The introduction of buffering as a means of information interaction raises the problem of managing these system buffers, which is solved by the supervisory part of the OS. The supervisor is tasked not only with allocating and freeing buffers in the system memory area, but also with synchronizing processes according to the state of the operations that fill or free the buffers, and with making processes wait when no free buffers are available and an I/O request requires buffering. The I/O supervisor typically uses the standard synchronization tools of the given OS to solve these tasks. Therefore, if an OS has well-developed tools for the parallel execution of interacting applications and tasks, then, as a rule, it also implements asynchronous I/O.

The synchronous I/O model. The read(2) and write(2) system calls and their analogues return control only after the data has been read or written. This often results in the thread becoming blocked.

Note

In reality, it is not that simple. read(2) does have to wait for the data to be physically read from the device, but write(2) operates in lazy-write mode by default: it returns after the data has been transferred to the system buffer, but generally before the data is physically transferred to the device. This usually significantly improves the observed performance of the program and allows the memory holding the data to be reused immediately after write(2) returns. But lazy writing also has significant disadvantages. The main one is that you learn the result of the physical operation not from the return code of write(2) itself, but only some time later, usually from the return code of the next write(2) call. For some applications - transaction monitors, many real-time programs, and so on - this is unacceptable, and they are forced to turn off lazy writing. This is done with the O_SYNC flag, which can be set when the file is opened and changed on an open file by calling fcntl(2).

Synchronization of individual writes can be ensured by calling fsync(2). For many applications involving multiple devices and/or network connections, the synchronous model is inconvenient. Working in polling mode is also not always acceptable. The problem is that select(3C) and poll(2) consider a file descriptor ready for reading only after data has physically appeared in its buffer, yet some devices begin to send data only after they are explicitly asked to do so.

Also, for some applications, especially real-time applications, it is important to know the exact moment when data begins to arrive. For such applications it may also be unacceptable that select(3C) and poll(2) consider regular files always ready for reading and writing. Indeed, the file system reads from disk, and although the disk works much faster than most network connections, accessing it still involves some delay. For hard real-time applications these delays may be unacceptable, yet without an explicit read request the file system does not deliver data!

For hard real-time applications, another aspect of the I/O problem may be significant. Hard RT applications have a higher priority than the kernel, so their execution of system calls - even non-blocking ones! - can lead to priority inversion.

The solution to these problems has been known for a long time and is called asynchronous I/O. In this mode, I/O system calls return control immediately after a request has been submitted to the device driver, typically even before the data has been copied to the system buffer. Forming a request consists of placing an entry (an IRP, Input/Output Request Packet) into a queue. Doing so requires only briefly acquiring the mutex that protects the tail of the queue, so the priority inversion problem is easily overcome. To find out whether the request has completed and, if so, how exactly, and whether the memory holding the data can be reused, a special API is provided (see Fig. 8.1).


Fig. 8.1.

The asynchronous model was the main I/O model in operating systems such as DEC RT-11, DEC RSX-11, VAX/VMS, and OpenVMS. Almost every real-time OS supports this model in one form or another. Unix systems have used several incompatible asynchronous I/O APIs since the late 1980s. In 1993, ANSI/IEEE adopted the POSIX 1003.1b standard, which describes a standardized API that we will explore later in this section.

In Solaris 10, the asynchronous I/O functions reside in the libaio.so library; to build programs that use them, you must pass the -laio flag. Asynchronous I/O requests are generated with the aio_read(3AIO), aio_write(3AIO), and lio_listio(3AIO) functions.

The aio_read(3AIO) and aio_write(3AIO) functions take a single parameter, struct aiocb *aiocbp. The aiocb structure is defined in <aio.h> and contains the following fields:

  • int aio_fildes - file descriptor
  • off_t aio_offset - offset in the file at which the write or read will begin
  • volatile void* aio_buf - buffer into which the data is to be read, or which holds the data to be written
  • size_t aio_nbytes - buffer size. Like the traditional read(2), aio_read(3AIO) can read less data than was requested, but it will never read more
  • int aio_reqprio - request priority
  • struct sigevent aio_sigevent - method of notification that the request has completed (discussed later in this section)
  • int aio_lio_opcode - not used by aio_read(3AIO) and aio_write(3AIO); used only by the lio_listio function

The lio_listio(3AIO) function allows you to issue multiple I/O requests with a single system call. This function has four parameters:

  • int mode - can take the values LIO_WAIT (the function waits for all requests to complete) and LIO_NOWAIT (the function returns control immediately after all the requests have been issued).
  • struct aiocb *list - a list of pointers to aiocb structures describing the requests.

    Requests can be either reads or writes, as determined by the aio_lio_opcode field. Requests to a single descriptor are executed in the order in which they appear in the list array.

  • int nent - the number of entries in the list array.
  • struct sigevent *sig - a way of notifying that all requests have completed. If mode==LIO_WAIT, this parameter is ignored.

The POSIX AIO library provides two ways of notifying a program that a request has completed: synchronous and asynchronous. Let's look at the synchronous method first. The aio_return(3AIO) function returns the status of a request. If the request has already completed successfully, it returns the size of the data read or written, in bytes. Like the traditional read(2), aio_return(3AIO) returns 0 bytes at end of file. If the request failed or has not yet completed, -1 is returned and errno is set. For a request that has not yet completed, the error code is EINPROGRESS.

The aio_return(3AIO) function is destructive: when called on a completed request, it destroys the system object that stores information about the request's status. It is therefore not possible to call aio_return(3AIO) more than once on the same request.

The aio_error(3AIO) function returns the error code associated with the request. If the request completes successfully, 0 is returned, if an error occurs - an error code, for incomplete requests - EINPROGRESS.

The aio_suspend(3AIO) function blocks the thread until one of the specified asynchronous I/O requests completes or until a given period of time expires. This function has three parameters:

  • const struct aiocb *const list[] - an array of pointers to request descriptors.
  • int nent - the number of elements in the list array.
  • const struct timespec *timeout - a timeout specified to nanosecond precision (in practice, to the resolution of the system timer).

The function returns 0 if at least one of the operations listed in list has completed. If the function fails, it returns -1 and sets errno. If the function timed out, it also returns -1, with errno set to EAGAIN.

An example of using asynchronous I/O with synchronous request status checking is given in Example 8.3.

static const char req[] = "GET / HTTP/1.0\r\n\r\n";

int main()
{
    int s;
    ssize_t size;
    static char buf[4096];
    static struct aiocb readrq;
    static const struct aiocb *readrqv[2] = { &readrq, NULL };
    /* Open socket [...] */
    memset(&readrq, 0, sizeof readrq);
    readrq.aio_fildes = s;
    readrq.aio_buf = buf;
    readrq.aio_nbytes = sizeof buf;
    if (aio_read(&readrq)) {
        /* ... */
    }
    write(s, req, (sizeof req) - 1);
    while (1) {
        aio_suspend(readrqv, 1, NULL);
        size = aio_return(&readrq);
        if (size > 0) {
            write(1, buf, size);
            aio_read(&readrq);
        } else if (size == 0) {
            break;
        } else if (errno != EINPROGRESS) {
            perror("reading from socket");
        }
    }
}

Example 8.3. Asynchronous I/O with synchronous checking of request status. The code is abridged: socket opening and error handling are omitted.

Asynchronous notification of the application about the completion of operations consists of generating a signal when an operation completes. To arrange this, you need to make the appropriate settings in the aio_sigevent field of the request descriptor. The aio_sigevent field has type struct sigevent. This structure is defined in <signal.h> and contains the following fields:

  • int sigev_notify - notification mode. Valid values are SIGEV_NONE (do not send notification), SIGEV_SIGNAL (generate a signal when the request completes), and SIGEV_THREAD (run the specified function in a separate thread when the request completes). Solaris 10 also supports the SIGEV_PORT notification type, which is discussed in the appendix to this chapter.
  • int sigev_signo - the number of the signal to be generated when SIGEV_SIGNAL is used.
  • union sigval sigev_value - a parameter to be passed to the signal handler or processing function. When used for asynchronous I/O, this is usually a pointer to the request.

    When SIGEV_PORT is used, it should be a pointer to a port_event_t structure containing the port number and possibly additional data.

  • void (*sigev_notify_function)(union sigval) - the function that will be called when SIGEV_THREAD is used.
  • pthread_attr_t *sigev_notify_attributes - attributes of the thread in which sigev_notify_function will run when SIGEV_THREAD is used.

Not all libaio implementations support SIGEV_THREAD notification. Some Unix systems use the non-standard SIGEV_CALLBACK notification instead. In the remainder of this lecture we will discuss only signal notification.

Some applications use SIGIO or SIGPOLL as the signal number (in Unix SVR4 these are the same signal). SIGUSR1 or SIGUSR2 is also often used; this is convenient because it guarantees that such a signal will not arise for any other reason.

Real-time applications also use signal numbers ranging from SIGRTMIN to SIGRTMAX. Some implementations allocate a special signal number SIGAIO or SIGASYNCIO for this purpose, but there is no such signal in Solaris 10.

Of course, before executing asynchronous requests with signal notification, you should install a handler for that signal. The notification must use a signal handled in SA_SIGINFO mode. Such a handler cannot be installed with the signal(2) or sigset(2) calls; you must use sigaction(2).

I/O control.

Block-oriented and byte-oriented devices

main idea

The key principle is device independence

· Interrupt handling,

· Device Drivers,

It seems clear that a wide variety of interrupts can occur for a variety of reasons. Therefore, a number is associated with each interrupt - the so-called interrupt number.

This number uniquely corresponds to a particular event. The system can recognize interrupts and, when they occur, launch the procedure corresponding to the interrupt number.

Some interrupts (the first five in numerical order) are reserved for use by the central processor in case of special events such as an attempt to divide by zero, overflow, and so on (these are true internal interrupts).

Hardware interrupts always occur asynchronously with respect to the running programs. In addition, several interrupts can occur simultaneously!

To ensure that the system does not get confused when deciding which interrupt to service first, there is a special priority scheme. Each interrupt is assigned its own priority. If multiple interrupts occur simultaneously, the system gives priority to the one with the highest priority, deferring the processing of the remaining interrupts for a while.

The priority scheme is implemented on two Intel 8259 (or similar) chips. Each chip is an interrupt controller serving up to eight priority levels. The chips can be cascaded to increase the number of priority levels in the system.

The priority levels are abbreviated IRQ0 - IRQ15.


24. I/O control. Synchronous and asynchronous I/O.

One of the main functions of an OS is to manage all of the computer's input/output devices. The OS must send commands to devices, catch interrupts, and handle errors; it must also provide an interface between the devices and the rest of the system. For development purposes, the interface should be the same for all device types (device independence). For more on I/O control, see question 23.

Principles of protection

Since the UNIX OS was conceived from its very inception as a multi-user operating system, the problem of authorizing different users' access to the files of the file system has always been relevant. By access authorization we mean system actions that allow or deny a given user access to a given file, depending on the user's access rights and the access restrictions set for the file. The access authorization scheme used in the UNIX OS is so simple and convenient, and at the same time so powerful, that it has become the de facto standard for modern operating systems (those that do not claim to be systems with multi-level protection).

File protection

As is customary in a multi-user operating system, UNIX maintains a uniform access control mechanism for files and file system directories. Any process can access a file if and only if the access rights specified for the file match the capabilities of the process.

Protecting files from unauthorized access in UNIX is based on three facts. First, any process that creates a file (or directory) is associated with a user identifier unique within the system (UID, User Identifier), which can further be interpreted as the identifier of the owner of the newly created file. Second, each process attempting to gain some access to a file has a pair of identifiers associated with it: the current user and group identifiers. Third, each file is uniquely associated with its descriptor, the i-node.

The last fact is worth dwelling on in more detail. It is important to understand that file names and the files themselves are not the same thing. In particular, when there are several hard links to the same file, several file names actually represent the same file and are associated with the same i-node. Any i-node used in a file system always corresponds to one and only one file. An i-node contains a great deal of information (most of it available to users through the stat and fstat system calls), and part of this information allows the file system to evaluate the right of a given process to access the given file in the required mode.

The general principles of protection are the same for all existing versions of the system: the i-node information includes the UID and GID of the current owner of the file (immediately after the file is created, its current owner identifiers are set to the corresponding effective identifiers of the creating process, but they can later be changed by the chown and chgrp system calls). In addition, the file's i-node stores a set of permission bits that indicates what the user who owns the file can do with it, what users belonging to the same group as the owner can do with it, and what all other users can do with it. Small implementation details vary among the different variants of the system.

28. Managing access to files in Windows NT. Lists of access rights.

The access control system in Windows NT is characterized by a high degree of flexibility, which is achieved due to the wide variety of access subjects and objects, as well as the granularity of access operations.

File access control

For shared resources in Windows NT, a common object model is used, which contains such security characteristics as a set of allowed operations, an owner identifier, and an access control list.

Objects in Windows NT are created for any resources when they are or become shared - files, directories, devices, memory sections, processes. The characteristics of objects in Windows NT are divided into two parts - a general part, the composition of which does not depend on the type of object, and an individual part, determined by the type of object.
All objects are stored in hierarchical tree structures whose elements are branch objects (directories) and leaf objects (files). For file system objects, this relationship scheme is a direct reflection of the hierarchy of directories and files. For objects of other types, the hierarchical relationship scheme has its own content; for example, for processes it reflects parent-child relationships, while for devices it reflects membership in a certain type of device and the connections of the device with other devices, for example, of a SCSI controller with disks.

Checking access rights for objects of any type is performed centrally using the Security Reference Monitor running in privileged mode.

The Windows NT security system is characterized by a large number of different predefined (built-in) access subjects - both individual users and groups. Thus, the system always contains such users as Administrator, System, and Guest, as well as the groups Users, Administrators, Account Operators, Server Operators, Everyone, and others. The point of these built-in users and groups is that they are endowed with certain rights, making it easier for the administrator to build an effective access control system. When adding a new user, the administrator often only needs to decide which group or groups to assign the user to. Of course, an administrator can create new groups, as well as add rights to built-in groups to implement his own security policy, but in many cases the built-in groups are quite sufficient.

Windows NT supports three classes of access operations, which differ in the type of subjects and objects involved in these operations.

□ Permissions are a set of operations that can be defined for subjects of all types in relation to objects of any type: files, directories, printers, memory sections, etc. In purpose, permissions correspond to access rights to files and directories in the UNIX OS.

□ Rights (user rights) are defined for subjects of the group type and authorize certain system operations: setting the system time, archiving files, shutting down the computer, etc. These operations involve a special access object - the operating system as a whole.

It is primarily rights, not permissions, that differentiate one built-in user group from another. Some rights for a built-in group are also built-in - they cannot be removed from this group. Other rights of the built-in group can be deleted (or added from the general list of rights).

□ User abilities are determined for individual users to perform actions related to the formation of their operating environment, for example, changing the composition of the main program menu, the ability to use the Run menu item, etc. By reducing the set of capabilities (which are available to the user by default), the administrator can force the user to work with the operating environment that the administrator considers most suitable and protects the user from possible errors.

The rights and permissions granted to a group are automatically granted to its members, allowing the administrator to treat a large number of users as a single accounting unit and thereby minimize his own actions.

When a user logs into the system, a so-called access token is created for him, which includes the user ID and the IDs of all groups to which the user belongs. The token also contains a default access control list (ACL), which consists of permissions and is applied to objects created by the process, and the list of the user's rights to perform system actions.

All objects, including files, threads, events, even access tokens, are provided with a security descriptor when they are created. The security descriptor contains an access control list - ACL.

A file descriptor is a non-negative integer assigned by the OS to a file opened by the process.

An ACL (Access Control List) determines who or what can access a specific object, and which operations that subject is allowed or forbidden to perform on the object.

Access control lists are the basis of discretionary access control systems. (Wikipedia)

The owner of an object, typically the user who created it, has discretionary control over access to the object and can change the object's ACL to allow or deny others access to it. The built-in Windows NT administrator, unlike the UNIX superuser, may lack some permissions on an object. To implement this feature, the administrator and administrator-group IDs can be included in an ACL just like ordinary user IDs. However, the administrator can still perform any operation on any object, since he can always become the object's owner and then, as owner, obtain the full set of permissions. The administrator cannot, however, return ownership to the previous owner of the object, so the user can always discover that the administrator has worked with his file or printer.

When a process requests an operation to access an object in Windows NT, control always passes to the Security Reference Monitor, which compares the user and group identifiers from the access token with the identifiers stored in the object's ACL entries. Unlike UNIX, ACL entries in Windows NT can contain both lists of allowed and lists of prohibited operations for a user.
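The check just described can be sketched as follows. This is an illustrative model, not the real Windows API: the names `check_access`, the SID strings, and the tuple-based ACL layout are all invented for the example; the one NT-like detail it preserves is that deny entries take effect before allow entries.

```python
# Illustrative sketch of an ACL check (not the actual Windows NT routine).
# An access token carries the user SID and group SIDs; each ACL entry
# either allows or denies a set of operations for one SID.

def check_access(token_sids, requested_ops, acl):
    """Return True if every requested operation is allowed and none is denied.

    acl is an ordered list of (sid, kind, ops) tuples, where kind is
    'allow' or 'deny'. Matching deny entries reject the request outright,
    mirroring NT's convention of placing deny ACEs ahead of allow ACEs.
    """
    granted = set()
    for sid, kind, ops in acl:
        if sid not in token_sids:
            continue                  # entry is for a different subject
        if kind == 'deny' and requested_ops & ops:
            return False              # any matching deny entry rejects the request
        if kind == 'allow':
            granted |= ops            # allow entries accumulate permissions
    return requested_ops <= granted

acl = [
    ('sid-guests', 'deny',  {'write'}),
    ('sid-users',  'allow', {'read', 'write'}),
]
assert check_access({'sid-users'}, {'read', 'write'}, acl)
# A user who is also in the guests group is denied 'write':
assert not check_access({'sid-users', 'sid-guests'}, {'write'}, acl)
```

Note that membership in an extra group can only restrict access here, because a single matching deny entry overrides any number of allow entries.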

Windows NT clearly defines the rules by which an ACL is assigned to a newly created object. If the calling code, when creating an object, explicitly specifies all access rights to the newly created object, then the security system assigns this ACL to the object.

If the calling code does not supply the object with an ACL, and the object has a name, then the principle of permission inheritance applies. The security system looks at the ACL of the object directory in which the name of the new object is stored. Some of the object directory ACL entries can be marked as inheritable. This means that they can be assigned to new objects created in this directory.

In the case where a process has not explicitly specified an ACL for the object being created, and the directory object does not have inheritable ACL entries, the default ACL from the process's access token is used.
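The three-step rule for choosing the ACL of a newly created object can be sketched like this. The function name and the dictionary layout are hypothetical, invented for illustration; only the order of precedence (explicit ACL, then inheritable directory entries, then the token's default ACL) comes from the text above.

```python
# Illustrative sketch of how Windows NT picks an ACL for a new object
# (names and data layout are invented; the precedence order is the point).

def acl_for_new_object(explicit_acl, directory_acl, token_default_acl):
    # 1. An ACL supplied explicitly by the creating code wins.
    if explicit_acl is not None:
        return explicit_acl
    # 2. Otherwise, inheritable entries of the containing directory apply.
    inherited = [e for e in (directory_acl or []) if e.get('inherit')]
    if inherited:
        return inherited
    # 3. Otherwise, the default ACL from the process access token is used.
    return token_default_acl

dir_acl = [{'sid': 'sid-users',  'ops': {'read'},  'inherit': True},
           {'sid': 'sid-admins', 'ops': {'write'}, 'inherit': False}]
default_acl = [{'sid': 'sid-owner', 'ops': {'read', 'write'}}]

# Only the inheritable directory entry is passed on:
assert acl_for_new_object(None, dir_acl, default_acl) == [dir_acl[0]]
# With no inheritable entries, the token's default ACL applies:
assert acl_for_new_object(None, [], default_acl) == default_acl
```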


29. Java programming language. Java Virtual Machine. Java technology.

Java is an object-oriented programming language developed by Sun Microsystems. Java applications are typically compiled into platform-independent bytecode, so they can run on any Java Virtual Machine (JVM), regardless of computer architecture. Java programs are translated into bytecode that is executed by the JVM - a program that processes the bytecode and passes instructions to the hardware like an interpreter, with the difference that bytecode, unlike program text, is processed much faster.

The advantage of this method of executing programs is the complete independence of the bytecode from the operating system and hardware, which allows Java applications to run on any device for which a corresponding virtual machine exists. Another important feature of Java technology is its flexible security system, made possible by the fact that program execution is completely controlled by the virtual machine. Any operation that exceeds the program's established permissions (for example, an attempt to access data without authorization or to connect to another computer) causes an immediate interruption.

A commonly cited disadvantage of the virtual machine concept is that executing bytecode on a virtual machine can reduce the performance of programs and algorithms implemented in Java.

The Java Virtual Machine (abbreviated Java VM or JVM) is the main part of the Java runtime system, the so-called Java Runtime Environment (JRE). The Java Virtual Machine interprets and executes Java bytecode, pre-generated from the source code of a Java program by the Java compiler (javac). The JVM can also be used to run programs written in other programming languages. For example, Ada source code can be compiled into Java bytecode, which can then be executed by the JVM.

The JVM is a key component of the Java platform. Since Java virtual machines are available for many hardware and software platforms, Java can be considered both as middleware and as a platform in its own right. The use of a single bytecode across multiple platforms is what underlies the principle "write once, run anywhere."

Runtime environment

Programs intended to run on the JVM must be compiled in a standardized portable binary format, which is usually represented as .class files. A program can consist of many classes located in different files. To make it easier to host large programs, some .class files can be packaged together into a so-called .jar file (short for Java Archive).

The JVM executes .class or .jar files by emulating the instructions written for it, either by interpretation or by using a just-in-time (JIT) compiler such as HotSpot from Sun Microsystems. These days, JIT compilation is used in most JVMs to achieve greater speed.

Like most virtual machines, the Java Virtual Machine has a stack-oriented architecture similar to microcontrollers and microprocessors.
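A stack-oriented machine keeps its operands on a stack rather than in named registers. The toy interpreter below illustrates this execution model in the spirit of the JVM; the opcode names and tuple encoding are invented for the example (real JVM opcodes, frames, and the class-file format are far richer).

```python
# A toy stack-oriented bytecode interpreter, sketching the JVM's
# execution model (opcode names here are illustrative, not real JVM ones).

def run(bytecode):
    stack = []
    for op, *args in bytecode:
        if op == 'push':
            stack.append(args[0])      # cf. iconst/bipush: push a constant
        elif op == 'add':
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)        # cf. iadd: both operands come from the stack
        elif op == 'mul':
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)        # cf. imul
        else:
            raise ValueError(f'unknown opcode: {op}')
    return stack.pop()                 # result is left on top of the stack

# (2 + 3) * 4 compiled to stack code:
program = [('push', 2), ('push', 3), ('add',), ('push', 4), ('mul',)]
assert run(program) == 20
```

Because every instruction takes its operands implicitly from the stack, the bytecode needs no register names, which is one reason such a format is compact and easy to generate for any target architecture.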

A JVM instance, created within the JRE (Java Runtime Environment), comes into play when a Java program is executed; after execution completes, the instance is disposed of. JIT is the part of the Java Virtual Machine used to speed up application execution: it compiles related parts of the bytecode together rather than instruction by instruction, thereby reducing the total time spent on compilation.

J2SE (Java 2 Standard Edition) - the standard library, which includes GUI, networking, database access, and other APIs.


30. .NET Platform. Main ideas and provisions. .NET programming languages.

The .NET Framework is a software technology from Microsoft designed for creating both conventional programs and web applications.

One of the main ideas of Microsoft .NET is the interoperability of services written in different languages. For example, a service written in C++ for Microsoft .NET might call a class method from a library written in Delphi; in C# you can write a class inherited from a class written in Visual Basic .NET; and an exception thrown by a method written in C# can be caught and handled in Delphi. Each library (assembly) in .NET carries information about its version, which makes it possible to eliminate potential conflicts between different versions of assemblies.

Applications can also be developed in a text editor and use a console compiler.

Like Java technology, the .NET development environment produces bytecode for execution by a virtual machine. The input language of this machine in .NET is called MSIL (Microsoft Intermediate Language), or CIL (Common Intermediate Language, a later name), or simply IL.

The use of bytecode makes it possible to achieve cross-platform portability at the level of the compiled project (in .NET terms, an assembly), and not only at the source-code level, as, for example, in C. Before an assembly is started in the CLR runtime, its bytecode is converted into machine code of the target processor by the JIT (just-in-time, on-the-fly) compiler built into the environment. It is also possible to compile an assembly into native code for a selected platform in advance, using the NGen.exe utility supplied with the .NET Framework.

During translation, the source code of the program (written in SML, C#, Visual Basic, C++, or any other programming language supported by .NET) is converted by the compiler into a so-called assembly and saved as a dynamic-link library file (Dynamic Link Library, DLL) or an executable file (Executable, EXE).

Naturally, for each compiler (be it the C# compiler, csc.exe, or the Visual Basic compiler, vbc.exe), the runtime performs the necessary mapping of the types used onto CTS types, and of the program code onto the code of the .NET "abstract machine" - MSIL (Microsoft Intermediate Language).

Ultimately, the software project takes the form of an assembly - a self-sufficient component for deployment, replication, and reuse. An assembly is identified by the author's digital signature and a unique version number.

Built-in programming languages (included with the .NET Framework):

C#; J#; VB.NET; JScript .NET; C++/CLI - a new version of C++ (Managed C++).


31. Functional components of the OS. File management

Functional OS components:

The functions of the operating system of a stand-alone computer are typically grouped either according to the types of local resources that the OS manages or according to specific tasks that apply to all resources. Sometimes such groups of functions are called subsystems. The most important resource management subsystems are the process, memory, file, and external device management subsystems; the subsystems common to all resources are the user interface, data protection, and administration subsystems.

File management:

The ability of the OS to “shield” the complexities of real hardware is very clearly manifested in one of the main OS subsystems - the file system.

The file system connects storage media on one side with an API (application programming interface) for accessing files on the other. When an application program accesses a file, it has no idea how the information in that file is laid out, nor what type of physical medium (CD, hard disk, magnetic tape, or flash memory) it is recorded on. All the program knows is the file name, its size, and its attributes; it receives this data from the file system driver. It is the file system that determines where and how the file will be written on the physical medium (for example, a hard drive).
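This separation is easy to see from the application side. The sketch below uses Python's portable os interface: the program obtains the file's size through the API and never learns anything about the medium or cluster layout underneath.

```python
# What an application sees through the file API: name, size, attributes -
# with no reference to the physical medium. A minimal sketch using
# Python's standard library.
import os
import tempfile

with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b'hello')
    path = f.name

info = os.stat(path)        # the file system driver answers this call
assert info.st_size == 5    # size in bytes; where the clusters live is hidden
os.remove(path)
```

The same call works whether `path` lives on a hard disk, a flash drive, or a network file system, which is precisely the shielding the text describes.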

From the operating system's point of view, the entire disk is a collection of clusters, typically 512 bytes or larger. File system drivers organize clusters into files and directories (which are actually files containing lists of the files in those directories). The same drivers keep track of which clusters are currently in use, which are free, and which are marked as faulty.
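The bookkeeping of used, free, and faulty clusters can be sketched as a simple allocation map. The class below is a toy model invented for illustration; real drivers keep this information in on-disk structures such as bitmaps or the FAT.

```python
# Toy free-cluster map of the kind a file system driver maintains
# (real drivers use persistent on-disk bitmaps or allocation tables).

class ClusterMap:
    def __init__(self, n):
        self.state = ['free'] * n      # each cluster: 'free' | 'used' | 'bad'

    def allocate(self):
        for i, s in enumerate(self.state):
            if s == 'free':
                self.state[i] = 'used'  # first-fit allocation
                return i
        raise OSError('disk full')

    def free(self, i):
        self.state[i] = 'free'

cm = ClusterMap(4)
cm.state[3] = 'bad'                     # marked faulty - never allocated
a, b = cm.allocate(), cm.allocate()
assert (a, b) == (0, 1)
cm.free(a)
assert cm.allocate() == 0               # freed cluster is reused
```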

However, the file system is not necessarily directly associated with the physical storage medium. There are virtual file systems, as well as network file systems, which are just a way to access files located on a remote computer.

In the simplest case, all files on a given disk are stored in one directory. This single-level scheme was used in CP/M and the first version of MS-DOS 1.0. The hierarchical file system with nested directories first appeared in Multics, then in UNIX.

The directories of different drives can form several separate trees, as in DOS/Windows, or be merged into a single tree common to all disks, as in UNIX-like systems.

In fact, in DOS/Windows systems, as in UNIX-like systems, there is one root directory, with subdirectories named "c:", "d:", etc., in which hard disk partitions are mounted; that is, c:\ is just a link to file:///c:/. However, unlike UNIX-like file systems, Windows prohibits writing to this root directory and viewing its contents.

In UNIX, there is only one root directory, and all other files and directories are nested under it. To access files and directories on a disk, the disk must be mounted using the mount command. For example, to open files on a CD you need, in simple terms, to tell the operating system: "take the file system on this CD and show it in the /mnt/cdrom directory." All files and directories on the CD will then appear in this /mnt/cdrom directory, which is called the mount point. On most UNIX-like systems, removable disks (floppy disks and CDs), flash drives, and other external storage devices are mounted under the /mnt, /mount, or /media directory. UNIX and UNIX-like operating systems can also mount disks automatically when the system boots.

Note the different slashes used in file names: Windows uses the backslash "\", while UNIX and UNIX-like operating systems use the forward slash "/".
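The difference is directly visible in Python's path modules, where ntpath implements the Windows conventions and posixpath the UNIX ones:

```python
# Windows vs. UNIX path separators, via Python's standard path modules.
import ntpath
import posixpath

win = ntpath.join('C:\\', 'Users', 'doc.txt')   # Windows-style joining
nix = posixpath.join('/', 'home', 'doc.txt')    # UNIX-style joining

assert win == 'C:\\Users\\doc.txt'   # backslash separators
assert nix == '/home/doc.txt'        # forward-slash separators
```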

It should also be noted that the scheme described above allows you to mount not only the file systems of physical devices, but also individual directories (the --bind option) or, for example, an ISO image (the loop option). Add-ons such as FUSE also make it possible to mount, for example, an entire directory residing on an FTP server, and a great variety of other resources.

An even more complex structure is used in NTFS and HFS. In these file systems, each file is a set of attributes. The attributes include not only the traditional read-only and system flags, but also the file name, the size, and even the content. In NTFS and HFS, what is stored in a file is thus just one of its attributes.

Following this logic, one file can contain several variations of content. Thus, several versions of the same document can be stored in one file, as well as additional data (file icon, program associated with the file). This organization is typical for HFS on the Macintosh.


32. Functional components of the OS. Process management.

Process management:

The most important part of the operating system, which directly affects the functioning of the computer, is the process control subsystem. A process (or in other words, a task) is an abstraction that describes a running program. For the operating system, a process is a unit of work, a request to consume system resources.

In a multitasking (multiprocess) system, a process can be in one of three main states:

RUNNING - the active state of a process, during which the process has all the necessary resources and is directly executed by the processor;

WAITING - the passive state of a process, the process is blocked, it cannot be executed for its own internal reasons, it is waiting for some event to occur, for example, the completion of an I/O operation, receiving a message from another process, or the release of some resource it needs;

READY is also a passive state of the process, but in this case the process is blocked due to circumstances external to it: the process has all the resources required for it, it is ready to execute, but the processor is busy executing another process.

During its life cycle, each process moves from one state to another in accordance with the process scheduling algorithm implemented in the given operating system.
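The three states and the legal transitions between them can be sketched as a small state machine. The transition names (dispatch, preempt, wait, event) are descriptive labels invented for this sketch, not terminology from any particular OS.

```python
# The three process states and the legal transitions between them.
# Transition names are illustrative, not from a specific operating system.

TRANSITIONS = {
    ('READY',   'dispatch'): 'RUNNING',   # scheduler gives the process the CPU
    ('RUNNING', 'preempt'):  'READY',     # time slice expired; CPU taken away
    ('RUNNING', 'wait'):     'WAITING',   # blocked on I/O, a message, a resource
    ('WAITING', 'event'):    'READY',     # the awaited event has occurred
}

def step(state, event):
    try:
        return TRANSITIONS[(state, event)]
    except KeyError:
        raise ValueError(f'illegal transition: {state} on {event}')

# A typical life-cycle fragment: run, block on I/O, resume, get preempted.
s = 'READY'
for ev in ('dispatch', 'wait', 'event', 'dispatch', 'preempt'):
    s = step(s, ev)
assert s == 'READY'
```

Note that there is deliberately no WAITING-to-RUNNING edge: an unblocked process first becomes READY and must be dispatched by the scheduler, which matches the definitions above.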

CP/M standard

The creation of operating systems for microcomputers began with CP/M. It was developed in 1974 and was subsequently installed on many 8-bit machines. A significant amount of software was created for this operating system, including translators for BASIC, Pascal, C, Fortran, Cobol, Lisp, Ada, and many other languages, as well as text editors, which allow documents to be prepared much faster and more conveniently than on a typewriter.

MSX standard

This standard defined not only the OS but also the hardware characteristics of school PCs. According to the MSX standard, a machine had to have at least 16 KB of RAM, 32 KB of permanent memory with a built-in BASIC interpreter, a color graphic display with a resolution of 256x192 pixels and 16 colors, a three-channel 8-octave sound generator, a parallel port for connecting a printer, and a controller for an externally connected disk drive.

The operating system of such a machine had to have the following properties: a memory requirement of no more than 16 KB, compatibility with CP/M at the level of system calls, compatibility with DOS in the file formats used on external floppy-disk drives, and support for translators of the BASIC, C, Fortran, and Lisp languages.

P-system

In the early period of personal computer development, the UCSD p-System operating system was created. The basis of this system was the so-called P-machine - a program emulating a hypothetical universal computer. The P-machine simulates the operation of a processor, memory, and external devices by executing special instructions called P-code. The system components of the p-System (including the compilers) are written in P-code, and application programs are likewise compiled into P-code. The main distinctive feature of the system was thus its minimal dependence on the features of the PC hardware, which is what ensured the portability of the p-System across various types of machines. The compactness of P-code and a conveniently implemented paging mechanism made it possible to execute relatively large programs on PCs with small amounts of RAM.

I/O control.

I/O devices are divided into two types: block-oriented and byte-oriented. Block-oriented devices store information in fixed-size blocks, each with its own address; the most common block-oriented device is the disk. Byte-oriented devices are not addressable and do not allow seek operations; they generate or consume a sequence of bytes. Examples are terminals, line printers, and network adapters.

The electronic component of a device is called a device controller or adapter, and it is the controller that the operating system deals with. The controller performs simple functions and monitors and corrects errors. Each controller has several registers used to communicate with the central processor, and the OS performs I/O by writing commands into these registers. The IBM PC floppy disk controller, for example, accepts 15 commands, such as READ, WRITE, SEEK, and FORMAT. Once a command has been accepted, the processor leaves the controller to carry it out and turns to other work. When the command completes, the controller issues an interrupt to transfer control of the processor to the operating system, which must check the results of the operation; the processor obtains the results and the device status by reading information from the controller registers.
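The command/status/interrupt cycle described above can be modeled in miniature. The class below is a simulation invented for illustration - the register names are made up, the "interrupt" is just a callback, and the command completes instantly instead of concurrently - but the protocol (write a command register, get an interrupt, read status and data registers) follows the text.

```python
# A simulated device controller: the OS writes a command into a register;
# the controller signals completion with an "interrupt" (here, a callback)
# and reports results through its registers. All names are illustrative.

class DiskController:
    def __init__(self):
        self.command = None     # command register, written by the OS
        self.status = 'idle'    # status register, read by the OS
        self.data = None        # data register

    def write_command(self, cmd, on_interrupt):
        self.command, self.status = cmd, 'busy'
        # A real controller works concurrently while the CPU does other
        # work; this toy model completes the command immediately.
        if cmd == 'READ':
            self.data, self.status = b'sector-0', 'done'
        elif cmd == 'WRITE':
            self.status = 'done'
        else:
            self.status = 'error'
        on_interrupt(self)      # "interrupt": hand control back to the OS

results = []
ctl = DiskController()
# The OS issues READ, then inspects the registers in its interrupt handler:
ctl.write_command('READ', lambda c: results.append((c.status, c.data)))
assert results == [('done', b'sector-0')]
```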

The main idea in organizing I/O software is to divide it into several layers, with the lower layers shielding the upper ones from the peculiarities of the hardware, and the upper layers providing a convenient interface for users.

The key principle is device independence: a program should not depend on whether it reads data from a floppy disk or from a hard drive. Another important issue for I/O software is error handling. Generally speaking, errors should be handled as close to the hardware as possible: if the controller detects a read error, it should try to correct it; if it fails, the device driver should try to fix the error; and only if the lower level cannot cope with an error does it report it to the upper level.

Another key issue is the use of blocking (synchronous) and non-blocking (asynchronous) transfers. Most physical I/O operations are performed asynchronously - the processor starts a transfer and moves on to other work until an interrupt occurs. For user programs, however, it is convenient for I/O operations to be blocking - after a READ call, the program is automatically suspended until the data arrives in its buffer.

The last problem is that some devices are shared (disks: multiple users accessing a disk at the same time is not a problem), while others are dedicated (printers: lines printed by different users must not be interleaved).

To solve these problems, it is advisable to divide the I/O software into four layers (Figure 2.30):

· Interrupt handling,

· Device drivers,

· Device-independent layer of the operating system,

· User-level software layer.

The concept of hardware interrupt and its processing.

Asynchronous, or external (hardware), interrupts are events that come from external sources (for example, peripheral devices) and can occur at any arbitrary moment: a signal from a timer, network card, or disk drive; the pressing of keyboard keys; movement of the mouse. They require an immediate reaction (handling).

Almost all input/output in a computer operates using interrupts. In particular, when you press a key or click the mouse, the hardware generates an interrupt; in response, the system reads the code of the pressed key or records the coordinates of the mouse cursor. Interrupts are also generated by the disk controller, the local network adapter, serial ports, the audio adapter, and other devices.
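UNIX signals are a software-level analogy of this mechanism: an asynchronous event forces control into a previously registered handler, just as a hardware interrupt forces control into an interrupt service routine. The sketch below uses Python's signal module; it assumes a POSIX system (SIGUSR1 is unavailable on Windows) and Python 3.8+ for raise_signal.

```python
# Signal handling as a software analogy of a hardware interrupt
# (POSIX only; signal.raise_signal requires Python 3.8+).
import signal

events = []

def handler(signum, frame):
    events.append(signum)        # the "interrupt service routine"

signal.signal(signal.SIGUSR1, handler)   # install the handler
signal.raise_signal(signal.SIGUSR1)      # deliver the "interrupt" to ourselves
assert events == [signal.SIGUSR1]        # control passed through the handler
```

As with a hardware interrupt, the main flow of the program does not poll for the event; the runtime transfers control to the handler when the signal arrives.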