Computer and Telecommunication Networks


Keyboard

The keyboard is a control device for a personal computer. It is used to enter alphanumeric data as well as control commands. The combination of a monitor and a keyboard provides the simplest user interface.

Keyboard functions do not need to be supported by special system programs (drivers). The software needed to start working with the computer is already contained in the read-only memory (ROM) chip of the basic input/output system, so the computer responds to keystrokes as soon as it is turned on.

A standard keyboard has more than 100 keys, functionally distributed across several groups.

The group of alphanumeric keys is intended for entering character information and commands typed letter by letter. Each key can operate in several modes (shift states) and, accordingly, can be used to enter several characters.

The function key group includes twelve keys located at the top of the keyboard. The functions assigned to these keys depend on the specific program running at the moment and, in some cases, on the properties of the operating system. It is a common convention in most programs that the F1 key calls up the help system, where you can find information about the actions of the other keys.

Service keys are located next to the keys of the alphanumeric group. Because they are used frequently, they are larger than the others. These include the keys SHIFT, ENTER, ALT, CTRL, TAB, ESC, BACKSPACE and others.

Two groups of cursor keys are located to the right of the alphanumeric pad.

The group of keys on the additional panel duplicates the action of the numeric and some symbol keys of the main panel. The additional keypad appeared in the early 1980s, when keyboards were relatively expensive devices. Its original purpose was to reduce wear on the main panel during cash and settlement calculations, as well as when controlling computer games. Nowadays keyboards are inexpensive, quickly worn-out items, and there is no longer a significant need to protect them from wear.

The mouse is a manipulator-type control device. It is a flat box with two or three buttons. Moving the mouse over a flat surface is synchronized with the movement of a graphic object (the mouse pointer) on the monitor screen.

Unlike the keyboard, the mouse is not a standard control device, and a personal computer does not have a dedicated port for it. There is no permanently assigned interrupt for the mouse, and the basic input/output facilities contain no software for handling mouse interrupts. Because of this, the mouse does not work in the first moments after the computer is turned on: it requires the support of a special system program, the mouse driver. The mouse driver interprets the signals arriving through the port. In addition, it provides a mechanism for transmitting information about the position and state of the mouse to the operating system and running programs.



The computer is controlled by moving the mouse across a plane and briefly pressing the right and left buttons (clicks). Unlike the keyboard, the mouse cannot be used directly to enter character information: its control principle is event-based. Mouse movements and button clicks are events from the point of view of the driver program. By analyzing these events, the driver determines when each event occurred and where the pointer was located on the screen at that moment. This data is passed to the application program the user is currently working with. Based on this data, the program can determine the command the user had in mind and begin executing it.
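The event-based flow described above can be sketched in a few lines: a hypothetical "driver" records what happened, when, and where the pointer was, and the application maps the pointer position to a control. All class, handler and button names here are illustrative assumptions, not any real driver API.

```python
import time

# Minimal sketch of event-based mouse handling (all names are illustrative).
class MouseDriver:
    def __init__(self):
        self.handlers = []            # application callbacks

    def subscribe(self, handler):
        self.handlers.append(handler)

    def report(self, event, x, y):
        # Record what happened, when, and where the pointer was,
        # then hand the data to every interested application.
        record = {"event": event, "x": x, "y": y, "time": time.time()}
        for handler in self.handlers:
            handler(record)

# Application side: decide which command the user "had in mind"
# from the pointer position at the moment of the click.
buttons = {"OK": (0, 0, 100, 30), "Cancel": (110, 0, 210, 30)}
clicked = []

def on_event(record):
    if record["event"] == "left_click":
        for name, (x1, y1, x2, y2) in buttons.items():
            if x1 <= record["x"] <= x2 and y1 <= record["y"] <= y2:
                clicked.append(name)

driver = MouseDriver()
driver.subscribe(on_event)
driver.report("left_click", 50, 15)    # inside the OK button
driver.report("left_click", 300, 15)   # outside any control
print(clicked)                          # ['OK']
```

The driver itself knows nothing about buttons or commands; it only reports coordinates and times, exactly as in the description above.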

The combination of a monitor and a mouse provides the most modern type of user interface, the graphical one. The user observes graphical objects and controls on the screen; using the mouse, he changes the properties of objects and activates the controls of the computer system, and via the monitor he receives the system's response in graphical form.

Adjustable mouse parameters include: sensitivity (the distance the pointer moves on the screen for a given linear movement of the mouse), the functions of the right and left buttons, and double-click sensitivity (the maximum time interval between two clicks that is still interpreted as one double click).
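The double-click interval can be illustrated with a short sketch: two clicks closer together than the threshold count as one double click, otherwise each is a single click. The threshold value and function name are illustrative assumptions.

```python
# Sketch of how a driver might distinguish a double click from two single
# clicks using the adjustable double-click interval (names are illustrative).

DOUBLE_CLICK_INTERVAL = 0.5   # seconds, an assumed adjustable default

def classify_clicks(timestamps, interval=DOUBLE_CLICK_INTERVAL):
    """Group click timestamps into 'double' and 'single' events."""
    events = []
    i = 0
    while i < len(timestamps):
        if i + 1 < len(timestamps) and timestamps[i + 1] - timestamps[i] <= interval:
            events.append("double")
            i += 2            # the pair is consumed as one double click
        else:
            events.append("single")
            i += 1
    return events

print(classify_clicks([0.0, 0.3, 2.0]))   # ['double', 'single']
print(classify_clicks([0.0, 1.0, 2.0]))   # ['single', 'single', 'single']
```

Raising the interval makes slow pairs of clicks register as double clicks; lowering it has the opposite effect, which is exactly what the sensitivity setting controls.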

A computer network (CN) is a collection of computers and terminals connected via communication channels into a unified system that meets the requirements of distributed data processing.

In general, a telecommunications network (TN) is understood as a system consisting of objects that perform the functions of generating, transforming, storing and consuming a product, called the points (nodes) of the network, and of transmission lines (links, channels, connections) that transfer the product between points.

Depending on the type of product (information, energy, mass), information, energy and material networks are distinguished, respectively.

An information network is a communication network in which the product being generated, processed, stored and used is information. Traditionally, telephone networks are used to transmit sound, television to transmit images, and telegraph (teletype) to transmit text. Currently, integrated-services information networks are being developed that allow sound, images and data to be transmitted over a single communication channel.

A computer network is an information network that includes computing equipment. The components of a computer network can be computers and peripheral devices, which serve as sources and receivers of the data transmitted over the network.

Computer networks are classified according to a number of characteristics.

1. Depending on the distance between network nodes, computer networks can be divided into three classes:

· local (LAN, Local Area Network): covering a limited area (stations are usually no more than a few tens or hundreds of meters apart, occasionally 1-2 km);

· corporate (enterprise-scale): a set of interconnected LANs covering the territory of a single enterprise or institution located in one or several closely spaced buildings;

· territorial: covering a significant geographical area. Among territorial networks one can distinguish metropolitan networks (MAN, Metropolitan Area Network) and global networks (WAN, Wide Area Network), having city-wide or global scale, respectively.

Topic 9. Telecommunications

Lecture outline

1. Telecommunications and computer networks

2. Characteristics of local and global networks

3. System software

4. OSI model and information exchange protocols

5. Data transmission media, modems

6. Capabilities of teleinformation systems

7. Possibilities of the World Wide Web

8. Prospects for creating an information highway

Telecommunications and computer networks

Communication is the transfer of information between people, carried out using various means (speech, symbolic systems, communication systems). As communication developed, telecommunications appeared.

Telecommunications is the transfer of information over a distance using technical means (telephone, telegraph, radio, television, etc.).

Telecommunications are an integral part of the country's industrial and social infrastructure and are designed to meet the needs of individuals, legal entities and public authorities for telecommunications services. Thanks to the emergence and development of data transmission networks, a new, highly efficient means of human interaction has appeared: computer networks. The main purpose of computer networks is to provide distributed data processing and to increase the reliability of information and management decisions.

A computer network is a collection of computers and various devices that provide information exchange between computers on the network without the use of any intermediate storage media.

A related term is the network node: a device connected to other devices as part of a computer network. Nodes can be computers or special network devices such as routers, switches or hubs. A network segment is a part of the network bounded by its nodes.

A computer on a network is also called a "workstation." Computers on a network are divided into workstations and servers. At workstations, users solve application problems (work in databases, create documents, make calculations). The server serves the network and provides its resources to all network nodes, including workstations.

Computer networks are used in various fields, affect almost all areas of human activity, and are an effective tool for connecting enterprises, organizations and consumers.

The network provides faster access to various sources of information, and using a network reduces resource redundancy. By connecting several computers into a network, you can:

· expand the total amount of available information;


· share one resource among all computers (a common database, a network printer, etc.);

· simplify the procedure for transferring data from computer to computer.

Naturally, the total amount of information accumulated on networked computers is incomparably greater than on a single computer. As a result, the network enables a new level of employee productivity and effective communication between the company and its manufacturers and customers.

Another purpose of a computer network is to ensure the efficient provision of various computer services to network users by organizing their access to resources distributed in this network.

In addition, the attractive side of networks is the availability of e-mail and workday planning programs. Thanks to them, managers of large enterprises can quickly and effectively interact with a large staff of their employees or business partners, and planning and adjusting the activities of the entire company is carried out with much less effort than without networks.

Computer networks as a means of realizing practical needs find the most unexpected applications, for example: selling air and railway tickets; access to information from reference systems, computer databases and data banks; ordering and purchasing consumer goods; payment of utility costs; exchange of information between the teacher’s workplace and students’ workplaces (distance learning) and much more.

Thanks to the combination of database technologies and computer telecommunications, it became possible to use so-called distributed databases. Huge amounts of information accumulated by humanity are distributed across various regions, countries and cities, where they are stored in libraries, archives and information centers. Typically, all large libraries, museums, archives and similar organizations have their own computer databases containing the information stored in these institutions.

Computer networks allow access to any database that is connected to the network. This relieves network users from the need to maintain a giant library and makes it possible to significantly increase the efficiency of searching for the necessary information. If a person is a user of a computer network, then he can make a request to the appropriate databases, receive an electronic copy of the necessary book, article, archival material over the network, see what paintings and other exhibits are in a given museum, etc.

Thus, the creation of a unified telecommunications network should become a main policy direction of the state, guided by the following principles (taken from the Law of Ukraine "On Communications" dated February 20, 2009):

1. consumer access to publicly available telecommunications services, which they need to satisfy their own needs and to participate in political, economic and social life;

2. interaction and interconnection of telecommunication networks to ensure the possibility of communication between consumers of all networks;

3. ensuring the stability of telecommunication networks and managing these networks taking into account their technological features, on the basis of uniform standards, norms and rules;

4. state support for the development of domestic production of telecommunications equipment;

5. encouraging competition in the interests of consumers of telecommunication services;

6. increasing the volume of telecommunications services, their list and the creation of new jobs;

7. implementation of world achievements in the field of telecommunications, attraction and use of domestic and foreign material and financial resources, latest technologies, management experience;

8. promoting the expansion of international cooperation in the field of telecommunications and the development of the global telecommunications network;

9. ensuring consumer access to information on the procedure for obtaining and the quality of telecommunications services;

10. efficiency, transparency of regulation in the field of telecommunications;

11. creation of favorable conditions for activity in the field of telecommunications, taking into account the characteristics of technology and the telecommunications market.


ALL-RUSSIAN CORRESPONDENCE FINANCIAL AND ECONOMIC INSTITUTE

DEPARTMENT OF AUTOMATED ECONOMIC INFORMATION PROCESSING

COURSE WORK

in the discipline "Computer Science"

on the topic “Computer networks and telecommunications”

Performed:

Plaksina Natalya Nikolaevna

Specialty: State and Municipal Management

Record book number 07МГБ03682

Checked:

Sazonova N.S.

Chelyabinsk - 2009

  • INTRODUCTION
  • THEORETICAL PART
  • 1. CLASSIFICATION OF COMPUTER NETWORKS
  • 2. LAN CONSTRUCTION TOPOLOGY
  • 3. METHODS OF ACCESS TO THE TRANSMISSION MEDIA IN THE LAN
  • 4. CORPORATE INTERNET NETWORK
  • 5. PRINCIPLES, TECHNOLOGIES, INTERNET PROTOCOLS
  • 6. INTERNET DEVELOPMENT TRENDS
  • 7. MAIN COMPONENTS WWW, URL, HTML
  • PRACTICAL PART
  • CONCLUSION
  • BIBLIOGRAPHY

INTRODUCTION

In recent years, the global Internet has become a worldwide phenomenon. The network, which until recently was used by a limited number of scientists, government officials and educators in their professional activities, has become available to large and small corporations and even to individual users.

Initially, the Internet was a rather complex system for the average user. As soon as it became available to businesses and private users, development began on software for working with various useful Internet services such as FTP, Gopher, WAIS and Telnet. Specialists also created an entirely new type of service, the World Wide Web, a system that allows text, graphics and sound to be integrated.

In this work I will look at the structure of the Network, its tools and technologies and the applications of the Internet. The question I am studying is extremely relevant because the Internet today is experiencing a period of explosive growth.

THEORETICAL PART

1. CLASSIFICATION OF COMPUTER NETWORKS

Networks of computers have many advantages over a collection of individual systems, including the following:

· Resource sharing.

· Increasing the reliability of the system.

· Load distribution.

· Extensibility.

Resource sharing.

Network users can have access to certain resources of all network nodes. These include, for example, data sets, free memory on remote nodes, computing power of remote processors, etc. This allows you to save significant money by optimizing the use of resources and their dynamic redistribution during operation.

Increasing the reliability of system operation.

Since the network consists of a collection of individual nodes, if one or more nodes fail, other nodes will be able to take over their functions. At the same time, users may not even notice this; the redistribution of tasks will be taken over by the network software.

Load distribution.

In networks with variable load levels, it is possible to redistribute tasks from some network nodes (with increased load) to others where free resources are available. Such redistribution can be done dynamically during operation; moreover, users may not even be aware of the peculiarities of scheduling tasks on the network. These functions can be taken over by network software.

Extensibility.

The network can easily be expanded by adding new nodes. The architecture of almost all networks makes it easy to adapt network software to configuration changes; in many cases this happens automatically.

However, from a security perspective, these strengths turn into vulnerabilities, creating serious problems.

The features of working on a network are determined by its dual nature: on the one hand, the network should be considered as a single system, and on the other, as a set of independent systems, each of which performs its own functions; has its own users. The same duality is manifested in the logical and physical perception of the network: at the physical level, the interaction of individual nodes is carried out using messages of various types and formats, which are interpreted by protocols. At the logical level (i.e., from the point of view of upper-level protocols), the network is represented as a set of functions distributed across various nodes, but connected into a single complex.

Networks are divided:

1. By network topology (classification by organization at the physical level).

Common bus.

All nodes are connected to a common high-speed data bus. They are simultaneously configured to receive messages, but each node can accept only a message addressed to it. The address is identified by the network controller, and there can be only one node with a given address in the network. If two nodes are busy transmitting at the same time (a packet collision), one or both stop, wait a random time interval, and then resume the transmission attempt (the collision-resolution method). Alternatively, while one node is transmitting a message over the network, other nodes may be prevented from starting a transmission (the conflict-prevention method). This topology is very convenient: all nodes are equal, the logical distance between any two nodes is 1, and message transmission is fast. The "common bus" organization and the corresponding lower-level protocols were first developed jointly by DEC, Intel and Xerox; the result was called Ethernet.
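The collision-resolution idea described above can be sketched in a short simulation: colliding stations wait a random number of slots and retry, with the range of the wait doubling after each collision (the binary exponential backoff scheme of classic Ethernet). The station names, slot unit and random seed here are illustrative assumptions, not part of a real protocol implementation.

```python
import random

SLOT_TIME = 1  # abstract time unit for the simulation

def backoff_delay(attempt, rng):
    """After the n-th collision, wait a random number of slots in [0, 2^n - 1]."""
    return rng.randrange(2 ** min(attempt, 10)) * SLOT_TIME

def transmit(stations, rng):
    """Repeat rounds until exactly one station transmits alone."""
    attempts = {s: 0 for s in stations}
    ready_at = {s: 0 for s in stations}
    t = 0
    while True:
        senders = [s for s in stations if ready_at[s] <= t]
        if len(senders) == 1:
            return senders[0], t          # lone sender succeeds
        for s in senders:                 # collision: every sender backs off
            attempts[s] += 1
            ready_at[s] = t + SLOT_TIME + backoff_delay(attempts[s], rng)
        t += SLOT_TIME

winner, finished_at = transmit(["A", "B"], random.Random(42))
print(winner, "transmits alone at t =", finished_at)
```

Because the backoff range grows after every collision, repeated ties become increasingly unlikely and one station soon gets the bus to itself, which is the essence of the method.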

Ring.

The network is built as a closed loop of unidirectional channels between stations. Each station receives messages via its input channel; the beginning of a message contains address and control information. Based on it, the station decides whether to make a copy of the message and remove it from the ring, or to pass it via the output channel to the neighboring node. If no message is currently being transmitted, the station itself can transmit one.

Ring networks use several different control methods:

Daisy chain - control information is transmitted through separate sets (chains) of ring computers;

Control token -- control information is formatted in the form of a specific bit pattern circulating around the ring; only when a station receives a token can it issue a message to the network (the most well-known method, called token ring);

Segmental - a sequence of segments circulates around the ring. Having found an empty one, the station can place a message in it and transmit it to the network;

Register insertion - a message is loaded into a shift register and transmitted to the network when the ring is free.
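The best-known of these methods, the control token, can be sketched as follows: the token circulates around the ring, and only the station currently holding it may place one queued message on the network. Station names and messages are illustrative; real token ring frames also carry the control bit pattern itself, which is omitted here.

```python
from collections import deque

# Minimal sketch of the "control token" method (names are illustrative).
def token_ring(stations, queues, rounds):
    """Circulate the token; return the order in which messages were sent."""
    sent = []
    for _ in range(rounds):
        for station in stations:          # the token passes station to station
            if queues[station]:           # only the token holder may transmit
                sent.append((station, queues[station].popleft()))
    return sent

stations = ["A", "B", "C"]
queues = {
    "A": deque(["a1", "a2"]),
    "B": deque([]),
    "C": deque(["c1"]),
}
print(token_ring(stations, queues, rounds=2))
# [('A', 'a1'), ('C', 'c1'), ('A', 'a2')]
```

Note how station A must wait a full circulation of the token before sending its second message: the token guarantees that at most one station transmits at a time, so collisions cannot occur at all.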

Star.

The network consists of one hub node and several terminal nodes connected to it but not directly connected to each other. One or more terminal nodes can be hubs of another network, in which case the network acquires a tree topology.

The network is managed entirely by the hub; terminal nodes can communicate with each other only through it. Typically, only local data processing is performed at terminal nodes, while processing of data relevant to the entire network is carried out at the hub; such processing is called centralized. Network management is usually carried out using a polling procedure: at certain intervals, the hub polls the terminal stations in turn to ask whether they have a message for it. If a station does, it transmits the message to the hub; if not, the next station is polled. The hub can transmit a message to one or more terminal stations at any time.
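The polling procedure just described can be sketched in a few lines: the hub visits each terminal station in order and forwards any message it finds. The station names, message contents and log format are illustrative assumptions.

```python
# Sketch of the hub's polling procedure (all names are illustrative).
def poll_cycle(hub_log, stations):
    """One polling round: the hub visits every terminal station in order."""
    for name, outbox in stations.items():
        hub_log.append(f"poll {name}")
        if outbox:                          # the station has a message waiting
            message = outbox.pop(0)
            hub_log.append(f"recv {message} from {name}")

stations = {"T1": ["hello"], "T2": [], "T3": ["status"]}
log = []
poll_cycle(log, stations)
print(log)
# ['poll T1', 'recv hello from T1', 'poll T2', 'poll T3', 'recv status from T3']
```

Since every exchange passes through the hub, stations never contend for the medium; the price is that the hub's schedule, not the stations, decides when a message is collected.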

2. By network size:

· Local.

· Territorial.

Local.

A data network connecting a number of nodes in one local area (a room, an organization). Network nodes are usually equipped with the same type of hardware and software, although this is not required. Local networks provide high data transfer speeds. They are characterized by short communication lines (no more than a few kilometers), a controlled operating environment, a low probability of errors, and simplified protocols. Gateways are used to connect local networks to territorial ones.

Territorial.

They differ from local ones by the greater length of communication lines (city, region, country, group of countries), which can be provided by telecommunications companies. A territorial network can connect several local networks, individual remote terminals and computers, and can be connected to other territorial networks.

Territorial networks rarely use standard topological designs, since they are built to perform other, usually specific, tasks. Therefore, they are usually constructed with an arbitrary topology, and control is carried out using dedicated protocols.

3. By the organization of information processing (classification at the logical level of representation; here the system means the entire network as a single complex):

Centralized.

Systems of such organization are the most widespread and familiar. They consist of a central node, which implements the entire range of functions performed by the system, and terminals, whose role is limited to partial input and output of information. Basically, peripheral devices play the role of terminals from which the information processing process is controlled. The role of terminals can be performed by display stations or personal computers, both local and remote. All processing (including communication with other networks) is performed through a central node. A feature of such systems is the high load on the central node, due to which it must have a highly reliable and high-performance computer. The central node is the most vulnerable part of the system: its failure disables the entire network. At the same time, security problems in centralized systems are solved most simply and actually come down to protecting the central node.

Another feature of such systems is the inefficient use of the resources of the central node, as well as the inability to flexibly rearrange the nature of work (the central computer must work all the time, which means that some part of it can be idle). Currently, the share of centrally controlled systems is gradually falling.

Distributed.

Almost all nodes of this system can perform similar functions, and each individual node can use the hardware and software of other nodes. The main part of such a system is a distributed OS, which distributes system objects: files, processes (or tasks), memory segments, and other resources. But at the same time, the OS can distribute not all resources or tasks, but only part of them, for example, files and free memory on the disk. In this case, the system is still considered distributed; the number of its objects (functions that can be distributed across individual nodes) is called the degree of distribution. Such systems can be either local or territorial. In mathematical terms, the main function of a distributed system is to map individual tasks to a set of nodes on which they are executed. A distributed system must have the following properties:

1. Transparency, that is, the system must ensure the processing of information regardless of its location.

2. A resource allocation mechanism, which must perform the following functions: ensure interaction of processes and remote calling of tasks, support virtual channels, distributed transactions and naming services.

3. A naming service that is uniform for the entire system, including support for a unified directory service.

4. Implementation of services of homogeneous and heterogeneous networks.

5. Controlling the functioning of parallel processes.

6. Security. In distributed systems, the security problem moves to a qualitatively new level, since it is necessary to control the resources and processes of the entire system as a whole, as well as the transfer of information between system elements. The main components of protection remain the same - access control and information flows, network traffic control, authentication, operator control and security management. However, control in this case becomes more complicated.

A distributed system has a number of advantages that are not inherent in any other organization of information processing: optimal use of resources, resistance to failures (failure of one node does not lead to fatal consequences - it can be easily replaced), etc. However, new problems arise: methods of resource distribution, ensuring security, transparency, etc. Currently, all the capabilities of distributed systems are far from being fully realized.

Recently, the client-server concept of information processing has gained increasing recognition. It is transitional between centralized and distributed processing and combines features of both. However, client-server is not so much a way of organizing a network as a way of logically presenting and processing information.

Client-server is an organization of information processing in which all functions performed are divided into two classes: external and internal. External functions consist of user interface support and user-level information presentation functions. Internal ones concern the execution of various requests, the process of information processing, sorting, etc.

The essence of the client-server concept is that the system has two levels of elements: servers that perform data processing (internal functions), and workstations that perform the functions of generating requests and displaying the results of their processing (external functions). There is a stream of requests from the workstations to the server, and in the opposite direction - the results of their processing. There can be several servers in the system and they can perform different sets of lower-level functions (print servers, file and network servers). The bulk of information is processed on servers, which in this case play the role of local centers; information is entered and displayed using workstations.
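The split between external and internal functions can be made concrete with a toy example: the server carries out the processing (an "internal" function such as sorting), while the workstation only forms the request and presents the result (the "external" functions). All names and data are illustrative, and a real system would use sockets or remote procedure calls rather than a direct function call.

```python
# Toy illustration of the client-server split (names and data are illustrative).
def server_handle(request):
    """Internal functions: carry out the query on the server's data."""
    data = {"employees": ["Ivanov", "Petrov", "Sidorov"]}
    if request["op"] == "sort":
        return sorted(data[request["table"]])
    raise ValueError("unknown operation")

def workstation_query(op, table):
    """External functions: form the request and present the reply."""
    reply = server_handle({"op": op, "table": table})
    return ", ".join(reply)            # user-level presentation of the result

print(workstation_query("sort", "employees"))
# Ivanov, Petrov, Sidorov
```

The workstation never touches the data itself; only the request and the finished result cross the boundary, which is exactly the flow of requests and processed results described above.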

The distinctive features of systems built on the client-server principle are as follows:

The most optimal use of resources;

Partial distribution of the information processing process in the network;

Transparent access to remote resources;

Simplified management;

Reduced traffic;

Possibility of more reliable and simpler protection;

Greater flexibility in using the system as a whole, as well as heterogeneous equipment and software;

Centralized access to certain resources.

Separate parts of one system can be built according to different principles and combined using appropriate matching modules. Each class of networks has its own specific characteristics, both in terms of organization and in terms of protection.

2. LAN CONSTRUCTION TOPOLOGY

The term network topology refers to the path that data travels across a network. There are three main types of topologies: bus, star, and ring.

Figure 1. Bus (linear) topology.

The "common bus" topology involves the use of a single cable to which all computers on the network are connected (Fig. 1). With a common bus, the cable is used by all stations in turn. Special measures are taken so that, when working with the common cable, computers do not interfere with each other's transmission and reception of data.

In a common bus topology, all messages sent by individual computers are received by every computer connected to the network. Reliability here is higher, since the failure of individual computers does not disrupt the operation of the network as a whole. However, finding faults in the cable is difficult, and since only one cable is used, a break in it disrupts the entire network.

Figure 2. Star topology.

Fig. 2 shows computers connected in a star. In this case, each computer is connected through a special network adapter by a separate cable to a central, unifying device.

If necessary, you can combine several networks together with a star topology, resulting in branched network configurations.

From a reliability point of view, this topology is not the best solution, since failure of the central node will shut down the entire network. However, with a star topology it is easier to locate faults in the cabling.

The “ring” topology is also used (Fig. 3). In this case, data is transferred from one computer to another as if in a relay race. If a computer receives data intended for another computer, it passes it on around the ring. If the data is intended for the computer that received it, it is not transmitted further.

A local network can use one of the listed topologies, depending on the number of computers being connected, their relative location and other conditions. It is also possible to combine several local networks with different topologies into a single local network; the result may be, for example, a tree topology.

Figure 3. Ring topology.

3. METHODS OF ACCESS TO THE TRANSMISSION MEDIA IN THE LAN

The undoubted advantages of information processing in computer networks result in considerable difficulties in organizing their protection. Let us note the following main problems:

Sharing of common resources.

Because a large number of resources are shared among many network users, possibly located at a great distance from one another, the risk of unauthorized access increases greatly: on a network it can be carried out more easily and less noticeably.

Expansion of control zone.

The administrator or operator of a particular system or subnetwork must monitor the activities of users outside its reach, perhaps in another country. At the same time, he must maintain working contact with his colleagues in other organizations.

Combination of various software and hardware.

Connecting several systems, even homogeneous in characteristics, into a network increases the vulnerability of the entire system as a whole. The system is configured to meet its specific security requirements, which may be incompatible with those on other systems. When disparate systems are connected, the risk increases.

Unknown perimeter.

The easy expandability of networks means that it is sometimes difficult to determine the boundaries of a network; the same node can be accessible to users of different networks. Moreover, for many of them it is not always possible to accurately determine how many users have access to a particular node and who they are.

Multiple attack points.

In networks, the same set of data or message can be transmitted through several intermediate nodes, each of which is a potential source of threat. Naturally, this cannot improve the security of the network. In addition, many modern networks can be accessed using dial-up lines and a modem, which greatly increases the number of possible points of attack. This method is simple, easy to implement and difficult to control; therefore it is considered one of the most dangerous. The list of network vulnerabilities also includes communication lines and various types of communication equipment: signal amplifiers, repeaters, modems, etc.

Difficulty in managing and controlling access to the system.

Many attacks on a network can be carried out without gaining physical access to a specific node - using the network from remote points. In this case, identifying the offender may be very difficult, if not impossible. In addition, the attack time may be too short to take adequate measures.

At their core, the problems of protecting networks are due to the dual nature of the latter: we talked about this above. On the one hand, the network is a single system with uniform rules for processing information, and on the other hand, it is a collection of separate systems, each of which has its own rules for processing information. In particular, this duality applies to protection issues. An attack on a network can be carried out from two levels (a combination of these is possible):

1. Upper - an attacker uses the properties of the network to penetrate another node and perform certain unauthorized actions. The protection measures taken are determined by the potential capabilities of the attacker and the reliability of the security measures of individual nodes.

2. Lower - an attacker uses the properties of network protocols to violate the confidentiality or integrity of individual messages or the flow as a whole. Disturbance in the flow of messages can lead to information leakage and even loss of control over the network. The protocols used must ensure the security of messages and their flow as a whole.

Network protection, like the protection of individual systems, pursues three goals: maintaining the confidentiality of information transmitted and processed on the network, the integrity and availability of resources and network components.

These goals determine actions to organize protection against attacks from the top level. The specific tasks that arise when organizing network protection are determined by the capabilities of high-level protocols: the wider these capabilities, the more tasks have to be solved. Indeed, if the network's capabilities are limited to the transfer of data sets, then the main security problem is to prevent tampering with data sets available for transfer. If the network capabilities allow you to organize remote launch of programs or work in virtual terminal mode, then it is necessary to implement a full range of protective measures.

Network protection should be planned as a single set of measures covering all features of information processing. In this sense, the organization of network protection, the development of security policy, its implementation and protection management are subject to the general rules that were discussed above. However, it must be taken into account that each network node must have individual protection depending on the functions performed and the capabilities of the network. In this case, the protection of an individual node must be part of the overall protection. On each individual node it is necessary to organize:

Control of access to all files and other data sets accessible from the local network and other networks;

Monitoring of processes activated from remote nodes;

Monitoring of network traffic;

Effective identification and authentication of users accessing the node from the network;

Control of access to local node resources available for use by network users;

Control over the dissemination of information within the local network and the networks connected to it.

However, the network has a complex structure: to transfer information from one node to another, the information passes through several stages of transformation. Naturally, all these transformations must contribute to the protection of the transmitted information; otherwise, attacks from the lower level can compromise the network's security. Thus, the protection of the network as a single system is made up of the protection measures of each individual node and the protection functions of the network's protocols.

The need for security functions in data transfer protocols is again determined by the dual nature of the network: it is a collection of separate systems that exchange information with each other using messages. On the way from one system to another, these messages are transformed by protocols at all levels. Since messages are the most vulnerable element of the network, protocols must be designed to protect them, maintaining the confidentiality, integrity and availability of information transmitted over the network.

Network software must be an integral part of the network node; otherwise network operation and security may be compromised by modification of programs or data. At the same time, protocols must implement the requirements for ensuring the security of transmitted information that are part of the overall security policy. The following is a classification of network-specific threats (low-level threats):

1. Passive threats (violation of confidentiality of data circulating on the network) - viewing and/or recording of data transmitted over communication lines:

Viewing a message - an attacker can view the contents of a message transmitted over the network;

Traffic analysis - an attacker can view the headers of packets circulating in the network and, from the service information they contain, draw conclusions about the senders and recipients of a packet and the conditions of transmission (time of sending, message class, security category, etc.); in addition, he can determine the message length and the volume of traffic.

2. Active threats (violation of the integrity or availability of network resources) - unauthorized use of devices with access to the network to change individual messages or a flow of messages:

Failure of messaging services - an attacker can destroy or delay individual messages or the entire flow of messages;

“Masquerade” - an attacker can assign someone else’s identifier to his node or relay, and receive or send messages on someone else’s behalf;

Injection of network viruses - transmission of a virus body over a network with its subsequent activation by a user of a remote or local node;

Message Flow Modification - An attacker can selectively destroy, modify, delay, reorder, and duplicate messages, as well as insert forged messages.

It is quite obvious that any manipulations described above with individual messages and the flow as a whole can lead to network disruptions or leakage of confidential information. This is especially true for service messages that carry information about the state of the network or individual nodes, about events occurring on individual nodes (remote launch of programs, for example) - active attacks on such messages can lead to loss of control over the network. Therefore, protocols that generate messages and put them into the stream must take measures to protect them and ensure undistorted delivery to the recipient.

The tasks solved by protocols are similar to those solved when protecting local systems: ensuring the confidentiality of information processed and transmitted in the network, the integrity and availability of network resources (components). These functions are implemented using special mechanisms. These include:

Encryption mechanisms, which ensure the confidentiality of transmitted data and/or information about data flows. The encryption algorithm used in this mechanism can employ a secret or a public key. In the first case, the presence of mechanisms for managing and distributing keys is assumed. There are two encryption methods: channel encryption, implemented using a data link layer protocol, and end-to-end (subscriber) encryption, implemented using an application layer or, in some cases, presentation layer protocol.

In the case of channel encryption, all information transmitted over the communication channel, including service information, is protected. This method has the following features:

Revealing the encryption key for one channel does not lead to compromise of information in other channels;

All transmitted information, including service messages and the service fields of data messages, is reliably protected;

All information is open at intermediate nodes - relays, gateways, etc.;

The user does not participate in the operations performed;

Each pair of nodes requires its own key;

The encryption algorithm must be sufficiently strong and provide encryption speed at the level of channel throughput (otherwise there will be a message delay, which can lead to blocking of the system or a significant decrease in its performance);

The previous feature leads to the need to implement the encryption algorithm in hardware, which increases the cost of creating and maintaining the system.

End-to-end (subscriber) encryption allows you to ensure the confidentiality of data transferred between two application objects. In other words, the sender encrypts the data, the recipient decrypts it. This method has the following features (compare with channel encryption):

Only the content of the message is protected; all service information remains open;

No one except the sender and recipient can recover the information (if the encryption algorithm used is strong enough);

The transmission route is unimportant - information will remain protected in any channel;

Each pair of users requires a unique key;

The user must be familiar with encryption and key distribution procedures.

The choice of one encryption method or the other, or a combination of them, depends on the results of the risk analysis. The question is: which is more vulnerable, the individual communication channel itself or the content of messages transmitted through various channels? Channel encryption is faster (other, faster algorithms are used), transparent to the user, and requires fewer keys. End-to-end encryption is more flexible and can be used selectively, but requires user participation. In each specific case, the issue must be resolved individually.
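The difference between the two methods can be illustrated with a toy sketch of end-to-end encryption; the keystream construction and all names below are illustrative, not a real cipher. The routing header stays open at intermediate nodes, while only the key holders can recover the body.

```python
# Toy illustration of end-to-end (subscriber) encryption: only the message
# body is encrypted, so an intermediate relay sees ciphertext but can still
# read the routing header. NOT a real cipher: a keystream derived from
# SHA-256 is used purely for demonstration.
import hashlib

def keystream(key: bytes, length: int) -> bytes:
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:length]

def xor_encrypt(key: bytes, data: bytes) -> bytes:
    # XOR with the keystream; applying it twice restores the plaintext
    ks = keystream(key, len(data))
    return bytes(a ^ b for a, b in zip(data, ks))

key = b"shared secret of sender and recipient"
header = b"FROM:nodeA TO:nodeB"   # stays open at intermediate nodes
body = xor_encrypt(key, b"confidential payload")

# a relay can route by the header, but cannot read the body...
assert b"confidential" not in body
# ...while the recipient, holding the key, recovers the plaintext
assert xor_encrypt(key, body) == b"confidential payload"
```

Channel encryption, by contrast, would encrypt the header as well, but only on one hop at a time, leaving everything open inside the relay itself.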

Digital signature mechanisms, which include procedures for closing (signing) a data block and for checking a closed data block. The first procedure uses secret key information; the second uses public information that does not allow the secret data to be recovered. Using the secret information, the sender forms a service data block (for example, based on a one-way function); the recipient, using the publicly available information, checks the received block and determines the authenticity of the sender. Only a user who possesses the corresponding secret key can form a genuine block.
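The closing/checking procedure just described can be sketched with textbook RSA on deliberately tiny numbers; the byte-sum "digest" is a toy stand-in for a one-way function, and nothing here is secure enough for real use.

```python
# Toy sketch of a digital signature: the sender closes a data block with
# the secret exponent d; the recipient checks it with the public key (e, n).
# Textbook RSA on tiny primes, for illustration only.
p, q = 61, 53
n = p * q            # public modulus (3233)
e = 17               # public exponent
d = 2753             # secret exponent: (e * d) % ((p - 1) * (q - 1)) == 1

def digest(message: bytes) -> int:
    # toy stand-in for a one-way function, reduced to fit the tiny modulus
    return sum(message) % n

def sign(message: bytes) -> int:
    return pow(digest(message), d, n)    # only the secret-key holder can do this

def verify(message: bytes, signature: int) -> bool:
    return pow(signature, e, n) == digest(message)

msg = b"transfer 100 units to node B"
sig = sign(msg)
assert verify(msg, sig)                                   # genuine block accepted
assert not verify(b"transfer 900 units to node B", sig)   # altered block rejected
```

In a real system the digest would be a cryptographic hash and the modulus thousands of bits long, but the division of roles (secret key to close, public key to check) is exactly the one described above.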

Access control mechanisms.

They check the authority of a network object to access resources. Authority is checked in accordance with the rules of the adopted security policy (discretionary, mandatory or any other) and the mechanisms implementing it.

Mechanisms to ensure the integrity of transmitted data.

These mechanisms ensure the integrity both of an individual block or field of data and of a stream of data. The integrity of a data block is ensured by the sending and receiving objects. The sending object adds an attribute to the data block whose value is a function of the data itself. The receiving object evaluates the same function and compares the result with the received attribute; in case of a discrepancy, an integrity violation is declared. Detection of a change may trigger data-recovery actions. In the case of a deliberate integrity violation, the value of the control attribute can be changed accordingly (if the algorithm for forming it is known), and the recipient will then be unable to detect the violation. In that case it is necessary to use an algorithm that forms the control attribute as a function of the data and a secret key. Then it is impossible to change the control attribute correctly without knowing the key, and the recipient can determine whether the data has been modified.
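A minimal sketch of the keyed control attribute, using the standard library's HMAC as the function of the data and the secret key:

```python
# Sketch of the keyed control attribute described above: the sender appends
# a value that is a function of the data AND a secret key (an HMAC), so an
# attacker who alters the block cannot recompute a matching attribute.
import hashlib
import hmac

key = b"integrity key shared by sender and recipient"

def seal(block: bytes) -> tuple:
    """Sender side: return the block together with its control attribute."""
    tag = hmac.new(key, block, hashlib.sha256).digest()
    return block, tag

def check(block: bytes, tag: bytes) -> bool:
    """Receiver side: recompute the attribute and compare."""
    expected = hmac.new(key, block, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)

block, tag = seal(b"reading = 42")
assert check(block, tag)                  # intact block passes
assert not check(b"reading = 99", tag)    # modification is detected
```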

Protection of the integrity of data streams (from reordering, adding, repeating or deleting messages) is carried out using additional forms of numbering (control of message numbers in the stream), time stamps, etc.
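The numbering control mentioned above can be sketched as follows; the receiver here is hypothetical and accepts only a strictly sequential stream, rejecting duplicates, reordering and gaps.

```python
# Minimal sketch of stream-integrity control via message numbering.
def check_stream(messages):
    """messages: iterable of (seq_no, payload); returns accepted payloads,
    raising ValueError on any duplicate, reordered or missing message."""
    expected = 0
    accepted = []
    for seq_no, payload in messages:
        if seq_no != expected:
            raise ValueError(f"stream violation: got message {seq_no}, "
                             f"expected {expected}")
        accepted.append(payload)
        expected += 1
    return accepted

# an intact stream is accepted
assert check_stream([(0, "a"), (1, "b"), (2, "c")]) == ["a", "b", "c"]

# a replayed (duplicated) message is detected
try:
    check_stream([(0, "a"), (0, "a"), (1, "b")])
    assert False, "replay should have been rejected"
except ValueError:
    pass
```

Real protocols combine such counters with timestamps and the keyed integrity attribute above, so that an attacker cannot simply renumber the messages he reorders.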

The following mechanisms are desirable components of network security:

Mechanisms for authenticating network objects.

To ensure authentication, passwords, verification of object characteristics, and cryptographic methods (similar to a digital signature) are used. These mechanisms are typically employed to authenticate peer network objects. The methods used can be combined with the “triple handshake” procedure (a threefold exchange of messages between sender and recipient carrying authentication parameters and confirmations).
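A hedged sketch of the "triple handshake" idea, assuming a shared secret key and HMAC-based responses; the message layout is illustrative, not taken from any particular standard.

```python
# Sketch of nonce-based mutual authentication in the spirit of the
# "triple handshake": three messages carry challenges and keyed responses.
import hashlib
import hmac
import os

key = b"key shared by A and B"

def respond(nonce: bytes) -> bytes:
    # only a holder of the shared key can compute this response
    return hmac.new(key, nonce, hashlib.sha256).digest()

# message 1: A -> B, A's challenge
nonce_a = os.urandom(16)
# message 2: B -> A, B answers A's challenge and sends its own
answer_a, nonce_b = respond(nonce_a), os.urandom(16)
# message 3: A -> B, A answers B's challenge
answer_b = respond(nonce_b)

# each side verifies the peer's keyed response
assert hmac.compare_digest(answer_a, respond(nonce_a))   # A authenticates B
assert hmac.compare_digest(answer_b, respond(nonce_b))   # B authenticates A
```

Fresh random nonces are what defeat replay: an attacker who recorded yesterday's handshake holds answers to challenges that will never be asked again.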

Text filling mechanisms.

Used to provide protection against traffic analysis. Such a mechanism can work, for example, by generating fictitious messages, so that the traffic has a constant intensity over time.
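One simple form of text filling is padding every message to a fixed on-the-wire size, so an observer cannot distinguish short messages from long ones; the block size and framing below are illustrative choices.

```python
# Sketch of text filling: every transmitted unit is padded to a constant
# length, hiding the real message sizes from an observer of the line.
BLOCK = 64  # fixed on-the-wire message size, an illustrative choice

def pad(message: bytes) -> bytes:
    if len(message) >= BLOCK - 1:
        raise ValueError("message too long for one block")
    # 1-byte length prefix, then the message, then zero filler
    return bytes([len(message)]) + message + b"\x00" * (BLOCK - 1 - len(message))

def unpad(block: bytes) -> bytes:
    return block[1:1 + block[0]]

short, longer = pad(b"hi"), pad(b"a much longer message")
assert len(short) == len(longer) == BLOCK     # observer sees uniform sizes
assert unpad(short) == b"hi"
assert unpad(longer) == b"a much longer message"
```

Combined with fictitious blocks sent at a constant rate, this makes the observable traffic independent of the real message flow.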

Route control mechanisms.

Routes can be selected dynamically or predefined in order to use physically secure subnets, repeaters, and channels. End systems, when detecting intrusion attempts, may require the connection to be established via a different route. In addition, selective routing can be used (that is, part of the route is set explicitly by the sender - bypassing dangerous sections).

Attestation (notarization) mechanisms.

Characteristics of data transferred between two or more objects (integrity, source, time, recipient) can be confirmed using an attestation mechanism. Confirmation is provided by a third party (arbitrator) who is trusted by all parties involved and who has the necessary information.

In addition to the security mechanisms listed above, implemented by protocols at various levels, there are two more that do not belong to a specific level. Their purpose is similar to control mechanisms in local systems:

Event detection and processing (analogous to means of monitoring dangerous events).

Designed to detect events that lead or may lead to a violation of network security policy. The list of these events corresponds to the list for individual systems. In addition, it may include events indicating violations in the operation of the protection mechanisms listed above. Actions taken in this situation may include various recovery procedures, event logging, one-way disconnect, local or peripheral event reporting (logging), etc.

Security audit (analogous to an audit using the system log).

A security audit is an independent review of system records and activities against a specified security policy.

The security functions of protocols at each level are determined by their purpose:

1. Physical layer - controlling the electromagnetic radiation of communication lines and devices, and keeping communication equipment in working order. Protection at this level is ensured by shielding devices, noise generators, and means of physical protection of the transmission medium.

2. Data link level - increasing the reliability of protection (if necessary) by encrypting data transmitted over the channel. In this case, all transmitted data, including service information, is encrypted.

3. The network level is the most vulnerable level from a security point of view. All routing information is generated on it, the sender and recipient appear explicitly, and flow control is carried out. In addition, packets are processed by network layer protocols on all routers, gateways and other intermediate nodes. Almost all specific network violations are carried out using protocols of this level (reading, modification, destruction, duplication, redirection of individual messages or a flow as a whole, masquerading as another node, etc.).

Protection against all such threats is carried out by network and transport layer protocols and using cryptographic protection tools. At this level, for example, selective routing can be implemented.

4. Transport layer - controls the functions of the network layer at the receiving and transmitting nodes (at intermediate nodes the transport layer protocol does not function). Transport layer mechanisms check the integrity of individual data packets, packet sequences, the route traveled, departure and delivery times, identification and authentication of the sender and recipient, and other functions. All active threats become visible at this level.

The integrity of transmitted data is guaranteed by cryptoprotection of data and service information. No one other than those who have the secret key of the recipient and/or sender can read or change the information in such a way that the change goes unnoticed.

Traffic analysis is prevented by transmitting messages that contain no information but nevertheless look real. By adjusting the intensity of these messages depending on the volume of transmitted information, a constantly uniform traffic pattern can be achieved. However, all these measures cannot prevent the threat of destruction, redirection or delay of a message. The only defense against such violations may be parallel delivery of duplicate messages along other routes.

5. Upper-level protocols provide control over the interaction of received or transmitted information with the local system. Session and presentation layer protocols do not perform security functions. The security functions of application layer protocols include controlling access to specific data sets, identifying and authenticating specific users, and other functions specific to the protocol. These functions are more complex when a mandatory security policy is implemented on the network.

4. CORPORATE INTERNET NETWORK

The corporate network considered here is a special case of the network of a large company. The specifics of the company's activities obviously impose strict requirements on the information security systems of its computer networks. An equally important role when building a corporate network is played by the need to ensure trouble-free and uninterrupted operation, since even a short-term failure can lead to huge losses. Finally, large amounts of data must be transferred quickly and reliably, because many applications must operate in real time.

Corporate network requirements

The following basic requirements for a corporate network can be identified:

The network combines all company-owned information devices into a structured and controlled closed system: individual computers and local area networks (LAN), host servers, workstations, telephones, faxes, and office PBXs.

The network ensures reliable operation and powerful information security systems. That is, trouble-free operation of the system is guaranteed both in the event of personnel errors and in the event of an unauthorized access attempt.

There is a well-functioning communication system between departments at different levels (both departments within the same city and remote ones).

In connection with modern development trends, specific solutions are needed. A significant role is played by the organization of prompt, reliable and secure access for remote clients to modern services.

5. PRINCIPLES, TECHNOLOGIES, INTERNET PROTOCOLS

The main thing that distinguishes the Internet from other networks is its protocols - TCP/IP. In general, the term TCP/IP usually means everything related to protocols for communication between computers on the Internet. It covers an entire family of protocols, application programs, and even the network itself. TCP/IP is an internetworking technology, an internet technology. A network that uses internet technology is called an "internet". When we speak of the global network that unites many networks using internet technology, we call it the Internet.

The TCP/IP protocol suite gets its name from its two main communication protocols: Transmission Control Protocol (TCP) and Internet Protocol (IP). Although the Internet uses a large number of other protocols, it is often called a TCP/IP network, since these two protocols are certainly the most important.

As in any other network, on the Internet there are 7 levels of interaction between computers: physical, data link, network, transport, session, presentation, and application. Accordingly, each level of interaction corresponds to a set of protocols (i.e., rules of interaction).

Physical layer protocols determine the type and characteristics of communication lines between computers. The Internet uses almost all currently known communication methods, from a simple wire (twisted pair) to fiber-optic communication lines (FOCL).

For each type of communication line, a corresponding data link layer protocol has been developed to control the transmission of information over the channel. Data link layer protocols for telephone lines include SLIP (Serial Line Internet Protocol) and PPP (Point-to-Point Protocol). For communication over a LAN cable, these are the packet drivers of the LAN cards.

Network layer protocols are responsible for transmitting data between devices on different networks, that is, they are responsible for routing packets in the network. Network layer protocols include IP (Internet Protocol) and ARP (Address Resolution Protocol).

Transport layer protocols control the transfer of data from one program to another. Transport layer protocols include TCP (Transmission Control Protocol) and UDP (User Datagram Protocol).
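Transport-layer addressing by ports can be demonstrated with a minimal UDP exchange over the loopback interface: each endpoint is identified by an (address, port) pair, and the operating system delivers the datagram to the program bound to that port.

```python
# Sketch of transport-layer (UDP) addressing: two sockets on the same host
# exchange a datagram, each identified by an (address, port) pair.
import socket

receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind(("127.0.0.1", 0))          # let the OS assign a free port
port = receiver.getsockname()[1]

sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(b"hello over UDP", ("127.0.0.1", port))

data, addr = receiver.recvfrom(1024)     # delivered by destination port
assert data == b"hello over UDP"

sender.close()
receiver.close()
```

TCP would add connection establishment, acknowledgements and retransmission on top of the same port-based addressing; UDP, as here, simply hands one datagram from program to program.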

Session layer protocols are responsible for establishing, maintaining and tearing down the corresponding channels. On the Internet this is done by the already mentioned TCP and UDP protocols, as well as by UUCP (Unix-to-Unix Copy Protocol).

Presentation layer protocols serve application programs. Presentation-level programs include those that run, for example, on a Unix server to provide various services to subscribers. Such programs include: the telnet server, the FTP server, the Gopher server, the NFS server, NNTP (Net News Transfer Protocol), SMTP (Simple Mail Transfer Protocol), POP2 and POP3 (Post Office Protocol), etc.

Application layer protocols include network services and programs for providing them.

6. INTERNET DEVELOPMENT TRENDS

In the late 1960s, DARPA (Defense Advanced Research Projects Agency), on behalf of the US Department of Defense, began a project to create an experimental packet-switched network. This network, called ARPANET, was originally intended for studying methods of providing reliable communication between different types of computers. Many methods for transmitting data via modems were developed on the ARPANET. At the same time, the network data transfer protocols TCP/IP were developed. TCP/IP is a set of communication protocols that define how different types of computers can communicate with each other.

The ARPANET experiment was so successful that many organizations wanted to join it to use it for daily data transfer. And in 1975, ARPANET evolved from an experimental network to a working network. Responsibility for network administration was assumed by DCA (Defense Communication Agency), currently called DISA (Defense Information Systems Agency). But ARPANET's development didn't stop there; TCP/IP protocols continued to evolve and improve.

In 1983, the first standard for the TCP/IP protocols was released and included in the Military Standards (MIL STD), and everyone who worked on the network was required to switch to these new protocols. To ease this transition, DARPA funded Bolt, Beranek and Newman (BBN) to implement the TCP/IP protocols in Berkeley (BSD) UNIX. This is where the union of UNIX and TCP/IP began.

After some time, TCP/IP was adopted as a common, that is, publicly available, standard, and the term Internet came into general use. In 1983, MILNET was spun off from ARPANET and assigned to the US Department of Defense. The term Internet began to be used to refer to the single network: MILNET plus ARPANET. And although ARPANET ceased to exist in 1991, the Internet still exists; it is far larger than the original network, having united many networks around the world. Figure 4 illustrates the growth in the number of hosts connected to the Internet, from 4 computers in 1969 to 8.3 million in 1996. An Internet host is a computer running a multitasking operating system (Unix, VMS) that supports the TCP/IP protocols and provides users with some network services.

7. MAIN COMPONENTS: WWW, URL, HTML

World Wide Web translates as “the worldwide web”, and, in essence, that is what it is. The WWW is one of the most advanced tools for working on the global Internet. This service appeared relatively recently and is still developing rapidly.

The largest number of developments is associated with the birthplace of the WWW - CERN, the European Laboratory for Particle Physics; but it would be a mistake to think of the Web as a tool designed by physicists for physicists. The fruitfulness and attractiveness of the ideas underlying the project have turned the WWW into a system of global scale, providing information in almost all areas of human activity and covering approximately 30 million users in 83 countries.

The main difference between WWW and other tools for working with the Internet is that WWW allows you to work with almost all types of documents currently available on a computer: these can be text files, illustrations, sound and video clips, etc.

What is the WWW? It is an attempt to organize all the information on the Internet, plus any local information you choose, as a set of hypertext documents. You navigate the Web by following links from one document to another. All these documents are written in a language specially developed for this purpose, called HyperText Markup Language (HTML). It is somewhat reminiscent of the language in which text documents are written, only HTML is simpler. Moreover, you can not only use the information provided by the Internet, but also create your own documents; in the latter case, there is a set of practical recommendations for writing them.

The whole point of hypertext lies in the creation of hypertext documents: if any item in such a document interests you, you just need to point the cursor at it to get the information you need. It is also possible to place links in one document to documents written by other authors or even located on a different server, while to you it all appears as a single whole.

Hypermedia is a superset of hypertext. In hypermedia, operations are performed not only on text but also on sound, images, and animation.

There are WWW servers for Unix, Macintosh, MS Windows and VMS, most of them are freely distributed. By installing a WWW server, you can solve two problems:

1. Provide information to external consumers - information about your company, catalogs of products and services, technical or scientific information.

2. Provide your employees with convenient access to the organization's internal information resources. These could be the latest management orders, an internal telephone directory, answers to frequently asked questions from users of application systems, technical documentation, and whatever else the imagination of the administrator and users suggests. The information you want to provide to WWW users is formatted as files in the HTML language. HTML is a simple markup language that allows you to mark up fragments of text and set links to other documents, highlight headings at several levels, break text into paragraphs, center them, etc., turning plain text into a formatted hypermedia document. It is quite easy to create an HTML file manually; however, there are also specialized editors and converters from other file formats.

Basic components of World Wide Web technology

By 1989, hypertext represented a new, promising technology that, on the one hand, had a relatively large number of implementations and, on the other, was being given formal models of hypertext systems, which were more descriptive in nature and inspired by the success of the relational approach to data description. T. Berners-Lee's idea was to apply the hypertext model to information resources distributed across the network, and to do so in the simplest possible way. He laid three of the four cornerstones of the system that exists today, developing:

* the hypertext document markup language HTML (HyperText Markup Language);

* the universal method of addressing resources on the network, URL (Universal Resource Locator);

* the protocol for exchanging hypertext information, HTTP (HyperText Transfer Protocol);

* the universal gateway interface CGI (Common Gateway Interface).

The idea of HTML is an example of an extremely successful solution to the problem of building a hypertext system using a special display-control tool. The development of the hypertext markup language was significantly influenced by two factors: research into the interfaces of hypertext systems and the desire to provide a simple and quick way to create a hypertext database distributed over a network.

In 1989, the problem of the interface of hypertext systems was being actively discussed, i.e., methods of displaying hypertext information and navigating the hypertext network. The importance of hypertext technology was compared with the importance of printing. It was argued that a sheet of paper and computer display/playback means differ significantly from each other, and therefore the form in which information is presented should also differ. Contextual hypertext links were recognized as the most effective form of hypertext organization; in addition, a division was recognized between links associated with the document as a whole and links associated with its individual parts.

The easiest way to create any document is to type it in a text editor. At CERN there was experience in creating well-marked-up documents for subsequent display: it is difficult to find a physicist there who does not use the TeX or LaTeX system. In addition, by that time a markup-language standard already existed: the Standard Generalized Markup Language (SGML).

It should also be borne in mind that, according to his proposals, Berners-Lee intended to combine the existing information resources of CERN into a single system, and the first demonstration systems were to be systems for NeXT and VAX/VMS.

Hypertext systems typically have special software for building hypertext links. The hypertext links themselves are stored in special formats or even make up separate files. This approach is good for a local system, but not for one distributed over many different computer platforms. In HTML, hypertext links are embedded in the body of the document and stored as part of it. Systems often use special data-storage formats to improve access efficiency; in the WWW, documents are ordinary ASCII files that can be prepared in any text editor. Thus the problem of creating a hypertext database was solved extremely simply.
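The claim that HTML links live in the body of an ordinary text file is easy to see in practice: the standard library's parser can pull the link targets straight out of an ASCII document. The sample document below is illustrative.

```python
# HTML stores hypertext links inside the document itself; extract the
# targets of <a href> links with the standard library parser.
from html.parser import HTMLParser

class LinkCollector(HTMLParser):
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self.links.extend(value for name, value in attrs if name == "href")

doc = """<html><body>
<p>See the <a href="http://info.cern.ch/">CERN page</a> and a
<a href="local.html">local document</a>.</p>
</body></html>"""

collector = LinkCollector()
collector.feed(doc)
assert collector.links == ["http://info.cern.ch/", "local.html"]
```

No separate link database is needed: any program that can read the plain file can recover the hypertext structure.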

...


Computer and telecommunication networks

Computer network (CN) – a set of computers and terminals connected via communication channels into a single system that meets the requirements of distributed data processing.

In general, a telecommunications network (TN) is understood as a system consisting of objects that generate, transform, store and consume a product, called the points (nodes) of the network, and of transmission lines (links) that transfer the product between points.

Depending on the type of product (information, energy or mass), information, energy and material networks are distinguished, respectively.

Information network (IN) – a communication network in which the product being generated, processed, stored and used is information. Traditionally, telephone networks are used to transmit sound, television to transmit images, and telegraph (teletype) to transmit text. Today, integrated services networks allow sound, images and data to be transmitted over a single communication channel.

Computer network (CN) – an information network that includes computing equipment. The components of a computer network are computers and peripheral devices, which are the sources and receivers of data transmitted over the network.

Computer networks are classified according to a number of characteristics.

1. Depending on the distance between network nodes, computer networks can be divided into three classes:

· local (LAN – Local Area Network) – covering a limited area (stations are usually no more than a few tens or hundreds of meters apart, rarely 1–2 km);

· corporate (enterprise-scale) – a set of interconnected LANs covering the territory of a single enterprise or institution, located in one or several closely spaced buildings;

· territorial – covering a significant geographical area; among territorial networks one can distinguish regional networks (MAN – Metropolitan Area Network) and global networks (WAN – Wide Area Network), of regional or global scale, respectively.

The global Internet stands apart as a special case.

2. An important feature of the classification of computer networks is their topology, which determines the geometric location of the basic resources of the computer network and the connections between them.

Depending on the topology of node connections, networks of bus (backbone), ring, star, hierarchical and arbitrary structures are distinguished.

The most common among LANs are:

· bus – a local network in which communication between any two stations is established through one common path, and data transmitted by any station simultaneously becomes available to all other stations connected to the same transmission medium;

· ring – nodes are connected by a ring data line (exactly two lines reach each node); passing around the ring, data becomes available in turn to every network node;

· star – there is a central node from which data transmission lines radiate to each of the other nodes.

The topological structure of a network has a significant impact on its throughput, its resistance to equipment failures, its logical capabilities and its cost.
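The ring and star structures described above can be sketched as graphs. The following is a minimal illustration (not from the source text); the node numbering and helper names are arbitrary:

```python
# Model two basic LAN topologies as adjacency lists (node -> neighbors).

def ring(n):
    """Each node i is linked to its two neighbors, closing the circle."""
    return {i: [(i - 1) % n, (i + 1) % n] for i in range(n)}

def star(n):
    """Node 0 is the central node; every other node links only to it."""
    links = {0: list(range(1, n))}
    links.update({i: [0] for i in range(1, n)})
    return links

print(ring(4))  # {0: [3, 1], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
print(star(4))  # {0: [1, 2, 3], 1: [0], 2: [0], 3: [0]}
```

In the star graph, the failure of node 0 disconnects everything, which mirrors the central-node dependency discussed later in the text.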

3. Depending on the control method, networks are divided into:

· client-server – one or several nodes (called servers) are allocated to perform control or special maintenance functions in the network, while the remaining nodes (clients) are terminal nodes at which users work. Client-server networks differ in how functions are distributed between servers, i.e., by the type of server (for example, file servers or database servers). When servers are specialized for particular applications, we obtain a distributed computing network. Such networks are also distinguished from centralized systems built around mainframes;

· peer-to-peer – all nodes are equal. Since, in general, a client is an object (device or program) that requests certain services and a server is an object that provides those services, each node in a peer-to-peer network can act as both a client and a server.
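The client-server interaction above can be illustrated with a minimal loopback sketch (an assumption for demonstration, not part of the source): one node provides a trivial echo "service", another requests it.

```python
import socket
import threading

def serve_once(sock):
    """Accept one client, read its request, and answer it (the 'service')."""
    conn, _ = sock.accept()
    with conn:
        data = conn.recv(1024)          # receive the client's request
        conn.sendall(b"echo: " + data)  # provide the service: echo it back

# Server node: listen on a free local port (port 0 lets the OS choose one).
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))
server.listen(1)
port = server.getsockname()[1]
t = threading.Thread(target=serve_once, args=(server,))
t.start()

# Client node: request the service and read the reply.
client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(("127.0.0.1", port))
client.sendall(b"hello")
reply = client.recv(1024)
client.close()
t.join()
server.close()
print(reply)  # b'echo: hello'
```

In a peer-to-peer network, each node would run both halves of this exchange, acting as server for some requests and client for others.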

4. Depending on whether identical or different computers are used in the network, networks of identical computers are called homogeneous, and networks of different types of computers heterogeneous. In large automated systems, networks are, as a rule, heterogeneous.

5. Depending on ownership, networks are public or private.

Any communication network must include the following basic components: transmitter, message, transmission media, receiver.

Transmitter – a device that is the source of data.

Receiver – a device that receives data.

The receiver can be a computer, terminal or some other digital device.

Message - digital data of a certain format intended for transmission.

It may be a database file, a table, a query response, text or an image.

Transmission media – the physical transmission medium and the special equipment that ensure the transmission of messages.

Various types of communication channels are used to transmit messages in computer networks. The most common are dedicated telephone channels and special channels for transmitting digital information. Radio channels and satellite communication channels are also used.

A communication channel is the physical medium and hardware that transfer information between switching nodes.

The need to form a single world information space led to the creation of the global Internet. Today the Internet attracts users with its information resources and services, which are used by about a billion people in all countries of the world. Online services include bulletin board systems (BBS), e-mail, teleconferences or newsgroups, file transfer between computers (FTP), parallel conversations on the Internet (Internet Relay Chat, IRC), and the search engines of the World Wide Web.

Each local or corporate network usually has at least one computer with a permanent connection to the Internet over a high-bandwidth link (an Internet server).

The Internet provides a person with inexhaustible opportunities to search for the necessary information of various types.

Almost all programs come with electronic and printed documentation in addition to a help system. This documentation is a source of useful information about the program and should not be neglected.

Getting to know the program begins with the information screens that accompany its installation. While the installation is underway, you should learn as much as possible about the purpose of the program and its capabilities. This helps you understand what to look for in the program after you install it.

Printed documentation is included with programs purchased in stores; these are usually quite extensive manuals, up to several hundred pages long. It is precisely the length of such manuals that often suppresses the desire to read them carefully. Indeed, there is no point in studying the manual if the answer to a question can be obtained by simpler means; but in case of difficulties, the manual is one of the most convenient sources of important information.

In many cases, additional reference information about the program is supplied in the form of text files included in the distribution kit. Historically, such files are usually named README, from the English phrase "Read me".

Typically, a README file contains information about installing the program, additions and clarifications to the printed manual, and other information. For shareware programs and small utilities distributed over the Internet, this file may contain the entire electronic version of the manual.

Programs distributed over the Internet may include other text information files.

In cases where no "ordinary" sources provide the necessary information about the program, you can turn to the bottomless treasury of information that is the Internet. Searching for information on the Internet involves some difficulties, but the Internet has answers to almost any question.

All major computer software companies and authors have a presence on the Internet. Using a search engine, it is not difficult to find a Web page dedicated to the desired program or series of programs. Such a page may contain a review or short description, information about the latest version of the program, patches that improve the program or correct errors, and links to other Web documents devoted to the same issues. Here you can often find free, shareware, demo and trial versions of programs.

The Internet is growing at a very fast pace, and finding the information you need among billions of Web pages and files is becoming increasingly difficult. To search for information, special search servers are used, which contain more or less complete and constantly updated information about Web pages, files and other documents stored on tens of millions of Internet servers.

Different search servers may use different mechanisms for searching, storing and presenting information to the user. Internet search servers can be divided into two groups:

· general-purpose search engines;

· specialized search engines.

Modern search engines are often information portals that provide users not only with the ability to search for documents on the Internet, but also with access to other information resources (news, weather information, exchange rate information, interactive geographic maps, and so on).

General purpose search engines are databases containing thematically grouped information about the information resources of the World Wide Web.

These search engines allow you to find Web sites or Web pages by keywords in the database or by browsing a hierarchical directory system.

The interface of such general purpose search engines contains a list of directory sections and a search field. In the search field, the user can enter keywords to search for a document and select a specific section in the catalog, which narrows the search field and thus speeds up the search.

Databases are filled using special robot programs that periodically “bypass” Internet Web servers.

The robot programs read every document they encounter, extract keywords from them and enter them into a database containing the URLs of the documents.

Since information on the Internet changes constantly (new Web sites and pages appear, old ones are deleted, URLs change, and so on), search robots do not always manage to track all these changes. The information stored in the search engine's database may therefore differ from the real state of the Internet, and the user may receive the address of a document that no longer exists or has been moved.

In order to ensure greater consistency between the content of a search engine's database and the actual state of the Internet, most search engines allow the author of a new or moved Web site to enter information into the database by filling out a registration form. In the process of filling out the questionnaire, the site developer enters the site's URL, its name, a brief description of the site's content, as well as keywords that will make it easier to find the site.

Sites in the database are ranked by the number of visits per day, week or month. Site traffic is determined using special counters installed on the site: the counters record each visit and transmit the visit count to the search engine's server.

A document is searched for in the search engine's database by entering queries into the search field. A simple query contains one or more keywords that are central to the document being sought. You can also use complex queries with logical operations, templates and so on.
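The keyword database built by the robot programs and the logical operations in queries can be sketched with a tiny inverted index. This is an illustrative assumption (the document URLs and helper names are made up, not taken from the source):

```python
# Inverted index: keyword -> set of documents (URLs) containing it,
# as a search engine's crawler might build it.
index = {
    "network":  {"a.html", "b.html", "c.html"},
    "topology": {"b.html", "c.html"},
    "wireless": {"c.html"},
}

def query_and(*words):
    """Documents containing ALL keywords (logical AND = set intersection)."""
    sets = [index.get(w, set()) for w in words]
    return set.intersection(*sets) if sets else set()

def query_or(*words):
    """Documents containing ANY keyword (logical OR = set union)."""
    return set().union(*(index.get(w, set()) for w in words))

print(sorted(query_and("network", "topology")))  # ['b.html', 'c.html']
print(sorted(query_or("topology", "wireless")))  # ['b.html', 'c.html']
```

Real engines add ranking, phrase matching and wildcard templates on top of this basic set logic.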

Specialized search systems allow you to search for information in other information “layers” of the Internet: file archive servers, mail servers, etc.


1. Types of computer networks. Types and main components of a LAN.

Types of computer networks:

Computer network (data network) – a communication system between two or more computers. Various physical phenomena can be used to transmit information, usually various types of electrical signals or electromagnetic radiation. Types of computer networks:

· Personal area network – a network built "around" a person, designed to unite all the user's personal electronic devices (phones, pocket personal computers, smartphones, laptops, headsets, etc.). The current standard for such networks is Bluetooth.

· Local area network (LAN) – connects computers located a short distance from each other; such a network usually does not extend beyond one building.

· Metropolitan area network (MAN) – covers several buildings within one city, or the entire city.

· Corporate network – a set of LANs, powerful computers and terminal systems that use a common information highway for exchange.

· National network – a network connecting computers within one state (National LambdaRail, GEANT).

· Global network – a data transmission network designed to serve a significant territory using publicly accessible communication lines.

Classification. By type of functional interaction:

· Peer-to-peer – the simplest type, intended for small workgroups. Users of several computers can share disks, printers and other devices, exchange messages and perform other collective operations. Any computer can play the role of both server and client. Such a network is cheap and easy to maintain, but cannot protect information once the network grows large.

· Server-based – dedicated server computers store shared data and programs and manage access to shared resources. Such a network has good expansion capabilities, high performance and reliability, but requires constant qualified maintenance.

By network topology: bus, star, ring, lattice, or mixed. By network OS: Windows, UNIX, or mixed.

Types and main components of a LAN:

Workstation – a computer connected to the local network at which a user works. Network adapter – a special board that allows the computer to interact with other devices on the same network; it provides the physical connection to network devices via the network cable. Server – a serving device which in the LAN acts as a control center and data concentrator; it is a combination of hardware and software used to manage shared network resources.

3. Network topology. Network standards (types of networks) Data transmission medium (network cable).

Network topology (from Greek τόπος, place) – a description of the network configuration and of the layout and connection of network devices.

The network topology can be:

physical – describes the actual location of network nodes and the connections between them;

logical – describes the flow of signals within the physical topology.

There are many ways to connect network devices, among which five basic topologies can be distinguished: bus, ring, star, mesh and lattice. The remaining methods are combinations of the basic ones and are generally called mixed or hybrid, although some have names of their own, for example "tree".

Ring – a basic computer network topology in which workstations are connected in series to one another, forming a closed ring. The ring does not use a contention method of sending data: a computer receives data from its neighbor and forwards it on if it is not addressed to itself. To determine which station may transmit, a token is usually used. Data travels around the ring in one direction only.

Advantages: easy to install; almost no additional equipment required; stable operation without a significant drop in data transfer speed even under heavy network load, since the token eliminates the possibility of collisions.

Disadvantages: failure of one workstation, or other problems such as a cable break, affect the performance of the entire network; configuration and setup are complex; troubleshooting is difficult.
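The one-directional forwarding in a ring can be sketched in a few lines. This is an illustrative toy model (station names and the helper are assumed, not from the source); it traces the path a frame takes as each station passes it to its single downstream neighbor:

```python
# Stations connected in a ring, in this fixed order; each forwards frames
# to the next station only, so traffic flows in one direction.
stations = ["A", "B", "C", "D"]

def send(src, dst):
    """Return the sequence of stations a frame visits from src to dst."""
    i = stations.index(src)
    path = [src]
    while path[-1] != dst:
        i = (i + 1) % len(stations)   # frame moves to the next neighbor
        path.append(stations[i])
    return path

print(send("B", "A"))  # ['B', 'C', 'D', 'A'] - all the way around
print(send("A", "B"))  # ['A', 'B'] - immediate neighbor
```

Note how a frame from B to A must pass through every other station, which is why a single broken link disables the whole ring.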

Bus – a common cable (called the bus or backbone) to which all workstations are connected. Terminators at the ends of the cable prevent signal reflection.

A message sent by a workstation propagates to all computers on the network. Each machine checks whom the message is addressed to and processes it only if it is the addressee. To exclude simultaneous sending of data, either a "carrier" signal is used, or one of the computers is the master and "gives the floor" to the other stations.

Advantages: short network installation time; cheap (requires less cable and fewer network devices); easy to set up; failure of a workstation does not affect the operation of the network.

Disadvantages: any problem in the network, such as a cable break or terminator failure, disrupts the operation of the entire network; faults are difficult to localize; performance decreases as new workstations are added.

Star – a basic computer network topology in which all computers are connected to a central node (usually a network hub), forming a physical network segment. Such a segment can function either separately or as part of a more complex network topology (usually a "tree").

The workstation that needs to send data transmits it to the hub, which determines the recipient and passes the information on. At any given moment only one machine on the network can send data; if two packets arrive at the hub simultaneously, neither is received, and the senders must wait a random period of time before resuming transmission.

Advantages: failure of one workstation does not affect the operation of the entire network; good network scalability; easy troubleshooting and network breaks; high network performance (subject to proper design); flexible administration options.

Disadvantages: failure of the central hub makes the whole network (or network segment) inoperable; laying the network often requires more cable than most other topologies; the number of workstations in a network (or segment) is limited by the number of ports in the central hub.
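The central-node forwarding described above can be modeled with a small sketch (an assumed illustration; the `Hub` class and station names are invented for this example): every frame goes to the center, which delivers it to the recipient's port.

```python
# Toy model of a star topology: all traffic passes through the central node,
# which keeps one "port" (an inbox) per connected station.
class Hub:
    def __init__(self):
        self.ports = {}                        # station name -> inbox list

    def connect(self, name):
        self.ports[name] = []

    def send(self, src, dst, data):
        if dst not in self.ports:
            raise ValueError(f"unknown station {dst!r}")
        self.ports[dst].append((src, data))    # center delivers the frame

hub = Hub()
for name in ("A", "B", "C"):
    hub.connect(name)

hub.send("A", "C", "hello")
print(hub.ports["C"])  # [('A', 'hello')]
print(hub.ports["B"])  # [] - only the addressee receives the frame
```

If the `hub` object disappears, no station can reach any other, which is exactly the central-node weakness listed in the disadvantages.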

Mesh topology – connects each workstation of the network with every other workstation on the same network. This topology is fully connected, in contrast to the others, which are only partially connected.

The sender of a message connects to the network nodes in turn until it finds the one it needs, which accepts the data packets.

Comparison with other topologies

Advantages: reliability; if one computer's cable breaks, enough connection paths remain in the network.

Disadvantages: high installation cost; complexity of setup and operation;

In wired networks this topology is rarely used, because excessive cable consumption makes it too expensive. In wireless technologies, however, mesh networks are becoming more common, since the cost of the network medium does not increase and network reliability comes to the fore.

Lattice – a concept from the theory of computer network organization: a topology in which the nodes form a regular multidimensional lattice. Each lattice edge is parallel to one of the axes and connects two adjacent nodes along that axis. A one-dimensional "lattice" is a chain connecting two external nodes (each with only one neighbor) through a number of internal nodes (each with two neighbors, left and right). Connecting the two external nodes yields the "ring" topology. Two- and three-dimensional lattices are used in supercomputer architecture.
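The adjacency rule of a lattice (each edge parallel to an axis, linking adjacent nodes along that axis) can be written down directly. A minimal sketch, with an assumed helper name; nodes are coordinate tuples and there is no wraparound, so external nodes have fewer neighbors:

```python
# Neighbors of a node in a regular multidimensional lattice.
def neighbors(node, dims):
    """All nodes adjacent to `node` along each axis, within side lengths `dims`."""
    result = []
    for axis, size in enumerate(dims):
        for step in (-1, 1):
            coord = node[axis] + step
            if 0 <= coord < size:          # the chain ends at external nodes
                n = list(node)
                n[axis] = coord
                result.append(tuple(n))
    return result

# A 1-D lattice of 3 nodes is a chain: the middle node has two neighbors.
print(neighbors((1,), (3,)))       # [(0,), (2,)]
# In a 2-D 3x3 lattice, an inner node has four neighbors, two per axis.
print(neighbors((1, 1), (3, 3)))   # [(0, 1), (2, 1), (1, 0), (1, 2)]
```

Allowing coordinates to wrap around modulo the side length would turn the 1-D chain into the ring topology mentioned above.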

Advantages: high reliability. Disadvantages: complexity of implementation.

The physical medium for signal transmission is the network cable.

Network cable. Coaxial – consists of a copper core, insulation, a surrounding copper braid and an outer sheath; it may have an additional layer of foil. Thin coaxial cable is flexible, approximately 0.5 cm in diameter, and can carry signals up to 185 m without noticeable distortion; it transmits data at 10 Mbit/s and supports bus and ring topologies. Thick coaxial cable is approximately 1 cm in diameter, with a thicker copper core, and carries signals up to 500 m. A special device, a transceiver equipped with a special connector, is used to attach stations to it.

Twisted pair – two insulated copper wires twisted around each other. Twisting the wires suppresses electrical interference induced by neighboring pairs and other sources. Shielded (STP) and unshielded (UTP) twisted pair carry a signal up to 100 m. There are five categories of UTP:

1) traditional telephone cable for transmitting analogue signals;
2) cable of 4 twisted pairs, carrying signals at up to 4 Mbit/s;
3) cable of 4 twisted pairs, carrying signals at up to 10 Mbit/s;
4) up to 16 Mbit/s;
5) 100–1000 Mbit/s.

(The higher the category, the shorter the twist pitch.) An RJ-45 connector attaches twisted pair to the network; this cable is used in the star topology.

Fiber optic – data is transmitted along optical fibers as modulated light pulses. This is a reliable and secure transmission method: no electrical signals are carried, so the cable cannot be tapped and the data intercepted. Fiber optic lines are designed for moving large amounts of data at high speeds; the signal hardly fades and is not distorted. The fiber consists of a thin glass cylinder, called the core, covered with a layer of glass (the cladding) with a refractive index different from that of the core; sometimes the fiber is made of plastic. Each fiber carries signals in only one direction, so a cable consists of two fibers with separate connectors (one for transmission, one for reception). Fiber comes in single-mode and multimode varieties; multimode is used for communication over short distances because it is easier to install. Optical fiber is used for laying information highways and corporate networks and for transmitting data over significant distances (2 km in full-duplex mode over multimode fiber, and up to 32 km over single-mode).

Wireless LAN (WLAN) – a wireless local area network. Wi-Fi is one Wireless LAN variant: it allows a network to be deployed without laying cable, reducing the cost of network deployment and expansion. The 802.11a/b/g standards offer speeds from 11 to 54 Mbit/s. WiMAX (Worldwide Interoperability for Microwave Access) is a broadband radio access protocol developed by the WiMAX Forum consortium. Unlike Wi-Fi networks (IEEE 802.11x), where clients access an access point randomly, in WiMAX each client is given a clearly regulated time slot. In addition, WiMAX supports mesh topology.