Cybernetic (alphabetic) approach to information measurement


Questions studied:

o What an alphabet is; the power of an alphabet.

o What the information weight of a symbol of an alphabet is.

o How to measure the information volume of a text from the alphabetical point of view.

o What a byte, a kilobyte, a megabyte, and a gigabyte are.

o The speed of an information flow and the capacity of a channel.

The approach to measuring information discussed in this topic is an alternative to the content approach discussed earlier. Here we are talking about measuring the amount of information in a text (symbolic message) composed of characters of some alphabet. This measure of information has nothing to do with the content of the text. Therefore, this approach can be called objective, i.e. independent of the subject who perceives it.

The alphabetical approach is the only way of measuring information that can be applied to the information circulating in information technology and in computers.

The key concept in this topic is the alphabet. An alphabet is a finite set of symbols used to represent information. The number of characters in an alphabet is called the power of the alphabet (the term is borrowed from mathematical set theory). In the core content of the basic course, the alphabetical approach is considered only in the equiprobable approximation: it is assumed that every character of the alphabet is equally likely to appear at any position in the text. Of course, this does not correspond to reality and is a simplifying assumption.

In this approximation, the amount of information i carried by each character of the text is found from Hartley's equation: 2^i = N, where N is the power of the alphabet. The value i can be called the information weight of the symbol. It follows that the amount of information in the entire text, consisting of K symbols, equals the product of the information weight of a symbol and K: I = i × K. This value can be called the information volume of the text. This approach to measuring information is also called the volumetric approach.
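These two relations are easy to demonstrate to students on a computer. A minimal Python sketch (the function names are our own illustration, not from the textbook):

```python
import math

def information_weight(alphabet_power: int) -> float:
    """Information weight i of one symbol, from Hartley's equation 2**i = N."""
    return math.log2(alphabet_power)

def text_volume(alphabet_power: int, num_symbols: int) -> float:
    """Information volume of a text: I = i * K (equiprobable approximation)."""
    return num_symbols * information_weight(alphabet_power)

# A 32-character alphabet gives i = 5 bits per symbol;
# a text of 2000 such symbols then carries 10000 bits.
print(information_weight(32))   # 5.0
print(text_volume(32, 2000))    # 10000.0
```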

It is useful to discuss the following question with students: what is the minimum power of the alphabet with which information can be written (encoded)? This question is directly related to task No. 3 to § 3 of the textbook, which reads like this: “Prove that, based on the alphabetic approach, a message of any length using a one-character alphabet contains zero information.”

Let's assume that the alphabet used consists of just one character, say "1". Intuitively it is clear that nothing can be communicated using a single symbol, and the alphabetical approach proves this strictly. The information weight of a symbol in such an alphabet is found from the equation 2^i = 1. Since 1 = 2^0, it follows that i = 0 bits. The conclusion can be illustrated by a figurative example. Imagine a thick book of 1000 pages, every page of which is filled with the same character, the single "1" of the alphabet used. How much information does it contain? None at all: zero. Moreover, this answer is obtained from any position, both the content one and the alphabetical one.

The minimum power of an alphabet suitable for transmitting information is 2. Such an alphabet is called the binary alphabet. The information weight of a character in the binary alphabet is easy to determine: since 2^i = 2, i = 1 bit. So, one character of the binary alphabet carries 1 bit of information. Students will encounter this fact again when they become familiar with the alphabet of the computer's internal language, the language of binary coding.

A bit is the basic unit of information. In addition to it, other units are also used. Students should notice that in any system of measurement there are basic (standard) units and units derived from them. For example, the basic physical unit of length is the meter, but there are also the millimeter, the centimeter, the kilometer: distances of different scales are conveniently expressed in different units. The same is true of measuring information. 1 bit is the original unit. The next larger unit is the byte. A byte is introduced as the information weight of a character from an alphabet of power 256. Since 256 = 2^8, 1 byte = 8 bits. Here we again encounter a topic that serves as a kind of propaedeutic for the future study of the computer.

Already within this topic, you can tell students that the computer uses an alphabet of power 256 for the external representation of texts and other symbolic information (in the internal representation, any information in a computer is encoded in the binary alphabet). In fact, it is the byte that is used as the basic unit for expressing the volume of computer information.

When introducing the larger units, the kilobyte, megabyte and gigabyte, draw students' attention to the fact that we are accustomed to perceiving the prefix "kilo" as a factor of 1000. This is not the case in computer science: a kilobyte is 1024 times larger than a byte, and 1024 = 2^10. The same applies to "mega" relative to "kilo", and so on. Nevertheless, for approximate calculations a factor of 1000 is often used.
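The ladder of units can be shown in a few lines of Python (an illustrative sketch; the constant names are ours):

```python
# Each step up the ladder of units is a factor of 1024 = 2**10.
BIT = 1
BYTE = 8 * BIT
KILOBYTE = 1024 * BYTE
MEGABYTE = 1024 * KILOBYTE
GIGABYTE = 1024 * MEGABYTE

# Example: 1/512 of a megabyte, expressed in bits (used in Example 4 below).
print(MEGABYTE // 512)  # 16384
```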

As part of an in-depth course, the teacher can present the alphabetic approach in a more adequate version, without assuming the equiprobability of symbols. Theoretical and practical material on this topic can be found in the manual in subsection 1.4.

Examples of problem solving

Tasks on the topic "Measuring information. Content approach" involve the use of the equation 2^i = N. Two kinds of problem statement are possible: 1) given N, find i; 2) given i, find N.

In cases where N equals an integer power of two, it is advisable for students to do the calculations "in their heads." As mentioned above, it is useful to memorize the series of integer powers of 2, at least up to 2^10. Otherwise one should use the solution table for the equation 2^i = N, which covers values of N from 1 to 64.

For the basic level of the course, problems involving messages about equally probable events are proposed. Students must understand this and be sure to justify it qualitatively, using the term "equally probable events."

Example 1. How many bits of information does the message that the queen of spades was drawn from a deck of 32 cards carry?

Solution. When cards are randomly drawn from a shuffled deck, no card has any advantage over the others. Consequently, the random selection of any card, including the queen of spades, is an equally probable event. It follows that the uncertainty of knowledge about the result of pulling out a card is equal to 32 - the number of cards in the deck. If i is the amount of information in the message about the result of pulling out one card (queen of spades), then we have the equation:

2^i = 32.

Since 32 = 2^5, it follows that i = 5 bits.

The teacher can offer several more tasks on the topic of this task. For example: how much information is conveyed by the message that a red card was taken from a deck of cards? (1 bit, since there are the same number of red and black cards).

How much information is conveyed by the message that a card of diamonds was taken from a deck of cards? (2 bits, since there are 4 suits in the deck and the number of cards in them is equal).
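All three card questions can be checked with one short Python sketch. It assumes, consistently with the reasoning above, that a message narrowing N equally probable outcomes down to M of them carries log2(N/M) bits:

```python
import math

deck = 32        # cards in the deck
red = 16         # half of the deck is red
diamonds = 8     # one of four equal suits

print(math.log2(deck))             # 5.0 bits: "the queen of spades was drawn"
print(math.log2(deck / red))       # 1.0 bit:  "a red card was drawn"
print(math.log2(deck / diamonds))  # 2.0 bits: "a diamond was drawn"
```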

Example 2. There are two lotteries: “4 out of 32” and “5 out of 64”. The message about the results of which lottery contains more information?

Solution. This task has a "pitfall" that the teacher may run into. The first solution is trivial: pulling any number out of the lottery drum is an equally probable event. Therefore, in the first lottery the amount of information in the message about one number is 5 bits (2^5 = 32), and in the second it is 6 bits (2^6 = 64). The message about four numbers of the first lottery carries 5 × 4 = 20 bits; the message about five numbers of the second lottery carries 6 × 5 = 30 bits. Consequently, the message about the results of the second lottery carries more information than the message about the results of the first.

But another way of reasoning is also possible. Imagine that you are watching the lottery draw. The first ball is selected from the 32 balls in the drum; the result carries 5 bits of information. But the 2nd ball will be selected from 31 numbers, the 3rd from 30, the 4th from 29. This means that the amount of information carried by the 2nd number is found from the equation 2^i = 31. Using the solution table for this equation, we find: i = 4.95420 bits. For the 3rd number: 2^i = 30, i = 4.90689 bits. For the 4th number: 2^i = 29, i = 4.85798 bits. In total we get 5 + 4.95420 + 4.90689 + 4.85798 = 19.71907 bits. The same can be done for the second lottery. Of course, such calculations do not change the final conclusion: one could answer at once, without calculating anything, that the message about the second lottery carries more information than the message about the first. What is interesting here is the method of calculation, which takes into account the "drop-out" of participants.
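The "shrinking drum" computation is easy to reproduce in Python (a sketch; the function name is ours):

```python
import math

def lottery_information(balls: int, drawn: int) -> float:
    """Total bits in the message about all drawn numbers, with the drum
    shrinking after each draw: log2(N) + log2(N-1) + ... + log2(N-drawn+1)."""
    return sum(math.log2(balls - k) for k in range(drawn))

print(lottery_information(32, 4))  # ~19.71907 bits, the "4 out of 32" lottery
print(lottery_information(64, 5))  # ~29.76910 bits, the "5 out of 64" lottery
```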

The events in this sequence are not independent of one another (except the first). As we have seen, this is reflected in the different information content of the messages about each of them. The first (trivial) solution to the problem was obtained under the assumption that the events are independent and is therefore inaccurate in this case.

In tasks on the topic "Measuring information. Alphabetical approach" the following quantities are interrelated: the power of the alphabet, N; the information weight of a symbol, i; the number of characters in the text (the text volume), K; and the amount of information contained in the text (the information volume of the text), I. In addition, solving the problems requires knowing the relationships between the units of information: bit, byte, kilobyte, megabyte, gigabyte.

Problems corresponding to the minimum content of the basic course consider only the approximation of an equiprobable alphabet, i.e. the assumption that any character of the alphabet is equally likely to appear at any position of the text. Advanced-level problems use the more realistic assumption that the symbols are not equally probable. In this case another parameter appears: the probability of a symbol (p).

Example 3. The two texts contain the same number of characters. The first text is composed in an alphabet with a capacity of 32 characters, the second - with a capacity of 64 characters. How many times does the amount of information in these texts differ?

Solution. In the equiprobable approximation, the information volume of a text equals the product of the number of characters and the information weight of one character:

I = K × i.

Since both texts have the same number of characters (K), the difference in their information volumes is determined only by the difference in the information weights of the characters of the alphabets (i). Let us find i1 for the first alphabet and i2 for the second:

2^i1 = 32, hence i1 = 5 bits;

2^i2 = 64, hence i2 = 6 bits.

Consequently, the information volumes of the first and second texts are:

I1 = K × 5 bits, I2 = K × 6 bits.

It follows that the amount of information in the second text is 6/5, or 1.2 times, greater than in the first.
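A quick check in Python shows that the ratio does not depend on the common text length K:

```python
import math

K = 1000                               # any common number of characters
i1, i2 = math.log2(32), math.log2(64)  # 5.0 and 6.0 bits per symbol
print((K * i2) / (K * i1))             # 1.2, whatever K is
```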

Example 4. The size of a message containing 1024 characters was 1/512 of a megabyte. What is the size of the alphabet in which the message is written?

Solution. Let's convert the information volume of the message from megabytes to bits. To do this, multiply this value twice by 1024 (we get bytes) and once by 8:

I = 1/512 × 1024 × 1024 × 8 = 16384 bits.

Since this volume of information is carried by K = 1024 characters, one character accounts for:

i = I/K = 16384/1024 = 16 bits.

It follows that the size (power) of the alphabet used is 2^16 = 65536 characters.

Note that it is precisely this alphabet that will, after some time, become the international standard for representing symbolic information in a computer (Unicode encoding).
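The whole of Example 4 fits into a few lines of Python:

```python
size_bits = (1024 * 1024 * 8) // 512  # 1/512 MB converted to bits: 16384
chars = 1024                          # characters in the message
i = size_bits // chars                # information weight of one character: 16 bits
print(2 ** i)                         # 65536, the power of the alphabet
```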


In many textbooks, the content line "Information and information processes" begins in the same way: with the statement that the concept of "information" has become one of the fundamental concepts of modern science. Along with the concepts of "matter", "energy", "space" and "time", it forms the basis of the scientific picture of the world.

2.3. Methodology for solving problems on topics in the “Information” section


Tasks on the topic “Information”

1. Presentation of information.

1. Suppose that in the "Martian" language the expression "lot do may" means "the cat ate the mouse", "may si" means "gray mouse", and "ro do" means "he ate". How do you write "gray cat" in the "Martian" language?

Answer: lot si.

2. In some language, the phrase "Kalya malya" translates into Russian as "Red Sun", "Falya malya bala" as "Big Red Pear", and "Tsalya bala" as "Big Apple". How do you write the words pear, apple and sun in this language?

Answer: "Tsalya" is "Apple", "Falya" is "Pear", "Kalya" is "Sun".

Laboratory work No. 1

Measuring information (content approach)

1 bit is the amount of information that reduces the uncertainty of knowledge by half. Problems on this topic involve R. Hartley's formula:

i = log2 N, or 2^i = N,

where i is the amount of information and N is the number of equally probable outcomes of the event.

There are two possible options for task conditions:

1) given N, find i;

2) given i, find N.

Equally probable events

1. Four teams take part in a competition. How much information is in the message that the 3rd team won?

– The message reduces the initial uncertainty by a factor of four (two halvings) and therefore carries two bits of information.

2. A ball is in one of 64 boxes. How many bits of information does the message about where the ball is contain?

6 bits (64 = 2^6).

3. When guessing an integer in a certain range, 8 bits of information were obtained. How many numbers does the range contain? (2^8 = 256 numbers.)
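Tasks 1-3 can be checked in both directions with a minimal Python sketch (the helper names are ours):

```python
import math

def find_i(N: int) -> float:
    """Given N equally probable outcomes, find the amount of information i."""
    return math.log2(N)

def find_N(i: int) -> int:
    """Given the amount of information i in bits, find N."""
    return 2 ** i

print(find_i(4))   # 2.0 bits (task 1: four teams)
print(find_i(64))  # 6.0 bits (task 2: 64 boxes)
print(find_N(8))   # 256 numbers (task 3)
```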

5. How many bits of information does the message convey that a queen of spades was drawn from a deck of 32 cards?

The solution to this problem should be described as follows: when cards are randomly drawn from a shuffled deck, no card has an advantage over the others to be chosen. Consequently, the random selection of any card, including the queen of spades, is an equally probable event. It follows that the uncertainty of knowledge about the result of pulling out a card is equal to 32 - the number of cards in the deck. If i is the amount of information in the message about the result of drawing one card (queen of spades), then we have the equation

2^i = 32.

Since 32 = 2^5, it follows that i = 5 bits.

6. The ball is in one of three urns: A, B or C. Determine how many bits of information the message that it is in urn B contains.

Such a message contains I = log2 3 ≈ 1.585 bits of information.

7. You throw two dice with the numbers 1 to 6 printed on their faces. Determine how many bits of information the message that one die came up with a three and the other with a five carries.

log2 6 + log2 6 = 2.585 + 2.585 = 5.17 (bits)
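Tasks 6 and 7 give non-integer answers because 3 and 6 are not integer powers of two; the same values can be obtained in Python:

```python
import math

print(f"{math.log2(3):.3f}")                 # 1.585 bits (task 6: urn B)
print(f"{math.log2(6) + math.log2(6):.2f}")  # 5.17 bits (task 7: two dice)
```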

8. There are two lotteries: “4 out of 32” and “5 out of 64”. The message about the results of which lottery contains more information?

The first solution is trivial: pulling any number out of the lottery drum is an equally probable event. Therefore, in the first lottery the amount of information in the message about one number is 5 bits (2^5 = 32), and in the second it is 6 bits (2^6 = 64). The message about four numbers of the first lottery carries 5 × 4 = 20 bits; the message about five numbers of the second lottery carries 6 × 5 = 30 bits. Consequently, the message about the results of the second lottery carries more information than that about the first.

But another way of reasoning is also possible. Imagine that you are watching the lottery draw. The first ball is selected from the 32 balls in the drum; the result carries 5 bits of information. But the 2nd ball will be selected from 31 numbers, the 3rd from 30, the 4th from 29. This means that the amount of information carried by the 2nd number is found from the equation:

2^i = 31, whence i = 4.95420 bits.

For the 3rd number: 2^i = 30; i = 4.90689 bits.

For the 4th number: 2^i = 29; i = 4.85798 bits.

In total we get: 5 + 4.95420 + 4.90689 + 4.85798 = 19.71907 bits.


Development of a lesson on the topic: “How to measure information”

Textbook sections: § 2. Additional material: part 2, section 1.1.

Basic goals. Explain the concept of the informativeness of a message from the subjective (content) point of view on information. Introduce the unit of measurement of information, the bit. Teach students to calculate the amount of information in the particular case of a message about an event with a known probability (one of a given finite set).

Questions studied:

o What determines the information content of a message received by a person?

o Unit of measurement of information.

o The amount of information in a message about one of N equally probable events.

1. This topic uses the concept of a "message", which is intuitive for students, but it may need to be unpacked. A message is the information flow that reaches the receiver in the process of information transfer. A message is the speech we listen to (a radio broadcast, a teacher's explanation), the visual images we perceive (a film on television, a traffic light), the text of a book we read, and so on.

2. The question of the informativeness of a message should be discussed using examples offered by the teacher and the students. The rule: a message is informative if it adds to a person's knowledge, i.e. carries information for that person. The same message may differ in informativeness for different people. If the information is "old", i.e. the person already knows it, or the content of the message is unclear to him, then for him the message is uninformative. An informative message is one that contains new and comprehensible information.

Once again I would like to emphasize the cognitive (for students) and methodological (for teachers) complexity of this material. The concepts of "information" and "informativeness of a message" cannot be equated. The following example illustrates the difference. Question:

"Does a university textbook on higher mathematics contain information from the point of view of a first-grader?" Answer: "Yes, it does, from any point of view! Because the textbook contains the knowledge of people: the authors of the textbook, the creators of the mathematical apparatus (Newton, Leibniz and others), modern mathematicians." This truth is absolute. Another question: "Will the text of this textbook be informative for a first-grader if he tries to read it? In other words, can a first-grader expand his own knowledge with the help of this textbook?" Obviously, the answer is no. Reading the textbook, i.e. receiving messages, the first-grader will understand nothing and therefore will not convert them into his own knowledge. The introduction of the concept of the informativeness of a message is the first approach to the question of measuring information. If a message is uninformative for a person, the amount of information in it, from that person's point of view, is zero; the amount of information in an informative message is greater than zero.

When explaining this topic, you can invite students to play a kind of quiz. The teacher offers the children a list of questions, the answers to which they silently write down on paper; if a student does not know the answer, he puts a question mark. After this, the teacher gives the correct answers to his questions, and the students, having written down the teacher's answers, mark which of the answers turned out to be informative for them (+) and which did not (-). For messages marked with a minus, the reason for the lack of informativeness must be indicated: not new (I already know this) or incomprehensible. For example, the list of questions and the answers of one of the students may look like the table below.

3. The definition of a bit, the unit of measurement of information, can be difficult to understand. This definition contains the concept of "uncertainty of knowledge", which is unfamiliar to children and must first be unpacked. The teacher should be well aware that we are talking about a very special case: a message saying that one of a finite set (N) of possible events has occurred, for example the result of tossing a coin or throwing a game die, or of drawing an examination ticket. The uncertainty of knowledge about the result of such an event is the number of possible outcomes: for a coin it is 2, for a die 6, for the tickets 30 (if there were 30 tickets on the table).

| Teacher's question | Student's answer | Teacher's message | Informativeness of the message | Reason for lack of informativeness |
|---|---|---|---|---|
| 1. Which city is the capital of France? | The capital of France is Paris | The capital of France is Paris | - | Not new (I already know this) |
| 2. What does colloid chemistry study? | ? | Colloid chemistry studies the dispersion states of systems with a high degree of fragmentation | - | Incomprehensible |
| 3. What is the height and weight of the Eiffel Tower? | ? | The Eiffel Tower is 300 meters high and weighs 9000 tons | + | |

4. Another difficulty is the concept of equiprobability. Here one should build on children's intuition, supporting it with examples. Events are equally probable if none of them has an advantage over the others. From this point of view, heads and tails are equally probable, and so is the roll of any of the six faces of a die. It is also useful to give examples of unequally probable events. For example, in a weather report, depending on the season, messages about rain or snow may have different probabilities: rain is most likely to be reported in summer, snow in winter, and in a transitional period (March or November) they may be equally likely. The concept of a "more probable event" can be explained through related notions: more expected, occurring more often under the given conditions. The basic course does not require students to master the strict definition of probability or the ability to calculate probabilities, but they must gain an idea of equally probable and unequally probable events and learn to give examples of both.

If class time allows, it is useful to discuss with students the concepts of a "certain event" (an event that is sure to happen) and an "impossible event". From these concepts one can move to an intuitive idea of a measure of probability: it is enough to say that the probability of a certain event is 1 and that of an impossible event is 0, and that these are the extreme values, so in all other, "intermediate" cases the probability lies between zero and one. In particular, the probability of each of two equally probable events is 1/2. For in-depth study of the basic course, refer to section 1.1, "Probability and Information", of the second part of the textbook.

5. The textbook gives the following definition of the unit of information: "A message that reduces the uncertainty of knowledge by a factor of 2 carries 1 bit of information." A little further on comes a definition for the special case: "A message that one of two equally probable events has occurred carries 1 bit of information." A teacher who prefers an inductive method of explanation may begin with the second definition. Discussing the traditional example of a coin (heads or tails), it should be noted that receiving the message about the result of the toss halved the uncertainty of knowledge: before the toss there were two equally probable options, after the message only one remained. It should then be said that in all other cases of messages about equally probable events, when the uncertainty of knowledge is halved, 1 bit of information is transmitted. The teacher can supplement the examples in the textbook with others, and also invite students to invent their own. Inductively, from particular examples, the teacher and the class arrive at the general formula: 2^i = N. Here N is the number of equally probable events (the uncertainty of knowledge), and i is the amount of information in the message that one of the N events has occurred. If N is known and i is unknown, the formula becomes an exponential equation, which, as is known, is solved using the logarithm function: i = log2 N. Here the teacher has two options:

either explain, running ahead of the mathematics lessons, what a logarithm is, or "not get involved" with logarithms. In the second case, students should consider the solution of the equation for the special cases when N is an integer power of two: 2, 4, 8, 16, 32, etc. The explanation follows this scheme:

If N = 2 = 2^1, the equation takes the form 2^i = 2^1, hence i = 1.

If N = 4 = 2^2, the equation takes the form 2^i = 2^2, hence i = 2.

If N = 8 = 2^3, the equation takes the form 2^i = 2^3, hence i = 3, etc.

In general, if N = 2^k, where k is an integer, then the equation becomes 2^i = 2^k, and therefore i = k. It is useful for students to memorize the series of integer powers of two, at least up to 2^10 = 1024; they will encounter these values again in other sections.

For those values of N that are not integer powers of two, the solution of the equation 2^i = N can be obtained from the table given in § 2 of the textbook. It is not at all necessary to tell students that it is a table of logarithms to base 2. For example, to determine how many bits of information the message about the result of throwing a six-sided die carries, one needs to solve the equation 2^i = 6. Since 2^2 < 6 < 2^3, it should be explained to students that 2 < i < 3. Looking at the table, we find (to five decimal places) that i = 2.58496 bits.
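The table itself is simply a table of base-2 logarithms, and the teacher can regenerate any fragment of it in Python:

```python
import math

# Fragment of the solution table for 2^i = N (the full table runs N = 1..64).
for N in range(1, 11):
    print(N, f"{math.log2(N):.5f}")  # N = 6 gives 2.58496, as in the die example
```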

Problems on the topic of § 2 involve the use of the equation 2^i = N. Two kinds of problem statement are possible:

1) given N, find i;

2) given i, find N.

In cases where N equals an integer power of two, it is advisable for students to do the calculations "in their heads." As mentioned above, it is useful to memorize the series of integer powers of 2, at least up to 2^10. Otherwise one should use Table 1.1, which covers values of N from 1 to 64.

For the basic level of the course, problems involving messages about equally probable events are proposed. Students must understand this and be sure to justify it qualitatively, using the term "equally probable events."

Example 1. [1] Task No. 7 to § 2. How many bits of information does the message that the queen of spades was drawn from a deck of 32 cards carry?

The solution should be set out as follows: when a card is drawn at random from a shuffled deck, no card has an advantage over the others. Consequently, the random selection of any card, including the queen of spades, is an equally probable event. It follows that the uncertainty of knowledge about the result of drawing a card equals 32, the number of cards in the deck. If i is the amount of information in the message about the result of drawing one card (the queen of spades), then we have the equation:

2^i = 32.

Since 32 = 2^5, it follows that i = 5 bits.

The teacher can offer several more tasks on the topic of this task. For example:

How much information is conveyed by the message that a red card has been drawn from a deck of cards? (1 bit, since there are the same number of red and black cards.)

How much information is conveyed by the message that a card of diamonds was taken from a deck of cards? (2 bits, since there are 4 suits in the deck and the number of cards in them is equal.)

Example 2. [1] Task No. 8 to § 2. Two lotteries are held: "4 out of 32" and "5 out of 64". The message about the results of which lottery contains more information?

This task has a "pitfall" that the teacher may run into. The first solution is trivial: pulling any number out of the lottery drum is an equally probable event. Therefore, in the first lottery the amount of information in the message about one number is 5 bits (2^5 = 32), and in the second it is 6 bits (2^6 = 64). The message about four numbers of the first lottery carries 5 × 4 = 20 bits; the message about five numbers of the second lottery carries 6 × 5 = 30 bits. Consequently, the message about the results of the second lottery carries more information than that about the first.

But another way of reasoning is also possible. Imagine that you are watching the lottery draw. The first ball is selected from the 32 balls in the drum; the result carries 5 bits of information. But the 2nd ball will be selected from 31 numbers, the 3rd from 30, the 4th from 29. This means that the amount of information carried by the 2nd number is found from the equation: 2^i = 31.

Looking at Table 1.1, we find: i = 4.95420 bits. For the 3rd number: 2^i = 30; i = 4.90689 bits. For the 4th number: 2^i = 29; i = 4.85798 bits. In total we get: 5 + 4.95420 + 4.90689 + 4.85798 = 19.71907 bits.