Basic physical characteristics of sound signals and auditory perception


Whatever research method is used in the audiological study of auditory function, ideas about the basic physical characteristics of sound signals are essential. Below we will present only the most basic concepts of acoustics and electroacoustics.

Values of the speed of propagation of a sound wave at different temperatures (table)


Sound propagates in nature as a time-varying disturbance of an elastic medium. The oscillatory movements of the particles of such a medium arising under the influence of sound are called sound vibrations, and the space in which sound vibrations propagate forms a sound field. If the medium in which sound vibrations propagate is liquid or gaseous, the particles oscillate along the line of sound propagation, and the vibrations are therefore considered longitudinal.

When sound propagates in solids, transverse sound vibrations are observed along with longitudinal ones. The propagation of vibrations in a medium has a direction, called a sound ray, and the surface connecting all adjacent points of a sound wave with the same phase of vibration is called the front of the sound wave. Sound waves travel at different speeds in different media; the speed is determined by the properties of the medium, above all its elasticity and density.

Information about the density of the medium is significant because the density determines the acoustic resistance that the medium offers to the propagating sound wave. The speed of propagation of a sound wave is also affected by the temperature of the medium: as the temperature increases, the speed of sound increases.

The main physical characteristics of sound for an audiological examination are its intensity and frequency. That is why they will be considered in more detail.

To move on to the physical characteristic of sound intensity, it is first necessary to consider a number of other parameters of sound signals related to their intensity.

Sound pressure, p(t), characterizes the force acting on a unit area perpendicular to the direction of particle motion. In the SI system, sound pressure is measured in newtons per square meter (N/m2, i.e. pascals); a newton is the force that imparts an acceleration of 1 m/s2 to a mass of 1 kg.

Other units of measurement of sound pressure are also given in the literature. Below is the ratio of the main units used:

1 N/m2 = 10 dyn/cm2 = 10 µbar (microbar)

The energy of acoustic vibrations (E) characterizes the energy of particles moving under the influence of sound pressure (measured in joules - J).

The energy per unit area characterizes the acoustic energy density, measured in J/m2. The intensity of sound vibrations is the acoustic power flowing through a unit area per unit time, i.e. J/(m2·s), or W/m2.

Humans and animals perceive a very wide range of sound pressures (from 0.0002 to 200 µbar). Therefore, for convenience of measurement, it is customary to use relative logarithmic scales. When decimal logarithms are used, sound pressure level is measured in bels and decibels (1 B = 10 dB). Occasionally (rather rarely) it is measured in nepers (1 Np ≈ 8.686 dB); in that case natural logarithms are used, i.e. logarithms to the base e rather than to the base 10 (as with B and dB).

However, it should be noted that the bel and decibel were introduced as logarithmic measures of a power ratio, whereas power and intensity are proportional to the square of the sound pressure. Therefore, when converting to sound intensity and sound pressure, the following relationships hold:

N = 10 lg (I/I0) = 20 lg (P/P0)

where N is the intensity or sound pressure level in bels (B) or decibels (dB), and I0 and P0 are the conventionally accepted reference levels of intensity and sound pressure. The reference sound pressure (the Russian literature often uses the abbreviation "УЗД", for "sound pressure level"; in English the abbreviation "SPL", from "Sound Pressure Level", is used) is taken to be 2×10-5 N/m2. The relationship between this reference level and other units of sound pressure is as follows:

2×10-5 N/m2 = 2×10-4 dyn/cm2 = 2×10-4 µbar
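These reference relationships are easy to check numerically. A minimal Python sketch, using the reference values from the text (the factor 20 rather than 10 appears because power is proportional to the square of the pressure):

```python
import math

P0 = 2e-5    # reference sound pressure, N/m^2 (the SPL zero level from the text)
I0 = 1e-12   # conventional reference intensity, W/m^2

def spl_db(p):
    # 20*lg rather than 10*lg, because power ~ pressure squared
    return 20 * math.log10(p / P0)

def intensity_level_db(i):
    return 10 * math.log10(i / I0)

print(spl_db(2e-5))            # threshold of hearing: 0 dB
print(intensity_level_db(10))  # pain threshold (10 W/m^2): 130 dB
```

Note that a tenfold increase in pressure adds 20 dB, while a tenfold increase in intensity adds only 10 dB, which is exactly the distinction drawn in the paragraph above.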

Let us now consider the acoustic characteristics of the frequency of sound signals. In most cases, harmonic sound signals are used to examine auditory function.

A harmonic sound signal (a sinusoidal signal, or pure tone) is characterized, in addition to sound pressure and the initial phase at which the tone is switched on, by such an important physical characteristic as wavelength. All harmonic sound signals (pure tones) are periodic, with period T. The wavelength is defined as the distance between adjacent wave fronts with the same phase of oscillation and is calculated by the formula:

λ = c × T

where c is the speed of propagation of sound vibrations (usually in m/s) and T is the period. The frequency of sound vibrations (f) then follows from the formula:

f = 1/T = c/λ

The frequency of a tone is the number of sound vibrations per second and is expressed in hertz (abbreviated Hz). Based on the range of frequencies perceived by humans, frequencies in the range 20-20,000 Hz are called audio frequencies; lower frequencies (f < 20 Hz) are called infrasound, and higher ones (f > 20,000 Hz) ultrasound.
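The wavelength formula, combined with the temperature dependence of the speed of sound mentioned earlier, can be sketched as follows. The linear approximation c ≈ 331.3 + 0.606·t (t in °C) is a common textbook formula for air, used here as an assumption:

```python
def speed_of_sound(t_celsius):
    # common linear approximation for the speed of sound in air, m/s
    return 331.3 + 0.606 * t_celsius

def wavelength(f_hz, t_celsius=20.0):
    # lambda = c * T = c / f
    return speed_of_sound(t_celsius) / f_hz

# the audible range spans wavelengths from roughly 17 m down to 17 mm
print(wavelength(20))      # ~17.2 m at 20 C
print(wavelength(20000))   # ~0.017 m at 20 C
```

This three-orders-of-magnitude spread of wavelengths is one reason low and high frequencies behave so differently in rooms and in transducers.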

In turn, for purely practical reasons, the audio-frequency range is sometimes conventionally divided into low (below 500 Hz), medium (500-4000 Hz) and high (4000 Hz and above) frequencies. Note that for sound vibrations of 1000 Hz and above, the designation kilohertz (abbreviated kHz) is often used.


Schematic representation of the shape and spectrum of a number of sound signals used in audiological research:

1 - tone; 2 - short sound pulse (click); 3 - noise signal; 4 - short tone burst; 5 - amplitude-modulated signal (T - amplitude modulation period); 6 - frequency modulated signal.


If a sound signal contains many different frequencies (ideally all frequencies of the sound spectrum), then a so-called noise signal appears.

One of the methods of audiological examination of patients is acoustic impedance measurement. Therefore, let us consider in more detail another physical characteristic of sound signals.

It is well known that different types of energy encounter a certain resistance when propagating in media. As indicated above, acoustic energy encounters the same kind of resistance when sound waves propagate in acoustic systems. From the following presentation it will become obvious that the peripheral parts of the auditory system, i.e. the outer and middle ear, are from a physical point of view typical acoustic systems, namely acoustic sound receivers. Therefore, it is necessary to consider the essence and characteristics of acoustic resistance with regard to the passage of sound signals through the peripheral parts of the auditory system.

Complex acoustic impedance, or simply acoustic impedance, is defined as the total resistance to the passage of acoustic energy in acoustic systems. Acoustic impedance is the ratio of the complex amplitude of sound pressure to the oscillatory volume velocity and is described by the formula:

Za = Re Za + i·Im Za

In this equation, Re Za is the active acoustic impedance (also called the true or resistive impedance), which is related to the dissipation of energy in the acoustic system itself. Dissipation here means the transition of the energy of ordered processes (such as the kinetic energy of sound waves) into the energy of disordered processes (ultimately into heat). The second, imaginary part of the equation, Im Za, is called the acoustic reactance; it is caused by inertial forces or by forces of elasticity (compliance, or flexibility).
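Python's built-in complex numbers make the split into active and reactive parts easy to illustrate. The numeric values below are hypothetical, chosen only for the example; they are not measured middle-ear data:

```python
import cmath

# hypothetical values in acoustic ohms (Pa*s/m^3), for illustration only
re_za = 4.0e7   # active (resistive) part: energy dissipated as heat
im_za = -3.0e7  # reactive part: a negative sign means compliance dominates

za = complex(re_za, im_za)                    # Za = Re Za + i*Im Za
magnitude = abs(za)                           # total opposition to volume velocity
phase_deg = cmath.phase(za) * 180 / cmath.pi  # pressure/velocity phase angle

print(magnitude)   # 5.0e7 for this 4:3 ratio of parts
print(phase_deg)   # about -36.9 degrees
```

Impedance measurement (tympanometry) effectively tracks how this magnitude and phase change as the stiffness of the middle-ear system is varied.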

Below we will describe in detail the procedure for studying the acoustic impedance of the middle ear with a number of measurements essential for an audiological examination (tympanometry, impedance measurement).

Ya.A. Altman, G. A. Tavartkiladze

In this article we will dive deeper into the structure of the auditory system and connect, at the "physical" level, what I wrote about in the previous three articles. The topic of the "loudness limit" will occupy this article and the next two. A sound signal of any nature can be described by a certain set of physical characteristics: frequency, intensity, duration, temporal structure, spectrum, etc. These correspond to subjective sensations that arise when the auditory system perceives sounds: loudness, pitch, timbre, beats, consonance and dissonance, masking, localization (the stereo effect), etc. As we know, auditory sensations are not linear! A given sensation usually corresponds to a whole complex of physical parameters. For example, loudness is a sensation that depends on the frequency content, the particular spectrum, and the intensity of the sound itself.

The nonlinearity of auditory perception was established long ago. It is expressed by the Weber-Fechner law, an empirical psychophysiological law stating that the intensity of sensation is proportional to the logarithm of the stimulus intensity.

In 1834, E. Weber conducted a series of experiments and came to the conclusion that for a new stimulus to feel different from the previous one, it must differ from the original by an amount proportional to the original stimulus. Based on these observations, G. Fechner in 1860 formulated the "basic psychophysical law", according to which the strength of sensation is proportional to the logarithm of the stimulus intensity. As an example: a chandelier with 8 bulbs seems as much brighter to us than a chandelier with 4 bulbs as a chandelier with 4 bulbs is brighter than one with 2 bulbs. That is, the number of light bulbs must increase by the same factor each time for the increase in brightness to seem constant. Conversely, if the absolute increase in brightness (the difference "after" minus "before") is constant, the apparent increase diminishes as the brightness itself grows. If you add one bulb to a chandelier of two, the apparent increase in brightness is significant; if you add one bulb to a chandelier of twelve, you will hardly notice it.
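The chandelier example maps directly onto the Weber-Fechner formula. A minimal sketch, with the constants k and I0 chosen arbitrarily for illustration:

```python
import math

def sensation(intensity, k=1.0, i0=1.0):
    # Weber-Fechner: S = k * ln(I / I0)
    return k * math.log(intensity / i0)

# doubling the number of bulbs always adds the same increment of sensation
step_2_to_4 = sensation(4) - sensation(2)
step_4_to_8 = sensation(8) - sensation(4)
print(abs(step_2_to_4 - step_4_to_8) < 1e-12)  # True: equal perceived steps

# a constant absolute increment (+1 bulb) feels smaller and smaller
print(sensation(3) - sensation(2))    # ln(3/2), clearly noticeable
print(sensation(13) - sensation(12))  # ln(13/12), barely noticeable
```

The same logarithmic compression is why sound levels are expressed in decibels rather than in raw pressures or intensities.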

This example (although it does not fully describe the mechanism of loudness perception) shows a direct and obvious parallel with the "frequency groups" (critical bands) of the auditory system. Filling them, like adding "light bulbs", leads to a subjective increase in the sensation of loudness. The degree of "filling" corresponds to the intensity of the sound.

But before we talk in more detail not only about loudness perception but also about the auditory system's ability to determine pitch, we need to look at the structure of the ear more closely and understand clearly how all these mechanisms work. I will cover this in the next article.

Psychoacoustics, a field of science bordering between physics and psychology, studies data on a person’s auditory sensation when a physical stimulus—sound—is applied to the ear. A large amount of data has been accumulated on human reactions to auditory stimuli. Without this data, it is difficult to obtain a correct understanding of the operation of audio transmission systems. Let's consider the most important features of human perception of sound.
A person feels changes in sound pressure occurring at a frequency of 20-20,000 Hz. Sounds with frequencies below 40 Hz are relatively rare in music and do not exist in spoken language. At very high frequencies, the musical perception disappears and a certain vague sound sensation appears, depending on the individuality of the listener and his age. With age, a person's hearing sensitivity decreases, primarily in the upper frequencies of the sound range.
But it would be wrong to conclude on this basis that the transmission of a wide frequency band by a sound-reproducing installation is unimportant for older people. Experiments have shown that people, even if they can barely perceive signals above 12 kHz, very easily recognize the lack of high frequencies in a musical transmission.

Frequency characteristics of auditory sensations

The range of sounds audible to humans in the range of 20-20,000 Hz is limited in intensity by thresholds: below - audibility and above - pain.
The hearing threshold is estimated by the minimum sound pressure, or more precisely the minimum pressure increment, at which a sound is perceived. Hearing is most sensitive at frequencies of 1000-5000 Hz, where the threshold is lowest (a sound pressure of about 2×10-5 Pa). Toward lower and higher sound frequencies, hearing sensitivity drops sharply.
The pain threshold determines the upper limit of the perception of sound energy and corresponds approximately to a sound intensity of 10 W/m2, or 130 dB (for a reference signal with a frequency of 1000 Hz).
As sound pressure increases, the intensity of the sound also increases, and the auditory sensation grows in discrete steps, characterized by the intensity discrimination threshold. The number of these steps at medium frequencies is approximately 250; at low and high frequencies it decreases, averaging about 150 over the frequency range.

Since the range of intensity changes is 130 dB, the elementary jump in sensations on average over the amplitude range is 0.8 dB, which corresponds to a change in sound intensity by 1.2 times. At low hearing levels these jumps reach 2-3 dB, at high levels they decrease to 0.5 dB (1.1 times). An increase in the power of the amplification path by less than 1.44 times is practically not detected by the human ear. With a lower sound pressure developed by the loudspeaker, even doubling the power of the output stage may not produce a noticeable result.
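The decibel-to-ratio arithmetic used in this paragraph is easy to reproduce with two one-line conversions:

```python
import math

def db_to_power_ratio(db):
    # how many times the power (intensity) changes for a given dB step
    return 10 ** (db / 10)

def power_ratio_to_db(ratio):
    return 10 * math.log10(ratio)

print(round(db_to_power_ratio(0.8), 2))   # 1.2: the average discrimination step
print(round(db_to_power_ratio(0.5), 2))   # 1.12: the step at high levels
print(round(power_ratio_to_db(1.44), 2))  # 1.58 dB for a 1.44x power increase
```

This is why a small increase in amplifier power buys so little audible loudness: the ear registers ratios, not absolute watts.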

Subjective sound characteristics

The quality of sound transmission is assessed by auditory perception. Therefore, the technical requirements for a sound transmission path, or for its individual links, can be correctly determined only by studying the patterns that connect the subjectively perceived sensation of sound with its objective characteristics: pitch, loudness and timbre.
The concept of pitch implies a subjective assessment of the perception of sound across the frequency range. Sound is usually characterized not by frequency, but by pitch.
A tone is a signal of a certain pitch that has a discrete spectrum (musical sounds, vowel sounds of speech). A signal that has a wide continuous spectrum, all frequency components of which have the same average power, is called white noise.

A gradual increase in the frequency of sound vibrations from 20 to 20,000 Hz is perceived as a gradual change in tone from the lowest (bass) to the highest.
The accuracy with which a person determines the pitch of a sound by ear depends on the acuity, musicality and training of the ear. It should be noted that pitch depends to some extent on intensity: at high levels, sounds of greater intensity appear lower in pitch than weaker ones.
The human ear can clearly distinguish two tones that are close in pitch. For example, in the frequency range of approximately 2000 Hz, a person can distinguish between two tones that differ from each other in frequency by 3-6 Hz.
The subjective scale of sound perception in frequency is close to the logarithmic law. Therefore, doubling the vibration frequency (regardless of the initial frequency) is always perceived as the same change in pitch. The height interval corresponding to a 2-fold change in frequency is called an octave. The range of frequencies perceived by humans is 20-20,000 Hz, which covers approximately ten octaves.
An octave is a fairly large interval of pitch change; a person distinguishes considerably smaller intervals. Thus, within the ten octaves perceived by the ear, more than a thousand gradations of pitch can be distinguished. Music uses smaller intervals called semitones, which correspond to a frequency change of approximately 1.059 times (a ratio of 2^(1/12)).
An octave is divided into half-octaves and thirds of an octave. For the latter, the following series of frequencies is standardized: 1; 1.25; 1.6; 2; 2.5; 3.15; 4; 5; 6.3; 8; 10. These are the boundaries of the third-octave bands. If these frequencies are placed at equal distances along the frequency axis, a logarithmic scale results. For this reason, all frequency characteristics of sound-transmission devices are plotted on a logarithmic scale.
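A sketch of the octave and third-octave arithmetic described above. The standardized series follows powers of 10^(1/10), which the standards then round to the nominal values listed in the text:

```python
import math

SEMITONE = 2 ** (1 / 12)   # ~1.059: twelve equal semitones per octave

def octaves_between(f_low, f_high):
    return math.log2(f_high / f_low)

def third_octave_boundaries():
    # exact boundaries 10**(n/10); the standard rounds them to nominal
    # values 1; 1.25; 1.6; 2; 2.5; 3.15; 4; 5; 6.3; 8; 10
    return [10 ** (n / 10) for n in range(11)]

print(round(octaves_between(20, 20000), 2))  # 9.97: roughly ten octaves
print(round(SEMITONE ** 12, 6))              # 2.0: twelve semitones span one octave
```

Stacking twelve semitone ratios recovers exactly one octave, which is the defining property of the equal-tempered scale.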
The loudness of a transmission depends not only on the intensity of the sound but also on its spectral composition, the conditions of perception and the duration of exposure. Thus, two tones, one of middle and one of low frequency, having the same intensity (or the same sound pressure), are not perceived as equally loud. Therefore the concept of the loudness level, measured in phons, was introduced to designate sounds of equal loudness. The loudness level of a sound in phons is taken equal to the sound pressure level, in decibels, of an equally loud pure tone of 1000 Hz; i.e. at 1000 Hz the loudness level in phons and the level in decibels coincide. At other frequencies, sounds of the same sound pressure may appear louder or quieter.
The experience of sound engineers in recording and editing musical works shows that in order to better detect sound defects that may arise during work, the volume level during control listening should be maintained high, approximately corresponding to the volume level in the hall.
With prolonged exposure to intense sound, hearing sensitivity gradually decreases, the more so the higher the loudness. This decrease in sensitivity reflects the reaction of hearing to overload, i.e. its natural adaptation. After a break in listening, sensitivity is restored. It should be added that when perceiving high-level signals, the auditory system introduces its own, so-called subjective, distortions (which indicates the nonlinearity of hearing). Thus, at a signal level of 100 dB, the first and second subjective harmonics reach levels of 85 and 70 dB.
A high loudness level combined with long exposure causes irreversible changes in the organ of hearing. It has been noted that in recent years hearing thresholds among young people have risen sharply; the reason is a passion for pop music with its high sound levels.
The loudness level is measured with an electroacoustic device, a sound level meter. The sound being measured is first converted by a microphone into electrical oscillations. After amplification by a special voltage amplifier, these oscillations are measured with a meter calibrated in decibels. So that the readings correspond as closely as possible to the subjective perception of loudness, the device is equipped with special filters that change its sensitivity at different frequencies in accordance with the frequency characteristic of hearing sensitivity.
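The best-known of these weighting filters is the A-weighting curve. The sketch below uses the standard analytic form of that curve (as given in IEC 61672) purely to illustrate the principle of frequency-dependent sensitivity; it does not model any particular meter:

```python
import math

def a_weighting_db(f):
    # standard A-weighting curve: 0 dB at 1 kHz,
    # strong attenuation of low frequencies, mild rolloff at the top
    f2 = f * f
    ra = (12194.0 ** 2 * f2 ** 2) / (
        (f2 + 20.6 ** 2)
        * math.sqrt((f2 + 107.7 ** 2) * (f2 + 737.9 ** 2))
        * (f2 + 12194.0 ** 2)
    )
    return 20 * math.log10(ra) + 2.00

print(round(a_weighting_db(1000), 1))  # ~0.0 dB at the 1 kHz reference
print(round(a_weighting_db(100), 1))   # ~-19.1 dB: low tones are de-emphasized
```

The roughly 19 dB penalty at 100 Hz mirrors the reduced sensitivity of hearing at low frequencies discussed earlier.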
An important characteristic of sound is timbre. The ability of hearing to distinguish it allows us to perceive signals with a great variety of shades. The sound of each instrument and voice, thanks to its characteristic shades, becomes colorful and easily recognizable.
Timbre, being a subjective reflection of the complexity of the perceived sound, has no quantitative assessment and is characterized by qualitative terms (beautiful, soft, juicy, etc.). When transmitting a signal along an electroacoustic path, the resulting distortions primarily affect the timbre of the reproduced sound. The condition for the correct transmission of the timbre of musical sounds is the undistorted transmission of the signal spectrum. The signal spectrum is the collection of sinusoidal components of a complex sound.
The simplest spectrum is that of a pure tone: it contains only one frequency. The sound of a musical instrument is richer: its spectrum consists of the fundamental frequency and several "admixture" frequencies called overtones (higher tones). The overtone frequencies are multiples of the fundamental frequency and are usually smaller in amplitude.
The timbre of the sound depends on the distribution of intensity over overtones. The sounds of different musical instruments vary in timbre.
More complex still is the spectrum of a combination of musical sounds, called a chord. In such a spectrum there are several fundamental frequencies along with their corresponding overtones.
Differences in timbre are due mainly to the low- and mid-frequency components of the signal; hence the great variety of timbres is associated with signals lying in the lower part of the frequency range. Signals belonging to the upper part lose their timbre coloring more and more as their frequency rises, because their harmonic components gradually pass beyond the limits of audible frequencies. This can be explained by the fact that up to 20 or more harmonics actively participate in forming the timbre of low sounds, 8-10 for middle sounds and only 2-3 for high sounds, the rest being either weak or outside the audible range. Therefore high sounds are, as a rule, poorer in timbre.
Almost all natural sound sources, including sources of musical sounds, have a specific dependence of timbre on volume level. Hearing is also adapted to this dependence - it is natural for it to determine the intensity of a source by the color of the sound. Louder sounds are usually more harsh.

Musical sound sources

A number of factors characterizing the primary sound sources have a great influence on the sound quality of electroacoustic systems.
The acoustic parameters of musical sources depend on the composition of the performers (orchestra, ensemble, group, soloist) and on the type of music (symphonic, folk, pop, etc.).

The origin and formation of sound on each musical instrument has its own specifics associated with the acoustic characteristics of sound production in a particular musical instrument.
An important element of musical sound is attack. This is a specific transition process during which stable sound characteristics are established: volume, timbre, pitch. Any musical sound goes through three stages - beginning, middle and end, and both the initial and final stages have a certain duration. The initial stage is called an attack. It lasts differently: for plucked instruments, percussion and some wind instruments it lasts 0-20 ms, for the bassoon it lasts 20-60 ms. An attack is not just an increase in the volume of a sound from zero to some steady value; it can be accompanied by the same change in the pitch of the sound and its timbre. Moreover, the attack characteristics of the instrument are not the same in different parts of its range with different playing styles: the violin is the most perfect instrument in terms of the wealth of possible expressive methods of attack.
One of the characteristics of any musical instrument is the frequency range of its sound. In addition to the fundamental frequencies, each instrument is characterized by additional high-frequency components, overtones (or, as is customary in electroacoustics, higher harmonics), which determine its specific timbre.
It is known that sound energy is unevenly distributed across the entire spectrum of sound frequencies emitted by a source.
Most instruments exhibit amplification of the fundamental frequencies, as well as of individual overtones, in certain (one or more) relatively narrow frequency bands (formants) that differ from instrument to instrument. The resonant frequencies (in hertz) of the formant regions are approximately: tuba 100-200, horn 200-400, trombone 300-900, trumpet 800-1750, saxophone 350-900, oboe 800-1500, bassoon 300-900, clarinet 250-600.
Another characteristic property of musical instruments is the strength of their sound, determined by the greater or lesser amplitude (swing) of their sounding body or air column (a greater amplitude corresponds to a stronger sound, and vice versa). Peak acoustic power values (in watts) are approximately: large orchestra 70, bass drum 25, timpani 20, snare drum 12, trombone 6, piano 0.4, trumpet and saxophone 0.3, tuba 0.2, double bass 0.16, piccolo 0.08, clarinet, horn and triangle 0.05.
The ratio of the sound power extracted from an instrument when played “fortissimo” to the power of sound when played “pianissimo” is usually called the dynamic range of the sound of musical instruments.
The dynamic range of a musical sound source depends on the type of performing group and the nature of the performance.
Let's consider the dynamic range of individual sound sources. The dynamic range of individual musical instruments and ensembles (orchestras and choirs of various compositions), as well as voices, is understood as the ratio of the maximum sound pressure created by a given source to the minimum, expressed in decibels.
In practice, when determining the dynamic range of a sound source, one usually operates only on sound pressure levels, calculating or measuring their corresponding difference. For example, if the maximum sound level of an orchestra is 90 and the minimum is 50 dB, then the dynamic range is said to be 90 - 50 = 40 dB. In this case, 90 and 50 dB are sound pressure levels relative to zero acoustic level.
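The orchestra example works out as follows. A minimal sketch, reusing the zero acoustic level from earlier in the text, showing that a difference of levels equals the level of the pressure ratio:

```python
import math

P0 = 2e-5  # zero acoustic level, N/m^2

def level_db(p):
    return 20 * math.log10(p / P0)

def dynamic_range_db(p_max, p_min):
    # a difference of levels equals the level of the pressure ratio
    return level_db(p_max) - level_db(p_min)

p_max = P0 * 10 ** (90 / 20)  # pressure corresponding to the 90 dB maximum
p_min = P0 * 10 ** (50 / 20)  # pressure corresponding to the 50 dB minimum

print(dynamic_range_db(p_max, p_min))  # 40 dB
print(p_max / p_min)                   # a 100-fold pressure ratio
```

Note how a seemingly modest 40 dB range already corresponds to a hundredfold difference in sound pressure, and a ten-thousandfold difference in intensity.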
The dynamic range of a given sound source is a variable quantity. It depends on the nature of the work being performed and on the acoustic conditions of the room in which the performance takes place. Reverberation expands the dynamic range, which typically reaches its maximum in rooms of large volume with minimal sound absorption. Almost all instruments and human voices have an uneven dynamic range across the sound registers. For example, the loudness level of a vocalist's lowest sound sung forte can equal the level of the highest sound sung piano.

The dynamic range of a particular musical program is expressed in the same way as for individual sound sources, but the maximum sound pressure is taken at a fortissimo (ff) dynamic and the minimum at pianissimo (pp).

The highest loudness, indicated in the score as fff (forte-fortissimo), corresponds to an acoustic sound pressure level of approximately 110 dB, and the lowest, indicated as ppp (piano-pianissimo), to approximately 40 dB.
It should be noted that the dynamic nuances of performance in music are relative and their relationship with the corresponding sound pressure levels is to some extent conditional. The dynamic range of a particular musical program depends on the nature of the composition. Thus, the dynamic range of classical works by Haydn, Mozart, Vivaldi rarely exceeds 30-35 dB. The dynamic range of pop music usually does not exceed 40 dB, while that of dance and jazz music is only about 20 dB. Most works for orchestra of Russian folk instruments also have a small dynamic range (25-30 dB). This is also true for a brass band. However, the maximum sound level of a brass band in a room can reach a fairly high level (up to 110 dB).

Masking effect

The subjective assessment of loudness depends on the conditions in which the sound is perceived by the listener. In real conditions, an acoustic signal does not exist in absolute silence. Extraneous noise acts on the hearing at the same time, complicating sound perception and to a certain extent masking the main signal. The effect of masking a pure sine tone by extraneous noise is measured by the number of decibels by which the threshold of audibility of the masked signal rises above its threshold of perception in silence.
Experiments on the degree of masking of one sound signal by another show that a tone of any frequency is masked much more effectively by lower tones than by higher ones. For example, if two tuning forks (1200 and 440 Hz) emit sounds of the same intensity, we stop hearing the first (higher) tone: it is masked by the second. If the vibration of the second tuning fork is damped, we hear the first again.
If two complex sound signals, each consisting of a certain spectrum of frequencies, act simultaneously, mutual masking occurs. Moreover, if the main energy of both signals lies in the same region of the audio-frequency range, the masking effect is strongest. Thus, when an orchestral piece is transmitted, the soloist's part may become poorly intelligible, even inaudible, because of masking by the accompaniment.
Achieving clarity or, as they say, “transparency” of sound in the sound transmission of orchestras or pop ensembles becomes very difficult if an instrument or individual groups of orchestra instruments play in one or similar registers at the same time.
The sound engineer must take the features of masking into account when recording an orchestra. At rehearsals, with the help of the conductor, a balance is established between the sound strength of the instruments within one group, and between the groups of the whole orchestra. Clarity of the main melodic lines and of individual musical parts is achieved in these cases by placing microphones close to the performers, by the sound engineer deliberately emphasizing the instruments most important at a given point in the piece, and by other special sound engineering techniques.
The phenomenon of masking is opposed by the psychophysiological ability of the hearing organs to single out from the general mass of sounds one or more that carry the most important information. For example, when an orchestra is playing, the conductor notices the slightest inaccuracies in the performance of a part on any instrument.
Masking can significantly affect the quality of signal transmission. Clear perception of the received sound is possible if its intensity noticeably exceeds the level of the interference components lying in the same band as the received sound. With uniform interference the signal excess should be 10-15 dB. This feature of auditory perception finds practical use, for example, in assessing the electroacoustic characteristics of recording media. Thus, if the signal-to-noise ratio of an analog record is 60 dB, the dynamic range of the recorded program can be no more than 45-48 dB.

Temporal characteristics of auditory perception

The auditory system, like any other oscillatory system, has inertia. When a sound stops, the auditory sensation does not disappear immediately but fades gradually to zero. The time during which the loudness level decreases by 8-10 phons is called the hearing time constant. This constant depends on a number of circumstances and on the parameters of the perceived sound. If two short sound pulses, identical in frequency composition and level, arrive at the listener, they are perceived as one as long as the delay between them does not exceed 50 ms. At larger delay intervals the two pulses are perceived separately and an echo arises.
This feature of hearing is taken into account in the design of some signal processing devices, for example electronic delay lines, reverberators, etc.
It should be noted that, owing to a special property of hearing, the sensation of loudness of a short sound pulse depends not only on its level but also on the duration of its action on the ear. Thus, a short sound lasting only 10-12 ms is perceived as quieter than a sound of the same level lasting, say, 150-400 ms. Therefore, when listening to a broadcast, loudness is the result of averaging the energy of the sound wave over a certain interval. In addition, human hearing is inertial with respect to nonlinear distortions: it does not perceive them if the duration of the sound pulse is less than 10-20 ms. That is why the level indicators of consumer sound-recording equipment average the instantaneous signal values over a period chosen in accordance with the temporal characteristics of the hearing organs.

Spatial representation of sound

One of the important human abilities is the capacity to determine the direction to a sound source. This ability is called the binaural effect and is explained by the fact that a person has two ears. Experimental data show that direction is determined by two different mechanisms: one for high-frequency tones and one for low-frequency tones.

The sound travels a shorter distance to the ear facing the source than to the far ear. As a result, the pressures of the sound waves in the two ear canals differ in phase and amplitude. The amplitude differences are significant only at high frequencies, when the sound wavelength becomes comparable to the size of the head. When the amplitude difference exceeds a threshold value of 1 dB, the sound source seems to lie on the side where the amplitude is greater. The angle of deviation of the sound source from the midline (line of symmetry) is approximately proportional to the logarithm of the amplitude ratio.
For determining the direction to a sound source at frequencies below 1500-2000 Hz, phase differences are what matter. The sound seems to come from the side from which the phase-leading wave reaches the ear. The angle of deviation from the midline is proportional to the difference in arrival times of the sound waves at the two ears. A trained person can notice a phase difference corresponding to a time difference of as little as 100 µs.
The ability to determine the direction of a sound in the vertical plane is far less developed (by about a factor of 10). This physiological feature is associated with the orientation of the hearing organs in the horizontal plane.
A specific feature of human spatial perception of sound is that the hearing organs can sense a total, integral localization created by artificial means. For example, two speakers are installed in a room along the front, 2-3 m apart, and the listener sits strictly in the center, on the axis of symmetry of the system, at the same distance from each speaker. Two sounds of equal phase, frequency and intensity are emitted through the speakers. Because the sounds reaching the organ of hearing are identical, a person cannot separate them; the sensation is of a single, apparent (virtual) sound source located exactly in the center, on the axis of symmetry.
If the volume of one speaker is now reduced, the apparent source shifts toward the louder speaker. The illusion of a moving sound source can be obtained not only by changing the signal level but also by artificially delaying one sound relative to the other; in this case the apparent source shifts toward the speaker emitting the leading signal.
To illustrate integral localization, consider an example. The distance between the speakers is 2 m, and the listener is 2 m from the front line. To move the apparent source 40 cm to the left or right, two signals must be applied differing in intensity level by 5 dB or in arrival time by 0.3 ms. With a level difference of 10 dB or a time delay of 0.6 ms, the source "moves" 70 cm from the center.
Thus, if the sound pressures created by the speakers are changed, the illusion of a moving sound source arises. This phenomenon is called summing localization. To create summing localization, a two-channel stereophonic sound-transmission system is used.
Two microphones are installed in the primary room, each working into its own channel; the secondary room has two loudspeakers. The microphones are placed at a certain distance from each other along a line parallel to the placement of the sound emitter. When the sound emitter is moved, different sound pressures act on the microphones and the sound wave arrives at different times because of the unequal distances between the emitter and the microphones. This difference creates a summing-localization effect in the secondary room, as a result of which the apparent source is localized at a certain point in space between the two loudspeakers.
Mention should also be made of the binaural sound-transmission system. In this system, called an "artificial head," two separate microphones are placed in the primary room at a distance from each other equal to the distance between a person's ears. Each microphone has an independent transmission channel whose output in the secondary room feeds a telephone for the left or right ear. If the transmission channels are identical, such a system accurately conveys the binaural effect created near the ears of the "artificial head" in the primary room. The need to wear headphones, and to use them for a long time, is a disadvantage.
The organ of hearing determines the distance to a sound source by a number of indirect cues and with some error. Depending on whether the distance to the source is small or large, its subjective estimate is influenced by different factors. It has been found that for small distances (up to 3 m) the subjective estimate is almost linearly related to the change in loudness of a source moving in depth. For a complex signal, an additional cue is its timbre, which becomes increasingly "heavy" as the source approaches the listener. This is due to the growing amplification of the low overtones compared with the high ones, caused by the resulting rise in loudness level.
For average distances of 3-10 m, moving the source away from the listener will be accompanied by a proportional decrease in volume, and this change will apply equally to the fundamental frequency and harmonic components. As a result, there is a relative strengthening of the high-frequency part of the spectrum and the timbre becomes brighter.
As the distance increases further, energy losses in the air grow in proportion to the square of the frequency. The increased loss of high-register overtones reduces timbral brightness. Thus, the subjective assessment of distance is associated with changes in loudness and timbre.
In a closed room, the first-reflection signals, delayed relative to the direct sound by 20-40 ms, are perceived by the organ of hearing as coming from different directions. At the same time, their increasing delay creates the impression of a significant distance to the points from which the reflections arrive. Thus, the delay time allows one to judge the relative distance of secondary sources or, equivalently, the size of the room.

Some features of the subjective perception of stereophonic broadcasts

A stereophonic sound transmission system has a number of significant features compared to a conventional monophonic one.
The quality that distinguishes stereophonic sound, its spatial volume (a natural acoustic perspective), can be assessed using additional indicators that have no meaning in monophonic transmission. These additional indicators include: the angle of hearing, i.e. the angle within which the listener perceives the stereophonic sound picture; stereo resolution, i.e. the subjectively determined localization of individual elements of the sound image at particular points in space within the angle of hearing; and the acoustic atmosphere, i.e. the effect of giving the listener the feeling of presence in the primary room where the transmitted sound event takes place.

On the role of room acoustics

Rich, natural sound is achieved not only with the help of sound-reproducing equipment. Even with fairly good equipment the sound quality may be poor if the listening room lacks certain properties. It is known that in a closed room a residual-sound phenomenon called reverberation occurs. By acting on the organs of hearing, reverberation (depending on its duration) can improve or worsen sound quality.

A person in a room perceives not only direct sound waves created directly by the sound source, but also waves reflected by the ceiling and walls of the room. Reflected waves are heard for some time after the sound source has stopped.
It is sometimes believed that reflected signals play only a negative role, interfering with perception of the main signal. This idea is incorrect. Part of the energy of the early reflected echo signals, reaching the ears with short delays, reinforces the main signal and enriches its sound. In contrast, later reflected echoes, whose delay time exceeds a certain critical value, form a sound background that makes the main signal harder to perceive.
The listening room should not have a long reverberation time. Living rooms, as a rule, have little reverberation due to their limited size and the presence of sound-absorbing surfaces, upholstered furniture, carpets, curtains, etc.
Obstacles of different nature and properties are characterized by a sound absorption coefficient, which is the ratio of the absorbed energy to the total energy of the incident sound wave.

To increase the sound-absorbing properties of a carpet (and reduce noise in a living room), it is advisable to hang the carpet not flush against the wall but with a gap of 30-50 mm.

1. Sound, types of sound.

2. Physical characteristics of sound.

3. Characteristics of auditory sensation. Sound measurements.

4. Passage of sound across the interface.

5. Sound research methods.

6. Factors determining noise prevention. Noise protection.

7. Basic concepts and formulas. Tables.

8. Tasks.

Acoustics. In a broad sense, it is a branch of physics that studies elastic waves from the lowest frequencies to the highest. In a narrow sense, it is the study of sound.

3.1. Sound, types of sound

Sound in a broad sense is elastic vibrations and waves propagating in gaseous, liquid and solid substances; in a narrow sense, a phenomenon subjectively perceived by the hearing organs of humans and animals.

Normally, the human ear hears sound in the frequency range from 16 Hz to 20 kHz. However, with age, the upper limit of this range decreases:

Sound with a frequency below 16-20 Hz is called infrasound, above 20 kHz ultrasound, and the highest-frequency elastic waves, in the range from 10⁹ to 10¹² Hz, hypersound.

Sounds found in nature are divided into several types.

A tone is a sound that is a periodic process. The main characteristic of a tone is its frequency. A simple tone is created by a body vibrating according to a harmonic law (for example, a tuning fork). A complex tone is created by periodic oscillations that are not harmonic (for example, the sound of a musical instrument or the sound created by the human speech apparatus).

Noise is a sound that has a complex, non-repeating time dependence and is a combination of randomly changing complex tones (the rustling of leaves).

A sonic boom is a short-term sound impact (a clap, an explosion, a blow, thunder).

A complex tone, as a periodic process, can be represented as a sum of simple tones (decomposed into component tones). This decomposition is called spectrum.

Acoustic tone spectrum is the totality of all its frequencies with an indication of their relative intensities or amplitudes.

The lowest frequency in the spectrum (ν) corresponds to the fundamental tone, and the remaining frequencies are called overtones or harmonics. Overtones have frequencies that are multiples of the fundamental frequency: 2ν, 3ν, 4ν, ...

Typically, the largest amplitude in the spectrum belongs to the fundamental tone. It is the fundamental that the ear perceives as the pitch of the sound (see below). The overtones create the "color" of the sound. Sounds of the same pitch created by different instruments are perceived differently precisely because of the different relationships between the overtone amplitudes. Figure 3.1 shows the spectra of the same note (ν = 100 Hz) played on a piano and a clarinet.

Fig. 3.1. Spectra of piano (a) and clarinet (b) notes

The acoustic spectrum of noise is continuous.
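The harmonic makeup of a complex tone can be sketched numerically. Below is a minimal illustration (the function name and the amplitude values are assumptions chosen for the example): the waveform is a sum of a 100 Hz fundamental and overtones at 2ν and 3ν, and it repeats with the period of the fundamental regardless of the overtone amplitudes, which only shape the timbre.

```python
import math

def complex_tone(t, fundamental=100.0, amplitudes=(1.0, 0.5, 0.25)):
    """Sum of the fundamental and its overtones at 2f, 3f, ...
    The amplitude tuple is illustrative; it is what shapes the timbre."""
    return sum(a * math.sin(2 * math.pi * (k + 1) * fundamental * t)
               for k, a in enumerate(amplitudes))

# Changing the amplitudes changes the waveform (the timbre), but the signal
# still repeats with the fundamental period T = 1/100 s = 10 ms, which is
# why the ear assigns the pitch of the fundamental to the whole tone.
T = 1 / 100.0
```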

3.2. Physical characteristics of sound

1. Speed (v). Sound travels in any medium except a vacuum. The speed of propagation depends on the elasticity, density and temperature of the medium, but does not depend on the frequency of the oscillations. The speed of sound in a gas depends on its molar mass M and absolute temperature T:

v = √(γRT/M),

where γ is the adiabatic index of the gas and R is the universal gas constant.

The speed of sound in water is 1500 m/s; the speed of sound in the soft tissues of the body is close to this value.
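The temperature dependence of the speed of sound in a gas can be reproduced from the standard formula v = √(γRT/M). Here is a small sketch (the function name and the rounded molar mass of air, 0.029 kg/mol, are assumptions for the example):

```python
import math

def speed_of_sound_gas(T_kelvin, molar_mass, gamma=1.4):
    """v = sqrt(gamma * R * T / M) for an ideal gas."""
    R = 8.314  # universal gas constant, J/(mol*K)
    return math.sqrt(gamma * R * T_kelvin / molar_mass)

# Air (M ~ 0.029 kg/mol): the speed grows with temperature,
# roughly 331 m/s at 0 °C and 343 m/s at 20 °C.
v_0C = speed_of_sound_gas(273.15, 0.029)
v_20C = speed_of_sound_gas(293.15, 0.029)
```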

2. Sound pressure. The propagation of sound is accompanied by a change in pressure in the medium (Fig. 3.2).

Fig. 3.2. Change in pressure in a medium during sound propagation

It is changes in pressure that cause vibrations of the eardrum, which determine the beginning of such a complex process as the occurrence of auditory sensations.

Sound pressure (ΔΡ) is the amplitude of the pressure changes in the medium that occur during the passage of a sound wave.

3. Sound intensity(I). The propagation of a sound wave is accompanied by a transfer of energy.

Sound intensity is the flux density of energy transferred by a sound wave(see formula 2.5).

In a homogeneous medium, the intensity of sound emitted in a given direction decreases with distance from the sound source. When using waveguides, it is possible to achieve an increase in intensity. A typical example of such a waveguide in living nature is the auricle.

The relationship between the intensity I and the sound pressure ΔΡ is expressed by the formula

I = (ΔΡ)² / (2ρv),

where ρ is the density of the medium and v is the speed of sound in it.

The minimum values ​​of sound pressure and sound intensity at which a person experiences auditory sensations are called threshold of hearing.

For the ear of an average person at a frequency of 1 kHz, the hearing threshold corresponds to the following values ​​of sound pressure (ΔΡ 0) and sound intensity (I 0):

ΔΡ₀ = 3×10⁻⁵ Pa (≈ 2×10⁻⁷ mmHg); I₀ = 10⁻¹² W/m².

The values ​​of sound pressure and sound intensity at which a person experiences severe pain are called pain threshold.

For the ear of an average person at a frequency of 1 kHz, the pain threshold corresponds approximately to the following values of sound pressure (ΔΡ_m) and sound intensity (I_m): ΔΡ_m ≈ 100 Pa; I_m = 10 W/m².

4. Intensity level (L). The ratio of the intensities corresponding to the pain and hearing thresholds is so high (I_m/I₀ = 10¹³) that in practice a logarithmic scale is used, introducing a special dimensionless characteristic, the intensity level.

The intensity level is the decimal logarithm of the ratio of the sound intensity to the hearing threshold:

L = lg(I/I₀).

The unit of intensity level is the bel (B).

Usually a smaller unit of intensity level is used, the decibel (dB): 1 dB = 0.1 B. The intensity level in decibels is calculated using the formulas

L_dB = 10 lg(I/I₀) = 20 lg(ΔΡ/ΔΡ₀).

The logarithmic dependence of the intensity level on the intensity itself means that a tenfold increase in intensity raises the intensity level by 10 dB.
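The tenfold-intensity rule (+10 dB per factor of 10 in intensity) is easy to verify numerically. A minimal sketch (the helper name is an assumption):

```python
import math

I0 = 1e-12  # hearing-threshold intensity at 1 kHz, W/m^2

def intensity_level_dB(I):
    """Intensity level L = 10 * lg(I / I0) in decibels."""
    return 10 * math.log10(I / I0)

# Each tenfold increase in intensity adds 10 dB:
# 1e-11 W/m^2 gives 10 dB, 1e-10 W/m^2 gives 20 dB, and the
# pain-threshold intensity of 10 W/m^2 gives 130 dB.
```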

Characteristics of frequently occurring sounds are given in Table. 3.1.

If a person hears sounds coming from one direction from several incoherent sources, their intensities add up:

I = I₁ + I₂ + I₃ + ...
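Since it is the intensities of incoherent sources that add, not their levels, the combined level must be computed through the intensities. A sketch of the bookkeeping (the function name is an assumption):

```python
import math

def combine_levels_dB(*levels):
    """Total level of incoherent sources: convert each level back to an
    intensity (in units of I0), add the intensities, and convert the sum
    back to decibels."""
    return 10 * math.log10(sum(10 ** (L / 10) for L in levels))

# Two identical 60 dB sources together give about 63 dB, not 120 dB:
# levels are logarithmic, intensities are not.
```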

High sound intensity levels lead to irreversible changes in the organ of hearing. Thus, a sound of 160 dB can rupture the eardrum and displace the auditory ossicles in the middle ear, leading to irreversible deafness. At 140 dB a person feels severe pain, and prolonged exposure to noise of 90-120 dB damages the auditory nerve.

3.3. Characteristics of auditory sensation. Sound measurements

Sound is the object of auditory sensation. It is assessed by a person subjectively. All subjective characteristics of the auditory sensation are related to the objective characteristics of the sound wave.

Pitch, timbre

Perceiving sounds, a person distinguishes them by pitch and timbre.

The pitch of a tone is determined primarily by the frequency of the fundamental (the higher the frequency, the higher the perceived sound). To a lesser extent, pitch depends on the intensity of the sound (a sound of greater intensity is perceived as lower).

Timbre is a characteristic of the auditory sensation that is determined by the harmonic spectrum of the sound. The timbre depends on the number of overtones and their relative intensities.

Weber-Fechner law. Sound volume

The use of a logarithmic scale to assess sound intensity levels is in good agreement with the psychophysical Weber-Fechner law:

If the stimulus increases in a geometric progression (i.e., by the same factor each time), then the sensation it produces increases in an arithmetic progression (i.e., by the same increment).

It is the logarithmic function that has such properties.

Loudness is the name given to the intensity (strength) of the auditory sensation.

The human ear has different sensitivities to sounds of different frequencies. To take this into account, one can choose a reference frequency and compare the perception of other frequencies with it. By agreement, the reference frequency is taken to be 1 kHz (for this reason, the hearing threshold I₀ is defined at this frequency).

For a pure tone with a frequency of 1 kHz, the loudness E is taken equal to the intensity level in decibels:

E = L_dB = 10 lg(I/I₀).

For other frequencies, loudness is determined by comparing the intensity of the auditory sensation with the loudness of sound at the reference frequency.

The loudness of a sound is equal to the intensity level (in dB) of a 1 kHz tone that evokes in the "average" person the same sensation of loudness as the given sound.

The unit of loudness is called the phon.

Below is an example of the dependence of loudness on frequency at an intensity level of 60 dB.

Equal Loudness Curves

The detailed relationship between frequency, loudness and intensity level is depicted graphically using equal-loudness curves (Fig. 3.3). These curves show the dependence of the intensity level L (dB) on the frequency ν of the sound at a given loudness.

The lower curve corresponds to the hearing threshold. It allows one to find the threshold value of the intensity level (E = 0) at a given tone frequency.

Using the equal-loudness curves, one can find the loudness of a sound if its frequency and intensity level are known.

Sound measurements

Equal-loudness curves reflect the sound perception of an average person. To assess the hearing of a specific person, the method of pure-tone threshold audiometry is used.

Audiometry is a method of measuring hearing acuity. Using a special device (an audiometer), the threshold of auditory sensation, or threshold of perception, L_p, is determined at different frequencies. To do this, a sound generator creates a tone of a given frequency, and the intensity level L is raised until the threshold level L_p is reached at which the subject begins to experience an auditory sensation. By varying the frequency of the sound, the experimental dependence L_p(ν) is obtained, which is called an audiogram (Fig. 3.4).

Fig. 3.3. Equal-loudness curves

Fig. 3.4. Audiograms

Impaired function of the sound-receiving apparatus can lead to hearing loss: a persistent decrease in sensitivity to various tones and to whispered speech.

The international classification of degrees of hearing loss, based on the average perception thresholds at speech frequencies, is given in Table 3.2.

To measure the loudness of a complex tone or of noise, special devices called sound level meters are used. The sound received by a microphone is converted into an electrical signal, which is passed through a system of filters. The filter parameters are chosen so that the sensitivity of the sound level meter at various frequencies is close to that of the human ear.

3.4. Passage of sound across the interface

When a sound wave hits an interface between two media, the sound is partially reflected and partially penetrates into the second medium. The intensities of the waves reflected and transmitted through the boundary are determined by the corresponding coefficients.

For normal incidence of a sound wave on the interface, the following formulas hold:

t = 4x/(1 + x)²,   (3.8)
r = ((1 - x)/(1 + x))²,   (3.9)

where x is the ratio of the wave impedances (specific acoustic resistances) of the two media.

From formula (3.9) it is clear that the more the wave impedances of the media differ, the greater the fraction of energy reflected at the interface. In particular, if the value x is close to zero, the reflection coefficient is close to unity. For example, for the air-water interface x = 3×10⁻⁴ and r = 99.88%; the reflection is almost complete.

Table 3.3 shows the velocities and wave impedances of some media at 20 °C.

Note that the values ​​of the reflection and refraction coefficients do not depend on the order in which sound passes through these media. For example, for the transition of sound from air to water, the coefficients are the same as for the transition in the opposite direction.
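Assuming the standard normal-incidence energy coefficients are meant here (r = ((1 - x)/(1 + x))², t = 4x/(1 + x)², with x the ratio of the wave impedances; these reproduce the air-water reflection of about 99.9% quoted above), the independence from the direction of passage can be checked directly:

```python
def reflection_transmission(Z1, Z2):
    """Normal-incidence energy coefficients for an interface between
    media with wave impedances Z1 and Z2 (standard acoustics formulas)."""
    x = Z2 / Z1
    r = ((1 - x) / (1 + x)) ** 2  # reflected fraction of the energy
    t = 4 * x / (1 + x) ** 2      # transmitted fraction
    return r, t

# Air -> water (Z_air ~ 430, Z_water ~ 1.5e6 Pa*s/m): almost total reflection.
r_aw, t_aw = reflection_transmission(430.0, 1.5e6)
# Water -> air: the same coefficients, as stated in the text.
r_wa, t_wa = reflection_transmission(1.5e6, 430.0)
```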

3.5. Sound research methods

Sound can be a source of information about the state of human organs.

1. Auscultation- direct listening to sounds occurring inside the body. By the nature of such sounds, it is possible to determine exactly what processes are occurring in a given area of ​​the body, and in some cases, establish a diagnosis. Instruments used for listening: stethoscope, phonendoscope.

The phonendoscope consists of a hollow capsule with a sound-transmitting membrane that is applied to the body, from which rubber tubes lead to the doctor's ears. Resonance of the air column in the capsule amplifies the sound and thus improves listening. Breath sounds, wheezes, heart sounds and heart murmurs are heard in this way.

Clinics use installations in which listening is carried out via a microphone and loudspeaker. Sounds are also widely recorded on magnetic tape with a tape recorder, which makes it possible to replay them.

2. Phonocardiography- graphic registration of heart sounds and murmurs and their diagnostic interpretation. Recording is carried out using a phonocardiograph, which consists of a microphone, amplifier, frequency filters, and recording device.

3. Percussion - examination of internal organs by tapping on the surface of the body and analyzing the sounds that arise. Tapping is carried out either using special hammers or using fingers.

If sound vibrations are excited in a closed cavity, then at a certain sound frequency the air in the cavity begins to resonate, enhancing the tone that corresponds to the size and position of the cavity. Schematically, the human body can be represented as a set of different volumes: gas-filled (the lungs), liquid (the internal organs) and solid (the bones). When the surface of the body is struck, vibrations arise at different frequencies. Some of them die out; others coincide with the natural frequencies of the cavities, are therefore amplified and, owing to resonance, become audible. The condition and topography of an organ are judged from the tone of the percussion sounds.

3.6. Factors determining noise prevention. Noise protection

To prevent noise, it is necessary to know the main factors that determine its impact on the human body: the proximity of the noise source, the intensity of the noise, the duration of exposure, the limited space in which the noise operates.

Long-term exposure to noise causes a complex set of functional and organic changes in the body (and not only in the organ of hearing).

The impact of prolonged noise on the central nervous system manifests itself in a slowdown of all nervous reactions, a reduction in the time of active attention, and a decrease in performance.

After prolonged exposure to noise, the breathing rhythm and heart rate change, and the tone of the vascular system increases, which raises systolic and diastolic blood pressure. The motor and secretory activity of the gastrointestinal tract changes, and hypersecretion of individual endocrine glands is observed. Sweating increases. Mental functions, especially memory, are suppressed.

Noise has a specific effect on the functions of the organ of hearing. The ear, like all sense organs, can adapt to noise; under the influence of noise, the hearing threshold rises by 10-15 dB. After noise exposure ceases, the normal hearing threshold is restored only after 3-5 minutes. At high noise intensity levels (80-90 dB) the fatiguing effect increases sharply. One form of hearing impairment associated with prolonged exposure to noise is hearing loss (Table 3.2).

Rock music strongly affects both the physical and the psychological state of a person. Modern rock music produces sound in the range from 10 Hz to 80 kHz. It has been established experimentally that if the main rhythm set by the percussion instruments has a frequency of 1.5 Hz with powerful musical accompaniment at frequencies of 15-30 Hz, a person becomes highly excited; with a 2 Hz rhythm and the same accompaniment, a person falls into a state close to narcotic intoxication. At rock concerts the sound intensity can exceed 120 dB, although the human ear is most comfortable at an average intensity of about 55 dB. Sound concussion, sound "burns," and hearing and memory loss may then occur.

Noise also has a harmful effect on the organ of vision. Thus, prolonged exposure to industrial noise on a person in a darkened room leads to a noticeable decrease in the activity of the retina, on which the functioning of the optic nerve, and therefore visual acuity, depends.

Noise protection is quite complex. This is due to the fact that due to the relatively long wavelength, sound bends around obstacles (diffraction) and a sound shadow is not formed (Fig. 3.5).

In addition, many materials used in construction and technology do not have a high enough sound absorption coefficient.

Fig. 3.5. Diffraction of sound waves

These features require special means of combating noise, which include suppression of noise arising at the source itself, the use of mufflers, the use of elastic suspensions, soundproofing materials, elimination of cracks, etc.

In combating noise that penetrates living spaces, proper placement of buildings with regard to the wind rose and the creation of protective zones, including vegetation, are of great importance. Plants are good noise dampers: trees and shrubs can reduce the intensity level by 5-20 dB, and green strips between the sidewalk and the roadway are effective. Lindens and spruces damp noise best. Houses situated behind a tall pine fence can be almost completely free of street noise.

The fight against noise does not imply the creation of absolute silence, since in the long-term absence of auditory sensations a person may experience mental disorders. Absolute silence and prolonged increased noise are equally unnatural for humans.

3.7. Basic concepts and formulas. Tables


Table 3.1. Characteristics of sounds encountered

Table 3.2. International classification of hearing loss

Table 3.3. Speed ​​of sound and specific acoustic resistance for some substances and human tissues at t = 25 °C

3.8. Tasks

1. A sound that has an intensity level of L₁ = 50 dB in the street is heard in a room as a sound with an intensity level of L₂ = 30 dB. Find the ratio of the sound intensities in the street and in the room.
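Task 1 is given here without a worked solution; a quick numerical check using L = 10 lg(I/I₀) (a sketch):

```python
# L1 - L2 = 10*lg(I1/I2), so the intensity ratio is 10**((L1 - L2)/10).
L1, L2 = 50, 30  # dB, street and room
ratio = 10 ** ((L1 - L2) / 10)
# The street sound is 100 times more intense than the sound in the room.
```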

2. The loudness level of a sound with a frequency of 5000 Hz is E = 50 phon. Find the intensity of this sound using the equal-loudness curves.

Solution

From Figure 3.3 we find that at a frequency of 5000 Hz the loudness E = 50 phon corresponds to an intensity level L = 47 dB = 4.7 B. From formula 3.4 we find: I = 10^4.7 · I₀ ≈ 5×10⁻⁸ W/m².

Answer: I ≈ 5×10⁻⁸ W/m².

3. The fan creates sound with an intensity level of L = 60 dB. Find the sound intensity level when two adjacent fans operate.

Solution

L₂ = lg(2·10^L) = lg 2 + L = 0.3 B + 6 B = 6.3 B = 63 dB (see 3.6). Answer: L₂ = 63 dB.

4. The sound intensity level of a jet aircraft at a distance of 30 m from it is 140 dB. What is the intensity level at a distance of 300 m? Neglect reflection from the ground.

Solution

The intensity decreases in proportion to the square of the distance, i.e. by a factor of 10² = 100. Then L₁ - L₂ = 10·lg(I₁/I₂) = 10×2 = 20 dB. Answer: L₂ = 120 dB.

5. The ratio of the intensities of the two sound sources is equal to: I 2 /I 1 = 2. What is the difference in the intensity levels of these sounds?

Solution

ΔL = 10·lg(I₂/I₀) - 10·lg(I₁/I₀) = 10·lg(I₂/I₁) = 10·lg 2 ≈ 3 dB. Answer: ΔL ≈ 3 dB.

6. What is the intensity level of a sound with a frequency of 100 Hz that has the same loudness as a sound with a frequency of 3 kHz and an intensity level of 25 dB?

Solution

Using the equal-loudness curves (Fig. 3.3), we find that 25 dB at a frequency of 3 kHz corresponds to a loudness of 30 phon. At a frequency of 100 Hz this loudness corresponds to an intensity level of 65 dB.

Answer: 65 dB.

7. The amplitude of a sound wave increased threefold. (a) By what factor did the intensity increase? (b) By how many decibels did the intensity level increase?

Solution

The intensity is proportional to the square of the amplitude (see 3.6): I₂/I₁ = 3² = 9, i.e. the intensity increased 9 times. The level therefore increased by ΔL = 10·lg 9 ≈ 9.5 dB.

8. In the laboratory room located in the workshop, the noise intensity level reached 80 dB. In order to reduce noise, it was decided to line the walls of the laboratory with sound-absorbing material, reducing the sound intensity by 1500 times. What level of noise intensity will there be in the laboratory after this?

Solution

The sound intensity level in decibels is L = 10·lg(I/I₀). When the sound intensity is reduced 1500 times, the level changes by ΔL = 10·lg 1500 ≈ 32 dB, so the noise level in the laboratory becomes 80 - 32 = 48 dB.

9. The impedances of the two media differ by a factor of 2: R 2 = 2R 1 . What part of the energy is reflected from the interface and what part of the energy passes into the second medium?

Solution

Using formulas (3.8) and (3.9) we find: r = ((R₂ - R₁)/(R₂ + R₁))² = (1/3)² = 1/9; t = 1 - r = 8/9.

Answer: 1/9 part of the energy is reflected, and 8/9 passes into the second medium.

In everyday life we describe sound by, among other things, its loudness and pitch. From the point of view of physics, however, a sound wave is a periodic vibration of the molecules of the medium propagating through space. Like any wave, sound is characterized by its amplitude, frequency, wavelength, etc. The amplitude shows how strongly the vibrating medium deviates from its "quiet" state; it is the amplitude that determines the loudness of the sound. The frequency tells us how many vibrations occur per second; the higher the frequency, the higher the pitch of the sound we hear.

Typical values of sound loudness and frequency, found for example in technical standards and the specifications of audio devices, are adapted to the human ear; they lie in the range comfortable for humans. Thus, a sound louder than 130 dB (decibels) causes pain, and a person cannot hear a 30 kHz sound wave at all. Besides these "human" limitations, however, there are also purely physical limits on the loudness and frequency of a sound wave.

Task

Estimate the maximum volume and maximum frequency of a sound wave that can propagate in air and water under normal conditions. Describe in general terms what will happen if you try to emit sound above these limits.


Clue

Recall that loudness, measured in decibels, is a logarithmic scale showing how many times the pressure in a sound wave (P) exceeds some fixed threshold pressure P₀. The conversion formula is: loudness in decibels = 20 lg(P/P₀), where lg is the decimal logarithm. In acoustics the threshold pressure in air is taken as P₀ = 20 μPa (in water a different threshold value is accepted: P₀ = 1 μPa). For example, a sound with pressure P = 0.2 Pa exceeds P₀ ten thousand times, which corresponds to a loudness of 20 lg(10000) = 80 dB. Thus, the loudness limit follows from the maximum possible pressure that a sound wave can create.
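The conversion in the hint can be written out in a couple of lines (the function name is an assumption):

```python
import math

def loudness_dB(P, P0=20e-6):
    """Loudness in decibels: 20 * lg(P / P0).
    P0 = 20 uPa is the threshold pressure in air (1 uPa is used in water)."""
    return 20 * math.log10(P / P0)

# The example from the text: P = 0.2 Pa exceeds P0 ten thousand times,
# and 20*lg(10000) = 80 dB.
```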

To solve the problem, you need to try to imagine a sound wave with a very high pressure or a very high frequency and try to understand what physical limitations arise.

Solution

Let us first find the loudness limit. In calm air (without sound), molecules fly about chaotically, but on average the density of the air remains constant. When sound propagates, in addition to this rapid chaotic motion, the molecules also experience a smooth back-and-forth displacement with a certain period. Because of this, alternating regions of compression and rarefaction arise, that is, regions of high and low pressure. It is precisely this deviation of the pressure from its equilibrium value that is the acoustic pressure (the pressure in a sound wave).

In the rarefaction region, the pressure drops to P_atm − P. Clearly, in a gas the pressure must remain non-negative: zero pressure means that at that moment there are no particles in that region at all, and the pressure cannot drop any lower. Therefore the maximum acoustic pressure P that a wave can create while remaining sound is exactly equal to atmospheric pressure: P = P_atm = 100 kPa. This corresponds to a theoretical loudness limit of 20 lg(5·10^9), which gives approximately 194 dB.

The situation changes slightly if we are talking about the propagation of sound not in a gas but in a liquid. There the pressure can become negative: this simply means that the continuous medium is being stretched, and thanks to intermolecular forces it can withstand such tension. In order of magnitude, however, this negative pressure is small, about one atmosphere. Taking into account the different value of P0, this gives a theoretical loudness limit in water of about 220 dB.
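Both estimates can be reproduced in a few lines; this sketch assumes, as above, a maximum acoustic pressure of one atmosphere in both media and thresholds of 20 μPa in air and 1 μPa in water:

```python
import math

P_ATM = 1.0e5  # Pa, atmospheric pressure

def loudness_db(p_pa, p0_pa):
    return 20 * math.log10(p_pa / p0_pa)

# Air: acoustic pressure cannot exceed atmospheric pressure.
air_limit = loudness_db(P_ATM, p0_pa=20e-6)
# Water: negative pressure of order one atmosphere before the liquid tears.
water_limit = loudness_db(P_ATM, p0_pa=1e-6)

print(round(air_limit), round(water_limit))  # 194 220
```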

Now let us find the sound frequency limit. (In fact, this is only one of several possible limits on frequency; we will mention others in the afterword.)

One of the key properties of sound (unlike many other, more complex waves) is that its speed is practically independent of frequency. But the wave speed relates the frequency ν (the temporal periodicity) to the wavelength λ (the spatial periodicity): c = ν·λ. Therefore, the higher the frequency, the shorter the wavelength of the sound.

The frequency of the wave is limited by the discreteness of matter. The wavelength of sound cannot be shorter than the typical discreteness scale of the medium: after all, a sound wave is an alternation of compressions and rarefactions of particles and cannot exist without them. Moreover, the wavelength must span at least two or three such lengths, since it must include both a compression region and a rarefaction region. For air under normal conditions the relevant scale (the mean free path of a molecule) is approximately 100 nm and the speed of sound is 300 m/s, so the maximum frequency is about 2 GHz. In water the discreteness scale is smaller, approximately 0.3 nm (the intermolecular distance), and the speed of sound is 1500 m/s. This gives a frequency limit about a thousand times higher, on the order of several terahertz.
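The same order-of-magnitude arithmetic can be written out explicitly. In this sketch the factor of two discreteness lengths per wavelength is an assumption chosen for illustration, since only the order of magnitude matters:

```python
def max_frequency_hz(c, d, n_spacings=2):
    """Order-of-magnitude frequency limit: the wavelength c/f must
    span at least a couple of discreteness lengths d of the medium."""
    return c / (n_spacings * d)

f_air = max_frequency_hz(c=300.0, d=100e-9)     # mean free path ~100 nm
f_water = max_frequency_hz(c=1500.0, d=0.3e-9)  # molecular spacing ~0.3 nm

print(f"air ~ {f_air/1e9:.1f} GHz, water ~ {f_water/1e12:.1f} THz")
```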

Let us now discuss what happens if we try to emit sound exceeding the limits we have found. A suitable emitter of sound waves is a solid plate immersed in the medium and driven back and forth by a motor. It is technically possible to drive the plate fast enough, and with a large enough amplitude, that at maximum it creates a pressure well above atmospheric. But then, in the rarefaction phase (when the plate moves backwards), a vacuum will simply form behind it. Instead of a very loud sound, such a plate will "cut" the air into thin, dense layers and throw them forward. These layers cannot propagate through the medium: colliding with still air, they will sharply heat it, generate shock waves, and be destroyed themselves.

One can imagine another situation, in which an acoustic emitter oscillates at a frequency exceeding the frequency limit found above. Such an emitter will push the molecules of the medium, but so often that it gives them no chance to form a synchronized oscillation. As a result, the plate simply transfers energy randomly to the molecules striking it, that is, it merely heats the medium.

Afterword

Our treatment was, of course, very simplified and did not take into account the many processes in matter that also limit the propagation of sound. For example, viscosity causes a sound wave to attenuate, and the rate of this attenuation grows rapidly with frequency: the higher the frequency, the faster the gas moves back and forth, and the faster its energy is converted into heat by viscosity. Therefore, in a sufficiently viscous medium, high-frequency ultrasound dies out before it can travel any macroscopic distance.

Another effect also contributes to the attenuation of sound. It follows from thermodynamics that a gas heats up under rapid compression and cools under rapid expansion. The same happens in a sound wave. But if the gas has a high thermal conductivity, then with each oscillation heat flows from the hot zones to the cold ones, smoothing out the thermal contrast and ultimately weakening the amplitude of the sound wave.

It is also worth emphasizing that all the limits found above apply to liquids and gases under normal conditions; they will change if the conditions change significantly. For example, the theoretical maximum loudness obviously depends on the ambient pressure. In the atmospheres of the giant planets, where the pressure is far higher than on Earth, even louder sound is therefore possible; conversely, in a very rarefied atmosphere all sounds are inevitably quiet.
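How the ceiling shifts with ambient pressure can be illustrated numerically. This sketch keeps the Earth-air threshold of 20 μPa for all cases (a convention), and the 100 atm and 0.01 atm figures are hypothetical round numbers, not data for any particular planet:

```python
import math

def max_loudness_db(p_ambient_pa, p0_pa=20e-6):
    """Theoretical loudness ceiling: the acoustic pressure in a gas
    cannot exceed the ambient pressure itself."""
    return 20 * math.log10(p_ambient_pa / p0_pa)

# Earth (1 atm), a dense atmosphere (100 atm), a rarefied one (0.01 atm).
for p in (1.0e5, 1.0e7, 1.0e3):
    print(round(max_loudness_db(p)), "dB")
```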

Finally, let us mention one more interesting property of very high frequency ultrasound propagating in water. It turns out that when the frequency of sound significantly exceeds 10 GHz, its speed in water approximately doubles, becoming comparable to the speed of sound in ice. This means that some fast processes of interaction between water molecules begin to play a significant role at oscillation periods shorter than 100 picoseconds. Relatively speaking, on such time scales water acquires an additional elasticity, which speeds up the propagation of sound waves. The microscopic reasons for this so-called "fast sound", however, were understood only relatively recently.