Transmitting an audio signal wirelessly. Basic sound characteristics. Transmitting sound over long distances. A device for receiving sound transmissions at a distance

Basic sound characteristics. Transmitting sound over long distances.

Main sound characteristics:

1. Sound pitch (tone) – the number of oscillations per second. The ear easily distinguishes low-pitched sounds (such as a bass drum) from high-pitched sounds (such as a whistle). Simple measurements (viewing the oscillation waveform) show that low-pitched sounds correspond to low-frequency oscillations in the sound wave, while a high-pitched sound corresponds to a high oscillation frequency. The frequency of oscillation in a sound wave determines the tone of the sound.

2. Sound volume (amplitude). The loudness of a sound, judged by its effect on the ear, is a subjective assessment. The greater the flow of energy reaching the ear, the greater the perceived volume. A convenient measure is sound intensity – the energy carried by the wave per unit time through a unit area perpendicular to the direction of propagation. The intensity of sound increases with the amplitude of the oscillations and with the area of the vibrating body. Decibels (dB) are also used to measure loudness: the rustle of leaves is estimated at 10 dB, a whisper at 20 dB, street noise at 70 dB, the pain threshold at 120 dB, and the lethal level at 180 dB.

3. Sound timbre – another subjective characteristic. The timbre of a sound is determined by its combination of overtones. The particular set of overtones inherent in a given sound gives it its special coloring – its timbre. The difference between one timbre and another is determined not only by the number of overtones but also by their intensity relative to the fundamental tone. By timbre you can easily distinguish the sounds of various musical instruments and people's voices.

The human ear cannot perceive sound vibrations with a frequency of less than 20 Hz.

The sound range of the ear is 20 Hz – 20 thousand Hz.

Transmitting sound over long distances.

The problem of transmitting sound over a distance was successfully solved by the creation of the telephone and radio. Using a microphone, which imitates the human ear, acoustic vibrations of the air (sound) at a given point are converted into synchronous variations of the amplitude of an electric current (an electrical signal), which is delivered by wires or by electromagnetic waves (radio waves) to the desired location and converted back into acoustic vibrations similar to the original ones.

Scheme of sound transmission over a distance

1. Converter “sound - electrical signal” (microphone)

2. Electrical signal amplifier and electrical communication line (wires or radio waves)

3. Converter “electrical signal – sound” (loudspeaker)

Volumetric acoustic vibrations are perceived by a person at a single point and can therefore be represented as a point signal source. The signal has two parameters, both functions of time: oscillation frequency (tone) and oscillation amplitude (loudness). The amplitude of the acoustic signal must be converted proportionally into the amplitude of an electric current while the oscillation frequency is preserved.

Sound sources are any phenomena that cause local changes in pressure or mechanical stress. Oscillating solid bodies are the most widespread sources of sound. Vibrations of limited volumes of the medium itself can also serve as sound sources (for example, in organ pipes, wind instruments, whistles, etc.). The vocal apparatus of humans and animals is a complex oscillatory system. An extensive class of sound sources are electroacoustic transducers, in which mechanical vibrations are created by converting oscillations of an electric current of the same frequency. In nature, sound is excited when air flows around solid bodies due to the formation and shedding of vortices, for example when wind blows over wires, pipes, or the crests of sea waves. Sound of low and infra-low frequencies arises during explosions and collapses. There is a wide variety of sources of acoustic noise, including the machines and mechanisms used in technology, as well as gas and water jets. Much attention is paid to studying sources of industrial, transport and aerodynamic noise because of their harmful effects on the human body and on technical equipment.

Sound receivers serve to perceive sound energy and convert it into other forms. Sound receivers include, in particular, the hearing organs of humans and animals. In technology, electroacoustic transducers such as the microphone are mainly used for receiving sound.
The propagation of sound waves is characterized primarily by the speed of sound. In a number of cases sound dispersion is observed, i.e. a dependence of the propagation speed on frequency. Dispersion of sound changes the shape of complex acoustic signals containing several harmonic components and, in particular, distorts sound pulses. When sound waves propagate, the interference and diffraction phenomena common to all types of waves occur. When the size of obstacles and inhomogeneities in the medium is large compared with the wavelength, sound propagation obeys the usual laws of wave reflection and refraction and can be treated from the standpoint of geometric acoustics.

As a sound wave propagates in a given direction it gradually attenuates, i.e. its intensity and amplitude decrease. Knowing the laws of attenuation is of practical importance for determining the maximum propagation range of an audio signal.

Communication methods:

· Images

The coding system must be understandable to the recipient.

Sound communications came first.

Sound (carrier – air)

Sound wave – air pressure differences

Encoded information – eardrums

Hearing sensitivity

Decibel – a relative logarithmic unit

Sound properties:

Volume (dB)

Pitch (tone)

0 dB = 2·10⁻⁵ Pa

Hearing threshold - pain threshold

Dynamic range – the ratio of the loudest sound to the quietest

Pain threshold = 120 dB
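As a rough illustration of how this decibel scale works, here is a minimal Python sketch; the 20·log10 sound-pressure-level formula is standard, while the example pressures are assumptions chosen to match the thresholds above:

```python
import math

P0 = 2e-5  # reference sound pressure, Pa (the 0 dB level quoted above)

def spl_db(pressure_pa: float) -> float:
    """Sound pressure level in dB relative to 2*10^-5 Pa."""
    return 20 * math.log10(pressure_pa / P0)

print(spl_db(2e-5))   # 0.0   – hearing threshold
print(spl_db(20.0))   # 120.0 – roughly the pain threshold
```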

Frequency (Hz)

Parameters and spectrum of the sound signal: speech, music. Reverberation.

Sound – a vibration that has its own frequency and amplitude

The sensitivity of our ear to different frequencies is different.

1 Hz – one oscillation per second

From 20 Hz to 20,000 Hz – audio range

Infrasounds – sounds less than 20 Hz

Sounds above 20 thousand Hz and less than 20 Hz are not perceived

Intermediate encoding and decoding system

Any process can be described by a set of harmonic oscillations

Sound signal spectrum – the set of harmonic oscillations of the corresponding frequencies and amplitudes
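A minimal sketch of what the spectrum means in practice, assuming NumPy is available and using an arbitrary 440 Hz tone with two overtones:

```python
import numpy as np

fs = 8000                               # sampling rate, Hz (assumed)
t = np.arange(0, 1.0, 1 / fs)           # one second of signal
# a "complex" sound: a 440 Hz fundamental plus two overtones
x = (1.0  * np.sin(2 * np.pi * 440  * t)
   + 0.5  * np.sin(2 * np.pi * 880  * t)
   + 0.25 * np.sin(2 * np.pi * 1320 * t))

spectrum = np.abs(np.fft.rfft(x)) / len(x)   # amplitude of each harmonic component
freqs = np.fft.rfftfreq(len(x), 1 / fs)      # the frequency of each component
print(freqs[spectrum > 0.1])                 # -> [ 440.  880. 1320.]
```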

Amplitude changes

Frequency is constant

Sound vibration – a change of amplitude over time

Dependence of mutual amplitudes

Amplitude-frequency response – the dependence of amplitude on frequency

Our ear has an amplitude-frequency response

No device is perfect; every device has its own frequency response

Frequency response applies to everything involved in the conversion and transmission of sound

The equalizer regulates the frequency response

340 m/s – speed of sound in air

Reverberation – the blurring of sound

Reverberation time – the time during which the signal level decreases by 60 dB
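A sketch of the 60 dB definition, assuming an idealized exponential energy decay rather than a real room measurement:

```python
import math

def rt60(tau_s: float) -> float:
    """Reverberation time for an idealized exponential energy decay exp(-t/tau):
    the time needed for the level to fall by 60 dB (a factor of 10**6 in energy)."""
    return tau_s * math.log(1_000_000)

print(rt60(0.05))   # ~0.69 s – a fairly "dry" room
```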

Compression – a sound-processing technique in which loud sounds are attenuated and quiet sounds are made relatively louder
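A minimal sketch of the idea (downward compression of samples above a threshold; the threshold and ratio are arbitrary, and real compressors also apply make-up gain so that quiet parts end up relatively louder):

```python
import numpy as np

def compress(samples: np.ndarray, threshold: float = 0.5, ratio: float = 4.0) -> np.ndarray:
    """Static compressor: the part of the signal above the threshold is reduced by `ratio`."""
    out = samples.astype(float).copy()
    loud = np.abs(out) > threshold
    out[loud] = np.sign(out[loud]) * (threshold + (np.abs(out[loud]) - threshold) / ratio)
    return out

print(compress(np.array([0.1, 0.4, 0.9, -0.8])))   # loud samples are pulled toward the threshold
```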

Reverberation – a characteristic of the room in which the sound propagates

Sampling frequency – the number of samples per second

Phonetic coding

Fragments of an information image – coding – phonetic apparatus – human hearing

Waves cannot travel far

You can increase the sound power

Electricity

Wavelength – the distance the wave travels during one period of oscillation

Sound = a function A(t)

Converting the amplitude of the sound vibrations into the amplitude of an electric current = secondary encoding

Phase – the delay, in angular measure, of one oscillation relative to another in time

Amplitude modulation – the information is contained in changes of the amplitude

Frequency modulation – in changes of the frequency

Phase modulation – in changes of the phase
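A minimal sketch of all three kinds of modulation on an arbitrary 1 kHz "information" tone and a 20 kHz carrier; the frequencies and modulation depths are assumptions:

```python
import numpy as np

fs = 200_000                                 # sampling rate, Hz (assumed)
t = np.arange(0, 0.01, 1 / fs)
message = np.sin(2 * np.pi * 1_000 * t)      # information signal, 1 kHz
fc = 20_000                                  # carrier frequency – much higher than the information frequency

am = (1 + 0.5 * message) * np.sin(2 * np.pi * fc * t)   # amplitude modulation
pm = np.sin(2 * np.pi * fc * t + 0.8 * message)         # phase modulation
# frequency modulation: the instantaneous frequency follows the message
fm = np.sin(2 * np.pi * fc * t + 2 * np.pi * 2_000 * np.cumsum(message) / fs)
```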

An electromagnetic oscillation propagates by itself, without a medium

The Earth's circumference is about 40 thousand km and its radius about 6.4 thousand km – radio waves cover such distances practically instantly!

Frequency (linear) or nonlinear distortions occur at every stage of information transmission

Amplitude transfer coefficient

Linear distortions – the signal is transmitted with some loss of information

Can be compensated

Nonlinear distortions – cannot be compensated; they are associated with irreversible amplitude distortion

The work of Oersted and Maxwell established that electromagnetic oscillations carry energy and can propagate through space

In 1895 Popov invented radio

1896 – Marconi obtained a patent abroad and the right to use Tesla's work

Practical use began at the start of the twentieth century

An electric-current oscillation is easy to superimpose on an electromagnetic oscillation

The carrier frequency must be higher than the frequency of the information signal

In the early 1920s

Signal transmission using amplitude modulation of radio waves

Audio range up to 7,000 Hz

AM Longwave Broadcasting

Long waves occupy frequencies of roughly 150–280 kHz, medium waves roughly 0.5–1.6 MHz, and short waves extend from about 2.5 MHz up to 26 MHz

No limits of distribution

Ultrashort waves (frequency modulation), stereo broadcasting (2 channels)

FM – frequency

Phase is not used

Radio carrier frequency

Broadcast range

Carrier frequency

Reliable reception area – the territory within which radio waves propagate with enough energy for high-quality reception of the information

D(km) = 3.57(√H + √h)

H – transmitting antenna height (m)

h – reception height (m)

The reception range depends on the antenna heights, provided the transmitter power is sufficient.
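A minimal sketch of the formula above (heights in metres, result in kilometres; the example heights are assumptions):

```python
import math

def reception_range_km(h_transmit_m: float, h_receive_m: float) -> float:
    """D[km] = 3.57 * (sqrt(H) + sqrt(h)) – the line-of-sight reception range."""
    return 3.57 * (math.sqrt(h_transmit_m) + math.sqrt(h_receive_m))

print(reception_range_km(200, 10))   # ~62 km for a 200 m mast and a 10 m receiving antenna
```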

Radio transmitter – characterized by carrier frequency, power and the height of the transmitting antenna

Licensed

A license is required to broadcast radio waves

Broadcasting network:

Source of sound content

Connecting lines

Transmitters (Lunacharsky Street, near the circus, Asbest)

Radio

Power redundancy

Radio program – a set of audio messages

Radio station – the source of a broadcast radio program

· Traditional: radio editorial office (the creative team), radio house / Radiodom (the set of technical and technological means)

Radiodom

Radio studio – a soundproofed room with suitable acoustic parameters

Sampling (discretization)

The analog signal is divided into equal intervals in time; the sampling rate, measured in hertz, is the number of intervals per second, and the amplitude is measured on each segment.

Quantization bit depth. Sampling frequency – dividing the signal in time into equal segments in accordance with Kotelnikov’s theorem

For undistorted transmission of a continuous signal occupying a certain frequency band, the sampling frequency must be at least twice the upper frequency of the reproduced range

30 Hz to 15 kHz

CD: 44,100 Hz (44.1 kHz)
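A minimal sketch of sampling and quantization at CD parameters (44.1 kHz, 16 bits); the 1 kHz test tone is an assumption:

```python
import numpy as np

fs = 44_100                                   # CD sampling rate, Hz
bits = 16                                     # quantization bit depth
levels = 2 ** bits                            # number of quantization levels

t = np.arange(0, 0.001, 1 / fs)               # 1 ms of signal
analog = 0.8 * np.sin(2 * np.pi * 1_000 * t)  # 1 kHz tone – well below the fs/2 (Nyquist) limit

# quantization: round every sample to the nearest of 2**16 levels on [-1, 1)
quantized = np.round(analog * (levels // 2)) / (levels // 2)
print(quantized[:4])
```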

Digital information compression

Compression – the ultimate goal is to exclude redundant information from the digital stream.

A sound signal is a random process. Its levels are correlated within the correlation time.

Correlation – relationships that link events across time: past, present and future

Long-term – spring, summer, autumn

Short-term

Extrapolation method. From digital to sine wave

Only the difference between the next sample and the previous one is transmitted
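A minimal sketch of this differential (DPCM-style) idea on made-up sample values:

```python
import numpy as np

def encode_diff(samples: np.ndarray) -> np.ndarray:
    """Differential coding: transmit only the difference from the previous sample."""
    diffs = np.empty_like(samples)
    diffs[0] = samples[0]
    diffs[1:] = samples[1:] - samples[:-1]
    return diffs

def decode_diff(diffs: np.ndarray) -> np.ndarray:
    """Restore the original signal by accumulating the differences."""
    return np.cumsum(diffs)

x = np.array([10, 12, 13, 13, 11, 8])
d = encode_diff(x)
print(d)                    # [10  2  1  0 -2 -3] – small values, easier to compress
print(decode_diff(d))       # the original samples are restored
```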

Psychophysical properties of hearing allow the ear to select signals

Specific weight in signal volume

Real\impulsive

The system is noise-resistant: nothing depends on the pulse shape, and a pulse is easy to restore

Frequency response – dependence of amplitude on frequency

Frequency response regulates sound timbre

Equalizer – frequency response corrector

Low, mid, high frequencies

Bass, mids, treble

Equalizer 10, 20, 40, 256 bands
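A crude sketch of a multi-band equalizer as frequency-response correction (FFT-based, with hypothetical band boundaries and gains; real equalizers use filter banks rather than block FFTs):

```python
import numpy as np

def equalize(samples: np.ndarray, fs: int, band_gains: dict) -> np.ndarray:
    """Multiply each frequency band of the signal by its own gain (a crude graphic EQ)."""
    spectrum = np.fft.rfft(samples)
    freqs = np.fft.rfftfreq(len(samples), 1 / fs)
    for (lo, hi), gain in band_gains.items():
        spectrum[(freqs >= lo) & (freqs < hi)] *= gain
    return np.fft.irfft(spectrum, n=len(samples))

# boost the bass, leave the mids, cut the treble (band limits in Hz are assumptions)
gains = {(20, 250): 2.0, (250, 4_000): 1.0, (4_000, 20_000): 0.5}
```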

Spectrum analyzer – used for noise removal and voice recognition

Psychoacoustic devices

Forces - process

Frequency-processing devices – plug-ins: add-on modules that extend the host audio program

Dynamic signal processing

Dynamics processors – devices that regulate the signal dynamics

Volume – signal level

Level regulators

Faders\mixers

Fade in \ Fade out

Noise reduction

Peak clipper

Compressor

Noise suppressor

Color vision

The human eye contains two types of light-sensitive cells (photoreceptors): highly sensitive rods, responsible for night vision, and less sensitive cones, responsible for color vision.

In the human retina there are three types of cones, the maximum sensitivity of which occurs in the red, green and blue parts of the spectrum.

Binocular

The human visual analyzer under normal conditions provides binocular vision, that is, vision with two eyes with a single visual perception.

Frequency ranges of AM (LW, MW, HF) and FM (VHF/FM) radio broadcasting.

Radio is a form of wireless communication in which radio waves, propagating freely in space, are used as the signal carrier.

The transmission occurs as follows: a signal with the required characteristics (frequency and amplitude) is generated on the transmitting side. This signal then modulates a higher-frequency oscillation (the carrier). The resulting modulated signal is radiated into space by the antenna. On the receiving side, the radio wave induces a modulated signal in the antenna, after which it is demodulated (detected) and filtered by a low-pass filter, thereby removing the high-frequency component (the carrier). In this way the useful signal is extracted. The received signal may differ slightly from the transmitted one (distortion due to interference and noise).

In radio and television practice, a simplified classification of radio bands is used:

Ultra-long waves (VLW) – myriameter waves

Long waves (LW) – kilometer waves

Medium waves (MW) – hectometer waves

Short waves (HF) – decameter waves

Ultrashort waves (VHF) – high-frequency waves with a wavelength of less than 10 m.

Depending on the range, radio waves have their own characteristics and propagation laws:

LW (long waves) are strongly absorbed by the ionosphere; ground waves, which propagate around the Earth, are of primary importance. Their intensity decreases relatively quickly with distance from the transmitter.

MW (medium waves) are strongly absorbed by the ionosphere during the day, and the coverage area is determined by the ground wave; in the evening they are well reflected from the ionosphere and the coverage area is determined by the reflected wave.

HF (short waves) propagate exclusively by reflection from the ionosphere, so a so-called zone of silence exists around the transmitter. During the day shorter waves (up to 30 MHz) propagate better, and at night longer ones (around 3 MHz). Short waves can travel long distances with low transmitter power.

VHF waves propagate in a straight line and, as a rule, are not reflected by the ionosphere, but under certain conditions they are able to travel around the globe because of differences in air density between atmospheric layers. They easily bend around obstacles and have high penetrating ability.

Radio waves propagate in a vacuum and in the atmosphere; the earth's surface and water are opaque to them. However, thanks to diffraction and reflection, communication is possible between points on the earth's surface that have no direct line of sight (in particular, points located at a great distance from each other).

New TV broadcasting bands

· MMDS range, 2500–2700 MHz: 24 channels for analog TV broadcasting; used in cable television systems

· LMDS: 27.5–29.5 GHz, 124 analog TV channels. Since the digital revolution it has been taken up by cellular operators

· MWS – MWDS: 40.5–42.4 GHz, a cellular television broadcasting system. Such high frequencies are quickly absorbed

2. Decompose the image into pixels

256 levels

Key frame, then its changes

Analog-to-digital converter

The input is analog, the output is digital. Digital compression formats

Uncompressed video – three color values per pixel, 25 fps, about 256 Mbit/s
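The ~256 Mbit/s figure can be checked with simple arithmetic, assuming a 720×576 standard-definition frame and 3 bytes of color per pixel:

```python
# rough size of an uncompressed SD video stream
width, height, bytes_per_pixel, fps = 720, 576, 3, 25
bits_per_second = width * height * bytes_per_pixel * 8 * fps
print(bits_per_second / 1e6)   # ~249 Mbit/s – the same order as the ~256 Mbit/s quoted above
```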

DVD, AVI – a stream of about 25 Mbit/s

MPEG-2 – additional compression of 3–4 times, used in satellite broadcasting

Digital TV

1. Simplify, reduce the number of points

2. Simplify color selection

3. Apply compression

256 levels – dynamic brightness range

Digital is 4 times larger horizontally and vertically

Disadvantages

· A sharply limited signal coverage area within which reception is possible. But this territory, with equal transmitter power, is larger than that of an analog system.

· Freezing and scattering of the picture into “squares” when the level of the received signal is insufficient.

· Both “disadvantages” are a consequence of the advantages of digital data transmission: the data is either received with 100 % quality (or restored), or it is received badly and cannot be restored.

Digital radio – a technology for wireless transmission of a digital signal by means of electromagnetic waves in the radio range.

Advantages:

· Higher sound quality compared to FM radio broadcasts. Currently not implemented due to low bit rate (typically 96 kbit/s).

· In addition to sound, texts, pictures and other data can be transmitted. (More than RDS)

· Mild radio interference does not change the sound in any way.

· More economical use of frequency space through signal transmission.

· Transmitter power can be reduced by 10 - 100 times.

Disadvantages:

· If the signal strength is insufficient, interference appears in analogue broadcasting; in digital broadcasting, the broadcast disappears completely.

· Audio delay due to the time required to process the digital signal.

· Currently, “field trials” are being carried out in many countries around the world.

· The transition to digital radio is now gradually beginning around the world, but it is proceeding much more slowly than for television because of these shortcomings. So far there are no mass shutdowns of analog radio stations, although their number in the AM band is decreasing in favor of the more efficient FM.

In 2012 the SCRF signed a protocol according to which the 148.5–283.5 kHz band is allocated in the Russian Federation for creating digital broadcasting networks of the DRM standard. Also, in accordance with paragraph 5.2 of the minutes of the SCRF meeting of January 20, 2009 No. 09-01, research work was carried out on “the possibility and conditions of using DRM digital radio broadcasting in the Russian Federation in the 0.1485–0.2835 MHz band (long waves)”.

Thus, for an indefinite period, FM broadcasts will be carried out in analogue format.

In Russia, the first digital terrestrial television multiplex (DVB-T2) carries the federal radio stations Radio Russia, Mayak and Vesti FM.

Internet radio, or web radio – a group of technologies for transmitting streaming audio data over the Internet. The term may also refer to a radio station that uses Internet streaming technology for broadcasting.

The technological basis of the system consists of three elements:

Station – generates an audio stream (from a playlist of audio files, by direct digitization from a sound card, or by copying an existing stream on the network) and sends it to the server. (The station consumes minimal traffic because it creates a single stream.)

Server (stream repeater) – receives the audio stream from the station and forwards copies of it to all clients connected to the server; in essence it is a data replicator. (Server traffic is proportional to the number of listeners + 1.)

Client – receives the audio stream from the server and converts it into the audio signal heard by the listener of the Internet radio station. Cascaded broadcasting systems can be organized by using a stream repeater as a client. (A client, like the station, consumes a minimum of traffic; the client-server traffic of a cascaded system depends on the number of listeners of that client.)
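A minimal sketch of the client role only: it simply pulls bytes from a stream URL and saves them for a player to decode. The address is hypothetical, and metadata parsing and error handling are omitted:

```python
import urllib.request

STREAM_URL = "http://example.com:8000/stream"   # hypothetical station/server address

with urllib.request.urlopen(STREAM_URL) as stream, open("capture.mp3", "wb") as out:
    for _ in range(1000):                 # grab a limited number of chunks
        chunk = stream.read(4096)         # read the audio stream in small blocks
        if not chunk:
            break
        out.write(chunk)
```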

In addition to the audio data stream, text data is usually also transmitted so that the player displays information about the station and the current song.

The station can be an ordinary audio-player program with a special codec plug-in, a specialized program (for example ICes, EzStream, SAM Broadcaster), or a hardware device that converts an analog audio stream into a digital one.

As a client, you can use any media player that supports streaming audio and is capable of decoding the format in which the radio is broadcast.

It should be noted that Internet radio, as a rule, has nothing to do with over-the-air broadcasting, although rare exceptions are possible; they are not common in the CIS.

Internet Protocol Television (Internet television, or online TV) – a system based on two-way digital transmission of a TV signal over a broadband Internet connection.

The Internet television system allows you to implement:

· Manage each user's subscription package

· Broadcast channels in MPEG-2, MPEG-4 format

· Presentation of television programs

· TV program recording function

· Search for past TV shows to watch

· Pause function for TV channel in real time

· Individual package of TV channels for each user

New media – a term that came into use at the end of the 20th century for interactive electronic publications and new forms of communication between content producers and consumers, to distinguish them from traditional media such as newspapers; the term denotes the development of digital, networked technologies and communications. Convergence and multimedia newsrooms have become commonplace in today's journalism.

We are talking primarily about digital technologies, and these trends are associated with the computerization of society, since until the 1980s the media relied on analog carriers.

It should be noted that, according to Riepl's law, more highly developed media do not replace earlier ones; the task of new media is therefore to recruit their own consumers and to find other areas of application – “an online version of a printed publication is unlikely to replace the printed publication itself.”

It is necessary to distinguish between the concepts of “new media” and “digital media”, although both use digital means of encoding information.

In terms of production technology, anyone can become a publisher of “new media”. Vin Crosby, who describes “mass media” as a tool for broadcasting “one to many”, regards new media as “many to many” communication.

The digital era creates a different media environment. Reporters are getting used to working in cyberspace. As has been noted, previously “covering international events was a simple matter”.

Speaking about the relationship between the information society and new media, Yasen Zasursky focuses on three aspects, singling out new media as one of them:

· Media opportunities at the present stage of development of information and communication technologies and the Internet.

· Traditional media in the context of “internetization”

· New media.

Radio studio. Structure.

How to organize a faculty radio?

Content

What do you need to have and be able to do? Broadcasting zones, composition of equipment, number of people

No license required

(Territorial body of Roskomnadzor, registration fee, securing a frequency, broadcasting at least once a year, a certificate issued to a legal entity, the radio program is registered)

Creative team

Chief editor and a legal entity

Fewer than 10 people – an agreement; more than 10 – a charter

The technical basis for the production of radio products is a set of equipment on which radio programs are recorded, processed and subsequently broadcast. The main technical task of radio stations is to ensure clear, uninterrupted and high-quality operation of technological equipment for radio broadcasting and sound recording.

Radio houses and television centers are the organizational form of the program-production chain. Employees of radio and television centers are divided into creative specialists (journalists, sound and video directors, workers of production and coordination departments, etc.) and technical specialists of the studio and control-room complex (studio staff, control-room staff and some support services).

The hardware-studio complex consists of interconnected blocks and services united by technical means, with the help of which audio and television programs are produced and broadcast. It includes a hardware-studio unit (for creating parts of programs), a broadcasting unit (for radio broadcasting) and a hardware-program unit (for TV). In turn, the hardware-studio unit consists of studios and of technical and director's control rooms, reflecting the different technologies used for live broadcasting and for recording.

Radio studios are special rooms for radio broadcasts that meet a number of acoustic-treatment requirements: they must keep noise from external sound sources low and create a uniform sound field throughout the room. With the advent of electronic devices for adjusting phase and time characteristics, small, completely “deadened” studios are increasingly being used.

Depending on their purpose, studios are divided into small (on-air) studios (8–25 sq. m), medium studios (60–120 sq. m) and large studios (200–300 sq. m).

In accordance with the sound engineer’s plans, microphones are installed in the studio and their optimal characteristics (type, polar pattern, output signal level) are selected.

Editing control rooms are intended for preparing parts of future programs, from simple editing of musical and speech phonograms after the initial recording to mixing multi-channel sound down to mono or stereo. Next, in the program-preparation control rooms, parts of the future broadcast are assembled from the originals of individual works; in this way a fund of ready-made phonograms is formed. The complete program is assembled from individual broadcasts and sent to the central control room. The production and coordination departments coordinate the actions of the editorial staff. In large radio houses and television centers, to ensure that old recordings meet modern broadcasting requirements, there are phonogram-restoration rooms where noise and various distortions are corrected.

After the program is completely formed, the electrical signals enter the broadcasting room.

The hardware-studio unit is equipped with a director's console, a monitoring and talkback unit, tape recorders and sound-effects devices. Illuminated signs are installed in front of the studio entrance: “Rehearsal”, “Get ready”, “Microphone on”. The studios are equipped with microphones and an announcer's console with microphone-activation buttons, signal lamps and telephone sets with a light ringing signal. Announcers can contact the control room, the production department, the editorial office and some other services.

The main device of the director's control room is the sound engineer's console, with which technical and creative tasks are solved simultaneously: editing and signal processing.

In the broadcast control room of a radio house, the program is assembled from various broadcasts. Parts of the program that have already undergone sound editing do not require additional technical control, but they do require combining various signals (speech, musical accompaniment, sound cues, etc.). In addition, modern broadcast control rooms are equipped with equipment for automated program release.

The final control of programs is carried out in the central control room, where, at the sound engineer's console, the electrical signals are additionally adjusted and distributed to consumers. Here the signal undergoes frequency processing, amplification to the required level, compression or expansion, and the insertion of program call signs and exact time signals.

Composition of the radio station hardware complex.

The main expressive means of radio broadcasting are music, speech and service signals. To bring all the sound signals together in the correct balance (mixing), the main element of the radio broadcasting hardware complex is used – the mixer (mixing console). The signal formed on the console passes from the console output through a number of special signal-processing devices (compressor, modulator, etc.) and is fed (via a communication line or directly) to the transmitter. The console inputs receive signals from all sources: microphones carrying the speech of presenters and guests on air, sound-reproduction devices, and devices that play back service signals. In a modern radio studio the number of microphones can vary from 1 to 6 or even more, although for most cases 2–3 are enough. Microphones of very different types are used.
Before being fed to the console input, the microphone signal may undergo various processing (compression, frequency correction, and in some special cases reverberation, pitch shifting, etc.) in order to increase speech intelligibility, even out the signal level, and so on.
The sound-reproduction devices at most stations are CD players and tape recorders. The range of tape recorders used depends on the specifics of the station: these can be digital devices (DAT digital cassette recorders; MD minidisc recorders/players) or analog devices (reel-to-reel studio tape recorders and professional cassette decks). Some stations also play vinyl discs; for this, either professional turntables are used or, more often, simply high-quality players, and sometimes special “DJ” turntables similar to those used in discotheques.
Some stations, where the principle of rotating songs is widely used, play music directly from a computer's hard drive, on which the set of songs in this week's rotation is pre-recorded as wave files (usually in WAV format). Devices for reproducing service signals come in a variety of types. As in foreign radio broadcasting, analog cartridge machines (“jingle machines”) are widely used, in which the sound carrier is a special cassette with tape. As a rule, one signal is recorded on each cassette (intro, jingle, beat, backing, etc.); the tape in these cassettes is looped, so immediately after use it is ready to be played again. At many radio stations that use traditional broadcasting arrangements, signals are reproduced from reel-to-reel tape recorders. Digital devices are either devices in which each individual signal is stored on a floppy disk or a special cartridge, or devices in which the signals are played directly from the computer's hard drive.
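A minimal sketch of reading such a pre-recorded wave file with Python's standard `wave` module (the file name is hypothetical):

```python
import wave

with wave.open("rotation_song.wav", "rb") as wav:
    print(wav.getframerate(), "Hz,", wav.getnchannels(), "channel(s),", wav.getnframes(), "frames")
    pcm = wav.readframes(wav.getnframes())   # raw PCM samples, ready to hand to a playback device
```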
The radio broadcasting hardware complex also uses various recording devices: these can be either analog or digital tape recorders. They are used both for recording individual fragments of the broadcast for the station's archive or for later repetition, and for continuous control recording of the entire broadcast (the so-called police tape). In addition, the complex includes monitor loudspeaker (acoustic) systems, both for listening to the program signal (the mix at the console output) and for pre-listening (“cueing”) the signal from various media before it goes on air, as well as headphones into which the program signal is fed. The complex may also include an RDS (Radio Data System) device – a system that allows a listener with a suitable receiver to receive not only the audio signal but also text (the name of the radio station, sometimes the title and performer of the current piece, and other information) shown on a special display.

Classification

By sensitivity

· Highly sensitive

· Medium-sensitive

· Low-sensitive (contact)

By dynamic range

· Speech

· Service communications

By direction

Each microphone has a frequency response

· Omnidirectional

· Unidirectional

· Stationary

Friday

TV studio

· Special light – studio lighting

· Sound-absorbing floor covering

· Scenery

· Means of communication

· Soundproof room for sound engineer

· Director

· Video monitors

· Sound control 1 mono 2 stereo

· Technical staff

Mobile TV station

Mobile reporting station

Video recorder

Sound path

Camcorder

TS time code

Color – the brightness of three points: red, green, blue

Clarity or resolution

Bitrate – the digital stream rate

· Sampling 2200 lines

· Quantization

TVL (TV lines)

Broadcast

Line – a unit of measurement of resolution

A/D converter - digital

VHS up to 300 TVL

Broadcast over 400 TVL

DPI – dots per inch

Glossy print = 600 DPI

Photos, portraits=1200 DPI

TV image=72 DPI

Camera resolution

Lens – megapixels – quality of the electronics unit

720 by 576 – standard definition

Digital video DV

HD (High Definition): 1920×1080 – 25 MB/s

INVENTION PATENT for a device for receiving and transmitting sound signals, granted to the foreign joint-stock company “K. P. Hertz, Optical Establishment”; application declared on 26 August 1925, patent published on 1 December 1928 and valid for 15 years.

The invention relates to a device with which, on the one hand, it is possible to determine the direction from which sound impulses arrive from a distant source and, on the other hand, to send sound impulses into the distance in a definite, isolated direction as a beam of parallel rays. The auditory direction finders and megaphones previously used for this purpose gave unsatisfactory results, because their funnel- or pear-shaped receivers and transmitters let the sound reach its destination only after repeated reflection and deflection, in an interfered and acoustically impure form. Acoustically correct paraboloids of rotation with a microphone or telephone at the focus were also used, in particular for locating an aircraft flying at night and therefore invisible, but even then the result was imperfect: a telephone produces only very weak current pulses, while tilting a microphone to find the direction moves its graphite granules and creates harmful side noise.

The proposed device eliminates these shortcomings. Sound rays arriving in a parallel beam from one direction are collected at the focus of a receiving paraboloid and passed on by a second hollow reflector installed, as far as possible, confocally with the first, so that they reach the observer's ear, or the membrane of a microphone that is rotated only in azimuth, as a beam of parallel or converging rays. To make direction finding easier, the inlet opening of the reflector can be shaped so that a small angular deviation of the incoming rays from the axis in one direction causes only an insignificant loss of sound power, while a deviation in the other direction causes a much greater loss. The receiving reflector is best made as a paraboloid of rotation; the outlet reflector can be either a more elongated paraboloid, producing a strong beam of parallel sound rays, or an ellipsoid of rotation installed confocally with the receiving paraboloid, which gathers the sound at its second focus. If such reflector combinations direct the sound impulses in the opposite sense, devices are obtained for sending out beams of parallel sound rays.

The drawings show: Figs. 1 and 2 – side and plan views of a device with a parabolic outlet reflector; Figs. 3 and 4 – the same with an elliptical outlet reflector; Fig. 5 – a complete sound direction finder with a sound base rotatable about vertical and horizontal axes, combined with an optical sighting device (telescope 12) for finding the sound source or setting the transmission direction; Figs. 6 and 7 – further variants. In Fig. 5 the reflectors are combined in pairs facing opposite directions, forming an azimuthal and a vertical sound base; the vertical base uses triples of confocally mounted reflectors (9, 10, 11) that lead the sound either to the observer's ear or to the vertically mounted membrane of microphone 13, which never changes its inclination and therefore produces no interfering side noise. In the variant of Fig. 6 two (or four) outlet ellipsoids adjoin one receiving paraboloid confocally, with a microphone in the plane of their second foci, so that impulses from extended sources (for example an entire theatre orchestra) can be picked up. In the variant of Fig. 7 the apparatus consists of two mirror-image halves whose outlet reflectors are turned by worm-wheel segments and a handwheel so that the axes of the two receiving reflectors converge; this removes the “dead space” between them, allows all sound waves from a nearby source to be received, and even makes it possible to estimate the distance of the source from the convergence of the axes.

Subject of the patent: 1. A device for receiving and transmitting sound signals consisting of concave sound-reflecting surfaces, characterized in that one surface A, serving as the receiving or transmitting reflector and made as a paraboloid of rotation, is connected to a confocally mounted second hollow surface of rotation B used as the outlet or supply reflector (Figs. 1 and 2). 2. The reflector B attached confocally to the paraboloidal reflector A is made either as an ellipsoid of rotation, to bring the sound rays to a single point, or as a paraboloid of rotation, to produce a parallel beam. 3. The surfaces of the receiving and outlet reflectors are limited by a plane passing through their common focus and through the intersection point of their main meridians, perpendicular to the plane of the two axes (Figs. 3 and 4). 4. A modification in which the supply reflector is limited along the common axis by a conical surface whose apex lies at the focus (Fig. 3). 5. The use of the devices of items 1–4 as combinations of two reflectors for each direction, facing opposite ways and mounted on rotating supports, for more precise determination of the direction of incoming rays (Fig. 5). 6. A modification using an additional ellipsoidal or paraboloidal reflector 11, mounted confocally, for correspondingly deflecting the sound rays (Fig. 5). 7. The use of optical sighting devices 12, mounted parallel to the axis of reflector A or 9, for detecting the sound source or setting the transmission direction (Fig. 5). 8. A modification with one or more additional outlet reflectors B installed confocally with reflector A, for perceiving sound signals from distant sources (Fig. 6). 9. A modification in which two units, each consisting of a receiving and an outlet reflector, are mounted rotatably on a support so that the axes of the receiving reflectors can be inclined toward each other (Fig. 7).

Application

4127, 26.08.1925

Aks. K. P. Hertz Society, Optical Establishment

M. Maurer, E. Hasek


Device for receiving and transmitting sound signals


Whether we like it or not, the time will come when we get rid of wires. There will come a time when no household device in our homes will need wired power – everything is heading that way.

Today we will consider a method of transmitting an audio signal wirelessly. While developing this device I repeatedly ran into signal-reception problems, because the received signal was of poor quality. The latest version of the receiver allows you to receive and reproduce a clear signal without wheezing or interference.

There is almost no circuit – only a couple of components: a solar module from a Chinese mobile-phone charger (bought for $10), a step-down mains transformer rated at 10–15 watts with a transformation ratio of 1:10 or 1:20, two batteries from mobile phones (of practically any capacity), and the laser itself.

Audio receiver:

Audio transmitter:

The device itself is quite simple: there is a receiver and a signal transmitter. An ordinary red laser, purchased in a store for $1, is used as the transmitter.

The initial signal is converted using the transformer and then, powered by the battery, drives the laser diode. In this way the laser beam carries the information of the original signal; the laser plays the role of a modulator-converter. The signal arriving at the receiver is amplified and fed to the input of a ULF (low-frequency audio amplifier).

Using this method it is possible to transmit an audio signal over a distance of up to 10 meters; beyond that the signal weakens, but with a good preliminary ULF and a final power amplifier you can receive the signal over longer distances.

Based on this method, it is possible to assemble low-power wireless headphones or audio output extenders.

We apply the audio signal to the secondary (step-down) winding of the transformer – for example from a music center, or a weaker signal from a PC. A power source and a laser diode are connected in series to the other winding.

What are the wires for? At first glance this is a silly question – for signal transmission, of course. You can't do without wires: they are everywhere, they get under your feet, and it is often annoying. In the age of digital technology there are even more wires in our homes. Many of us love listening to music, and in order not to disturb others we often use headphones. But alas, headphones, like any other device for reproducing an audio signal, have wires. In the case of a headset these wires are very often damaged, making the headset unsuitable for further use. If the headset is from an unknown manufacturer, its service life is several times shorter: cheap headsets often use low-quality audio cables that are quite thin and break easily. Such breaks are invisible, and in some cases there is simply no point in repairing them. In this article we will look at a way to listen to music tracks (and not only music) without using any wires to transmit the sound.

What is the idea? The audio signal is an alternating signal; for clarity, let's call it an alternating current. Alternating current, as we know, changes in magnitude and direction, and therefore it can be transformed. We will wind a transformer with two windings. One of the windings is designed for 4 ohms, since this winding will be connected to the output of an audio amplifier. If you plan to use a PC sound card or the headphone output of a laptop (or another device), it is recommended to assemble a separate amplifier on readily available chips; ICs of the TDA2003 series (or similar) are among the best options in terms of price and quality. These chips can be powered from a computer's USB voltage (5 V). And if you have portable speakers or a ready-made power amplifier, consider yourself lucky. The bottom line is this: the audio signal is fed to the primary winding of the transformer (the transformer provides only galvanic isolation), and on the secondary winding we get the same signal, to which a constant-voltage source is added.

Wireless sound transmission – circuit diagram


In our case, a lithium-ion battery from a mobile phone is used as the direct-current source. In other words, a battery and an LED are connected in series with the transformer winding.


Transformer: a ferrite ring of practically any diameter. I want to point out that the diameter of the ring, the permeability of the ferrite and the number of turns in both windings are not critical. Transformers with an iron core were also tried, and they work very well. Both windings are identical and consist of 60 turns of 0.4–0.8 mm wire. The LED can be an ordinary white or violet one; IR LEDs can also be used. With IR LEDs the device will be less sensitive to external factors (sunlight or light from lamps).


The receiver is a photodiode, which can be replaced by a solar cell or a homemade photodetector. A homemade photodetector can be made from MP-series transistors: take any such transistor (regardless of its conductivity type) and carefully cut off the top of its metal cap. This operation must be carried out extremely carefully so as not to damage the transistor's semiconductor crystal.


A laser diode (from a toy laser pointer) can also be used as the transmitter, which makes it possible to transmit the signal over fairly long distances. The capacitance of the variable capacitor in the receiver is selected experimentally (0.1–4.4 μF); it can be omitted entirely, but this may affect the sound quality.

For the device to work best, the receiving part should be placed in a housing that ambient light cannot penetrate. In my case the photodiode was installed in a plastic sleeve with a reflector to keep stray light away from the photodiode.

Earlier we looked at other signal-transmission options; this one is the simplest of its kind, since it uses practically no components. The article is intended to introduce the principle of operation of this method. Today a similar method of signal transmission is used in a variety of fields (directional microphones and other “spy” technology).