Presentation of Intel Sandy Bridge processors: model range and architectural features. Material and methods

This week Intel presented the world with its long-awaited Sandy Bridge processors, whose architecture had previously been billed as revolutionary. And it is not only the processors that are new: all the accompanying components of the new desktop and mobile platforms have been refreshed as well.

In all, the week brought as many as 29 new processors, 10 chipsets and 4 wireless adapters for laptops and for desktop work and gaming computers.

Mobile innovations include:

    Intel Core i7-2920XM, Core i7-2820QM, Core i7-2720QM, Core i7-2630QM, Core i7-2620M, Core i7-2649M, Core i7-2629M, Core i7-2657M, Core i7-2617M, Core i5-2540M, Core i5-2520M, Core i5-2410M, Core i5-2537M, Core i3-2310M processors;

    Intel QS67, QM67, HM67, HM65, UM67 Express chipsets;

    wireless network controllers Intel Centrino Advanced-N + WiMAX 6150, Centrino Advanced-N 6230, Centrino Advanced-N 6205, Centrino Wireless-N 1030.

In the desktop segment there will be:

    Intel Core i7-2600K, Core i7-2600S, Core i7-2600, Core i5-2500K, Core i5-2500S, Core i5-2500T, Core i5-2500, Core i5-2400, Core i5-2400S, Core i5-2390T, Core i5-2300 processors;

    Intel P67, H67, Q67, Q65, B65 Express chipsets.

It is worth noting right away that the launch of the new platform is not simultaneous for all processor models and chipsets: from the beginning of January only "mainstream"-class solutions are available, while most of the more widespread and less expensive ones will go on sale a little later. Along with the desktop Sandy Bridge processors, a new processor socket for them, LGA 1155, was introduced. The new products therefore do not complement the Intel Core i3/i5/i7 lineup but replace the LGA 1156 processors, most of which are now becoming a rather unpromising purchase, since their production should cease altogether in the near future. Only for enthusiasts does Intel promise to keep producing the older quad-core models based on the Lynnfield core until the end of the year.

However, judging by the roadmap, the long-lived Socket T platform (LGA 775) will remain relevant at least until the middle of the year as the basis for entry-level systems. For the most powerful gaming systems and true enthusiasts, processors based on the Bloomfield core for the LGA 1366 socket will stay relevant until the end of the year. The life cycle of the dual-core processors with "integrated" graphics based on the Clarkdale core thus turned out to be very short, only about a year, but it was they that trod the path for the Sandy Bridge presented today, accustoming the consumer to the idea that not only a memory controller but also a video core can be integrated into the processor. Now the time has come not just to release faster versions of such processors, but to seriously update the architecture to deliver a noticeable increase in their efficiency.

The key features of Sandy Bridge architecture processors are:

    production in compliance with the 32 nm process technology;

    significantly increased energy efficiency;

    optimized Intel Turbo Boost technology and Intel Hyper-Threading support;

    a significant increase in the performance of the integrated graphics core;

    implementation of the new Intel Advanced Vector Extensions (AVX) instruction set to speed up floating-point processing.
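To give a feel for what AVX means in practice, here is a minimal sketch of my own (it is not taken from Intel's materials): it adds two arrays of eight single-precision floats with a single 256-bit instruction. It assumes a compiler and CPU with AVX support (for example, g++ -mavx).

```cpp
// Minimal AVX illustration: one 256-bit instruction adds eight floats at once.
#include <immintrin.h>
#include <cstdio>

int main() {
    alignas(32) float a[8] = {1, 2, 3, 4, 5, 6, 7, 8};
    alignas(32) float b[8] = {8, 7, 6, 5, 4, 3, 2, 1};
    alignas(32) float c[8];

    __m256 va = _mm256_load_ps(a);      // load 8 packed single-precision floats
    __m256 vb = _mm256_load_ps(b);
    __m256 vc = _mm256_add_ps(va, vb);  // all 8 lanes are added in one instruction
    _mm256_store_ps(c, vc);

    for (float x : c) printf("%.1f ", x);   // prints: 9.0 9.0 ... 9.0
    printf("\n");
    return 0;
}
```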

Still, none of the innovations listed above would justify talk of a truly new architecture if they were not all now implemented on a single die, unlike in processors based on the Clarkdale core.

Naturally, for all the processor units to work in harmony, a fast exchange of information between them had to be organized, and the important architectural innovation here is the Ring Interconnect.

The Ring Interconnect links, through the L3 cache memory, now called the LLC (Last Level Cache), the processor cores, the graphics core and the System Agent, which includes the memory controller, the PCI Express bus controller, the DMI controller, the power management module and the other controllers and modules previously referred to collectively as the "uncore".

The Ring Interconnect bus is the next stage in the development of the QPI (QuickPath Interconnect) bus which, after being proven in server processors with the updated eight-core Nehalem-EX architecture, has migrated into the die of processors for desktop and mobile systems. The Ring Interconnect consists of four rings: the Data Ring, the Request Ring, the Snoop Ring and the Acknowledge Ring. The ring bus operates at the core frequency, so its throughput, latency and power consumption depend entirely on the operating frequency of the processor's computing units.

The third level cache (LLC - Last Level Cache) is common to all computing cores, the graphics core, the system agent and other blocks. In this case, the graphics driver determines which data streams to place in the cache memory, but any other unit can access all data in the LLC. A special mechanism controls cache memory allocation to ensure that collisions do not occur. In order to speed up work, each of the processor cores has its own cache memory segment, to which it has direct access. Each such segment includes an independent Ring Interconnect bus access controller, but at the same time there is constant interaction with the system agent, which performs overall cache management.

The System Agent is essentially a "north bridge" built into the processor: it combines the PCI Express and DMI bus controllers, the RAM controller, a video processing unit (media processor and display interfaces), the power manager and other auxiliary units. The system agent communicates with the other processor units over the ring bus. Besides streamlining data flows, the system agent monitors the temperature and load of the various blocks and, through the Power Control Unit, manages supply voltages and frequencies to achieve the best energy efficiency at high performance. It is worth noting here that powering the new processors requires a three-rail voltage regulator (or two rails if the built-in video core stays inactive): separate supplies for the computing cores, the system agent and the integrated graphics.

The PCI Express controller built into the processor complies with specification 2.0 and provides 16 lanes, so the graphics subsystem can be strengthened with a powerful external 3D accelerator. With the higher-end chipsets, and once licensing questions are settled, these 16 lanes can be split across two or three slots in x8+x8 or x8+x4+x4 modes, respectively, for NVIDIA SLI and/or AMD CrossFireX.

For exchanging data with the rest of the system (drives, I/O ports and peripherals, whose controllers sit in the chipset), the DMI 2.0 bus is used, which can carry up to 2 GB/s of payload data in each direction.
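As a cross-check of these figures (my own arithmetic, not from the article): a PCI Express 2.0 lane runs at 5 GT/s with 8b/10b encoding, i.e. roughly 500 MB/s of payload per direction, and DMI 2.0 is essentially a four-lane link of the same kind:

$$
16 \times 500\ \text{MB/s} \approx 8\ \text{GB/s per direction (x16 graphics link)}, \qquad 4 \times 500\ \text{MB/s} \approx 2\ \text{GB/s per direction (DMI 2.0)}.
$$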

An important part of the system agent is the dual-channel DDR3 memory controller built into the processor, which nominally supports modules at frequencies of 1066-1333 MHz, but when used in motherboards based on the Intel P67 Express chipset, it can easily ensure operation of modules at frequencies up to 1600 and even 2133 MHz. Placing the memory controller on the same chip with the processor cores (the Clarkdale core consisted of two chips) should reduce memory latency and, accordingly, increase system performance.
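For reference, the peak bandwidth of such a controller is easy to estimate (again my own arithmetic): each 64-bit DDR3 channel transfers 8 bytes per data transfer, so with DDR3-1333 in dual-channel mode

$$
2 \times 1333 \times 10^{6}\ \text{transfers/s} \times 8\ \text{B} \approx 21.3\ \text{GB/s},
$$

and the same two channels with DDR3-1600 modules give roughly 25.6 GB/s.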

Thanks in part to the advanced monitoring of all the processing cores, the cache memory and the auxiliary units implemented in the Power Control Unit, Sandy Bridge processors now feature improved Intel Turbo Boost 2.0 technology. Depending on the load and the tasks being performed, the processor cores can now, if necessary, be accelerated even beyond the thermal design power, much as with ordinary manual overclocking. The system agent monitors the temperature of the processor and its components, and when "overheating" is detected the frequencies of the units are gradually lowered. Desktop processors, however, have a hard limit on how long they may stay in this boosted mode, since on a desktop it is easy to fit cooling far more efficient than the "boxed" cooler, so temperature alone cannot act as the limiter. Such an "overboost" raises performance at moments that are critical for the system, which should give the user the impression of working with a more powerful machine and reduce the time spent waiting for the system to respond. Intel Turbo Boost 2.0 also gives the built-in video core dynamic performance scaling in desktop computers.

The Sandy Bridge processor architecture implies not only changes in the intercomponent communication structure and improvements in the capabilities and energy efficiency of these components, but also internal changes in each computing core. If we ignore the “cosmetic” improvements, the most important are the following:

    a return to a dedicated cache for roughly 1.5 thousand decoded micro-operations (a concept used in the Pentium 4), implemented as a separate part of L1; it both ensures a more uniform loading of the pipelines and reduces power consumption, because the complex decoder circuitry can pause more often;

    an improved branch prediction unit, thanks to larger buffers for branch target addresses, instruction history and branch history, which raised pipeline efficiency;

    a larger reorder buffer (ROB - ReOrder Buffer) and higher efficiency of this part of the processor thanks to the introduction of a physical register file (PRF - Physical Register File, also a characteristic feature of the Pentium 4) for data storage, as well as the expansion of other buffers;

    a doubling of the width of the registers used for streaming floating-point data, which in some cases can double the speed of operations on them;

    higher throughput of the encryption instructions used by the AES, RSA and SHA algorithms (see the sketch after this list);

    introduction of the new Advanced Vector Extensions (AVX) vector instructions;

    optimization of the first-level (L1) and second-level (L2) caches.
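As an illustration of the encryption instructions mentioned above, here is a hedged sketch of my own (not from the article): it performs a single AES round with the AES-NI intrinsic _mm_aesenc_si128, the kind of instruction whose throughput Sandy Bridge improves. It assumes a compiler and CPU with AES-NI support (for example, g++ -maes).

```cpp
// One AES encryption round via AES-NI; key and data are dummy values.
#include <wmmintrin.h>
#include <cstdint>
#include <cstdio>

int main() {
    alignas(16) uint8_t block[16]     = {0};  // 128-bit plaintext block
    alignas(16) uint8_t round_key[16] = {0};  // 128-bit round key

    __m128i state = _mm_load_si128(reinterpret_cast<const __m128i*>(block));
    __m128i key   = _mm_load_si128(reinterpret_cast<const __m128i*>(round_key));

    // ShiftRows, SubBytes, MixColumns and AddRoundKey in a single instruction.
    state = _mm_aesenc_si128(state, key);

    alignas(16) uint8_t out[16];
    _mm_store_si128(reinterpret_cast<__m128i*>(out), state);
    for (uint8_t v : out) printf("%02x", v);
    printf("\n");
    return 0;
}
```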

An important feature of the graphics core of Sandy Bridge processors is that it is now located on the same chip with the rest of the blocks, and its characteristics are controlled and its status is monitored at the hardware level by a system agent. At the same time, the block for processing media data and generating signals for video outputs is placed in this very system agent. This integration allows for greater collaboration, lower latency, greater efficiency, and more.

However, there are not as many changes in the graphics core architecture itself as we would like. Instead of the expected DirectX 11 support, only DirectX 10.1 support was added. Accordingly, the not particularly numerous applications that use OpenGL are limited to hardware compatibility with version 3 of that open API. And although there is talk of improved execution units, there are still only 12 of them, and even then only in the higher-end processors. Nevertheless, raising the clock frequency to 1350 MHz promises a noticeable performance increase in any case.

On the other hand, creating an integrated video core with really high performance and functionality for modern games with low power consumption is very difficult. Therefore, the lack of support for new APIs will only affect compatibility with new games, and performance, if you really want to play comfortably, will need to be increased using a discrete 3D accelerator. But the expansion of functionality when working with multimedia data, primarily when encoding and decoding video within the framework of Intel Clear Video Technology HD, can be considered one of the advantages of Intel HD Graphics II (Intel HD Graphics 2000/3000).

The updated media processor allows you to offload processor cores when encoding video in MPEG2 and H.264 formats, and also expands the set of post-processing functions with hardware implementation of algorithms for automatically adjusting image contrast (ACE - Adaptive Contrast Enhancement), color correction (TCC - Total Color Control) and improving skin appearance (STE – Skin Tone Enhancement). The implementation of support for the HDMI interface version 1.4, compatible with Blu-ray 3D (Intel InTru 3D), increases the prospects for using the built-in video card.

All of the above architectural features provide the new generation of processors with a noticeable performance superiority over the previous generation models, both in computing tasks and when working with video.

As a result, the Intel LGA 1155 platform becomes more capable and more functional, replacing LGA 1156.

To summarize, the Sandy Bridge family of processors is designed to handle a very wide range of tasks with high energy efficiency, which should make them truly widespread in new high-performance systems, especially once the more affordable models appear in volume.

In the near future, 8 processors for desktop systems of different levels will gradually become available to customers: Intel Core i7-2600K, Intel Core i7-2600, Intel Core i5-2500K, Intel Core i5-2500, Intel Core i5-2400, Intel Core i5-2300, Intel Core i3-2120 and Intel Core i3-2100. Models with the K index are distinguished by an unlocked multiplier and a faster built-in Intel HD Graphics 3000 video adapter.

Energy-efficient (index S) and highly energy-efficient (index T) models have also been released for energy-critical systems.

To support the new processors, motherboards based on the Intel P67 Express and Intel H67 Express chipsets are available today, and they are expected to be joined shortly by the Intel Q67 Express and Intel B65 Express, aimed at corporate users and small businesses. All of these chipsets have finally begun to support drives with the SATA 3.0 (6 Gb/s) interface, although not on all ports. They still do not support the seemingly even more popular USB 3.0 bus. An interesting feature of the new chipsets for ordinary motherboards is that they have dropped support for the PCI bus. In addition, the clock generator is now built into the chipset, and its frequency can be adjusted without affecting system stability only within a very narrow range: ±10 MHz if you are lucky, and in practice even less.

It should also be noted that the different chipsets are optimized for use with different processors in systems intended for different purposes. The Intel P67 Express differs from the Intel H67 Express not only by the lack of support for the integrated graphics, but also by expanded capabilities for overclocking and performance tuning. The Intel H67 Express, in turn, simply ignores the unlocked multiplier of K-series models.

Because of these architectural features, overclocking Sandy Bridge processors is essentially possible only via the multiplier, and only on K-series models, although all models allow a certain amount of optimization and "overboost".

Thus, even models with a locked multiplier are capable of a noticeable temporary speed-up, creating the illusion of working on a much more powerful processor. For desktop systems, as mentioned above, the duration of such acceleration is limited in hardware, and not only by temperature as in mobile PCs.

After presenting all the architectural features and innovations, as well as updated proprietary technologies, all that remains is to once again summarize why Sandy Bridge is so innovative and remind us of its positioning.

In the near future it will be possible to buy Intel Core i7 and Intel Core i5 processors for high-performance and mainstream systems; they differ in Intel Hyper-Threading support (it is disabled in the quad-core Intel Core i5 models) and in the amount of third-level cache memory. For more frugal buyers there are the new Intel Core i3 models, which have half as many computing cores (albeit with Hyper-Threading support), only 3 MB of LLC cache, no Intel Turbo Boost 2.0, and all come with Intel HD Graphics 2000.

In the middle of the year, Intel Pentium processors for mass-market systems will be introduced (the brand is clearly hard to abandon, even though its demise was predicted a year ago), based on a heavily simplified Sandy Bridge architecture. In capabilities these "workhorse" processors will resemble yesterday's Core i3-3xx on the Clarkdale core, because they will lose almost all the features found in the higher-end LGA 1155 models.

It remains to note that the release of the Sandy Bridge processors and the entire LGA 1155 desktop platform is the next "tock" in Intel's "tick-tock" model, i.e. a major architecture update on the already established 32 nm process technology. In about a year we can expect Ivy Bridge processors with an optimized architecture on a 22 nm process, which will surely again offer "revolutionary energy efficiency" but, we hope, will not do away with the LGA 1155 socket. We shall wait and see. In the meantime, we have at least a year to study the Sandy Bridge architecture and test it comprehensively, which we intend to begin in the coming days.

Several years ago, back when the Pentium brand still reigned and the Intel Core brand and the microarchitecture of the same name were only just appearing (Architecture 101), the next generation of Intel microarchitecture, with the working title Gesher ("bridge" in Hebrew), was first mentioned on slides about future processors; a little later it was renamed Sandy Bridge.

In that ancient time of the dominance of NetBurst processors, when the contours of the upcoming Nehalem cores were just beginning to emerge, and we were getting acquainted with the internal structure features of the first representatives of the Core microarchitecture - Conroe for desktop systems, Merom for mobile systems and Woodcrest for server systems...

In short, back when the grass was greener and Sandy Bridge was still a long way off, Intel representatives were already saying that this would be a completely new processor microarchitecture. That is roughly how we imagine the mysterious Haswell microarchitecture today: it will appear after the Ivy Bridge generation, which in turn will replace Sandy Bridge next year.

However, the closer a new microarchitecture gets to release, the more we learn about its features, the more noticeable the similarities between neighbouring generations become, and the more obvious the evolutionary path of the changes in processor design. Indeed, while there is a whole gulf of differences between the early incarnations of the first Core architecture (Merom/Conroe) and the firstborn of the second Core generation (Sandy Bridge), the latest version of the current Core generation, the Westmere core, and the first representative of the Core II generation reviewed today, the Sandy Bridge core, may seem quite similar.

Yet the differences are significant. So significant that now we can finally talk about the end of the 15-year era of the P6 (Pentium Pro) microarchitecture and the emergence of a new generation of Intel microarchitecture.

Sandy Bridge microarchitecture: a bird's eye view

The Sandy Bridge chip is a quad-core 64-bit processor with out-of-order execution, support for two threads per core (Hyper-Threading) and the ability to issue four instructions per clock; it has an integrated graphics core and an integrated DDR3 memory controller, a new ring bus, and support for 3- and 4-operand (128/256-bit) AVX (Advanced Vector Extensions) vector instructions; it is manufactured on Intel's modern 32 nm process.

That, in short, in a single sentence, is how one can try to characterize the new generation of Intel Core II processors for mobile and desktop systems, mass deliveries of which will begin in the very near future.
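The feature set in that one-sentence summary - Hyper-Threading, AES-NI, AVX - can be checked programmatically. Below is a small sketch of my own (GCC/Clang-specific, using <cpuid.h>; the bit positions follow Intel's documented CPUID leaf 1 layout), not something from Intel's launch materials.

```cpp
// Query CPUID leaf 1 and report a few of the feature flags discussed above.
#include <cpuid.h>
#include <cstdio>

int main() {
    unsigned int eax, ebx, ecx, edx;
    if (!__get_cpuid(1, &eax, &ebx, &ecx, &edx)) {
        printf("CPUID leaf 1 not supported\n");
        return 1;
    }
    bool htt = edx & (1u << 28);  // Hyper-Threading (HTT)
    bool aes = ecx & (1u << 25);  // AES-NI
    bool avx = ecx & (1u << 28);  // AVX
    printf("HTT: %d  AES-NI: %d  AVX: %d\n", htt, aes, avx);
    return 0;
}
```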

Intel Core II processors based on the Sandy Bridge microarchitecture will be supplied in a new 1155-pin LGA1155 design for new motherboards based on Intel 6 Series chipsets.

Roughly the same microarchitecture will also serve Intel's server solutions, Sandy Bridge-EP, with the natural differences of more processor cores (up to eight), a corresponding LGA 2011 processor socket, a larger L3 cache, more DDR3 memory channels and PCI Express 3.0 support.

The previous generation, the Westmere microarchitecture as embodied in Arrandale and Clarkdale for mobile and desktop systems, is a two-die design: a 32 nm processor die and an additional 45 nm "coprocessor" carrying the graphics core and memory controller, placed on a single package and exchanging data over the QPI bus. In effect, at that stage Intel's engineers, relying mostly on earlier designs, created a kind of integrated hybrid chip.

When creating the Sandy Bridge architecture, the developers completed the integration process begun with Arrandale/Clarkdale and placed all the elements on a single 32 nm die, abandoning the classic QPI bus in favour of a new ring bus. In essence the Sandy Bridge microarchitecture stays within Intel's previous ideology, which relies on raising overall processor performance by improving the "individual" efficiency of each core.

The structure of the Sandy Bridge chip can be divided into the following main elements: processor cores, graphics core, L3 cache memory and the so-called “System Agent”.

In general, the structure of the Sandy Bridge microarchitecture is clear. Our task today is to find out the purpose and implementation features of each of the elements of this structure.

Ring Interconnect

The entire history of the modernization of Intel's processor microarchitectures in recent years is inextricably linked with the step-by-step integration into a single chip of more and more modules and functions that previously sat outside the processor: in the chipset, on the motherboard and so on. Accordingly, as processor performance and the degree of integration grew, the bandwidth requirements of the internal interconnects grew at an accelerating pace. For a while, even after the graphics die was introduced into the Arrandale/Clarkdale design, it was still possible to make do with interconnects of the usual crossbar topology - that was enough.

However, the efficiency of such a topology is high only when a small number of components take part in the data exchange. In the Sandy Bridge microarchitecture, to improve overall performance, the developers turned to a ring topology for the 256-bit interconnect, based on a new version of QPI (QuickPath Interconnect) technology, expanded, refined and first implemented in the Nehalem-EX server chip (Xeon 7500), and also planned for use with the Larrabee chip architecture.

The ring bus in the desktop and mobile (Core II) version of the Sandy Bridge architecture serves to exchange data between six key components of the chip: the four x86 processor cores, the graphics core, the L3 cache and the system agent. The bus consists of four 32-byte rings: the Data Ring, the Request Ring, the Snoop Ring and the Acknowledge Ring; in practice this allows a 64-byte last-level-cache line to be transferred as two packets. The bus is managed by a distributed arbitration protocol, and requests are pipelined at the clock frequency of the processor cores, which gives the architecture extra flexibility when overclocking. The throughput of the ring bus is rated at 96 GB per second per link at 3 GHz, effectively four times that of previous-generation Intel processors.

The ring topology and bus organization ensures minimal latency when processing requests, maximum performance and excellent scalability of the technology for chip versions with different numbers of cores and other components. According to company representatives, in the future, up to 20 processor cores per chip can be “connected” to the ring bus, and such a redesign, as you understand, can be carried out very quickly, in the form of a flexible and responsive response to current market needs. In addition, the ring bus is physically located directly above the L3 cache blocks in the top metalization layer, which simplifies the design layout and allows for a more compact chip.

L3 - last level cache, LLC

As you may have already noticed, on Intel slides the L3 cache is referred to as “last level cache,” that is, LLC - Last Level Cache. In the Sandy Bridge microarchitecture, the L3 cache is distributed not only among the four processor cores, but, thanks to the ring bus, also between the graphics core and the system agent, which, among other things, includes a hardware graphics acceleration module and a video output unit. At the same time, a special tracing mechanism prevents the occurrence of access conflicts between processor cores and graphics.

Each of the four processor cores has direct access to its “own” L3 cache segment, with each L3 cache segment providing half the width of its bus for ring data bus access, while physical addressing of all four cache segments is provided by a single hash function. Each L3 cache segment has its own independent ring bus access controller; it is responsible for processing requests for the placement of physical addresses. In addition, the cache controller constantly communicates with the system agent to monitor failed L3 accesses, monitor intercomponent communication, and uncacheable accesses.

Additional details about the structure and operating features of the L3 cache memory of Sandy Bridge processors will appear further in the text, in the process of getting to know the microarchitecture, as the need arises.

System Agent: DDR3 memory controller, PCU and others

Previously, instead of the term System Agent, Intel's terminology used the notion of the "Uncore" - everything not included in the Core - namely the L3 cache, graphics, memory controller and other controllers such as PCI Express. Out of habit, we often called most of these elements the north bridge transferred from the chipset into the processor.

The Sandy Bridge system agent includes the DDR3 memory controller, the Power Control Unit (PCU), the PCI Express 2.0 and DMI controllers, the video output unit and so on. Like all the other elements of the architecture, the system agent is connected to the rest of the system via the high-performance ring bus.

The standard version of the Sandy Bridge system agent provides 16 PCI-E 2.0 lanes, which can also be split into two 8-lane PCI-E 2.0 links, or one 8-lane link plus two 4-lane links. The dual-channel DDR3 memory controller has now "returned" to the processor die (in Clarkdale chips it sat on a separate die) and will most likely deliver noticeably lower latency.

The fact that the memory controller in Sandy Bridge has become dual-channel is unlikely to please those who have already spent considerable sums on three-channel DDR3 overclocking kits. Well, so it goes: from now on kits of one, two or four modules will be the relevant ones.

We have some thoughts about returning to a dual-channel memory controller design. Perhaps Intel has started preparing microarchitectures to work with DDR4 memory? Which, due to the shift from the “star” topology to the “point-to-point” topology, in versions for desktop and mobile systems will, by definition, only be two-channel (special multiplexer modules will be used for servers). However, these are just guesses; there is not enough information about the DDR4 standard itself to make confident assumptions.

The power management controller located in the system agent is responsible for timely and dynamic scaling of supply voltages and clock frequencies of processor cores, graphics cores, caches, memory controllers and interfaces. What is especially important to emphasize is that power and clock speed are controlled independently for the processor cores and the graphics core.

A completely new version of Turbo Boost technology is implemented not least thanks to this power management controller. The fact is that, depending on the current state of the system and the complexity of the problem being solved, the Sandy Bridge microarchitecture allows Turbo Boost technology to “overclock” the processor cores and integrated graphics to a level significantly exceeding the TDP for quite a long time. And indeed, why not take advantage of this opportunity regularly, while the cooling system is still cold and can provide greater heat removal than an already warm one?

Beyond the fact that Turbo Boost can now regularly "overclock" all four cores beyond the TDP limits, it is worth noting that in Arrandale/Clarkdale chips the performance and thermal management of the graphics core - which was, in fact, merely built in rather than fully integrated into the processor - was handled by the driver. In the Sandy Bridge architecture this task is also assigned to the PCU. Such tight integration of voltage and frequency control made it possible to implement much more aggressive Turbo Boost scenarios, in which both the graphics and all four processor cores can, when necessary and under certain conditions, simultaneously run at increased clock frequencies well above TDP, yet without side effects.

The operating principle of the new version of Turbo Boost implemented in Sandy Bridge processors was well illustrated in the multimedia presentation shown in September at the Intel Developer Forum in San Francisco; the video of that part of the presentation explains Turbo Boost faster and better than any retelling.

We have yet to find out how effectively this technology will work in serial processors, but what Intel specialists showed during a closed demonstration of Sandy Bridge capabilities at IDF in San Francisco is simply amazing: both an increase in clock frequency and, accordingly, processor performance and graphics can instantly reach fantastic levels.

There is information that, for standard cooling systems, this mode of Turbo Boost "overclocking" above TDP will be limited in the BIOS to a period of 25 seconds. But what if motherboard makers can guarantee better heat removal with some exotic cooling system? That is where freedom opens up for overclockers...

Each of the four Sandy Bridge cores can be independently switched into a minimal power state if necessary, and the graphics core can also be put into a very economical mode. The ring bus and the L3 cache cannot be disabled, since they are shared with other resources, but a special low-power standby mode is provided for the ring bus when it is idle, and the L3 cache uses the traditional technique, familiar from previous microarchitectures, of switching off unused transistors. As a result, Sandy Bridge processors provide long battery life in mobile PCs.

The video output and hardware multimedia decoding modules are also part of the system agent. Unlike its predecessors, where hardware decoding was assigned to the graphics core (we will talk about its capabilities next time), the new architecture uses a separate, much faster and more economical module for decoding multimedia streams; only during encoding (compression) of multimedia data are the shader units of the graphics core and the L3 cache brought in.

In accordance with modern trends, 3D content playback tools are provided: the Sandy Bridge hardware decoding module can easily process two independent MPEG2, VC1 or AVC streams in Full HD resolution.

Today we got acquainted with the structure of the new generation of Intel Core II microarchitecture, with the working title Sandy Bridge, and examined the structure and operating principles of a number of its key elements: the ring bus, the L3 cache memory and the system agent, which includes the DDR3 memory controller, the power control unit and other components.

However, this is only a small part of the new technologies and ideas implemented in the Sandy Bridge microarchitecture; no less impressive and large-scale changes have affected the architecture of the processor cores and the integrated graphics system. So this is not the end of our story about Sandy Bridge - to be continued.

The capabilities of the Sandy Bridge GPU are broadly comparable to those of Intel's previous generation of integrated graphics, except that DirectX 10.1 support has now been added to the DirectX 10 capabilities, instead of the expected DirectX 11 support. Accordingly, the not particularly numerous applications that use OpenGL are limited to hardware compatibility with version 3 of that open API.

Nevertheless, there are quite a lot of innovations in Sandy Bridge graphics, and they are mainly aimed at increasing performance when working with 3D graphics.

The main emphasis in developing the new graphics core, according to Intel representatives, was on making maximum use of fixed-function hardware for computing 3D functions, and likewise for media processing. This approach differs radically from the fully programmable hardware model adopted, for example, by NVIDIA, or by Intel itself in developing Larrabee (with the exception of the texture units).

In the Sandy Bridge implementation, however, this departure from programmable flexibility has undeniable advantages: it yields the benefits that matter more for integrated graphics - lower latency of operations, better performance at lower power, a simpler driver programming model and, importantly, a smaller physical footprint for the graphics module.

Programmable execution shader modules for Sandy Bridge graphics, traditionally called “execution units” (EU, Execution Units) at Intel, are characterized by increased register file sizes, which allows for efficient execution of complex shaders. Also, branching optimization is used in the new execution units to achieve better parallelization of executed commands.

In general, according to Intel representatives, the new execution units have twice the throughput of the previous generation of integrated graphics, and the performance of calculations involving transcendental functions (trigonometry, natural logarithms and so on), thanks to the emphasis on hardware computation, will rise by a factor of 4 to 20.

The internal instruction set, extended in Sandy Bridge with a number of new instructions, allows most DirectX 10 API instructions to be mapped one-to-one, much as in a CISC architecture, which yields significantly higher performance at the same clock speed.

Fast access via a fast ring bus to a distributed L3 cache with dynamically configurable segmentation reduces latency, improves performance, and at the same time reduces the frequency of GPU access to RAM.

Splinting for periodontal diseases

Splinting is one of the methods of treating periodontal diseases; it reduces the likelihood of tooth loss (extraction).

The main indication for splinting in orthopedic practice is pathological tooth mobility. Splinting is also desirable to prevent recurrent inflammation of the periodontal tissues after treatment of chronic periodontitis.

Splints can be removable or fixed (non-removable).
Removable splints can be installed even when some teeth are missing; they create good conditions for oral hygiene and, if necessary, for therapeutic and surgical treatment.

The advantages of fixed splints include preventing periodontal overload in any direction of loading, something removable dentures cannot provide. The choice of splint type depends on many parameters, and without knowledge of the pathogenesis of the disease and of the biomechanical principles of splinting, the effectiveness of treatment will be minimal.

Indications for the use of splinting structures of any type include:

To analyze these parameters, X-ray data and other additional research methods are used. In the early stages of periodontal disease and the absence of pronounced tissue damage (dystrophy), splinting can be dispensed with.

The positive effects of splinting include the following:

1. The splint reduces tooth mobility. The rigidity of the splint structure prevents the teeth from loosening further, which means it reduces the likelihood of a further increase in the amplitude of tooth movement and of their loss. That is, the teeth can only move as much as the splint allows.
2. The effectiveness of the splint depends on the number of teeth it includes. The more teeth, the greater the effect of splinting.
3. Splinting redistributes the load on the teeth. The main chewing load falls on the healthy teeth, so the loose teeth are less exposed to damage, which is an additional benefit for healing. The more healthy teeth are included in the splint, the more pronounced the unloading of the mobile teeth. Therefore, if most of the teeth in the mouth are loose, the splint's effectiveness is reduced.
4. The best results are obtained by splinting the front teeth (incisors and canines), and the best splints are those that unite the largest number of teeth. Ideally, therefore, a splint should cover the entire dentition. The explanation is simple: from the standpoint of stability, an arched structure is better than a linear one.
5. Because a linear structure is less stable, splinting of mobile molars is carried out symmetrically on both sides, uniting them with a bridge that connects the two almost linear rows. Such a design significantly increases the splinting effect. Other splinting options are considered depending on the characteristics of the disease.

Not all patients are fitted with permanent splints. The clinical picture of the disease, the state of oral hygiene, the presence of dental plaque, bleeding gums, the severity of periodontal pockets, the severity of tooth mobility, the nature of their displacement, etc. are taken into account.

The absolute indication for the use of permanent splinting structures includes pronounced tooth mobility with atrophy of the alveolar process of no more than ¼ of the length of the tooth root. For more pronounced changes, preliminary treatment of inflammatory changes in the oral cavity is initially carried out.

Which type of splint is installed depends on the severity of atrophy of the alveolar processes of the jaw, the degree of tooth mobility, the position of the teeth and so on. With pronounced mobility and atrophy of the bone processes of up to 1/3 of their height, fixed prostheses are recommended; in more severe cases both removable and fixed prostheses may be used.

When determining the need for splinting, sanitation of the oral cavity is of great importance: dental treatment, treatment of inflammatory changes, removal of tartar, and even removal of some teeth if there are strict indications. All this gives maximum chances for successful treatment with splinting.

Fixed splints in orthopedic dentistry

Splints in orthopedic dentistry are used to treat periodontal diseases, in which pathological tooth mobility is detected. The effectiveness of splinting, like any other treatment in medicine, depends on the stage of the disease, and therefore on the timing of the start of treatment. Splints reduce the load on the teeth, which reduces periodontal inflammation, improves healing and overall well-being of the patient.

Splints must have the following properties:

Fixed splints include the following types:

Ring splint.
This is a set of soldered metal rings that, when placed on the teeth, ensure their firm fixation. The design may vary in manufacturing technology and materials. The quality of treatment depends on the accuracy of the fit, so the splint is made in several stages: taking an impression, making a plaster model, fabricating the splint and determining how much the dentition must be prepared for reliable fixation of the splint.

Half-ring splint.
A half-ring splint differs from a ring splint in that there is no full ring on the outer side of the dentition. This gives a more aesthetic design while keeping a fabrication technology similar to that of the ring splint.

Cap splint.
This is a series of caps welded together and placed on the teeth, covering their cutting edges and inner (lingual) surfaces. The caps may be cast as one piece or made from individual stamped crowns that are then soldered together. The method works especially well when full crowns are present, to which the whole structure is attached.

Inlay splint.
The method is similar to the previous one, except that the inlay-cap has a protrusion that seats into a recess on the top of the tooth, which strengthens its fixation and that of the splint as a whole. As in the previous case, the splint is attached to full crowns to give the structure maximum stability.

Crown and half-crown splint.
A full-crown splint is used when the gums are in good condition, since the risk of injuring the gums with the crowns is otherwise high. Metal-ceramic crowns are typically used, as they give the best aesthetic result. If the alveolar processes of the jaw are atrophied, equatorial crowns are placed, which stop slightly short of the gums and allow treatment of the periodontal pocket. A half-crown splint is a one-piece casting, or half-crowns welded together (crowns covering only the inner side of the tooth). Such crowns give the best aesthetic result, but the splint demands great skill, since it is quite difficult to prepare for and to fix in place. To reduce the likelihood of a half-crown detaching from the tooth, pins that "nail" the crown to the tooth are recommended.

Interdental splint.
The modern version of this method connects two adjacent teeth with special implantable inserts that mutually reinforce them. Various materials can be used, but lately preference has been given to photopolymers, glass-ionomer cement and composite materials.

Splints of Treiman, Weigel, Strunz, Mamlok, Kogan, Brun and others. Some of these eponymous splints have already lost their relevance; others have been modernized.

Fixed prosthetic splints are a special type of splint. They solve two problems at once: treatment of periodontal disease and replacement of missing teeth. In this case the splint has a bridge-like structure, in which the main chewing load falls not on the prosthetic tooth replacing the missing one but on the supporting platforms of the neighbouring teeth. There are thus quite a few options for splinting with fixed structures, which allows the doctor to choose a technique according to the characteristics of the disease, the condition of the particular patient and many other parameters.

Removable splints in orthopedic dentistry

Splinting with removable structures can be used both when the dentition is complete and when some teeth are missing. Removable splints usually do not reduce tooth mobility in every direction, but their positive sides include the absence of any need to grind or otherwise prepare the teeth, and the creation of good conditions for oral hygiene as well as for treatment.

If the dentition is intact, the following types of splints are used:

Elbrecht splint.
The frame alloy is elastic yet quite durable. It protects against tooth mobility in all directions except the vertical, i.e. it gives no protection under chewing load. That is why such a splint is used in the initial stages of periodontal disease, when a moderate chewing load does not cause the disease to progress. The Elbrecht splint is also used with grade I (minimal) tooth mobility. The splint may be positioned high (near the crown of the tooth), in the middle or low (near the root), and it may also be made wide. The type of fastening and the width of the splint depend on the specific situation and are therefore chosen by the doctor individually for each patient. Artificial teeth can also be incorporated into the design if needed.

Elbrecht splint with T-shaped clasps in the area of the front teeth.
This design provides additional fixation of the dental arch. However, it is suitable only with minimal tooth mobility and in the absence of pronounced periodontal inflammation, since such a design can further traumatize the periodontium when marked inflammatory changes are present.

Removable splint with a molded occlusal guard.
This is a modification of the Elbrecht splint that reduces the mobility of the incisors and canines in the vertical (chewing) direction. Protection is provided by special caps in the area of the front teeth, which reduce the chewing load on them.

Circular splint.
It can be conventional or have claw-like extensions. It is used for mild tooth mobility, because significant deviation of the teeth from their axis makes it difficult to put the denture on or take it off. If the teeth deviate significantly from their axis, collapsible (sectional) designs are recommended.

If some teeth are missing, removable dentures can also be used.

Considering the fact that tooth loss can provoke periodontal diseases, it becomes necessary to solve two problems: replacing the lost tooth and using splinting as a means of preventing periodontal diseases. Each patient will have his own characteristics of the disease, therefore the design features of the splint will be strictly individual. Quite often, prosthetics with temporary splinting are allowed to prevent the development of periodontal disease or other pathology. In any case, it is necessary to plan activities that contribute to the maximum therapeutic effect in a given patient. Thus, the choice of splint design depends on the number of missing teeth, the degree of deformation of the dentition, the presence and severity of periodontal diseases, age, pathology and type of occlusion, oral hygiene and many other parameters.

In general, in the absence of several teeth and severe periodontal pathology, preference is given to removable dentures. The design of the prosthesis is selected strictly individually and requires several visits to the doctor. Removable design requires careful planning and a specific sequence of actions:

Diagnosis and examination of periodontal disease.
Preparing the surface of the teeth and taking impressions for the future model
Model study and tire design planning
Modeling a wax reproduction of a splint
Obtaining a casting mold and checking the accuracy of the frame on a plaster model
Checking the splint (prosthetic splint) in the oral cavity
Final finishing (polishing) of the tire

Not all working steps are listed here, but even this list indicates the complexity of the procedure for manufacturing a removable splint (prosthetic splint). The complexity of manufacturing explains the need for several sessions with the patient and the length of time from the first to the last visit to the doctor. But the result of all efforts is always the same - restoration of anatomy and physiology, leading to restoration of health and social rehabilitation.

source: www.DentalMechanic.ru

Interesting articles:

A remedy for menstrual problems may cure baldness

According to German scientists, a plant that the American Indians used to normalize the menstrual cycle can also get rid of... baldness.

Researchers at Ruhr University claim that black cohosh is the first known herbal ingredient that can stop hair loss associated with hormonal imbalances and even promote hair growth and thickness.

The substance, which acts like the female hormone estrogen, has been used by the Indians for many generations and is still sold in the United States as a homeopathic remedy for rheumatism, back pain and menstrual irregularities.

Black cohosh grows in eastern North America and reaches three meters in height.

A new, gentle testing system was used to test the drug's effects, the researchers said. Guinea pigs acted as experimental animals. Now they are probably more shaggy.

Neurosurgical treatment of neurological complications of lumbar disc herniations

id="1">

K.B. Yrysov, M.M. Mamytov, K.E. Estemesov.
Kyrgyz State Medical Academy, Bishkek, Kyrgyz Republic.

Introduction.

Discogenic lumbosacral radiculitis and other compression complications of lumbar disc herniations occupy a leading place among diseases of the peripheral nervous system. They account for 71-80% of all such diseases and 11-20% of all diseases of the central nervous system. This indicates that lumbar disc pathology is widespread in the population, affecting predominantly people of young and working age (20-55 years) and leading to temporary and/or permanent disability.

Certain forms of discogenic lumbosacral radiculitis often present atypically, and their recognition causes significant difficulties. This applies, for example, to radicular lesions caused by herniated lumbar discs. More serious complications can occur if the root is accompanied and compressed by an additional radiculomedullary artery. Such an artery takes part in the blood supply to the spinal cord, and its occlusion can cause an infarction extending over several segments. In this case true conus, epiconus, or combined conus-epiconus syndromes develop.
It cannot be said that little attention is paid to the treatment of lumbar disc herniations and their complications. In recent years, numerous studies have been conducted with the participation of orthopedists, neurologists, neurosurgeons, radiologists and other specialists. Facts of primary importance were obtained that forced us to reevaluate and rethink a number of provisions of this problem.

However, there are still opposing views on many theoretical and practical issues, in particular, issues of pathogenesis, diagnosis and selection of the most adequate treatment methods require further study.

The purpose of this work was to improve the results of neurosurgical treatment and achieve stable recovery in patients with neurological complications of herniated lumbar intervertebral discs by improving topical diagnosis and surgical treatment methods.

Material and methods.

From 1995 to 2000 we examined and operated on 114 patients with neurological complications of lumbar intervertebral disc herniations using a posterior neurosurgical approach. Among them were 64 men and 50 women. All patients were operated on using microneurosurgical techniques and instruments. Patients' ages ranged from 20 to 60 years; most were 25-50 years old and male. The main group consisted of 61 patients who, in addition to severe pain, had acute or gradually developing motor and sensory disorders as well as gross dysfunction of the pelvic organs; they were operated on using extended approaches such as hemi- and laminectomy. The control group consisted of 53 patients operated on via an interlaminar approach.

Results.

The clinical features of neurological complications of lumbar intervertebral disc herniations were studied and characteristic clinical symptoms of damage to the spinal roots were identified. 39 patients were characterized by a special form of discogenic radiculitis with a peculiar clinical picture, where paralysis of the muscles of the lower extremities came to the fore (in 27 cases - bilateral, in 12 - unilateral). The process was not limited to the cauda equina; spinal symptoms were also detected.
In 37 patients, there was damage to the conus of the spinal cord, where characteristic clinical symptoms were loss of sensitivity in the perineal area, anogenital paresthesia and peripheral dysfunction of the pelvic organs.

The clinical picture in 38 patients was characterized by the phenomena of myelogenous intermittent claudication, which was accompanied by paresis of the feet; Fascicular twitching of the muscles of the lower extremities was noted, and there were pronounced dysfunctions of the pelvic organs - urinary and fecal incontinence.
Diagnosis of the level and nature of damage to the spinal cord roots by disc herniation was carried out on the basis of a diagnostic complex, including a thorough neurological examination, X-ray (102 patients), X-ray contrast (30 patients), computed tomography (45 patients) and magnetic resonance (27 patients) research.

When choosing indications for surgery, we were guided by the clinical picture of neurological complications of lumbar disc herniations, identified during a thorough neurological examination. The absolute indication was the presence in patients of cauda equina root compression syndrome, the cause of which was prolapse of a disc fragment with a medial location. In this case, dysfunction of the pelvic organs predominated. The second undeniable indication was the presence of movement disorders with the development of paresis or paralysis of the lower extremities. The third indication was the presence of severe pain that was not amenable to conservative treatment.

Neurosurgical treatment of neurological complications of lumbar intervertebral disc herniation consisted of eliminating those pathologically altered spinal structures that directly caused compression or reflex vascular-trophic pathology of the cauda equina roots; vessels that run as part of the root and participate in the blood supply to the lower segments of the spinal cord. Pathologically altered anatomical structures of the spine included elements of a degenerated intervertebral disc; osteophytes; hypertrophy of the ligamentum flavum, arches, articular processes; varicose veins of the epidural space; pronounced cicatricial adhesive epiduritis, etc.
The choice of approach was based on the fulfillment of the basic requirements for surgical intervention: minimal trauma, maximum visibility of the object of intervention, ensuring the least likelihood of intra- and postoperative complications. Based on these requirements, in the neurosurgical treatment of neurological complications of lumbar intervertebral disc herniations, we used posterior extended approaches such as hemi- and laminectomy (partial, complete) and laminectomy of one vertebra.

In our study, out of 114 operations for neurological complications of lumbar intervertebral disc herniations, extended operations were deliberately performed in 61 cases. Preference was given to hemilaminectomy (52 patients) and laminectomy of one vertebra (9 patients) over the interlaminar approach, which was used in 53 cases and served as the control group for comparative assessment of the results of surgical treatment (Table 1).

In all cases of surgical intervention we had to separate scar-adhesive epidural adhesions. This circumstance acquires special significance in neurosurgical practice, given that the surgical wound is notably deep and relatively narrow, and the scar-adhesive process involves precisely the functionally important neurovascular elements of the spinal motion segment.

Table 1. The volume of surgical intervention depending on the localization of the disc herniation.

Localization of disc herniation | Total | ILE | GLE | LE
Posterolateral                  |       |     |     |
Paramedian                      |       |     |     |
Middle                          |       |     |     |
Total                           | 114   | 53  | 52  | 9

Abbreviations: ILE - interlaminectomy, GLE - hemilaminectomy, LE - laminectomy.

The immediate results of neurosurgical treatment were assessed according to the following scheme:
- Good: absence of pain in the lower back and legs, complete or almost complete restoration of movements and sensitivity, good tone and strength of the muscles of the lower extremities, restoration of the impaired functions of the pelvic organs; ability to work fully preserved.
- Satisfactory: significant regression of pain, incomplete restoration of movements and sensitivity, good tone of the leg muscles, significant improvement in the function of the pelvic organs; ability to work largely preserved or reduced.
- Unsatisfactory: incomplete regression of the pain syndrome, persistent motor and sensory disturbances, reduced muscle tone and strength in the lower extremities, functions of the pelvic organs not restored; ability to work reduced or lost (disability).

In the main group (61 patients), the following results were obtained: good in 45 patients (72%), satisfactory in 11 (20%), unsatisfactory in 5 (8%). In these 5 patients, the operation had been performed between 6 months and 3 years after the complications developed.

In the control group (53 patients), the immediate results were: good in 5 patients (9.6%), satisfactory in 19 (34.6%), unsatisfactory in 29 (55.8%). These data led us to consider the interlaminar approach ineffective for neurological complications of lumbar intervertebral disc herniations.

When analyzing the results of our study, none of the serious complications described in the literature (damage to blood vessels and abdominal organs, air embolism, necrosis of vertebral bodies, discitis, etc.) were observed. These complications were prevented by the use of optical magnification, microsurgical instrumentation, accurate preoperative determination of the level and nature of the lesion, adequate anesthesia, and early mobilization of patients after surgery.

Our observations showed that early surgical intervention in the treatment of patients with neurological complications of lumbar disc herniations gives a more favorable prognosis.
Thus, the use of a complex of topical diagnostic methods and microneurosurgical techniques in combination with expanded surgical approaches effectively helps to restore patients’ ability to work, reduce their length of hospital stay, and also improve the results of surgical treatment of patients with neurological complications of lumbar intervertebral disc herniations.


Mercury in fish is not that dangerous

id="2">Mercury, which is formed in fish meat, is actually not as dangerous as previously thought. Scientists have found that mercury molecules in fish are not that toxic to humans.

“We have cause for optimism from our research,” said Graham George, study leader at the Stanford University radiation laboratory in California. “Mercury in fish may not be as toxic as many people think, but we still have a lot to learn before we can draw a final conclusion.”

Mercury is a powerful neurotoxin. If it enters the body in large quantities, a person may lose sensitivity, suffer cramps and develop hearing and vision problems; in addition, the probability of a heart attack rises. Mercury in its pure form does not enter the human body; as a rule, it gets there with the meat of animals that ate plants contaminated with mercury or drank water containing mercury compounds.

The meat of predatory marine fish such as tuna, swordfish, shark, tilefish, king mackerel, marlin and red snapper, as well as of all fish living in polluted waters, most often contains high levels of mercury. Mercury is a heavy metal that accumulates at the bottom of the bodies of water where such fish live. For this reason, doctors in the United States recommend that pregnant women limit their consumption of these fish.

The consequences of consuming fish high in mercury are not yet clear. However, studies of the population around a Finnish lake contaminated with mercury indicate a predisposition of local inhabitants to cardiovascular diseases. In addition, it is assumed that even lower concentrations of mercury may lead to certain impairments.

Recent studies in the UK on mercury concentrations in toenail tissue and DHA acid content in fat cells have shown that fish consumption is the main source of mercury ingestion in humans.

A study by specialists from Stanford University shows that in the body of fish mercury binds to different substances than it does in humans. The researchers say they hope their findings will help create drugs that remove the toxin from the body.

Height, weight and ovarian cancer

id="3">Results from a study of 1 million Norwegian women, published in the August 20 issue of the Journal of the National Cancer Institute, suggest that tall height and increased body mass index during puberty are risk factors for cancer. ovaries.

It has previously been shown that height is directly related to the risk of developing malignant tumors, but its connection specifically with ovarian cancer has not received much attention. In addition, results from previous studies have been inconsistent, especially regarding the relationship between body mass index and ovarian cancer risk.

To clarify the situation, a team of scientists from the Norwegian Institute of Public Health, Oslo, analyzed data on approximately 1.1 million women followed for an average of 25 years. By approximately 40 years of age, 7,882 of the subjects had a confirmed diagnosis of ovarian cancer.

As it turned out, body mass index in adolescence was a reliable predictor of the risk of developing ovarian cancer. Women whose body mass index was at or above the 85th percentile during adolescence were 56 percent more likely to develop ovarian cancer than women with an index between the 25th and 74th percentiles. Notably, no significant association was found between the risk of developing ovarian cancer and body mass index in adulthood.

The researchers state that in women under 60 years of age height, like weight, is also a reliable predictor of the risk of this pathology, especially of the endometrioid type of ovarian cancer. For example, women 175 cm or taller are 29 percent more likely to develop ovarian cancer than women 160 to 164 cm tall.

Dear girls and women, being graceful and feminine is not only beautiful but also good for your health!

Fitness and pregnancy

id="4">So, you are used to leading an active lifestyle, regularly visiting a sports club... But one fine day you will find out that you will soon become a mother. Naturally, the first thought is that you will have to change your habits and, apparently, give up fitness classes. But doctors believe that this opinion is wrong. Pregnancy is not a reason to stop playing sports.

It must be said that recently more and more women agree with this point of view. Performing appropriate exercises selected by an instructor during pregnancy has no negative influence on the growth and development of the fetus and does not change the physiological course of pregnancy and childbirth.
On the contrary, regular fitness classes increase the physical capabilities of the female body, increase psycho-emotional stability, improve the functioning of the cardiovascular, respiratory and nervous systems, and have a positive effect on metabolism, as a result of which the mother and her unborn baby are provided with a sufficient amount of oxygen.
Before starting classes, it is necessary to assess the woman's capacity to adapt to physical activity and to take her sporting background into account (whether she has trained before, her "training experience", and so on). Of course, a woman who has never engaged in any kind of sport should exercise only under the supervision of a doctor (for example, a fitness doctor at the club).
The training program for the expectant mother should include both general developmental exercises and special ones aimed at strengthening the muscles of the spine (especially the lumbar region), as well as certain breathing exercises (breathing skills) and relaxation exercises.
The training program for each trimester is different, taking into account the woman’s health condition.
By the way, many exercises are aimed at reducing the perception of pain during childbirth. You can do them both at special courses for expectant mothers and in many fitness clubs that have similar programs. Regular walking also reduces discomfort and makes labor easier. In addition, as a result of exercise, the firmness and elasticity of the abdominal wall increases, the risk of visceroptosis decreases, congestion in the pelvic area and lower extremities decreases, and the flexibility of the spine and joint mobility increases.
Studies conducted by Norwegian, Danish, American and Russian scientists have shown that sports activities have a positive effect not only on the woman herself, but also on the development and growth of the unborn baby.

Where to begin?
Before starting to exercise, a woman must undergo a medical examination to find out about possible contraindications to physical activity and determine her physical level. Contraindications to classes can be general and special.
General contraindications:
· acute illness
· exacerbation of a chronic disease
· decompensation of the functions of any body system
· severe or moderately severe general condition

Special contraindications:
· toxicosis
· recurrent miscarriage
· a large number of previous abortions
· any uterine bleeding
· risk of miscarriage
· multiple pregnancy
· polyhydramnios
· umbilical cord entanglement
· congenital malformations of the fetus
· abnormalities of the placenta

Next, you need to decide what exactly you want to do and whether group training suits you. In general, classes can be quite varied:
· special individual sessions conducted under the supervision of an instructor
· group classes in a variety of fitness areas
· exercises in water, which also have a calming effect
The most important thing when drawing up a training program is matching the exercises to the stage of pregnancy, analyzing the woman's condition and the processes occurring in each trimester, and monitoring the body's reaction to the load.

Features of training by trimester
First trimester (up to 16th week)
During this period, tissue formation and differentiation occurs; the connection between the fertilized egg and the maternal body is very weak (and therefore any strong load can cause termination of pregnancy).
During this period, an imbalance of the autonomic nervous system occurs, which often leads to nausea, constipation and flatulence; metabolism is restructured toward accumulation, and the tissues' need for oxygen increases.
The training carried out should activate the work of the cardiovascular and bronchopulmonary systems, normalize the function of the nervous system, and increase the overall psycho-emotional tone.
During this period, the following are excluded from the set of exercises:
· raising straight legs
· raising both legs together
· abrupt transitions from a lying to a sitting position
· sharp forward bends of the torso
· sharp backward arching of the torso

Second trimester (from 16 to 32 weeks)
During this period, the formation of the third circle of blood circulation occurs between mother and fetus.
During this period, blood pressure may be unstable (with a tendency to rise); the placenta becomes involved in metabolism (the estrogens and progesterone it produces stimulate the growth of the uterus and mammary glands); posture changes (lumbar lordosis, the pelvic tilt angle and the load on the back extensors increase). The feet flatten and venous pressure rises, which can often lead to swelling and dilation of the veins in the legs.
Classes during this period should form and consolidate the skills of deep and rhythmic breathing. It is also useful to do exercises to reduce venous congestion and strengthen the arch of the foot.
In the second trimester, exercises in the supine position are most often excluded.

Third trimester (from 32 weeks until birth)
During this period, the uterus enlarges, the load on the heart increases, changes occur in the lungs, venous outflow from the legs and pelvis worsens, and the load on the spine and arch of the foot increases.
Classes during this period are aimed at improving blood circulation in all organs and systems, reducing various kinds of congestion, and stimulating intestinal function.
When drawing up a program for the third trimester, the overall load is always slightly reduced, as are the load on the legs and the range of leg movements.
During this period, forward bending of the torso is excluded, and the standing starting position can be used in only 15-20% of exercises.

15 Principles for Exercising During Pregnancy
REGULARITY – it is better to train 3-4 times a week (1.5-2 hours after breakfast).
A POOL is a great place for safe and healthy exercise.
PULSE CONTROL – on average up to 135 beats/min (at 20 years old, up to 145 beats/min is acceptable).
BREATHING CONTROL – use the "talk test": during the exercises you should be able to talk calmly.
BASAL TEMPERATURE – no more than 38 degrees.
INTENSE LOAD – no more than 15 minutes (intensity is very individual and depends on training experience).
ACTIVITY – training should not start or end abruptly.
COORDINATION – exercises requiring complex coordination, rapid changes of direction, jumping, pushing off, balancing, or maximum flexion and extension in the joints are excluded.
STARTING POSITION – the transition from a horizontal to a vertical position and back should be slow.
BREATHING – exclude exercises that involve straining or holding your breath.
CLOTHING – light and open.
WATER – keeping to a proper drinking regime is mandatory.
ROOM – well ventilated, with a temperature of 22-24 degrees.
FLOOR (HALL SURFACE) – must be stable and non-slip.
AIR – daily walks outdoors are required.

Holland holds the world championship in liberalism

id="5">This week, Holland will become the first country in the world where hashish and marijuana will be sold in pharmacies with a doctor’s prescription, Reuters reported on August 31.

This humanitarian gesture by the government will help alleviate the suffering of patients with cancer, AIDS, multiple sclerosis and various forms of neuralgia. According to experts, more than 7,000 people have bought these soft drugs specifically for pain relief.

Hashish was used as a pain reliever for over 5,000 years until it was replaced by stronger synthetic drugs. Doctors' views on its medicinal properties differ: some consider it a natural and therefore relatively harmless drug, while others claim that cannabis increases the risk of depression and schizophrenia. Both sides, however, agree on one thing: for terminally ill people it will bring nothing but relief from suffering.

Holland is generally famous for its liberal views - let us recall that it was also the first in the world to allow same-sex marriage and euthanasia.

Is the heart a perpetual motion machine?

id="6">Scientists from the Proceedings of the National Academy of Sciences say that stem cells can become a source of myocardiocyte formation during cardiac hypertrophy in humans.

Previously, it was traditionally believed that an increase in heart mass in adulthood is possible only through an increase in the size of myocardiocytes, not in their number. Recently, however, this truth has been shaken: scientists have discovered that in particularly difficult situations myocardiocytes can divide or regenerate. Still, it is not yet clear exactly how regeneration of heart tissue occurs.

A team of scientists from New York Medical College in Valhalla studied heart muscle taken from 36 patients with aortic valve stenosis during heart surgery. Control material was heart muscle taken from 12 deceased individuals within the first 24 hours after death.

The authors note that the increase in heart mass in patients with aortic valve stenosis is due to both an increase in the mass of each myocardiocyte and an increase in their number in general. Digging deeper into the process, scientists discovered that new myocardiocytes are formed from stem cells that were destined to become these cells.

It was revealed that the content of stem cells in the cardiac tissue of patients with aortic valve stenosis is 13 times higher than in representatives of the control group. Moreover, the state of hypertrophy enhances the process of growth and differentiation of these cells. The scientists state: “The most significant finding of this study is that cardiac tissue contains primitive cells that are typically misidentified as hematopoietic cells due to their similar genetic structure.” The regenerative capacity of the heart, due to stem cells, in the case of aortic valve stenosis is approximately 15 percent. Approximately the same figures are observed in the case of a heart transplant from a female donor to a male recipient. The so-called chimerization of cells occurs, namely, after some time, approximately 15 percent of heart cells have a male genotype.

Experts hope that the data from these studies and the results of previous work on chimerism will generate even greater interest in the field of cardiac regeneration.

August 18, 2003, Proc Natl Acad Sci USA.

The term network topology means the way computers are connected into a network. You may also hear other names for it: network structure or network configuration (they mean the same thing). In addition, the concept of topology includes a number of rules that determine the placement of computers, the way cables are laid, the placement of connecting equipment, and much more. By now several basic topologies have taken shape and become established; among them are the bus, the ring and the star.

Bus topology

The bus topology (often called a common bus or backbone) uses a single cable to which all workstations are connected. The common cable is used by all stations in turn. Every message sent by a workstation is received and heard by all other computers connected to the network; from this stream, each workstation picks out only the messages addressed to it.
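
For illustration only, here is a minimal Python sketch of this broadcast-and-filter behaviour. The Station and Bus classes and all names are invented for this example and are not part of any real networking library; real Ethernet frames and MAC addressing are, of course, far more involved.

# A toy model of a shared bus: every station "hears" every frame,
# but keeps only the frames addressed to it.

class Station:
    def __init__(self, name):
        self.name = name
        self.inbox = []

    def receive(self, frame):
        # The station listens to all traffic on the bus but accepts
        # only frames whose destination matches its own name.
        if frame["dst"] == self.name:
            self.inbox.append(frame)


class Bus:
    """One common cable: a frame sent by any station reaches all the others."""

    def __init__(self, stations):
        self.stations = stations

    def send(self, src, dst, payload):
        frame = {"src": src, "dst": dst, "data": payload}
        for station in self.stations:
            if station.name != src:
                station.receive(frame)


stations = [Station(n) for n in ("A", "B", "C", "D")]
bus = Bus(stations)
bus.send("A", "C", "hello")                    # every station hears the frame...
print([s.name for s in stations if s.inbox])   # ...but only C keeps it -> ['C']

Note how the frame sent from A to C passes by B and D as well; this illustrates why only one computer can usefully transmit at any given moment on a bus.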

Advantages of the bus topology:

  • ease of setup;
  • relative ease of installation and low cost if all workstations are located nearby;
  • The failure of one or more workstations does not in any way affect the operation of the entire network.

Disadvantages of the bus topology:

  • a problem anywhere on the bus (a cable break, a failed network connector) makes the whole network inoperable;
  • difficulty in troubleshooting;
  • low performance – at any given time, only one computer can transmit data to the network; as the number of workstations increases, network performance decreases;
  • poor scalability - to add new workstations it is necessary to replace sections of the existing bus.

It was according to the bus topology that local networks on coaxial cable were built. In such networks, sections of coaxial cable connected by T-connectors acted as the bus; the cable was laid through all the rooms and brought up to each computer, and the side pin of the T-connector was inserted into the connector on the network card. Today such networks are hopelessly outdated and have been replaced everywhere by twisted-pair "star" networks, but equipment for coaxial cable can still be seen in some enterprises.

Ring topology

Ring is a local network topology in which workstations are connected in series to one another, forming a closed ring. Data is passed from one workstation to the next in one direction (around the circle). Each PC works as a repeater, relaying messages to the next PC, so data is handed from one computer to another as in a relay race. If a computer receives data intended for another computer, it passes it further along the ring; if the data is addressed to it, the frame is not passed on.
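
As a rough illustration (a toy Python sketch with invented names, not a model of the actual Token Ring protocol), the relay-race behaviour described above can be expressed as passing a frame from neighbour to neighbour until the destination is reached:

# A toy model of a ring: each station acts as a repeater and hands the
# frame to its neighbour until the addressee is found.

def ring_deliver(ring, src, dst):
    """Pass a frame around the ring in one direction, hop by hop."""
    if src not in ring or dst not in ring:
        raise ValueError("both stations must be on the ring")
    hops = []
    i = ring.index(src)
    while True:
        i = (i + 1) % len(ring)   # the next station in the circle acts as a repeater
        hops.append(ring[i])
        if ring[i] == dst:        # the addressee keeps the frame instead of relaying it
            return hops

ring = ["A", "B", "C", "D", "E"]
print(ring_deliver(ring, "B", "E"))   # ['C', 'D', 'E'] - every intermediate PC relays the frame

If any intermediate computer stopped relaying, the frame would never reach its destination, which is the main weakness listed below.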

Advantages of ring topology:

  • ease of installation;
  • almost complete absence of additional equipment;
  • Possibility of stable operation without a significant drop in data transfer speed under heavy network load.

However, the “ring” also has significant disadvantages:

  • each workstation must actively participate in the transfer of information; if at least one of them fails or the cable breaks, the operation of the entire network stops;
  • connecting a new workstation requires a short-term shutdown of the network, since the ring must be open during installation of a new PC;
  • complexity of configuration and setup;
  • Difficulty in troubleshooting.

Ring network topology is used quite rarely today; it found its main application in fiber-optic networks of the Token Ring standard.

Star topology

Star is a local network topology in which each workstation is connected to a central device (a switch or router). The central device controls the movement of packets in the network. Each computer connects to the switch through its network card, using a separate cable. If necessary, several star networks can be combined into one, producing a configuration with a tree topology. Tree topology is common in large companies; we will not consider it in detail in this article.
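
A minimal Python sketch, for illustration only (the Switch class and its port table are invented for this example and greatly simplify what a real switch does), shows the central idea: the switch knows which port each computer is plugged into and forwards a frame only to the destination's port.

# A toy model of a star: the central switch keeps a table of which
# station sits on which port and forwards frames only to that port.

class Switch:
    def __init__(self):
        self.port_table = {}              # station name -> port number

    def connect(self, station, port):
        self.port_table[station] = port

    def forward(self, src, dst, payload):
        if dst not in self.port_table:
            return None                   # unknown destination: drop (simplified)
        return {"out_port": self.port_table[dst],
                "frame": {"src": src, "dst": dst, "data": payload}}


sw = Switch()
for port, name in enumerate(("A", "B", "C", "D"), start=1):
    sw.connect(name, port)

print(sw.forward("A", "C", "hello"))      # delivered only on C's port

Because each computer hangs on its own cable and its own port, unplugging one of them changes nothing for the others, while losing the switch itself takes the whole network down; this is exactly the trade-off described in the lists below.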

The "star" topology has become the main one in the construction of local networks today. This has happened because of its many advantages:

  • failure of one workstation or damage to its cable does not affect the operation of the entire network;
  • excellent scalability: to connect a new workstation, just lay a separate cable from the switch;
  • easy troubleshooting and repair of breaks in the network;
  • high performance;
  • ease of setup and administration;
  • Additional equipment can be easily integrated into the network.

However, like any topology, the “star” is not without its drawbacks:

  • failure of the central switch will result in the inoperability of the entire network;
  • additional costs for network hardware: the device to which all computers on the network are connected (a switch);
  • the number of workstations is limited by the number of ports in the central switch.

The star is the most common topology for wired and wireless networks. An example of the star topology is a network using twisted-pair cable with a switch as the central device; these are the networks found in most organizations.